    Extending Yioop! With Geographical Location Local Search

    It is often useful when doing an internet search to get results based on our current location. For example, we might want such results when searching for restaurants, car service centers, or hospitals. Current open source search engines, such as those based on Nutch, do not provide this facility. Commercial engines like Google and Yahoo! do, so it would be useful to offer it in an open source alternative. The goal of this project is to add location-aware search to Yioop! (Pollett, 2012) by using geographical data from OpenStreetMap ("Open Street Map wiki", 2012) and the hostip.info ("DMOZ", n.d.) database to geolocate IP addresses.
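
    A minimal sketch of how the two data sources could fit together, assuming a hostip-style prefix table (IP prefix to latitude/longitude) and a list of points of interest extracted from OpenStreetMap; the file layout, field names, and radius are illustrative only, not Yioop!'s actual implementation.

        # Sketch: geolocate an IP via a hostip-style table, then rank nearby POIs.
        import csv
        import math

        def load_ip_table(path):
            """Map /24 prefixes such as '203.0.113' to (lat, lon)."""
            table = {}
            with open(path, newline="") as f:
                for prefix, lat, lon in csv.reader(f):
                    table[prefix] = (float(lat), float(lon))
            return table

        def geolocate(ip, table):
            prefix = ".".join(ip.split(".")[:3])
            return table.get(prefix)  # None if the prefix is unknown

        def haversine_km(a, b):
            """Great-circle distance between two (lat, lon) pairs in kilometres."""
            lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
            h = (math.sin((lat2 - lat1) / 2) ** 2
                 + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
            return 2 * 6371.0 * math.asin(math.sqrt(h))

        def nearby(pois, origin, radius_km=10.0):
            """Filter and sort points of interest by distance from the user."""
            hits = [(haversine_km(origin, (p["lat"], p["lon"])), p) for p in pois]
            return sorted((h for h in hits if h[0] <= radius_km), key=lambda h: h[0])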

    Km4City Ontology Building vs Data Harvesting and Cleaning for Smart-city Services

    Presently, a very large number of public and private data sets are available from local governments. In most cases they are not semantically interoperable, and a huge human effort would be needed to create an integrated ontology and knowledge base for a smart city. No smart-city ontology has yet been standardized, and considerable research is still needed to identify models that support data reconciliation, manage the complexity, and allow reasoning over the data. In this paper, a system for data ingestion and reconciliation of smart-city related aspects, such as the road graph, services available along the roads, and traffic sensors, is proposed. The system manages a large volume of data coming from a variety of sources, covering both static and dynamic data. These data are mapped to a smart-city ontology called KM4City (Knowledge Model for City) and stored in an RDF store, where they are available to applications via SPARQL queries, enabling new services for users through specific applications of public administrations and enterprises. The paper presents the process adopted to produce the ontology, the big data architecture used to feed the knowledge base from open and private data, and the mechanisms adopted for data verification, reconciliation and validation. Some examples of possible uses of the resulting coherent big data knowledge base, accessible through the RDF store and related services, are also given. The article also presents the work performed on reconciliation algorithms and their comparative assessment and selection.
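
    To make the access pattern concrete, here is a hedged sketch of an application querying such an RDF store over SPARQL from Python. The endpoint URL, the km4c: namespace, and the property names are assumptions made for the example; the actual Km4City vocabulary may differ.

        # Query a Km4City-style RDF store for road-side services (illustrative terms).
        from SPARQLWrapper import SPARQLWrapper, JSON

        ENDPOINT = "http://example.org/km4city/sparql"  # hypothetical endpoint

        QUERY = """
        PREFIX km4c: <http://example.org/km4city/schema#>
        SELECT ?service ?name ?lat ?lon WHERE {
            ?service a km4c:Service ;
                     km4c:name ?name ;
                     km4c:latitude ?lat ;
                     km4c:longitude ?lon .
        } LIMIT 20
        """

        def fetch_services():
            sparql = SPARQLWrapper(ENDPOINT)
            sparql.setQuery(QUERY)
            sparql.setReturnFormat(JSON)
            rows = sparql.query().convert()["results"]["bindings"]
            return [(r["name"]["value"], r["lat"]["value"], r["lon"]["value"])
                    for r in rows]

        if __name__ == "__main__":
            for name, lat, lon in fetch_services():
                print(f"{name}: {lat}, {lon}")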

    Historical collaborative geocoding

    The latest developments in digitisation have produced large data sets that can increasingly easily be accessed and used. These data sets often contain indirect localisation information, such as historical addresses. Historical geocoding is the process of transforming this indirect localisation information into direct localisation that can be placed on a map, which enables spatial analysis and cross-referencing. Many efficient geocoders exist for current addresses, but they do not deal with the temporal aspect and are based on a strict hierarchy (..., city, street, house number) that is hard or impossible to use with historical data. Indeed, historical data are full of uncertainties (temporal aspect, semantic aspect, spatial precision, confidence in the historical source, ...) that cannot be resolved, as there is no way to go back in time to check. We propose an open source, open data, extensible solution for geocoding that is based on building gazetteers composed of geohistorical objects extracted from historical topographical maps. Once the gazetteers are available, geocoding a historical address is a matter of finding the geohistorical object in the gazetteers that best matches the historical address. The matching criteria are customisable and cover several dimensions (fuzzy semantic, fuzzy temporal, scale, spatial precision, ...). As the goal is to facilitate historical work, we also propose web-based user interfaces that help geocode addresses (individually or in batch mode) and display them over current or historical topographical maps, so that results can be checked and collaboratively edited. The system is tested on the city of Paris for the 19th and 20th centuries, shows a high return rate, and is fast enough to be used interactively.
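
    A minimal sketch of multi-criteria matching against a gazetteer of geohistorical objects, assuming each entry carries a name, a validity interval in years, and a spatial precision in metres; the field names, weights, and tolerances are illustrative, not the paper's actual scoring model.

        # Combine fuzzy semantic, fuzzy temporal, and precision scores to geocode.
        from dataclasses import dataclass
        from difflib import SequenceMatcher

        @dataclass
        class GeohistoricalObject:
            name: str
            valid_from: int      # first year the object is attested
            valid_to: int        # last year the object is attested
            precision_m: float   # spatial precision of the source map
            lat: float
            lon: float

        def semantic_score(query, name):
            """Fuzzy string similarity between the queried address and the entry."""
            return SequenceMatcher(None, query.lower(), name.lower()).ratio()

        def temporal_score(year, obj):
            """1.0 inside the validity interval, decaying linearly outside it."""
            if obj.valid_from <= year <= obj.valid_to:
                return 1.0
            gap = min(abs(year - obj.valid_from), abs(year - obj.valid_to))
            return max(0.0, 1.0 - gap / 50.0)  # 50-year tolerance, arbitrary choice

        def geocode(query, year, gazetteer, w_sem=0.6, w_time=0.3, w_prec=0.1):
            """Return the best-scoring geohistorical object for an address + year."""
            def score(obj):
                prec = 1.0 / (1.0 + obj.precision_m / 100.0)
                return (w_sem * semantic_score(query, obj.name)
                        + w_time * temporal_score(year, obj)
                        + w_prec * prec)
            return max(gazetteer, key=score) if gazetteer else None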

    Multi-Paradigm Reasoning for Access to Heterogeneous GIS

    Accessing and querying geographical data in a uniform way has become easier in recent years. Emerging standards like WFS are turning the web into a place enabled for geospatial web services. Mediation architectures like VirGIS overcome syntactic and semantic heterogeneity between several distributed sources. On mobile devices, however, this kind of solution is not suitable due to limitations, mostly regarding bandwidth, computation power, and available storage space. The aim of this paper is to present a solution for providing powerful reasoning mechanisms that are accessible from mobile applications and involve data from several heterogeneous sources. By adapting content to time and location, mobile web information systems can not only increase the value and suitability of the service itself but also substantially reduce the amount of data delivered to users. Because many problems pertain to infrastructure and transportation in general, and to way finding in particular, one cornerstone of the architecture is higher-level reasoning on graph networks with the Multi-Paradigm Location Language MPLL. A mediation architecture is used as a "graph provider" in order to transfer the load of computation to the component best suited for it, since graph construction and transformation, for example, are heavy on resources. Reasoning in general can be conducted either near the "source" or near the end user, depending on the specific use case. The concepts underlying the proposal described in this paper are illustrated by a typical, concrete scenario for web applications.
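
    This is not MPLL itself, only a sketch of the kind of way-finding query a mobile client could run once the mediation layer has assembled a compact road graph from the underlying sources; the graph structure and edge weights are assumptions for the example.

        # Dijkstra over a small graph delivered by a "graph provider".
        import heapq

        def shortest_path(graph, start, goal):
            """graph: node -> [(neighbour, cost_minutes), ...]."""
            queue = [(0.0, start, [start])]
            seen = set()
            while queue:
                cost, node, path = heapq.heappop(queue)
                if node == goal:
                    return cost, path
                if node in seen:
                    continue
                seen.add(node)
                for neighbour, edge_cost in graph.get(node, []):
                    if neighbour not in seen:
                        heapq.heappush(queue, (cost + edge_cost, neighbour, path + [neighbour]))
            return float("inf"), []

        # Toy graph a mobile client might receive instead of the raw source data.
        graph = {
            "station": [("market", 4.0), ("bridge", 9.0)],
            "market": [("bridge", 3.0), ("hospital", 7.0)],
            "bridge": [("hospital", 2.0)],
            "hospital": [],
        }
        print(shortest_path(graph, "station", "hospital"))  # (9.0, [...])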

    Semantic Cross-View Matching

    Matching cross-view images is challenging because the appearance and viewpoints differ significantly. While low-level features based on gradient orientations or filter responses can vary drastically with such changes in viewpoint, the semantic content of an image remains largely invariant. Consequently, semantically labeled regions can be used for cross-view matching. In this paper, we therefore explore this idea and propose an automatic method for detecting and representing the semantic information of an RGB image with the goal of performing cross-view matching with a (non-RGB) geographic information system (GIS). A segmented image forms the input to our system, with segments assigned to semantic concepts such as traffic signs, lakes, roads, foliage, etc. We design a descriptor to robustly capture both the presence of semantic concepts and the spatial layout of those segments. Pairwise distances between the descriptors extracted from the GIS map and the query image are then used to generate a shortlist of the most promising locations with similar semantic concepts in a consistent spatial layout. An experimental evaluation with challenging query images and a large urban area shows promising results.
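
    A simplified take on the layout-descriptor idea: split a semantic label map into a coarse grid, histogram the classes per cell, and shortlist GIS locations by descriptor distance. The grid size, class count, and distance metric are illustrative choices, not the descriptor proposed in the paper.

        # Grid-of-histograms descriptor and nearest-location shortlist.
        import numpy as np

        def layout_descriptor(labels, n_classes, grid=4):
            """labels: 2-D array of per-pixel semantic class ids."""
            h, w = labels.shape
            cells = []
            for i in range(grid):
                for j in range(grid):
                    cell = labels[i * h // grid:(i + 1) * h // grid,
                                  j * w // grid:(j + 1) * w // grid]
                    hist = np.bincount(cell.ravel(), minlength=n_classes).astype(float)
                    cells.append(hist / max(hist.sum(), 1.0))  # per-cell class frequencies
            return np.concatenate(cells)

        def shortlist(query_desc, gis_descs, k=5):
            """Return indices of the k GIS locations closest to the query descriptor."""
            dists = np.linalg.norm(gis_descs - query_desc, axis=1)
            return np.argsort(dists)[:k]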

    Optimal Constrained Wireless Emergency Network Antennae Placement

    With the increasing number of mobile devices, newly introduced smart devices, and Internet of Things (IoT) sensors, the current microwave frequency spectrum is rapidly becoming congested. The obvious solution to this congestion is to use the millimeter-wave spectrum, ranging from 6 GHz to 300 GHz. With millimeter waves we can achieve very high communication speeds and very low latency, but the technology also introduces challenges that were rarely faced before. The most important of these is the line-of-sight (LOS) requirement. In the emerging concept of smart cities, wireless emergency networks are set to use millimeter waves. We have worked on the problem of efficiently finding a line of sight for such wireless emergency network antennae in minimal time. We devised two algorithms, Sequential Line of Sight (SLOS) and Tiled Line of Sight (TLOS), both of which perform better than traditional algorithms in terms of execution time. The Tiled Line of Sight algorithm reduces the time required for a single line-of-sight query from 200 ms for traditional algorithms to a mere 1.7 ms on average.
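
    For reference, this is not SLOS or TLOS but a plain baseline line-of-sight check of the kind those algorithms improve upon: sample a digital elevation model along the segment between two antenna positions and test whether terrain rises above the sight line. The DEM representation and antenna heights are assumptions for the example.

        # Baseline LOS query on a 2-D height grid indexed by (row, col).
        import numpy as np

        def has_line_of_sight(dem, a, b, h_a=10.0, h_b=10.0, samples=200):
            """a, b: (row, col) grid positions; h_a, h_b: antenna heights in metres."""
            (r0, c0), (r1, c1) = a, b
            z0 = dem[r0, c0] + h_a
            z1 = dem[r1, c1] + h_b
            for t in np.linspace(0.0, 1.0, samples)[1:-1]:
                r = int(round(r0 + t * (r1 - r0)))
                c = int(round(c0 + t * (c1 - c0)))
                sight_z = z0 + t * (z1 - z0)       # height of the sight line here
                if dem[r, c] > sight_z:            # terrain blocks the ray
                    return False
            return True

        dem = np.zeros((100, 100))
        dem[50, 40:60] = 30.0                      # a ridge across the path
        print(has_line_of_sight(dem, (50, 10), (50, 90)))  # False: ridge blocks it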

    Visual and geographical data fusion to classify landmarks in geo-tagged images

    High-level semantic image recognition and classification is a challenging task and currently a very active research domain. Computers struggle with the high-level task of accurately identifying objects and scenes within digital images in unconstrained environments. In this paper, we present experiments that aim to overcome the limitations of computer vision algorithms by combining them with novel contextual features that describe geo-tagged imagery. We adopt a machine learning based algorithm with the aim of classifying classes of geographical landmarks within digital images. We use community-contributed image sets downloaded from Flickr and provide a thorough investigation, the results of which are presented in an evaluation section.
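
    A minimal sketch of the fusion idea: concatenate per-image visual descriptors with contextual (geo-derived) features and train a standard classifier on landmark labels. The feature extractors and the random forest here are placeholders, not the features or learner used in the paper.

        # Fuse visual and contextual features, then train a generic classifier.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        def fuse(visual_feats, context_feats):
            """Stack per-image visual and contextual feature vectors side by side."""
            return np.hstack([visual_feats, context_feats])

        # Synthetic stand-ins: 500 images, 64-D visual + 6-D contextual features,
        # 4 landmark classes.
        rng = np.random.default_rng(0)
        visual = rng.normal(size=(500, 64))
        context = rng.normal(size=(500, 6))
        labels = rng.integers(0, 4, size=500)

        X_train, X_test, y_train, y_test = train_test_split(
            fuse(visual, context), labels, test_size=0.25, random_state=0)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_train, y_train)
        print("held-out accuracy:", clf.score(X_test, y_test))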