13,458 research outputs found

    SpeakNav: A voice-based navigation system via route description language understanding

    Location- and keyword-based querying of geo-textual data: a survey

    With the broad adoption of mobile devices, notably smartphones, keyword-based search for content has seen increasing use by mobile users, who are often interested in content related to their geographical location. We have also witnessed a proliferation of geo-textual content that encompasses both textual and geographical information. Examples include geo-tagged microblog posts, yellow pages, and web pages related to entities with physical locations. Over the past decade, substantial research has been conducted on integrating location into keyword-based querying of geo-textual content in settings where the underlying data is assumed either to be relatively static or to stream into a system that maintains a set of continuous queries. This paper offers a survey of both the research problems studied and the solutions proposed in these two settings. As such, it aims to offer the reader a first understanding of key concepts and techniques, and it serves as an “index” for researchers interested in exploring the concepts and techniques underlying proposed solutions to the querying of geo-textual data.
    Funding: Agency for Science, Technology and Research (A*STAR); Ministry of Education (MOE); Nanyang Technological University. This research was supported in part by MOE Tier-2 Grant MOE2019-T2-2-181, MOE Tier-1 Grant RG114/19, an NTU ACE Grant, and the Singtel Cognitive and Artificial Intelligence Lab for Enterprises (SCALE@NTU), which is a collaboration between Singapore Telecommunications Limited (Singtel) and Nanyang Technological University (NTU) that is funded by the Singapore Government through the Industry Alignment Fund Industry Collaboration Projects Grant, and by the Innovation Fund Denmark centre, DIREC.
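    For readers new to the area, the basic object of study is a collection of geo-textual objects, each carrying a location and a set of keywords, and the canonical query ranks such objects by a weighted combination of textual relevance and spatial proximity. The sketch below is a minimal, illustrative top-k ranking over an in-memory collection; the linear weighting parameter alpha, the distance cut-off, and the keyword-overlap relevance measure are simplifying assumptions for illustration, not techniques taken from this survey.

```python
import math
from dataclasses import dataclass

@dataclass
class GeoTextObject:
    obj_id: str
    lat: float
    lon: float
    keywords: frozenset      # tokens describing the object

def approx_distance_km(lat1, lon1, lat2, lon2):
    # Equirectangular approximation; adequate for an illustration.
    return math.hypot((lat1 - lat2) * 111.0,
                      (lon1 - lon2) * 111.0 * math.cos(math.radians(lat1)))

def top_k_spatial_keyword(objects, q_lat, q_lon, q_keywords, k=10,
                          alpha=0.5, max_dist_km=10.0):
    """Rank objects by alpha * textual relevance + (1 - alpha) * spatial proximity."""
    scored = []
    for obj in objects:
        text_rel = len(obj.keywords & q_keywords) / len(q_keywords)
        dist = approx_distance_km(q_lat, q_lon, obj.lat, obj.lon)
        proximity = max(0.0, 1.0 - dist / max_dist_km)   # 1 at the query location, 0 beyond the radius
        scored.append((alpha * text_rel + (1 - alpha) * proximity, obj))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:k]

# Toy usage: rank two candidate objects for a location-aware keyword query.
cafes = [GeoTextObject("c1", 1.3521, 103.8198, frozenset({"coffee", "wifi"})),
         GeoTextObject("c2", 1.3000, 103.8000, frozenset({"coffee", "brunch"}))]
print(top_k_spatial_keyword(cafes, 1.35, 103.82, frozenset({"coffee", "wifi"}), k=1))
```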

    Efficient spatial keyword query processing on geo-textual data

    Window-based Streaming Graph Partitioning Algorithm

    In recent years, graph datasets have grown to a scale at which a single machine can no longer process large graphs efficiently, so efficient graph partitioning is necessary for large-graph applications. Traditional graph partitioning generally loads the whole graph into memory before performing partitioning; this is not only time consuming but also creates memory bottlenecks. These problems of memory limitation and high time complexity can be addressed by stream-based graph partitioning. A streaming graph partitioning algorithm reads each vertex once and immediately assigns it to a partition; it is therefore also called a one-pass algorithm. This paper proposes an efficient window-based streaming graph partitioning algorithm called WStream. WStream is an edge-cut partitioning algorithm, which distributes vertices among the partitions. Our results suggest that WStream partitions large graphs efficiently while keeping the load balanced across partitions and communication to a minimum. Evaluation on real workloads also demonstrates the effectiveness of the proposed algorithm, which achieves a significant reduction in load imbalance and edge cut across datasets of different sizes.
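    The abstract does not detail WStream's assignment heuristic, so the sketch below shows only the general shape of a window-based, one-pass edge-cut partitioner: vertices arrive as an adjacency stream, a small window buffers recent vertices, and each flushed vertex is placed greedily by neighbor affinity and partition load. The specific affinity and balance rules here are illustrative assumptions, not WStream's.

```python
from collections import defaultdict

def window_stream_partition(vertex_stream, num_partitions, window_size=100):
    """One-pass, window-based edge-cut partitioning sketch.

    Buffers up to `window_size` (vertex, neighbors) records and, when the window
    is full, greedily places the buffered vertex with the strongest affinity to
    an existing partition. The affinity and balance rules are illustrative only.
    """
    assignment = {}               # vertex -> partition id
    load = [0] * num_partitions   # number of vertices per partition
    window = []                   # buffered (vertex, neighbors) pairs

    def best_partition(neighbors):
        # Favor the partition that already holds most of this vertex's placed
        # neighbors (fewer cut edges), with a mild penalty on heavily loaded parts.
        counts = defaultdict(int)
        for n in neighbors:
            if n in assignment:
                counts[assignment[n]] += 1
        heaviest = max(load) + 1
        return max(range(num_partitions),
                   key=lambda p: counts[p] - load[p] / heaviest)

    def flush_one():
        # Place the buffered vertex with the most already-assigned neighbors.
        idx = max(range(len(window)),
                  key=lambda i: sum(1 for n in window[i][1] if n in assignment))
        vertex, neighbors = window.pop(idx)
        part = best_partition(neighbors)
        assignment[vertex] = part
        load[part] += 1

    for vertex, neighbors in vertex_stream:   # adjacency stream, read exactly once
        window.append((vertex, list(neighbors)))
        if len(window) >= window_size:
            flush_one()
    while window:                              # drain the remaining buffered vertices
        flush_one()
    return assignment
```

    The intuition behind buffering a window before committing is that it gives the partitioner a little look-ahead compared with strictly per-vertex greedy assignment, which is generally what motivates window-based streaming partitioners.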

    Time-Constrained Indoor Keyword-Aware Routing

    Rationalising territorially dispersed consumption: the projects of Fernand Courtoy for the electricity production and distribution of Belgium

    The electrification of the notoriously dispersed urbanization patterns of Belgium was a complex and confused operation. The superposition of a nationwide industrial framework, consisting of large elements such as coal basins and sea ports, and a territory characterized by a very fragmented and functionally diverse spatial structure, led to the appearance of electricity in all sorts of locations following diverse arrangements. Within fifty years, large-scale industrial self-producers, regional electricity companies, provincial projects, urban municipal companies and small-scale local initiatives brought electricity to every part of Belgium, barely guided by a national policy. Within this disordered context, Fernand Courtoy occupied a very particular and somewhat exceptional position. As an electrical engineer and shareholder of a local electricity company, he was able to rationalize the electricity supply, first within his own company (1911) and soon after across the entire industrial city of Liège (1919). Later, he became the driving force behind the establishment of the association of industrial self-producers (1922) and founded a consultancy firm that developed electricity plans for private companies as well as strategies for the electrification of whole provinces. Moreover, Courtoy was able to put his mark on the 1927 governmental commission that investigated the organization of an efficient electricity supply on a national scale. As the report of this commission proved too controversial for the strongly divided electricity sector, few of its propositions were realized. Nonetheless, through the debate provoked by the commission and through the numerous projects undertaken by his firm, Courtoy was able to introduce a perspective that was rather unusual for the Belgian context, combining large-scale rationalization with the national economic policies of dispersion. The paper discusses the various ways in which his plans sought an efficient electricity supply while recognizing the generalized availability of electricity as a necessary condition for Belgium’s distributed model of industrialization.

    SMAP: A Novel Heterogeneous Information Framework for Scenario-based Optimal Model Assignment

    The increasing maturity of big data applications has led to a proliferation of models targeting the same objectives within the same scenarios and datasets. However, selecting the most suitable model, in a way that considers each model's features as well as scenario-specific requirements and constraints, still poses a significant challenge. Existing methods have focused on crowdsourcing-based worker-task assignment and neglect the scenario-dataset-model assignment problem. To address this challenge, a new problem named the Scenario-based Optimal Model Assignment (SOMA) problem is introduced, and a novel framework entitled Scenario and Model Associative percepts (SMAP) is developed. SMAP is a heterogeneous information framework that can integrate various types of information to intelligently select a suitable dataset and allocate the optimal model for a specific scenario. To evaluate models comprehensively, a new score function that utilizes multi-head attention mechanisms is proposed. Moreover, a novel memory mechanism named the mnemonic center is developed to store matched heterogeneous information and prevent duplicate matching. Six popular traffic scenarios are selected as study cases, and extensive experiments are conducted on a dataset to verify the effectiveness and efficiency of SMAP and the score function.
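    The abstract states only that the score function uses multi-head attention to evaluate models; the sketch below shows one plausible way to realize such a scorer, attending from scenario feature tokens over model feature tokens and regressing a scalar compatibility score. The layer sizes, pooling, and regression head are assumptions made for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn

class AttentionScoreFunction(nn.Module):
    """Illustrative scenario-model compatibility scorer using multi-head attention.

    Attends from scenario feature tokens over model feature tokens, pools the
    attended representation, and regresses a scalar score. A plausible sketch
    of an attention-based score function, not the paper's actual design.
    """
    def __init__(self, feat_dim=64, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.score_head = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, 1))

    def forward(self, scenario_feats, model_feats):
        # scenario_feats: (batch, n_scenario_tokens, feat_dim)
        # model_feats:    (batch, n_model_tokens, feat_dim)
        attended, _ = self.attn(query=scenario_feats,
                                key=model_feats, value=model_feats)
        pooled = attended.mean(dim=1)                # aggregate the scenario's view of the model
        return self.score_head(pooled).squeeze(-1)   # one compatibility score per pair

# Toy usage: score three candidate models against the same scenario.
scorer = AttentionScoreFunction()
scenario = torch.randn(3, 5, 64)   # scenario feature tokens, repeated per candidate
models = torch.randn(3, 8, 64)     # per-candidate model feature tokens
print(scorer(scenario, models))    # tensor of 3 scores
```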