
    Smart Signs: Showing the way in Smart Surroundings

    This paper presents a context-aware guidance and messaging system for large buildings and surrounding venues. Smart Signs are a new type of electronic door- and way-sign based on wireless sensor networks. Smart Signs present in-situ personalized guidance and messages, are ubiquitous, and are easy to understand. They combine the ease of use of traditional static signs with the flexibility and responsiveness of navigation systems. The Smart Signs system uses context information such as a user’s mobility limitations, the weather, and possible emergency situations to improve guidance and messaging. Minimal infrastructure requirements and a simple deployment tool make it easy to deploy a Smart Signs system on demand. An important design issue of the Smart Signs system is privacy: the system secures communication links, does not track users, allows almost completely anonymous use, and prevents the system from being used as a tool for spying on users.
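
    The context-dependent guidance described above can be pictured with a short sketch: a shortest-path search over a building graph whose edge costs react to user context. The graph layout, context keys, and penalty factor below are illustrative assumptions, not the paper's implementation.

```python
import heapq

def route(graph, start, goal, context):
    """Shortest path with context-dependent edge costs: stairs are excluded
    for wheelchair users, and outdoor segments are penalized in bad weather.
    graph: {node: {neighbor: {"kind": str, "length": float}}} (assumed format).
    """
    def cost(edge):
        if context.get("wheelchair") and edge["kind"] == "stairs":
            return float("inf")         # impassable for this user
        w = edge["length"]
        if context.get("raining") and edge["kind"] == "outdoor":
            w *= 3.0                     # prefer covered routes in rain
        return w

    best = {start: 0.0}
    frontier = [(0.0, start, [start])]   # (cost so far, node, path)
    while frontier:
        d, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, edge in graph.get(node, {}).items():
            nd = d + cost(edge)
            if nd < best.get(nxt, float("inf")):
                best[nxt] = nd
                heapq.heappush(frontier, (nd, nxt, path + [nxt]))
    return None                          # no feasible route for this context
```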

    Unified Pragmatic Models for Generating and Following Instructions

    We show that explicit pragmatic inference aids in correctly generating and following natural language instructions for complex, sequential tasks. Our pragmatics-enabled models reason about why speakers produce certain instructions, and about how listeners will react upon hearing them. Like previous pragmatic models, we use learned base listener and speaker models to build a pragmatic speaker that uses the base listener to simulate the interpretation of candidate descriptions, and a pragmatic listener that reasons counterfactually about alternative descriptions. We extend these models to tasks with sequential structure. Evaluation of language generation and interpretation shows that pragmatic inference improves state-of-the-art listener models (at correctly interpreting human instructions) and speaker models (at producing instructions correctly interpreted by humans) in diverse settings.
    Comment: NAACL 2018, camera-ready version
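
    As an illustrative sketch (not the authors' code), the core reranking step of such pragmatic models fits in a few lines; the candidate sets and the listener_prob/speaker_prob callables below are hypothetical stand-ins for learned base models.

```python
from typing import Callable, Sequence

def pragmatic_speaker(
    candidates: Sequence[str],
    intended_actions: Sequence[str],
    listener_prob: Callable[[Sequence[str], str], float],
) -> str:
    """Choose the candidate description under which a simulated base
    listener assigns the highest probability to the intended actions."""
    return max(candidates, key=lambda d: listener_prob(intended_actions, d))

def pragmatic_listener(
    candidate_actions: Sequence[Sequence[str]],
    description: str,
    speaker_prob: Callable[[str, Sequence[str]], float],
) -> Sequence[str]:
    """Reason counterfactually: prefer the action sequence that a base
    speaker would most plausibly have described this way."""
    return max(candidate_actions, key=lambda a: speaker_prob(description, a))
```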

    Listen, Attend, and Walk: Neural Mapping of Navigational Instructions to Action Sequences

    We propose a neural sequence-to-sequence model for direction following, a task that is essential to realizing effective autonomous agents. Our alignment-based encoder-decoder model with long short-term memory recurrent neural networks (LSTM-RNN) translates natural language instructions to action sequences based upon a representation of the observable world state. We introduce a multi-level aligner that empowers our model to focus on sentence "regions" salient to the current world state by using multiple abstractions of the input sentence. In contrast to existing methods, our model uses no specialized linguistic resources (e.g., parsers) or task-specific annotations (e.g., seed lexicons). It is therefore generalizable, yet still achieves the best results reported to date on a benchmark single-sentence dataset and competitive results for the limited-training multi-sentence setting. We analyze our model through a series of ablations that elucidate the contributions of the primary components of our model.
    Comment: To appear at AAAI 2016 (and an extended version of a NIPS 2015 Multimodal Machine Learning workshop paper)
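
    A simplified numpy sketch of the alignment idea: score per-word encoder states against a query that folds in both the decoder state and a sentence-level summary, i.e. two abstractions of the input. The paper's aligner is learned and more elaborate; this only shows the shape of the computation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_level_context(word_states, sent_summary, decoder_state):
    """One decoding step: attend over sentence 'regions' using two input
    abstractions. word_states: (T, d); sent_summary, decoder_state: (d,)."""
    query = decoder_state + sent_summary   # combine abstraction levels (simplified)
    scores = word_states @ query            # alignment score per word
    weights = softmax(scores)               # attention distribution over words
    return weights @ word_states            # context vector for the next action

# Toy usage with random states:
T, d = 6, 8
rng = np.random.default_rng(0)
ctx = multi_level_context(rng.normal(size=(T, d)), rng.normal(size=d), rng.normal(size=d))
```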

    The eye contact effect: mechanisms and development

    The ‘eye contact effect’ is the phenomenon whereby perceived eye contact with another human face modulates certain aspects of concurrent and/or immediately following cognitive processing. In addition, functional imaging studies in adults have revealed that eye contact can modulate activity in structures in the social brain network, and developmental studies show evidence for preferential orienting towards, and processing of, faces with direct gaze from early in life. We review different theories of the eye contact effect and advance a ‘fast-track modulator’ model. Specifically, we hypothesize that perceived eye contact is initially detected by a subcortical route, which then modulates the activation of the social brain as it processes the accompanying detailed sensory information.

    Multi-Paradigm Reasoning for Access to Heterogeneous GIS

    Accessing and querying geographical data in a uniform way has become easier in recent years. Emerging standards like WFS are turning the web into a platform for geospatial web services. Mediation architectures like VirGIS overcome syntactic and semantic heterogeneity between several distributed sources. On mobile devices, however, this kind of solution is not suitable, due to limitations in bandwidth, computation power, and available storage space. The aim of this paper is to present a solution that provides powerful reasoning mechanisms accessible from mobile applications and involving data from several heterogeneous sources. By adapting content to time and location, mobile web information systems can not only increase the value and suitability of the service itself, but also substantially reduce the amount of data delivered to users. Because many problems pertain to infrastructure and transportation in general, and to wayfinding in particular, one cornerstone of the architecture is higher-level reasoning on graph networks with the Multi-Paradigm Location Language MPLL. A mediation architecture is used as a “graph provider” in order to transfer the computational load to the best-suited component, since graph construction and transformation, for example, are heavy on resources. Reasoning in general can be conducted either near the “source” or near the end user, depending on the specific use case. The concepts underlying the proposal described in this paper are illustrated by a typical and concrete scenario for web applications.
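
    One way to picture the data-reduction idea, as a toy sketch: the mediator trims the network graph to the user's vicinity before shipping it to the mobile client, so reasoning near the end user runs on far less data. The coordinate-based graph format and radius cut are assumptions for illustration, not VirGIS or MPLL specifics.

```python
from math import hypot

def subgraph_near(nodes, edges, user_xy, radius):
    """Keep only nodes within `radius` of the user and edges whose endpoints
    both survive. nodes: {id: (x, y)}; edges: [(a, b, weight)] (assumed)."""
    ux, uy = user_xy
    keep = {n for n, (x, y) in nodes.items() if hypot(x - ux, y - uy) <= radius}
    return (
        {n: nodes[n] for n in keep},
        [(a, b, w) for (a, b, w) in edges if a in keep and b in keep],
    )
```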