
    AmIE: An Ambient Intelligent Environment for Assisted Living

    In the modern world of technology, Internet-of-Things (IoT) systems strive to provide extensive, interconnected, and automated solutions for almost every aspect of life. This paper proposes an IoT context-aware system that realizes an Ambient Intelligence (AmI) environment, such as an apartment, house, or building, to assist blind, visually impaired, and elderly people. The proposed system aims to provide an easy-to-use, voice-controlled system to locate, navigate, and assist users indoors. Its main purpose is to provide indoor positioning, assisted navigation, outside weather information, room temperature, people availability, phone calls, and emergency evacuation when needed. The system enhances the user's awareness of the surrounding environment by feeding them relevant information through a wearable device. In addition, the system is voice-controlled in both English and Arabic, and the information is delivered as audio messages in both languages. The system design, implementation, and evaluation consider the constraints of common types of premises in Kuwait, as well as challenges such as the training needed by the users. This paper presents cost-effective implementation options through the adoption of a Raspberry Pi microcomputer, Bluetooth Low Energy devices, and an Android smart watch.
    Comment: 6 pages, 8 figures, 1 table
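
    The abstract mentions indoor positioning built on Bluetooth Low Energy devices. As a minimal sketch only, the snippet below illustrates one common way such room-level positioning can be done: pick the BLE beacon with the strongest received signal strength (RSSI). The beacon IDs, room names, and scan format are hypothetical; the paper's actual positioning method and hardware interface are not specified here.

    # Minimal sketch (assumed approach): room-level localization by choosing the
    # BLE beacon with the strongest RSSI. All identifiers below are illustrative.
    BEACON_ROOMS = {
        "beacon-01": "kitchen",
        "beacon-02": "living room",
        "beacon-03": "bedroom",
    }

    def estimate_room(scans):
        """scans: list of (beacon_id, rssi_dbm) tuples from one BLE scan."""
        known = [(bid, rssi) for bid, rssi in scans if bid in BEACON_ROOMS]
        if not known:
            return None
        # RSSI values are negative dBm; the largest value indicates the closest beacon.
        strongest_id, _ = max(known, key=lambda pair: pair[1])
        return BEACON_ROOMS[strongest_id]

    print(estimate_room([("beacon-01", -78), ("beacon-03", -52)]))  # -> "bedroom"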

    Fireground location understanding by semantic linking of visual objects and building information models

    This paper presents an outline for improved localization and situational awareness in fire emergency situations based on semantic technology and computer vision techniques. The novelty of our methodology lies in the semantic linking of video object recognition results from visual and thermal cameras with Building Information Models (BIM). The current limitations and possibilities of certain building information streams in the context of fire safety or fire incident management are addressed in this paper. Furthermore, our data management tools match higher-level semantic metadata descriptors of BIM with deep-learning-based visual object recognition and classification networks. Based on these matches, estimates of camera, object, and event positions can be generated in the BIM model, transforming it from a static source of information into a rich, dynamic data provider. Previous work has already investigated the possibility of linking BIM and low-cost point sensors for fireground understanding, but these approaches did not take into account the benefits of video analysis and recent developments in semantics and feature-learning research. Finally, the strengths of the proposed approach compared to the state of the art are its (semi-)automatic workflow, its generic and modular setup, and its multi-modal strategy, which make it possible to automatically create situational awareness, improve localization, and facilitate overall fire understanding.
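
    The core idea is linking object detections from cameras to semantically described BIM elements. The sketch below is only an illustration of that linking step under simple assumptions: detections are plain class labels, BIM elements are flat records, and matching is by label and floor. The element records, fields, and matching rule are hypothetical and do not reflect the paper's actual ontology or data model.

    # Minimal sketch (assumed data model): link camera detections to BIM elements
    # by matching the detected class label against element metadata on the same floor.
    BIM_ELEMENTS = [
        {"id": "door-2F-07", "type": "door", "floor": 2, "room": "corridor B"},
        {"id": "extinguisher-2F-01", "type": "fire extinguisher", "floor": 2, "room": "corridor B"},
        {"id": "window-1F-03", "type": "window", "floor": 1, "room": "office 1.3"},
    ]

    def link_detections(detections, floor):
        """detections: class labels seen by the camera; floor: assumed camera floor."""
        links = []
        for label in detections:
            candidates = [e for e in BIM_ELEMENTS if e["type"] == label and e["floor"] == floor]
            links.append((label, candidates))
        return links

    for label, matches in link_detections(["door", "fire extinguisher"], floor=2):
        print(label, "->", [m["id"] for m in matches])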

    Vision-Based Localization Algorithm Based on Landmark Matching, Triangulation, Reconstruction, and Comparison

    Many generic position-estimation algorithms are vulnerable to the ambiguity introduced by non-unique landmarks. In addition, the available high-dimensional image data is not fully exploited when these techniques are extended to vision-based localization. This paper presents the landmark matching, triangulation, reconstruction, and comparison (LTRC) global localization algorithm, which is reasonably immune to ambiguous landmark matches. It extracts natural landmarks for the (rough) matching stage before generating a list of possible position estimates through triangulation. Reconstruction and comparison then rank the possible estimates. The LTRC algorithm has been implemented in an interpreted language on a robot equipped with a panoramic vision system. Empirical data show a remarkable improvement in accuracy compared with the established random sample consensus method. LTRC is also robust against inaccurate map data.
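
    To illustrate the triangulation stage only, the sketch below estimates a 2D position by intersecting the bearing rays toward two identified landmarks at known map positions. The landmark coordinates, bearings, and the assumption of a known global heading are illustrative; the matching, reconstruction, and comparison stages of LTRC are not shown.

    # Minimal sketch (assumed setup): 2D triangulation from global bearings to two
    # uniquely identified landmarks, by intersecting the two bearing rays.
    import math

    def triangulate(l1, b1, l2, b2):
        """l1, l2: landmark (x, y); b1, b2: global bearings (radians) from the robot to each landmark."""
        d1 = (math.cos(b1), math.sin(b1))
        d2 = (math.cos(b2), math.sin(b2))
        det = d2[0] * d1[1] - d1[0] * d2[1]
        if abs(det) < 1e-9:  # bearings are (nearly) parallel: no unique intersection
            return None
        dx, dy = l2[0] - l1[0], l2[1] - l1[1]
        t1 = (d2[0] * dy - d2[1] * dx) / det
        # The robot lies on the line through l1 along d1.
        return (l1[0] + t1 * d1[0], l1[1] + t1 * d1[1])

    # Robot actually at (0, 0): landmark (1, 0) seen at bearing 0, landmark (0, 1) at bearing pi/2.
    print(triangulate((1, 0), 0.0, (0, 1), math.pi / 2))  # -> (0.0, 0.0)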

    InLoc: Indoor Visual Localization with Dense Matching and View Synthesis

    Get PDF
    We seek to predict the 6-degree-of-freedom (6DoF) pose of a query photograph with respect to a large indoor 3D map. The contributions of this work are three-fold. First, we develop a new large-scale visual localization method targeted at indoor environments. The method proceeds in three steps: (i) efficient retrieval of candidate poses, which ensures scalability to large-scale environments; (ii) pose estimation using dense matching rather than local features, to deal with textureless indoor scenes; and (iii) pose verification by virtual view synthesis, to cope with significant changes in viewpoint, scene layout, and occluders. Second, we collect a new dataset with reference 6DoF poses for large-scale indoor localization. Query photographs are captured by mobile phones at a different time than the reference 3D map, thus presenting a realistic indoor localization scenario. Third, we demonstrate that our method significantly outperforms current state-of-the-art indoor localization approaches on this new challenging dataset.
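
    As a rough illustration of step (i) only, the sketch below ranks database images by cosine similarity of global descriptors and returns the top candidates, whose reference poses would then seed dense matching and verification. The descriptor dimensionality, the top-k value, and the pose placeholders are assumptions for illustration, not InLoc's actual configuration.

    # Minimal sketch (assumed retrieval step): rank database images by cosine
    # similarity of global descriptors and keep the k best candidates.
    import numpy as np

    def retrieve_candidates(query_desc, db_descs, db_poses, k=3):
        """query_desc: (D,) descriptor; db_descs: (N, D) array; db_poses: N reference poses."""
        q = query_desc / np.linalg.norm(query_desc)
        db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
        sims = db @ q                      # cosine similarity to every database image
        top = np.argsort(-sims)[:k]        # indices of the k most similar images
        return [(int(i), float(sims[i]), db_poses[i]) for i in top]

    # Toy usage with random descriptors and placeholder poses.
    rng = np.random.default_rng(0)
    db_descs = rng.normal(size=(100, 128))
    db_poses = [f"pose_{i}" for i in range(100)]
    query = db_descs[42] + 0.01 * rng.normal(size=128)
    print(retrieve_candidates(query, db_descs, db_poses))  # image 42 should rank first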
