
    Fireground location understanding by semantic linking of visual objects and building information models

    This paper presents an outline for improved localization and situational awareness in fire emergency situations based on semantic technology and computer vision techniques. The novelty of our methodology lies in the semantic linking of video object recognition results from visual and thermal cameras with Building Information Models (BIM). The current limitations and possibilities of certain building information streams in the context of fire safety or fire incident management are addressed in this paper. Furthermore, our data management tools match higher-level semantic metadata descriptors of BIM with the outputs of deep-learning-based visual object recognition and classification networks. Based on these matches, estimates of camera, object, and event positions can be generated in the BIM model, transforming it from a static source of information into a rich, dynamic data provider. Previous work has already investigated the possibility of linking BIM and low-cost point sensors for fireground understanding, but these approaches did not take into account the benefits of video analysis and recent developments in semantics and feature learning research. Finally, the strengths of the proposed approach compared to the state of the art are its (semi-)automatic workflow, its generic and modular setup, and its multi-modal strategy, which together allow situational awareness to be created automatically, localization to be improved, and overall fire understanding to be facilitated.
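
    The following minimal sketch (Python, with hypothetical room inventories and detection labels; it is not the authors' code) illustrates the kind of semantic matching described above: object classes detected in a video frame are scored against the element classes that a BIM model lists per space, and the best-scoring space is taken as the likely camera location.

```python
# Illustrative sketch (not the authors' code): rank candidate BIM spaces by how
# well their element classes overlap with object labels detected in a video frame.
# The room inventories and detection labels below are hypothetical placeholders.

from collections import Counter

# Hypothetical BIM-derived inventory: space id -> element classes present in it
bim_spaces = {
    "room_101": {"door", "window", "radiator", "desk"},
    "corridor_1": {"door", "fire_extinguisher", "exit_sign"},
    "stairwell_A": {"handrail", "exit_sign", "door"},
}

def rank_spaces(detected_labels, spaces):
    """Score each BIM space by how many detected object classes it contains."""
    counts = Counter(detected_labels)
    scores = {sid: sum(counts[c] for c in classes) for sid, classes in spaces.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Labels a visual/thermal object detector might return for one frame
frame_detections = ["door", "exit_sign", "fire_extinguisher", "person"]
print(rank_spaces(frame_detections, bim_spaces))
# corridor_1 scores highest, suggesting the camera is most likely in that space
```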

    Visual Map Construction Using RGB-D Sensors for Image-Based Localization in Indoor Environments

    RGB-D sensors capture RGB images and depth images simultaneously, which makes it possible to acquire depth information at the pixel level. This paper focuses on the use of RGB-D sensors to construct a visual map, which is an extended dense 3D map containing the essential elements for image-based localization, such as the poses of the database cameras, visual features, and 3D structures of the building. Taking advantage of matched visual features and their corresponding depth values, a novel local optimization algorithm is proposed to achieve point cloud registration and database camera pose estimation. Next, graph-based optimization is used to obtain global consistency of the map. On the basis of the visual map, an image-based localization method is investigated that makes use of the epipolar constraint. The performance of visual map construction and image-based localization is evaluated on typical indoor scenes. The simulation results show that the average position errors of the database camera and the query camera can be limited to within 0.2 meters and 0.9 meters, respectively.
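
    As an illustration of the epipolar-constraint step mentioned above, the sketch below (assuming OpenCV, a known intrinsics matrix K, and already-matched keypoints; all names are illustrative, not the paper's code) recovers the relative pose between a query image and a database image, which can then be chained with the stored database-camera pose from the visual map.

```python
# Minimal sketch, assuming OpenCV and already-matched keypoints: recover the
# relative pose of a query camera w.r.t. a database camera via the epipolar
# constraint. The intrinsics matrix K and variable names are illustrative.

import cv2
import numpy as np

K = np.array([[525.0,   0.0, 319.5],   # example pinhole intrinsics
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

def relative_pose(pts_query, pts_db):
    """pts_query, pts_db: Nx2 float32 arrays of matched pixel coordinates."""
    E, inliers = cv2.findEssentialMat(pts_query, pts_db, K,
                                      method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_query, pts_db, K)
    # t is known only up to scale; depth from the RGB-D visual map can resolve it,
    # and chaining (R, t) with the stored database-camera pose yields the query pose.
    return R, t
```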

    STCP: Receiver-agnostic Communication Enabled by Space-Time Cloud Pointers

    During the last decade, mobile communication technologies have rapidly evolved and ubiquitous network connectivity is nearly achieved. However, we observe that there are critical situations in which none of the existing mobile communication technologies is usable. Such situations often arise when messages need to be delivered to arbitrary persons or devices that are located in a specific space at a specific time. For instance, at a disaster scene, current communication methods cannot deliver a rescuer's messages to the group of people in a specific area, even when their cellular connections are alive, because the rescuer cannot specify the receivers of the messages. We name this the receiver-unknown problem and propose a viable solution called SpaceMessaging. SpaceMessaging adopts the idea of a Post-it note, by which we casually deliver a message to a person who happens to visit a location at a random moment. To enable SpaceMessaging, we realize the concept of posting messages to a space by implementing cloud-pointers at a cloud server, to which messages can be posted and from which messages can be fetched by arbitrary mobile devices located in that space. Our Android-based prototype of SpaceMessaging, which maps a cloud-pointer to a WiFi signal fingerprint captured from mobile devices, demonstrates that it allows mobile devices to deliver messages to a specific space and to listen to the messages of that space in a highly accurate manner (with more than 90% recall).
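
    The sketch below is a hypothetical, simplified version of the cloud-pointer idea (not the paper's implementation): a server keeps message lists keyed by WiFi fingerprints, and devices post to or fetch from the pointer whose fingerprint is close enough to their current scan.

```python
# Hypothetical sketch of the cloud-pointer idea (not the paper's implementation):
# the server keeps message lists keyed by WiFi fingerprints, and devices post to
# or fetch from pointers whose fingerprint is close enough to their current scan.

def fingerprint_distance(fp_a, fp_b):
    """Euclidean distance over RSSI values; unseen APs default to -100 dBm."""
    aps = set(fp_a) | set(fp_b)
    return sum((fp_a.get(ap, -100) - fp_b.get(ap, -100)) ** 2 for ap in aps) ** 0.5

class CloudPointerStore:
    def __init__(self, threshold=15.0):
        self.pointers = []          # list of (fingerprint, [messages])
        self.threshold = threshold  # max distance to treat scans as the same space

    def post(self, fingerprint, message):
        for fp, messages in self.pointers:
            if fingerprint_distance(fp, fingerprint) < self.threshold:
                messages.append(message)
                return
        self.pointers.append((fingerprint, [message]))

    def fetch(self, fingerprint):
        return [m for fp, msgs in self.pointers
                if fingerprint_distance(fp, fingerprint) < self.threshold
                for m in msgs]

# A rescuer posts to the space observed in a scan; a nearby device fetches it.
store = CloudPointerStore()
store.post({"ap1": -42, "ap2": -67}, "Evacuate via stairwell B")
print(store.fetch({"ap1": -45, "ap2": -70}))   # ['Evacuate via stairwell B']
```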

    A Review of Hybrid Indoor Positioning Systems Employing WLAN Fingerprinting and Image Processing

    Location-based services (LBS) are a significant enabling technology. One of the main components of indoor LBS is the indoor positioning system (IPS). IPS utilizes many existing technologies such as radio frequency, images, acoustic signals, and magnetic, thermal, and optical sensors, as well as other sensors that are usually installed in mobile devices. The radio frequency technologies used in IPS are WLAN, Bluetooth, ZigBee, RFID, frequency modulation, and ultra-wideband. This paper explores studies that have combined WLAN fingerprinting and image processing to build an IPS. The studies on combined WLAN fingerprinting and image processing techniques are divided based on the methods used. The first part covers studies that have used WLAN fingerprinting to support image-based positioning. The second part examines works that have used image processing to support WLAN fingerprinting positioning. The third part covers studies in which image processing and WLAN fingerprinting are combined to build an IPS. Finally, a new concept is proposed for the future development of indoor positioning models based on WLAN fingerprinting and supported by image processing, addressing the effect of people's presence around the user and the problem of user orientation.
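
    As a concrete, hedged example of the hybrid schemes this review surveys (data formats and weights are assumptions, not taken from any surveyed paper), the sketch below fuses a coarse k-NN WLAN fingerprint estimate with a finer image-based position estimate via a weighted average.

```python
# Minimal sketch, under assumed data formats, of one hybrid scheme of the kind the
# review surveys: a coarse k-NN WLAN fingerprint estimate fused with a finer
# image-based estimate via a weighted average. All values are illustrative.

import numpy as np

def knn_fingerprint_position(rssi_query, radio_map, positions, k=3):
    """radio_map: NxM RSSI matrix for N reference points; positions: Nx2 coordinates."""
    dists = np.linalg.norm(radio_map - rssi_query, axis=1)
    nearest = np.argsort(dists)[:k]
    return positions[nearest].mean(axis=0)

def fuse(wlan_xy, image_xy, w_image=0.7):
    """Weight the image-based fix more heavily when it is available."""
    return w_image * np.asarray(image_xy) + (1.0 - w_image) * np.asarray(wlan_xy)

radio_map = np.array([[-40, -70, -80], [-55, -60, -75], [-70, -50, -65]], float)
positions = np.array([[0.0, 0.0], [5.0, 0.0], [10.0, 2.0]])
coarse = knn_fingerprint_position(np.array([-52, -62, -74], float), radio_map, positions)
print(fuse(coarse, image_xy=(4.6, 0.3)))
```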

    Off-line evaluation of indoor positioning systems in different scenarios: the experiences from IPIN 2020 competition

    Every year, for ten years now, the IPIN competition has aimed at evaluating real-world indoor localisation systems by testing them in a realistic environment, with realistic movement, using the EvAAL framework. The competition provided a unique overview of the state-of-the-art of systems, technologies, and methods for indoor positioning and navigation purposes. Through fair comparison of the performance achieved by each system, the competition was able to identify the most promising approaches and to pinpoint the most critical working conditions. In 2020, the competition included 5 diverse off-site Tracks, each resembling real use cases and challenges for indoor positioning. The results in terms of participation and accuracy of the proposed systems have been encouraging. The best performing competitors obtained a third quartile of error of 1 m for the Smartphone Track and 0.5 m for the Foot-mounted IMU Track. Although the submissions ran only as algorithms rather than on physical systems, these results represent impressive achievements.

    Track 3 organizers were supported by the European Union’s Horizon 2020 Research and Innovation programme under the Marie Skłodowska-Curie Grant 813278 (A-WEAR: A network for dynamic WEarable Applications with pRivacy constraints), MICROCEBUS (MICINN, ref. RTI2018-095168-B-C55, MCIU/AEI/FEDER UE), INSIGNIA (MICINN ref. PTQ2018-009981), and REPNIN+ (MICINN, ref. TEC2017-90808-REDT). We would like to thank the UJI Library managers and employees for their support while collecting the required datasets for Track 3. Track 5 organizers were supported by the JST-OPERA Program, Japan, under Grant JPMJOP1612. Track 7 organizers were supported by the Bavarian Ministry for Economic Affairs, Infrastructure, Transport and Technology through the Center for Analytics-Data-Applications (ADA-Center) within the framework of “BAYERN DIGITAL II.” Team UMinho (Track 3) was supported by FCT—Fundação para a Ciência e Tecnologia within the R&D Units Project Scope under Grant UIDB/00319/2020, and the Ph.D. Fellowship under Grant PD/BD/137401/2018. Team YAI (Track 3) was supported by the Ministry of Science and Technology (MOST) of Taiwan under Grant MOST 109-2221-E-197-026. Team Indora (Track 3) was supported in part by the Slovak Grant Agency, Ministry of Education and Academy of Science, Slovakia, under Grant 1/0177/21, and in part by the Slovak Research and Development Agency under Contract APVV-15-0091. Team TJU (Track 3) was supported in part by the National Natural Science Foundation of China under Grant 61771338 and in part by the Tianjin Research Funding under Grant 18ZXRHSY00190. Team Next-Newbie Reckoners (Track 3) was supported by the Singapore Government through the Industry Alignment Fund—Industry Collaboration Projects Grant. This research was conducted at the Singtel Cognitive and Artificial Intelligence Lab for Enterprises (SCALE@NTU), which is a collaboration between Singapore Telecommunications Limited (Singtel) and Nanyang Technological University (NTU). Team KawaguchiLab (Track 5) was supported by JSPS KAKENHI under Grant JP17H01762. Team WHU&AutoNavi (Track 6) was supported by the National Key Research and Development Program of China under Grant 2016YFB0502202. Team YAI (Tracks 6 and 7) was supported by the Ministry of Science and Technology (MOST) of Taiwan under Grant MOST 110-2634-F-155-001.
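
    For reference, the headline metric quoted above (the third quartile of positioning error, as used in EvAAL-style scoring) can be computed as in the short sketch below; the positions used are made up for illustration.

```python
# Sketch of the headline metric quoted above: the third quartile (75th percentile)
# of point-wise positioning error, as used in EvAAL-style scoring.
# The positions below are made up for illustration.

import numpy as np

def third_quartile_error(estimates, ground_truth):
    """estimates, ground_truth: Nx2 (or Nx3) arrays of positions in metres."""
    errors = np.linalg.norm(np.asarray(estimates) - np.asarray(ground_truth), axis=1)
    return np.percentile(errors, 75)

est = [[1.0, 2.1], [3.4, 0.9], [5.2, 4.8], [7.1, 6.0]]
gt  = [[1.2, 2.0], [3.0, 1.0], [5.0, 5.0], [6.0, 6.5]]
print(f"third quartile of error: {third_quartile_error(est, gt):.2f} m")
```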
