
    Evaluation of Multi-Level Cognitive Maps for Supporting Between-Floor Spatial Behavior in Complex Indoor Environments

    People often become disoriented when navigating complex, multi-level buildings. To efficiently find destinations located on different floors, navigators must refer to a globally coherent mental representation of the multi-level environment, termed a multi-level cognitive map. However, there is a surprising dearth of research into underlying theories of why integrating multi-level spatial knowledge into a multi-level cognitive map is so challenging and error-prone for humans. This overarching problem is the core motivation of this dissertation. We address it with a two-pronged approach combining basic and applied research questions. Of theoretical interest, we investigate how multi-level built environments are learned and structured in memory; we define the concept of multi-level cognitive maps and propose a framework for their development. We then conducted a set of empirical experiments to evaluate the effects of several environmental factors on users' development of multi-level cognitive maps. The findings of these studies provide important design guidelines for architects and help answer the research question of why people get lost in buildings. On the applied side, we investigate how to design user-friendly visualization interfaces that augment users' capability to form multi-level cognitive maps. An important finding of this dissertation is that increasing visual access with an X-ray-like visualization interface is effective for overcoming the limited visual access of built environments and assists the development of multi-level cognitive maps. These findings provide important human-computer interaction (HCI) guidelines for visualization techniques to be used in future indoor navigation systems. In sum, this dissertation adopts an interdisciplinary approach, combining theories from spatial cognition, information visualization, and HCI, to address a long-standing and ubiquitous problem faced by anyone who navigates indoors: why people get lost inside multi-level buildings. The results generate knowledge and explanation at both theoretical and applied levels, and contribute to the growing field of real-time indoor navigation systems.

    Real-time kinematics for accurate geolocalization of images in telerobotic applications

    The paper discusses a real-time kinematic system for the accurate geolocalization of images acquired through stereoscopic cameras mounted on a robot, in particular teleoperated machinery. A teleoperated vehicle may be used to explore an unsafe environment and to acquire stereoscopic images in real time through two cameras mounted on top of it. Each camera has a visible image sensor; for night operation, or when temperature is an important parameter, each camera can be equipped with both visible and infrared image sensors. One of the main issues in telerobotics is the real-time and accurate geolocalization of the images, where an accuracy of a few centimetres is required. This is much better than the accuracy provided by GPS (Global Positioning System), which is on the order of a few metres. To this end, a real-time kinematic system is proposed which acquires the GPS signal of the vehicle plus, through an RF channel, the GPS signal of a reference base station geolocalized with centimetre accuracy. To improve the robustness of the differential GPS system, data from an Inertial Measurement Unit are also used. Another issue addressed in this paper is the real-time implementation of a stereoscopic image-processing algorithm to recover the 3D structure of the scene. The focus is on the 3D reconstruction of the scene, which provides the reference trajectory for the actuation performed by a robotic arm with a proper end-effector.
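
    The core differential idea can be made concrete with a small sketch. The following Python fragment is a minimal, position-domain illustration only: real RTK operates on carrier-phase measurements, and the function names, coordinates, and the naive GPS/IMU blend are assumptions for illustration rather than the system described in the paper.

    ```python
    import numpy as np

    def dgps_correct(rover_fix, base_fix, base_truth):
        """Position-domain differential correction: the error observed at a
        base station with a surveyed (cm-accurate) position is assumed to be
        common to the rover, and is subtracted from the rover's fix.
        All positions are local ENU coordinates in metres."""
        correction = np.asarray(base_truth) - np.asarray(base_fix)
        return np.asarray(rover_fix) + correction

    def blend_with_imu(gps_pos, imu_pos, gps_weight=0.8):
        """Naive complementary blend of GPS and IMU dead reckoning; a real
        system would fuse them with a Kalman filter instead."""
        return gps_weight * np.asarray(gps_pos) + (1 - gps_weight) * np.asarray(imu_pos)

    # Example: the base fix is off by ~2 m; the same error is assumed at the rover.
    base_truth = [0.0, 0.0, 0.0]     # surveyed base position
    base_fix   = [1.8, -0.7, 0.4]    # base position as reported by GPS
    rover_fix  = [120.5, 33.1, 1.9]  # rover position as reported by GPS
    print(dgps_correct(rover_fix, base_fix, base_truth))  # ~[118.7, 33.8, 1.5]
    ```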

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was training in geospatial data acquisition and processing for students attending Architecture and Engineering courses, in order to start up a team of "volunteer mappers". The project aims to document environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in the activities connected with geospatial data collection, integration and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered in the World Heritage List since 1997; the area was affected by a flood on 25 October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques such as terrestrial and aerial LiDAR, close-range and aerial photogrammetry, and topographic and GNSS instruments, or by non-conventional systems and instruments such as UAVs and mobile mapping. The ultimate goal is to implement a WebGIS platform to share all the collected data with local authorities and the Civil Protection.

    Crowd-based cognitive perception of the physical world: Towards the internet of senses

    This paper introduces a possible architecture and discusses research directions for the realization of the Cognitive Perceptual Internet (CPI), which is enabled by the convergence of wired and wireless communications, traditional sensor networks, mobile crowd-sensing, and machine learning techniques. The CPI concept stems from the fact that mobile devices, such as smartphones and wearables, are becoming an outstanding means of zero-effort world-sensing and digitalization thanks to their pervasive diffusion and the increasing number of embedded sensors. Data collected by such devices provide unprecedented insights into the physical world that can be inferred through cognitive processes, thus originating a digital sixth sense. In this paper, we describe how the Internet can behave like a sensing brain, thus evolving into the Internet of Senses, with network-based cognitive perception and action capabilities built upon mobile crowd-sensing mechanisms. The new concept of the hyper-map is envisioned as an efficient geo-referenced repository of knowledge about the physical world; such knowledge is acquired and augmented through heterogeneous sensors, multi-user cooperation and distributed learning mechanisms. Furthermore, we indicate the possibility of accommodating proactive sensors, in addition to common reactive sensors such as cameras, antennas, thermometers and inertial measurement units, by exploiting massive antenna arrays at millimetre waves to enhance mobile terminals' perception capabilities as well as the range of new applications. Finally, we distil some insights about the challenges arising in the realization of the CPI, corroborated by preliminary results, and we depict a futuristic scenario in which the proposed Internet of Senses becomes a reality.
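
    The paper describes the hyper-map only at the conceptual level. As a toy illustration of a geo-referenced knowledge repository, one could bin observations from heterogeneous sensors into spatial cells, as in the Python sketch below; the class, its API and the cell size are invented for illustration and are not the paper's design.

    ```python
    from collections import defaultdict

    class HyperMap:
        """Toy geo-referenced knowledge store: observations from heterogeneous
        sensors are binned into lat/lon cells and merged per cell.
        Everything here is an illustrative assumption, not the paper's design."""

        def __init__(self, cell_deg=0.001):  # roughly 100 m cells at mid latitudes
            self.cell_deg = cell_deg
            self.cells = defaultdict(list)

        def _key(self, lat, lon):
            return (round(lat / self.cell_deg), round(lon / self.cell_deg))

        def add_observation(self, lat, lon, sensor, value):
            self.cells[self._key(lat, lon)].append((sensor, value))

        def query(self, lat, lon):
            return self.cells.get(self._key(lat, lon), [])

    hm = HyperMap()
    hm.add_observation(44.494, 11.343, "thermometer", 21.5)
    hm.add_observation(44.494, 11.343, "camera", "crowd_density=low")
    print(hm.query(44.494, 11.343))
    ```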

    Mobile Robots Navigation

    Mobile robot navigation includes different interrelated activities: (i) perception, as obtaining and interpreting sensory information; (ii) exploration, as the strategy that guides the robot in selecting the next direction to go; (iii) mapping, involving the construction of a spatial representation from the sensory information perceived; (iv) localization, as the strategy for estimating the robot's position within the spatial map; (v) path planning, as the strategy for finding a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors all over the world. Research cases are documented in 32 chapters, organized within the 7 categories described next.
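
    Of these activities, path planning is the easiest to make concrete. The following Python sketch implements toy grid-based planning with breadth-first search; the occupancy grid, coordinates and function name are invented for illustration and are only a stand-in for the planners surveyed in the book.

    ```python
    from collections import deque

    def plan_path(grid, start, goal):
        """Breadth-first search on a 4-connected occupancy grid (0 = free)."""
        q, parent = deque([start]), {start: None}
        while q:
            cur = q.popleft()
            if cur == goal:
                path = []
                while cur is not None:      # walk back to the start
                    path.append(cur)
                    cur = parent[cur]
                return path[::-1]
            x, y = cur
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                        and grid[nxt[0]][nxt[1]] == 0 and nxt not in parent):
                    parent[nxt] = cur
                    q.append(nxt)
        return None                          # goal unreachable

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(plan_path(grid, (0, 0), (2, 0)))
    # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
    ```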

    Framework for indoor video-based augmented reality applications

    Augmented Reality (AR) has been proven to be useful in many fields, such as medical surgery, military training, engineering design, tourist guiding, manufacturing and maintenance. Several AR systems and tracking tools have been reviewed and examined. Taking into consideration the shortcomings of the available AR systems, a framework for indoor video-based AR applications is proposed that integrates four main components of AR applications in one system: a large-scale virtual environment, mobile devices, interaction methods and video tracking. The proposed framework benefits from the rapidly evolving technology of virtual modeling by combining GIS maps and 3D virtual models of cities and building interiors in one single platform. Interaction methods for AR applications are introduced, such as automatic 3D picking, which allows for location-based data access. In addition, a practical method is proposed for the configuration and deployment of video tracking. This method makes use of the XML mark-up language to allow for future extensions and simplified interchangeability. An implementation of the proposed approach is developed to demonstrate the feasibility of the framework. Different case studies are carried out to validate the applicability of the system and to identify its benefits and limitations.
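
    The abstract mentions the XML-based tracking configuration only at a high level. A hypothetical configuration and the Python code to read it might look like the sketch below; every element and attribute name is invented, since the paper's schema is not given here.

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical tracking configuration; all element and attribute names
    # are invented for illustration -- the paper's schema is not published here.
    CONFIG = """
    <tracking>
      <camera id="cam0" width="640" height="480" fps="30"/>
      <marker id="room-312" size_mm="80" model="models/room312.obj"/>
    </tracking>
    """

    root = ET.fromstring(CONFIG)
    for cam in root.iter("camera"):
        print("camera", cam.get("id"), cam.get("width"), "x", cam.get("height"))
    for marker in root.iter("marker"):
        print("marker", marker.get("id"), "->", marker.get("model"))
    ```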

    Submap-based indoor navigation system for the Fetch robot

    In this paper, we present a novel navigation framework for the Fetch robot in a large-scale environment based on submapping techniques. This indoor navigation system is divided into a submap mapping part and an on-line localization part. For the mapping part, in order to deal with large environments or multi-story buildings, a submap mapping framework fusing two-dimensional (2D) laser scans and 3D point clouds from an RGB-D sensor is proposed using Google Cartographer. Meanwhile, several image datasets with corresponding poses are created from the RGB-D sensor. Thanks to the submap framework, the accumulated error is bounded by the size of each submap, so localization accuracy is improved. For on-line localization, in order to switch between submaps, the live images from the RGB-D sensor are matched against the database images using DeepLCD, a deep-learning-based library for loop closure. Based on the information from DeepLCD and odometry, adaptive Monte Carlo localization (AMCL) is reinitialized to complete the localization task. To validate the resulting accuracy, reflectors and a motion capture system are used to compute the absolute trajectory error (ATE) and the relative pose error (RPE) based on the Gauss-Newton (GN) algorithm. Finally, the proposed framework is tested on the Fetch simulator and the real Fetch robot, covering both submap mapping and on-line localization.
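
    The submap-switching step can be sketched as follows: query the live image's descriptor against each submap's keyframe database and reinitialize AMCL at the pose of the best match. Everything below (toy descriptors standing in for DeepLCD embeddings, the similarity threshold, the data layout) is an invented illustration, not the authors' code.

    ```python
    import numpy as np

    # Toy keyframe databases: each submap stores (descriptor, pose) pairs,
    # where the pose is (x, y, yaw) recorded when the keyframe was captured.
    submaps = {
        "floor1": [(np.array([1.0, 0.0]), (2.0, 3.0, 0.0))],
        "floor2": [(np.array([0.0, 1.0]), (5.0, 1.0, 1.57))],
    }

    def relocalize(live_desc, submaps, min_sim=0.8):
        """Return (submap_id, pose) of the best-matching keyframe, or None.
        The returned pose would be fed to AMCL as its new initial pose."""
        best = None
        for sid, keyframes in submaps.items():
            for desc, pose in keyframes:
                sim = float(desc @ live_desc) / (
                    np.linalg.norm(desc) * np.linalg.norm(live_desc))
                if sim >= min_sim and (best is None or sim > best[0]):
                    best = (sim, sid, pose)
        return None if best is None else (best[1], best[2])

    print(relocalize(np.array([0.95, 0.05]), submaps))
    # -> ('floor1', (2.0, 3.0, 0.0))
    ```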

    Robot Mapping and Navigation by Fusing Sensory Information

