
    Urban Drone Navigation: Autoencoder Learning Fusion for Aerodynamics

    Drones are vital for urban emergency search and rescue (SAR), where navigation is challenged by dynamic environments with obstacles such as buildings and wind. This paper presents a method that combines multi-objective reinforcement learning (MORL) with a convolutional autoencoder to improve drone navigation in urban SAR. The approach uses MORL to achieve multiple goals and the autoencoder for cost-effective wind simulations. By utilizing imagery data of urban layouts, the drone can autonomously make navigation decisions, optimize paths, and counteract wind effects without traditional sensors. Tested on a New York City model, this method enhances drone SAR operations in complex urban settings. Comment: 47 pages
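    The autoencoder's role as a cheap wind-simulation surrogate can be illustrated with a minimal sketch. This is not the paper's model: the layer sizes, latent dimension, and linear encoder/decoder here are assumptions chosen only to show the compress-then-reconstruct shape of such a surrogate.

```python
import numpy as np

# Illustrative sketch (not the paper's architecture): a tiny linear
# autoencoder that compresses a flattened 16x16 "urban layout" patch
# into an 8-dim latent code, standing in for the convolutional
# autoencoder used as a cost-effective wind-field surrogate.
rng = np.random.default_rng(0)

def make_autoencoder(n_in=256, n_latent=8):
    W_enc = rng.normal(0, 0.1, (n_latent, n_in))   # encoder weights
    W_dec = rng.normal(0, 0.1, (n_in, n_latent))   # decoder weights
    def encode(x):
        return np.tanh(W_enc @ x)                  # latent wind descriptor
    def decode(z):
        return W_dec @ z                           # reconstructed field
    return encode, decode

encode, decode = make_autoencoder()
patch = rng.normal(size=256)    # flattened layout patch (assumed input)
z = encode(patch)
recon = decode(z)
print(z.shape, recon.shape)     # (8,) (256,)
```

    In a trained surrogate, the decoder would be fit to reproduce simulated wind fields, so the MORL policy can query it far more cheaply than a full aerodynamic simulation.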

    An Adaptive Human Activity-Aided Hand-Held Smartphone-Based Pedestrian Dead Reckoning Positioning System

    Pedestrian dead reckoning (PDR), enabled by smartphones’ embedded inertial sensors, is widely applied as a type of indoor positioning system (IPS). However, traditional PDR faces two challenges to improving its accuracy: a lack of robustness across different PDR-related human activities, and positioning error that accumulates over time. To cope with these issues, we propose a novel adaptive human activity-aided PDR (HAA-PDR) IPS that consists of two main parts, human activity recognition (HAR) and PDR optimization. (1) For HAR, eight different locomotion-related activities are divided into two classes: steady-heading activities (ascending/descending stairs, stationary, normal walking, stationary stepping, and lateral walking) and non-steady-heading activities (door opening and turning). A hierarchical combination of a support vector machine (SVM) and decision tree (DT) is used to recognize steady-heading activities. An autoencoder-based deep neural network (DNN) and a heading range-based method are used to recognize door opening and turning, respectively. The overall HAR accuracy is over 98.44%. (2) For optimization, a process automatically sets the PDR parameters differently for each activity to enhance step counting and step length estimation. Furthermore, a trajectory optimization method mitigates PDR error accumulation by exploiting the non-steady-heading activities: we divide the trajectory into small segments and reconstruct it after targeted optimization of each segment. Our method does not use any a priori knowledge of the building layout, plan, or map. Finally, the mean positioning error of our HAA-PDR in a multilevel building is 1.79 m, a significant improvement in accuracy over a baseline state-of-the-art PDR system.
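    The dead-reckoning core that the activity-specific parameters tune can be sketched in a few lines: each detected step advances the position by an estimated step length along the current heading. The step length and headings below are illustrative values, not the paper's.

```python
import math

# Minimal PDR sketch: one position update per detected step.
# Heading convention (assumed): 0 rad = north (+y), pi/2 = east (+x).
def pdr_update(pos, heading_rad, step_length_m):
    """Advance an (x, y) position by one step along the heading."""
    x, y = pos
    return (x + step_length_m * math.sin(heading_rad),
            y + step_length_m * math.cos(heading_rad))

pos = (0.0, 0.0)
# Walk 3 steps north, then 2 steps east (illustrative trajectory).
for _ in range(3):
    pos = pdr_update(pos, 0.0, 0.7)
for _ in range(2):
    pos = pdr_update(pos, math.pi / 2, 0.7)
print(round(pos[0], 2), round(pos[1], 2))  # 1.4 2.1
```

    HAA-PDR's contribution is to adapt the step-length and step-detection parameters per recognized activity (e.g. stairs vs. lateral walking), since a single fixed step length is a major error source in this update.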

    Expanding Navigation Systems by Integrating It with Advanced Technologies

    Navigation systems provide the optimized route from one location to another. They are mainly assisted by external technologies such as the Global Positioning System (GPS) and other satellite-based radio navigation systems. GPS has many advantages, including high accuracy, near-universal availability, reliability, and self-calibration. However, GPS is limited to outdoor operation. The practice of combining different sources of data to improve the overall outcome is common in many domains: GIS is already integrated with GPS to provide the visualization and realization aspects of a given location. The Internet of Things (IoT) is a growing domain in which embedded sensors are connected to the Internet, and IoT can thereby improve existing navigation systems and expand their capabilities. This chapter proposes a framework based on the integration of GPS, GIS, IoT, and mobile communications to provide a comprehensive and accurate navigation solution. In the next section, we outline the limitations of GPS, and then we describe the integration of GIS, smartphones, and GPS to enable its use in mobile applications. In the rest of this chapter, we introduce various navigation implementations using alternate technologies integrated with GPS or operated as standalone devices.

    A survey of fuzzy logic in wireless localization


    DRL-VO: Learning to Navigate Through Crowded Dynamic Scenes Using Velocity Obstacles

    This paper proposes a novel learning-based control policy with strong generalizability to new environments that enables a mobile robot to navigate autonomously through spaces filled with both static obstacles and dense crowds of pedestrians. The policy uses a unique combination of input data to generate the desired steering angle and forward velocity: a short history of lidar data, kinematic data about nearby pedestrians, and a sub-goal point. The policy is trained in a reinforcement learning setting using a reward function that contains a novel term based on velocity obstacles to guide the robot to actively avoid pedestrians and move towards the goal. Through a series of 3D simulated experiments with up to 55 pedestrians, this control policy achieves a better balance between collision avoidance and speed (i.e., a higher success rate and faster average speed) than state-of-the-art model-based and learning-based policies, and it also generalizes better to different crowd sizes and unseen environments. An extensive series of hardware experiments demonstrates the ability of this policy to work directly in different real-world environments with different crowd sizes with zero retraining. Furthermore, a series of simulated and hardware experiments shows that the control policy also works in highly constrained static environments on a different robot platform without any additional training. Lastly, several important lessons that can be applied to other robot learning systems are summarized. Multimedia demonstrations are available at https://www.youtube.com/watch?v=KneELRT8GzU&list=PLouWbAcP4zIvPgaARrV223lf2eiSR-eSS. Comment: Accepted by IEEE Transactions on Robotics (T-RO), 202
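    The velocity-obstacle concept behind the reward term can be sketched geometrically: a relative velocity leads toward collision if it points inside the cone subtended by the obstacle, whose half-angle is asin((r_robot + r_ped) / distance). The radii, positions, and 2D simplification below are assumptions for illustration, not the paper's reward function.

```python
import math

# Hedged sketch of a basic 2D velocity-obstacle membership test.
def in_velocity_obstacle(p_rel, v_rel, r_sum):
    """True if relative velocity v_rel heads into the collision cone
    toward an obstacle at relative position p_rel, with combined
    radius r_sum."""
    dist = math.hypot(*p_rel)
    if dist <= r_sum:
        return True                       # already overlapping
    speed = math.hypot(*v_rel)
    if speed == 0.0:
        return False                      # not moving relative to obstacle
    # Angle between v_rel and the line of sight to the obstacle.
    cos_angle = (v_rel[0] * p_rel[0] + v_rel[1] * p_rel[1]) / (speed * dist)
    angle = math.acos(max(-1.0, min(1.0, cos_angle)))
    cone_half_angle = math.asin(r_sum / dist)
    return angle < cone_half_angle

# Pedestrian 4 m ahead, combined radius 0.8 m: heading straight at it
# is inside the cone; moving sideways is not.
print(in_velocity_obstacle((4.0, 0.0), (1.0, 0.0), 0.8))   # True
print(in_velocity_obstacle((4.0, 0.0), (0.0, 1.0), 0.8))   # False
```

    A reward term of this kind penalizes actions whose resulting relative velocities fall inside such cones, which is what steers the learned policy toward proactive avoidance rather than last-moment braking.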

    Kimera: from SLAM to Spatial Perception with 3D Dynamic Scene Graphs

    Humans are able to form a complex mental model of the environment they move in. This mental model captures geometric and semantic aspects of the scene, describes the environment at multiple levels of abstraction (e.g., objects, rooms, buildings), and includes static and dynamic entities and their relations (e.g., a person is in a room at a given time). In contrast, current robots' internal representations still provide a partial and fragmented understanding of the environment, either in the form of a sparse or dense set of geometric primitives (e.g., points, lines, planes, voxels) or as a collection of objects. This paper attempts to reduce the gap between robot and human perception by introducing a novel representation, a 3D Dynamic Scene Graph (DSG), that seamlessly captures metric and semantic aspects of a dynamic environment. A DSG is a layered graph where nodes represent spatial concepts at different levels of abstraction, and edges represent spatio-temporal relations among nodes. Our second contribution is Kimera, the first fully automatic method to build a DSG from visual-inertial data. Kimera includes state-of-the-art techniques for visual-inertial SLAM, metric-semantic 3D reconstruction, object localization, human pose and shape estimation, and scene parsing. Our third contribution is a comprehensive evaluation of Kimera on real-life datasets and photo-realistic simulations, including a newly released dataset, uHumans2, which simulates a collection of crowded indoor and outdoor scenes. Our evaluation shows that Kimera achieves state-of-the-art performance in visual-inertial SLAM, estimates an accurate 3D metric-semantic mesh model in real time, and builds a DSG of a complex indoor environment with tens of objects and humans in minutes. Our final contribution shows how to use a DSG for real-time hierarchical semantic path-planning. The core modules in Kimera are open-source. Comment: 34 pages, 25 figures, 9 tables. arXiv admin note: text overlap with arXiv:2002.0628
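    The layered-graph structure of a DSG can be sketched as a small data structure: nodes tagged with an abstraction layer (building, room, object, agent) and edges encoding spatial relations such as containment. The layer names, attributes, and API below are assumptions for this sketch, not Kimera's actual interfaces.

```python
# Illustrative sketch of a layered dynamic scene graph.
class SceneGraph:
    def __init__(self):
        self.nodes = {}    # node_id -> (layer, attribute dict)
        self.edges = []    # (parent_id, child_id, relation)

    def add_node(self, node_id, layer, **attrs):
        self.nodes[node_id] = (layer, attrs)

    def add_edge(self, parent, child, relation="contains"):
        self.edges.append((parent, child, relation))

    def children(self, parent):
        return [c for p, c, _ in self.edges if p == parent]

dsg = SceneGraph()
dsg.add_node("building", layer="building")
dsg.add_node("room_1", layer="room")
dsg.add_node("chair_7", layer="object")
dsg.add_node("person_2", layer="agent", t=12.5)  # dynamic entity, timestamped
dsg.add_edge("building", "room_1")
dsg.add_edge("room_1", "chair_7")
dsg.add_edge("room_1", "person_2", relation="is_in")
print(dsg.children("room_1"))  # ['chair_7', 'person_2']
```

    Hierarchical path-planning exploits exactly this layering: a planner can first reason over the room layer and only then descend to metric detail within the chosen rooms.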

    3D indoor modeling and game theory based navigation for pre and post COVID-19 situation

    The COVID-19 pandemic has greatly affected human behavior, creating a need for individuals to be more cautious about health and safety protocols. People are becoming more aware of their surroundings and the importance of minimizing the risk of exposure to potential sources of infection. This shift in mindset is particularly important in indoor environments, especially hospitals, where there is a greater risk of virus transmission. The implementation of route planning in these areas, aimed at minimizing interaction and exposure, is crucial for positively influencing individual behavior. Accurate maps of buildings help provide location-based services, prepare for emergencies, and manage infrastructural facilities. However, maps are unavailable for most facilities, and there are no proven techniques to categorize features within indoor areas to provide location-based services. During a pandemic like COVID-19, minimizing direct contact among people is one of the significant preventive steps, and hospitals are the main stakeholders in managing such situations. This study presents a novel method to create an adaptive 3D model of an indoor space to be used for localization and routing purposes. The proposed method infuses a LiDAR-based data-driven methodology with a Quantum Geographic Information System (QGIS) model-driven process using game theory. The game theory component determines object localization and the optimal path for COVID-19 patients in a real-time scenario using Nash equilibrium. Using the proposed method, comprehensive simulations and model experiments were performed in QGIS to identify an optimized route. The Dijkstra algorithm is used to determine the path assessment score after obtaining several path plans via dynamic programming. Additionally, game theory generates a path ordering based on custom scenarios and user preferences for the input path. In comparison to other approaches, the suggested method can minimize time and avoid congestion. It is demonstrated that the suggested technique satisfies the actual technical requirements in real time. As we look forward to the post-COVID era, the tactics and insights gained during the pandemic hold significant value. The techniques used to improve indoor navigation and reduce interpersonal contact within healthcare facilities can be applied to maintain a continued emphasis on safety, hygiene, and effective space management in the long term. The use of three-dimensional (3D) modeling and optimization methodologies in the long-term planning and design of indoor spaces promotes resilience and flexibility, encouraging the adoption of sustainable and safe practices that extend beyond the current pandemic.
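    The path-scoring step built on Dijkstra's algorithm can be sketched compactly: abstract the hospital floor as a weighted graph of corridor nodes, where edge weights stand in for an exposure-aware cost, and extract the lowest-cost route. The graph, node names, and weights below are illustrative assumptions, not data from the study.

```python
import heapq

# Compact Dijkstra sketch over a dict-of-dicts weighted graph.
def dijkstra(graph, start, goal):
    """Return (cost, path) for the lowest-cost route, or (inf, [])."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical floor graph: weights model congestion/exposure cost.
corridors = {
    "entrance": {"lobby": 1.0},
    "lobby": {"ward_A": 4.0, "corridor": 1.0},
    "corridor": {"ward_A": 1.5},   # longer but less crowded
}
print(dijkstra(corridors, "entrance", "ward_A"))
# (3.5, ['entrance', 'lobby', 'corridor', 'ward_A'])
```

    In the study's framing, several such candidate paths are scored and then ordered by the game-theoretic component according to scenario and user preference.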

    A Wearable Indoor Navigation System for Blind and Visually Impaired Individuals

    Indoor positioning and navigation for blind and visually impaired individuals has become an active field of research. The development of a reliable positioning and navigation system will reduce the suffering of people with visual disabilities, help them live more independently, and promote their employment opportunities. In this work, a coarse-to-fine multi-resolution model is proposed for indoor navigation in hallway environments based on the use of a wearable computer called the eButton. This self-constructed device contains multiple sensors which are used for indoor positioning and localization at three layers of resolution: a global positioning system (GPS) layer for building identification; a Wi-Fi/barometer layer for rough position localization; and a digital camera/motion sensor layer for precise localization. Within this multi-resolution model, a new theoretical framework is developed which uses the change in atmospheric pressure to determine the floor number in a multistory building. The digital camera and motion sensors within the eButton acquire both pictorial and motion data as a person with normal vision walks along a hallway to establish a database. Precise indoor positioning and localization information is then provided to the visually impaired individual based on a Kalman filter fusion algorithm and an automatic matching algorithm between the acquired images and those in the pre-established database. Motion calculations based on data from the motion sensors are used to refine the localization result. Experiments were conducted to evaluate the performance of the algorithms. Our results show that the new device and algorithms can precisely determine the floor level and indoor location along hallways in multistory buildings, providing a powerful and unobtrusive navigational tool for blind and visually impaired individuals.
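    The barometric floor-detection idea in the Wi-Fi/barometer layer can be sketched numerically: near the surface, atmospheric pressure drops by roughly 0.12 hPa per metre of altitude, so a pressure change relative to a ground-floor reference maps to a height and hence a floor number. The pressure gradient, ground-floor reference, and 3 m storey height are assumptions for this sketch, not the paper's calibrated values.

```python
# Sketch: estimate floor number from a barometric pressure reading.
HPA_PER_METRE = 0.12     # approximate near-surface pressure gradient (assumed)
FLOOR_HEIGHT_M = 3.0     # assumed storey height

def floor_from_pressure(p_hpa, p_ground_hpa):
    """Map a pressure reading to a floor index relative to ground level."""
    height_m = (p_ground_hpa - p_hpa) / HPA_PER_METRE
    return round(height_m / FLOOR_HEIGHT_M)

# 1.08 hPa below the ground-floor reference ~ 9 m up ~ floor 3.
print(floor_from_pressure(1012.17, 1013.25))  # 3
```

    In practice such a scheme needs a drifting reference (weather changes pressure by several hPa), which is one reason the eButton fuses the barometer with Wi-Fi rather than using pressure alone.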