
    3D Perception Based Lifelong Navigation of Service Robots in Dynamic Environments

    Lifelong navigation of mobile robots is the ability to reliably operate over extended periods of time in dynamically changing environments. Historically, computational capacity and sensor capability have constrained the richness of the internal representation of the environment that a mobile robot could use for navigation tasks. With affordable contemporary sensing technology that provides rich 3D information of the environment, and with increased computational power, we can make increasing use of semantic environmental information in navigation-related tasks. A navigation system has many subsystems, such as perception, localization, and path planning, that must operate in real time while competing for computational resources. The main thesis proposed in this work is that we can utilize 3D information from the environment in our systems to increase navigational robustness without making trade-offs in any of the real-time subsystems. To support these claims, this dissertation presents robust, real-world, 3D-perception-based navigation systems in the domains of indoor doorway detection and traversal, sidewalk-level outdoor navigation in urban environments, and global localization in large-scale indoor warehouse environments. The discussion of these systems includes methods of 3D point-cloud-based object detection that find the objects of semantic interest for the given navigation tasks, as well as the use of 3D information in the navigation systems for purposes such as localization and dynamic obstacle avoidance. Experimental results for each of these applications demonstrate the effectiveness of the techniques for robust, long-term autonomous operation.
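
    As a concrete illustration of the kind of point-cloud processing such systems rely on, the following is a minimal sketch of floor-plane removal and Euclidean clustering using the Open3D library; the thresholds and the pipeline itself are illustrative assumptions, not the dissertation's actual method.

```python
# Hedged sketch: segment out the dominant plane (assumed to be the floor)
# with RANSAC, then group the remaining points into candidate objects with
# DBSCAN. All thresholds are illustrative assumptions.
import numpy as np
import open3d as o3d

def detect_obstacle_clusters(pcd: o3d.geometry.PointCloud):
    # Fit the dominant plane (assumed to be the floor).
    _, floor_idx = pcd.segment_plane(
        distance_threshold=0.03, ransac_n=3, num_iterations=1000)
    # Keep everything that is not on the floor plane.
    obstacles = pcd.select_by_index(floor_idx, invert=True)
    # Group the remaining points into candidate objects.
    labels = np.array(obstacles.cluster_dbscan(eps=0.10, min_points=30))
    n_clusters = labels.max() + 1 if labels.size else 0
    return [np.asarray(obstacles.points)[labels == k]
            for k in range(n_clusters)]  # one (N, 3) array per object
```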

    Sense-Assess-eXplain (SAX): Building Trust in Autonomous Vehicles in Challenging Real-World Driving Scenarios

    This paper discusses ongoing work in demonstrating research in mobile autonomy in challenging driving scenarios. In our approach, we address fundamental technical issues to overcome critical barriers to assurance and regulation for large-scale deployments of autonomous systems. To this end, we present how we build robots that (1) can robustly sense and interpret their environment using traditional as well as unconventional sensors; (2) can assess their own capabilities; and (3), vitally for the purposes of assurance and trust, can provide causal explanations of their interpretations and assessments. As it is essential that robots are safe and trusted, we design, develop, and demonstrate fundamental technologies in real-world applications to overcome critical barriers that impede the current deployment of robots in economically and socially important areas. Finally, we describe ongoing work on the collection of an unusual, rare, and highly valuable dataset. Comment: accepted for publication at the IEEE Intelligent Vehicles Symposium (IV), Workshop on Ensuring and Validating Safety for Automated Vehicles (EVSAV), 2020; project URL: https://ori.ox.ac.uk/projects/sense-assess-explain-sa

    Adaptive Multi-sensor Perception for Driving Automation in Outdoor Contexts

    In this research, adaptive perception for driving automation is discussed so as to enable a vehicle to automatically detect driveable areas and obstacles in the scene. It is designed especially for outdoor contexts where conventional perception systems, which rely on a priori knowledge of the terrain's geometric properties, appearance properties, or both, are prone to fail due to the variability of terrain properties and environmental conditions. In contrast, the proposed framework uses a self-learning approach to build a model of the ground class that is continuously adjusted online to reflect the latest ground appearance. The system is also highly flexible, as it can work with a single sensor modality or a multi-sensor combination. In the context of this research, different embodiments have been demonstrated using range data coming from either a radar or a stereo camera, and adopting self-supervised strategies in which monocular vision is automatically trained by radar or stereo vision. A comprehensive set of experimental results, obtained with different ground vehicles operating in the field, is presented to validate and assess the performance of the system.
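
    The self-supervised loop the abstract describes can be pictured with a short sketch: range data supplies ground/non-ground labels, and an online classifier over crude appearance statistics stands in for the ground model, updated incrementally so it tracks the latest terrain appearance. The feature choice and the scikit-learn learner are assumptions for illustration, not the paper's actual components.

```python
# Hedged sketch of self-supervised ground-appearance learning: labels come
# from range data (stereo/radar traversability), features from image color
# statistics. The learner is updated online so the model adapts over time.
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss="log_loss")  # online logistic regression

def patch_features(patch: np.ndarray) -> np.ndarray:
    # Mean and std of each color channel: a crude appearance descriptor.
    return np.concatenate([patch.mean(axis=(0, 1)), patch.std(axis=(0, 1))])

def update_ground_model(patches, range_labels):
    # range_labels: 1 = flat/traversable according to range data, else 0.
    X = np.stack([patch_features(p) for p in patches])
    clf.partial_fit(X, range_labels, classes=[0, 1])

def is_ground(patch: np.ndarray) -> bool:
    # Classify a new image patch using appearance alone.
    return bool(clf.predict(patch_features(patch)[None, :])[0])
```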

    Vision-based Learning for Drones: A Survey

    Drones as advanced cyber-physical systems are undergoing a transformative shift with the advent of vision-based learning, a field that is rapidly gaining prominence due to its profound impact on drone autonomy and functionality. Unlike existing task-specific surveys, this review offers a comprehensive overview of vision-based learning in drones, emphasizing its pivotal role in enhancing their operational capabilities under various scenarios. We start by elucidating the fundamental principles of vision-based learning, highlighting how it significantly improves drones' visual perception and decision-making processes. We then categorize vision-based control methods into indirect, semi-direct, and end-to-end approaches from the perception-control perspective. We further explore various applications of vision-based drones with learning capabilities, ranging from single-agent systems to more complex multi-agent and heterogeneous system scenarios, and underscore the challenges and innovations characterizing each area. Finally, we explore open questions and potential solutions, paving the way for ongoing research and development in this dynamic and rapidly evolving field. With the growth of large language models (LLMs) and embodied intelligence, vision-based learning for drones offers a promising but challenging road towards artificial general intelligence (AGI) in the 3D physical world.
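
    To make the survey's "end-to-end" category concrete, here is a minimal sketch of a network that maps raw camera pixels straight to a control command, with no intermediate state estimate; the architecture and the four-element command vector are illustrative assumptions, not a model from the survey.

```python
# Hedged sketch of an end-to-end vision-based control policy:
# image in, velocity command out, no explicit perception pipeline.
import torch
import torch.nn as nn

class EndToEndPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, 4)  # assumed command: vx, vy, vz, yaw rate

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(img))

# One forward pass on a dummy camera frame -> a (1, 4) command tensor.
cmd = EndToEndPolicy()(torch.rand(1, 3, 96, 96))
```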

    Shedding light on GIS: A 3D immersive approach to urban lightscape integration into GIS

    Geographic Information Systems (GIS) have the ability to map, model, and analyze real-world data and phenomena, and yet visibility and lighting conditions are rarely considered or researched in Geographic Information Science (GISci). Lighting technologies have been created and implemented to overcome the darkness of night and other issues of visibility, and nowhere is that more evident than in urban areas. Though not researched heavily in GIS, it is now possible to model and analyze lighting of the built environment using GIS, 3D modeling, and rendering software. This thesis explores the nighttime urban lightscape, its spatial aspects and contribution to place, as well as its incorporation into GIS and GISci. To capture lighting and its multi-dimensional properties, a 3D model was created of the built environment of Morgantown, WV, USA, including the West Virginia University (WVU) campuses and their exterior lighting. The model was completed through the coupling of ESRI's CityEngine and E-on software's LumenRT 4 Geodesign plug-in. Lighting data was obtained through the WVU Department of Construction and Design in the form of a CAD map. After geo-referencing the CAD-based exterior lighting data, a raster lighting analysis of WVU's Evansdale Campus was produced to identify under-lit areas. These areas were then redesigned using a lighting design tool that incorporated 3D modeling, GIS, and procedural rule-based modeling. An original workflow was designed consisting of ArcGIS, SketchUp, CityEngine, and LumenRT 4 Geodesign. Lighting scenarios were subsequently viewed and experienced through immersive technologies.
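
    The core of the raster lighting analysis step can be sketched in a few lines: threshold an illuminance raster to flag under-lit cells. The 10-lux cutoff and the random stand-in raster are assumptions for illustration; the thesis's actual data and threshold are not specified here.

```python
# Hedged sketch: flag under-lit cells in an illuminance raster.
import numpy as np

def underlit_mask(illuminance: np.ndarray, min_lux: float = 10.0) -> np.ndarray:
    """Return a boolean mask of raster cells below the minimum light level."""
    return illuminance < min_lux

lux = np.random.uniform(0.0, 50.0, size=(200, 200))  # stand-in for real data
mask = underlit_mask(lux)
print(f"{mask.mean():.0%} of cells are under-lit at this cutoff")
```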

    Urban Drone Navigation: Autoencoder Learning Fusion for Aerodynamics

    Drones are vital for urban emergency search and rescue (SAR) due to the challenge of navigating dynamic environments with obstacles such as buildings and disturbances such as wind. This paper presents a method that combines multi-objective reinforcement learning (MORL) with a convolutional autoencoder to improve drone navigation in urban SAR. The approach uses MORL to achieve multiple goals and the autoencoder for cost-effective wind simulations. By utilizing imagery data of urban layouts, the drone can autonomously make navigation decisions, optimize paths, and counteract wind effects without traditional sensors. Tested on a New York City model, this method enhances drone SAR operations in complex urban settings. Comment: 47 pages
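
    The autoencoder idea can be sketched briefly: compress a 2D wind field over the urban layout into a small latent code that a policy can consume instead of running a full simulation. The layer sizes, 64x64 resolution, and two-channel (u, v) wind representation are assumptions for illustration, not the paper's architecture.

```python
# Hedged sketch of a convolutional autoencoder for wind fields: the
# encoder compresses a (u, v) wind field to a latent code; the decoder
# reconstructs it, so reconstruction error can drive training.
import torch
import torch.nn as nn

class WindAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(2, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(), nn.Linear(32 * 16 * 16, latent_dim))
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 2, 4, stride=2, padding=1))

    def forward(self, wind: torch.Tensor):
        z = self.enc(wind)           # wind: (B, 2, 64, 64)
        return self.dec(z), z        # reconstruction and latent code

recon, z = WindAutoencoder()(torch.rand(1, 2, 64, 64))
```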