3D Perception Based Lifelong Navigation of Service Robots in Dynamic Environments
Lifelong navigation of mobile robots is the ability to operate reliably over extended periods of time in dynamically changing environments. Historically, computational capacity and sensor capability have constrained the richness of the internal representation of the environment that a mobile robot could use for navigation tasks. With affordable contemporary sensing technology that provides rich 3D information of the environment, and with increased computational power, we can increasingly make use of semantic environmental information in navigation-related tasks.

A navigation system has many subsystems that must operate in real time while competing for computational resources, such as the perception, localization, and path planning systems. The main thesis proposed in this work is that we can utilize 3D information from the environment in our systems to increase navigational robustness without making trade-offs in any of the real-time subsystems. To support these claims, this dissertation presents robust, real-world 3D perception-based navigation systems in the domains of indoor doorway detection and traversal, sidewalk-level outdoor navigation in urban environments, and global localization in large-scale indoor warehouse environments.

The discussion of these systems includes methods of 3D point cloud based object detection that find the objects of semantic interest for the given navigation tasks, as well as the use of 3D information in the navigation systems themselves for purposes such as localization and dynamic obstacle avoidance. Experimental results for each of these applications demonstrate the effectiveness of the techniques for robust long-term autonomous operation.
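To make the point cloud based detection step concrete, here is a minimal sketch of voxel-based obstacle extraction from a 3D point cloud. The function name, thresholds, and the crude height-based ground removal are illustrative assumptions, not the dissertation's pipeline.

```python
# Minimal sketch: flag obstacle voxels in a 3D point cloud by occupancy
# count. All names and thresholds are illustrative assumptions.
import numpy as np

def detect_obstacle_voxels(points, voxel_size=0.1, ground_z=0.05, min_hits=3):
    """Return centers of voxels containing enough above-ground points.

    points : (N, 3) array of x, y, z in the robot frame (meters).
    """
    above = points[points[:, 2] > ground_z]        # crude ground removal
    if above.size == 0:
        return np.empty((0, 3))
    idx = np.floor(above / voxel_size).astype(np.int64)   # voxel indices
    uniq, counts = np.unique(idx, axis=0, return_counts=True)
    occupied = uniq[counts >= min_hits]            # suppress sparse noise
    return (occupied + 0.5) * voxel_size           # voxel centers

# Example: a synthetic cloud with a box-shaped obstacle ahead of the robot
rng = np.random.default_rng(0)
box = rng.uniform([1.0, -0.2, 0.1], [1.3, 0.2, 0.8], size=(500, 3))
floor = rng.uniform([0.0, -2.0, 0.0], [4.0, 2.0, 0.03], size=(2000, 3))
cloud = np.vstack([box, floor])
print(detect_obstacle_voxels(cloud).shape)         # (n_occupied_voxels, 3)
```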
Sense-Assess-eXplain (SAX): Building Trust in Autonomous Vehicles in Challenging Real-World Driving Scenarios
This paper discusses ongoing work in demonstrating research in mobile autonomy in challenging driving scenarios. In our approach, we address fundamental technical issues to overcome critical barriers to assurance and regulation for large-scale deployments of autonomous systems. To this end, we present how we build robots that (1) can robustly sense and interpret their environment using traditional as well as unconventional sensors; (2) can assess their own capabilities; and (3), vitally for the purposes of assurance and trust, can provide causal explanations of their interpretations and assessments. As it is essential that robots are safe and trusted, we design, develop, and demonstrate fundamental technologies in real-world applications to overcome critical barriers which impede the current deployment of robots in economically and socially important areas. Finally, we describe ongoing work in the collection of an unusual, rare, and highly valuable dataset.

Comment: accepted for publication at the IEEE Intelligent Vehicles Symposium (IV), Workshop on Ensuring and Validating Safety for Automated Vehicles (EVSAV), 2020; project URL: https://ori.ox.ac.uk/projects/sense-assess-explain-sa
Real-time spatial modeling to detect and track resources on construction sites
For more than 10 years, the U.S. construction industry has experienced over 1,000 fatalities annually. Many fatalities might have been prevented had the individuals and equipment involved been more aware of and alert to the physical state of the environment around them. Awareness may be improved by automatic 3D (three-dimensional) sensing and modeling of the job site environment in real time. Existing 3D modeling approaches based on range scanning techniques are capable of modeling static objects only, and thus cannot model in real time the dynamic objects in an environment comprised of moving humans, equipment, and materials. Emerging prototype 3D video range cameras offer an alternative by facilitating affordable, wide field of view, automated static and dynamic object detection and tracking at frame rates better than 1 Hz (real time).

This dissertation presents empirical work and a methodology to rapidly create a spatial model of construction sites and, in particular, to detect, model, and track the position, dimension, direction, and velocity of static and moving project resources in real time, based on range data obtained from a three-dimensional video range camera in a static or moving position. Existing construction site 3D modeling approaches based on optical range sensing technologies (laser scanners, rangefinders, etc.) and 3D modeling approaches (dense, sparse, etc.) that offered potential solutions for this research are reviewed. The choice of an emerging sensing tool and preliminary experiments with this prototype sensing technology are discussed. These findings led to the development of a range data processing algorithm based on three-dimensional occupancy grids, which is demonstrated in detail. Testing and validation of the proposed algorithms were conducted to quantify the performance of the sensor and algorithm through extensive experimentation involving static and moving objects. Experiments in indoor laboratory and outdoor construction environments were conducted with construction resources such as humans, equipment, materials, and structures to verify the accuracy of the occupancy grid modeling approach. Results show that modeling objects and measuring their position, dimension, direction, and speed achieved an accuracy level compatible with the requirements of active safety features for construction. Results demonstrate that video-rate 3D data acquisition and analysis of construction environments can support effective detection, tracking, and convex hull modeling of objects. Exploiting rapidly generated three-dimensional models for improved visualization, communications, and process control has inherent value, broad application, and potential impact, e.g., as-built vs. as-planned comparison, condition assessment, maintenance, operations, and construction activities control. In combination with effective management practices, this sensing approach has the potential to help equipment operators avoid incidents that result in human injury, death, or collateral damage on construction sites.

Civil, Architectural, and Environmental Engineering
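As a rough illustration of the three-dimensional occupancy-grid processing described above, the sketch below accumulates log-odds evidence per voxel from range returns and decays stale evidence so that moving resources free previously occupied cells. The class, grid extents, resolution, and update weights are illustrative assumptions, not the dissertation's code.

```python
# Minimal 3D occupancy grid with per-frame evidence decay for dynamic
# scenes. All parameters are illustrative assumptions.
import numpy as np

class OccupancyGrid3D:
    def __init__(self, shape=(80, 80, 40), resolution=0.2, l_hit=0.85,
                 origin=(0.0, 0.0, 0.0)):
        self.log_odds = np.zeros(shape)     # log-odds of occupancy per voxel
        self.res = resolution
        self.origin = np.asarray(origin)
        self.l_hit = l_hit                  # evidence added per range return

    def integrate(self, hits):
        """hits : (N, 3) world-frame range returns from one sensor frame."""
        idx = np.floor((hits - self.origin) / self.res).astype(int)
        ok = np.all((idx >= 0) & (idx < self.log_odds.shape), axis=1)
        for i, j, k in idx[ok]:
            self.log_odds[i, j, k] += self.l_hit

    def decay(self, rate=0.9):
        # fade stale evidence so moving equipment frees previously hit cells
        self.log_odds *= rate

    def occupied(self, p=0.7):
        # voxel indices whose occupancy probability exceeds p
        return np.argwhere(self.log_odds > np.log(p / (1 - p)))

grid = OccupancyGrid3D()
worker = np.array([[4.0, 4.0, 1.0], [4.1, 4.0, 1.1], [4.0, 4.1, 0.9]])
for _ in range(5):                          # several frames of returns
    grid.integrate(worker)
    grid.decay()
print(grid.occupied())                      # indices near voxel (20, 20, 5)
```

The decay step is one simple way to keep the grid current in a dynamic environment; a full system would also ray-trace free space along each range measurement.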
Adaptive Multi-sensor Perception for Driving Automation in Outdoor Contexts
In this research, adaptive perception for driving automation is discussed, with the aim of enabling a vehicle to automatically detect driveable areas and obstacles in the scene. It is especially designed for outdoor contexts where conventional perception systems that rely on a priori knowledge of the terrain's geometric properties, appearance properties, or both are prone to fail, due to the variability of terrain properties and environmental conditions. In contrast, the proposed framework uses a self-learning approach to build a model of the ground class that is continuously adjusted online to reflect the latest ground appearance. The system also features high flexibility, as it can work with a single sensor modality or a multi-sensor combination. In the context of this research, different embodiments have been demonstrated using range data coming from either a radar or a stereo camera, and adopting self-supervised strategies where monocular vision is automatically trained by radar or stereo vision. A comprehensive set of experimental results, obtained with different ground vehicles operating in the field, is presented to validate and assess the performance of the system.
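A minimal sketch of the self-supervised strategy described above: pixels labeled as ground by range geometry (stereo or radar) train an appearance model that is then applied to the whole monocular image. The single-Gaussian color model and all thresholds are illustrative assumptions, not the paper's method.

```python
# Self-supervised ground appearance model: range-labeled pixels train it,
# monocular pixels are classified against it. Illustrative sketch only.
import numpy as np

class OnlineGroundModel:
    def __init__(self, alpha=0.05):
        self.mean = None       # running mean of ground color (RGB)
        self.cov = None        # running covariance of ground color
        self.alpha = alpha     # forgetting factor for online adaptation

    def update(self, ground_pixels):
        """ground_pixels : (N, 3) colors labeled 'ground' by range geometry."""
        m = ground_pixels.mean(axis=0)
        c = np.cov(ground_pixels, rowvar=False) + 1e-6 * np.eye(3)
        if self.mean is None:
            self.mean, self.cov = m, c
        else:  # blend toward the latest ground appearance
            self.mean = (1 - self.alpha) * self.mean + self.alpha * m
            self.cov = (1 - self.alpha) * self.cov + self.alpha * c

    def classify(self, image, thresh=9.0):
        """Return a boolean ground mask for an (H, W, 3) image."""
        d = image.reshape(-1, 3) - self.mean
        m2 = np.einsum('ni,ij,nj->n', d, np.linalg.inv(self.cov), d)
        return (m2 < thresh).reshape(image.shape[:2])  # Mahalanobis gate

rng = np.random.default_rng(0)
model = OnlineGroundModel()
model.update(rng.normal([90, 85, 70], 8, size=(500, 3)))  # range-labeled
img = rng.normal([90, 85, 70], 8, size=(60, 80, 3))       # mostly ground
print(model.classify(img).mean())      # fraction classified as ground
```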
Vision-based Learning for Drones: A Survey
Drones, as advanced cyber-physical systems, are undergoing a transformative shift with the advent of vision-based learning, a field that is rapidly gaining prominence due to its profound impact on drone autonomy and functionality. Different from existing task-specific surveys, this review offers a comprehensive overview of vision-based learning in drones, emphasizing its pivotal role in enhancing their operational capabilities under various scenarios. We start by elucidating the fundamental principles of vision-based learning, highlighting how it significantly improves drones' visual perception and decision-making processes. We then categorize vision-based control methods into indirect, semi-direct, and end-to-end approaches from the perception-control perspective. We further explore various applications of vision-based drones with learning capabilities, ranging from single-agent systems to more complex multi-agent and heterogeneous system scenarios, and underscore the challenges and innovations characterizing each area. Finally, we explore open questions and potential solutions, paving the way for ongoing research and development in this dynamic and rapidly evolving field. With the growth of large language models (LLMs) and embodied intelligence, vision-based learning for drones provides a promising but challenging road towards artificial general intelligence (AGI) in the 3D physical world.
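As a rough illustration of the survey's "end-to-end" category, the sketch below maps a camera image directly to a velocity command with no explicit intermediate state estimate. The architecture and output convention are assumptions for illustration, not a method from the survey.

```python
# End-to-end vision-based control sketch: one network, image in, command
# out. Sizes and the 4-DoF command layout are illustrative assumptions.
import torch
import torch.nn as nn

class EndToEndPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 4),              # vx, vy, vz, yaw-rate command
        )

    def forward(self, image):              # image: (B, 3, H, W) in [0, 1]
        return self.net(image)

cmd = EndToEndPolicy()(torch.rand(1, 3, 120, 160))
print(cmd.shape)                           # torch.Size([1, 4])
```

An indirect approach, by contrast, would first estimate explicit state (pose, obstacles) and hand it to a separate planner and controller; semi-direct methods sit in between.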
Shedding light on GIS: A 3D immersive approach to urban lightscape integration into GIS
Geographic Information Systems (GIS) have the ability to map, model, and analyze real-world data and phenomena, and yet visibility and lighting conditions are rarely considered or researched in Geographic Information Science (GISci). Lighting technologies have been created and implemented to overcome the darkness of night and other issues of visibility, and in no place is that more evident than urban areas. Though not researched heavily in GIS, it is now possible to model and analyze lighting of the built environment using GIS, 3D modeling, and rendering software. This thesis explores the nighttime urban lightscape, its spatial aspects and contribution to place, as well as its incorporation into GIS and GISci. To capture lighting and its multi-dimensional properties, a 3D model was created of the built environment of Morgantown, WV, USA, including the West Virginia University (WVU) campuses and their exterior lighting. The model was completed through the coupling of ESRI's CityEngine and E-on software's LumenRT 4 Geodesign plug-in. Lighting data was obtained through the WVU Department of Construction and Design in the form of a CAD map. After geo-referencing the CAD-based exterior lighting data, a raster lighting analysis of WVU's Evansdale Campus was produced to identify under-lit areas. These areas were then redesigned using a lighting design tool that incorporated 3D modeling, GIS, and procedural rule-based modeling. An original workflow was designed consisting of ArcGIS, SketchUp, CityEngine, and LumenRT 4 Geodesign. Lighting scenarios were subsequently viewed and experienced through immersive technologies.
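To illustrate the kind of raster analysis mentioned above, here is a minimal sketch that flags under-lit cells in an illuminance raster. The grid values, the lux unit, and the 5-lux threshold are illustrative assumptions, not the thesis's data.

```python
# Threshold an illuminance raster to find under-lit cells. Values and
# threshold are illustrative assumptions.
import numpy as np

def underlit_mask(illuminance, min_lux=5.0):
    """illuminance : (rows, cols) raster of estimated lux per ground cell."""
    return illuminance < min_lux

# Example: a synthetic campus raster with one poorly lit corner
raster = np.full((100, 100), 12.0)
raster[70:, 70:] = 2.0
mask = underlit_mask(raster)
print(mask.sum(), "under-lit cells")       # 900 under-lit cells
```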
Urban Drone Navigation: Autoencoder Learning Fusion for Aerodynamics
Drones are vital for urban emergency search and rescue (SAR) due to the challenges of navigating dynamic environments with obstacles like buildings and wind. This paper presents a method that combines multi-objective reinforcement learning (MORL) with a convolutional autoencoder to improve drone navigation in urban SAR. The approach uses MORL to achieve multiple goals and the autoencoder for cost-effective wind simulations. By utilizing imagery data of urban layouts, the drone can autonomously make navigation decisions, optimize paths, and counteract wind effects without traditional sensors. Tested on a New York City model, this method enhances drone SAR operations in complex urban settings.

Comment: 47 pages
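A brief sketch of the two ingredients named above: a convolutional autoencoder over urban-layout imagery and a scalarized multi-objective reward. The network sizes, reward terms, and weights are illustrative assumptions, not the paper's implementation.

```python
# Convolutional autoencoder over layout images plus a weighted-sum MORL
# reward. All sizes, terms, and weights are illustrative assumptions.
import torch
import torch.nn as nn

class LayoutAutoencoder(nn.Module):
    def __init__(self, latent=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 8, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(8, 16, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(), nn.Linear(16 * 16 * 16, latent),
        )
        self.dec = nn.Sequential(
            nn.Linear(latent, 16 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (16, 16, 16)),
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (B, 1, 64, 64) layout image
        z = self.enc(x)                    # compact code, usable as a cheap
        return self.dec(z), z              # surrogate for wind simulation

def scalarized_reward(goal_progress, wind_penalty, collision,
                      w=(1.0, 0.5, 5.0)):
    # one common MORL recipe: a weighted sum over the per-objective terms
    return w[0] * goal_progress - w[1] * wind_penalty - w[2] * collision

model = LayoutAutoencoder()
recon, z = model(torch.rand(2, 1, 64, 64))
print(recon.shape, z.shape)                # (2, 1, 64, 64) and (2, 32)
```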