Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map) and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial to those who are users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
Autonomous RC Car Platform
This project explores building an autonomous research robot on a 1/10 scale RC car platform. The goal of the project was to build an easy-to-use system that allowed for the exploration of techniques such as localization, object detection, mapping, and more. The completed robot consists of a self-contained, battery-powered RC car that uses a camera, lidar, inertial measurement unit, and other sensors to observe the environment. Completed research explored pose estimation based on combining dead reckoning, inertial measurement unit readings, and visual odometry in an Extended Kalman Filter. The results of this project included the RC car itself and a build guide for replicating the process for future students.
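The pose-estimation approach described above, fusing dead reckoning with pose measurements in an Extended Kalman Filter, can be sketched as follows. This is a minimal illustrative example, not the project's implementation: the planar unicycle motion model, the direct-pose measurement model, and all noise values are assumptions chosen for clarity.

```python
import numpy as np

# Minimal planar EKF sketch: state x = [px, py, theta].
# Dead reckoning (commanded velocity) drives the prediction step;
# a pose fix (e.g. from visual odometry) drives the update step.
# Noise covariances Q and R below are illustrative placeholders.

def ekf_predict(x, P, v, w, dt, Q):
    """Propagate the state with a unicycle motion model."""
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + w * dt])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0,  1]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    """Fuse a direct pose measurement z = [px, py, theta]."""
    H = np.eye(3)                    # measurement model: z = x + noise
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P

x = np.zeros(3)          # start at the origin, facing +x
P = np.eye(3) * 0.1
Q = np.eye(3) * 0.01
R = np.eye(3) * 0.05

x, P = ekf_predict(x, P, v=1.0, w=0.0, dt=1.0, Q=Q)        # drive forward 1 m
x, P = ekf_update(x, P, z=np.array([1.1, 0.0, 0.0]), R=R)  # fuse a pose fix
```

The update pulls the estimate partway toward the measurement (weighted by the relative covariances) and shrinks the state uncertainty, which is the core of the fusion the abstract describes.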
Deep Learning-Based Robotic Perception for Adaptive Facility Disinfection
Hospitals, schools, airports, and other environments built for mass gatherings can become hot spots for microbial pathogen colonization, transmission, and exposure, greatly accelerating the spread of infectious diseases across communities, cities, nations, and the world. Outbreaks of infectious diseases impose huge burdens on our society. Mitigating the spread of infectious pathogens within mass-gathering facilities requires routine cleaning and disinfection, which are primarily performed by cleaning staff under current practice. However, manual disinfection is limited in terms of both effectiveness and efficiency, as it is labor-intensive, time-consuming, and health-undermining. While existing studies have developed a variety of robotic systems for disinfecting contaminated surfaces, those systems are not adequate for intelligent, precise, and environmentally adaptive disinfection. They are also difficult to deploy in mass-gathering infrastructure facilities, given the high volume of occupants. Therefore, there is a critical need to develop an adaptive robot system capable of complete and efficient indoor disinfection.
The overarching goal of this research is to develop an artificial intelligence (AI)-enabled robotic system that adapts to ambient environments and social contexts for precise and efficient disinfection. This would maintain environmental hygiene and health, reduce unnecessary labor costs for cleaning, and mitigate opportunity costs incurred from infections. To these ends, this dissertation first develops a multi-classifier decision fusion method, which integrates scene graph and visual information, in order to recognize patterns in human activity in infrastructure facilities. Next, a deep-learning-based method is proposed for detecting and classifying indoor objects, and a new mechanism is developed to map detected objects in 3D maps. A novel framework is then developed to detect and segment object affordances and to project them into a 3D semantic map for precise disinfection. Subsequently, a novel deep-learning network, which integrates multi-scale and multi-level features, and an encoder network are developed to recognize the materials of surfaces requiring disinfection. Finally, a novel computational method is developed to link the recognition of object surface information to robot disinfection actions with optimal disinfection parameters.
Robot Mapping with Real-Time Incremental Localization Using Expectation Maximization
This research effort explores and develops a real-time sonar-based robot mapping and localization algorithm that provides pose correction within the context of a single room, to be combined with pre-existing global localization techniques and thus produce a single, well-formed map of an unknown environment. Our approach implements an expectation maximization algorithm based on the notion of the alpha-beta functions of a Hidden Markov Model. It performs a forward alpha calculation as an integral component of the occupancy grid mapping procedure, using local maps in place of a single global map, and a backward beta calculation that considers only the prior local map, a limitation that enables real-time processing. Real-time localization is an extremely difficult task that continues to be the focus of much research in the field, and most advances in localization have been achieved in an off-line context. The results of our research into and implementation of real-time localization showed limited success, generating improved maps in a number of cases but not all: a trade-off between real-time and off-line processing. However, we believe there is ample room for extension to our approach that promises a more consistently successful real-time localization algorithm.
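The alpha-beta machinery referenced above is the standard forward-backward smoothing of an HMM. The toy sketch below shows it for a discrete 1-D robot pose; the transition and sensor models are illustrative assumptions, not the paper's sonar models, and the paper's local-map restriction on the beta pass is not reproduced here.

```python
import numpy as np

# Toy forward-backward (alpha-beta) smoothing over a discrete
# 1-D robot pose, in the spirit of the HMM view of localization.

def forward_backward(T, E, pi, obs):
    """T: state transition matrix, E[s, o]: P(obs o | state s),
    pi: prior over states, obs: observation sequence.
    Returns smoothed posteriors gamma[t, s] = P(state s at t | all obs)."""
    n, L = T.shape[0], len(obs)
    alpha = np.zeros((L, n))
    beta = np.ones((L, n))
    alpha[0] = pi * E[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, L):                       # forward (alpha) pass
        alpha[t] = (alpha[t - 1] @ T) * E[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    for t in range(L - 2, -1, -1):              # backward (beta) pass
        beta[t] = T @ (E[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

# 3 grid cells; the robot tends to move right; the sensor reports
# the true cell 80% of the time. All values are illustrative.
T = np.array([[0.2, 0.8, 0.0],
              [0.0, 0.2, 0.8],
              [0.8, 0.0, 0.2]])
E = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
pi = np.ones(3) / 3
post = forward_backward(T, E, pi, obs=[0, 1, 2])
```

Running the forward pass alone gives the filtered (real-time) estimate; folding in the beta pass refines it with future evidence, which is exactly the real-time versus off-line trade-off the abstract discusses.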
Spatio-temporal map maintenance for extending autonomy in long-term mobile robotic tasks
Working in hazardous environments requires routine inspections in order to meet safety standards. Dangerous quantities of nuclear contamination can exist in infinitesimally small volumes. In order to confidently inspect a nuclear environment for radioactive sources, especially those which emit alpha radiation, technicians must carefully maintain detectors at a consistent velocity and distance from a source. Technicians must also keep careful records of which areas have and have not been surveyed so that no area is left unmonitored. This is a difficult, exhausting task when the coverage area is larger than an office space. An autonomous mobile robotic platform with Complete Coverage Path Planning (CCPP) can reduce dangerous exposure to humans and provide better information for Radiological Control Technicians (RCTs). The developed robotic system - or RCTbot - is designed for long-term deployment with little human correction, intervention, or maintenance required. To do this, the RCTbot creates a map of the environment, continually updates it based on multiple sensor inputs, and searches its map for contamination. In nuclear environments, the areas of interest often remain spatially constant throughout the duration of an inspection and are considered temporally static. The RCTbot monitors temporally static environments but adapts to dynamic changes over time. It then uses its sensor data to update and maintain its map so no manual human intervention is necessary. The spatio-temporal map maintenance (STMM) is agnostic to the survey type, so the RCTbot system is viable for application domains other than nuclear.
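One common way to realize the kind of map maintenance described above is a log-odds occupancy grid whose cells slowly decay toward the prior, so stale dynamic obstacles are forgotten while persistent structure survives. The sketch below is an illustrative assumption about such a mechanism, not the RCTbot's actual STMM method; the update and decay constants are placeholders.

```python
import numpy as np

# Toy spatio-temporal occupancy maintenance: each cell stores a
# log-odds occupancy value. New sensor evidence pushes it up (hit)
# or down (miss); a slow decay toward log-odds 0 (p = 0.5) lets
# the map forget obstacles that are never re-observed.

def update_cell(logodds, hit, l_occ=0.85, l_free=-0.4):
    """Standard log-odds occupancy update for one observation."""
    return logodds + (l_occ if hit else l_free)

def decay_map(logodds_map, rate=0.95):
    """Relax every cell toward the uninformed prior (log-odds 0)."""
    return logodds_map * rate

grid = np.zeros((4, 4))                           # all cells at p = 0.5
grid[1, 2] = update_cell(grid[1, 2], hit=True)    # obstacle observed once
for _ in range(50):                               # never observed again
    grid = decay_map(grid)
prob = 1.0 / (1.0 + np.exp(-grid[1, 2]))          # back to probability
```

After many unobserved cycles the cell's occupancy probability has relaxed back near 0.5, while a cell that kept receiving hits would stay high; this captures the "temporally static but adaptive" behavior the abstract describes.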
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available