8,593 research outputs found

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
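    As context for the "de-facto standard formulation" mentioned above: modern SLAM is usually cast as maximum-a-posteriori estimation over a factor graph. A minimal sketch of that formulation, with notation assumed here rather than taken from the paper, is:

```latex
\mathcal{X}^{\star}
  = \operatorname*{argmax}_{\mathcal{X}} \; p(\mathcal{X} \mid \mathcal{Z})
  = \operatorname*{argmin}_{\mathcal{X}} \; \sum_{k} \left\lVert h_k(\mathcal{X}_k) - z_k \right\rVert^{2}_{\Omega_k}
```

    Here \mathcal{X} stacks the robot poses and landmark positions, z_k is the k-th measurement with information matrix \Omega_k and measurement model h_k(\cdot); the squared Mahalanobis form follows from assuming Gaussian measurement noise.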

    Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect

    Recently, the new Kinect One was released by Microsoft, providing the next generation of real-time range sensing devices based on the Time-of-Flight (ToF) principle. As the first Kinect version used a structured-light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison between both devices. To conduct the comparison, we propose a framework of seven different experimental setups, which forms a generic basis for evaluating range cameras such as the Kinect. The experiments were designed to capture the individual effects of the Kinect devices in as isolated a manner as possible, and in a way that they can also be adopted for any other range sensing device. The overall goal of this paper is to provide solid insight into the pros and cons of either device, so that scientists interested in using Kinect range sensing cameras in their specific application scenario can directly assess the expected benefits and potential problems of either device. Comment: 58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding (CVIU).
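    To make the difference in sensing principle concrete, the sketch below contrasts the textbook depth equations behind the two devices: triangulation from disparity for the structured-light Kinect v1, and phase-shift Time-of-Flight for the Kinect One. The function names and the numbers in the example calls are illustrative assumptions, not parameters reported in the paper.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def depth_structured_light(focal_px, baseline_m, disparity_px):
    """Triangulation: depth grows as the observed disparity shrinks."""
    return focal_px * baseline_m / disparity_px

def depth_time_of_flight(phase_rad, mod_freq_hz):
    """Continuous-wave ToF: depth from the phase shift of the modulated signal."""
    return (C * phase_rad) / (4.0 * math.pi * mod_freq_hz)

# Illustrative numbers only (not measured Kinect parameters):
print(depth_structured_light(focal_px=580.0, baseline_m=0.075, disparity_px=20.0))  # ~2.2 m
print(depth_time_of_flight(phase_rad=math.pi / 2, mod_freq_hz=80e6))                # ~0.47 m
```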

    The Comparative Exploration of the Ice Giant Planets with Twin Spacecraft: Unveiling the History of our Solar System

    In the course of the selection of the scientific themes for the second and third L-class missions of the Cosmic Vision 2015-2025 program of the European Space Agency, the exploration of the ice giant planets Uranus and Neptune was defined as "a timely milestone, fully appropriate for an L-class mission". Among the proposed scientific themes, we presented the scientific case for exploring both planets and their satellites in the framework of a single L-class mission and proposed a mission scenario that could achieve this. In this work we present an updated and more complete discussion of the scientific rationale and of the mission concept for a comparative exploration of the ice giant planets Uranus and Neptune and of their satellite systems with twin spacecraft. The first goal of comparatively studying these two similar yet strikingly different systems is to shed new light on the ancient past of the Solar System and on the processes that shaped its formation and evolution. This, in turn, would reveal whether the Solar System and the very diverse extrasolar systems discovered so far share a common origin, or whether different environments and mechanisms were responsible for their formation. A space mission to the ice giants would also open up the possibility of using Uranus and Neptune as templates in the study of one of the most abundant types of extrasolar planets in the galaxy. Finally, such a mission would allow a detailed study of the interplanetary and gravitational environments at a range of distances from the Sun that is poorly covered by direct exploration, improving the constraints on fundamental theories of gravitation and on the behaviour of the solar wind and the interplanetary magnetic field. Comment: 29 pages, 4 figures; accepted for publication in the special issue "The outer Solar System X" of the journal Planetary and Space Science. This article presents an updated and expanded discussion of the white paper "The ODINUS Mission Concept" (arXiv:1402.2472), submitted in response to the ESA call for ideas for the scientific themes of the future L2 and L3 space missions.

    Peer Attention Modeling with Head Pose Trajectory Tracking Using Temporal Thermal Maps

    Human head pose trajectories can represent a wealth of implicit information, such as areas of attention, body language, and potential future actions. This signal is of high value for use in Human-Robot teams because of the implicit information encoded within it. Although team-based tasks require both explicit and implicit communication among peers, large team sizes, noisy environments, distance, and mission urgency can inhibit the frequency and quality of explicit communication. The goal of this thesis is to improve the capabilities of Human-Robot teams by making use of implicit communication. In support of this goal, the following hypotheses are investigated:
    ● Implicit information about a human subject's attention can be reliably extracted in software by tracking the subject's head pose trajectory, and
    ● attention can be represented with a 3D temporal thermal map for implicitly determining a subject's Objects Of Interest (OOIs).
    These hypotheses are investigated through experimentation with a new tool for peer attention modeling by Head Pose Trajectory Tracking using Temporal Thermal Maps (HPT4M). This system allows a robot Observing Agent (OA) to view a human teammate and temporally model their Regions Of Interest (ROIs) by generating a 3D thermal map based on the subject's head pose trajectory. The findings of this work are that HPT4M can be used by an OA to contribute to a team search mission by implicitly discovering a human subject's OOI type, mapping the item's location within the searched space, and labeling the item's discovery state. Furthermore, this work discusses some of the discovered limitations of this technology and the hurdles that must be overcome before implementing HPT4M in a reliable real-world system. Finally, the techniques used in this work are provided as an open-source Robot Operating System (ROS) node at github.com/HPT4M, with the intent that it will aid other developers in the robotics community in improving Human-Robot teams. The proofs of principle and tools developed in this thesis are a foundational platform for deeper investigation in future research on improving Human-Robot teams via implicit communication techniques.
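    As a rough illustration of the temporal thermal map idea, the sketch below deposits "heat" in a voxel grid along the ray defined by each head pose and lets older contributions decay over time; voxels that stay hot are candidate objects of interest. The grid size, decay rule, and all names are hypothetical and are not taken from the HPT4M ROS node linked above.

```python
import numpy as np

GRID = np.zeros((50, 50, 50))   # voxel grid covering the searched space
VOXEL_SIZE = 0.1                # metres per voxel (assumed)
DECAY = 0.99                    # per-frame decay of old attention
HEAT = 1.0                      # heat deposited per visited voxel

def update_thermal_map(head_pos, gaze_dir, max_range=3.0):
    """Decay the whole map, then march a ray from the head along the gaze
    direction and deposit heat in every voxel the ray passes through."""
    global GRID
    GRID *= DECAY
    head_pos = np.asarray(head_pos, dtype=float)
    gaze_dir = np.asarray(gaze_dir, dtype=float)
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    for t in np.arange(0.0, max_range, VOXEL_SIZE):
        voxel = ((head_pos + t * gaze_dir) / VOXEL_SIZE).astype(int)
        if np.all(voxel >= 0) and np.all(voxel < GRID.shape):
            GRID[tuple(voxel)] += HEAT

# Voxels with the highest accumulated heat are candidate objects of interest:
# ooi_voxel = np.unravel_index(GRID.argmax(), GRID.shape)
```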

    The Castalia mission to Main Belt Comet 133P/Elst-Pizarro

    We describe Castalia, a proposed mission to rendezvous with a Main Belt Comet (MBC), 133P/Elst-Pizarro. MBCs are a recently discovered population of apparently icy bodies within the main asteroid belt between Mars and Jupiter, which may represent the remnants of the population that supplied the early Earth with water. Castalia will perform the first exploration of this population by characterising 133P in detail, solving the puzzle of the MBC's activity, and making the first in situ measurements of water in the asteroid belt. In many ways a successor to ESA's highly successful Rosetta mission, Castalia will allow direct comparison between very different classes of comet, including measurements of critical isotope ratios and plasma and dust properties. It will also feature the first radar system to visit a minor body, mapping the ice in the interior. Castalia was proposed, in slightly different versions, to the ESA M4 and M5 calls within the Cosmic Vision programme. We describe the science motivation for the mission, the measurements required to achieve the scientific goals, and the instrument payload and spacecraft proposed to achieve these goals.

    Resilient Perception for Outdoor Unmanned Ground Vehicles

    This thesis promotes the development of resilience for perception systems, with a focus on Unmanned Ground Vehicles (UGVs) in adverse environmental conditions. Perception is the interpretation of sensor data to produce a representation of the environment that is necessary for subsequent decision making. Long-term autonomy requires perception systems that correctly function in unusual but realistic conditions that will eventually occur during extended missions. State-of-the-art UGV systems can fail when the sensor data are beyond the operational capacity of the perception models. The key to a resilient perception system lies in the use of multiple sensor modalities and the pre-selection of appropriate sensor data to minimise the chance of failure. This thesis proposes a framework based on diagnostic principles to evaluate and pre-select sensor data prior to interpretation by the perception system. Image-based quality metrics are explored and evaluated experimentally using infrared (IR) and visual cameras onboard a UGV in the presence of smoke and airborne dust. A novel quality metric, Spatial Entropy (SE), is introduced and evaluated. The proposed framework is applied to a state-of-the-art Visual-SLAM algorithm combining visual and IR imaging as a real-world example. An extensive experimental evaluation demonstrates that the framework allows for camera-based localisation that is resilient to a range of low-visibility conditions, when compared to other methods that use a single sensor or combine sensor data without selection. The proposed framework not only allows for resilient localisation in adverse conditions using image data, but also has significant potential to benefit many other perception applications. Employing multiple sensing modalities along with pre-selection of appropriate data is a powerful method for creating resilient perception systems that anticipate and mitigate errors. The development of such resilient perception systems is a requirement for next-generation outdoor UGVs.
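    The abstract does not define the Spatial Entropy (SE) metric, so the sketch below uses plain Shannon entropy of the intensity histogram as a stand-in image-quality cue for pre-selecting frames (e.g. rejecting frames washed out by smoke or dust) before they reach the perception pipeline. The threshold and function names are assumptions, not the thesis' actual metric.

```python
import numpy as np

def histogram_entropy(gray_image, bins=256):
    """Shannon entropy of the intensity histogram, in bits."""
    hist, _ = np.histogram(gray_image, bins=bins, range=(0, 255))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_frames(frames, threshold=4.0):
    """Pre-select frames whose entropy suggests enough usable structure."""
    return [f for f in frames if histogram_entropy(f) >= threshold]
```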

    Marshall Space Flight Center Research and Technology Report 2019

    Today, our calling to explore is greater than ever before, and here at Marshall Space Flight Center, we make human deep space exploration possible. A key goal for Artemis is demonstrating and perfecting capabilities on the Moon for technologies needed for humans to get to Mars. This year's report features 10 of the Agency's 16 Technology Areas, and I am proud of Marshall's role in creating solutions for so many of these daunting technical challenges. Many of these projects will lead to a sustainable in-space architecture for human space exploration that will allow us to travel to the Moon, on to Mars, and beyond. Others are developing new scientific instruments capable of providing an unprecedented glimpse into our universe. NASA has led the charge in space exploration for more than six decades, and through the Artemis program we will build on our work in low Earth orbit and pave the way to the Moon and Mars. At Marshall, we leverage the skills and interests of the international community to conduct scientific research, develop and demonstrate technology, and train international crews to operate farther from Earth for longer periods of time than ever before: first at the lunar surface, then on to our next giant leap, human exploration of Mars. While each project in this report seeks to advance new technology and challenge conventions, it is important to recognize the diversity of activities and people supporting our mission. This report not only showcases the Center's capabilities and our partnerships, it also highlights the progress our people have achieved in the past year. These scientists, researchers, and innovators are why Marshall and NASA will continue to be leaders in innovation, exploration, and discovery for years to come.

    Sistema de Deteção de Quedas Automático Baseado em Vídeo (Video-Based Automatic Fall Detection System)

    The elderly population faces difficulties in completing certain tasks independently, often requiring supervision not only to assist them but also to mitigate and notify about potential health risks. Falls, a prevalent and severe problem, pose a high risk of causing hospitalizations and fatalities. However, the aging population in developed countries is growing at an unprecedented rate, while the proportion of active-age individuals continues to decline. Consequently, elderly care has become less accessible as caregivers are confronted with a larger number of patients. Moreover, conventional fall detection methods, typically triggered by the victims themselves, are unreliable and inadequate. This thesis proposes an automatic alternative to existing methods, presenting a computer vision-based Fall Detection System (FDS) that utilizes a two-stream Inflated 3D Convolutional Neural Network (I3D) in conjunction with a Recurrent Neural Network (RNN). To enhance the available datasets, a new collection of simulated falls was created. Experimental evaluations demonstrate the superiority of this hybrid model over state-of-the-art fall detection models, achieving an accuracy of 94% and a recall of 96%. By promptly and accurately detecting falls, a system employing this model could significantly reduce the risk of severe injury to elderly and physically disabled individuals.
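    As a minimal sketch of the kind of pipeline the abstract describes (not the authors' actual two-stream I3D, which would additionally take an optical-flow stream and far deeper 3D convolutions), the PyTorch snippet below summarises each video clip with a small 3D CNN, models the clip sequence with a GRU, and outputs fall / no-fall logits. All layer sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class FallDetector(nn.Module):
    """Minimal stand-in for a 3D-CNN + RNN fall detector: a small 3D CNN
    summarises each clip, a GRU models the sequence of clip features, and a
    linear head predicts fall / no-fall."""
    def __init__(self, in_channels=3, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn3d = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.proj = nn.Linear(32, feat_dim)
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, clips):
        # clips: (batch, num_clips, channels, frames, height, width)
        b, n = clips.shape[:2]
        x = clips.flatten(0, 1)            # (b*n, C, T, H, W)
        x = self.cnn3d(x).flatten(1)       # (b*n, 32)
        x = self.proj(x).view(b, n, -1)    # (b, n, feat_dim)
        _, h = self.rnn(x)                 # h: (num_layers, b, hidden)
        return self.head(h[-1])            # (b, 2) fall / no-fall logits

# Example: 2 sequences of 4 clips, each clip 3x16x112x112
logits = FallDetector()(torch.randn(2, 4, 3, 16, 112, 112))
```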