
    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motion by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time, while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test environment for selecting nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest. Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. Revision includes description of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.
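    To make the vehicle-in-the-loop idea concrete, the following minimal Python sketch shows how real pose measurements from a motion-capture system could drive a synthetic exteroceptive sensor render in a closed loop. The class and method names (MotionCapture, Renderer, get_pose, render_camera) are illustrative placeholders, not the actual FlightGoggles Python API.

    import time

    class MotionCapture:
        """Placeholder client for a motion-capture system tracking the real vehicle."""
        def get_pose(self):
            # Would return the measured (position, orientation) of the vehicle in flight.
            return (0.0, 0.0, 1.0), (1.0, 0.0, 0.0, 0.0)

    class Renderer:
        """Placeholder for a photorealistic renderer of exteroceptive sensors."""
        def render_camera(self, position, orientation):
            # Would return a synthetic RGB image rendered at the given pose.
            return b""

    def vehicle_in_the_loop(rate_hz=60.0):
        mocap, renderer = MotionCapture(), Renderer()
        while True:
            # 1. Real dynamics: the pose comes from the physical vehicle via motion capture.
            position, orientation = mocap.get_pose()
            # 2. Synthetic perception: the camera image is rendered in silico at that pose.
            image = renderer.render_camera(position, orientation)
            # 3. A perception/control stack would consume the synthetic image here,
            #    as if it came from a real onboard camera.
            time.sleep(1.0 / rate_hz)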

    Rover and Telerobotics Technology Program

    The Jet Propulsion Laboratory's (JPL's) Rover and Telerobotics Technology Program, sponsored by the National Aeronautics and Space Administration (NASA), responds to opportunities presented by NASA space missions and systems, and seeds commercial applications of the emerging robotics technology. The scope of the JPL Rover and Telerobotics Technology Program comprises three major segments of activity: NASA robotic systems for planetary exploration, robotic technology and terrestrial spin-offs, and technology for non-NASA sponsors. Significant technical achievements have been reached in each of these areas, including complete telerobotic system prototypes that have been built and tested in realistic scenarios relevant to prospective users. In addition, the program has conducted complementary basic research and created innovative technology and terrestrial applications, as well as enabled a variety of commercial spin-offs.

    Challenges and Solutions for Autonomous Robotic Mobile Manipulation for Outdoor Sample Collection

    In refinery, petrochemical, and chemical plants, process technicians collect uncontaminated samples to be analyzed in the quality control laboratory at all times and in all weather. This traditionally manual operation not only exposes the process technicians to hazardous chemicals, but also imposes an economic burden on the management. Recent developments in mobile manipulation provide an opportunity to fully automate the operation of sample collection. This paper reviewed the various challenges in sample collection in terms of navigation of the mobile platform and manipulation of the robotic arm from four aspects, namely mobile robot positioning/attitude using the global navigation satellite system (GNSS), vision-based navigation and visual servoing, robotic manipulation, and mobile robot path planning and control. This paper further proposed solutions to these challenges and pointed out the main directions of development in mobile manipulation.
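    As a concrete illustration of the visual-servoing aspect mentioned above, the sketch below implements the classical image-based visual servoing (IBVS) proportional law. It is a generic textbook formulation under simplifying assumptions (a known interaction matrix), not the specific controller proposed in the paper.

    import numpy as np

    def ibvs_velocity(s, s_star, L, gain=0.5):
        """Camera velocity command that drives image features s toward the goal s_star.

        s, s_star : (2n,) current and desired image-feature coordinates
        L         : (2n, 6) interaction (image Jacobian) matrix of the features
        """
        error = np.asarray(s, dtype=float) - np.asarray(s_star, dtype=float)
        # Classical proportional IBVS law: v = -lambda * L^+ * (s - s*)
        return -gain * np.linalg.pinv(L) @ error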

    UGV Navigation in ROS using LIDAR 3D

    This work takes a step toward achieving robust Unmanned Ground Vehicles (UGVs) that can drive in urban environments. More specifically, it focuses on the management of a four-wheeled vehicle in ROS using mainly the inputs provided by a 3D LIDAR. Simulations were carried out in ad-hoc scenarios designed and run using GAZEBO. Visual information provided by the sensors is processed through the PCL library. Thanks to this processing, the parameters needed to manage the UGV are obtained, and its guidance can be carried out through a PID controller.
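    A minimal Python sketch of the pipeline described above follows: a ROS node that turns a 3D LIDAR point cloud into a steering command through a PID controller. The topic names and the simple corridor-centering error are assumptions for illustration; the original work processes the cloud with the PCL library and runs in GAZEBO scenarios.

    import rospy
    import sensor_msgs.point_cloud2 as pc2
    from sensor_msgs.msg import PointCloud2
    from geometry_msgs.msg import Twist

    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral, self.prev_error = 0.0, 0.0

        def step(self, error, dt):
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    def cloud_callback(msg, args):
        pid, cmd_pub = args
        # Example processing: mean lateral offset of points ahead of the vehicle
        # stands in for the parameters extracted with PCL in the original work.
        ys = [p[1] for p in pc2.read_points(msg, field_names=("x", "y", "z"), skip_nans=True)
              if 0.0 < p[0] < 10.0]
        lateral_error = sum(ys) / len(ys) if ys else 0.0
        cmd = Twist()
        cmd.linear.x = 1.0                            # constant forward speed
        cmd.angular.z = pid.step(lateral_error, 0.1)  # assumes ~10 Hz point clouds
        cmd_pub.publish(cmd)

    if __name__ == "__main__":
        rospy.init_node("ugv_lidar_pid")
        pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        controller = PID(kp=0.8, ki=0.0, kd=0.1)
        rospy.Subscriber("/velodyne_points", PointCloud2, cloud_callback, (controller, pub))
        rospy.spin()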

    A Method for Constructing a Continuous Task Execution System for Humanoid Robots Based on Accumulated State Estimation (積算状態推定に基づくヒューマノイドロボットの継続的タスク実行システムの構成法)

    Degree type: Doctorate (by coursework). Dissertation committee: (Chair) Associate Professor Kei Okada, The University of Tokyo; Professor Yoshihiko Nakamura, The University of Tokyo; Professor Masayuki Inaba, The University of Tokyo; Professor Yasuo Kuniyoshi, The University of Tokyo; Associate Professor Wataru Takano, The University of Tokyo. University of Tokyo (東京大学).

    Uncertainty Minimization in Robotic 3D Mapping Systems Operating in Dynamic Large-Scale Environments

    This dissertation research is motivated by the potential and promise of 3D sensing technologies in safety and security applications. With specific focus on unmanned robotic mapping to aid clean-up of hazardous environments, under-vehicle inspection, automatic runway/pavement inspection, and modeling of urban environments, we develop modular, multi-sensor, multi-modality robotic 3D imaging prototypes using localization/navigation hardware, laser range scanners, and video cameras. While deploying our multi-modality complementary approach to pose and structure recovery in dynamic real-world operating conditions, we observe several data fusion issues that state-of-the-art methodologies are not able to handle. Different bounds on the noise models of heterogeneous sensors, the dynamism of the operating conditions, and the interaction of the sensing mechanisms with the environment introduce situations where sensors can intermittently degrade to accuracy levels lower than their design specification. This observation necessitates methods that integrate multi-sensor data while accounting for sensor conflict, performance degradation, and potential failure during operation. This dissertation contributes to the data fusion literature the derivation of a fault-diagnosis framework inspired by information complexity theory. We implement the framework as opportunistic sensing intelligence that evolves a belief policy on the sensors within the multi-agent 3D mapping systems, allowing them to survive and counter failures in challenging operating conditions. In addition to eliminating failed or non-functional sensors and avoiding catastrophic fusion, the implementation of the information-theoretic framework minimizes uncertainty during autonomous operation by adaptively deciding whether to fuse all sensors or to rely only on the believable ones. We demonstrate our framework through experiments in multi-sensor robot state localization in large-scale dynamic environments and in vision-based 3D inference. Our modular hardware and software design of robotic imaging prototypes, together with the opportunistic sensing intelligence, provides significant improvements towards autonomous, accurate, photo-realistic 3D mapping and remote visualization of scenes for the motivating applications.
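    The following simplified Python sketch illustrates the flavor of the belief-policy idea: each sensor's belief is adapted according to its agreement with a robust consensus, and sensors whose belief drops below a threshold are excluded from fusion. The actual dissertation framework derives its criterion from information complexity theory; the agreement score used here is only a stand-in for illustration.

    import numpy as np

    def fuse_believable(estimates, beliefs, threshold=0.2, adapt=0.3):
        """Fuse scalar state estimates from several sensors, updating per-sensor beliefs.

        estimates : (n,) current estimate from each sensor
        beliefs   : (n,) belief values in [0, 1] carried over from previous steps
        Returns (fused_estimate, updated_beliefs).
        """
        estimates = np.asarray(estimates, dtype=float)
        consensus = np.median(estimates)                  # robust reference value
        residual = np.abs(estimates - consensus)
        agreement = 1.0 / (1.0 + residual)                # 1 when a sensor matches consensus
        beliefs = (1.0 - adapt) * np.asarray(beliefs, dtype=float) + adapt * agreement
        usable = beliefs >= threshold                     # drop sensors believed to have failed
        weights = beliefs * usable
        fused = float(weights @ estimates / weights.sum()) if weights.sum() > 0 else consensus
        return fused, beliefs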

    Third International Symposium on Artificial Intelligence, Robotics, and Automation for Space 1994

    The Third International Symposium on Artificial Intelligence, Robotics, and Automation for Space (i-SAIRAS 94), held October 18-20, 1994, in Pasadena, California, was jointly sponsored by NASA, ESA, and Japan's National Space Development Agency, and was hosted by the Jet Propulsion Laboratory (JPL) of the California Institute of Technology. i-SAIRAS 94 featured presentations covering a variety of technical and programmatic topics, ranging from underlying basic technology to specific applications of artificial intelligence and robotics to space missions. The symposium also included a special workshop on planning and scheduling and provided scientists, engineers, and managers with the opportunity to exchange theoretical ideas, practical results, and program plans in such areas as space mission control, space vehicle processing, data analysis, autonomous spacecraft, space robots and rovers, satellite servicing, and intelligent instruments.