
    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time, while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main testbed for selecting the nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest.
    Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. The revision includes descriptions of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.
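    To make the vehicle-in-the-loop idea concrete, the sketch below shows the shape of one simulation cycle: real dynamics and proprioception come from a motion-capture facility, while exteroceptive sensing is rendered synthetically at the tracked pose. All class and function names here are illustrative placeholders, not the actual FlightGoggles API.

```python
# Hypothetical vehicle-in-the-loop cycle in the style the abstract describes.
# `MotionCaptureClient` and `PhotorealisticRenderer` are illustrative stand-ins.

class MotionCaptureClient:
    def latest_pose(self):
        """Return the latest (position, orientation) of the tracked vehicle."""
        raise NotImplementedError  # supplied by the motion-capture facility

class PhotorealisticRenderer:
    def render_camera(self, position, orientation):
        """Return a synthetic camera image for the given vehicle pose."""
        raise NotImplementedError  # supplied by the rendering engine

def vehicle_in_the_loop_step(mocap, renderer, perception, controller):
    # 1. Dynamics and proprioception are generated "in motio" by the real vehicle.
    position, orientation = mocap.latest_pose()
    # 2. Exteroceptive sensing is rendered "in silico" at that pose.
    image = renderer.render_camera(position, orientation)
    # 3. The autonomy stack under test closes the loop on synthetic imagery.
    state_estimate = perception(image)
    return controller(state_estimate)
```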

    State Estimation for Kite Power Systems with Delayed Sensor Measurements

    We present a novel estimation approach for airborne wind energy systems with ground-based control and energy generation. The estimator fuses measurements from an inertial measurement unit attached to a tethered wing with position measurements from a camera and from line angle sensors in an unscented Kalman filter. We have developed a novel kinematic description for tethered wings that specifically addresses tether dynamics. The presented approach simultaneously estimates feedback variables for a flight controller as well as model parameters, such as a time-varying delay. We demonstrate the performance of the estimator on experimental flight data and compare it to a state-of-the-art estimator based on inertial measurements.
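    The key estimation idea, fusing delayed measurements in an unscented Kalman filter while estimating the delay itself as a state, can be illustrated with a toy filter. The following is a minimal sketch assuming a 1-D constant-velocity target and the filterpy library; it is not the authors' estimator, and the delay model (a first-order rewind along the current velocity) is a simplification.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

DT = 0.02

def fx(x, dt):
    # State x = [position, velocity, delay]; constant-velocity motion,
    # random-walk delay (the delay is estimated like any other state).
    return np.array([x[0] + x[1] * dt, x[1], x[2]])

def hx(x):
    # The camera reports the position as it was x[2] seconds ago; approximate
    # that by rewinding along the current velocity. The delay becomes
    # identifiable once the velocity varies over time.
    return np.array([x[0] - x[1] * x[2]])

points = MerweScaledSigmaPoints(n=3, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=3, dim_z=1, dt=DT, hx=hx, fx=fx, points=points)
ukf.x = np.array([0.0, 1.0, 0.1])       # initial position, velocity, delay guess
ukf.P = np.diag([1.0, 1.0, 0.05])
ukf.Q = np.diag([1e-4, 1e-3, 1e-6])     # tiny process noise lets the delay drift
ukf.R = np.array([[1e-2]])

# Synthetic data: a sinusoidal trajectory observed with a 0.15 s camera delay.
ts = np.arange(0.0, 10.0, DT)
true_pos = np.sin(ts)
zs = np.interp(ts - 0.15, ts, true_pos) + 0.05 * np.random.randn(len(ts))

for z in zs:
    ukf.predict()
    ukf.update(np.array([z]))
print("estimated delay:", ukf.x[2])
```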

    The Coordinate Particle Filter - A novel Particle Filter for High Dimensional Systems

    Parametric filters, such as the Extended Kalman Filter and the Unscented Kalman Filter, typically scale well with the dimensionality of the problem, but they are known to fail if the posterior state distribution cannot be closely approximated by a density of the assumed parametric form. For nonparametric filters, such as the Particle Filter, the converse holds. Such methods are able to approximate any posterior, but their computational requirements scale exponentially with the number of dimensions of the state space. In this paper, we present the Coordinate Particle Filter, which alleviates this problem. We propose to compute the particle weights recursively, dimension by dimension. This allows us to explore one dimension at a time and to resample after each dimension if necessary. Experimental results on simulated as well as real data confirm that the proposed method has a substantial performance advantage over the Particle Filter in high-dimensional systems where not all dimensions are highly correlated. We demonstrate the benefits of the proposed method for the problems of multi-object tracking and robotic manipulator tracking.
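    The dimension-by-dimension weighting can be sketched in a few lines. The toy below assumes the transition and observation models both factorize over state coordinates, so each coordinate can be propagated, weighted, and, when the effective sample size collapses, resampled in turn; it illustrates the idea rather than reproducing the authors' algorithm.

```python
import numpy as np

def coordinate_pf_step(particles, z, process_std=0.1, obs_std=0.2, ess_frac=0.5):
    """One filter step. particles: (N, D) array; z: (D,) measurement.
    Assumes Gaussian per-coordinate dynamics and observations (a toy model)."""
    n, d = particles.shape
    log_w = np.zeros(n)
    for dim in range(d):
        # Propagate and weight only this coordinate.
        particles[:, dim] += process_std * np.random.randn(n)
        log_w += -0.5 * ((z[dim] - particles[:, dim]) / obs_std) ** 2
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # Resample as soon as the effective sample size collapses, instead of
        # waiting until every dimension has contributed to the weights.
        if 1.0 / np.sum(w ** 2) < ess_frac * n:
            idx = np.random.choice(n, size=n, p=w)
            particles = particles[idx]
            log_w = np.zeros(n)
    return particles
```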

    An Equivariant Observer Design for Visual Localisation and Mapping

    This paper builds on recent work on Simultaneous Localisation and Mapping (SLAM) in the non-linear observer community by framing the visual localisation and mapping problem as a continuous-time equivariant observer design problem on the symmetry group of a kinematic system. The state space is a quotient of the robot pose, expressed on SE(3), and multiple copies of real projective space, used to represent both points in space and bearings in a single unified framework. An observer with decoupled Riccati gains for each landmark is derived, and we show that its error system is almost globally asymptotically stable and exponentially stable in-the-large.
    Comment: 12 pages, 2 figures, published at the 2019 IEEE CDC.
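    For intuition, decoupling the gains means each landmark carries its own small Riccati equation instead of one joint covariance over the whole map. A generic continuous-time Riccati flow of that kind is sketched below; the matrices A_i, C_i, P_i, Q_i are placeholders, and the paper's equivariant formulation differs in detail.

```latex
% Generic per-landmark Riccati gain flow (illustrative, not the paper's exact
% equations): each landmark i keeps its own covariance \Sigma_i and gain K_i.
\begin{aligned}
  \dot{\Sigma}_i &= A_i \Sigma_i + \Sigma_i A_i^{\top}
                    - \Sigma_i C_i^{\top} Q_i^{-1} C_i \Sigma_i + P_i, \\
  K_i &= \Sigma_i C_i^{\top} Q_i^{-1}.
\end{aligned}
```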

    The path inference filter: model-based low-latency map matching of probe vehicle data

    We consider the problem of reconstructing vehicle trajectories from sparse sequences of GPS points for which the sampling interval is between 10 seconds and 2 minutes. We introduce a new class of algorithms, collectively called the path inference filter (PIF), that maps GPS data in real time, at high throughput, and across a variety of trade-offs and scenarios. Numerous prior approaches to map-matching can be shown to be special cases of the path inference filter presented in this article. We present an efficient procedure for automatically training the filter on new data, with or without ground-truth observations. The framework is evaluated on a large San Francisco taxi dataset and is shown to improve upon the current state of the art. The filter also provides insights into driving patterns. The path inference filter has been deployed at industrial scale inside the Mobile Millennium traffic information system, where it is used to map-match fleet data in San Francisco, Sacramento, Stockholm, and Porto.
    Comment: Preprint, 23 pages and 23 figures.
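    A common backbone for this kind of map matching is hidden-Markov-model decoding over candidate road projections, and the path inference filter generalizes this family. The sketch below is a plain Viterbi pass over precomputed scores, a simplified stand-in rather than the PIF itself: observation scores favor projections close to each GPS fix, and transition scores penalize implausible detours between consecutive candidates.

```python
import numpy as np

def viterbi_map_match(obs_scores, trans_scores):
    """obs_scores: list of (K_t,) log-likelihoods of each candidate projection
    at time t; trans_scores: list of (K_t, K_{t+1}) log transition scores,
    e.g. penalizing on-road path length relative to straight-line distance.
    Returns the index of the chosen candidate at every timestep."""
    score = obs_scores[0].copy()
    back = []
    for t in range(len(trans_scores)):
        total = score[:, None] + trans_scores[t] + obs_scores[t + 1][None, :]
        back.append(np.argmax(total, axis=0))   # best predecessor per candidate
        score = np.max(total, axis=0)
    # Backtrack from the best final candidate.
    path = [int(np.argmax(score))]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]
```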

    Collision Detection and Reaction: A Contribution to Safe Physical Human-Robot Interaction

    In the framework of physical Human-Robot Interaction (pHRI), methodologies and experimental tests are presented for the problem of detecting and reacting to collisions between a robot manipulator and a human being. Using a lightweight robot that was especially designed for interactive and cooperative tasks, we show how reactive control strategies can contribute significantly to ensuring the human's safety during physical interaction. Several collision tests were carried out, illustrating the feasibility and effectiveness of the proposed approach. While users experience a subjective feeling of “safety” when they are able to naturally stop the robot during autonomous motion, a quantitative analysis of the different reaction strategies was lacking. In order to compare these strategies on an objective basis, a mechanical verification platform has been built. The proposed collision detection and reaction methods prove to work very reliably and are effective in reducing contact forces far below levels that are dangerous to humans. Evaluations of impacts between the robot and a human arm or chest at robot velocities of up to 2.7 m/s are presented.
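    A widely used signal for detecting collisions without joint-torque sensing is a generalized-momentum residual; the sketch below shows that idea in discretized form, together with a toy reaction rule. It is illustrative only: the model interface and the reflex strategy are assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np

def momentum_residual_step(r, p_hat, q, qd, tau_motor, model, K_obs, dt):
    """One Euler step of the momentum observer. Initialize with r = 0 and
    p_hat = model.M(q0) @ qd0. `model` must provide M(q) inertia, C(q, qd)
    Coriolis matrix, and g(q) gravity torque. The residual r stays near zero
    in free motion and jumps when an external torque acts on the robot."""
    p = model.M(q) @ qd                       # generalized momentum
    # p_dot = tau_motor + C^T qd - g + tau_ext; the observer integrates the
    # known terms plus the current residual estimate.
    p_hat = p_hat + dt * (tau_motor + model.C(q, qd).T @ qd - model.g(q) + r)
    r = K_obs @ (p - p_hat)                   # residual ~ external joint torque
    return r, p_hat

def react_to_collision(r, threshold):
    # A simple hypothetical reaction strategy: if any residual channel exceeds
    # its threshold, command a reflex torque that pushes the joints away from
    # the estimated contact, instead of merely stopping the robot.
    if np.any(np.abs(r) > threshold):
        return np.sign(r) * threshold         # hypothetical reflex magnitude
    return None                               # no collision detected
```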