1,277 research outputs found

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available

    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in vivo by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time, while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main testbed for selecting the nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest.

    Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. Revision includes description of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.
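
    A minimal sketch of the vehicle-in-the-loop idea described above: real vehicle poses stream in from a motion-capture system while camera frames are rendered synthetically at each step. All names here (MotionCaptureClient, Renderer, get_pose, render_camera) are hypothetical placeholders for illustration, not the actual FlightGoggles API.

```python
import time


class MotionCaptureClient:
    """Placeholder for a motion-capture feed (e.g. an external tracking system)."""

    def get_pose(self):
        # Latest vehicle pose: xyz position and wxyz quaternion.
        return (0.0, 0.0, 1.0), (1.0, 0.0, 0.0, 0.0)


class Renderer:
    """Placeholder for a photorealistic renderer."""

    def render_camera(self, position, orientation):
        # Synthesize an RGB frame for a camera rigidly attached to the vehicle.
        return b"<image bytes>"


def vehicle_in_the_loop(mocap, renderer, rate_hz=60.0, max_frames=600):
    """Dynamics happen in vivo (real flight); sensing happens in silico."""
    period = 1.0 / rate_hz
    for _ in range(max_frames):
        position, orientation = mocap.get_pose()                # real dynamics
        frame = renderer.render_camera(position, orientation)   # synthetic sensing
        # ...hand `frame` to the perception/autonomy stack under test...
        time.sleep(period)


vehicle_in_the_loop(MotionCaptureClient(), Renderer(), max_frames=10)
```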

    Visual Tracking Based on Human Feature Extraction from Surveillance Video for Human Recognition

    A multimodal human identification system based on face and body recognition can provide effective biometric authentication. Facial recognition features are extracted using several techniques, including eigenfaces and Principal Component Analysis (PCA). The face- and body-based authentication systems are implemented using artificial neural networks (ANNs) and genetic optimization techniques as classifiers. Through feature fusion and score fusion, the face and body biometric systems are merged into a single multimodal biometric system. The Kinect sensor SDK allows human bodies to be identified with high accuracy and efficiency. Biometrics aims to mimic the human pattern-recognition process to identify people, and it is a more dependable and secure option than traditional authentication methods based on secrets and tokens. Biometric technologies use human physiological and behavioral traits to identify people automatically. These traits must fulfill several criteria, particularly universality, efficacy, and applicability.
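
    As a rough illustration of the pipeline described above, the sketch below extracts eigenface-style features with PCA and fuses face and body match scores with a weighted sum, assuming scikit-learn and NumPy. The data, fusion weights, and body matcher are stand-in assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
gallery_faces = rng.random((100, 64 * 64))   # 100 flattened 64x64 face images
probe_face = rng.random(64 * 64)             # flattened probe image

# Eigenfaces: PCA on the face images; the principal components are the eigenfaces.
pca = PCA(n_components=20).fit(gallery_faces)
gallery_feats = pca.transform(gallery_faces)
probe_feat = pca.transform(probe_face.reshape(1, -1))

# Face match scores: negative Euclidean distance in eigenface space.
face_scores = -np.linalg.norm(gallery_feats - probe_feat, axis=1)

# Body match scores would come from a separate matcher (e.g. Kinect skeleton
# features); random values stand in here.
body_scores = rng.random(100)


def min_max(s):
    """Normalize scores to [0, 1] so the two matchers are comparable."""
    return (s - s.min()) / (s.max() - s.min())


# Score-level fusion: normalize each matcher's scores, then take a weighted sum.
fused = 0.6 * min_max(face_scores) + 0.4 * min_max(body_scores)
print("best match: gallery identity", int(np.argmax(fused)))
```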

    Simultaneous measurements of three-dimensional trajectories and wingbeat frequencies of birds in the field

    This is the author accepted manuscript. The final version is available from the Royal Society via the DOI in this record.

    Data accessibility: We provide data including images recorded by four cameras, camera parameters, videos showing the time variation of the birds' 3D positions, and plain-text files that include bird ID number, positions, times, velocities, accelerations, and wingbeat frequencies at every time step. We also provide the Matlab code that was used to: (i) detect birds in images; (ii) reconstruct birds' 3D locations using the new stereo-matching algorithm; (iii) track individuals' 3D motions; and (iv) calculate wing motion and wingbeat frequency from the tracking results. The code and data are available at https://github.com/linghj/3DTracking.git and https://figshare.com/s/3c572f91b07b06ed30aa.

    Tracking the movements of birds in three dimensions is integral to a wide range of problems in animal ecology, behaviour and cognition. Multi-camera stereo-imaging has been used to track the three-dimensional (3D) motion of birds in dense flocks, but precise localization of birds remains a challenge due to limited imaging resolution in the depth direction and optical occlusion. This paper introduces a portable stereo-imaging system with improved accuracy and a simple stereo-matching algorithm that can resolve optical occlusion. This system allows us to decouple body and wing motion, and thus measure not only velocities and accelerations but also wingbeat frequencies along the 3D trajectories of birds. We demonstrate these new methods by analysing six flocking events consisting of 50 to 360 jackdaws (Corvus monedula) and rooks (Corvus frugilegus), as well as 32 jackdaws and 6 rooks flying in isolated pairs or alone. Our method allows us to (i) measure flight speed and wingbeat frequency in different flying modes; (ii) characterize the U-shaped flight performance curve of birds in the wild, showing that wingbeat frequency reaches its minimum at moderate flight speeds; (iii) examine group effects on individual flight performance, showing that birds have a higher wingbeat frequency when flying in a group than when flying alone, and when flying in dense regions than when flying in sparse regions; and (iv) provide a potential avenue for automated discrimination of bird species. We argue that the experimental method developed in this paper opens new opportunities for understanding flight kinematics and collective behaviour in natural environments.

    Funding: Human Frontier Science Program.
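
    To make the wingbeat-frequency measurement concrete, here is a small sketch of one plausible approach: remove the smooth trajectory component from a tracked position signal, then take the dominant frequency of the oscillatory residual. The frame rate, synthetic signal, and ~4 Hz wingbeat below are illustrative assumptions, not the paper's data or exact method.

```python
import numpy as np
from scipy.signal import detrend, periodogram

rng = np.random.default_rng(0)
fs = 120.0                       # camera frame rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)

# Synthetic body-height signal: slow trajectory drift plus a ~4 Hz wingbeat
# ripple and a little tracking noise.
z = 0.5 * t + 0.01 * np.sin(2 * np.pi * 4.0 * t) + 0.002 * rng.standard_normal(t.size)

# Remove the smooth trajectory component so only the oscillation remains.
residual = detrend(z, type="linear")

# The dominant frequency of the residual is the wingbeat-frequency estimate.
freqs, power = periodogram(residual, fs=fs)
wingbeat_hz = freqs[np.argmax(power[1:]) + 1]   # skip the DC bin
print(f"estimated wingbeat frequency: {wingbeat_hz:.2f} Hz")
```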

    Vision for Looking at Traffic Lights: Issues, Survey, and Perspectives


    OpenPTrack: Open Source Multi-Camera Calibration and People Tracking for RGB-D Camera Networks

    OpenPTrack is open-source software for multi-camera calibration and people tracking in RGB-D camera networks. It can track people in large volumes at sensor frame rate and currently supports a heterogeneous set of 3D sensors. In this work, we describe its user-friendly calibration procedure, which consists of simple steps with real-time feedback and yields accurate estimates of the camera poses that are then used for tracking people. On top of a calibration based on moving a checkerboard within the tracking space and a global optimization of camera and checkerboard poses, a novel procedure that aligns people detections coming from all sensors in an x-y-time space is used to refine camera poses. While people detection is executed locally, on the machines connected to each sensor, tracking is performed by a single node that takes into account detections from the whole network. We detail how a cascade of algorithms working on depth point clouds and on color, infrared and disparity images is used to perform people detection with different types of sensors and in any indoor lighting condition. We present experiments showing that a considerable improvement can be obtained with the proposed calibration refinement procedure, which exploits people detections, and we compare Kinect v1, Kinect v2 and Mesa SR4500 performance for people tracking applications. OpenPTrack is based on the Robot Operating System and the Point Cloud Library and has already been adopted in networks of up to ten imagers for interactive arts, education, culture and human–robot interaction applications.
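
    The calibration refinement above rests on aligning people detections from different sensors in x-y-time space. The sketch below illustrates the underlying principle, not OpenPTrack's actual implementation: given time-synchronised ground-plane detections of the same person from two cameras, it estimates the residual 2D rigid transform (rotation plus translation) between the camera frames with a Kabsch/Procrustes fit, assuming only NumPy.

```python
import numpy as np


def fit_rigid_2d(src, dst):
    """Least-squares R, t such that dst ~= src @ R.T + t (Kabsch algorithm)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - src.mean(0) @ R.T
    return R, t


# Matched x-y detections of one walking person, paired by timestamp.
rng = np.random.default_rng(1)
track_cam_a = rng.random((50, 2)) * 5.0

# Simulate a small residual misalignment between the two camera frames.
angle = np.deg2rad(3.0)
R_true = np.array([[np.cos(angle), -np.sin(angle)],
                   [np.sin(angle),  np.cos(angle)]])
track_cam_b = track_cam_a @ R_true.T + np.array([0.10, -0.05])

R, t = fit_rigid_2d(track_cam_a, track_cam_b)
print("recovered rotation (deg):", np.degrees(np.arctan2(R[1, 0], R[0, 0])))
print("recovered translation:", t)
```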