
    Multi-camera Realtime 3D Tracking of Multiple Flying Animals

    Automated tracking of animal movement allows analyses that would not otherwise be possible by providing great quantities of data. The additional capability of tracking in realtime - with minimal latency - opens up the experimental possibility of manipulating sensory feedback, thus allowing detailed explorations of the neural basis for control of behavior. Here we describe a new system capable of tracking the position and body orientation of animals such as flies and birds. The system operates with less than 40 msec latency and can track multiple animals simultaneously. To achieve these results, a multi-target tracking algorithm was developed based on the Extended Kalman Filter and the Nearest Neighbor Standard Filter data association algorithm. In one implementation, an eleven-camera system is capable of tracking three flies simultaneously at 60 frames per second using a gigabit network of nine standard Intel Pentium 4 and Core 2 Duo computers. This manuscript presents the rationale and details of the algorithms employed and shows three implementations of the system. An experiment was performed using the tracking system to measure the effect of visual contrast on the flight speed of Drosophila melanogaster. At low contrasts, speed is more variable and faster on average than at high contrasts. Thus, the system is already a useful tool to study the neurobiology and behavior of freely flying animals. If combined with other techniques, such as 'virtual reality'-type computer graphics or genetic manipulation, the tracking system would offer a powerful new way to investigate the biology of flying animals.
    Comment: 18 pages with 9 figures
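
    To make the tracking scheme concrete, the sketch below pairs a per-target Kalman filter with nearest-neighbour data association, the two components named in the abstract. It is a minimal illustration rather than the paper's code: a linear constant-velocity model stands in for the Extended Kalman Filter, and the class names, noise covariances, and gating threshold are assumptions.

```python
import numpy as np

# Per-target Kalman filter with nearest-neighbour data association.
# State is [x, y, z, vx, vy, vz]; the cameras contribute one fused 3-D
# position observation per frame. All tuning values are illustrative.
DT = 1.0 / 60.0                                  # 60 fps frame interval
F = np.eye(6)
F[:3, 3:] = DT * np.eye(3)                       # constant-velocity dynamics
H = np.hstack([np.eye(3), np.zeros((3, 3))])     # observe position only
Q = 1e-3 * np.eye(6)                             # process noise (assumed)
R = 1e-4 * np.eye(3)                             # measurement noise (assumed)

class Target:
    def __init__(self, pos):
        self.x = np.hstack([pos, np.zeros(3)])   # initial state
        self.P = np.eye(6)                       # initial covariance

    def predict(self):
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q
        return H @ self.x                        # predicted observation

    def update(self, z):
        S = H @ self.P @ H.T + R                 # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(6) - K @ H) @ self.P

def associate_and_update(targets, detections, gate=0.05):
    """Nearest Neighbor Standard Filter step: each target claims the
    closest unused detection inside its gate (gate in metres, assumed)."""
    free = list(detections)
    for t in targets:
        z_pred = t.predict()
        if not free:
            continue
        dists = [np.linalg.norm(z - z_pred) for z in free]
        i = int(np.argmin(dists))
        if dists[i] < gate:
            t.update(free.pop(i))
```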

    TossingBot: Learning to Throw Arbitrary Objects with Residual Physics

    We investigate whether a robot arm can learn to pick and throw arbitrary objects into selected boxes quickly and accurately. Throwing has the potential to increase the physical reachability and picking speed of a robot arm. However, precisely throwing arbitrary objects in unstructured settings presents many challenges: from acquiring reliable pre-throw conditions (e.g. initial pose of object in manipulator) to handling varying object-centric properties (e.g. mass distribution, friction, shape) and dynamics (e.g. aerodynamics). In this work, we propose an end-to-end formulation that jointly learns to infer control parameters for grasping and throwing motion primitives from visual observations (images of arbitrary objects in a bin) through trial and error. Within this formulation, we investigate the synergies between grasping and throwing (i.e., learning grasps that enable more accurate throws) and between simulation and deep learning (i.e., using deep networks to predict residuals on top of control parameters predicted by a physics simulator). The resulting system, TossingBot, is able to grasp and throw arbitrary objects into boxes located outside its maximum reach range at 500+ mean picks per hour (600+ grasps per hour with 85% throwing accuracy), and generalizes to new objects and target locations. Videos are available at https://tossingbot.cs.princeton.edu
    Comment: Summary video: https://youtu.be/f5Zn2Up2RjQ; project webpage: https://tossingbot.cs.princeton.edu
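
    The 'residual physics' formulation above can be sketched in a few lines: an analytic projectile model supplies a nominal release speed, and a learned network adds a correction on top. The sketch below assumes a drag-free point mass thrown at a fixed release angle; residual_net is a hypothetical stand-in for the paper's deep network, and all names and parameters are illustrative.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def ballistic_release_speed(d, h, angle=np.radians(45.0)):
    """Release speed for a point mass launched at `angle` so it lands a
    horizontal distance d away with net height change h (drag ignored).
    From the projectile equation h = d*tan(a) - g*d^2 / (2*v^2*cos^2(a))."""
    c = np.cos(angle)
    denom = 2.0 * c * c * (d * np.tan(angle) - h)  # must be positive
    return d * np.sqrt(G / denom)

def throw_velocity(d, h, visual_features, residual_net):
    """Residual physics: commanded speed = physics estimate + learned
    correction conditioned on the visual appearance of the object."""
    v_physics = ballistic_release_speed(d, h)
    delta_v = residual_net(visual_features, v_physics)  # learned residual
    return v_physics + delta_v

# Physics-only throw (zero residual), 1 m away and 0.2 m below release:
v = throw_velocity(1.0, -0.2, None, lambda feats, v: 0.0)
```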

    Dynamic Grasping of Unknown Objects with a Multi-Fingered Hand

    An important prerequisite for autonomous robots is their ability to reliably grasp a wide variety of objects. Most state-of-the-art systems employ specialized or simple end-effectors, such as two-jaw grippers, which severely limit the range of objects they can manipulate. Additionally, they conventionally require a structured and fully predictable environment, while the vast majority of our world is complex, unstructured, and dynamic. This paper presents an implementation that overcomes both issues. Firstly, the integration of a five-finger hand enhances the variety of possible grasps and manipulable objects. This kinematically complex end-effector is controlled by a deep-learning-based generative grasping network. The required virtual model of the unknown target object is iteratively completed by processing visual sensor data. Secondly, this visual feedback is employed to realize closed-loop servo control which compensates for external disturbances. Our experiments on real hardware confirm the system's capability to reliably grasp unknown dynamic target objects without a priori knowledge of their trajectories. To the best of our knowledge, this is the first method to achieve dynamic multi-fingered grasping for unknown objects. A video of the experiments is available at https://youtu.be/Ut28yM1gnvI.
    Comment: ICRA202
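
    At its core, the closed-loop servoing described above amounts to re-sensing the target every control cycle instead of planning once, so a moving object is tracked rather than assumed static. The following self-contained sketch collapses the whole perception pipeline into a time-varying 3-D target position; the gains, rates, and names are assumptions, not the paper's implementation.

```python
import numpy as np

def servo_to_moving_target(hand, target_of, dt=0.01, gain=20.0, tol=0.005):
    """Proportional visual servo: the hand position chases target_of(t),
    re-evaluated every cycle, until it is within tol metres."""
    t = 0.0
    while np.linalg.norm(target_of(t) - hand) > tol:
        error = target_of(t) - hand        # re-sensed each cycle, so a
        hand = hand + gain * error * dt    # drifting target is tracked
        t += dt
    return hand, t

# Example: object drifting along x at 5 cm/s, hand starting ~0.42 m away.
moving_object = lambda t: np.array([0.05 * t, 0.3, 0.2])
final_pose, t_contact = servo_to_moving_target(np.array([0.3, 0.0, 0.2]),
                                               moving_object)
```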

    The Typical Flight Performance of Blowflies: Measuring the Normal Performance Envelope of Calliphora vicina Using a Novel Corner-Cube Arena

    Despite a wealth of evidence demonstrating extraordinary maximal performance, little is known about the routine flight performance of insects. We present a set of techniques for benchmarking performance characteristics of insects in free flight, demonstrated using a model species, and comment on the significance of the performance observed. Free-flying blowflies (Calliphora vicina) were filmed inside a novel mirrored arena comprising a large (1.6 m × 1.6 m × 1.6 m) corner-cube reflector using a single high-speed digital video camera (250 or 500 fps). This arrangement permitted accurate reconstruction of the flies' 3-dimensional trajectories without the need for synchronisation hardware, by virtue of the multiple reflections of a subject within the arena. Image sequences were analysed using custom-written automated tracking software, and processed using a self-calibrating bundle adjustment procedure to determine the subject's instantaneous 3-dimensional position. We illustrate our method by using these trajectory data to benchmark the routine flight performance envelope of our flies. Flight speeds were most commonly observed between 1.2 m s⁻¹ and 2.3 m s⁻¹, with a maximum of 2.5 m s⁻¹. Our flies tended to dive faster than they climbed, with a maximum descent rate (−2.4 m s⁻¹) almost double the maximum climb rate (1.2 m s⁻¹). Modal turn rate was around 240° s⁻¹, with maximal rates in excess of 1700° s⁻¹. We used the maximal flight performance we observed during normal flight to construct notional physical limits on the blowfly flight envelope, and used the distribution of observations within that notional envelope to postulate behavioural preferences or physiological and anatomical constraints. The flight trajectories we recorded were never steady: rather, they were constantly accelerating or decelerating, with maximum tangential accelerations and maximum centripetal accelerations on the order of 3 g.
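
    The reported metrics (flight speed, climb and descent rate, turn rate, tangential and centripetal acceleration) all derive from the reconstructed 3-D trajectories. A finite-difference sketch of those derivations follows, assuming an (N, 3) array of positions; a real analysis would smooth the raw data first, and the function name is illustrative.

```python
import numpy as np

def flight_kinematics(pos, fps=250.0):
    """Per-frame flight metrics from an (N, 3) array of 3-D positions
    sampled at `fps` (e.g. 250 or 500 fps, as in the arena recordings)."""
    dt = 1.0 / fps
    v = np.gradient(pos, dt, axis=0)                 # velocity, m/s
    a = np.gradient(v, dt, axis=0)                   # acceleration, m/s^2
    speed = np.linalg.norm(v, axis=1)                # flight speed
    climb = v[:, 2]                                  # vertical rate (+ = up)
    t_hat = v / np.maximum(speed[:, None], 1e-9)     # unit tangent vector
    a_tan = np.einsum('ij,ij->i', a, t_hat)          # tangential component
    a_cent = np.linalg.norm(a - a_tan[:, None] * t_hat, axis=1)  # centripetal
    heading = np.unwrap(np.arctan2(v[:, 1], v[:, 0]))
    turn_rate = np.degrees(np.gradient(heading, dt)) # horizontal, deg/s
    return speed, climb, turn_rate, a_tan, a_cent
```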

    Study to design and develop remote manipulator systems

    A description is given of part of a continuing effort both to develop models of and to augment the performance of humans controlling remote manipulators. The project plan calls for the performance of several standard tasks with a number of different manipulators, controls, and viewing conditions, using an automated performance-measuring system; in addition, it calls for the development of a force-reflecting joystick and a supervisory display system.

    Virtual Reality system for freely-moving rodents

    Spatial navigation, active sensing, and most cognitive functions rely on a tight link between motor output and sensory input. Virtual reality (VR) systems simulate the sensorimotor loop, allowing flexible manipulation of enriched sensory input. Conventional rodent VR systems provide 3D visual cues linked to restrained locomotion on a treadmill, leading to a mismatch between visual and most other sensory inputs, to sensorimotor conflicts, and to restricted naturalistic behavior. To rectify these limitations, we developed a VR system (ratCAVE) that provides realistic and low-latency visual feedback directly to the head movements of completely unrestrained rodents. Immersed in this VR system, rats displayed naturalistic behavior by spontaneously interacting with and hugging virtual walls, exploring virtual objects, and avoiding virtual cliffs. We further illustrate the effect of ratCAVE-VR manipulation on hippocampal place fields. The newly developed methodology enables a wide range of experiments involving flexible manipulation of visual feedback in freely moving, behaving animals.
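
    The heart of such a head-coupled display is recomputing the virtual camera from the newest tracked head pose on every frame, which is what keeps the projected scene geometrically correct (and the latency low) for a freely moving animal. The sketch below shows the standard look-at construction for that step; the names and conventions are illustrative, not the published ratCAVE API.

```python
import numpy as np

def view_matrix(head_pos, forward, up=np.array([0.0, 0.0, 1.0])):
    """4x4 world-to-camera matrix for a virtual camera placed at the
    animal's head position, facing along `forward` (normalised here).
    Recomputed from the tracker every frame to minimise latency."""
    f = forward / np.linalg.norm(forward)
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)                  # camera right axis
    u = np.cross(s, f)                         # camera true up
    M = np.eye(4)
    M[0, :3], M[1, :3], M[2, :3] = s, u, -f    # rotate world into camera
    M[:3, 3] = -M[:3, :3] @ head_pos           # then translate
    return M

# Per-frame use (tracker/renderer are hypothetical placeholders):
#   M = view_matrix(tracker.head_position(), tracker.head_forward())
#   renderer.set_view(M); renderer.draw_scene()
```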