3,721 research outputs found

    Time-Frequency Analysis Reveals Pairwise Interactions in Insect Swarms

    The macroscopic emergent behavior of social animal groups is a classic example of dynamical self-organization, and is thought to arise from the local interactions between individuals. Determining these interactions from empirical data sets of real animal groups, however, is challenging. Using multicamera imaging and tracking, we studied the motion of individual flying midges in laboratory mating swarms. By performing a time-frequency analysis of the midge trajectories, we show that midge behavior can be segmented into two distinct modes: one that is independent and composed of low-frequency maneuvers, and one that consists of higher-frequency, nearly harmonic oscillations conducted in synchrony with another midge. We characterize these pairwise interactions and propose a hypothesis as to their biological function.
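
    The segmentation idea can be illustrated with a toy time-frequency analysis: a short-time spectrum of a synthetic trajectory, with the dominant frequency per window used to separate slow maneuvering from sustained oscillation. This is only a sketch of the general technique, not the paper's method; the sampling rate, the 12 Hz oscillation, and the 5 Hz threshold are invented for illustration.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 100.0                       # sampling rate (frames/s), assumed
t = np.arange(0, 20, 1 / fs)     # 20 s synthetic 1-D trajectory
rng = np.random.default_rng(0)

# Low-frequency "independent" maneuvering throughout (a slow random walk),
# plus a hypothetical 12 Hz near-harmonic oscillation in the second half.
slow = np.cumsum(rng.normal(0.0, 0.02, t.size))
fast = 0.5 * np.sin(2 * np.pi * 12.0 * t)
x = slow + np.where(t >= 10.0, fast, 0.0)

# Short-time spectral analysis of the trajectory.
f, tb, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192)

# Dominant frequency per time bin; a simple threshold then separates the
# low-frequency mode from the oscillatory, pair-synchronized one.
dom = f[np.argmax(Sxx, axis=0)]
oscillating = dom > 5.0          # illustrative threshold in Hz
```

    In the second half of the synthetic trajectory the dominant frequency jumps to the oscillation frequency, so the two behavioral modes fall out of a per-window classification.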

    Vide-omics: a genomics-inspired paradigm for video analysis

    With the development of applications associated with ego-vision systems, smartphones, and autonomous cars, the automated analysis of videos generated by freely moving cameras has become a major challenge for the computer vision community. Current techniques are still not suitable for real-life situations due, in particular, to wide scene variability and the large range of camera motions. Whereas most approaches attempt to control those parameters, this paper introduces a novel video analysis paradigm, 'vide-omics', inspired by the principles of genomics, where variability is the expected norm. Validation of this new concept is performed by designing an implementation addressing foreground extraction from videos captured by freely moving cameras. Evaluation on a set of standard videos demonstrates both robust performance that is largely independent of camera motion and scene, and state-of-the-art results on the most challenging videos. Those experiments underline not only the validity of the 'vide-omics' paradigm, but also its potential.

    Keeping track of worm trackers

    C. elegans is used extensively as a model system in the neurosciences due to its well-defined nervous system. However, the seeming simplicity of this nervous system in anatomical structure and neuronal connectivity, at least compared to higher animals, underlies a rich diversity of behaviors. The usefulness of the worm in genome-wide mutagenesis or RNAi screens, where thousands of strains are assessed for a phenotype, emphasizes the need for computational methods for the automated parameterization of the generated behaviors. In addition, behaviors can be modulated by external cues such as temperature, O2 and CO2 concentrations, and mechanosensory and chemosensory inputs. Different machine vision tools have been developed to aid researchers in their efforts to inventory and characterize defined behavioral “outputs”. Here we aim to provide an overview of different worm-tracking packages and video analysis tools designed to quantify different aspects of locomotion, such as the occurrence of directional changes (turns, omega bends), the curvature of the sinusoidal shape (amplitude, body bend angles), and velocity (speed, backward or forward movement).
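
    The locomotion parameters listed above can be computed directly from tracker output. A minimal sketch, assuming the tracker yields midline (skeleton) points per frame and centroid positions over time; the point count, frame rate, and synthetic posture are illustrative and not tied to any particular tracking package.

```python
import numpy as np

fs = 30.0                                   # frames per second, assumed
n_pts = 11                                  # midline points per frame
s = np.linspace(0, 1, n_pts)                # position along the body (0..1)

def body_bend_angles(midline):
    """Bend angle (radians) at each interior midline point."""
    seg = np.diff(midline, axis=0)          # (n_pts-1, 2) segment vectors
    ang = np.arctan2(seg[:, 1], seg[:, 0])  # heading of each segment
    return np.diff(ang)                     # angle change between segments

# One synthetic frame: a sinusoidal posture, amplitude 0.1 body lengths.
midline = np.stack([s, 0.1 * np.sin(2 * np.pi * 1.5 * s)], axis=1)
bends = body_bend_angles(midline)

# Centroid speed from two hypothetical consecutive centroid positions,
# in body lengths per second.
c0, c1 = np.array([0.0, 0.0]), np.array([0.02, 0.01])
speed = np.linalg.norm(c1 - c0) * fs
```

    Turn and omega-bend detection in real trackers builds on the same quantities: sustained large bend angles or reversals in the centroid velocity direction.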

    Robust Background Subtraction for Moving Cameras and Their Applications in Ego-Vision Systems

    Background subtraction is the algorithmic process that segments out the region of interest, often known as the foreground, from the background. Extensive literature and numerous algorithms exist in this domain, but most research has focused on videos captured by static cameras. The proliferation of portable platforms equipped with cameras has resulted in a large amount of video data being generated from moving cameras. This motivates the need for foundational algorithms for foreground/background segmentation in videos from moving cameras. In this dissertation, I propose three new types of background subtraction algorithms for moving cameras, based on appearance, motion, and a combination of the two. Comprehensive evaluation of the proposed approaches on publicly available test sequences shows the superiority of our system over state-of-the-art algorithms. The first method is an appearance-based global modeling of foreground and background. Features are extracted by sliding a fixed-size window over the entire image without any spatial constraint, to accommodate arbitrary camera movements. A supervised learning method is then used to build foreground and background models. This method is suitable for limited-scene scenarios such as Pan-Tilt-Zoom surveillance cameras. The second method relies on motion. It comprises an innovative background motion approximation mechanism followed by spatial regulation through a mega-pixel denoising process. This work does not need to maintain any costly appearance models and is therefore appropriate for resource-constrained ego-vision systems. The proposed segmentation, combined with skin cues, is validated by a novel application that authenticates hand-gestured signatures captured by wearable cameras. The third method combines both motion and appearance: foreground probabilities are jointly estimated from motion and appearance. After the mega-pixel denoising process, the probability estimates and gradient image are combined by Graph-Cut to produce the segmentation mask. This method is universal, as it can handle all types of moving cameras.
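
    The joint motion/appearance estimate of the third method can be sketched per pixel. A minimal NumPy toy, with synthetic probability maps and a plain threshold standing in for the dissertation's Graph-Cut refinement; the image size, object region, and fusion rule are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
h, w = 60, 80

# Synthetic per-pixel foreground probabilities from the two cues:
# low values everywhere (background), high values on a moving object.
p_motion = rng.uniform(0.0, 0.2, (h, w))
p_appearance = rng.uniform(0.0, 0.2, (h, w))
p_motion[20:40, 30:50] = rng.uniform(0.7, 1.0, (20, 20))      # moving object
p_appearance[20:40, 30:50] = rng.uniform(0.6, 1.0, (20, 20))  # object-like

# Joint estimate, treating the two cues as independent evidence:
# a pixel is background only if both cues call it background.
p_fg = 1.0 - (1.0 - p_motion) * (1.0 - p_appearance)

# The dissertation refines this map with a Graph-Cut over the gradient
# image; a plain threshold stands in for that step here.
mask = p_fg > 0.5
```

    Fusing the cues this way lets either strong motion or strong appearance evidence mark a pixel as foreground, while the spatial regularization (Graph-Cut in the original work) cleans up isolated disagreements.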

    Ping-Pong Robotics with High-Speed Vision System
