
    Adaptive Feature Selection for Object Tracking with Particle Filter

    Object tracking is an important topic in the field of computer vision. Commonly used color-based trackers rely on a fixed set of color features such as RGB or HSV and, as a result, fail to adapt to changing illumination conditions and background clutter. These drawbacks can be overcome to an extent by using an adaptive framework which selects, for each frame of a sequence, the features that best discriminate the object from the background. In this paper, we use such an adaptive feature selection method embedded in a particle filter mechanism and show that our tracking method is robust to lighting changes and background distractions. Several experiments also show that the proposed method outperforms other approaches.
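
    The selection criterion is not spelled out above, so as a hedged illustration: a widely used recipe for per-frame color feature selection, in the spirit of Collins and Liu's variance-ratio method, scores each candidate feature by how well its object and background histograms separate. The function names, bin count and value range below are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def variance_ratio(p, q, eps=1e-6):
        """Score a feature by the variance ratio of the log-likelihood
        of its object (p) and background (q) histograms."""
        L = np.log((p + eps) / (q + eps))
        var = lambda w: np.sum(w * L**2) - np.sum(w * L)**2
        # Large when object and background map to tight, well-separated bins.
        return var(0.5 * (p + q)) / (var(p) + var(q) + eps)

    def select_feature(obj_pixels, bg_pixels, features):
        """Pick the candidate feature (a callable on raw pixels) that best
        discriminates the object from the background in this frame."""
        scores = []
        for f in features:
            hp, _ = np.histogram(f(obj_pixels), bins=32, range=(0, 255))
            hq, _ = np.histogram(f(bg_pixels), bins=32, range=(0, 255))
            scores.append(variance_ratio(hp / max(hp.sum(), 1),
                                         hq / max(hq.sum(), 1)))
        return int(np.argmax(scores))
    ```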

    Robust multi-clue face tracking system

    In this paper we present a multi-clue face tracking system, based on the combination of a face detector and two independent trackers. The detector, a variant of the Viola-Jones algorithm, is tuned to generate a very low false-positive rate. It initiates the tracking system and updates its state. The trackers, based on 3DRS and optical flow respectively, have been chosen to complement each other in different conditions. The main focus of this work is the integration of the two trackers and the design of a closed-loop detector-tracker system, aiming at achieving superior robustness at real-time operation on a PC platform. Tests were carried out to assess the actual performance of the system. With an average of about 95% correct face location rate and no significant false positives, the proposed approach appears to be particularly robust to complex backgrounds, ambient light variation, face orientation and scale changes, partial occlusions, different facial expressions and the presence of other unwanted faces.
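
    As a rough sketch of such a closed detector-tracker loop: OpenCV's Haar-cascade detector (a Viola-Jones variant) re-initializes the tracker state whenever it fires, and Lucas-Kanade optical flow carries the face between detections. The 3DRS tracker and the paper's full arbitration logic are not reproduced here, and all parameters are assumptions.

    ```python
    import cv2
    import numpy as np

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def track_faces(video_path):
        cap = cv2.VideoCapture(video_path)
        prev_gray, points = None, None
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Conservative settings: the detector should rarely fire falsely.
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=6)
            if len(faces):
                x, y, w, h = faces[0]  # detection (re)initializes the tracker state
                points = cv2.goodFeaturesToTrack(gray[y:y+h, x:x+w], 50, 0.01, 5)
                if points is not None:
                    points += np.float32([[x, y]])
            elif points is not None and len(points) and prev_gray is not None:
                # Detector missed this frame: optical flow carries the face forward.
                points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                             points, None)
                points = points[status.ravel() == 1]
            prev_gray = gray
        cap.release()
    ```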

    Studies of Single-Molecule Dynamics in Microorganisms

    Fluorescence microscopy is one of the most extensively used techniques in the life sciences. Thanks to non-invasive sample preparation, which enables live-cell imaging, and specific fluorescence labeling, which allows the visualization of virtually any cellular component, it is possible to localize even a single molecule in living cells. This makes modern fluorescence microscopy a powerful toolbox. In recent decades, the development of new "super-resolution" fluorescence microscopy techniques, which surpass the diffraction limit, has revolutionized the field. Single-Molecule Localization Microscopy (SMLM) is a class of super-resolution microscopy methods that enables resolutions down to tens of nanometers. SMLM methods like Photoactivated Localization Microscopy (PALM), (direct) Stochastic Optical Reconstruction Microscopy ((d)STORM), Ground-State Depletion followed by Individual Molecule Return (GSDIM) and Point Accumulation for Imaging in Nanoscale Topography (PAINT) have made it possible both to investigate the intracellular spatial organization of proteins and to observe their real-time dynamics at the single-molecule level in live cells. The focus of this thesis was the development of novel tools and strategies for live-cell Single-Particle Tracking PALM (sptPALM) imaging and their implementation in biological research. In the first part of this thesis, I describe the development of new Photoconvertible Fluorescent Proteins (pcFPs) which are optimized for sptPALM and lower the phototoxic damage caused by the imaging procedure. Furthermore, we show that they can be used together with Photoactivatable Fluorescent Proteins (paFPs) to enable multi-target labeling and read-out in a single color channel, which significantly simplifies sample preparation and imaging routines as well as the data analysis of multi-color PALM imaging of live cells. In parallel to developing new fluorescent proteins, I developed a high-throughput data analysis pipeline. I applied this pipeline in my second project, described in the second part of this thesis, where I investigated the protein organization and dynamics of the CRISPR-Cas antiviral defense mechanism of bacteria in vivo at high spatiotemporal resolution with the sptPALM approach. I was able to show differences in the target-search dynamics of the CRISPR effector complexes, as well as of single Cas proteins, for different target complementarities. I also present the first in vivo data describing longer-lasting binding times between effector complexes and their potential targets, for which only in vitro data had been available to date. In summary, this thesis is a significant contribution both to the advancement of current sptPALM imaging methods and to the understanding of the native behavior of CRISPR-Cas systems in vivo.
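
    The analysis pipeline itself is only summarized above; a step common to essentially every sptPALM study of target-search dynamics is estimating apparent diffusion coefficients from the mean squared displacement (MSD) of each track. A minimal sketch, assuming 2-D tracks and fitting the free-diffusion model MSD(t) = 4Dt over the first few lags:

    ```python
    import numpy as np

    def msd(track, dt):
        """Time-averaged MSD of one 2-D track; track is (T, 2) positions."""
        lags = np.arange(1, len(track))
        out = np.array([np.mean(np.sum((track[lag:] - track[:-lag])**2, axis=1))
                        for lag in lags])
        return lags * dt, out

    def diffusion_coefficient(track, dt, n_fit=4):
        """Apparent D from a linear fit of MSD(t) = 4*D*t (2-D free diffusion)."""
        t, m = msd(track, dt)
        slope = np.polyfit(t[:n_fit], m[:n_fit], 1)[0]
        return slope / 4.0
    ```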

    Complementarity of PALM and SOFI for super-resolution live cell imaging of focal adhesions

    Live cell imaging of focal adhesions requires a sufficiently high temporal resolution, which remains a challenging task for super-resolution microscopy. We have addressed this important issue by combining photo-activated localization microscopy (PALM) with super-resolution optical fluctuation imaging (SOFI). Using simulations and fixed-cell focal adhesion images, we investigated the complementarity between PALM and SOFI in terms of spatial and temporal resolution. This PALM-SOFI framework was used to image focal adhesions in living cells, while obtaining a temporal resolution below 10 s. We visualized the dynamics of focal adhesions and revealed local mean velocities of around 190 nm per minute. The complementarity of PALM and SOFI was assessed in detail with a methodology that integrates a quantitative resolution and signal-to-noise metric. This PALM-SOFI concept provides an enlarged quantitative imaging framework, allowing unprecedented functional exploration of focal adhesions through the estimation of molecular parameters such as the fluorophore density and the photo-activation and photo-switching rates.
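
    To make the SOFI half of the framework concrete: second-order SOFI replaces each pixel of the average image with a temporal cumulant of its intensity fluctuations, which scales with the square of the PSF and therefore sharpens the image. A minimal per-pixel sketch (production PALM-SOFI pipelines also use cross-cumulants between neighboring pixels, not shown):

    ```python
    import numpy as np

    def sofi2(stack, lag=1):
        """Second-order SOFI image from a (T, H, W) image stack.

        A time lag of 1 frame (instead of 0) suppresses shot noise, which is
        temporally uncorrelated, while blinking correlations survive.
        """
        delta = stack - stack.mean(axis=0)   # fluctuations about the temporal mean
        if lag == 0:
            return (delta**2).mean(axis=0)   # per-pixel variance
        return (delta[:-lag] * delta[lag:]).mean(axis=0)
    ```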

    Unfalsified visual servoing for simultaneous object recognition and pose tracking

    In a complex environment, simultaneous object recognition and tracking has been one of the challenging topics in computer vision and robotics. Current approaches are usually fragile due to spurious feature matching and local convergence in pose determination. Once a failure happens, these approaches lack a mechanism to recover automatically. In this paper, data-driven unfalsified control is proposed to solve this problem in visual servoing. It recognizes a target by matching image features with a 3-D model and then tracks them through dynamic visual servoing. The features can be falsified or unfalsified by a supervisory mechanism according to their tracking performance. Supervisory visual servoing is repeated until a consensus between the model and the selected features is reached, so that model recognition and object tracking are accomplished. Experiments show the effectiveness and robustness of the proposed algorithm in dealing with matching and tracking failures caused by various disturbances, such as fast motion, occlusions, and illumination variation.
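
    The supervisory loop can be caricatured as follows, where `servo_step` and `measure_error` stand in for the paper's visual-servoing update and tracking-performance test; the error bound and iteration budget are assumed tuning parameters, not values from the paper.

    ```python
    def supervise(hypotheses, servo_step, measure_error, err_bound, max_steps=200):
        """Unfalsified-control style supervision (schematic sketch).

        hypotheses    : candidate feature-to-model correspondences
        servo_step    : run one visual-servoing update for a hypothesis
        measure_error : current tracking residual for a hypothesis
        A hypothesis is only ever falsified, never 'verified': it survives
        as long as its measured performance respects the bound.
        """
        unfalsified = list(hypotheses)
        while unfalsified:
            h = unfalsified[0]
            for _ in range(max_steps):
                servo_step(h)
                if measure_error(h) > err_bound:   # performance bound violated
                    unfalsified.remove(h)          # falsify, switch hypothesis
                    break
            else:
                return h       # consensus: recognition and tracking succeed
        return None            # all hypotheses falsified: trigger re-detection
    ```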

    Switching Local and Covariance Matching for Efficient Object Tracking

    The covariance tracker finds targets in consecutive frames by global search. Covariance tracking has achieved impressive successes thanks to its ability to capture the spatial and statistical properties of a region as well as the correlations between them. Nevertheless, the covariance tracker is relatively inefficient due to the heavy computational cost of updating the model and comparing it with the covariance matrices of the candidate regions. Moreover, it is not well suited to articulated object tracking, since integral histograms are employed to accelerate the search process. In this work, we aim to alleviate the computational burden by selecting the appropriate tracking approach for each situation. We compute foreground probabilities of pixels and localize the target by local search when the tracking is in a steady state. Covariance tracking is performed when distractions, sudden motions or occlusions are detected. Unlike the traditional covariance tracker, we use Log-Euclidean metrics instead of the more computationally expensive affine-invariant Riemannian metrics. The proposed tracking algorithm has been verified on many video sequences. It proves more efficient than the covariance tracker, and it is also effective in dealing with occlusions, which are an obstacle for local mode-seeking trackers such as the mean-shift tracker.
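
    The efficiency gain from the Log-Euclidean metric comes from the fact that matrix logarithms can be computed once per descriptor, after which the distance reduces to a Frobenius norm of a difference. A minimal sketch, assuming symmetric positive-definite covariance descriptors:

    ```python
    import numpy as np
    from scipy.linalg import logm

    def log_euclidean_distance(A, B):
        """d(A, B) = || logm(A) - logm(B) ||_F for SPD covariance descriptors."""
        # logm of an SPD matrix is real; .real only drops numerical noise.
        return np.linalg.norm(logm(A).real - logm(B).real, ord="fro")

    def region_covariance(features):
        """Covariance descriptor of a region; features is (N, d), one row of
        d per-pixel features (e.g. x, y, intensity, gradients) per pixel."""
        return np.cov(features, rowvar=False)
    ```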

    Particle Filters for Colour-Based Face Tracking Under Varying Illumination

    Automatic human face tracking is the basis of robotic and active vision systems used for facial feature analysis, automatic surveillance, video conferencing, intelligent transportation, human-computer interaction and many other applications. Superior human face tracking will allow future safety surveillance systems to monitor drowsy drivers, or patients and elderly people at risk of seizure or sudden falls, and to perform with a lower risk of failure in unexpected situations. This area has been actively researched in the current literature in an attempt to make automatic face trackers more stable in challenging real-world environments. To detect faces in video sequences, features like colour, texture, intensity, shape or motion are used. Among these features, colour has been the most popular because of its insensitivity to orientation and size changes and its fast processability. The challenge for colour-based face trackers, however, has been their instability under colour changes caused by drastic variation in environmental illumination. Probabilistic tracking and the employment of particle filters as powerful Bayesian stochastic estimators, on the other hand, are gaining ground in the visual tracking field thanks to their ability to handle multi-modal distributions in cluttered scenes. Traditional particle filters utilize the transition prior as the importance sampling function, but this can result in poor posterior sampling. The objective of this research is to investigate and propose a stable face tracker capable of dealing with challenges like rapid and random head motion, scale changes when people move closer to or further from the camera, motion of multiple people with similar skin tones in the vicinity of the model person, the presence of clutter, and occlusion of the face. The main focus has been on investigating an efficient method to address the sensitivity of colour-based trackers to gradual or drastic illumination variations. The particle filter is used to overcome the instability of face trackers due to nonlinear and random head motions. To increase the traditional particle filter's sampling efficiency, an improved version of the particle filter is introduced that considers the latest measurements. This improved particle filter employs a new colour-based bottom-up approach that leads particles to generate an effective proposal distribution. The colour-based bottom-up approach is a classification technique for fast skin-colour segmentation. This method is independent of the distribution shape and does not require excessive memory storage or exhaustive prior training. Finally, to make the colour-based face tracker adaptive to illumination changes, an original likelihood model is proposed based on spatial rank information, which considers both the illumination-invariant colour ordering of a face's pixels in an image or video frame and the spatial interaction between them. The original contribution of this work lies in the unique combination of existing and proposed components to improve colour-based recognition and tracking of faces in complex scenes, especially where drastic illumination changes occur. Experimental results for the final version of the proposed face tracker, which combines the methods developed, are provided in the last chapter of this manuscript.
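
    For context, the bootstrap particle filter that this work builds on iterates a predict-weight-resample cycle, sketched minimally below for a 2-D face centre. The random-walk transition prior and systematic resampler are textbook choices; the thesis's improvement, steering the proposal with skin-colour measurements, is not shown.

    ```python
    import numpy as np

    def particle_filter_step(particles, weights, likelihood, motion_std=5.0):
        """One predict-weight-resample cycle of a bootstrap particle filter.

        particles  : (N, 2) candidate face centres
        likelihood : callable mapping (N, 2) positions to (N,) colour likelihoods
        """
        N = len(particles)
        # Predict: random-walk transition prior (the thesis replaces this
        # with a measurement-driven proposal; not shown here).
        particles = particles + np.random.normal(0.0, motion_std, particles.shape)
        # Weight by the colour-based observation likelihood, then normalize.
        weights = weights * likelihood(particles)
        weights = weights / weights.sum()
        # Systematic resampling to counter weight degeneracy.
        positions = (np.arange(N) + np.random.uniform()) / N
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), N - 1)
        return particles[idx], np.full(N, 1.0 / N)
    ```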

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low-latency, high-speed, and high-dynamic-range applications. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
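
    To make the sensor output concrete: an event is a tuple (t, x, y, polarity), and the simplest frame-like representation sums signed polarities per pixel over a time window. A minimal sketch (the structured-array field names are assumptions):

    ```python
    import numpy as np

    def events_to_frame(events, shape, t0, t1):
        """Accumulate events with timestamps in [t0, t1) into a signed image.

        events : structured array with fields 't' (seconds), 'x', 'y' (pixels),
                 'p' (+1 for brightness increase, -1 for decrease)
        """
        frame = np.zeros(shape, dtype=np.int32)
        sel = (events["t"] >= t0) & (events["t"] < t1)
        # np.add.at handles repeated pixel coordinates correctly.
        np.add.at(frame, (events["y"][sel], events["x"][sel]), events["p"][sel])
        return frame
    ```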