
    Adaptive constraints for feature tracking

    This paper describes extensions to an existing tracking algorithm that implement adaptive tracking constraints in the form of regional upper-bound displacements and an adaptive track-smoothness constraint. Together, these constraints make the tracking algorithm more flexible than the original, which used fixed tracking parameters, and provide greater confidence in the tracking results. The result of applying the new algorithm to high-resolution ECMWF reanalysis data is shown as an example of its effectiveness.
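
    As an illustration of the idea, here is a minimal Python sketch of how a regionally varying displacement bound and a track-smoothness check might gate candidate feature matches. The function, the gridded bound field, and the tolerance are hypothetical, not taken from the paper, which defines its own constraints.

    import numpy as np

    def admissible(prev_pos, cand_pos, max_disp_field, prev_dir=None, smooth_tol=0.5):
        """Accept a candidate feature position only if it satisfies a
        regionally varying displacement bound and, when a previous track
        direction is available, an adaptive smoothness check.

        prev_pos, cand_pos : (row, col) positions on the analysis grid
        max_disp_field     : 2-D array of regional upper-bound displacements
        prev_dir           : unit vector of the previous track segment, or None
        smooth_tol         : maximum allowed 1 - cos(turn angle)
        """
        step = np.subtract(cand_pos, prev_pos).astype(float)
        dist = np.linalg.norm(step)

        # Regional upper-bound displacement: look up the bound at the
        # feature's current location instead of using one global constant.
        iy, ix = int(prev_pos[0]), int(prev_pos[1])
        if dist > max_disp_field[iy, ix]:
            return False

        # Adaptive track smoothness: reject sharp turns relative to the
        # previous segment of the track.
        if prev_dir is not None and dist > 0:
            turn = 1.0 - float(np.dot(step / dist, prev_dir))
            if turn > smooth_tol:
                return False
        return True

    # Example: a coarse 10x10 bound field that allows larger jumps on the
    # right half of the domain.
    field = np.full((10, 10), 2.0)
    field[:, 5:] = 5.0
    print(admissible((2, 2), (3, 3), field))  # small step -> True
    print(admissible((2, 2), (8, 8), field))  # exceeds regional bound -> False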

    Towards Semantic Fast-Forward and Stabilized Egocentric Videos

    The emergence of low-cost personal mobile devices and wearable cameras, together with the increasing storage capacity of video-sharing websites, has pushed forward a growing interest in first-person videos. Since most recorded videos are long-running streams with unedited content, they are tedious and unpleasant to watch. State-of-the-art fast-forward methods face the challenge of balancing the smoothness of the video against the emphasis on relevant frames for a given speed-up rate. In this work, we present a methodology capable of summarizing and stabilizing egocentric videos by extracting semantic information from the frames. This paper also describes a dataset collection with several semantically labeled videos and introduces a new smoothness evaluation metric for egocentric videos that is used to test our method.
    Comment: Accepted for publication and presented at the First International Workshop on Egocentric Perception, Interaction and Computing at the European Conference on Computer Vision (EPIC@ECCV) 2016.
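
    To make the trade-off concrete, the sketch below selects frames with a simple dynamic program whose step cost penalizes deviation from the target skip (smoothness) and rewards semantically relevant frames. The weights, jump window, and scoring are illustrative assumptions, not the paper's actual formulation.

    import numpy as np

    def select_frames(semantic, speedup, max_jump=None, w_sem=1.0, w_speed=0.1):
        """Pick a subsequence of frames that trades a target speed-up rate
        against keeping semantically relevant frames, via a shortest path
        (dynamic program) over frame-to-frame jumps.

        semantic : 1-D array of per-frame relevance scores (higher = keep)
        speedup  : desired average skip between selected frames
        """
        n = len(semantic)
        if max_jump is None:
            max_jump = 2 * speedup
        cost = np.full(n, np.inf)
        prev = np.full(n, -1, dtype=int)
        cost[0] = 0.0
        for i in range(n):
            if not np.isfinite(cost[i]):
                continue
            for jump in range(1, min(max_jump, n - 1 - i) + 1):
                j = i + jump
                # Smoothness: squared deviation from the target skip.
                # Semantics: reward (negative cost) for relevant frames.
                c = cost[i] + w_speed * (jump - speedup) ** 2 - w_sem * semantic[j]
                if c < cost[j]:
                    cost[j], prev[j] = c, i
        # Backtrack from the last frame to recover the selected subsequence.
        path, j = [], n - 1
        while j != -1:
            path.append(j)
            j = prev[j]
        return path[::-1]

    # Example: random relevance scores for 200 frames, 8x speed-up target.
    scores = np.random.default_rng(0).random(200)
    print(select_frames(scores, speedup=8)[:10])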

    Speckle reduction in swept source optical coherence tomography images with slow-axis averaging

    The effectiveness of speckle reduction using the traditional frame-averaging technique is limited in ultrahigh-speed optical coherence tomography (OCT): because the motion between repeated frames is very small, the speckle patterns of the frames may be identical. This problem can be solved by averaging frames acquired at slightly different locations. The optimal scan range depends on the spot size of the laser beam, the smoothness of the boundary, and the homogeneity of the tissue. In this study we present a method that averages frames obtained within a narrow range along the slow axis. A swept-source OCT system with a 100,000 Hz axial scan rate was used to scan the retina in vivo. A series of narrow raster scans (0-50 microns along the slow axis) was evaluated. Each scan contained 20 image frames evenly distributed across the scan range. The imaging frame rate was 417 Hz. Only frames with high correlation after rigid registration were used in averaging. The results showed that the contrast-to-noise ratio (CNR) increased with the scan range, but the best edge preservation was obtained with a 15-micron scan range. Thus, for ultrahigh-speed OCT systems, averaging frames from a narrow band along the slow axis can achieve better speckle reduction than traditional frame-averaging techniques.
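
    A minimal numpy sketch of the averaging step is given below: frames are rigidly aligned to a reference by integer-pixel phase correlation, frames poorly correlated with the reference are discarded, and the remainder are averaged. The registration method and the correlation threshold are assumptions for illustration; the paper's registration pipeline may differ.

    import numpy as np

    def slow_axis_average(frames, corr_thresh=0.9):
        """Average OCT B-scan frames acquired at slightly different
        slow-axis positions. Frames are aligned to the first frame by an
        integer rigid shift found with phase correlation, and only frames
        whose correlation with the reference exceeds corr_thresh are kept.

        frames : sequence of 2-D arrays of identical shape
        """
        ref = frames[0].astype(float)
        acc, n_used = ref.copy(), 1
        F_ref = np.fft.fft2(ref)
        for frame in frames[1:]:
            f = frame.astype(float)
            # Phase correlation: the cross-power peak gives the rigid shift.
            cross = np.fft.ifft2(F_ref * np.conj(np.fft.fft2(f)))
            dy, dx = np.unravel_index(np.argmax(np.abs(cross)), cross.shape)
            # Wrap the peak indices to signed shifts before aligning.
            if dy > f.shape[0] // 2:
                dy -= f.shape[0]
            if dx > f.shape[1] // 2:
                dx -= f.shape[1]
            aligned = np.roll(f, (dy, dx), axis=(0, 1))
            # Keep only well-correlated frames, as in the selection step.
            r = np.corrcoef(ref.ravel(), aligned.ravel())[0, 1]
            if r >= corr_thresh:
                acc += aligned
                n_used += 1
        return acc / n_used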

    Quantitative Kinematic Characterization of Reaching Impairments in Mice After a Stroke

    Background and Objective. Kinematic analysis of reaching movements is increasingly used to evaluate upper extremity function after cerebrovascular insults in humans and has also been applied to rodent models. Such analyses can require time-consuming frame-by-frame inspection and are affected by the experimenter's bias. In this study, we introduce a semi-automated algorithm for tracking forepaw movements in mice. This methodology allows us to calculate several kinematic measures for the quantitative assessment of performance in a skilled reaching task before and after a focal cortical stroke. Methods. Mice were trained to reach for food pellets with their preferred paw until asymptotic performance was achieved. Photothrombosis was then applied to induce a focal ischemic injury in the motor cortex contralateral to the trained limb. Mice were tested again once a week for 30 days. A high-frame-rate camera was used to record the movements of the paw, which was painted with a nontoxic dye. An algorithm was then applied off-line to track the trajectories and to compute kinematic measures for motor performance evaluation. Results. The tracking algorithm proved to be fast, accurate, and robust. A number of kinematic measures were identified as sensitive indicators of poststroke modifications. Based on end-point measures, ischemic mice appeared to improve their motor performance after 2 weeks. However, kinematic analysis revealed the persistence of specific trajectory adjustments up to 30 days poststroke, indicating the use of compensatory strategies. Conclusions. These results support the use of kinematic analysis in mice as a tool both for detecting poststroke functional impairments and for tracking motor improvements following rehabilitation. Similar studies could be performed in parallel with human studies to exploit the translational value of this skilled reaching analysis.
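
    Since the paw is painted with a dye, one plausible implementation of the tracking step is color thresholding followed by centroid extraction, as in the OpenCV sketch below. The HSV bounds, frame rate, and the particular kinematic measures computed here are illustrative placeholders; the paper's algorithm and measures may differ.

    import cv2
    import numpy as np

    def track_paw(video_path, hsv_lo=(140, 60, 60), hsv_hi=(170, 255, 255), fps=240.0):
        """Track a dye-painted paw by HSV color thresholding and return the
        centroid trajectory plus simple kinematic measures. The HSV bounds
        (a magenta-like dye here) and fps must be calibrated per setup.
        """
        cap = cv2.VideoCapture(video_path)
        traj = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = cv2.inRange(cv2.cvtColor(frame, cv2.COLOR_BGR2HSV),
                               np.array(hsv_lo), np.array(hsv_hi))
            m = cv2.moments(mask, binaryImage=True)
            if m["m00"] > 0:  # paw visible: centroid of the dyed region
                traj.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        cap.release()

        xy = np.asarray(traj, dtype=float)
        if len(xy) < 3:
            raise ValueError("paw not detected in enough frames")
        vel = np.gradient(xy, 1.0 / fps, axis=0)  # velocity in px/s
        speed = np.linalg.norm(vel, axis=1)
        return {
            "trajectory": xy,
            "path_length": float(np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))),
            "peak_speed": float(speed.max()),
            # Local maxima of the speed profile, a rough smoothness proxy.
            "n_speed_peaks": int(np.sum((speed[1:-1] > speed[:-2])
                                        & (speed[1:-1] > speed[2:]))),
        }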

    Fast Robust PCA on Graphs

    Mining useful clusters from high-dimensional data has received significant attention from the computer vision and pattern recognition community in recent years. Linear and non-linear dimensionality reduction have played an important role in overcoming the curse of dimensionality. However, such methods often suffer from three problems: high computational complexity (usually associated with nuclear norm minimization), non-convexity (for matrix factorization methods), and susceptibility to gross corruptions in the data. In this paper we propose a principal component analysis (PCA) based solution that overcomes these three issues and approximates a low-rank recovery method for high-dimensional datasets. We target low-rank recovery by enforcing two types of graph smoothness assumptions, one on the data samples and the other on the features, by designing a convex optimization problem. The resulting algorithm is fast, efficient, and scalable for huge datasets, with O(n log n) computational complexity in the number of data samples. It is also robust to gross corruptions in the dataset as well as to the model parameters. Clustering experiments on 7 benchmark datasets with different types of corruptions and background separation experiments on 3 video datasets show that our proposed model outperforms 10 state-of-the-art dimensionality reduction models. Our theoretical analysis proves that the proposed model is able to recover approximate low-rank representations with a bounded error for clusterable data.
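
    One way to read such a model is as an l1 data-fidelity term plus two quadratic graph-smoothness penalties, which admits a simple proximal-gradient (ISTA) solver. The objective below, the variable names, and the dense eigenvalue step-size computation are assumptions for illustration; the paper's exact formulation and its fast, scalable scheme differ.

    import numpy as np

    def soft(Z, t):
        """Elementwise soft-thresholding, the proximal operator of t*||.||_1."""
        return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

    def graph_low_rank(X, L_samples, L_features, g1=1.0, g2=1.0, iters=200):
        """Approximate low-rank recovery with two graph-smoothness penalties,
        solved by proximal gradient descent (ISTA):

            min_U  ||X - U||_1 + g1 * tr(U L_s U^T) + g2 * tr(U^T L_f U)

        X          : p x n data matrix (features x samples)
        L_samples  : n x n combinatorial Laplacian of the sample graph
        L_features : p x p Laplacian of the feature graph
        """
        # The Lipschitz constant of the smooth quadratic part sets the step.
        lip = 2.0 * (g1 * np.linalg.eigvalsh(L_samples)[-1]
                     + g2 * np.linalg.eigvalsh(L_features)[-1])
        t = 1.0 / max(lip, 1e-12)
        U = X.astype(float).copy()
        for _ in range(iters):
            grad = 2.0 * (g1 * U @ L_samples + g2 * L_features @ U)
            V = U - t * grad
            # The prox of t * ||X - .||_1 soft-thresholds toward X.
            U = X + soft(V - X, t)
        return U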