11,739 research outputs found

    Learning Articulated Motions From Visual Demonstration

    Full text link
    Many functional elements of human homes and workplaces consist of rigid components which are connected through one or more sliding or rotating linkages. Examples include doors and drawers of cabinets and appliances; laptops; and swivel office chairs. A robotic mobile manipulator would benefit from the ability to acquire kinematic models of such objects from observation. This paper describes a method by which a robot can acquire an object model by capturing depth imagery of the object as a human moves it through its range of motion. We envision that, in the future, a machine newly introduced to an environment could be shown by its human user the articulated objects particular to that environment, inferring from these "visual demonstrations" enough information to actuate each object independently of the user. Our method employs sparse (markerless) feature tracking, motion segmentation, component pose estimation, and articulation learning; it does not require prior object models. Using the method, a robot can observe an object being exercised, infer a kinematic model incorporating rigid, prismatic, and revolute joints, then use the model to predict the object's motion from a novel vantage point. We evaluate the method's performance, and compare it to that of a previously published technique, for a variety of household objects. Comment: Published in Robotics: Science and Systems X, Berkeley, CA. ISBN: 978-0-9923747-0-
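    To make the articulation-learning step concrete, here is a minimal Python sketch (not the authors' implementation) of how a joint type might be classified from the estimated relative poses of two rigid components. The function names, tolerances, and decision rule are illustrative assumptions; in the paper's pipeline such a step would follow feature tracking, motion segmentation, and per-component pose estimation.

```python
import numpy as np

def rotation_angle(R):
    """Angle of a 3x3 rotation matrix, in radians."""
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

def classify_joint(rel_poses, rot_tol=0.05, trans_tol=0.01):
    """Classify a joint from relative poses (R, t) of part B in part A's frame.

    rel_poses : list of (R, t), R a 3x3 rotation matrix, t a 3-vector.
    rot_tol   : radians of rotation below which rotation is negligible.
    trans_tol : metres of translation below which translation is negligible.
    Thresholds are illustrative, not taken from the paper.
    """
    R0, t0 = rel_poses[0]
    rot_range = max(rotation_angle(R0.T @ R) for R, _ in rel_poses)
    trans_range = max(np.linalg.norm(t - t0) for _, t in rel_poses)

    if rot_range < rot_tol and trans_range < trans_tol:
        return "rigid"
    if rot_range < rot_tol:
        # Translation only: prismatic if displacements lie along one axis.
        D = np.stack([t - t0 for _, t in rel_poses])
        _, s, _ = np.linalg.svd(D, full_matrices=False)
        return "prismatic" if s[0] > 0 and s[1] / s[0] < 0.1 else "free"
    return "revolute"
```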

    A robust and efficient video representation for action recognition

    Get PDF
    This paper introduces a state-of-the-art video representation and applies it to efficient action recognition and detection. We first propose to improve the popular dense trajectory features by explicit camera motion estimation. More specifically, we extract feature point matches between frames using SURF descriptors and dense optical flow. The matches are used to estimate a homography with RANSAC. To improve the robustness of homography estimation, a human detector is employed to remove outlier matches from the human body, as human motion is not constrained by the camera. Trajectories consistent with the homography are considered to be due to camera motion and are thus removed. We also use the homography to cancel out camera motion from the optical flow. This results in a significant improvement of the motion-based HOF and MBH descriptors. We further explore the recent Fisher vector as an alternative feature encoding approach to the standard bag-of-words histogram, and consider different ways to include spatial layout information in these encodings. We present a large and varied set of evaluations, considering (i) classification of short basic actions on six datasets, (ii) localization of such actions in feature-length movies, and (iii) large-scale recognition of complex events. We find that our improved trajectory features significantly outperform previous dense trajectories, and that Fisher vectors are superior to bag-of-words encodings for video recognition tasks. In all three tasks, we show substantial improvements over the state-of-the-art results.
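    As an illustration of the camera-motion cancellation idea, the sketch below estimates a homography from sparse matches and subtracts the homography-induced flow from the dense optical flow. It is a simplified approximation, not the authors' code: ORB stands in for SURF (which lives in OpenCV's non-free module), Farneback flow stands in for the paper's flow method, the human-detector masking is omitted, and the paper warps the second frame rather than subtracting flow fields.

```python
import cv2
import numpy as np

def camera_compensated_flow(prev, curr):
    """Estimate dense flow between two grayscale frames and cancel camera motion."""
    # 1. Sparse feature matches between the two frames.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev, None)
    kp2, des2 = orb.detectAndCompute(curr, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 2. Homography via RANSAC models the camera-induced motion.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    # 3. Dense optical flow (Farneback as a stand-in).
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # 4. Flow predicted by the homography at every pixel.
    h, w = prev.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    pts = np.stack([xs, ys], axis=-1).reshape(-1, 1, 2).astype(np.float32)
    warped = cv2.perspectiveTransform(pts, H).reshape(h, w, 2)
    cam_flow = warped - np.stack([xs, ys], axis=-1)

    # 5. The residual flow is (approximately) object motion only.
    return flow - cam_flow
```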

    Seeing Tree Structure from Vibration

    Full text link
    Humans recognize object structure from both their appearance and motion; often, motion helps to resolve ambiguities in object structure that arise when we observe object appearance only. There are particular scenarios, however, where neither appearance nor spatial-temporal motion signals are informative: occluding twigs may look connected and have almost identical movements, though they belong to different, possibly disconnected branches. We propose to tackle this problem through spectrum analysis of motion signals, because vibrations of disconnected branches, though visually similar, often have distinctive natural frequencies. We propose a novel formulation of tree structure based on a physics-based link model, and validate its effectiveness by theoretical analysis, numerical simulation, and empirical experiments. With this formulation, we use nonparametric Bayesian inference to reconstruct tree structure from both spectral vibration signals and appearance cues. Our model performs well in recognizing hierarchical tree structure from real-world videos of trees and vessels. Comment: ECCV 2018. The first two authors contributed equally to this work. Project page: http://tree.csail.mit.edu
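    The core spectral cue can be sketched in a few lines of Python: take the FFT of each tracked point's displacement signal and compare normalized spectra, on the assumption that points on the same branch share natural frequencies. This toy version omits the paper's physics-based link model and nonparametric Bayesian inference; the names and similarity measure are illustrative.

```python
import numpy as np

def motion_spectrum(track, fps):
    """Normalized magnitude spectrum of a tracked point's displacement.

    track : array of shape (T,) with the point's x-coordinate per frame.
    """
    x = track - track.mean()              # remove the static position
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return freqs, spec / (spec.sum() + 1e-9)

def same_branch_score(track_a, track_b, fps):
    """Cosine similarity of two vibration spectra.

    High similarity suggests the two points share natural frequencies and
    so may lie on the same physical branch. Any decision threshold on this
    score would be an illustrative assumption, not the paper's.
    """
    _, sa = motion_spectrum(track_a, fps)
    _, sb = motion_spectrum(track_b, fps)
    return float(sa @ sb / (np.linalg.norm(sa) * np.linalg.norm(sb) + 1e-9))
```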

    Ultrasound localization microscopy to image and assess microvasculature in a rat kidney.

    Get PDF
    The recent development of ultrasound localization microscopy, where individual microbubbles (contrast agents) are detected and tracked within the vasculature, provides new opportunities for imaging the vasculature of entire organs with a spatial resolution below the diffraction limit. In stationary tissue, recent studies have demonstrated a theoretical resolution on the order of microns. In this work, single microbubbles were localized in vivo in a rat kidney using a dedicated high-frame-rate imaging sequence. Organ motion was tracked by assuming rigid motion (translation and rotation), and appropriate correction was applied. In contrast to previous work, coherence-based non-linear phase inversion processing was used to reject tissue echoes while maintaining echoes from very slowly moving microbubbles. Blood velocity in the small vessels was estimated by tracking microbubbles, demonstrating the potential of this technique to improve vascular characterization. Previous optical studies of microbubbles in vessels of approximately 20 microns have shown that expansion is constrained, suggesting that microbubble echoes would be difficult to detect in such regions. We therefore utilized the echoes from individual microbubbles as microscopic sensors of slow flow associated with such vessels and demonstrate that highly correlated, wideband echoes are detected from individual microbubbles in vessels with flow rates below 2 mm/s.
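    A highly simplified sketch of the localization-and-tracking idea: treat isolated intensity peaks as candidate microbubbles and link detections across consecutive frames by nearest neighbour to estimate slow-flow speed. The peak detector, linking rule, and parameters are illustrative assumptions; the paper's phase-inversion processing and rigid organ-motion correction are omitted.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def localize_bubbles(frame, min_intensity, size=5):
    """Return (row, col) coordinates of isolated intensity peaks.

    A crude stand-in for microbubble localization: a pixel is kept if it
    is the maximum of its neighbourhood and exceeds a detection threshold.
    """
    peaks = (frame == maximum_filter(frame, size)) & (frame > min_intensity)
    return np.argwhere(peaks)

def track_speeds(pts_a, pts_b, frame_rate, mm_per_px, max_jump_px=3):
    """Nearest-neighbour linking of detections in consecutive frames.

    Returns per-bubble speed in mm/s; max_jump_px bounds plausible motion
    between frames (an illustrative value).
    """
    if len(pts_b) == 0:
        return []
    speeds = []
    for p in pts_a:
        d = np.linalg.norm(pts_b - p, axis=1)
        j = d.argmin()
        if d[j] <= max_jump_px:
            speeds.append(d[j] * mm_per_px * frame_rate)  # px * mm/px * 1/s
    return speeds
```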

    Retrieval and Registration of Long-Range Overlapping Frames for Scalable Mosaicking of In Vivo Fetoscopy

    Get PDF
    Purpose: The standard clinical treatment of Twin-to-Twin Transfusion Syndrome consists of the photo-coagulation of undesired anastomoses located on the placenta, which are responsible for blood transfer between the two twins. While it is the standard-of-care procedure, fetoscopy suffers from a limited field of view of the placenta, resulting in missed anastomoses. To facilitate the task of the clinician, building a global map of the placenta providing a larger overview of the vascular network is highly desired. Methods: To overcome the challenging visual conditions inherent to in vivo sequences (low contrast, obstructions, or presence of artifacts, among others), we propose the following contributions: (i) robust pairwise registration is achieved by aligning the orientation of the image gradients, and (ii) difficulties regarding long-range consistency (e.g. due to the presence of outliers) are tackled via a bag-of-words strategy, which identifies overlapping frames of the sequence to be registered regardless of their respective location in time. Results: In addition to visual difficulties, in vivo sequences are characterised by the intrinsic absence of a gold standard. We present mosaics that qualitatively motivate our methodological choices and demonstrate their promise. We also demonstrate semi-quantitatively, via visual inspection of registration results, the efficacy of our registration approach in comparison to two standard baselines. Conclusion: This paper proposes the first approach for the construction of mosaics of the placenta from in vivo fetoscopy sequences. Robustness to visual challenges during registration and long-range temporal consistency are proposed, offering first positive results on in vivo data for which standard mosaicking techniques are not applicable. Comment: Accepted for publication in International Journal of Computer Assisted Radiology and Surgery (IJCARS)
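    To illustrate contribution (ii), the sketch below builds a small bag-of-words retrieval step: ORB descriptors are quantized against a k-means vocabulary, and frame histograms are compared to propose temporally distant frame pairs that may overlap. The descriptor choice, vocabulary size, and thresholds are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(frames, k=200):
    """Cluster ORB descriptors from a set of frames into k visual words."""
    orb = cv2.ORB_create(500)
    des = [orb.detectAndCompute(f, None)[1] for f in frames]
    des = np.vstack([d for d in des if d is not None]).astype(np.float32)
    return KMeans(n_clusters=k, n_init=4).fit(des)

def bow_histogram(frame, vocab):
    """Normalized histogram of visual-word occurrences for one frame."""
    orb = cv2.ORB_create(500)
    _, des = orb.detectAndCompute(frame, None)
    if des is None:
        return np.zeros(vocab.n_clusters)
    words = vocab.predict(des.astype(np.float32))
    h = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return h / (h.sum() + 1e-9)

def candidate_overlaps(hists, min_sim=0.6, min_gap=30):
    """Frame pairs whose histograms are similar but far apart in time."""
    pairs = []
    for i in range(len(hists)):
        for j in range(i + min_gap, len(hists)):
            sim = float(hists[i] @ hists[j] /
                        (np.linalg.norm(hists[i]) * np.linalg.norm(hists[j]) + 1e-9))
            if sim > min_sim:
                pairs.append((i, j, sim))
    return pairs
```

    Each proposed pair would then be passed to the gradient-orientation registration of contribution (i) for verification.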

    Biolocomotion Detection in Videos

    Get PDF
    Animals locomote for various reasons: to search for food, to find suitable habitat, to pursue prey, to escape from predators, or to seek a mate. The grand scale of biodiversity contributes to a great diversity of locomotory designs and modes. In this dissertation, the locomotion of general biological species is referred to as biolocomotion. The goal of this dissertation is to develop a computational approach to detect biolocomotion in any unprocessed video. The ways biological entities locomote through an environment are extremely diverse. Various creatures make use of legs, wings, fins, and other means to move through the world. Significantly, the motion exhibited by the body parts to navigate through an environment can be modelled by a combination of an overall positional advance with an overlaid asymmetric oscillatory pattern, a distinctive signature that tends to be absent in non-biological objects in locomotion. In this dissertation, this key trait of positional advance with asymmetric oscillation, along with differences between an object's common motion (extrinsic motion) and the localized motion of its parts (intrinsic motion), is exploited to detect biolocomotion. In particular, a computational algorithm is developed to measure the presence of these traits in tracked objects to determine if they correspond to a biological entity in locomotion. An alternative algorithm, based on generic handcrafted features combined with learning and assembled from components of allied areas of investigation, is also presented as a basis of comparison to the main proposed algorithm. A novel biolocomotion dataset encompassing a wide range of moving biological and non-biological objects in natural settings is provided. Additionally, biolocomotion annotations for an extant camouflaged-animals dataset are also provided. Quantitative results indicate that the proposed algorithm considerably outperforms the alternative approach, supporting the hypothesis that biolocomotion can be detected reliably based on its distinct signature of positional advance with asymmetric oscillation and extrinsic/intrinsic motion dissimilarity.
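    A toy Python sketch of the signature described above: fit a linear positional advance to a tracked coordinate, then measure the strength and asymmetry of the residual oscillation. The asymmetry measure (skewness of the residual) and all thresholds are illustrative assumptions, not the dissertation's formulation.

```python
import numpy as np

def biolocomotion_signature(track):
    """Decompose a 1-D tracked coordinate into advance + oscillation.

    track : array (T,) of, e.g., a tracked object's x-position per frame.
    Returns (advance_rate, oscillation_strength, asymmetry).
    """
    t = np.arange(len(track))
    slope, intercept = np.polyfit(t, track, 1)   # overall positional advance
    residual = track - (slope * t + intercept)   # overlaid oscillation
    strength = residual.std()
    z = (residual - residual.mean()) / (strength + 1e-9)
    asymmetry = float(np.abs((z ** 3).mean()))   # |skewness| of the cycle
    return slope, strength, asymmetry

def looks_biological(track, min_adv=0.05, min_osc=0.5, min_asym=0.3):
    """Toy decision rule: sustained advance, enough oscillation, enough asymmetry.

    Thresholds are illustrative assumptions."""
    adv, osc, asym = biolocomotion_signature(track)
    return abs(adv) > min_adv and osc > min_osc and asym > min_asym
```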