
    Gravity optimised particle filter for hand tracking

    This paper presents a gravity optimised particle filter (GOPF) in which the magnitude of the gravitational force exerted by every particle is proportional to its weight. GOPF attracts nearby particles and replicates new ones, effectively moving the particles towards the peak of the likelihood distribution and improving sampling efficiency. GOPF is incorporated into a technique for hand feature tracking. A fast approach to hand feature detection and labelling using convexity defects is also presented. Experimental results show that GOPF outperforms the standard particle filter and its variants, as well as the state-of-the-art CamShift-guided particle filter, while using a significantly reduced number of particles.
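
    As a rough illustration of the attraction idea described above, the sketch below pulls each particle towards heavier particles with a force proportional to their weights. It is a minimal NumPy sketch, not the authors' implementation; the gain G and the softening term eps are assumed parameters, and the replication step is omitted.

        import numpy as np

        def gopf_attract(particles, weights, G=0.05, eps=1e-6):
            # One gravity-attraction step: every particle is pulled towards
            # the others with force proportional to their weights, so the
            # cloud drifts towards the peak of the likelihood distribution.
            moved = particles.copy()
            for i in range(len(particles)):
                diff = particles - particles[i]             # vectors to all other particles
                dist = np.linalg.norm(diff, axis=1) + eps   # softened distances
                pull = (weights[:, None] * diff) / dist[:, None] ** 2
                moved[i] = particles[i] + G * pull.sum(axis=0)
            return moved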

    Towards an Interactive Humanoid Companion with Visual Tracking Modalities

    The idea of robots acting as human companions is not a particularly new or original one. Since the notion of “robot” was created, the idea of robots replacing humans in dangerous, dirty and dull activities has been inseparably tied to the fantasy of human-like robots being friends and existing side by side with humans. In 1989, Engelberger (Engelberger

    VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera

    We present the first real-time method to capture the full global 3D skeletal pose of a human in a stable, temporally consistent manner using a single RGB camera. Our method combines a new convolutional neural network (CNN) based pose regressor with kinematic skeleton fitting. Our novel fully-convolutional pose formulation regresses 2D and 3D joint positions jointly in real time and does not require tightly cropped input frames. A real-time kinematic skeleton fitting method uses the CNN output to yield temporally stable 3D global pose reconstructions on the basis of a coherent kinematic skeleton. This makes our approach the first monocular RGB method usable in real-time applications such as 3D character control; thus far, the only monocular methods for such applications employed specialized RGB-D cameras. Our method's accuracy is quantitatively on par with the best offline 3D monocular RGB pose estimation methods. Our results are qualitatively comparable to, and sometimes better than, results from monocular RGB-D approaches, such as the Kinect. However, we show that our approach is more broadly applicable than RGB-D solutions, i.e. it works for outdoor scenes, community videos, and low-quality commodity RGB cameras. Comment: Accepted to SIGGRAPH 2017.
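
    The kinematic skeleton fitting step can be pictured as minimizing an energy over joint angles. The schematic below shows an assumed form of such an objective, combining 3D alignment, 2D reprojection and temporal smoothness terms; it is not the paper's exact formulation, and skeleton_fk, project and the weights are hypothetical.

        import numpy as np

        def fitting_energy(theta, theta_prev, p2d, p3d, skeleton_fk, project,
                           w3d=1.0, w2d=1.0, w_smooth=0.1):
            # Schematic kinematic-fitting objective (assumed form): align the
            # forward-kinematics joints with the CNN's 3D predictions, keep
            # their projections near the 2D predictions, and damp jitter.
            joints3d = skeleton_fk(theta)                     # joint angles -> 3D joints
            e3d = np.sum((joints3d - p3d) ** 2)               # 3D alignment term
            e2d = np.sum((project(joints3d) - p2d) ** 2)      # 2D reprojection term
            e_smooth = np.sum((theta - theta_prev) ** 2)      # temporal smoothness term
            return w3d * e3d + w2d * e2d + w_smooth * e_smooth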

    Articulated Object Tracking from Visual Sensory Data for Robotic Manipulation

    In order for a robot to manipulate an articulated object, it needs to know its state (i.e. its pose); that is to say, where and in which configuration it is. The result of the object's state estimation is provided as feedback to the controller to compute appropriate robot motion and achieve the desired manipulation outcome. This is the main topic of this thesis, where articulated object state estimation is solved using visual feedback. Vision-based servoing is implemented in a Quadratic Programming task-space control framework to enable a humanoid robot to perform articulated object manipulation. We thoroughly develop our methodology for vision-based articulated object state estimation on these bases. We demonstrate its efficiency by assessing it on several real experiments involving the HRP-4 humanoid robot. We also propose to combine machine learning and edge extraction techniques to achieve markerless, real-time and robust visual feedback for articulated object manipulation.
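
    To give a flavour of the control scheme mentioned above, the sketch below takes one damped least-squares step of task-space visual servoing. A full QP with joint-limit and other inequality constraints, as would run on a robot like HRP-4, is approximated here by simple velocity clipping; all names and gains are assumptions, not the thesis' controller.

        import numpy as np

        def servo_step(J, err, qdot_max, gain=1.0, damping=1e-3):
            # One damped least-squares step of task-space visual servoing:
            # min ||J qdot - v*||^2 + damping * ||qdot||^2, with v* = -gain * err.
            v_star = -gain * err                          # desired task-space velocity
            H = J.T @ J + damping * np.eye(J.shape[1])    # regularized normal equations
            qdot = np.linalg.solve(H, J.T @ v_star)
            # Crude stand-in for the QP's joint-velocity bounds.
            return np.clip(qdot, -qdot_max, qdot_max)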

    Models and estimators for markerless human motion tracking

    In this work, we analyze the different components of a model-based motion tracking system. The system consists of a human body model, an estimator, and a likelihood or cost function.
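
    This decomposition maps naturally onto a small interface. The sketch below is a hypothetical skeleton of such a tracker, where body_model renders a pose hypothesis, likelihood scores it against the observed image, and estimator searches the pose space; all three callables are assumptions for illustration.

        def track_frame(pose_prev, observation, body_model, likelihood, estimator):
            # Model-based tracking as the composition of the three components:
            # the cost of a pose hypothesis is the negated likelihood of the
            # rendered body model given the current observation.
            def cost(pose):
                return -likelihood(body_model(pose), observation)
            return estimator(cost, init=pose_prev)  # refined pose estimate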

    Computer analysis of objects’ movement in image sequences: methods and applications

    Computer analysis of objects’ movement in image sequences is a very complex problem, considering that it usually involves tasks for automatic detection, matching, tracking, motion analysis and deformation estimation. In spite of its complexity, this computational analysis has a wide range of important applications; for instance, in surveillance systems, clinical analysis of human gait, object recognition, pose estimation and deformation analysis. Due to the breadth of these purposes, several difficulties arise, such as the simultaneous tracking of multiple objects, their possible temporary occlusion or definitive disappearance from the image scene, changes of the viewpoints considered in image acquisition or of the illumination conditions, or even non-rigid deformations that objects may suffer in image sequences. In this paper, we present an overview of several methods that may be considered to analyze objects’ movement; namely, for their segmentation, tracking and matching in images, and for estimation of the deformation involved between images.

    This paper was partially done in the scope of the project “Segmentation, Tracking and Motion Analysis of Deformable (2D/3D) Objects using Physical Principles”, with reference POSC/EEA-SRI/55386/2004, financially supported by FCT - Fundação para a Ciência e a Tecnologia from Portugal. The fourth, fifth and seventh authors would also like to acknowledge the support of their PhD grants from FCT, with references SFRH/BD/29012/2006, SFRH/BD/28817/2006 and SFRH/BD/12834/2003, respectively.

    Vision-based and marker-less surgical tool detection and tracking: a review of the literature

    In recent years, tremendous progress has been made in surgical practice, for example with Minimally Invasive Surgery (MIS). To overcome challenges arising from remote eye-to-hand manipulation, robotic and computer-assisted systems have been developed. Having real-time knowledge of the pose of surgical tools with respect to the surgical camera and underlying anatomy is a key ingredient for such systems. In this paper, we present a review of the literature dealing with vision-based and marker-less surgical tool detection. This paper includes three primary contributions: (1) identification and analysis of data-sets used for developing and testing detection algorithms, (2) in-depth comparison of surgical tool detection methods, from the feature extraction process to the model learning strategy, highlighting existing shortcomings, and (3) analysis of validation techniques employed to obtain detection performance results and to establish comparisons between surgical tool detectors. The papers included in the review were selected through PubMed and Google Scholar searches using the keywords “surgical tool detection”, “surgical tool tracking”, “surgical instrument detection” and “surgical instrument tracking”, limiting results to the years 2000–2015. Our study shows that, despite significant progress over the years, the lack of established surgical tool data-sets and of a reference format for performance assessment and method ranking is preventing faster improvement.