
    Robust real-time visual tracking using a 2D-3D model-based approach

    We present an original method for tracking, in an image sequence, complex objects that can be approximately modeled by a polyhedral shape. The approach relies on estimating the 2D image motion of the object together with computing its 3D pose. The proposed method meets real-time constraints along with reliability and robustness requirements. Real tracking experiments and results for a visual servoing positioning task are presented.
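The 2D-3D model-based idea can be sketched as follows: 2D features of the known polyhedral model are tracked in the image, and the 3D pose is recovered by minimizing the reprojection error. The abstract does not state the authors' estimator, so this is only a minimal Gauss-Newton sketch under an assumed pinhole camera (the focal length `f` and principal point `c` are made-up values, not from the paper):

```python
import numpy as np

def rodrigues(w):
    """Axis-angle vector -> rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(model, w, t, f=800.0, c=(320.0, 240.0)):
    """Project 3D model points under pose (w, t) with an assumed pinhole camera."""
    P = model @ rodrigues(w).T + t
    return np.column_stack((f * P[:, 0] / P[:, 2] + c[0],
                            f * P[:, 1] / P[:, 2] + c[1]))

def refine_pose(model, observed_2d, w0, t0, iters=20, eps=1e-6):
    """Gauss-Newton on the reprojection error, with a numerical Jacobian."""
    x = np.concatenate([w0, t0]).astype(float)
    for _ in range(iters):
        r = (project(model, x[:3], x[3:]) - observed_2d).ravel()
        J = np.zeros((r.size, 6))
        for j in range(6):
            dx = np.zeros(6)
            dx[j] = eps
            xp = x + dx
            rp = (project(model, xp[:3], xp[3:]) - observed_2d).ravel()
            J[:, j] = (rp - r) / eps
        x -= np.linalg.lstsq(J, r, rcond=None)[0]  # GN step solving J*dx = -r
    return x[:3], x[3:]
```

In a tracking loop, the pose estimated in the previous frame seeds `w0`/`t0`, which is what makes the per-frame refinement cheap enough for real time.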

    Real-time vision-based microassembly of 3D MEMS.

    Robotic microassembly is a promising way to fabricate three-dimensional (3D) compound products from micrometric components when the materials or the technologies involved are incompatible: structures, devices, Micro Electro Mechanical Systems (MEMS), Micro Opto Electro Mechanical Systems (MOEMS), etc. To date, the solutions proposed in the literature rely on 2D visual control because accurate and robust 3D measurements of the work scene have been lacking. In this paper, the relevance of real-time 3D visual tracking and control is demonstrated. The 3D pose of the MEMS is supplied in real time by a model-based tracking algorithm. It is accurate and robust enough to enable precise regulation toward zero of a 3D error using a visual servoing approach. The assembly of 400 µm × 400 µm × 100 µm parts by their 100 µm × 100 µm × 100 µm notches, with a mechanical play of 3 µm, is achieved at a rate of 41 seconds per assembly. The control accuracy reaches 0.3 µm in position and 0.2° in orientation.
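The "regulation toward zero of a 3D error" is the classical position-based visual servoing scheme: the measured pose error e (translation error stacked with the θu axis-angle orientation error) is driven to zero with the exponential control law v = -λe. A minimal sketch, assuming a perfect-tracking closed loop and made-up gain and sample time (none of these values are from the paper):

```python
import numpy as np

def servo_to_zero(e0, lam=1.0, dt=0.1, steps=200, tol=1e-4):
    """Position-based visual servoing sketch: drive the 6-vector pose
    error e (translation part + theta-u orientation part) to zero with
    the classical law v = -lam * e, integrated at period dt."""
    e = np.asarray(e0, float).copy()
    for k in range(steps):
        if np.linalg.norm(e) < tol:
            return e, k                # converged after k control periods
        v = -lam * e                   # velocity command sent to the robot
        e += v * dt                    # error dynamics under perfect tracking: de/dt = v
    return e, steps
```

Under this law the error norm decays geometrically by a factor (1 - λ·dt) per control period, which is the exponential decrease that makes the micrometre-scale positioning converge smoothly.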

    VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera

    We present the first real-time method to capture the full global 3D skeletal pose of a human in a stable, temporally consistent manner using a single RGB camera. Our method combines a new convolutional neural network (CNN) based pose regressor with kinematic skeleton fitting. Our novel fully convolutional pose formulation regresses 2D and 3D joint positions jointly in real time and does not require tightly cropped input frames. A real-time kinematic skeleton fitting method uses the CNN output to yield temporally stable 3D global pose reconstructions on the basis of a coherent kinematic skeleton. This makes our approach the first monocular RGB method usable in real-time applications such as 3D character control; thus far, the only monocular methods for such applications employed specialized RGB-D cameras. Our method's accuracy is quantitatively on par with the best offline 3D monocular RGB pose estimation methods. Our results are qualitatively comparable to, and sometimes better than, results from monocular RGB-D approaches such as the Kinect. However, we show that our approach is more broadly applicable than RGB-D solutions, i.e., it works for outdoor scenes, community videos, and low-quality commodity RGB cameras.
    Comment: Accepted to SIGGRAPH 2017
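One step such a system must solve is turning the CNN's root-relative 3D joints and its 2D detections into a *global* 3D position. VNect's actual fitting is an energy minimization over a full kinematic skeleton; the sketch below only illustrates the underlying idea with a closed-form weak-perspective alignment (the focal length and all joint coordinates are assumed values for illustration):

```python
import numpy as np

def weak_perspective_root(joints3d, joints2d, f=1000.0):
    """Estimate the global root position from root-relative 3D joints
    (N,3) and their 2D detections (N,2), with the principal point at the
    image origin. Fits joints2d ~ s * joints3d[:, :2] + t in closed form
    (least squares), then reads off depth = f / s. Degenerate if all 3D
    joints project to the same 2D point."""
    X = joints3d[:, :2]
    Xc = X - X.mean(axis=0)            # centered model coordinates
    xc = joints2d - joints2d.mean(axis=0)
    s = (Xc * xc).sum() / (Xc * Xc).sum()      # isotropic scale, LS optimum
    t = joints2d.mean(axis=0) - s * X.mean(axis=0)
    depth = f / s                               # weak-perspective depth
    return np.array([t[0] * depth / f, t[1] * depth / f, depth])
```

This is only valid when the body's depth extent is small relative to its distance from the camera; the paper's skeleton fitting additionally enforces bone lengths and temporal smoothness, which this sketch omits.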