Depth sensors in augmented reality solutions. Literature review
The emergence of depth sensors has made it possible to track not only monocular
cues but also the actual depth values of the environment. This is especially
useful in augmented reality solutions, where the position and orientation (pose) of
the observer need to be accurately determined. An accurate pose allows virtual
objects to be placed in the user's view through, for example, a tablet screen or
augmented reality glasses (e.g. Google Glass). Although early 3D sensors were
physically quite large, sensor sizes are decreasing, and eventually a 3D sensor
could be embedded, for example, in augmented reality glasses. The wider subject
area considered in this review is 3D SLAM (Simultaneous Localization and Mapping)
methods, which take advantage of the 3D information provided by modern RGB-D
sensors such as the Microsoft Kinect. A review of SLAM and 3D tracking for
augmented reality is therefore a timely subject. We also examine the limitations
and possibilities of different tracking methods, and how they should be improved
to allow efficient integration into the augmented reality solutions of the future.
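The core capability the abstract above refers to, tracking actual depth values rather than only monocular cues, rests on back-projecting each depth pixel into a 3D point with the pinhole camera model. A minimal sketch (the intrinsic parameters below are illustrative Kinect-like values, not taken from the review):

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Convert a pixel (u, v) with a measured depth (metres) into a
    3D point in the camera frame, using the pinhole camera model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Illustrative Kinect-like intrinsics (focal lengths and principal
# point in pixels); real values come from sensor calibration.
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5
p = backproject(320, 240, 2.0, fx, fy, cx, cy)
```

Applying this to every pixel of a depth frame yields the point cloud that RGB-D SLAM methods register from frame to frame.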
A feature-based approach for monocular camera tracking in unknown environments
© 2017 IEEE. Camera tracking is an important issue in many computer vision and robotics applications, such as augmented reality and Simultaneous Localization And Mapping (SLAM). In this paper, a feature-based technique for monocular camera tracking is proposed. The approach tracks a set of sparse features through a stream of video frames. In the developed system, the camera initially views a chessboard with known cell size for a few frames so that an initial map of the environment can be constructed. Thereafter, camera pose estimation for each new incoming frame is carried out in a framework that works only with a set of visible natural landmarks. Estimation of the 6-DOF camera pose parameters is performed using a particle filter, and the depth of newly detected landmarks is recovered with a linear triangulation method. The proposed method is applied to real-world videos; the positioning error of the camera pose is less than 3 cm on average, which indicates the effectiveness and accuracy of the proposed method
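The linear triangulation step mentioned in the abstract is usually the classic DLT construction: stack one linear constraint per image coordinate and take the null space via SVD. A minimal sketch with a toy two-view setup (the projection matrices and test point below are illustrative, not from the paper):

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its
    projections x1, x2 in two views with known 3x4 projection
    matrices P1, P2. Each image point contributes two rows to A;
    the homogeneous solution is the right singular vector of the
    smallest singular value."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenise

# Toy example: an identity camera and a camera translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_hat = triangulate_dlt(P1, P2, x1, x2)
```

With noise-free projections the recovered point matches the true one exactly; with noisy feature matches the SVD gives the least-squares solution.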
Recent advances in monocular model-based tracking: a systematic literature review
In this paper, we review the advances in monocular model-based tracking over
the ten-year period up to 2014. In 2005, Lepetit et al. [19] reviewed the status
of monocular model-based rigid body tracking. Since then, direct 3D tracking has
become a popular research area, but monocular model-based tracking should still
not be forgotten. We mainly focus on tracking that could be applied to augmented
reality, but some other applications are also covered. Given the wide subject
area, this paper tries to give a broad view of the research that has been
conducted, giving the reader an introduction to the different disciplines that
are tightly related to model-based tracking. The work has been conducted by
searching through well-known academic search databases in a systematic manner,
and by selecting certain publications for closer examination. We analyze the
results by dividing the found papers into categories according to their way of
implementation. Issues that have not yet been solved are discussed. We also
discuss emerging model-based methods, such as fusing different types of features
and region-based pose estimation, which could show the way for future research
in this subject.
Markerless visual servoing on unknown objects for humanoid robot platforms
To precisely reach for an object with a humanoid robot, it is of central
importance to have good knowledge of the end-effector pose as well as the
object's pose and shape. In this work we propose a framework for markerless
visual servoing on unknown objects, which is divided into four main parts: I) a
least-squares minimization problem is formulated to find the volume of the
object graspable by the robot's hand using its stereo vision; II) a recursive
Bayesian filtering technique, based on Sequential Monte Carlo (SMC) filtering,
estimates the 6D pose (position and orientation) of the robot's end-effector
without the use of markers; III) a nonlinear constrained optimization problem is
formulated to compute the desired graspable pose about the object; IV) an
image-based visual servo control commands the robot's end-effector toward the
desired pose. We demonstrate the effectiveness and robustness of our approach
with extensive experiments on the iCub humanoid robot platform, achieving
real-time computation, smooth trajectories and sub-pixel precision
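Part IV of the framework above, image-based visual servoing, is commonly built on the textbook control law v = -lambda * pinv(L) * (s - s*), where L is the interaction matrix of the tracked point features. The sketch below shows that generic formulation, not the paper's specific controller; the feature coordinates and depths are illustrative:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalised image
    point (x, y) at depth Z, relating its image motion to the 6-DOF
    camera velocity (3 translations, 3 rotations)."""
    return np.array([
        [-1 / Z,      0, x / Z,      x * y, -(1 + x**2),  y],
        [     0, -1 / Z, y / Z, 1 + y**2,        -x * y, -x],
    ])

def ibvs_velocity(points, targets, depths, lam=0.5):
    """Classic IBVS law: v = -lambda * pinv(L) * (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    e = (np.asarray(points) - np.asarray(targets)).ravel()
    return -lam * np.linalg.pinv(L) @ e

# Hypothetical example: four features already at their target
# positions should command zero camera velocity.
pts = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
v = ibvs_velocity(pts, pts, [1.0] * 4)
```

Iterating this law drives the feature error, and hence the end-effector, toward the desired pose.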
Camera pose estimation in unknown environments using a sequence of wide-baseline monocular images
In this paper, a feature-based technique for camera pose estimation in a sequence of wide-baseline images is proposed. Camera pose estimation is an important issue in many computer vision and robotics applications, such as augmented reality and visual SLAM. The proposed method can track images captured by a hand-held camera in room-sized workspaces with a maximum scene depth of 3-4 meters. The system can be used in unknown environments with no additional information from the outside world, except in the first two images, which are used for initialization. Pose estimation is performed using only natural feature points extracted and matched in successive images. In wide-baseline images, unlike consecutive frames of a video stream, the displacement of feature points between images is notable and hence cannot be traced easily with patch-based methods. To handle this problem, a hybrid strategy is employed to obtain accurate feature correspondences: initial correspondences are first found using the similarity of their descriptors, and outlier matches are then removed with the RANSAC algorithm. Further, to provide the required set of feature matches, a mechanism based on a by-product of the robust estimator is employed. The proposed method is applied to indoor real data with images in VGA quality (640×480 pixels); on average the translation error of the camera pose is less than 2 cm, which indicates the effectiveness and accuracy of the proposed approach
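The hybrid strategy described above, nearest-neighbour descriptor matching followed by RANSAC outlier removal, can be sketched as follows. For brevity the RANSAC model here is a pure 2D translation (a simplified stand-in for the epipolar geometry the paper would estimate), and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def match_descriptors(d1, d2):
    """Initial correspondences: for each descriptor in d1, the
    nearest neighbour in d2 by Euclidean distance."""
    dists = np.linalg.norm(d1[:, None, :] - d2[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

def ransac_translation(p1, p2, iters=200, thresh=0.05):
    """Simplified RANSAC: hypothesise a 2D translation from one
    random correspondence, keep the hypothesis with most inliers."""
    best_inliers = np.zeros(len(p1), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(p1))
        t = p2[i] - p1[i]                        # 1-point hypothesis
        resid = np.linalg.norm(p1 + t - p2, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic wide-baseline pair: a shared translation plus 5 gross
# mismatches that RANSAC should reject.
p1 = rng.random((30, 2))
p2 = p1 + np.array([0.3, -0.1])
p2[:5] = rng.random((5, 2))
inl = ransac_translation(p1, p2)

# Descriptor matching demo: identical descriptors in reversed order
# should be matched back to their counterparts.
d_demo = rng.random((4, 8))
matches = match_descriptors(d_demo, d_demo[::-1])
```

In the real system the descriptors would come from a feature detector and the model from two-view geometry, but the filter-then-keep-inliers structure is the same.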