Robust Estimation of Trifocal Tensors Using Natural Features for Augmented Reality Systems
Augmented reality deals with the problem of dynamically augmenting or enhancing the real world with computer-generated virtual scenes. Registration is one of the most pivotal problems currently limiting AR applications. In this paper, a novel registration method using natural features, based on online estimation of trifocal tensors, is proposed. The method consists of two stages: offline initialization and online registration. Initialization involves specifying four points in each of two reference images to build the world coordinate system on which a virtual object will be augmented. In online registration, the natural feature correspondences detected from the reference views are tracked in the current frame to build feature triples. These triples are then used to estimate the corresponding trifocal tensors in the image sequence, by which the four specified points are transferred to compute the registration matrix for augmentation. The estimated registration matrix serves as the initial estimate for a nonlinear optimization that minimizes the actual residual errors using the Levenberg-Marquardt (LM) method, making the results more robust and stable. The paper also proposes a robust method for estimating the trifocal tensors, in which a modified RANSAC algorithm is used to remove outliers. Compared with standard RANSAC, our method significantly reduces computational complexity while overcoming the disturbance of mismatches. Experiments demonstrate the validity of the proposed approach.
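The robust-estimation idea above can be illustrated with a generic RANSAC loop. The sketch below fits a 2D line rather than a trifocal tensor, and every name and value in it is hypothetical; it only demonstrates the sample-score-keep-best pattern that a modified RANSAC builds on.

```python
# Illustrative RANSAC loop (hypothetical line-fitting example, not the
# paper's trifocal-tensor estimator): repeatedly fit a model to a minimal
# sample, count inliers, and keep the best hypothesis.
import random

def fit_line(p, q):
    """Line y = a*x + b through two points (assumes p[0] != q[0])."""
    a = (q[1] - p[1]) / (q[0] - p[0])
    return a, p[1] - a * p[0]

def ransac_line(points, iters=200, thresh=0.1, seed=0):
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        p, q = rng.sample(points, 2)         # minimal sample: two points
        if p[0] == q[0]:
            continue                          # degenerate (vertical) sample
        a, b = fit_line(p, q)
        inliers = [(x, y) for x, y in points
                   if abs(y - (a * x + b)) < thresh]
        if len(inliers) > len(best_inliers):  # keep the best-supported model
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Ten points on y = 2x + 1 plus two gross outliers (mismatches).
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40.0), (7, -5.0)]
model, inliers = ransac_line(pts)
```

The paper's refinement step would then feed the consensus set to a nonlinear (LM) optimization; here the minimal-sample fit is already exact.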
Spatial calibration of an optical see-through head-mounted display
We present a method for calibrating an optical see-through head-mounted display (HMD) using techniques usually applied to camera calibration (photogrammetry). Using a camera placed inside the HMD to take pictures simultaneously of a tracked object and of features in the HMD display, we exploit established camera calibration techniques to recover both the intrinsic and extrinsic properties of the HMD (width, height, focal length, optic centre and principal ray of the display). Our method gives low re-projection errors and, unlike existing methods, involves no time-consuming and error-prone human measurements, nor any prior estimates of the HMD geometry.
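The quality metric used above, re-projection error, can be sketched with a simple pinhole model. All intrinsics and measurements below are synthetic placeholders, not values from the paper.

```python
# Hedged sketch: project a known 3D point through assumed intrinsics
# (focal length f, principal point cx, cy) and compare against the
# observed pixel. The residual is the re-projection error.

def project(f, cx, cy, X):
    """Pinhole projection of camera-frame point X = (x, y, z) to pixels."""
    return (f * X[0] / X[2] + cx, f * X[1] / X[2] + cy)

f, cx, cy = 800.0, 320.0, 240.0   # hypothetical intrinsics
X = (0.1, -0.05, 2.0)             # hypothetical point in the camera frame
observed = (360.5, 220.2)         # hypothetical measured pixel

u, v = project(f, cx, cy, X)
err = ((u - observed[0]) ** 2 + (v - observed[1]) ** 2) ** 0.5
```

Calibration methods of this kind adjust the intrinsic and extrinsic parameters to minimize such residuals over many point correspondences.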
Registration Combining Wide and Narrow Baseline Feature Tracking Techniques for Markerless AR Systems
Augmented reality (AR) is a field of computer research dealing with the combination of the real world and computer-generated data. Registration is one of the most difficult problems currently limiting the usability of AR systems. In this paper, we propose a novel registration method for AR applications based on natural feature tracking. The proposed method has the following advantages: (1) it is simple and efficient, as no man-made markers are needed for either indoor or outdoor AR applications; moreover, it works with arbitrary geometric shapes, including planar, near-planar and non-planar structures, which greatly enhances the usability of AR systems. (2) Thanks to the reduced-SIFT-based augmented optical flow tracker, the virtual scene can still be augmented on the specified areas even under occlusion and large changes in viewpoint during the entire process. (3) It is easy to use, because the adaptive-classification-tree-based matching strategy gives fast and accurate initialization, even when the initial camera view differs from the reference image to a large degree. Experimental evaluations validate the performance of the proposed method for online pose tracking and augmentation.
Estimating Epipolar Geometry With The Use of a Camera Mounted Orientation Sensor
Context: Image processing and computer vision are rapidly becoming more and more commonplace, and the amount of information about a scene, such as 3D geometry, that can be obtained from an image, or multiple images of the scene is steadily increasing due to increasing resolutions and availability of imaging sensors, and an active research community. In parallel, advances in hardware design and manufacturing are allowing for devices such as gyroscopes, accelerometers and magnetometers and GPS receivers to be included alongside imaging devices at a consumer level.
Aims: This work aims to investigate the use of orientation sensors in the field of computer vision as sources of data to aid with image processing and the determination of a scene’s geometry, in particular, the epipolar geometry of a pair of images - and devises a hybrid methodology from two sets of previous works in order to exploit the information available from orientation sensors alongside data gathered from image processing techniques.
Method: A readily available consumer-level orientation sensor was used alongside a digital camera to capture images of a set of scenes and record the orientation of the camera. The fundamental matrix of these pairs of images was calculated using a variety of techniques, both incorporating data from the orientation sensor and excluding its use.
Results: Some methodologies could not produce an acceptable result for the fundamental matrix on certain image pairs. A method described in the literature that used an orientation sensor always produced a result; however, in cases where the hybrid or purely computer-vision methods also produced a result, the sensor-based method was found to be the least accurate.
Conclusion: Results from this work show that an orientation sensor used alongside an imaging device can improve both the accuracy and reliability of calculations of the scene's geometry. However, noise from the orientation sensor can limit this accuracy, and further research is needed to determine the magnitude of this problem and possible methods of mitigation.
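The way a known orientation constrains epipolar geometry can be sketched as follows: given the relative rotation R (as an orientation sensor might supply) and a translation t, the essential matrix E = [t]x R must satisfy x2^T E x1 = 0 for corresponding normalized image points. The synthetic example below only verifies this algebraic constraint; it is not the thesis's estimation pipeline.

```python
# Hedged sketch: build E = [t]_x R for a synthetic camera pair and check
# the epipolar constraint x2^T E x1 = 0 on a projected 3D point.
import math

def cross_matrix(t):
    """Skew-symmetric matrix [t]_x such that [t]_x v = t x v."""
    return [[0, -t[2], t[1]],
            [t[2], 0, -t[0]],
            [-t[1], t[0], 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

# Second camera: rotated 10 degrees about the y-axis, translated along x,
# so that a camera-2 point is X2 = R X1 + t.
th = math.radians(10)
R = [[math.cos(th), 0, math.sin(th)],
     [0, 1, 0],
     [-math.sin(th), 0, math.cos(th)]]
t = [1.0, 0.0, 0.0]
E = matmul(cross_matrix(t), R)

# A synthetic 3D point, projected into normalized coordinates in both views.
X = [0.5, -0.2, 4.0]
x1 = [X[0] / X[2], X[1] / X[2], 1.0]              # first camera at origin
Xc = [matvec(R, X)[i] + t[i] for i in range(3)]    # point in camera-2 frame
x2 = [Xc[0] / Xc[2], Xc[1] / Xc[2], 1.0]

# Epipolar constraint: x2^T E x1 should vanish (up to rounding).
residual = sum(x2[i] * matvec(E, x1)[i] for i in range(3))
```

With R supplied by a sensor, only the translation direction remains to be estimated from image correspondences, which is what makes the hybrid approach attractive.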
Hierarchical structure-and-motion recovery from uncalibrated images
This paper addresses the structure-and-motion problem, which requires finding camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented that departs from the prevailing sequential paradigm and instead embraces a hierarchical approach. This method has several advantages, such as a provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure allows images to be processed without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method. Accepted for publication in CVI.
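The complexity and drift argument can be sketched in a toy calculation (not Samantha itself): merging partial models pairwise in a balanced tree gives a merge depth of about log2(n) for n images, versus n - 1 for a sequential chain, so errors pass through far fewer merge steps.

```python
# Toy sketch of the hierarchical idea: compare the number of merge steps
# any single image's error can propagate through in a sequential chain
# versus a balanced pairwise merge tree. Purely illustrative.
import math

def merge_depth_sequential(n):
    # Each new image is merged into one growing model: a chain of depth n-1.
    return n - 1

def merge_depth_hierarchical(n):
    # Partial models are merged pairwise in a balanced tree of depth ~log2(n).
    return math.ceil(math.log2(n))

depths = {n: (merge_depth_sequential(n), merge_depth_hierarchical(n))
          for n in (8, 64, 1024)}
```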
Visual Enhancement for Sports Entertainment by Vision-Based Augmented Reality
This paper presents visually enhanced sports entertainment applications: an AR Baseball Presentation System and an Interactive AR Bowling System. We utilize vision-based augmented reality to create an immersive feeling. The first application is an observation system for a virtual baseball game on the tabletop: 3D virtual players play a game on a real baseball field model, so that users can observe the game from their favorite viewpoints through a handheld monitor with a web camera. The second application is a bowling system that allows users to roll a real ball down a real bowling lane model on the tabletop and knock down virtual pins, which the users watch through the monitor. The lane and the ball are also tracked by vision-based tracking. In both applications, we utilize multiple 2D markers distributed at arbitrary positions and directions. Even though the geometrical relationship among the markers is unknown, we can track the camera over a very wide area.
Projector-Based Augmentation
Projector-based augmentation approaches hold the potential of combining the advantages of well-established spatial virtual reality and spatial augmented reality. Immersive, semi-immersive and augmented visualizations can be realized in everyday environments, without the need for special projection screens and dedicated display configurations. Limitations of mobile devices, such as low resolution and small field of view, focus constraints, and ergonomic issues can be overcome in many cases by the utilization of projection technology. Thus, applications that do not require mobility can benefit from efficient spatial augmentations. Examples range from edutainment in museums (such as storytelling projections onto natural stone walls in historical buildings) to architectural visualizations (such as augmentations of complex illumination simulations or modified surface materials in real building structures). This chapter describes projector-camera methods and multi-projector techniques that aim at correcting geometric aberrations, compensating local and global radiometric effects, and improving focus properties of images projected onto everyday surfaces.
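The geometric-correction idea can be sketched with a single planar homography: pre-warping the content by the inverse of the projector-to-surface homography H cancels the distortion introduced by an oblique surface. The matrix below is synthetic, and the sketch ignores the radiometric and focus compensation the chapter also covers.

```python
# Hedged sketch of homography-based geometric correction: content sent to
# the projector is first warped by H^-1, so the physical projection (an
# application of H) lands undistorted. H here is a made-up example.

def apply_h(H, pt):
    """Apply 3x3 homography H to a 2D point (homogeneous normalization)."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def invert_h(H):
    """Inverse of a 3x3 matrix via the adjugate."""
    a, b, c = H[0]; d, e, f = H[1]; g, h, i = H[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[v / det for v in row] for row in adj]

# A mild projective distortion of the projection surface (synthetic):
H = [[1.1, 0.05, 3.0], [0.02, 0.95, -2.0], [1e-4, 2e-4, 1.0]]
corner = (100.0, 50.0)
prewarped = apply_h(invert_h(H), corner)   # content sent to the projector
observed = apply_h(H, prewarped)           # what lands on the surface
```

In a projector-camera system, H would be estimated by projecting known patterns and observing them with the camera; multi-projector setups chain one such correction per device.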
Optical versus video see-through head-mounted displays in medical visualization
We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. We provide a context for discussing the technology by reviewing several medical applications of augmented-reality research efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues for each approach, optical versus video, from both a technology and a human-factors point of view. Finally, we point to potentially promising future developments of such devices, including eye tracking and multifocus-plane capabilities, as well as hybrid optical/video technology.