Cavlectometry: Towards Holistic Reconstruction of Large Mirror Objects
We introduce a method based on the deflectometry principle for the
reconstruction of specular objects exhibiting significant size and geometric
complexity. A key feature of our approach is the deployment of a Cave
Automatic Virtual Environment (CAVE) as pattern generator. To unfold the full power of
this extraordinary experimental setup, an optical encoding scheme is developed
which accounts for the distinctive topology of the CAVE. Furthermore, we devise
an algorithm for detecting the object of interest in raw deflectometric images.
The segmented foreground is used for single-view reconstruction, the background
for estimation of the camera pose, necessary for calibrating the sensor system.
Experiments suggest a significant gain in coverage in single measurements
compared to previous methods. To facilitate research on specular surface
reconstruction, we will make our data set publicly available.
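The deflectometric principle behind this setup reduces, per surface point, to the law of reflection: once the camera position and the decoded screen point are known, the specular surface normal is the bisector of the directions toward the camera and toward the screen point. A minimal sketch of that step (the geometry here is synthetic and hypothetical, not the paper's CAVE encoding):

```python
import numpy as np

def normal_from_deflectometry(surface_pt, camera_pos, screen_pt):
    """Specular surface normal at a point: the bisector of the unit
    directions toward the camera and toward the decoded screen point."""
    to_cam = camera_pos - surface_pt
    to_screen = screen_pt - surface_pt
    n = to_cam / np.linalg.norm(to_cam) + to_screen / np.linalg.norm(to_screen)
    return n / np.linalg.norm(n)

# Synthetic check: reflect a camera ray about a known normal to obtain a
# screen point, then verify the normal is recovered from that geometry.
surface_pt = np.array([0.0, 0.0, 0.0])
true_n = np.array([0.0, 0.0, 1.0])
camera_pos = np.array([0.3, -0.2, 1.0])
d = surface_pt - camera_pos                 # incoming ray direction
r = d - 2 * np.dot(d, true_n) * true_n      # reflected ray direction
screen_pt = surface_pt + 2.0 * r            # decoded point on the pattern screen
n_est = normal_from_deflectometry(surface_pt, camera_pos, screen_pt)
```

In a full deflectometric pipeline this per-pixel normal field is then integrated into a surface; the sketch only covers the normal-recovery step.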
Spatial calibration of an optical see-through head-mounted display
We present a method for calibrating an optical see-through Head Mounted Display (HMD) using techniques usually applied to camera calibration (photogrammetry). Using a camera placed inside the HMD to take pictures simultaneously of a tracked object and of features in the HMD display, we can exploit established camera calibration techniques to recover both the intrinsic and extrinsic properties of the HMD (width, height, focal length, optic centre and principal ray of the display). Our method gives low re-projection errors and, unlike existing methods, requires neither time-consuming, error-prone human measurements nor any prior estimate of the HMD geometry.
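The photogrammetric machinery such methods borrow can be sketched with a plain Direct Linear Transform followed by an RQ decomposition of the projection matrix: 3D-2D correspondences yield a 3x4 matrix P, whose left 3x3 block factors into intrinsics K and rotation R. This is generic camera calibration, not the paper's HMD-specific procedure, and all numbers below are illustrative:

```python
import numpy as np

def dlt_projection_matrix(X, x):
    """Estimate the 3x4 projection matrix P from n >= 6 correspondences
    between 3D points X (n,3) and 2D image points x (n,2) via SVD."""
    rows = []
    for Xw, (u, v) in zip(X, x):
        Xh = np.append(Xw, 1.0)
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        rows.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)

def decompose_intrinsics(P):
    """Split P = K [R | t]: RQ-decompose the left 3x3 block (via QR on a
    permuted transpose), then fix signs so diag(K) is positive."""
    E = np.eye(3)[::-1]                       # row-reversal permutation
    Q, R = np.linalg.qr((E @ P[:, :3]).T)
    K, Rot = E @ R.T @ E, E @ Q.T
    S = np.diag(np.sign(np.diag(K)))          # enforce positive diagonal
    K, Rot = K @ S, S @ Rot
    return K / K[2, 2], Rot

# Synthetic pinhole camera with known ground-truth intrinsics.
K_true = np.array([[900.0, 0, 64], [0, 900.0, 48], [0, 0, 1]])
c, s = np.cos(0.1), np.sin(0.1)
R_true = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
t_true = np.array([0.1, -0.2, 5.0])
X = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
              [1, 1, 0.5], [0.5, 1, 1], [1, 0.3, 0.7]], dtype=float)
xh = (K_true @ (R_true @ X.T + t_true[:, None])).T
x = xh[:, :2] / xh[:, 2:]
K_est, _ = decompose_intrinsics(dlt_projection_matrix(X, x))
```

With noise-free correspondences the recovered K matches the ground truth to numerical precision; a real HMD calibration would add lens distortion terms and noisy feature detections on top of this core.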
Creating virtual models from uncalibrated camera views
The reconstruction of photorealistic 3D models from camera views is becoming a ubiquitous element in many applications that simulate physical interaction with the real world. In this paper, we present a low-cost, interactive pipeline aimed at non-expert users that achieves 3D reconstruction from multiple views acquired with a standard digital camera. 3D models are amenable to access through diverse representation modalities, which typically imply trade-offs between level of detail, interaction, and computational cost. Our approach allows users to selectively control the complexity of different surface regions while requiring only simple 2D image editing operations. An initial reconstruction at coarse resolution is followed by iterative refinement of the surface areas corresponding to the selected regions.
LiDAR and Camera Detection Fusion in a Real Time Industrial Multi-Sensor Collision Avoidance System
Collision avoidance is a critical task in many applications, such as ADAS
(advanced driver-assistance systems), industrial automation and robotics. In an
industrial automation setting, certain areas should be off limits to an
automated vehicle for protection of people and high-valued assets. These areas
can be quarantined by mapping (e.g., GPS) or via beacons that delineate a
no-entry area. We propose a delineation method where the industrial vehicle
utilizes a LiDAR (Light Detection and Ranging) and a single color camera to
detect passive beacons and model-predictive control to stop the vehicle from
entering a restricted space. The beacons are standard orange traffic cones with
a highly reflective vertical pole attached. The LiDAR can readily detect these
beacons, but suffers from false positives due to other reflective surfaces such
as worker safety vests. Herein, we put forth a method for reducing false
positive detection from the LiDAR by projecting the beacons in the camera
imagery via a deep learning method and validating the detection using a neural
network-learned projection from the camera to the LiDAR space. Experimental
data collected at Mississippi State University's Center for Advanced Vehicular
Systems (CAVS) shows the effectiveness of the proposed system in retaining
true detections while mitigating false positives.
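The validation step can be sketched with a simple stand-in for the learned projection: here an affine map from image coordinates to the LiDAR ground plane is fitted by least squares (a hypothetical simplification of the paper's network-learned camera-to-LiDAR projection), and a LiDAR detection is kept only if some camera detection projects near it:

```python
import numpy as np

# Hypothetical synthetic data: image detections (u, v) of beacons and the
# matching beacon positions (x, y) on the LiDAR ground plane, generated
# here from a known affine map so the least-squares fit is exact.
img = np.array([[100, 400], [320, 380], [540, 410],
                [200, 300], [450, 320]], dtype=float)
W_true = np.array([[0.02, 0.0], [0.0, -0.03], [-5.0, 12.0]])
A = np.column_stack([img, np.ones(len(img))])
lidar = A @ W_true

# Fit an affine camera-to-ground-plane map, a stand-in for the paper's
# network-learned projection from camera space to LiDAR space.
W, *_ = np.linalg.lstsq(A, lidar, rcond=None)

def validate(lidar_hits, cam_dets, gate=1.0):
    """Keep a LiDAR detection only if some camera detection projects to
    within `gate` metres of it; everything else counts as a false positive."""
    proj = np.column_stack([cam_dets, np.ones(len(cam_dets))]) @ W
    keep = [hit for hit in lidar_hits
            if np.min(np.linalg.norm(proj - hit, axis=1)) < gate]
    return np.array(keep)

# A reflective safety vest shows up in LiDAR at (50, 50), but no camera
# detection projects nearby, so it is rejected as a false positive.
kept = validate(np.vstack([lidar, [[50.0, 50.0]]]), img)
```

The gating threshold plays the role of the cross-sensor consistency check; the paper's deep-learning detector and learned projection would replace the affine fit, but the accept/reject logic is the same shape.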
Reconstruction of hidden 3D shapes using diffuse reflections
We analyze multi-bounce propagation of light in an unknown hidden volume and
demonstrate that the reflected light contains sufficient information to recover
the 3D structure of the hidden scene. We formulate the forward and inverse
theory of secondary and tertiary scattering reflection using ideas from energy
front propagation and tomography. We show that using careful choice of
approximations, such as Fresnel approximation, greatly simplifies this problem
and the inversion can be achieved via a backpropagation process. We provide a
theoretical analysis of the invertibility, uniqueness and choices of
space-time-angle dimensions using synthetic examples. We show that a 2D streak
camera can be used to discover and reconstruct hidden geometry. Using a 1D
high-speed time-of-flight camera, we show that our method can be used to
recover the 3D shapes of objects "around the corner".
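The backpropagation inversion can be illustrated in a toy 2D confocal variant (a deliberate simplification of the paper's streak-camera setup): each wall point records a transient peaked at the round-trip distance to the hidden scene, and every candidate voxel accumulates the measured signal at the path length it would produce, so true scene points pile up coherently:

```python
import numpy as np

# Toy 2D confocal setup: wall sample points along the x-axis and a single
# hidden scene point to recover (all values synthetic).
wall = np.stack([np.linspace(-1, 1, 41), np.zeros(41)], axis=1)
hidden = np.array([0.3, 0.8])

# Transient per wall point: a Gaussian bump at the round-trip distance
# wall -> hidden point -> wall (speed of light folded into distance units).
bins = np.linspace(0, 6, 600)
d_true = 2 * np.linalg.norm(wall - hidden, axis=1)
transients = np.exp(-((bins[None, :] - d_true[:, None]) ** 2) / (2 * 0.02 ** 2))

# Backprojection: each candidate voxel sums, over wall points, the
# transient value at the round-trip distance that voxel would generate.
xs, ys = np.linspace(-1, 1, 81), np.linspace(0.1, 1.5, 71)
heat = np.zeros((len(ys), len(xs)))
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        d = 2 * np.hypot(wall[:, 0] - x, wall[:, 1] - y)
        idx = np.clip(np.searchsorted(bins, d), 0, len(bins) - 1)
        heat[i, j] = transients[np.arange(len(wall)), idx].sum()

iy, jx = np.unravel_index(np.argmax(heat), heat.shape)
recovered = np.array([xs[jx], ys[iy]])
```

The heatmap peak lands at the hidden point; a full reconstruction would backproject into a 3D voxel grid and apply the filtering and approximations (e.g. Fresnel) the abstract mentions to sharpen the result.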
Single View 3D Reconstruction under an Uncalibrated Camera and an Unknown Mirror Sphere
In this paper, we develop a novel self-calibration method for single view 3D reconstruction using a mirror sphere. Unlike other mirror sphere based reconstruction methods, our method requires neither the intrinsic parameters of the camera nor the position and radius of the sphere to be known. Based on an eigen decomposition of the matrix representing the conic image of the sphere and the enforcement of a repeated eigenvalue constraint, we derive an analytical solution for recovering the focal length of the camera given its principal point. We then introduce a robust algorithm for estimating both the principal point and the focal length of the camera by minimizing the differences between focal lengths estimated from multiple images of the sphere. We also present a novel approach for estimating both the principal point and the focal length of the camera from just one single image of the sphere. With the estimated camera intrinsic parameters, the position(s) of the sphere can be readily retrieved from the eigen decomposition(s), and a scaled 3D reconstruction follows. Experimental results on both synthetic and real data are presented, which demonstrate the feasibility and accuracy of our approach. © 2016 IEEE.
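The repeated-eigenvalue idea can be reproduced on synthetic data: the cone of rays tangent to a sphere with centre S and radius r satisfies d^T M d = 0 with M = (|S|^2 - r^2) I - S S^T, whose eigenvalues include a repeated pair, and the image conic is C = K^-T M K^-1. The focal length is therefore the value that restores the eigenvalue repetition in K^T C K. This is a hedged numerical sketch of the constraint (a grid search), not the paper's analytical solution:

```python
import numpy as np

# Synthetic sphere (centre S, radius r) seen by a pinhole camera with a
# known principal point and a ground-truth focal length to recover.
S, r = np.array([1.0, 0.5, 5.0]), 1.0
f_true, pp = 800.0, (320.0, 240.0)

def intrinsics(f):
    return np.array([[f, 0, pp[0]], [0, f, pp[1]], [0, 0, 1.0]])

# Tangent cone to the sphere: d^T M d = 0; image conic C = K^-T M K^-1.
M = (S @ S - r ** 2) * np.eye(3) - np.outer(S, S)
Kinv = np.linalg.inv(intrinsics(f_true))
C = Kinv.T @ M @ Kinv

def repeated_eig_cost(f):
    """K^T C K has a repeated eigenvalue exactly at the true focal length;
    return the smallest relative gap between adjacent eigenvalues."""
    lam = np.sort(np.linalg.eigvalsh(intrinsics(f).T @ C @ intrinsics(f)))
    gaps = np.diff(lam) / (np.abs(lam[:-1]) + np.abs(lam[1:]))
    return gaps.min()

fs = np.linspace(400, 1200, 1601)
f_est = fs[np.argmin([repeated_eig_cost(f) for f in fs])]
```

With the conic measured from a real image, the same cost would be minimized over noisy data, and the paper's multi-image scheme additionally searches over the principal point by demanding consistent focal lengths across sphere images.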