Low Power Depth Estimation of Rigid Objects for Time-of-Flight Imaging
Depth sensing is useful in a variety of applications that range from
augmented reality to robotics. Time-of-flight (TOF) cameras are appealing
because they obtain dense depth measurements with minimal latency. However, for
many battery-powered devices, the illumination source of a TOF camera is power
hungry and can limit the battery life of the device. To address this issue, we
present an algorithm that lowers the power for depth sensing by reducing the
usage of the TOF camera and estimating depth maps using concurrently collected
images. Our technique also adaptively controls the TOF camera and enables it
when an accurate depth map cannot be estimated. To ensure that the overall
system power for depth sensing is reduced, we design our algorithm to run on a
low-power embedded platform, where it outputs 640x480 depth maps at 30 frames
per second. We evaluate our approach on several RGB-D datasets, where it
produces depth maps with an overall mean relative error of 0.96% and reduces
the usage of the TOF camera by 85%. When used with commercial TOF cameras, we
estimate that our algorithm can lower the total power for depth sensing by up
to 73%.
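As a hedged illustration of the duty-cycling idea in this abstract, the sketch below alternates between real TOF captures and estimated depth maps, re-enabling the sensor only when the estimate looks unreliable. The `estimate_depth` routine, its confidence score, and the threshold are hypothetical placeholders, not the paper's actual estimator:

```python
import numpy as np

def mean_relative_error(est, ref, eps=1e-6):
    """Mean relative depth error over valid (nonzero) reference pixels."""
    mask = ref > eps
    return np.mean(np.abs(est[mask] - ref[mask]) / ref[mask])

def adaptive_depth_stream(rgb_frames, tof_camera, estimate_depth, err_thresh=0.02):
    """Yield one depth map per RGB frame, firing the TOF illumination only
    when the estimated map is judged unreliable (hypothetical criterion)."""
    prev_rgb = next(rgb_frames)
    depth = tof_camera.capture()          # one real measurement to start from
    yield depth
    for rgb in rgb_frames:
        est, confidence_err = estimate_depth(prev_rgb, rgb, depth)
        if confidence_err > err_thresh:   # estimate too uncertain: re-enable TOF
            depth = tof_camera.capture()
        else:
            depth = est                   # keep the sensor (and its light) off
        prev_rgb = rgb
        yield depth
```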
Better Feature Tracking Through Subspace Constraints
Feature tracking in video is a crucial task in computer vision. Usually, the
tracking problem is handled one feature at a time, using a single-feature
tracker like the Kanade-Lucas-Tomasi algorithm, or one of its derivatives.
While this approach works quite well when dealing with high-quality video and
"strong" features, it often falters when faced with dark and noisy video
containing low-quality features. We present a framework for jointly tracking a
set of features, which enables sharing information between the different
features in the scene. We show that our method can be employed to track
features for both rigid and nonrigid motions (possibly of a few moving bodies)
even when some features are occluded. Furthermore, it can be used to
significantly improve tracking results in poorly-lit scenes (where there is a
mix of good and bad features). Our approach does not require direct modeling of
the structure or the motion of the scene, and runs in real time on a single CPU
core.
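The joint-tracking idea leans on a classical result: under an affine camera model, the 2F x P matrix of rigid-scene feature trajectories has rank at most 4. A minimal sketch of how that constraint can share information across tracks (the correction heuristic below is a simplification, not the paper's method):

```python
import numpy as np

def project_to_subspace(W, rank=4):
    """Project a 2F x P trajectory matrix onto its best rank-r approximation.
    Rows: x then y coordinates per frame; columns: features."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s[rank:] = 0.0                        # keep only the dominant subspace
    return (U * s) @ Vt

def correct_weak_tracks(trajectories, weak_cols, rank=4):
    """Replace unreliable feature tracks by their subspace projection,
    letting the reliable tracks inform the weak ones (simplified heuristic)."""
    W = np.asarray(trajectories, dtype=float)
    W_low = project_to_subspace(W, rank)
    W_corrected = W.copy()
    W_corrected[:, weak_cols] = W_low[:, weak_cols]
    return W_corrected
```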
New image processing tools for structural dynamic monitoring
This paper presents an introduction to structural damage assessment using image processing on real data (non-ideal conditions). Our contribution is more of a groundwork study than a classical experimental validation. After measuring the bridge's dynamic parameters from a low-resolution video, we present both the advantages and limitations of our method. Finally, we introduce several "computer vision" based rules and focus on the technical ability to detect damage using a camera and video motion estimation.
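As an illustration of the kind of measurement involved, here is a minimal sketch that recovers a dominant vibration frequency from a displacement time series extracted from video. The motion-estimation step is assumed to have happened already, and the synthetic usage example is purely illustrative:

```python
import numpy as np

def dominant_frequency(displacement, fps):
    """Return the dominant vibration frequency (Hz) of a 1-D displacement
    signal sampled at `fps` frames per second, via an FFT peak."""
    x = displacement - np.mean(displacement)      # remove the static offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]     # skip the DC bin

# Usage: a synthetic 2.4 Hz oscillation sampled at 30 fps
t = np.arange(0, 10, 1 / 30)
signal = 0.5 * np.sin(2 * np.pi * 2.4 * t) + 0.05 * np.random.randn(t.size)
print(dominant_frequency(signal, fps=30))         # ~2.4
```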
Vision-Based Navigation III: Pose and Motion from Omnidirectional Optical Flow and a Digital Terrain Map
An algorithm for pose and motion estimation using corresponding features in
omnidirectional images and a digital terrain map is proposed. In a previous paper, such an algorithm was considered for a regular camera. Using a Digital
Terrain (or Digital Elevation) Map (DTM/DEM) as a global reference enables
recovering the absolute position and orientation of the camera. In order to do
this, the DTM is used to formulate a constraint between corresponding features
in two consecutive frames. In this paper, these constraints are extended to
handle non-central projection, as is the case with many omnidirectional
systems. The utilization of omnidirectional data is shown to improve the
robustness and accuracy of the navigation algorithm. The feasibility of this
algorithm is established through lab experimentation with two kinds of
omnidirectional acquisition systems: the first uses a polydioptric camera, while the second uses a catadioptric camera.
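The central building block of such a method is intersecting a feature's viewing ray with the terrain model. A rough sketch of that step alone, using naive ray marching against a DEM grid (the step size, bilinear lookup, and bounds handling are simplistic assumptions, not the paper's formulation):

```python
import numpy as np

def dem_height(dem, cell_size, x, y):
    """Bilinear height lookup in a DEM given metric ground coordinates.
    (Bounds checks are omitted in this sketch.)"""
    i, j = x / cell_size, y / cell_size
    i0, j0 = int(i), int(j)
    di, dj = i - i0, j - j0
    h = dem[i0:i0 + 2, j0:j0 + 2]
    return (h[0, 0] * (1 - di) * (1 - dj) + h[1, 0] * di * (1 - dj)
            + h[0, 1] * (1 - di) * dj + h[1, 1] * di * dj)

def intersect_ray_dem(origin, direction, dem, cell_size, step=1.0, max_range=5000.0):
    """March along a viewing ray until it first drops below the terrain."""
    d = direction / np.linalg.norm(direction)
    t = 0.0
    while t < max_range:
        p = origin + t * d
        if p[2] <= dem_height(dem, cell_size, p[0], p[1]):
            return p                      # first terrain hit (approximate)
        t += step
    return None                           # ray leaves the mapped area
```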
Independent Motion Detection with Event-driven Cameras
Unlike standard cameras that send intensity images at a constant frame rate,
event-driven cameras asynchronously report pixel-level brightness changes,
offering low latency and high temporal resolution (both on the order of microseconds). As such, they have great potential for fast and low-power
vision algorithms for robots. Visual tracking, for example, is easily achieved
even for very fast stimuli, as only moving objects cause brightness changes.
However, cameras mounted on a moving robot are typically non-stationary and the
same tracking problem becomes confounded by background clutter events due to
the robot ego-motion. In this paper, we propose a method for segmenting the
motion of an independently moving object for event-driven cameras. Our method
detects and tracks corners in the event stream and learns the statistics of
their motion as a function of the robot's joint velocities when no
independently moving objects are present. During robot operation, independently
moving objects are identified by discrepancies between the predicted corner
velocities from ego-motion and the measured corner velocities. We validate the
algorithm on data collected from the neuromorphic iCub robot. We achieve a
precision of ~90% and show that the method is robust to changes in the speed of both the head and the target.
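A condensed sketch of the discrepancy test described above, assuming a simple linear least-squares map from joint velocities to expected corner velocity; the paper learns richer motion statistics, so the model and threshold here are stand-ins:

```python
import numpy as np

class EgoMotionModel:
    """Linear map from robot joint velocities to expected corner velocity,
    fitted on data recorded with no independently moving objects."""

    def fit(self, joint_vels, corner_vels):
        # Least-squares solution of corner_vels ~ joint_vels @ W
        self.W, *_ = np.linalg.lstsq(joint_vels, corner_vels, rcond=None)
        resid = corner_vels - joint_vels @ self.W
        self.sigma = resid.std(axis=0)    # per-axis residual spread
        return self

    def is_independent(self, joint_vel, corner_vel, k=3.0):
        """Flag a corner whose measured velocity deviates from the
        ego-motion prediction by more than k standard deviations."""
        pred = joint_vel @ self.W
        return np.any(np.abs(corner_vel - pred) > k * self.sigma)
```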
Sparse optical flow regularisation for real-time visual tracking
Optical flow can greatly improve the robustness of visual tracking algorithms. While dense optical flow algorithms have various applications, they cannot be used for real-time solutions without resorting to GPU calculations. Furthermore, most optical flow algorithms fail in challenging lighting environments due to violation of the brightness constancy constraint. We propose a simple but effective iterative regularisation scheme for real-time, sparse optical flow algorithms that is shown to be robust to sudden illumination changes and can handle large displacements. The algorithm outperforms well-known techniques on real-life video sequences while being much faster to compute. Our solution increases the robustness of a real-time particle-filter-based tracking application while consuming only a fraction of the available CPU power. Furthermore, a new and realistic optical flow dataset with annotated ground truth is created and made freely available for research purposes.
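One plausible reading of such an iterative regularisation step, sketched below: each sparse flow vector is pulled toward the median flow of its spatial neighbours, damping outliers from bad matches. The neighbourhood radius, damping factor, and median update are assumptions standing in for the paper's actual scheme:

```python
import numpy as np

def regularise_sparse_flow(points, flow, radius=30.0, iters=3, alpha=0.5):
    """Iteratively pull each sparse flow vector toward the median flow of
    its spatial neighbours, suppressing outlier vectors."""
    points = np.asarray(points, dtype=float)
    flow = np.asarray(flow, dtype=float).copy()
    for _ in range(iters):
        updated = flow.copy()
        for i, p in enumerate(points):
            dists = np.linalg.norm(points - p, axis=1)
            nbrs = (dists < radius) & (dists > 0)     # spatial neighbourhood
            if np.any(nbrs):
                med = np.median(flow[nbrs], axis=0)
                updated[i] = (1 - alpha) * flow[i] + alpha * med
        flow = updated
    return flow
```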