Dealing with multi-scale depth changes and motion in depth edge detection
Sharp discontinuities in depth, or depth edges, are very important low-level features for scene understanding. Recently, we have proposed a solution to the depth edge detection problem using a simple modification of the capture setup: a multi-flash camera with flashes appropriately positioned to cast shadows along depth discontinuities in the scene. In this paper, we show that by varying illumination parameters, such as the number, spatial position, and wavelength of light sources, we are able to handle fundamental problems in depth edge detection, including multi-scale depth changes and motion. The robustness of our methods is demonstrated through our experimental results in complex scenes.
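For intuition, the shadow cue behind this line of work can be sketched as a ratio test: the per-pixel maximum over the flash images approximates a shadow-free composite, and a sharp drop in a per-flash ratio image marks a cast shadow abutting a depth edge. The sketch below is illustrative, not the paper's method; the threshold and the per-flash scan axes are assumptions.

```python
import numpy as np

def depth_edges_multiflash(images, flash_axes, thresh=0.3):
    """Illustrative multi-flash depth-edge sketch.

    images:     grayscale frames, one per flash position
    flash_axes: per-flash axis along which the cast shadow falls
                (an assumption; the real setup uses flash geometry)
    """
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    imax = stack.max(axis=0) + 1e-6            # shadow-free composite
    edges = np.zeros(imax.shape, dtype=bool)
    for im, axis in zip(stack, flash_axes):
        ratio = im / imax                      # ~1 when lit, <1 in cast shadow
        step = np.gradient(ratio, axis=axis)   # sharp change at shadow boundary
        edges |= np.abs(step) > thresh         # threshold is illustrative
    return edges
```

A real implementation would additionally use the flash direction to keep only the transition on the occluding side of the shadow.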
Object-based 2D-to-3D video conversion for effective stereoscopic content generation in 3D-TV applications
Three-dimensional television (3D-TV) has gained increasing popularity in the broadcasting domain, as it enables enhanced viewing experiences in comparison to conventional two-dimensional (2D) TV. However, its application has been constrained due to the lack of essential contents, i.e., stereoscopic videos. To alleviate such content shortage, an economical and practical solution is to reuse the huge media resources that are available in monoscopic 2D and convert them to stereoscopic 3D. Although stereoscopic video can be generated from monoscopic sequences using depth measurements extracted from cues like focus blur, motion and size, the quality of the resulting video may be poor as such measurements are usually arbitrarily defined and appear inconsistent with the real scenes. To help solve this problem, a novel method for object-based stereoscopic video generation is proposed which features i) optical-flow based occlusion reasoning in determining depth ordinal, ii) object segmentation using improved region-growing from masks of determined depth layers, and iii) a hybrid depth estimation scheme using content-based matching (inside a small library of true stereo image pairs) and depth-ordinal based regularization. Comprehensive experiments have validated the effectiveness of our proposed 2D-to-3D conversion method in generating stereoscopic videos of consistent depth measurements for 3D-TV applications.
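Once a depth map has been estimated, the second view of a stereo pair is commonly synthesized by depth-image-based rendering: each pixel is shifted horizontally by a disparity proportional to its depth. A minimal sketch, assuming a grayscale image and a depth map normalized to [0, 1]; the `max_disp` scale and the naive hole filling are illustrative choices, not taken from the paper.

```python
import numpy as np

def synthesize_right_view(left, depth, max_disp=8):
    """Minimal depth-image-based rendering sketch (not the paper's pipeline).

    left:  H x W grayscale image
    depth: H x W depth in [0, 1]; nearer pixels get a larger disparity
    """
    h, w = left.shape
    right = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    disp = np.round(depth * max_disp).astype(int)
    for y in range(h):
        for x in range(w):
            xs = x - disp[y, x]            # shift pixel by its disparity
            if 0 <= xs < w:
                right[y, xs] = left[y, x]
                filled[y, xs] = True
    # naive hole filling: copy the left neighbor (a crude background guess)
    for y in range(h):
        for x in range(1, w):
            if not filled[y, x]:
                right[y, x] = right[y, x - 1]
    return right
```

Production renderers draw near pixels last (z-ordering) and inpaint disocclusions more carefully; the sketch only shows the geometric core.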
Online real-time crowd behavior detection in video sequences
Automatically detecting events in crowded scenes is a challenging task in computer vision. A number of offline approaches have been proposed for solving the problem of crowd behavior detection; however, the offline assumption limits their application in real-world video surveillance systems. In this paper, we propose an online, real-time method for detecting events in crowded video sequences. The proposed approach is based on the combination of visual feature extraction and image segmentation, and it works without the need for a training phase. A quantitative experimental evaluation has been carried out on multiple publicly available video sequences, containing data from various crowd scenarios and different types of events, to demonstrate the effectiveness of the approach.
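An online, training-free detector in this spirit can be sketched as a running statistic over a global motion-energy signal: a frame is flagged when its energy deviates strongly from an exponentially weighted history. This is an illustrative baseline only, not the paper's combination of visual features and segmentation; the feature (mean absolute frame difference) and the parameters are assumptions.

```python
import numpy as np

def online_motion_anomaly(frames, alpha=0.05, k=3.0):
    """Flag frames whose global motion energy deviates by more than
    k standard deviations from an exponentially weighted running mean.
    Runs online: one pass, no training phase."""
    flags, prev = [], None
    mean = var = None
    for f in frames:
        f = np.asarray(f, dtype=float)
        if prev is None:
            flags.append(False)                  # no motion signal yet
        else:
            energy = np.mean(np.abs(f - prev))   # crude motion measure
            if mean is None:
                mean, var = energy, 1e-6         # initialize statistics
                flags.append(False)
            else:
                flags.append(abs(energy - mean) > k * np.sqrt(var))
                mean = (1 - alpha) * mean + alpha * energy
                var = (1 - alpha) * var + alpha * (energy - mean) ** 2
        prev = f
    return flags
```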
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
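The event representation described above can be made concrete in a few lines: one common preprocessing step is to accumulate the asynchronous stream into a signed brightness-change image over a time window. The tuple layout `(t, x, y, polarity)` is an assumption for illustration; real sensor drivers use their own packet formats.

```python
import numpy as np

def events_to_frame(events, shape, t0, t1):
    """Accumulate an event stream into a signed change image over [t0, t1).

    events: iterable of (t, x, y, polarity) tuples (layout is an assumption)
    shape:  (height, width) of the sensor
    """
    frame = np.zeros(shape, dtype=np.int32)
    for t, x, y, p in events:
        if t0 <= t < t1:
            frame[y, x] += 1 if p > 0 else -1   # +1 ON event, -1 OFF event
    return frame
```

Many of the learning-based techniques surveyed start from exactly this kind of grid representation (event frames, time surfaces, or voxel grids).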
MoDeep: A Deep Learning Framework Using Motion Features for Human Pose Estimation
In this work, we propose a novel and efficient method for articulated human pose estimation in videos using a convolutional network architecture, which incorporates both color and motion features. We propose a new human body pose dataset, FLIC-motion, that extends the FLIC dataset with additional motion features. We apply our architecture to this dataset and report significantly better performance than current state-of-the-art pose detection systems.
Guided Filtering based Pyramidal Stereo Matching for Unrectified Images
Stereo matching deals with recovering quantitative
depth information from a set of input images, based on the visual
disparity between corresponding points. Generally most of the
algorithms assume that the processed images are rectified. As
robotics becomes popular, conducting stereo matching in the
context of cloth manipulation, such as obtaining the disparity
map of the garments from the two cameras of the cloth folding
robot, is useful and challenging. This is resulted from the fact of
the high efficiency, accuracy and low memory requirement under
the usage of high resolution images in order to capture the details
(e.g. cloth wrinkles) for the given application (e.g. cloth folding).
Meanwhile, the images can be unrectified. Therefore, we propose
to adapt guided filtering algorithm into the pyramidical stereo
matching framework that works directly for unrectified images.
To evaluate the proposed unrectified stereo matching in terms of
accuracy, we present three datasets that are suited to especially
the characteristics of the task of cloth manipulations. By com-
paring the proposed algorithm with two baseline algorithms on
those three datasets, we demonstrate that our proposed approach
is accurate, efficient and requires low memory. This also shows
that rather than relying on image rectification, directly applying
stereo matching through the unrectified images can be also quite
effective and meanwhile efficien
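The guided filter named in the title has a simple closed form (He et al.): fit a local linear model of the filtered output on the guide image and average the coefficients. A sketch of that filter, as it is typically used to aggregate matching costs edge-awarely; the radius and regularizer are illustrative parameters, not values from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Guided image filtering sketch: smooth `src` while preserving the
    edges of `guide`. Used in stereo to aggregate cost-volume slices."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size=size)   # box mean filter
    I, p = np.asarray(guide, float), np.asarray(src, float)
    mI, mp = mean(I), mean(p)
    cov_Ip = mean(I * p) - mI * mp                  # local covariance
    var_I = mean(I * I) - mI * mI                   # local variance
    a = cov_Ip / (var_I + eps)                      # local linear gain
    b = mp - a * mI                                 # local linear offset
    return mean(a) * I + mean(b)
```

In a pyramidal framework, this aggregation would be applied per cost slice at each pyramid level, with coarse disparities guiding the finer search.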
Fast Graph-Based Object Segmentation for RGB-D Images
Object segmentation is an important capability for robotic systems, in
particular for grasping. We present a graph- based approach for the
segmentation of simple objects from RGB-D images. We are interested in
segmenting objects with large variety in appearance, from lack of texture to
strong textures, for the task of robotic grasping. The algorithm does not rely
on image features or machine learning. We propose a modified Canny edge
detector for extracting robust edges by using depth information and two simple
cost functions for combining color and depth cues. The cost functions are used
to build an undirected graph, which is partitioned using the concept of
internal and external differences between graph regions. The partitioning is
fast with O(NlogN) complexity. We also discuss ways to deal with missing depth
information. We test the approach on different publicly available RGB-D object
datasets, such as the Rutgers APC RGB-D dataset and the RGB-D Object Dataset,
and compare the results with other existing methods
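The internal/external-difference partitioning referenced above follows the Felzenszwalb–Huttenlocher merge rule: process edges in increasing weight and join two regions only when the connecting edge is no heavier than either region's internal difference plus a size-dependent slack. A sketch over an abstract weighted graph; the edge weights would come from the paper's color/depth cost functions, which are not reproduced here.

```python
class DSU:
    """Union-find with per-component max internal edge weight."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.internal = [0.0] * n      # max MST edge inside each component
    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]
            a = self.parent[a]
        return a
    def union(self, a, b, w):
        a, b = self.find(a), self.find(b)
        self.parent[b] = a
        self.size[a] += self.size[b]
        self.internal[a] = w           # edges arrive sorted, so w is the max

def segment(n_nodes, edges, k=1.0):
    """Felzenszwalb-Huttenlocher style segmentation sketch.

    edges: (weight, node_a, node_b) tuples; k controls region granularity.
    Sorting dominates, giving the O(N log N) complexity cited above.
    """
    dsu = DSU(n_nodes)
    for w, a, b in sorted(edges):
        ra, rb = dsu.find(a), dsu.find(b)
        if ra == rb:
            continue
        ta = dsu.internal[ra] + k / dsu.size[ra]   # slack shrinks as regions grow
        tb = dsu.internal[rb] + k / dsu.size[rb]
        if w <= min(ta, tb):
            dsu.union(ra, rb, w)
    return [dsu.find(i) for i in range(n_nodes)]
```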
Maximum likelihood estimation of cloud height from multi-angle satellite imagery
We develop a new estimation technique for recovering depth-of-field from multiple stereo images. Depth-of-field is estimated by determining the shift in image location resulting from different camera viewpoints. When this shift is not divisible by pixel width, the multiple stereo images can be combined to form a super-resolution image. By modeling this super-resolution image as a realization of a random field, one can view the recovery of depth as a likelihood estimation problem. We apply these modeling techniques to the recovery of cloud height from multiple viewing angles provided by the MISR instrument on the Terra satellite. Our efforts are focused on a two-layer cloud ensemble where both layers are relatively planar, the bottom layer is optically thick and textured, and the top layer is optically thin. Our results demonstrate that with relative ease, we get estimates comparable to the M2 stereo matcher, the algorithm used in the current MISR standard product (details can be found in [IEEE Transactions on Geoscience and Remote Sensing 40 (2002) 1547--1559]). Moreover, our techniques provide the possibility of modeling all of the MISR data in a unified way for cloud height estimation. Research is underway to extend this framework for fast, quality global estimates of cloud height.

Comment: Published at http://dx.doi.org/10.1214/09-AOAS243 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
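Setting the likelihood machinery aside, the geometric core is simple: under a static-cloud assumption (wind advection ignored, which the full model would have to address), the apparent along-track shift between two view angles converts to height through the difference of view-angle tangents. A back-of-envelope sketch, not the paper's estimator:

```python
import math

def cloud_height_from_shift(shift_m, theta_a_deg, theta_b_deg):
    """Height from along-track parallax between two view angles.

    shift_m: apparent ground shift of the cloud feature, in meters
    theta_*: view zenith angles in degrees (must differ)
    Assumes a static cloud; wind displacement would bias this estimate.
    """
    ta = math.tan(math.radians(theta_a_deg))
    tb = math.tan(math.radians(theta_b_deg))
    return shift_m / (ta - tb)
```

The super-resolution modeling in the abstract addresses exactly the case where `shift_m` is a sub-pixel quantity that simple matching cannot resolve.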