Adaptive Temporal Compressive Sensing for Video
This paper introduces the concept of adaptive temporal compressive sensing
(CS) for video. We propose a CS algorithm to adapt the compression ratio based
on the scene's temporal complexity, computed from the compressed data, without
compromising the quality of the reconstructed video. The temporal adaptivity is
manifested by manipulating the integration time of the camera, opening the
possibility to real-time implementation. The proposed algorithm is a
generalized temporal CS approach that can be incorporated with a diverse set of
existing hardware systems.
Comment: IEEE International Conference on Image Processing (ICIP), 201
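The adaptivity described above can be illustrated with a minimal sketch: estimate temporal complexity directly from compressed measurements and pick the compression ratio accordingly. The complexity metric, threshold, and ratio values below are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def temporal_complexity(y_prev, y_curr):
    """Heuristic motion estimate computed directly from compressed
    measurements: a large relative change between consecutive
    measurement vectors signals high temporal complexity."""
    return np.linalg.norm(y_curr - y_prev) / (np.linalg.norm(y_prev) + 1e-8)

def choose_compression_ratio(complexity, low=8, high=32, threshold=0.1):
    """Fast scenes get a lower compression ratio (more measurements,
    shorter effective integration); static scenes tolerate aggressive
    compression. All values here are assumed for illustration."""
    return low if complexity > threshold else high

# Toy demo: random sensing matrix, static vs. changing 1-D "scene".
n = 256
Phi = rng.standard_normal((n // 8, n))        # compressive sensing matrix
static = rng.standard_normal(n)
moving = static + 2.0 * rng.standard_normal(n)

y0, y1 = Phi @ static, Phi @ static           # no motion between frames
y2 = Phi @ moving                             # significant motion

print(choose_compression_ratio(temporal_complexity(y0, y1)))  # 32 (static)
print(choose_compression_ratio(temporal_complexity(y0, y2)))  # 8 (moving)
```

Note that the decision uses only the compressed vectors `y`, never a reconstructed frame, which is what makes a real-time implementation plausible.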
Depth Fields: Extending Light Field Techniques to Time-of-Flight Imaging
A variety of techniques such as light field, structured illumination, and
time-of-flight (TOF) are commonly used for depth acquisition in consumer
imaging, robotics and many other applications. Unfortunately, each technique
suffers from its individual limitations preventing robust depth sensing. In
this paper, we explore the strengths and weaknesses of combining light field
and time-of-flight imaging, particularly the feasibility of an on-chip
implementation as a single hybrid depth sensor. We refer to this combination as
depth field imaging. Depth fields combine light field advantages such as
synthetic aperture refocusing with TOF imaging advantages such as high depth
resolution and coded signal processing to resolve multipath interference. We
show applications including synthesizing virtual apertures for TOF imaging,
improved depth mapping through partial and scattering occluders, and single
frequency TOF phase unwrapping. Utilizing space, angle, and temporal coding,
depth fields can improve depth sensing in the wild and generate new insights
into the dimensions of light's plenoptic function.
Comment: 9 pages, 8 figures, Accepted to 3DV 201
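The single-frequency phase-unwrapping problem mentioned above comes from the standard continuous-wave TOF relation depth = c·phase/(4πf), where phase is only measured modulo 2π. A short sketch of the ambiguity (the 50 MHz modulation frequency and depths are assumed values, and the depth-field cues that resolve the ambiguity are not modeled here):

```python
import numpy as np

C = 3e8        # speed of light, m/s
F_MOD = 50e6   # modulation frequency, Hz (assumed for illustration)

def tof_depth_from_phase(phase):
    """Continuous-wave TOF: depth = c * phase / (4*pi*f).
    The measured phase is wrapped to [0, 2*pi), so depth is
    ambiguous modulo c / (2*f)."""
    return C * phase / (4 * np.pi * F_MOD)

unambiguous_range = C / (2 * F_MOD)   # 3.0 m at 50 MHz

true_depth = 4.2                      # beyond the unambiguous range
wrapped_phase = (4 * np.pi * F_MOD * true_depth / C) % (2 * np.pi)
measured = tof_depth_from_phase(wrapped_phase)
print(measured)  # 1.2 m: the 4.2 m target wrapped by one 3 m period
```

Depth fields add angular samples of the same scene point, giving the extra constraint needed to pick the correct wrap count from a single modulation frequency.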
Advances on CMOS image sensors
This paper offers an introduction to the technological advances of image sensors designed using
complementary metal–oxide–semiconductor (CMOS) processes over the last decades. We review
some of those technological advances, examine potential disruptive growth directions for CMOS
image sensors, and propose ways to achieve them. Those advances include breakthroughs in image
quality, such as resolution, capture speed, light sensitivity and color detection, as well as advances in
computational imaging. The current trend is to push the innovation efforts even further, as the
market requires higher resolution, higher speed, lower power consumption and, above all, lower cost
sensors. Although CMOS image sensors are currently used in many different applications, from
consumer to defense to medical diagnosis, product differentiation is becoming both a requirement and
a difficult goal for any image sensor manufacturer. The unique properties of the CMOS process allow the
integration of several signal processing techniques and are driving the impressive advancement of
computational imaging. With this paper, we offer a comprehensive review of methods,
techniques, designs and fabrication of CMOS image sensors that have impacted or might impact
image sensor applications and markets.
Temporal shape super-resolution by intra-frame motion encoding using high-fps structured light
One solution for depth imaging of a moving scene is to project a static
pattern on the object and use just a single image for reconstruction. However,
if the motion of the object is too fast with respect to the exposure time of
the image sensor, patterns on the captured image are blurred and reconstruction
fails. In this paper, we impose multiple projection patterns into each single
captured image to realize temporal super-resolution of the depth image
sequences. With our method, multiple patterns are projected onto the object
with higher fps than possible with a camera. In this case, the observed pattern
varies depending on the depth and motion of the object, so we can extract
temporal information of the scene from each single image. The decoding process
is realized using a learning-based approach where no geometric calibration is
needed. Experiments confirm the effectiveness of our method where sequential
shapes are reconstructed from a single image. Both quantitative evaluations and
comparisons with recent techniques were also conducted.
Comment: 9 pages, Published at the International Conference on Computer Vision
(ICCV 2017)
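The core forward model above, in which the projector switches patterns several times faster than the camera so one exposure integrates multiple coded views, can be sketched as follows. This is only a toy simulation of the capture side; the paper's learning-based decoder and calibration-free reconstruction are not modeled.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy intra-frame motion encoding: the projector shows K different
# binary patterns within a single camera exposure, so the sensor
# integrates K differently-coded views of the moving scene into
# one captured image. Dimensions and pattern choice are assumed.
K, H, W = 4, 32, 32
patterns = rng.integers(0, 2, size=(K, H, W)).astype(float)  # high-fps patterns
frames = rng.random((K, H, W))   # scene appearance at K sub-frame instants

# One blurred, pattern-coded exposure (mean over the K sub-frames).
captured = (patterns * frames).sum(axis=0) / K
print(captured.shape)  # (32, 32): a single image carrying K temporal samples
```

Because each sub-frame is modulated by a different known pattern, the single image `captured` retains information about all K time instants, which is what makes temporal super-resolution of the depth sequence possible.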
CoBe -- Coded Beacons for Localization, Object Tracking, and SLAM Augmentation
This paper presents a novel beacon light coding protocol, which enables fast
and accurate identification of the beacons in an image. The protocol is
provably robust to a predefined set of detection and decoding errors, and does
not require any synchronization between the beacons themselves and the optical
sensor. A detailed guide is then given for developing an optical tracking and
localization system, which is based on the suggested protocol and readily
available hardware. Such a system operates either as a standalone system for
recovering the six degrees of freedom of fast moving objects, or integrated
with existing SLAM pipelines providing them with error-free and easily
identifiable landmarks. Based on this guide, we implemented a low-cost
positional tracking system which can run in real-time on an IoT board. We
evaluate our system's accuracy and compare it to other popular methods which
utilize the same optical hardware, in experiments where the ground truth is
known. A companion video containing multiple real-world experiments
demonstrates the accuracy, speed, and applicability of the proposed system in a
wide range of environments and real-world tasks. Open source code is provided
to encourage further development of low-cost localization systems integrating
the suggested technology at its navigation core.
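The key property claimed above, decoding without any synchronization between beacon and sensor, can be illustrated with a toy temporal code: the receiver tries every cyclic rotation of its observed bit window until framing and a parity check succeed. This is not CoBe's actual protocol (which is provably robust to a defined error set); the preamble, word length, and parity scheme are assumptions for illustration, and the toy preamble is only unambiguous for suitable payloads.

```python
# Toy unsynchronized beacon code: preamble + 4-bit ID + parity bit,
# transmitted cyclically by a blinking beacon.
PREAMBLE = [1, 1, 1, 0]   # assumed start marker

def encode(beacon_id, n_bits=4):
    """Build one period of the beacon's blink sequence."""
    bits = [(beacon_id >> i) & 1 for i in range(n_bits)]
    parity = sum(bits) % 2
    return PREAMBLE + bits + [parity]

def decode(window):
    """Recover the ID from an arbitrarily rotated observation window:
    try every cyclic rotation, accept the one where the preamble
    aligns and the parity check passes."""
    for r in range(len(window)):
        rot = window[r:] + window[:r]
        if rot[:4] == PREAMBLE:
            bits, parity = rot[4:8], rot[8]
            if sum(bits) % 2 == parity:
                return sum(b << i for i, b in enumerate(bits))
    return None   # no consistent framing found

word = encode(5)
shifted = word[3:] + word[:3]   # receiver starts sampling mid-sequence
print(decode(shifted))          # 5: recovered despite the unknown offset
```

A real protocol would use a code with guaranteed rotation-uniqueness and error-correction margin; the sketch only shows why cyclic transmission removes the need for a shared clock.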