High-speed Video from Asynchronous Camera Array
This paper presents a method for capturing high-speed video using an
asynchronous camera array. Our method sequentially fires each sensor in a
camera array with a small time offset and assembles captured frames into a
high-speed video according to the time stamps. The resulting video, however,
suffers from parallax jittering caused by the viewpoint difference among
sensors in the camera array. To address this problem, we develop a dedicated
novel view synthesis algorithm that transforms the video frames as if they were
captured by a single reference sensor. Specifically, for any frame from a
non-reference sensor, we find the two temporally neighboring frames captured by
the reference sensor. Using these three frames, we render a new frame with the
same time stamp as the non-reference frame but from the viewpoint of the
reference sensor. Specifically, we segment these frames into super-pixels and
then apply local content-preserving warping to warp them to form the new frame.
We employ a multi-label Markov Random Field method to blend these warped
frames. Our experiments show that our method can produce high-quality and
high-speed video of a wide variety of scenes with large parallax, scene
dynamics, and camera motion, and outperforms several baseline and
state-of-the-art approaches.
Comment: 10 pages, 82 figures. Published at IEEE WACV 201
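The core capture idea above (staggered sensor firing, then assembly by time
stamp) can be sketched in a few lines. This is an illustrative toy, not the
authors' code; the function name and the choice of a uniform offset of one
N-th of the frame period are assumptions for the sketch.

```python
# Toy sketch of staggered firing across an N-sensor array: sensor i fires
# with offset i / (N * base_fps), and merging all frames by timestamp
# yields an effective N * base_fps high-speed stream.

def interleave_frames(n_sensors, base_fps, n_frames_per_sensor):
    """Return (timestamp, sensor_id) pairs sorted into one high-speed stream."""
    period = 1.0 / base_fps
    offset = period / n_sensors          # small time offset between sensors
    frames = []
    for s in range(n_sensors):
        for k in range(n_frames_per_sensor):
            frames.append((k * period + s * offset, s))
    return sorted(frames)                # assemble by time stamp

stream = interleave_frames(n_sensors=4, base_fps=30, n_frames_per_sensor=3)
# Consecutive timestamps now differ by 1/(4*30) s, i.e. an effective 120 fps.
```

The parallax jittering the abstract describes arises because consecutive
frames in `stream` come from different physical viewpoints, which is what the
view-synthesis stage then corrects.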
Wireless Software Synchronization of Multiple Distributed Cameras
We present a method for precisely time-synchronizing the capture of image
sequences from a collection of smartphone cameras connected over WiFi. Our
method is entirely software-based, has only modest hardware requirements, and
achieves an accuracy of less than 250 microseconds on unmodified commodity
hardware. It does not use image content and synchronizes cameras prior to
capture. The algorithm operates in two stages. In the first stage, we designate
one device as the leader and synchronize each client device's clock to it by
estimating network delay. Once clocks are synchronized, the second stage
initiates continuous image streaming, estimates the relative phase of image
timestamps between each client and the leader, and shifts the streams into
alignment. We quantitatively validate our results on a multi-camera rig imaging
a high-precision LED array and qualitatively demonstrate significant
improvements to multi-view stereo depth estimation and stitching of dynamic
scenes. We release as open source 'libsoftwaresync', an Android implementation
of our system, to inspire new types of collective capture applications.
Comment: Main: 9 pages, 10 figures. Supplemental: 3 pages, 5 figures
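Stage one (leader-client clock synchronization via network-delay estimation)
can be sketched in the spirit of NTP/Cristian's algorithm. This is a hedged
stand-in, not the libsoftwaresync implementation; the symmetric-delay
assumption, the function names, and the sample numbers are all illustrative.

```python
# Sketch of clock-offset estimation from ping round trips, assuming the
# network delay is symmetric (as in Cristian's algorithm / NTP).

def estimate_offset(t_send, t_leader, t_recv):
    """Estimate the leader-minus-client clock offset from one ping.

    t_send, t_recv: client-clock times when the request left / reply arrived.
    t_leader: leader-clock time stamped into the reply.
    """
    round_trip = t_recv - t_send
    # Assume the leader's timestamp corresponds to the round-trip midpoint.
    return t_leader - (t_send + round_trip / 2.0)

def best_offset(samples):
    """Keep the sample with the smallest round trip (least delay noise)."""
    return min(samples, key=lambda s: s[2] - s[0])

# Made-up (t_send, t_leader, t_recv) pings; the true offset here is ~10.001 s.
samples = [(0.0, 10.0, 0.008), (1.0, 11.0035, 1.005), (2.0, 12.004, 2.006)]
offset = estimate_offset(*best_offset(samples))
```

Repeating this over many pings and keeping low-round-trip samples is a common
way to push software-only synchronization toward the sub-millisecond accuracy
the abstract reports.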
Content-preserving image stitching with piecewise rectangular boundary constraints
This paper proposes an approach to content-preserving image stitching with
regular boundary constraints, which aims to stitch multiple images into a
panoramic image with a piecewise rectangular boundary. Existing methods treat
image stitching and rectangling as two separate steps, which can yield
suboptimal panoramas because the stitching step is unaware of the subsequent
warping needed for rectangling. We address this limitation by formulating
image stitching with regular boundaries as a unified optimization. Starting
from the initial stitching result produced by traditional warping-based
optimization, we obtain the irregular boundary from the warped meshes via
polygon Boolean operations, which robustly handle arbitrary mesh
compositions. By analyzing the irregular boundary, we construct a piecewise
rectangular boundary. Based on this, we further incorporate line and regular
boundary preservation constraints into the image stitching framework and
conduct iterative optimization to obtain an optimal piecewise rectangular
boundary. The boundary of the stitching result is thus kept as close as
possible to a rectangle while reducing unwanted distortions. We further
extend our method to video stitching by integrating temporal coherence into
the optimization. Experiments show that our method efficiently produces
visually pleasing panoramas with regular boundaries and unnoticeable
distortions.
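One way to get intuition for the regular-boundary constraint is a toy cost
that measures how far an irregular panorama outline deviates from its
bounding rectangle. This is not the paper's energy term, only an illustrative
proxy; the function name and the sample outlines are invented for the sketch.

```python
# Toy proxy for a rectangling cost: sum of distances from outline points to
# the nearest edge of the outline's axis-aligned bounding rectangle. A true
# rectangle costs 0; a bulging boundary pays for its deviation.

def rectangling_cost(outline):
    """Sum of point-to-nearest-bounding-box-edge distances."""
    xs = [p[0] for p in outline]
    ys = [p[1] for p in outline]
    minx, maxx = min(xs), max(xs)
    miny, maxy = min(ys), max(ys)
    cost = 0.0
    for x, y in outline:
        cost += min(x - minx, maxx - x, y - miny, maxy - y)
    return cost

rect = [(0, 0), (10, 0), (10, 5), (0, 5)]
irregular = [(0, 0), (5, 0.5), (10, 1), (9.5, 3), (9, 5), (5, 4.5), (0, 4)]
```

A joint optimization in the paper's spirit would drive a term like this
toward zero while other terms preserve content and straight lines, instead of
rectangling as an unaware post-process.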
Imaging with two-axis micromirrors
We demonstrate a means of creating a digital image by using a two-axis tilt
micromirror to scan a scene. For each mirror orientation we extract a single
grayscale value, and we combine these values to form a single composite
image. This allows one to choose the distribution of the samples, so in
principle a variable-resolution image can be created. We demonstrate this
ability to control resolution by constructing a voltage table that compensates
for the non-linear response of the mirrors to the applied voltage.
Comment: 8 pages, 5 figures, preprint
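The voltage-table idea can be sketched by inverting an assumed response
model. Electrostatic mirrors often tilt roughly with the square of the drive
voltage, so the sketch below assumes angle = k * V**2 purely for
illustration; a real table would come from calibration, not this model.

```python
# Sketch of a compensation table: choose drive voltages so that equal table
# steps produce equal tilt-angle steps, assuming (illustratively) that the
# mirror's tilt angle grows as k * V**2.
import math

def make_voltage_table(max_voltage, n_steps, k=1.0):
    """Voltages yielding uniformly spaced tilt angles under angle = k*V^2."""
    max_angle = k * max_voltage ** 2
    table = []
    for i in range(n_steps):
        angle = max_angle * i / (n_steps - 1)   # desired uniform angles
        table.append(math.sqrt(angle / k))      # invert the assumed model
    return table

volts = make_voltage_table(max_voltage=10.0, n_steps=5)
# The voltage steps shrink at higher tilt, compensating the V^2 response.
```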
Parallelized computational 3D video microscopy of freely moving organisms at multiple gigapixels per second
To study the behavior of freely moving model organisms such as zebrafish
(Danio rerio) and fruit flies (Drosophila) across multiple spatial scales, it
would be ideal to use a light microscope that can resolve 3D information over a
wide field of view (FOV) at high speed and high spatial resolution. However, it
is challenging to design an optical instrument to achieve all of these
properties simultaneously. Existing techniques for large-FOV microscopic
imaging and for 3D image measurement typically require many sequential image
snapshots, thus compromising speed and throughput. Here, we present 3D-RAPID, a
computational microscope based on a synchronized array of 54 cameras that can
capture high-speed 3D topographic videos over a 135-cm^2 area, achieving up to
230 frames per second at throughputs exceeding 5 gigapixels (GPs) per second.
3D-RAPID features a 3D reconstruction algorithm that, for each synchronized
temporal snapshot, simultaneously fuses all 54 images seamlessly into a
globally-consistent composite that includes a coregistered 3D height map. The
self-supervised 3D reconstruction algorithm itself trains a
spatiotemporally-compressed convolutional neural network (CNN) that maps raw
photometric images to 3D topography, using stereo overlap redundancy and
ray-propagation physics as the only supervision mechanism. As a result, our
end-to-end 3D reconstruction algorithm is robust to generalization errors and
scales to arbitrarily long videos from arbitrarily sized camera arrays. The
scalable hardware and software design of 3D-RAPID addresses a longstanding
problem in the field of behavioral imaging, enabling parallelized 3D
observation of large collections of freely moving organisms at high
spatiotemporal throughputs, which we demonstrate in ants (Pogonomyrmex
barbatus), fruit flies, and zebrafish larvae.
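The stated numbers are mutually consistent, as a quick back-of-envelope check
shows: 54 cameras at up to 230 frames per second exceeding 5 gigapixels per
second implies each camera frame carries roughly 0.4 megapixels.

```python
# Back-of-envelope check of 3D-RAPID's stated throughput figures.
n_cameras = 54
fps = 230
throughput_px = 5e9                      # pixels per second (lower bound)

px_per_composite = throughput_px / fps   # pixels in one synchronized snapshot
px_per_camera = px_per_composite / n_cameras

print(round(px_per_camera / 1e6, 2))     # ≈ 0.4 megapixels per camera frame
```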