Event-Based Motion Segmentation by Motion Compensation
In contrast to traditional cameras, whose pixels have a common exposure time,
event-based cameras are novel bio-inspired sensors whose pixels work
independently and asynchronously output intensity changes (called "events"),
with microsecond resolution. Since events are caused by the apparent motion of
objects, event-based cameras sample visual information based on the scene
dynamics and are, therefore, a more natural fit than traditional cameras to
acquire motion, especially at high speeds, where traditional cameras suffer
from motion blur. However, distinguishing between events caused by different
moving objects and by the camera's ego-motion is a challenging task. We present
the first per-event segmentation method for splitting a scene into
independently moving objects. Our method jointly estimates the event-object
associations (i.e., segmentation) and the motion parameters of the objects (or
the background) by maximization of an objective function, which builds upon
recent results on event-based motion-compensation. We provide a thorough
evaluation of our method on a public dataset, outperforming the
state-of-the-art by as much as 10%. We also show the first quantitative
evaluation of a segmentation algorithm for event cameras, yielding around 90%
accuracy at 4 pixels relative displacement.

Comment: When viewed in Acrobat Reader, several of the figures animate. Video:
https://youtu.be/0q6ap_OSBA
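The motion-compensation idea the abstract builds on can be sketched with a toy contrast-maximization loop: warp events along a candidate 2-D velocity, accumulate them into an image, and score sharpness by the image variance. The event format (x, y, t), the grid search, and all function names below are illustrative assumptions, not the paper's actual optimizer.

```python
import numpy as np

def warp_and_score(events, v, width, height, t_ref=0.0):
    """Warp events (x, y, t) to time t_ref with velocity v; return image variance."""
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    xw = np.round(x - v[0] * (t - t_ref)).astype(int)
    yw = np.round(y - v[1] * (t - t_ref)).astype(int)
    ok = (xw >= 0) & (xw < width) & (yw >= 0) & (yw < height)
    img = np.zeros((height, width))
    np.add.at(img, (yw[ok], xw[ok]), 1.0)  # accumulate warped events
    # A correct velocity collapses events onto few pixels -> high variance.
    return img.var()

def best_velocity(events, width, height, v_range=np.linspace(-50, 50, 21)):
    """Grid-search the velocity that maximizes event-image contrast."""
    candidates = [(vx, vy) for vx in v_range for vy in v_range]
    return max(candidates, key=lambda v: warp_and_score(events, v, width, height))
```

For a synthetic point moving at 10 px/s horizontally, the search recovers that velocity, since warping with it stacks all events onto a single pixel.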
Reconstructing the Forest of Lineage Trees of Diverse Bacterial Communities Using Bio-inspired Image Analysis
Cell segmentation and tracking allow us to extract a plethora of cell
attributes from bacterial time-lapse cell movies, thus promoting computational
modeling and simulation of biological processes down to the single-cell level.
However, to successfully analyze complex cell movies, which image multiple
interacting bacterial clones as they grow and merge into overcrowded
bacterial communities with thousands of cells in the field of view,
segmentation results must be near perfect to warrant good tracking results.
We introduce here a fully automated closed-loop bio-inspired computational
strategy that exploits prior knowledge about the expected structure of a
colony's lineage tree to locate and correct segmentation errors in analyzed
movie frames. We show that this correction strategy is effective, resulting in
improved cell tracking and consequently trustworthy deep colony lineage trees.
Our image analysis approach has the unique capability to keep tracking cells
even after clonal subpopulations merge in the movie. This enables the
reconstruction of the complete Forest of Lineage Trees (FLT) representation of
evolving multi-clonal bacterial communities. Moreover, the percentage of valid
cell trajectories extracted from the image analysis almost doubles after
segmentation correction. This plethora of trustworthy data extracted from a
complex cell movie analysis enables single-cell analytics as a tool for
addressing compelling questions for human health, such as understanding the
role of single-cell stochasticity in antibiotic resistance without losing sight
of the inter-cellular interactions and microenvironment effects that may shape
it.
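The closed-loop correction idea, using prior knowledge of lineage-tree structure to locate tracking errors, can be illustrated with a minimal check: in a valid lineage tree a cell either persists (one successor), divides into exactly two daughters, or leaves the field of view, so any node with more than two children flags frames whose segmentation should be revisited. The parent-to-children dictionary format below is an assumption for illustration, not the paper's data structure.

```python
def find_suspect_divisions(children):
    """Return cell ids whose child count violates binary division.

    `children` maps each cell id to the list of cell ids it is tracked
    into at the next frame: 0 children (cell exits the view), 1 (cell
    persists), or 2 (cell divides) are biologically plausible; anything
    more points to a segmentation or tracking error.
    """
    suspects = []
    for cell, kids in children.items():
        if len(kids) > 2:
            suspects.append(cell)
    return suspects
```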
Live User-guided Intrinsic Video For Static Scenes
We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection. We leverage this information to improve on the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved decomposition quality, we show a variety of live augmented reality applications such as recoloring of objects, relighting of scenes, and editing of material appearance.
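The "densely fuse reflectance estimates" step can be sketched as a running per-voxel average: every view that sees a voxel contributes one reflectance observation, and the volume keeps the mean. The class name, voxel indexing, and averaging scheme are illustrative assumptions, not the paper's actual fusion framework.

```python
import numpy as np

class ReflectanceVolume:
    """Toy volumetric proxy that fuses per-view RGB reflectance estimates."""

    def __init__(self, shape):
        self.sum = np.zeros(shape + (3,))   # accumulated RGB reflectance
        self.count = np.zeros(shape)        # observations per voxel

    def fuse(self, voxel, rgb):
        """Fold one reflectance observation into the voxel's running mean."""
        self.sum[voxel] += rgb
        self.count[voxel] += 1

    def reflectance(self, voxel):
        """Current fused reflectance estimate, or None if never observed."""
        if self.count[voxel] == 0:
            return None
        return self.sum[voxel] / self.count[voxel]
```

Because the fused estimate lives on the 3-D proxy rather than in any single frame, it (like the user strokes) can be re-projected into novel views, which is the propagation mechanism the abstract describes.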