General Dynamic Scene Reconstruction from Multiple View Video
This paper introduces a general approach to dynamic scene reconstruction from
multiple moving cameras without prior knowledge or limiting constraints on the
scene structure, appearance, or illumination. Existing techniques for dynamic
scene reconstruction from multiple wide-baseline camera views primarily focus
on accurate reconstruction in controlled environments, where the cameras are
fixed and calibrated and the background is known. These approaches are not robust
for general dynamic scenes captured with sparse moving cameras. Previous
approaches for outdoor dynamic scene reconstruction assume prior knowledge of
the static background appearance and structure. The primary contributions of
this paper are twofold: an automatic method for initial coarse dynamic scene
segmentation and reconstruction without prior knowledge of background
appearance or structure; and a general robust approach for joint segmentation
refinement and dense reconstruction of dynamic scenes from multiple
wide-baseline static or moving cameras. Evaluation is performed on a variety of
indoor and outdoor scenes with cluttered backgrounds and multiple dynamic
non-rigid objects such as people. Comparison with state-of-the-art approaches
demonstrates improved accuracy in both multiple view segmentation and dense
reconstruction. The proposed approach also eliminates the requirement for prior
knowledge of scene structure and appearance.
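As a generic illustration of the photo-consistency cue that dense multi-view reconstruction builds on (not the paper's algorithm), the sketch below scores candidate disparities for a single pixel of a synthetic rectified two-view pair by patch SSD; all names and values are illustrative:

```python
import numpy as np

np.random.seed(0)
width = 32
left = np.random.rand(width)              # one scanline of the reference view
true_disp = 5
right = np.roll(left, -true_disp)         # second view: scene shifted by 5 px

def ssd(x, d, patch=3):
    # Sum of squared differences between a patch around left[x]
    # and the candidate match around right[x - d].
    a = left[x - patch : x + patch + 1]
    b = right[x - d - patch : x - d + patch + 1]
    return float(np.sum((a - b) ** 2))

x = 16
scores = [ssd(x, d) for d in range(11)]   # sweep candidate disparities 0..10
best = int(np.argmin(scores))             # best == true_disp == 5
```

The SSD vanishes exactly at the true disparity, so the sweep recovers it; real wide-baseline methods aggregate such scores over many views and regularize them spatially.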
Event-Based Motion Segmentation by Motion Compensation
In contrast to traditional cameras, whose pixels have a common exposure time,
event-based cameras are novel bio-inspired sensors whose pixels work
independently and asynchronously output intensity changes (called "events"),
with microsecond resolution. Since events are caused by the apparent motion of
objects, event-based cameras sample visual information based on the scene
dynamics and are, therefore, a more natural fit than traditional cameras to
acquire motion, especially at high speeds, where traditional cameras suffer
from motion blur. However, distinguishing between events caused by different
moving objects and by the camera's ego-motion is a challenging task. We present
the first per-event segmentation method for splitting a scene into
independently moving objects. Our method jointly estimates the event-object
associations (i.e., segmentation) and the motion parameters of the objects (or
the background) by maximization of an objective function, which builds upon
recent results on event-based motion-compensation. We provide a thorough
evaluation of our method on a public dataset, outperforming the
state-of-the-art by as much as 10%. We also show the first quantitative
evaluation of a segmentation algorithm for event cameras, yielding around 90%
accuracy at 4 pixels relative displacement.
Comment: When viewed in Acrobat Reader, several of the figures animate. Video:
https://youtu.be/0q6ap_OSBA
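The motion-compensation idea behind the objective can be sketched in one dimension: warp events by a candidate velocity, build the image of warped events, and score its sharpness. This toy example (synthetic events from a single moving edge, not the paper's multi-object model) recovers the true velocity by maximizing the variance of that image:

```python
import numpy as np

np.random.seed(1)
n = 2000
t = np.random.rand(n)                      # event timestamps in [0, 1) s
v_true = 8.0                               # edge moving at 8 px/s
x = 10.0 + v_true * t + 0.1 * np.random.randn(n)   # noisy event x-coordinates

def contrast(v, bins=128):
    # Warp every event back to t = 0 under candidate velocity v, build the
    # image of warped events, and score its sharpness by the variance.
    warped = x - v * t
    img, _ = np.histogram(warped, bins=bins, range=(0.0, 32.0))
    return img.var()

candidates = np.arange(0.0, 16.5, 0.5)
v_hat = candidates[int(np.argmax([contrast(v) for v in candidates]))]
```

At the correct velocity the warped events pile up on the edge, so the histogram is sharpest; segmentation then amounts to assigning each event to the motion model that best compensates it.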
Joint Optical Flow and Temporally Consistent Semantic Segmentation
The importance and demands of visual scene understanding have been steadily
increasing along with the active development of autonomous systems.
Consequently, there has been a large amount of research dedicated to semantic
segmentation and dense motion estimation. In this paper, we propose a method
for jointly estimating optical flow and temporally consistent semantic
segmentation, closely coupling the two problem domains so that each benefits from
the other: semantic segmentation constrains the physically plausible motion of
its pixels, while accurate pixel-level temporal correspondences improve the
temporal consistency of the segmentation. We demonstrate the benefits of our
approach on the KITTI benchmark,
where we observe performance gains for flow and segmentation. We achieve
state-of-the-art optical flow results, and outperform all published algorithms
by a large margin on challenging but crucial dynamic objects.
Comment: 14 pages. Accepted for CVRSUAD workshop at ECCV 201
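One simple way to see how flow and segmentation support each other is to propagate labels along the flow field. The toy sketch below (not the paper's model; `labels` and `flow` are synthetic stand-ins) forward-warps a semantic mask to the next frame:

```python
import numpy as np

h, w = 4, 6
labels = np.zeros((h, w), dtype=int)       # 0 = background, 1 = "car"
labels[:, :3] = 1
flow = np.zeros((h, w, 2), dtype=int)      # per-pixel (dy, dx) motion
flow[labels == 1] = (0, 2)                 # the car moves 2 px to the right

# Forward-warp the foreground labels along the flow; background stays put.
warped = np.zeros_like(labels)
for y, x in zip(*np.nonzero(labels == 1)):
    dy, dx = flow[y, x]
    ty, tx = y + dy, x + dx
    if 0 <= ty < h and 0 <= tx < w:
        warped[ty, tx] = 1
```

Comparing such a warped mask against the segmentation of the next frame yields a temporal-consistency term; conversely, the label of a pixel restricts which motions are plausible for it.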
Markovian Image Models: Applications in Unsupervised Image Segmentation
1) We proposed a monogrid MRF model that combines color and texture features to improve the quality of segmentation results, and we solved the estimation of its model parameters. This work was published in the Image and Vision Computing journal. 2) We proposed a Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampling method that identifies multi-dimensional Gaussian mixtures. Using this technique, we developed a fully automatic color image segmentation algorithm. These results were published at the BMVC 2004 international conference and in the Image and Vision Computing journal. 3) We proposed a new multilayer MRF model that segments an image based on multiple cues (such as color, texture, or motion). This work was published at the HACIPPR 2005 and ACCV 2006 international conferences. Related work on optic flow computation and on color-, texture-, and motion-based GVF active contours, done with my student Mr. Peter Horvath, won first prize at the local Student Research Competition in 2004; the results were presented at the KEPAF 2004 conference. 4) A new shape prior, called 'gas of circles', was introduced using active contour models, in collaboration with the Ariana group of INRIA, France, and my PhD student Mr. Peter Horvath. The results were published at the ICPR 2006 and ICCVGIP 2006 conferences. A preliminary study on active contour models using shape moments was also carried out; those results were published at HACIPPR 2005.
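As a minimal sketch of the kind of MRF labelling these models perform (ICM on a two-class Potts model with a Gaussian data term, far simpler than the published monogrid and multilayer models), the following denoises a synthetic two-class image:

```python
import numpy as np

np.random.seed(2)
h, w = 8, 8
truth = np.zeros((h, w), dtype=int)
truth[:, 4:] = 1                            # left half class 0, right half class 1
means = np.array([0.2, 0.8])
obs = means[truth] + 0.15 * np.random.randn(h, w)

beta = 1.0                                  # Potts smoothness weight
labels = (obs > 0.5).astype(int)            # noisy initial labelling

def local_energy(y, x, k):
    # Gaussian data term plus a Potts penalty per disagreeing 4-neighbour.
    e = (obs[y, x] - means[k]) ** 2 / (2 * 0.15 ** 2)
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w:
            e += beta * (labels[ny, nx] != k)
    return e

for _ in range(5):                          # ICM sweeps (coordinate descent)
    for y in range(h):
        for x in range(w):
            labels[y, x] = min((0, 1), key=lambda k: local_energy(y, x, k))

accuracy = (labels == truth).mean()
```

ICM is only a greedy stand-in here; the cited work uses more powerful inference (e.g. RJMCMC sampling, which additionally estimates the number of mixture components).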
A massively parallel multi-level approach to a domain decomposition method for the optical flow estimation with varying illumination
We consider a variational method to solve the optical flow problem with
varying illumination. We apply an adaptive control of the regularization
parameter which allows us to preserve the edges and fine features of the
computed flow. To reduce the complexity of the estimation for high resolution
images and the time of computations, we implement a multi-level parallel
approach based on the domain decomposition with the Schwarz overlapping method.
The second level of parallelism uses the massively parallel solver MUMPS. We
perform some numerical simulations to show the efficiency of our approach and
to validate it on classical and real-world image sequences.
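A minimal sketch of the overlapping Schwarz idea, reduced to a 1D Poisson problem with two subdomains and a dense direct solve standing in for MUMPS (all sizes and overlaps are illustrative):

```python
import numpy as np

n = 41                                      # interior grid points on (0, 1)
hgrid = 1.0 / (n + 1)
f = np.ones(n)                              # right-hand side of -u'' = f
u = np.zeros(n)                             # initial guess

def poisson_solve(rhs):
    # Direct solve of the 1D Laplacian on one subdomain.
    m = len(rhs)
    A = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    return np.linalg.solve(A, rhs)

for _ in range(30):                         # alternating Schwarz sweeps
    # Left subdomain: nodes 0..25, Dirichlet datum u[26] from the other solve.
    r = hgrid ** 2 * f[:26]
    r[-1] += u[26]
    u[:26] = poisson_solve(r)
    # Right subdomain: nodes 15..40, Dirichlet datum u[14]; overlap is 15..25.
    r = hgrid ** 2 * f[15:]
    r[0] += u[14]
    u[15:] = poisson_solve(r)

A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
u_direct = np.linalg.solve(A, hgrid ** 2 * f)
err = float(np.max(np.abs(u - u_direct)))   # Schwarz iterate vs. direct solve
```

The generous overlap makes the iteration contract quickly toward the global solution; in the paper's setting the subdomain solves run in parallel, with MUMPS providing a second, finer level of parallelism inside each solve.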
Colour, texture, and motion in level set based segmentation and tracking
This paper introduces an approach for the extraction and combination of different cues in a level set based image segmentation framework. Apart from the image grey value or colour, we suggest adding its spatial and temporal variations, which may provide important further characteristics. It often turns out that the combination of colour, texture, and motion makes it possible to distinguish object regions that cannot be separated by any single cue. We propose a two-step approach. In the first stage, the input features are extracted and enhanced by applying coupled nonlinear diffusion. This ensures coherence between the channels and deals with outliers. We use a nonlinear diffusion technique that is closely related to total variation flow but strictly edge enhancing. The resulting features are then employed for a vector-valued front propagation based on level sets and statistical region models that approximate the distributions of each feature. The application of this approach to two-phase segmentation is followed by an extension to the tracking of multiple objects in image sequences.
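The region-statistics part of such a level set evolution can be sketched as follows. This toy two-phase example keeps only the data-driven force, with no curvature regularization or coupled diffusion, so it is far simpler than the paper's framework:

```python
import numpy as np

np.random.seed(3)
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0                      # bright square on dark background
img += 0.1 * np.random.randn(16, 16)

phi = -np.ones_like(img)                   # level set: > 0 inside, < 0 outside
phi[2:14, 2:14] = 1.0                      # crude initial contour

for _ in range(40):
    inside = phi > 0
    c1, c2 = img[inside].mean(), img[~inside].mean()   # region statistics
    # Region competition force: grow phi where the pixel fits the inside model.
    phi += 0.5 * ((img - c2) ** 2 - (img - c1) ** 2)

seg = phi > 0
truth = np.zeros((16, 16), dtype=bool)
truth[4:12, 4:12] = True
accuracy = (seg == truth).mean()
```

The full method replaces the scalar intensity by a vector of diffusion-enhanced colour, texture, and motion channels, each with its own statistical region model, and adds a curvature term to keep the front smooth.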