37 research outputs found

    Multiframe Motion Segmentation via Penalized MAP Estimation and Linear Programming

    Full text link
    Motion segmentation is an important topic in computer vision. In this paper, we study the problem of multi-body motion segmentation under the affine camera model. We use a mixture-of-subspaces model to describe the multi-body motions. The motion segmentation problem is then formulated as a MAP estimation problem with a model-complexity penalty. Given several candidate motion models, the problem can be naturally converted into a linear programming problem, which guarantees global optimality. The main advantages of our algorithm are that it needs no prior on the number of motions and that its segmentation accuracy is comparable to the best algorithms that assume a known number of motions. Experiments on benchmark data sets illustrate these points.
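The model-selection step described above can be sketched as a small linear program: each point pays a fitting cost under every candidate motion model, and each model that is actually used incurs a fixed complexity penalty. Below is a minimal, self-contained sketch with synthetic costs (the cost matrix, penalty weight, and SciPy-based formulation are illustrative assumptions, not the paper's exact construction):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_points, n_models = 12, 4
cost = rng.random((n_points, n_models))  # residual of point i under candidate model j
cost[:6, 0] *= 0.01                      # points 0-5 fit candidate model 0 well
cost[6:, 2] *= 0.01                      # points 6-11 fit candidate model 2 well
lam = 0.5                                # complexity penalty per selected model

# Variables: assignments x_ij (n_points * n_models), then model indicators y_j.
nx = n_points * n_models
c = np.concatenate([cost.ravel(), lam * np.ones(n_models)])

# Each point must be assigned to exactly one model: sum_j x_ij = 1.
A_eq = np.zeros((n_points, nx + n_models))
for i in range(n_points):
    A_eq[i, i * n_models:(i + 1) * n_models] = 1.0
b_eq = np.ones(n_points)

# A model must be "paid for" before it is used: x_ij <= y_j.
A_ub = np.zeros((nx, nx + n_models))
for i in range(n_points):
    for j in range(n_models):
        r = i * n_models + j
        A_ub[r, r] = 1.0
        A_ub[r, nx + j] = -1.0
b_ub = np.zeros(nx)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
labels = res.x[:nx].reshape(n_points, n_models).argmax(axis=1)
print(labels)
```

With well-separated costs the LP relaxation recovers the two planted motions and leaves the other candidate models unused.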

    Particle detection and tracking in fluorescence time-lapse imaging: a contrario approach

    Full text link
    This paper proposes a probabilistic approach for the detection and tracking of particles in fluorescence time-lapse imaging. In the presence of very noisy, poor-quality data, particles and trajectories can be characterized by an a contrario model, which estimates the probability of observing the structures of interest in random data. This approach, first introduced in the modeling of human visual perception and then successfully applied to many image processing tasks, leads to algorithms that require neither a prior learning stage nor tedious parameter tuning, and are very robust to noise. Comparative evaluations against a well-established baseline show that the proposed approach outperforms the state of the art. Comment: Published in Machine Vision and Applications.
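The a contrario principle can be illustrated with a toy detector: a structure is declared meaningful when its expected number of occurrences in pure noise (the number of false alarms, NFA) falls below a threshold. The sketch below assumes a Gaussian background model and single-pixel detections, a deliberate simplification of the paper's particle model:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
img = rng.normal(0.0, 1.0, size=(64, 64))  # background model: pure Gaussian noise
img[20, 30] += 10.0                        # two injected "particles"
img[40, 10] += 10.0

# NFA(pixel) = (number of tests) * P(pure noise >= observed value).
# A detection is epsilon-meaningful when its NFA falls below epsilon.
n_tests = img.size
nfa = n_tests * norm.sf(img, loc=0.0, scale=1.0)
detections = np.argwhere(nfa < 1e-3)
print(detections)
```

Note there is no learned model and only one threshold, on the NFA itself, which directly bounds the expected number of false detections in noise.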

    The SURE-LET approach to image denoising

    Get PDF
    Denoising is an essential step prior to any higher-level image-processing task such as segmentation or object tracking, because the undesirable corruption by noise is inherent to any physical acquisition device. When the measurements are performed by photosensors, one usually distinguishes between two main regimes: in the first scenario, the measured intensities are sufficiently high and the noise is assumed to be signal-independent. In the second scenario, only a few photons are detected, which leads to a strong signal-dependent degradation. When the noise is considered signal-independent, it is often modeled as an additive independent (typically Gaussian) random variable, whereas, otherwise, the measurements are commonly assumed to follow independent Poisson laws, whose underlying intensities are the unknown noise-free measures. We first consider the reduction of additive white Gaussian noise (AWGN). Contrary to most existing denoising algorithms, our approach does not require an explicit prior statistical modeling of the unknown data. Our driving principle is the minimization of a purely data-adaptive unbiased estimate of the mean-squared error (MSE) between the processed and the noise-free data. In the AWGN case, such an MSE estimate was first proposed by Stein, and is known as "Stein's unbiased risk estimate" (SURE). We further develop the original SURE theory and propose a general methodology for fast and efficient multidimensional image denoising, which we call the SURE-LET approach.
    While SURE allows the quantitative monitoring of the denoising quality, the flexibility and the low computational complexity of our approach are ensured by a linear parameterization of the denoising process, expressed as a linear expansion of thresholds (LET). We propose several pointwise, multivariate, and multichannel thresholding functions applied to arbitrary (in particular, redundant) linear transformations of the input data, with a special focus on multiscale signal representations. We then transpose the SURE-LET approach to the estimation of Poisson intensities degraded by AWGN. The signal-dependent specificity of the Poisson statistics leads to the derivation of a new unbiased MSE estimate that we call "Poisson's unbiased risk estimate" (PURE), which requires more adaptive transform-domain thresholding rules. In a general PURE-LET framework, we first devise a fast interscale thresholding method restricted to the use of the (unnormalized) Haar wavelet transform. We then lift this restriction and show how the PURE-LET strategy can be used to design and optimize a wide class of nonlinear processing functions applied in an arbitrary (in particular, redundant) transform domain. We finally apply some of the proposed denoising algorithms to real multidimensional fluorescence microscopy images. This in vivo imaging modality often operates under low-illumination conditions and short exposure times; consequently, the random fluctuations of the measured fluorophore radiations are well described by a Poisson process degraded (or not) by AWGN. We experimentally validate this statistical measurement model, and we assess the performance of the PURE-LET algorithms in comparison with some state-of-the-art denoising methods. Our solution turns out to be very competitive both qualitatively and computationally, allowing for a fast and efficient denoising of the huge volumes of data that are nowadays routinely produced in biomedical imaging.
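The driving idea of the AWGN part, minimizing a data-adaptive unbiased MSE estimate, can be illustrated with the classical SURE formula for soft-thresholding: the threshold is chosen by minimizing SURE, which never looks at the clean signal, yet tracks the true MSE closely. This is a generic illustration of SURE-based threshold selection, not the LET parameterization of the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 4096, 1.0
x = np.zeros(n)
x[:64] = 6.0                            # sparse clean signal
y = x + rng.normal(0.0, sigma, n)       # AWGN observation

def soft(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def sure_soft(y, t, sigma):
    # Stein's unbiased estimate of ||soft(y, t) - x||^2; never touches x.
    return (len(y) * sigma**2
            + np.sum(np.minimum(np.abs(y), t) ** 2)
            - 2 * sigma**2 * np.count_nonzero(np.abs(y) <= t))

ts = np.linspace(0.0, 4.0, 81)
sure = np.array([sure_soft(y, t, sigma) for t in ts])
mse = np.array([np.sum((soft(y, t) - x) ** 2) for t in ts])  # oracle, for comparison
t_star = ts[sure.argmin()]
print(t_star)
```

The SURE-selected threshold attains nearly the oracle MSE even though the oracle curve requires the unknown noise-free data.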

    Video Motion: Finding Complete Motion Paths for Every Visible Point

    Get PDF
    The problem of understanding motion in video has been an area of intense research in computer vision for decades. The traditional approach is to represent motion using optical flow fields, which describe the two-dimensional instantaneous velocity at every pixel in every frame. We present a new approach to describing motion in video in which each visible world point is associated with a sequence-length video motion path. A video motion path lists the location where a world point would appear if it were visible in every frame of the sequence. Each motion path is coupled with a vector of binary visibility flags for the associated point that identify the frames in which the tracked point is unoccluded.
    We represent paths for all visible points in a particular sequence using a single linear subspace. The key insight we exploit is that, for many sequences, this subspace is low-dimensional, scaling with the complexity of the deformations and the number of independent objects in the scene, rather than the number of frames in the sequence. Restricting all paths to lie within a single motion subspace provides strong regularization that allows us to extend paths through brief occlusions, relying on evidence from the visible frames to hallucinate the unseen locations.
    This thesis presents our mathematical model of video motion. We define a path objective function that optimizes a set of paths given estimates of visible intervals, under the assumption that motion is generally spatially smooth and that the appearance of a tracked point remains constant over time. We estimate visibility based on global properties of all paths, enforcing the physical requirement that at least one tracked point must be visible at every pixel in the video. The model assumes the existence of an appropriate path motion basis; we find a sequence-specific basis through analysis of point tracks from a frame-to-frame tracker. Tracking failures caused by image noise, non-rigid deformations, or occlusions complicate the problem by introducing missing data. We update standard trackers to aggressively reinitialize points lost in earlier frames. Finally, we improve on standard Principal Component Analysis with missing data by introducing a novel compaction step that associates these relocalized points, reducing the amount of missing data that must be overcome. The full system achieves state-of-the-art results, recovering dense, accurate, long-range point correspondences in the face of significant occlusions.
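The core numerical step, recovering a low-dimensional path subspace despite missing (occluded) observations, can be sketched with a simple iterative low-rank completion of a synthetic track matrix (the rank, sizes, and hard-impute SVD scheme are illustrative choices, not the thesis's compaction-based algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)
frames, points, rank = 40, 30, 3
# Synthetic track matrix: stacked x/y coordinates per frame, one column per
# tracked point, all paths lying in a rank-3 motion subspace.
W = rng.normal(size=(2 * frames, rank)) @ rng.normal(size=(rank, points))
mask = rng.random(W.shape) > 0.3       # ~30% of observations missing (occluded)

# Iterative rank-r completion: alternate a truncated SVD with re-imposing
# the observed entries.
X = np.where(mask, W, 0.0)
for _ in range(200):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    X = np.where(mask, W, low)

err = np.linalg.norm((X - W)[~mask]) / np.linalg.norm(W[~mask])
print(err)
```

Because the paths genuinely live in a low-dimensional subspace, the occluded entries are recovered almost exactly from the visible ones, mirroring the regularization argument above.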

    Combinatorial Solutions for Shape Optimization in Computer Vision

    Get PDF
    This thesis aims at solving so-called shape optimization problems, i.e. problems where the shape of some real-world entity is sought, by applying combinatorial algorithms. I present several advances in this field, all of them based on energy minimization. The addressed problems become more intricate over the course of the thesis, starting from problems that are solved globally, then turning to problems where so far no global solutions are known. The first two chapters treat segmentation problems where the considered grouping criterion is directly derived from the image data; that is, the respective data terms do not involve any parameters to estimate. These problems are solved globally. The first of these chapters treats the problem of unsupervised image segmentation, where apart from the image there is no other user input. Here I will focus on a contour-based method and show how to integrate curvature regularity into a ratio-based optimization framework. The arising optimization problem is reduced to optimizing over the cycles in a product graph. This problem can be solved globally in polynomial, effectively linear, time. As a consequence, the method does not depend on initialization, and translational invariance is achieved. This is joint work with Daniel Cremers and Simon Masnou. I will then proceed to the integration of shape knowledge into the framework, while keeping translational invariance. This problem is again reduced to cycle-finding in a product graph. Being based on the alignment of shape points, the method actually uses a more sophisticated shape measure than most local approaches and still provides global optima. It readily extends to tracking problems and allows us to solve some of them in real time. I will present an extension to highly deformable shape models which can be included in the global optimization framework. This method simultaneously allows us to decompose a shape into a set of deformable parts, based only on the input images.
    This is joint work with Daniel Cremers. In the second part, segmentation is combined with so-called correspondence problems, i.e. the underlying grouping criterion is now based on correspondences that have to be inferred simultaneously. That is, in addition to inferring the shapes of objects, one now also tries to put the points in several images into correspondence. The arising problems become more intricate and are no longer optimized globally. This part is divided into two chapters. The first chapter treats the topic of real-time motion segmentation, where objects are identified based on the observation that the respective points in the video will move coherently. Rather than pre-estimating motion, a single energy functional is minimized via alternating optimization. The main novelty lies in the real-time capability, which is achieved by exploiting a fast combinatorial segmentation algorithm. The results are furthermore improved by employing a probabilistic data term. This is joint work with Daniel Cremers. The final chapter presents a method for high-resolution motion layer decomposition and was developed together with Daniel Cremers and Thomas Pock. Layer decomposition methods support the notion of a scene model, which allows us to model occlusion and enforce temporal consistency. The contributions are twofold: from a practical point of view, the proposed method allows us to recover finely detailed layer images by minimizing a single energy. This is achieved by integrating a super-resolution method into the layer decomposition framework. From a theoretical viewpoint, the proposed method introduces layer-based regularity terms as well as a graph-cut-based scheme to solve for the layer domains. The latter is combined with powerful continuous convex optimization techniques into an alternating minimization scheme.
    Lastly, I want to mention that a significant part of this thesis is devoted to the recent trend of exploiting parallel architectures, in particular graphics cards: many combinatorial algorithms are easily parallelized. In Chapter 3 we will see a case where the standard algorithm is hard to parallelize, but is easy to parallelize for the respective problem instances.
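The ratio-based optimization over cycles in a product graph mentioned in the first chapter can be illustrated by the classical minimum-ratio-cycle problem: binary search on the ratio, with a Bellman-Ford negative-cycle test at each step. The tiny graph below is an invented example, not one of the thesis's product graphs:

```python
# Small directed graph: edges (u, v, cost, length). Goal: the cycle minimizing
# (sum of costs) / (sum of lengths), found globally by parametric search.
edges = [
    (0, 1, 4.0, 1.0), (1, 0, 4.0, 1.0),  # cycle 0-1: ratio 8/2 = 4
    (1, 2, 1.0, 1.0), (2, 3, 1.0, 1.0),
    (3, 1, 1.0, 1.0),                    # cycle 1-2-3: ratio 3/3 = 1
]
n = 4

def has_negative_cycle(lmbda):
    # Bellman-Ford on reweighted edge costs c - lambda * l, all nodes as sources.
    dist = [0.0] * n
    for _ in range(n):
        changed = False
        for u, v, c, l in edges:
            w = c - lmbda * l
            if dist[u] + w < dist[v] - 1e-12:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            return False  # a pass with no relaxation proves no negative cycle
    return True

lo, hi = 0.0, 10.0
for _ in range(50):                      # binary search on the optimal ratio
    mid = (lo + hi) / 2.0
    if has_negative_cycle(mid):
        hi = mid                         # some cycle has ratio below mid
    else:
        lo = mid
print(hi)
```

The search converges to the optimal ratio 1 (the cycle 1-2-3), and each test is a global, initialization-free computation, which is the appeal of this class of methods.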

    Discrete and Continuous Optimization for Motion Estimation

    Get PDF
    The study of motion estimation reaches back decades and has become one of the central topics of research in computer vision. Even so, there are situations where current approaches fail, such as when there are extreme lighting variations, significant occlusions, or very large motions. In this thesis, we propose several approaches to address these issues. First, we propose a novel continuous optimization framework for estimating optical flow based on a decomposition of the image domain into triangular facets. We show how this allows occlusions to be easily and naturally handled within our optimization framework without any post-processing. We also show that a triangular decomposition reduces the memory requirements of the direct Cholesky factorization used to solve the resulting linear systems. Second, we introduce a simple method for incorporating additional temporal information into optical flow using inertial estimates of the flow, which leads to a significant reduction in error. We evaluate our methods on several datasets and achieve state-of-the-art results on MPI-Sintel. Finally, we introduce a discrete optimization framework for optical flow computation. Discrete approaches have generally been avoided in optical flow because of the relatively large label space that makes them computationally expensive. In our approach, we use recent advances in image segmentation to build a tree-structured graphical model that conforms to the image content. We show how the optimal solution to these discrete optical flow problems can be computed efficiently by making use of optimization methods from the object recognition literature, even for large images with hundreds of thousands of labels.
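Exact inference on a tree-structured model, the key to making the discrete formulation tractable, can be sketched with min-sum dynamic programming on a chain of pixels with candidate flow labels (the unary costs, smoothness weight, and chain topology are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n_nodes, n_labels = 6, 5                 # chain of pixels, candidate flow labels
unary = rng.random((n_nodes, n_labels))  # data cost per pixel/label
lam = 0.2                                # smoothness weight on |l_i - l_j|

# Min-sum dynamic programming on the chain (a tree): exact MAP in O(n k^2).
msg = np.zeros(n_labels)
back = []
for i in range(1, n_nodes):
    total = unary[i - 1] + msg                        # best cost up to node i-1
    pair = lam * np.abs(np.arange(n_labels)[:, None] - np.arange(n_labels)[None, :])
    cand = total[:, None] + pair                      # [previous label, current label]
    back.append(cand.argmin(axis=0))                  # best predecessor per label
    msg = cand.min(axis=0)
labels = np.empty(n_nodes, dtype=int)
labels[-1] = (unary[-1] + msg).argmin()
for i in range(n_nodes - 2, -1, -1):                  # backtrack the optimum
    labels[i] = back[i][labels[i + 1]]
print(labels)
```

On a tree this recovers the exact global optimum in one forward and one backward sweep, which is why a segmentation-derived tree structure sidesteps the cost of large label spaces.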

    Elastic shape analysis of geometric objects with complex structures and partial correspondences

    Get PDF
    In this dissertation, we address the development of elastic shape analysis frameworks for the registration, comparison and statistical shape analysis of geometric objects with complex topological structures and partial correspondences. In particular, we introduce a variational framework and several numerical algorithms for the estimation of geodesics and distances induced by higher-order elastic Sobolev metrics on the space of parametrized and unparametrized curves and surfaces. We extend our framework to the setting of shape graphs (i.e., geometric objects with branching structures where each branch is a curve) and surfaces with complex topological structures and partial correspondences. To do so, we leverage the flexibility of varifold fidelity metrics in order to augment our geometric objects with a spatially-varying weight function, which in turn enables us to indirectly model topological changes and handle partial matching constraints via the estimation of vanishing weights within the registration process. In the setting of shape graphs, we prove the existence of solutions to the relaxed registration problem with weights, which is the main theoretical contribution of this thesis. In the setting of surfaces, we leverage our surface matching algorithms to develop a comprehensive collection of numerical routines for the statistical shape analysis of sets of 3D surfaces, which includes algorithms to compute Karcher means, perform dimensionality reduction via multidimensional scaling and tangent principal component analysis, and estimate parallel transport across surfaces (possibly with partial matching constraints). Moreover, we also address the development of numerical shape analysis pipelines for large-scale data-driven applications with geometric objects. Towards this end, we introduce a supervised deep learning framework to compute the square-root velocity (SRV) distance for curves. 
    Our trained network provides fast and accurate estimates of the SRV distance between pairs of geometric curves, without the need to find optimal reparametrizations. As a proof of concept for the suitability of such approaches in practical contexts, we use it to perform optical character recognition (OCR), achieving performance comparable in computational speed and accuracy to other existing OCR methods. Lastly, we address the difficulty of extracting high-quality shape structures from imaging data in the field of astronomy. To do so, we present a state-of-the-art expectation-maximization approach for the challenging task of multi-frame astronomical image deconvolution and super-resolution. We leverage our approach to obtain a high-fidelity reconstruction of the night sky, from which high-quality shape data can be extracted using appropriate segmentation and photometric techniques.
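The SRV distance that the network is trained to approximate has a simple closed form for fixed parametrizations: transform each curve by q(t) = c'(t) / sqrt(|c'(t)|) and take the L2 distance between the transforms. A minimal sketch on two discretized planar curves (with no reparametrization optimization, which is the expensive step the network avoids):

```python
import numpy as np

def srv(curve, t):
    # Square-root velocity transform of a discretized planar curve (n x 2).
    v = np.gradient(curve, t, axis=0)
    speed = np.linalg.norm(v, axis=1, keepdims=True)
    return v / np.sqrt(np.maximum(speed, 1e-12))

t = np.linspace(0.0, 1.0, 200)
c1 = np.stack([t, np.zeros_like(t)], axis=1)           # straight segment
c2 = np.stack([t, 0.2 * np.sin(np.pi * t)], axis=1)    # bent segment
q1, q2 = srv(c1, t), srv(c2, t)
dt = t[1] - t[0]
dist = np.sqrt(np.sum((q1 - q2) ** 2) * dt)            # L2 distance between SRVs
print(dist)
```

The full elastic distance additionally minimizes over reparametrizations of one curve; this fixed-parametrization version is the quantity a supervised approximator can target directly.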

    Big Data decision support system

    Get PDF
    Includes bibliographical references. 2022 Fall. Each day, the amount of data produced by sensors, social and digital media, and the Internet of Things rapidly increases. The volume of digital data is expected to double within the next three years. At some point, it might not be financially feasible to store all the data that is received; hence, if data is not analyzed as it is received, the information collected could be lost forever. Actionable Intelligence is the next level of Big Data analysis, where data is used for decision making. This thesis document describes my scientific contributions to Big Data Actionable Intelligence generation. Chapter 1 presents my colleagues' and my contribution to Big Data Actionable Intelligence architecture. The architecture has been demonstrated to support real-time actionable intelligence generation using disparate data sources (e.g., social media, satellite, newsfeeds). This work has been published in the Journal of Big Data. Chapter 2 shows my original method to perform real-time detection of moving targets using Remote Sensing Big Data. This work has also been published in the Journal of Big Data, and it has received an issuance of a U.S. patent. As the Field-of-View (FOV) in remote sensing continues to expand, the number of targets observed by each sensor continues to increase. The ability to track large quantities of targets in real time poses a significant challenge. Chapter 3 describes my colleague's and my contribution to the multi-target tracking domain. We have demonstrated that we can overcome real-time tracking challenges when there are a large number of targets. Our work was published in the Journal of Sensors.

    Motion-Augmented Inference and Joint Kernels in Structured Learning for Object Tracking and Integration with Object Segmentation

    Get PDF
    Video object tracking is the fundamental task of continuously following an object of interest in a video sequence. It has attracted considerable attention in both academia and industry due to its diverse applications, such as automated video surveillance, augmented and virtual reality, medical imaging, automated vehicle navigation and tracking, and smart devices. Challenges in video object tracking arise from occlusion, deformation, background clutter, illumination variation, fast object motion, scale variation, low resolution, rotation, out-of-view targets, and motion blur. Object tracking therefore remains an active research field. This thesis explores improving object tracking by employing 1) advanced techniques in machine learning theory to account for intrinsic changes in the object appearance under those challenging conditions, and 2) object segmentation. More specifically, we propose a fast and competitive method for object tracking that models target dynamics as a stochastic process and uses structured support vector machines. First, we predict target dynamics by harmonic means and a particle filter, in which we exploit kernel machines to derive a new entropy-based observation likelihood distribution. Second, we employ online structured support vector machines to model object appearance, where we analyze the responses of several kernel functions for various feature descriptors and study how such kernels can be optimally combined to formulate a single joint kernel function. During learning, we develop a probability formulation to determine model updates and use a sequential minimal optimization step to solve the structured optimization problem. We gain efficiency improvements in the proposed object tracking by 1) exploiting a particle filter to sample the search space instead of commonly adopted dense sampling strategies, and 2) introducing a motion-augmented regularization term during inference to constrain the output search space.
    We then extend our baseline tracker to detect tracking failures or inaccuracies and reinitialize itself when needed. To that end, we integrate object segmentation into tracking. First, we use binary support vector machines to develop a technique that detects tracking failures (or inaccuracies) by monitoring internal variables of our baseline tracker. We leverage learned examples from our baseline tracker to train the employed binary support vector machines. Second, we propose an automated method to reinitialize the tracker and recover from tracking failures by integrating active contour based object segmentation and using a particle filter to sample bounding boxes for segmentation. Through extensive experiments on standard video datasets, we subjectively and objectively demonstrate that both our baseline and extended methods compete strongly against state-of-the-art object tracking methods under challenging video conditions.
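The efficiency idea of sampling the search space with a particle filter rather than a dense sliding-window search can be sketched as follows; the scoring function standing in for the learned structured-SVM response is hypothetical (a peak at an assumed target location), as are all the parameters:

```python
import numpy as np

rng = np.random.default_rng(5)

def score(box):
    # Hypothetical stand-in for the learned appearance model's response:
    # highest for boxes centered at the assumed target location (50, 40).
    return -np.linalg.norm(box[:2] - np.array([50.0, 40.0]))

# Particle filter over box states (cx, cy, w, h) instead of dense sampling.
n_particles = 300
particles = np.tile(np.array([45.0, 45.0, 20.0, 20.0]), (n_particles, 1))
for _ in range(10):
    particles[:, :2] += rng.normal(0.0, 2.0, size=(n_particles, 2))  # motion noise
    w = np.array([score(p) for p in particles])
    w = np.exp(w - w.max())
    w /= w.sum()                                                     # normalize weights
    idx = rng.choice(n_particles, size=n_particles, p=w)             # resample
    particles = particles[idx]
est = particles.mean(axis=0)
print(est[:2])
```

Only a few hundred candidate boxes are scored per frame, and the particle cloud concentrates near the response peak, which is the efficiency argument made above.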