
    Spatio-Temporal Video Segmentation with Shape Growth or Shrinkage Constraint

    We propose a new method for the joint segmentation of monotonously growing or shrinking shapes in a time sequence of noisy images. The task of segmenting the image time series is expressed as an optimization problem on the spatio-temporal graph of pixels, in which we impose the constraint of shape growth or shrinkage by introducing monodirectional infinite links connecting pixels at the same spatial locations in successive image frames. The globally optimal solution is computed with a graph cut. The performance of the proposed method is validated on three applications: segmentation of melting sea ice floes and of growing burned areas from time series of 2D satellite images, and segmentation of a growing brain tumor from sequences of 3D medical scans. In the latter application, we impose an additional inter-sequence inclusion constraint by adding directed infinite links between pixels of dependent image structures.
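The growth constraint described above can be sketched as a graph-construction step. The snippet below (not the authors' code; frame stacking as `(T, H, W)` and the flattened node indexing are illustrative assumptions) builds the directed infinite-capacity temporal arcs that, in an s-t min-cut where the source side is foreground, forbid a pixel from switching back from foreground to background in a later frame:

```python
# Sketch: directed temporal arcs enforcing monotonic shape growth
# in a spatio-temporal graph cut. Node ids are flattened (t, y, x)
# indices; the (T, H, W) layout is an assumption for illustration.
INF = float("inf")

def temporal_growth_arcs(T, H, W):
    """Directed infinite-capacity arcs p_t -> p_{t+1} at the same (y, x).

    With source side = foreground, an infinite arc from pixel p in frame t
    to the same pixel in frame t+1 makes any cut that labels p foreground
    at t but background at t+1 infinitely costly, so the segmented shape
    can only grow over time (reverse the arcs to enforce shrinkage).
    """
    def node(t, y, x):
        return (t * H + y) * W + x

    arcs = []
    for t in range(T - 1):
        for y in range(H):
            for x in range(W):
                arcs.append((node(t, y, x), node(t + 1, y, x), INF))
    return arcs

arcs = temporal_growth_arcs(T=3, H=2, W=2)
# 2 inter-frame transitions x 4 pixels per frame = 8 constraint arcs
```

These arcs would be added on top of the usual data and smoothness terms before running a max-flow/min-cut solver.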

    Enforcing Monotonous Shape Growth or Shrinkage in Video Segmentation

    We propose a new method based on graph cuts for the joint segmentation of monotonously growing or shrinking shapes in time series of noisy images. By introducing directed infinite links connecting pixels at the same spatial locations in successive image frames, we impose the shape growth/shrinkage constraint within the graph-cut framework. Minimizing the energy defined on the resulting graph of the image sequence yields a globally optimal segmentation. We validate the proposed approach on two applications: segmentation of melting sea ice floes from a time series of multimodal satellite images, and segmentation of a growing brain tumor from sequences of 3D multimodal medical scans. In the latter application, we impose an additional inter-sequence inclusion constraint by adding directed infinite links between pixels of dependent image structures.

    Detection and Generalization of Spatio-temporal Trajectories for Motion Imagery

    In today's world of vast information availability, users often confront large, unorganized amounts of data with limited tools for managing them. Motion imagery datasets have become an increasingly popular means for exposing and disseminating information. Commonly, moving objects are of primary interest in modeling such datasets. Users may require different levels of detail, mainly for visualization and further processing, according to the application at hand. In this thesis we exploit the geometric attributes of objects for dataset summarization using a series of image processing and neural network tools. To form data summaries, we select representative time instances through the segmentation of an object's spatio-temporal trajectory lines. High-movement-variation instances are selected through a new hybrid self-organizing map (SOM) technique to describe a single spatio-temporal trajectory. Multiple objects move in diverse yet classifiable patterns. To group corresponding trajectories, we utilize an abstraction mechanism that investigates a vague moving relevance between the data in space and time. Thus, we introduce the spatio-temporal neighborhood unit as a variable generalization surface; by altering the unit's dimensions, scaled generalization is accomplished. Common complications in tracking applications, including occlusion, noise, information gaps, and unconnected segments of data sequences, are addressed through the hybrid-SOM analysis. Nevertheless, entangled data sequences, where there is no information on which data entry belongs to which trajectory, are frequently evident. A multidimensional classification technique that combines geometric and backpropagation neural network implementations is used to distinguish between trajectory data. Furthermore, modeling and summarization of two-dimensional phenomena evolving in time brings forward the novel concept of spatio-temporal helixes as compact event representations. The phenomena models are composed of SOM movement nodes (spines) and cardinality shape-change descriptors (prongs). While we focus on the analysis of MI datasets, the framework can be generalized to function with other types of spatio-temporal datasets. Multiple-scale generalization is allowed on a dynamic, significance-based scale rather than a constant one. The constructed summaries are not just a visualization product; they support further processing for metadata creation, indexing, and querying. Experimentation, comparisons, and error estimations for each technique support the analyses discussed.
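The core idea of selecting high-movement-variation time instances can be illustrated without the full hybrid-SOM machinery. The sketch below is a simplified geometric stand-in: it keeps trajectory vertices where the direction of movement turns sharply (the angle threshold is an illustrative assumption, not a value from the thesis):

```python
# Simplified sketch of selecting representative time instances from a
# spatio-temporal trajectory: keep vertices where movement direction
# changes sharply. The thesis uses a hybrid SOM for this; the fixed
# turning-angle threshold here is only an illustrative assumption.
import math

def summarize_trajectory(points, angle_thresh_deg=30.0):
    """Return indices of high-variation vertices, always keeping endpoints."""
    keep = [0]
    for i in range(1, len(points) - 1):
        ax = points[i][0] - points[i - 1][0]
        ay = points[i][1] - points[i - 1][1]
        bx = points[i + 1][0] - points[i][0]
        by = points[i + 1][1] - points[i][1]
        # Turning angle between successive displacement vectors.
        turn = abs(math.degrees(math.atan2(by, bx) - math.atan2(ay, ax)))
        turn = min(turn, 360.0 - turn)
        if turn > angle_thresh_deg:
            keep.append(i)
    keep.append(len(points) - 1)
    return keep

path = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (3, 2)]
idx = summarize_trajectory(path)  # the two right-angle turns are kept
```

A learned approach such as the SOM adapts the selection to local movement statistics instead of relying on a single global threshold.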

    Simultaneous completion and spatiotemporal grouping of corrupted motion tracks

    Given an unordered list of 2D or 3D point trajectories corrupted by noise and partial observations, in this paper we introduce a framework to simultaneously recover the incomplete motion tracks and group the points into spatially and temporally coherent clusters. This advances existing work, which only addresses partial problems, without considering a unified and unsupervised solution. We cast this problem as a matrix completion problem, in which point tracks are arranged into a matrix with the missing entries set to zero. In order to perform the double clustering, the measurement matrix is assumed to be drawn from a dual union of spatiotemporal subspaces. The bases and the dimensionality of these subspaces, the affinity matrices used to encode the temporal and spatial clusters to which each point belongs, and the non-visible tracks are then jointly estimated via augmented Lagrange multipliers in polynomial time. A thorough evaluation on incomplete motion tracks for multiple object typologies shows that the accuracy of the matrix we recover compares favorably to that obtained with existing low-rank matrix completion methods, especially under noisy measurements. In addition, besides recovering the incomplete tracks, the point trajectories are directly grouped into different object instances, and a number of semantically meaningful temporal primitive actions are automatically discovered. This work has been partially supported by the Spanish State Research Agency through the María de Maeztu Seal of Excellence to IRI (MDM-2016-0656), by the Spanish Ministry of Science and Innovation under project HuMoUR (TIN2017-90086-R) and the Salvador de Madariaga grant (PRX19/00626), and by the ERA-net CHIST-ERA project IPALM (PCI2019-103386).
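The "missing entries set to zero, then complete under a low-rank assumption" setup can be sketched with a much simpler solver than the paper's augmented-Lagrange method. The toy below uses alternating least squares for a rank-1 factorization with hard refilling of the observed entries (the rank, iteration count, and `u * v^T` parameterization are illustrative assumptions):

```python
# Sketch of low-rank track completion: arrange measurements in a matrix
# with zeros at missing entries, then alternate between fitting a rank-1
# factorization u * v^T and refilling only the unobserved entries.
# This is a minimal stand-in for the ALM solver described in the paper.

def rank1_complete(M, observed, iters=200):
    """M: list-of-lists matrix, zeros at missing entries.
    observed: boolean mask of the same shape. Returns the completed matrix."""
    R, C = len(M), len(M[0])
    X = [row[:] for row in M]
    u = [1.0] * R
    v = [1.0] * C
    for _ in range(iters):
        # Fix v, least-squares update of u; then fix u, update v.
        for i in range(R):
            u[i] = sum(X[i][j] * v[j] for j in range(C)) / sum(vj ** 2 for vj in v)
        for j in range(C):
            v[j] = sum(X[i][j] * u[i] for i in range(R)) / sum(ui ** 2 for ui in u)
        # Keep measured entries, overwrite only the missing ones.
        for i in range(R):
            for j in range(C):
                if not observed[i][j]:
                    X[i][j] = u[i] * v[j]
    return X

# Rank-1 toy: the true matrix is the outer product of (1,2,3) with itself,
# so the unobserved (2, 2) entry should be recovered as 9.
M = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 6.0],
     [3.0, 6.0, 0.0]]          # (2, 2) is missing, stored as 0
mask = [[v != 0 for v in row] for row in M]
X = rank1_complete(M, mask)
```

The paper's formulation additionally estimates subspace bases and spatial/temporal affinity matrices jointly with the completion, which this sketch omits.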

    Spatiotemporal subpixel mapping of time-series images

    Land cover/land use (LCLU) information extraction from multitemporal sequences of remote sensing imagery is becoming increasingly important. Mixed pixels are a common problem in Landsat and MODIS images that are used widely for LCLU monitoring. Recently developed subpixel mapping (SPM) techniques can extract LCLU information at the subpixel level by dividing mixed pixels into subpixels to which hard classes are then allocated. However, SPM has rarely been studied for time-series images (TSIs). In this paper, a spatiotemporal SPM approach is proposed for SPM of TSIs. In contrast to conventional spatial dependence-based SPM methods, the proposed approach simultaneously considers spatial and temporal dependences, with the former capturing the correlation of subpixel classes within each image and the latter capturing the correlation of subpixel classes between images in a temporal sequence. The proposed approach was developed assuming the availability of one fine-spatial-resolution map within the TSIs. The SPM of TSIs is formulated as a constrained optimization problem: under the coherence constraint imposed by the coarse LCLU proportions, the objective is to maximize the spatiotemporal dependence, defined by blending the spatial and temporal dependences. Experiments on three data sets showed that the proposed approach can provide more accurate subpixel-resolution TSIs than conventional SPM methods. The SPM results obtained from the TSIs provide an excellent opportunity for LCLU dynamic monitoring and change detection at a finer spatial resolution than the available coarse-spatial-resolution TSIs.
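The "blending" of spatial and temporal dependence can be illustrated with a per-subpixel score. The sketch below is an illustrative simplification, not the paper's objective: neighbouring subpixels within the current image vote for the spatial term, the co-located label in the fine-resolution reference map supplies the temporal term, and the weight `w` is an assumed parameter:

```python
# Illustrative sketch of scoring a candidate class for one subpixel by
# blending spatial dependence (agreement with within-image neighbours)
# and temporal dependence (agreement with the reference fine-resolution
# map). The weight w is an assumption, not a value from the paper.

def spatiotemporal_score(cls, spatial_neighbors, temporal_label, w=0.7):
    """Higher score = more plausible class assignment for the subpixel.

    spatial_neighbors: labels of the neighbouring subpixels in this image.
    temporal_label: label of the same subpixel in the reference map.
    w: weight on the spatial term (1 - w goes to the temporal term).
    """
    spatial = sum(1 for n in spatial_neighbors if n == cls) / len(spatial_neighbors)
    temporal = 1.0 if temporal_label == cls else 0.0
    return w * spatial + (1 - w) * temporal

score = spatiotemporal_score("water", ["water", "water", "land", "water"], "water")
# 0.7 * 0.75 + 0.3 * 1.0 = 0.825
```

In the paper, class allocation additionally has to respect the coarse-pixel proportion (coherence) constraint, which a per-subpixel greedy score alone does not enforce.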

    Bayesian Model Based Tracking with Application to Cell Segmentation and Tracking

    The goal of this research is to develop a model-based tracking framework with biomedical imaging applications. This is an interdisciplinary area of research with interests in machine vision, image processing, and biology. This thesis presents methods of image modeling, tracking, and data association applied to problems in multi-cellular image analysis, especially hematopoietic stem cell (HSC) images at the current stage. The focus of this research is the development of a robust image analysis interface capable of detecting, locating, and tracking individual HSCs, which proliferate and differentiate into different blood cell types continuously during their lifetime and are of substantial interest in gene therapy, cancer, and stem-cell research. Such a system could potentially be employed in the future to track different groups of HSCs extracted from bone marrow and recognize the best candidates based on biomedical and biological criteria. Selected candidates can further be used for bone marrow transplantation (BMT), a medical procedure for the treatment of various incurable diseases such as leukemia, lymphomas, aplastic anemia, immune deficiency disorders, multiple myeloma, and some solid tumors. Tracking HSCs over time is a localization-based tracking problem, one of the most challenging classes of tracking problems. The proposed cell tracking system consists of three inter-related stages: i) cell detection/localization; ii) association of the detected cells; and iii) background estimation/subtraction, which will be discussed in detail.
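Of the three stages listed above, the association step is the one with a compact canonical form. The sketch below is a generic greedy nearest-neighbour associator, offered only as an illustration of what stage ii) must do (the thesis's Bayesian model-based method is more sophisticated; the distance gate here is an assumed parameter):

```python
# Illustrative sketch of stage ii), data association: match detections in
# the current frame to existing cell tracks by greedy nearest neighbour.
# The max_dist gate is an assumption for illustration, not from the thesis.

def associate(tracks, detections, max_dist=5.0):
    """tracks: {track_id: (x, y)} last known positions.
    detections: list of (x, y) positions in the current frame.
    Returns {track_id: detection_index} for matched pairs."""
    pairs = []
    for tid, (tx, ty) in tracks.items():
        for j, (dx, dy) in enumerate(detections):
            d = ((tx - dx) ** 2 + (ty - dy) ** 2) ** 0.5
            if d <= max_dist:
                pairs.append((d, tid, j))
    pairs.sort()  # closest pairs are assigned first
    assigned, used = {}, set()
    for d, tid, j in pairs:
        if tid not in assigned and j not in used:
            assigned[tid] = j
            used.add(j)
    return assigned

matches = associate({1: (0.0, 0.0), 2: (10.0, 0.0)},
                    [(9.5, 0.5), (0.5, 0.0)])
# track 1 -> detection 1, track 2 -> detection 0
```

Unmatched detections would typically spawn new tracks, and unmatched tracks would be flagged as possible cell exits or detection failures, feeding back into stages i) and iii).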