
    A framework for quantification and physical modeling of cell mixing applied to oscillator synchronization in vertebrate somitogenesis

    In development and disease, cells move as they exchange signals. One example is found in vertebrate development, during which the timing of segment formation is set by a ‘segmentation clock’, in which oscillating gene expression is synchronized across a population of cells by Delta-Notch signaling. Delta-Notch signaling requires local cell-cell contact, but in the zebrafish embryonic tailbud, oscillating cells move rapidly, exchanging neighbors. Previous theoretical studies proposed that this relative movement, or cell mixing, might alter signaling and thereby enhance synchronization. However, it remains unclear whether the mixing timescale in the tissue is in the right range for this effect, because a framework to reliably measure the mixing timescale and compare it with the signaling timescale has been lacking. Here, we develop such a framework by quantitatively describing cell mixing without the need for an external reference frame and by constructing a physical model of cell movement based on the data. Numerical simulations show that mixing with experimentally observed statistics enhances synchronization of coupled phase oscillators, suggesting that mixing in the tailbud is fast enough to affect the coherence of rhythmic gene expression. Our approach will find general application in analyzing the relative movements of communicating cells during development and disease.
    Affiliations: Uriu, Koichiro (Kanazawa University, Japan); Bhavna, Rajasekaran (Max Planck Institute of Molecular Cell Biology and Genetics, Germany; Max Planck Institute for the Physics of Complex Systems, Germany); Oates, Andrew C. (Francis Crick Institute, United Kingdom; University College London, United Kingdom); Morelli, Luis Guillermo (Consejo Nacional de Investigaciones Científicas y Técnicas, Oficina de Coordinación Administrativa Parque Centenario, Instituto de Investigación en Biomedicina de Buenos Aires - Instituto Partner de la Sociedad Max Planck, Argentina; Max Planck Institute for Molecular Physiology, Germany; Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Física, Argentina)
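
    The claim that neighbor exchange can enhance synchronization of coupled oscillators can be illustrated with a toy model. The sketch below is an assumption-laden illustration, not the authors' code: Kuramoto-style phase oscillators sit on a square lattice with nearest-neighbor coupling, and randomly chosen lattice sites are swapped as a crude stand-in for cell mixing. All parameter values, including `mix_rate`, are invented for illustration; increasing `mix_rate` should raise the final order parameter for the same coupling strength.

    ```python
    # Toy model: Kuramoto oscillators on a lattice with random site swaps ("mixing").
    # All parameters are illustrative, not values from the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    L = 20          # lattice side length
    K = 0.5         # coupling strength
    sigma = 0.2     # spread of intrinsic frequencies
    dt = 0.05       # integration time step
    steps = 2000
    mix_rate = 1.0  # expected site swaps per oscillator per unit time (assumed)

    phase = rng.uniform(0.0, 2.0 * np.pi, (L, L))
    omega = rng.normal(1.0, sigma, (L, L))

    def coupling(phase):
        """Sum of sin(neighbor phase - own phase) over the four lattice neighbors."""
        total = np.zeros_like(phase)
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            total += np.sin(np.roll(phase, shift, axis=axis) - phase)
        return total

    def mix(phase, omega, n_swaps):
        """Swap randomly chosen pairs of sites: a crude stand-in for cell mixing."""
        for _ in range(n_swaps):
            a = tuple(rng.integers(0, L, 2))
            b = tuple(rng.integers(0, L, 2))
            phase[a], phase[b] = phase[b], phase[a]
            omega[a], omega[b] = omega[b], omega[a]

    for _ in range(steps):
        phase += dt * (omega + K * coupling(phase))
        mix(phase, omega, n_swaps=int(mix_rate * L * L * dt))

    # Kuramoto order parameter: 1 means fully synchronized, near 0 means incoherent.
    r = abs(np.exp(1j * phase).mean())
    print(f"order parameter r = {r:.3f}")
    ```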

    Coercive Region-level Registration for Multi-modal Images

    We propose a coercive approach to simultaneously register and segment multi-modal images that share similar spatial structure. Registration is done at the region level to facilitate data fusion while avoiding the need for interpolation. The algorithm performs alternating minimization of an objective function informed by statistical models for pixel values in the different modalities. Hypothesis tests are developed to determine whether to refine segmentations by splitting regions. We demonstrate that our approach performs significantly better than state-of-the-art registration and segmentation methods on microscopy images.
    Comment: This work has been accepted to the International Conference on Image Processing (ICIP) 201
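
    The alternating minimization described above can be sketched as follows. This is a simplified stand-in, not the authors' implementation: each region is modelled by an independent Gaussian per modality, registration is restricted to an integer translation found by grid search, and the hypothesis-test-driven region splitting is omitted. The function names and parameter defaults are invented for illustration; `labels` is an initial integer region map of the same shape as the images.

    ```python
    import numpy as np

    def region_nll(img, labels, n_regions):
        """Negative log-likelihood of img under independent per-region Gaussian models."""
        nll = 0.0
        for r in range(n_regions):
            vals = img[labels == r]
            if vals.size < 2:
                continue
            mu, var = vals.mean(), vals.var() + 1e-6
            nll += 0.5 * np.sum((vals - mu) ** 2 / var + np.log(2 * np.pi * var))
        return nll

    def register_and_segment(img_a, img_b, labels, n_regions, max_shift=3, n_iter=5):
        """Alternate between a translation update for img_b and a per-pixel relabelling."""
        shift = (0, 0)
        for _ in range(n_iter):
            # Registration step: grid-search the integer translation of img_b that best
            # fits the current region models (img_a's likelihood does not depend on it).
            costs = {}
            for dy in range(-max_shift, max_shift + 1):
                for dx in range(-max_shift, max_shift + 1):
                    shifted = np.roll(img_b, (dy, dx), axis=(0, 1))
                    costs[(dy, dx)] = region_nll(shifted, labels, n_regions)
            shift = min(costs, key=costs.get)
            shifted = np.roll(img_b, shift, axis=(0, 1))

            # Segmentation step: reassign each pixel to the region whose Gaussian
            # models, in both modalities jointly, explain its values best.
            scores = np.zeros((n_regions,) + img_a.shape)
            for r in range(n_regions):
                for img in (img_a, shifted):
                    vals = img[labels == r]
                    if vals.size < 2:
                        scores[r] = np.inf
                        break
                    mu, var = vals.mean(), vals.var() + 1e-6
                    scores[r] += 0.5 * ((img - mu) ** 2 / var + np.log(2 * np.pi * var))
            labels = scores.argmin(axis=0)
        return shift, labels
    ```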

    Image and Volume Segmentation by Water Flow

    A general framework for image segmentation is presented in this paper, based on the paradigm of water flow. The major attributes of water flow, such as pressure, surface tension and capillary force, are defined in the context of force-field generation and make the model adaptable to topological and geometrical changes. A flow-stopping image functional combining edge- and region-based forces is introduced to provide both a long capture range and boundary accuracy. The method is assessed qualitatively and quantitatively on synthetic and natural images. It is shown that the new approach can segment objects with complex shapes or weakly contrasted boundaries, and has good immunity to noise. The operator is also extended to 3-D and is successfully applied to medical volume segmentation.
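
    A loose analogue of the idea, not the paper's operator, is sketched below: water spreads from a seed pixel into 4-connected neighbors whenever a constant driving pressure exceeds an edge-based stopping term derived from the image gradient. Surface tension and capillary forces are omitted, and the pressure, stopping weight and iteration budget are assumed values.

    ```python
    import numpy as np

    def water_flow_segment(image, seed, pressure=0.3, edge_weight=1.0, n_iter=500):
        """Grow a 'wet' region from `seed` until edge resistance stops the flow."""
        image = image.astype(float)
        # Edge strength from finite-difference gradients acts as the flow-stopping term.
        gy, gx = np.gradient(image)
        edge = np.hypot(gx, gy)
        edge /= edge.max() + 1e-9

        wet = np.zeros(image.shape, dtype=bool)
        wet[seed] = True
        for _ in range(n_iter):
            # Pixels 4-adjacent to the current water body (np.roll wraps at the
            # borders; pad the image in practice to avoid leakage across them).
            front = np.zeros_like(wet)
            for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
                front |= np.roll(wet, shift, axis=axis)
            front &= ~wet
            # Water advances where the pressure beats the edge-based resistance.
            advance = front & (pressure - edge_weight * edge > 0)
            if not advance.any():
                break
            wet |= advance
        return wet
    ```

    A typical call would be `water_flow_segment(img, seed=(row, col))`, with the returned boolean mask marking the segmented object.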

    Multispectral object segmentation and retrieval in surveillance video

    This paper describes a system for object segmentation and feature extraction in surveillance video. Segmentation is performed by a dynamic vision system that fuses information from thermal infrared video with standard CCTV video in order to detect and track objects. Separate background modelling in each modality and dynamic mutual-information-based thresholding are used to provide initial foreground candidates for tracking. The belief in the validity of these candidates is ascertained using knowledge of foreground pixels and temporal linking of candidates. The transferable belief model is used to combine these sources of information and segment objects. Extracted objects are subsequently tracked using adaptive thermo-visual appearance models. In order to facilitate search and classification of objects in large archives, retrieval features from both modalities are extracted for tracked objects. Overall system performance is demonstrated in a simple retrieval scenario.
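
    The front end of such a pipeline can be sketched with simple stand-ins: a per-modality running-average background model with a fixed threshold takes the place of the dynamic mutual-information thresholding, and a plain agreement rule takes the place of the transferable belief model. The class and parameter names below are invented for illustration.

    ```python
    import numpy as np

    class BackgroundModel:
        """Running-average background with a fixed-threshold foreground test (one per modality)."""
        def __init__(self, first_frame, alpha=0.05, threshold=25.0):
            self.bg = first_frame.astype(float)
            self.alpha = alpha          # background adaptation rate (assumed)
            self.threshold = threshold  # absolute-difference threshold (assumed)

        def foreground(self, frame):
            frame = frame.astype(float)
            mask = np.abs(frame - self.bg) > self.threshold
            # Update the background only where no foreground was detected.
            self.bg = np.where(mask, self.bg,
                               (1 - self.alpha) * self.bg + self.alpha * frame)
            return mask

    def fuse(mask_thermal, mask_visible):
        """Crude fusion: confirmed where both modalities agree, candidate where either fires."""
        confirmed = mask_thermal & mask_visible
        candidate = mask_thermal | mask_visible
        return confirmed, candidate

    # Per-frame usage: mask_t = thermal_model.foreground(thermal_frame)
    #                  mask_v = visible_model.foreground(visible_frame)
    #                  confirmed, candidate = fuse(mask_t, mask_v)
    ```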