
    Image and Volume Segmentation by Water Flow

    A general framework for image segmentation, based on the paradigm of water flow, is presented in this paper. The major water-flow attributes, such as water pressure, surface tension and capillary force, are defined in the context of force-field generation and make the model adaptable to topological and geometrical changes. A flow-stopping image functional combining edge- and region-based forces is introduced to provide both capture range and accuracy. The method is assessed qualitatively and quantitatively on synthetic and natural images. It is shown that the new approach can segment objects with complex shapes or weakly contrasted boundaries, and has good immunity to noise. The operator is also extended to 3-D and is successfully applied to medical volume segmentation.
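    The flooding behaviour described in the abstract can be caricatured in a few lines. The sketch below is a simplification, not the paper's force-field model: water spreads outward from a seed pixel and advances only where a combined edge/region stopping criterion holds (gradient magnitude below `edge_thresh` and intensity close to the region mean). All function names and thresholds here are illustrative assumptions.

```python
import numpy as np
from collections import deque

def flood_segment(image, seed, edge_thresh=0.2, region_thresh=0.3):
    """Toy water-flow segmentation: flood-fill from a seed while a combined
    edge- and region-based stopping criterion holds. A stand-in sketch, not
    the paper's force-field formulation."""
    h, w = image.shape
    # Edge force: gradient magnitude from central finite differences.
    gy, gx = np.gradient(image.astype(float))
    edge = np.hypot(gx, gy)
    seg = np.zeros((h, w), dtype=bool)
    seg[seed] = True
    region_mean = float(image[seed])
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not seg[ny, nx]:
                # Edge force stops the flow at strong gradients; the region
                # force stops it where intensity departs from the region mean.
                if (edge[ny, nx] < edge_thresh
                        and abs(image[ny, nx] - region_mean) < region_thresh):
                    seg[ny, nx] = True
                    queue.append((ny, nx))
    return seg
```

    On a synthetic bright square the flood fills the low-gradient interior and halts at the gradient ridge marking the boundary.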

    On Using Physical Analogies for Feature and Shape Extraction in Computer Vision

    There is a rich literature of approaches to image feature extraction in computer vision. Many sophisticated approaches exist for low- and high-level feature extraction, but they can be complex to implement, with parameter choice guided by experimentation yet impeded by the speed of computation. We have developed new ways to extract features based on notional use of physical paradigms, with parameterisation that is more familiar to a scientifically trained user, aiming to make best use of computational resource. We describe how analogies based on gravitational force can be used for low-level analysis, whilst analogies of water flow and heat can be deployed to achieve high-level smooth shape detection. These new approaches to arbitrary shape extraction are compared with standard state-of-the-art approaches based on curve evolution. There is no established comparator for our use of gravitational force. We also aim to show that the implementation is consistent with the original motivations for these techniques, and so contend that the exploration of physical paradigms offers a promising avenue for new approaches to feature extraction in computer vision.
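    To give a flavour of the gravitational analogy, the sketch below treats each pixel intensity as a point mass and accumulates the inverse-square attraction it exerts at every other location. This brute-force O(N²) version is purely illustrative and is not the authors' operator; the function name and normalisation are assumptions.

```python
import numpy as np

def gravitational_force_field(image):
    """Illustrative 'gravitational' force field: each pixel's intensity acts
    as a mass pulling on every other pixel with an inverse-square law.
    Returns an (h, w, 2) array of (row, col) force components."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    force = np.zeros((h, w, 2))
    for y in range(h):
        for x in range(w):
            dy = ys - y
            dx = xs - x
            r2 = (dy ** 2 + dx ** 2).astype(float)
            r2[y, x] = np.inf          # a pixel exerts no force on itself
            r3 = r2 ** 1.5             # |r|^3 gives unit direction / r^2
            force[y, x, 0] = np.sum(image * dy / r3)
            force[y, x, 1] = np.sum(image * dx / r3)
    return force
```

    By symmetry, the force vanishes at the centre of a uniform image, while at a corner it points towards the bulk of the "mass".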

    Medical Image Segmentation by Water Flow

    We present a new image segmentation technique based on the paradigm of water flow and apply it to medical images. The force-field analogy is used to implement the major water-flow attributes, such as water pressure, surface tension and adhesion, so that the model achieves topological adaptability and geometrical flexibility. A new snake-like force functional combining edge- and region-based forces is introduced to provide both capture range and accuracy. The method has been assessed qualitatively and quantitatively, and shows good detection performance as well as the ability to handle noise.

    Estimation of vector fields in unconstrained and inequality constrained variational problems for segmentation and registration

    Vector fields arise in many problems of computer vision, particularly in non-rigid registration. In this paper, we develop coupled partial differential equations (PDEs) to estimate vector fields that define the deformation between objects, as well as the contour or surface that defines the segmentation of the objects. We also explore the utility of inequality constraints applied to variational problems in vision, such as the estimation of deformation fields in non-rigid registration and tracking. To solve inequality-constrained vector field estimation problems, we apply tools from the Kuhn–Tucker theorem in optimization theory. Our technique differs from recently popular joint segmentation and registration algorithms, particularly in its coupled set of PDEs derived from the same set of energy terms for registration and segmentation. We present both the theory and results that demonstrate our approach.
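    The role of the Kuhn–Tucker conditions can be illustrated on a toy inequality-constrained problem: minimise ||Ax − b||² subject to a lower bound x ≥ lo, solved by projected gradient descent. This is a minimal sketch under assumed names and step sizes, not the paper's coupled-PDE scheme.

```python
import numpy as np

def projected_gradient(A, b, lo=0.0, steps=500, lr=0.01):
    """Minimise ||Ax - b||^2 subject to x >= lo by projected gradient descent.
    At convergence the Karush-Kuhn-Tucker conditions hold: for each i, either
    x_i > lo and the gradient component is zero (constraint inactive), or
    x_i = lo and the gradient component is non-negative (constraint active)."""
    x = np.maximum(np.zeros(A.shape[1]), lo)
    for _ in range(steps):
        grad = 2.0 * A.T @ (A @ x - b)
        x = np.maximum(x - lr * grad, lo)  # projection enforces x >= lo
    return x
```

    With A = I and b = (1, −1), the unconstrained optimum violates x ≥ 0 in the second coordinate, so the constrained solution clamps it to the bound: x ≈ (1, 0).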

    Object-based 2D-to-3D video conversion for effective stereoscopic content generation in 3D-TV applications

    Three-dimensional television (3D-TV) has gained increasing popularity in the broadcasting domain, as it enables enhanced viewing experiences in comparison with conventional two-dimensional (2D) TV. However, its application has been constrained by the lack of essential content, i.e., stereoscopic videos. To alleviate this content shortage, an economical and practical solution is to reuse the huge media resources available in monoscopic 2D and convert them to stereoscopic 3D. Although stereoscopic video can be generated from monoscopic sequences using depth measurements extracted from cues such as focus blur, motion and size, the quality of the resulting video may be poor, as such measurements are usually arbitrarily defined and appear inconsistent with the real scenes. To help solve this problem, a novel method for object-based stereoscopic video generation is proposed which features i) optical-flow-based occlusion reasoning in determining depth ordinals, ii) object segmentation using improved region growing from masks of determined depth layers, and iii) a hybrid depth estimation scheme using content-based matching (inside a small library of true stereo image pairs) and depth-ordinal-based regularization. Comprehensive experiments have validated the effectiveness of the proposed 2D-to-3D conversion method in generating stereoscopic videos of consistent depth measurements for 3D-TV applications.
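    To show how a second view follows from a depth map, the sketch below performs a crude depth-image-based rendering step: each left-view pixel is shifted horizontally by a disparity proportional to its depth value. It is not the proposed pipeline (there is no occlusion reasoning or region growing, and later writes simply overwrite earlier ones); the function name and disparity model are assumptions.

```python
import numpy as np

def synthesize_right_view(left, depth, max_disp=3):
    """Toy depth-image-based rendering: shift each left-view pixel left by a
    disparity proportional to its normalised depth (larger depth value =
    nearer = bigger shift). Holes stay zero; occlusions are resolved only by
    write order, a deliberate simplification."""
    h, w = left.shape
    right = np.zeros_like(left)
    disp = np.round(max_disp * depth).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disp[y, x]
            if 0 <= nx < w:
                right[y, nx] = left[y, x]
    return right
```

    A single near pixel at column 3 with unit depth and max_disp=2 lands at column 1 in the synthesized view, leaving a disocclusion hole at its original position.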

    A framework for quantification and physical modeling of cell mixing applied to oscillator synchronization in vertebrate somitogenesis

    In development and disease, cells move as they exchange signals. One example is found in vertebrate development, during which the timing of segment formation is set by a ‘segmentation clock’, in which oscillating gene expression is synchronized across a population of cells by Delta-Notch signaling. Delta-Notch signaling requires local cell-cell contact, but in the zebrafish embryonic tailbud, oscillating cells move rapidly, exchanging neighbors. Previous theoretical studies proposed that this relative movement, or cell mixing, might alter signaling and thereby enhance synchronization. However, it remains unclear whether the mixing timescale in the tissue is in the right range for this effect, because a framework to reliably measure the mixing timescale and compare it with the signaling timescale is lacking. Here, we develop such a framework using a quantitative description of cell mixing without the need for an external reference frame, and construct a physical model of cell movement based on the data. Numerical simulations show that mixing with experimentally observed statistics enhances synchronization of coupled phase oscillators, suggesting that mixing in the tailbud is fast enough to affect the coherence of rhythmic gene expression. Our approach will find general application in analyzing the relative movements of communicating cells during development and disease.
    Authors: Uriu, Koichiro (Kanazawa University, Japan); Bhavna, Rajasekaran (Max Planck Institute of Molecular Cell Biology and Genetics, Germany; Max Planck Institute for the Physics of Complex Systems, Germany); Oates, Andrew C. (Francis Crick Institute, United Kingdom; University College London, United Kingdom); Morelli, Luis Guillermo (Instituto de Investigación en Biomedicina de Buenos Aires, CONICET, Argentina; Max Planck Institute for Molecular Physiology, Germany; Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Física, Argentina)
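    The mixing-enhances-synchronization idea can be illustrated with a toy ring of Kuramoto-style phase oscillators in which random position swaps stand in for cell mixing. This is a drastic simplification of the paper's physical model; all names and parameters are illustrative.

```python
import numpy as np

def simulate(theta0, steps=200, dt=0.05, coupling=0.1, mix=False, seed=0):
    """Identical phase oscillators coupled to nearest neighbours on a ring.
    With mix=True, a random pair of oscillators swaps positions each step,
    mimicking cell mixing. Returns the Kuramoto order parameter
    r = |<exp(i*theta)>| (r = 1 means full synchrony)."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    n = theta.size
    for _ in range(steps):
        left = np.roll(theta, 1)
        right = np.roll(theta, -1)
        theta = theta + dt * coupling * (np.sin(left - theta)
                                         + np.sin(right - theta))
        if mix:
            i, j = rng.integers(0, n, 2)  # i == j is a harmless no-op swap
            theta[i], theta[j] = theta[j], theta[i]
    return abs(np.exp(1j * theta).mean())
```

    A fully synchronized population is a fixed point (all phase differences vanish and swaps have no effect), while a disordered initial condition yields an order parameter between 0 and 1.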

    Colour, texture, and motion in level set based segmentation and tracking

    This paper introduces an approach for the extraction and combination of different cues in a level set based image segmentation framework. Apart from the image grey value or colour, we suggest adding its spatial and temporal variations, which may provide important further characteristics. It often turns out that the combination of colour, texture, and motion makes it possible to distinguish object regions that cannot be separated by any one cue alone. We propose a two-step approach. In the first stage, the input features are extracted and enhanced by applying coupled nonlinear diffusion. This ensures coherence between the channels and deals with outliers. We use a nonlinear diffusion technique closely related to total variation flow, but strictly edge-enhancing. The resulting features are then employed for a vector-valued front propagation based on level sets and statistical region models that approximate the distributions of each feature. The application of this approach to two-phase segmentation is followed by an extension to the tracking of multiple objects in image sequences.
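    The statistical region competition underlying the two-phase case can be caricatured without level sets: assign each pixel to the region whose mean feature vector is nearer, then re-estimate the means — essentially two-means clustering on the stacked feature channels. Curvature regularisation and the diffusion preprocessing are omitted, and the function name and initialisation are assumptions.

```python
import numpy as np

def two_phase_segment(features, iters=20):
    """Two-phase region competition on vector-valued features (h, w, d):
    each pixel joins the region with the nearer mean feature vector, and the
    means are re-estimated until stable. A level-set-free caricature of
    statistical region models; assumes both regions stay non-empty."""
    h, w, d = features.shape
    flat = features.reshape(-1, d)
    # Crude initialisation: threshold the first channel at its mean.
    labels = (flat[:, 0] > flat[:, 0].mean()).astype(int)
    for _ in range(iters):
        m0 = flat[labels == 0].mean(axis=0)
        m1 = flat[labels == 1].mean(axis=0)
        d0 = ((flat - m0) ** 2).sum(axis=1)
        d1 = ((flat - m1) ** 2).sum(axis=1)
        labels = (d1 < d0).astype(int)
    return labels.reshape(h, w)
```

    On a two-channel image whose left and right halves carry distinct feature values, the competition converges to the half-and-half partition in one iteration.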