
    SEGMENT3D: A Web-based Application for Collaborative Segmentation of 3D images used in the Shoot Apical Meristem

    The quantitative analysis of 3D confocal microscopy images of the shoot apical meristem helps in understanding the growth process of some plants. Cell segmentation in these images is crucial for computational plant analysis, and many automated methods have been proposed. However, variations in signal intensity across the image reduce the effectiveness of those approaches, and they offer no easy way for user correction. We propose a web-based collaborative 3D image segmentation application, SEGMENT3D, that builds on automatic segmentation results. The image is divided into 3D tiles that can be either segmented interactively from scratch or corrected from a pre-existing segmentation. The individual segmentation results per tile are then automatically merged via consensus analysis and stitched to complete the segmentation of the entire image stack. SEGMENT3D is a comprehensive application that can be applied to other 3D imaging modalities and general objects. It also provides an easy way to create supervised data for advancing segmentation with machine learning models.
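    The tile-and-merge workflow described in the abstract can be illustrated with a minimal sketch: splitting a 3D volume into tiles and combining several per-tile segmentations by a voxel-wise majority vote. The helper names, the non-overlapping tiling, and the simple vote are assumptions for illustration only, not SEGMENT3D's actual implementation of consensus analysis and stitching.

```python
import numpy as np

def split_into_tiles(volume, tile_shape):
    """Split a 3D volume into non-overlapping tiles (zero-padded at the borders).

    Hypothetical helper illustrating the tile-based workflow; SEGMENT3D's
    actual tiling scheme may differ (e.g. overlapping tiles).
    """
    padded_shape = [int(np.ceil(s / t)) * t for s, t in zip(volume.shape, tile_shape)]
    padded = np.zeros(padded_shape, dtype=volume.dtype)
    padded[:volume.shape[0], :volume.shape[1], :volume.shape[2]] = volume
    tiles = {}
    for z in range(0, padded_shape[0], tile_shape[0]):
        for y in range(0, padded_shape[1], tile_shape[1]):
            for x in range(0, padded_shape[2], tile_shape[2]):
                tiles[(z, y, x)] = padded[z:z + tile_shape[0],
                                          y:y + tile_shape[1],
                                          x:x + tile_shape[2]]
    return tiles, padded_shape

def consensus_label(tile_segmentations):
    """Voxel-wise majority vote over several segmentations of one tile.

    Labels are assumed to be non-negative integers; ties go to the
    smallest label.
    """
    stack = np.stack(tile_segmentations, axis=0)     # (n_annotators, z, y, x)
    flat = stack.reshape(stack.shape[0], -1)
    consensus = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, flat)
    return consensus.reshape(stack.shape[1:])
```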

    Computational Imaging for Shape Understanding

    Geometry is an essential property of real-world scenes. Understanding the shape of objects is critical to many computer vision applications. In this dissertation, we explore computational imaging approaches to recover the geometry of real-world scenes. Computational imaging is an emerging technique that uses the co-design of imaging hardware and computational software to expand the capabilities of traditional cameras. To tackle face recognition in uncontrolled environments, we study 2D color images and 3D shape to deal with body movement and self-occlusion. In particular, we use multiple RGB-D cameras to fuse varying poses and register the frontal face in a unified coordinate system. Deep color features and geodesic distance features are then used to perform face recognition. To handle underwater imaging applications, we study the angular-spatial encoding and polarization-state encoding of light rays using computational imaging devices. Specifically, we use the light field camera to tackle the challenging problem of underwater 3D reconstruction. We leverage the angular sampling of the light field for robust depth estimation, and we develop a fast ray-marching algorithm to improve efficiency. To deal with arbitrary reflectance, we investigate polarimetric imaging and develop polarimetric Helmholtz stereopsis, which uses reciprocal polarimetric image pairs for high-fidelity 3D surface reconstruction. We formulate new reciprocity and diffuse/specular polarimetric constraints to recover surface depths and normals using an optimization framework. To recover 3D shape under unknown and uncontrolled natural illumination, we use two circularly polarized spotlights to boost the polarization cues corrupted by the environment lighting, as well as to provide photometric cues. To mitigate the effect of uncontrolled environment light in the photometric constraints, we estimate a lighting proxy map and iteratively refine the normal and lighting estimation. Through extensive experiments on simulated and real images, we demonstrate that our proposed computational imaging methods outperform traditional imaging approaches.
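    For background on the photometric cues mentioned above, the sketch below shows the textbook Lambertian photometric stereo baseline: per-pixel normals and albedo recovered by least squares from images taken under known lighting directions. This is only a standard baseline for comparison, not the polarimetric or lighting-proxy methods developed in the dissertation; the function name and the unit-light-direction assumption are illustrative.

```python
import numpy as np

def lambertian_photometric_stereo(intensities, light_dirs):
    """Classic least-squares photometric stereo under the Lambertian model.

    intensities: (k, h, w) array, one grayscale image per light.
    light_dirs:  (k, 3) array of unit lighting directions.
    Returns per-pixel unit normals (h, w, 3) and albedo (h, w).
    """
    k, h, w = intensities.shape
    I = intensities.reshape(k, -1)                       # (k, h*w)
    # Solve L @ g = I for g = albedo * normal at every pixel.
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, h*w)
    albedo = np.linalg.norm(g, axis=0)
    normals = np.where(albedo > 1e-8, g / albedo, 0.0)
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```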

    Contrastive Language-Image Pretrained Models are Zero-Shot Human Scanpath Predictors

    Understanding the mechanisms underlying human attention is a fundamental challenge for both vision science and artificial intelligence. While numerous computational models of free-viewing have been proposed, less is known about the mechanisms underlying task-driven image exploration. To address this gap, we present CapMIT1003, a database of captions and click-contingent image explorations collected during captioning tasks. CapMIT1003 is based on the same stimuli as the well-known MIT1003 benchmark, for which eye-tracking data under free-viewing conditions is available, offering a promising opportunity to concurrently study human attention under both tasks. We make this dataset publicly available to facilitate future research in this field. In addition, we introduce NevaClip, a novel zero-shot method for predicting visual scanpaths that combines contrastive language-image pretrained (CLIP) models with biologically inspired neural visual attention (NeVA) algorithms. NevaClip simulates human scanpaths by aligning the representation of the foveated visual stimulus with the representation of the associated caption, employing gradient-driven visual exploration to generate scanpaths. Our experimental results demonstrate that NevaClip outperforms existing unsupervised computational models of human visual attention in terms of scanpath plausibility, for both captioning and free-viewing tasks. Furthermore, we show that conditioning NevaClip with incorrect or misleading captions leads to random behavior, highlighting the significant impact of caption guidance on the decision-making process. These findings contribute to a better understanding of the mechanisms that guide human attention and pave the way for more sophisticated computational approaches to scanpath prediction that can integrate direct top-down guidance from downstream tasks.
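    The core alignment idea can be sketched with an off-the-shelf CLIP model: score how well image regions around candidate fixation points match the caption and pick the best one. This greedy, crop-based search is a simplification for illustration; the actual NevaClip method uses a differentiable foveation mechanism and gradient-driven exploration rather than discrete candidate crops, and the checkpoint and helper name below are assumptions.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def next_fixation(image: Image.Image, caption: str, candidates, crop_size=224):
    """Pick the candidate fixation whose surrounding crop best matches the caption.

    `candidates` is a list of (x, y) pixel coordinates. Illustrative only:
    NevaClip itself optimizes the fixation via gradients through a foveation model.
    """
    crops = []
    for x, y in candidates:
        left = min(max(0, x - crop_size // 2), max(0, image.width - crop_size))
        top = min(max(0, y - crop_size // 2), max(0, image.height - crop_size))
        crops.append(image.crop((left, top, left + crop_size, top + crop_size)))
    inputs = processor(text=[caption], images=crops, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # out.logits_per_image has shape (n_crops, 1): image-caption similarity scores.
    best = out.logits_per_image.squeeze(-1).argmax().item()
    return candidates[best]
```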

    Taking aim at moving targets in computational cell migration

    Cell migration is central to the development and maintenance of multicellular organisms. A fundamental understanding of cell migration can, for example, direct novel therapeutic strategies to control invasive tumor cells. However, the study of cell migration yields an overabundance of experimental data that require demanding processing and analysis to extract results. Computational methods and tools have therefore become essential in the quantification and modeling of cell migration data. We review computational approaches for the key tasks in the quantification of in vitro cell migration: image pre-processing, motion estimation, and feature extraction. Moreover, we summarize the current state of the art for in silico modeling of cell migration. Finally, we provide a list of available software tools for cell migration to assist researchers in choosing the most appropriate solution for their needs.
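    A minimal sketch of the feature-extraction step discussed in the review: once motion estimation has produced cell trajectories, simple per-track descriptors can be computed directly from the coordinates. The particular feature set, the 2D track format, and the frame interval `dt` are assumptions for illustration; the review covers a much broader range of features and tools.

```python
import numpy as np

def migration_features(track, dt=1.0):
    """Basic migration features from a (t, 2) array of x/y positions over time.

    dt is the assumed acquisition interval between frames.
    """
    steps = np.diff(track, axis=0)                 # per-frame displacement vectors
    step_lengths = np.linalg.norm(steps, axis=1)
    path_length = step_lengths.sum()
    net_displacement = np.linalg.norm(track[-1] - track[0])
    return {
        "path_length": path_length,
        "net_displacement": net_displacement,
        "mean_speed": step_lengths.mean() / dt,
        # Directionality ratio: 1 for a perfectly straight track, ~0 for random motion.
        "directionality_ratio": net_displacement / path_length if path_length > 0 else 0.0,
    }
```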