16 research outputs found

    Landmark-Free Statistical Shape Modeling Via Neural Flow Deformations

    Full text link
    Statistical shape modeling aims to capture the shape variations of an anatomical structure that occur within a given population. Shape models are employed in many tasks, such as shape reconstruction and image segmentation, as well as shape generation and classification. Existing shape priors either require dense correspondence between training examples or lack robustness and topological guarantees. We present FlowSSM, a novel shape modeling approach that learns shape variability without requiring dense correspondence between training instances. It relies on a hierarchy of continuous deformation flows, which are parametrized by a neural network. Our model outperforms state-of-the-art methods in providing an expressive and robust shape prior for the distal femur and the liver. We show that the emerging latent representation is discriminative by separating healthy from pathological shapes. Finally, we demonstrate its effectiveness on two shape reconstruction tasks from partial data. Our source code is publicly available (https://github.com/davecasp/flowssm).
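
    A minimal sketch of the core idea, a neural velocity field integrated over time to deform a template shape, is given below; the latent code, layer sizes, and Euler integration scheme are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the FlowSSM code): an MLP velocity field
# v(x, t, z) is integrated over time to deform template points, with a
# per-shape latent code z controlling the deformation.
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 1 + latent_dim, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x, t, z):
        # x: (N, 3) points, t: scalar time in [0, 1], z: (latent_dim,) shape code
        t_col = torch.full((x.shape[0], 1), float(t), device=x.device)
        z_rep = z.unsqueeze(0).expand(x.shape[0], -1)
        return self.net(torch.cat([x, t_col, z_rep], dim=-1))

def deform(points, z, field, steps=8):
    """Forward-Euler integration of the velocity field; the underlying
    continuous flow is smooth and invertible, which is what a flow-based
    deformation relies on for its topological guarantees."""
    x = points
    dt = 1.0 / steps
    for i in range(steps):
        x = x + dt * field(x, i * dt, z)
    return x

# Usage: deform a template's vertices with a (hypothetical) per-shape latent code.
field = VelocityField()
template_verts = torch.rand(1000, 3)
z = torch.zeros(64)
deformed_verts = deform(template_verts, z, field)
```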

    ANIMATING A NOVEL MECHANISM OF CELL MIGRATION: SIGNAL TRANSDUCTION EXCITABLE NETWORK (STEN)

    Get PDF
    Life is dynamic. Cells are constantly changing shape. Many do so by displaying a variety of protrusions that not only vary their appearance but also play a key role in important cellular activities such as cell migration, division, and phagocytosis. These protrusions manifest in unique shapes and sizes, ranging from finger-like filopodia to sheet-like lamellipodia. It is well known that these protrusions are driven outward from the cell body by a combination of actin polymerization and actomyosin-based contraction, referred to as “cytoskeletal activity”. However, what determines the shape, and hence the identity, of the protrusions remained a mystery until recently. In recent years, a research team in the Johns Hopkins University Department of Cell Biology discovered a novel mechanism: the Signal Transduction Excitable Network (STEN). They found that STEN, a signaling network consisting of receptors, small GTPase proteins, and phosphoinositide lipids, determines the locations and lateral dimensions of cellular protrusions. Without STEN, cytoskeletal activity produces only transient, small extensions, or “puncta”, which are ineffective in moving or reshaping cells. Manipulating the signaling network alters the cytoskeletal system and reshapes the cell: increasing or decreasing signal transduction activity respectively raises or lowers the speed and range of wave propagation, converting pseudopodia into wider lamellipodia or narrower filopodia. This finding provides a direction for future biomedical research, as it shows that STEN plays a critical role in cell migration and morphology and that dysregulation of this system can lead to a variety of diseases, including cancer and developmental and metabolic abnormalities. However, the mechanism of STEN is difficult to explain succinctly due to its three-dimensional, dynamic nature. Current teaching materials are limited to simple line diagrams and crude confocal microscopy videos and photographs, none of which allow an in-depth understanding of this intricate process. To address this challenge, I propose a narrative 3D animation that helps learners visualize and comprehend this novel mechanism. To maximize didactic efficacy, 2D images will be created to supplement the animation and designed to be usable independently of the animation if desired.

    Real-time Deformation with Coupled Cages and Skeletons

    Get PDF
    Real-time character deformation is an essential topic in Computer Animation. Deformations can be achieved with several techniques, but skeleton-based ones are the most popular. Skeletons allow artists to deform articulated parts of digital characters by moving their bones. Other techniques, such as cage-based ones, are gaining popularity but struggle to be adopted in animation workflows because they require substantial changes to the animation pipeline. This thesis formalizes a technique that allows animators to embed cage-based deformations in standard skeleton-based pipelines. The described skeleton/cage hybrid allows artists to enrich the expressive power of skeletons with the degrees of freedom offered by cages. Furthermore, this thesis describes two Graphical User Interfaces dedicated to deformations and animations. The first one, CageLab, allows artists to define cage-based deformations and perform cage editing. The second one, SuperCages GUI, allows artists to author animations and deformations using the skeleton/cage hybrid described earlier.
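
    As a rough illustration of coupling the two deformation models (an assumption about the general approach, not the thesis' SuperCages formulation), the sketch below applies linear blend skinning first and then adds a cage-driven displacement through precomputed generalized barycentric coordinates.

```python
# Hypothetical sketch of a skeleton/cage hybrid: the skeleton drives the
# coarse pose via linear blend skinning (LBS), and a cage adds free-form
# detail through barycentric coordinates computed offline (e.g. mean value
# coordinates). Names and the coupling order are illustrative assumptions.
import numpy as np

def linear_blend_skinning(rest_verts, bone_transforms, skin_weights):
    """rest_verts: (V, 3); bone_transforms: (B, 4, 4); skin_weights: (V, B)."""
    hom = np.hstack([rest_verts, np.ones((rest_verts.shape[0], 1))])  # (V, 4)
    # Transform every vertex by every bone, keep the xyz part: (B, V, 3).
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, hom)[..., :3]
    # Blend the per-bone results with the skinning weights.
    return np.einsum('vb,bvi->vi', skin_weights, per_bone)

def cage_deform(verts, cage_coords, cage_rest, cage_posed):
    """cage_coords: (V, C) barycentric weights; cage_*: (C, 3) cage vertices."""
    return verts + cage_coords @ (cage_posed - cage_rest)

def hybrid_deform(rest_verts, bone_transforms, skin_weights,
                  cage_coords, cage_rest, cage_posed):
    skinned = linear_blend_skinning(rest_verts, bone_transforms, skin_weights)
    return cage_deform(skinned, cage_coords, cage_rest, cage_posed)
```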

    Learning to Interpret Fluid Type Phenomena via Images

    Get PDF
    Learning to interpret fluid-type phenomena via images is a long-standing and challenging problem in computer vision. The problem becomes even more challenging when the fluid medium is highly dynamic and refractive due to its transparent nature. Here, we consider imaging through such refractive fluid media as water and air. For water, we design novel supervised learning-based algorithms to recover its 3D surface as well as the highly distorted underlying patterns. For air, we design a state-of-the-art unsupervised learning algorithm to predict the distortion-free image given a short sequence of turbulent images. Specifically, we design a deep neural network that estimates the depth and normal maps of a fluid surface by analyzing the refractive distortion of a reference background pattern. To recover underwater images that are severely degraded by the refractive distortions caused by water surface fluctuations, we present the distortion-guided network (DG-Net) for restoring distortion-free underwater images. The key idea is to use a distortion map, which models the pixel displacement caused by water refraction, to guide network training. Furthermore, we present a novel unsupervised network to recover the latent distortion-free image. The key idea is to model non-rigid distortions as deformable grids. Our network consists of a grid deformer that estimates the distortion field and an image generator that outputs the distortion-free image. By leveraging the positional encoding operator, we can simplify the network structure while maintaining fine spatial details in the recovered images. We also develop a combined deep neural network that can simultaneously recover the latent distortion-free image and reconstruct the 3D geometry of the transparent, dynamic fluid surface. Through extensive experiments on simulated and real captured fluid images, we demonstrate that our proposed deep neural networks outperform the current state of the art on the respective tasks.
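
    A minimal, illustrative sketch of the deformable-grid idea is shown below; the layer sizes and component names are assumptions, not the proposed architecture. A small CNN predicts per-pixel offsets and the distorted frame is resampled through the resulting grid.

```python
# Hypothetical sketch of modeling non-rigid distortion as a deformable grid:
# a CNN predicts a per-pixel (dx, dy) field, and the image is resampled
# through an identity grid plus those offsets with grid_sample.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GridDeformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # per-pixel (dx, dy) offsets
        )

    def forward(self, frame):
        return self.net(frame)  # (B, 2, H, W)

def warp(image, offsets):
    """Resample `image` through an identity grid plus the predicted offsets
    (offsets expressed in normalized [-1, 1] grid coordinates)."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing='ij')
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    grid = base + offsets.permute(0, 2, 3, 1)  # (B, H, W, 2)
    return F.grid_sample(image, grid, align_corners=True)

deformer = GridDeformer()
frame = torch.rand(1, 3, 64, 64)           # one distorted fluid frame
restored = warp(frame, deformer(frame))    # distortion-corrected estimate
```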

    A Survey on Deep Learning in Medical Image Registration: New Technologies, Uncertainty, Evaluation Metrics, and Beyond

    Full text link
    Over the past decade, deep learning technologies have greatly advanced the field of medical image registration. The initial developments, such as ResNet-based and U-Net-based networks, laid the groundwork for deep learning-driven image registration. Subsequent progress has been made in various aspects of deep learning-based registration, including similarity measures, deformation regularization, and uncertainty estimation. These advancements have not only enriched the field of deformable image registration but have also facilitated its application in a wide range of tasks, including atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D registration. In this paper, we present a comprehensive overview of the most recent advancements in deep learning-based image registration. We begin with a concise introduction to the core concepts of deep learning-based image registration. Then, we delve into innovative network architectures, loss functions specific to registration, and methods for estimating registration uncertainty. Additionally, this paper explores appropriate evaluation metrics for assessing the performance of deep learning models in registration tasks. Finally, we highlight the practical applications of these novel techniques in medical imaging and discuss the future prospects of deep learning-based image registration.
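
    The core concept the survey reviews, a network-predicted deformation field trained with a similarity measure plus a deformation regularizer, is sketched generically below; the loss terms, weighting, and 2D setup are illustrative assumptions rather than any specific method from the survey.

```python
# Generic sketch of unsupervised deformable registration: a displacement
# field warps the moving image via a spatial transformer, and the loss is
# image similarity (MSE here) plus a smoothness (diffusion) regularizer.
import torch
import torch.nn.functional as F

def spatial_transform(moving, flow):
    """Warp `moving` (B, 1, H, W) by a displacement field `flow` (B, 2, H, W)
    expressed in normalized [-1, 1] coordinates."""
    b, _, h, w = moving.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing='ij')
    identity = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    grid = identity + flow.permute(0, 2, 3, 1)
    return F.grid_sample(moving, grid, align_corners=True)

def registration_loss(fixed, moving, flow, lam=0.01):
    warped = spatial_transform(moving, flow)
    similarity = F.mse_loss(warped, fixed)             # image similarity term
    dx = flow[:, :, :, 1:] - flow[:, :, :, :-1]        # spatial gradients of
    dy = flow[:, :, 1:, :] - flow[:, :, :-1, :]        # the displacement field
    smoothness = (dx ** 2).mean() + (dy ** 2).mean()   # diffusion regularizer
    return similarity + lam * smoothness
```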

    GPU-based volume deformation.

    Get PDF