
    Interactive Segmentation of 3D Medical Images with Implicit Surfaces

    To cope with a variety of clinical applications, research in medical image processing has led to a large spectrum of segmentation techniques that extract anatomical structures from volumetric data acquired with 3D imaging modalities. Despite continuing advances in mathematical models for automatic segmentation, many medical practitioners still rely on 2D manual delineation, due to the lack of intuitive semi-automatic tools in 3D. In this thesis, we propose a methodology and associated numerical schemes enabling the development of 3D image segmentation tools that are reliable, fast and interactive. These properties are key factors for clinical acceptance. Our approach derives from the framework of variational methods: segmentation is obtained by solving an optimization problem that translates the expected properties of target objects into mathematical terms. Such variational methods involve three essential components that constitute our main research axes: an objective criterion, a shape representation and an optional set of constraints. As objective criterion, we propose a unified formulation that extends existing homogeneity measures in order to model the spatial variations of statistical properties that are frequently encountered in medical images, without compromising efficiency. Within this formulation, we explore several shape representations based on implicit surfaces with the objective of covering a broad range of typical anatomical structures. Firstly, to model tubular shapes in vascular imaging, we introduce convolution surfaces in the variational context of image segmentation. Secondly, compact shapes such as lesions are described with a new representation that generalizes Radial Basis Functions with non-Euclidean distances, which enables the design of basis functions that naturally align with salient image features. Finally, we estimate geometric non-rigid deformations of prior templates to recover structures that have a predictable shape, such as whole organs. Interactivity is ensured by restricting admissible solutions with additional constraints. Translating user input into constraints on the sign of the implicit representation at prescribed points in the image leads us to consider inequality-constrained optimization problems.
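
    As a rough illustration of the last point only, the sketch below fits a radial-basis implicit function by minimizing a piecewise-constant homogeneity energy while treating user-placed inside/outside seeds as inequality constraints on the sign of the implicit function. The Gaussian basis, the Chan-Vese-like energy and the margin eps are assumptions made for this example, not the thesis' actual convolution-surface or non-Euclidean RBF schemes.

        # Illustrative sketch, not the thesis' method: an implicit shape
        # phi(x) = sum_i w_i * k(||x - c_i||) is fitted by minimizing a
        # piecewise-constant homogeneity energy, while user clicks become
        # inequality constraints on the sign of phi at the clicked points.
        import numpy as np
        from scipy.optimize import minimize

        def rbf_matrix(points, centers, sigma=5.0):
            # Gaussian radial basis functions (an illustrative choice).
            d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma ** 2))

        def energy(w, K, intensities, mu_in, mu_out):
            # Soft region membership derived from the implicit function's sign.
            phi = K @ w
            mask = 1.0 / (1.0 + np.exp(-phi))
            return np.sum(mask * (intensities - mu_in) ** 2
                          + (1.0 - mask) * (intensities - mu_out) ** 2)

        def segment(points, intensities, centers, inside_pts, outside_pts, eps=0.1):
            K = rbf_matrix(points, centers)
            K_in = rbf_matrix(inside_pts, centers)     # require phi(p) >= eps at inside seeds
            K_out = rbf_matrix(outside_pts, centers)   # require phi(p) <= -eps at outside seeds
            constraints = [
                {'type': 'ineq', 'fun': lambda w: K_in @ w - eps},
                {'type': 'ineq', 'fun': lambda w: -(K_out @ w) - eps},
            ]
            res = minimize(energy, np.zeros(len(centers)),
                           args=(K, intensities, intensities.max(), intensities.min()),
                           constraints=constraints, method='SLSQP')
            return (K @ res.x) > 0    # binary segmentation of the sampled points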

    Image Registration Workshop Proceedings

    Automatic image registration has often been considered a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data that are being, and will continue to be, generated by newly developed sensors, automatic image registration has itself become an important research topic. This workshop presents a collection of very high-quality work grouped into four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.

    Diffusion-based spatial priors for imaging.

    We describe a Bayesian scheme to analyze images, which uses spatial priors encoded by a diffusion kernel based on a weighted graph Laplacian. This provides a general framework for formulating a spatial model whose parameters can be optimised. The standard practice using the software statistical parametric mapping (SPM) is to smooth imaging data using a fixed Gaussian kernel as a pre-processing step before applying a mass-univariate statistical model (e.g., a general linear model) to provide images of parameter estimates (Friston et al., 2006). This entails the strong assumption that data are generated smoothly throughout the brain. An alternative is to include smoothness in a multivariate statistical model (Penny et al., 2005). The advantage of the latter is that each parameter field is smoothed automatically, according to a measure of uncertainty, given the data. Explicit spatial priors enable formal model comparison of different prior assumptions, e.g. that data are generated from a stationary (i.e. fixed throughout the brain) or non-stationary spatial process. We describe the motivation, background material and theory used to formulate diffusion-based spatial priors for fMRI data and apply them to three different datasets, which include standard and high-resolution data. We compare mass-univariate ordinary least squares estimates of smoothed data with three Bayesian models of non-smoothed data: spatially independent, stationary and non-stationary spatial models. The last of these can be used to preserve boundaries between functionally selective regional responses of the brain, thereby increasing the spatial detail of inferences about cortical responses to experimental input.
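
    To make the construction concrete, a minimal sketch follows: it builds a weighted graph Laplacian over a small 2D voxel grid, with edge weights that decay across intensity boundaries, and exponentiates it to obtain a diffusion kernel that can serve as the prior covariance of a parameter field. The 4-connected neighbourhood, the intensity-based weighting and the diffusion time t are assumptions for illustration, not the implementation described in the abstract.

        # Illustrative sketch: a diffusion-kernel spatial prior from a weighted
        # graph Laplacian on a small 2D voxel grid. Uniform edge weights would
        # give a stationary prior; intensity-dependent weights make it
        # non-stationary, so smoothing is weaker across sharp boundaries.
        import numpy as np
        from scipy.sparse import lil_matrix, csr_matrix
        from scipy.sparse.csgraph import laplacian
        from scipy.linalg import expm

        def weighted_adjacency(image, beta=1.0):
            # 4-connected grid; weights shrink with squared intensity difference.
            h, w = image.shape
            A = lil_matrix((h * w, h * w))
            idx = lambda i, j: i * w + j
            for i in range(h):
                for j in range(w):
                    for di, dj in ((0, 1), (1, 0)):
                        ii, jj = i + di, j + dj
                        if ii < h and jj < w:
                            wt = np.exp(-beta * (image[i, j] - image[ii, jj]) ** 2)
                            A[idx(i, j), idx(ii, jj)] = wt
                            A[idx(ii, jj), idx(i, j)] = wt
            return csr_matrix(A)

        def diffusion_prior_covariance(image, t=2.0):
            # Weighted graph Laplacian and its matrix exponential exp(-tL);
            # dense expm is fine only for small grids, as used here.
            L = laplacian(weighted_adjacency(image), normed=True)
            return expm(-t * L.toarray())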

    Proceedings, MSVSCC 2015

    The Virginia Modeling, Analysis and Simulation Center (VMASC) of Old Dominion University hosted the 2015 Modeling, Simulation & Visualization Student Capstone Conference on April 16th. The Capstone Conference features students in Modeling and Simulation undergraduate and graduate degree programs and related fields from many colleges and universities. Students present their research to an audience of fellow students, faculty, judges, and other distinguished guests. For the students, these presentations afford them the opportunity to impart their innovative research to members of the M&S community from academic, industry, and government backgrounds. Also participating in the conference are faculty and judges who have volunteered their time to provide direct support to their students’ research, facilitate the various conference tracks, serve as judges for each of the tracks, and provide overall assistance to the conference. 2015 marks the ninth year of the VMASC Capstone Conference for Modeling, Simulation and Visualization. This year our conference attracted a number of fine student-written papers and presentations, resulting in a total of 51 research works that were presented. This year’s conference had record attendance thanks to the support from the various departments at Old Dominion University, other local universities, and the United States Military Academy at West Point. We greatly appreciate all of the work and energy that went into this year’s conference; it truly was a highly collaborative effort that resulted in a very successful symposium for the M&S community and all of those involved. Below you will find a brief summary of the best papers and best presentations, with some simple statistics on the overall conference contributions. That is followed by a table of contents, broken down by conference track category, with a copy of each included body of work. Thank you again for your time and your contribution, as this conference is designed to continuously evolve and adapt to better suit its authors and M&S supporters. Dr. Yuzhong Shen, Graduate Program Director, MSVE, Capstone Conference Chair; John Shull, Graduate Student, MSVE, Capstone Conference Student Chair

    ISCR Annual Report: Fiscal Year 2004


    Pattern Recognition

    A wealth of advanced pattern recognition algorithms is emerging from the interdisciplinary area between effective visual-feature technologies and the human-brain cognition process. Effective visual features are made possible by rapid developments in appropriate sensor equipment, novel filter designs, and viable information processing architectures, while understanding of the human-brain cognition process broadens the ways in which computers can perform pattern recognition tasks. The present book is intended to collect representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology and applications of pattern recognition.

    Methods for automated analysis of macular OCT data

    Optical coherence tomography (OCT) is fast becoming one of the most important modalities for imaging the eye. It provides high resolution, cross-sectional images of the retina in three dimensions, distinctly showing its many layers. These layers are critical for normal eye function, and vision loss may occur when they are altered by disease. Specifically, the thickness of individual layers can change over time, thereby making the ability to accurately measure these thicknesses an important part of learning about how different diseases affect the eye. Since manual segmentation of the layers in OCT data is time consuming and tedious, automated methods are necessary to extract layer thicknesses. While a standard set of tools exists on the scanners to automatically segment the retina, the output is often limited, providing measurements restricted to only a few layers. Analysis of longitudinal data is also limited, with scans from the same subject often processed independently and registered using only a single landmark at the fovea. Quantification of other changes in the retina, including the accumulation of fluid, is also generally unavailable using the built-in software. In this thesis, we present four contributions for automatically processing OCT data, specifically for data acquired from the macular region of the retina. First, we present a layer segmentation algorithm to robustly segment the eight visible layers of the retina. Our approach combines the use of a random forest (RF) classifier, which produces boundary probabilities, with a boundary refinement algorithm to find surfaces maximizing the RF probabilities. Second, we present a pair of methods for processing longitudinal data from individual subjects: one combining registration and motion correction, and one for simultaneously segmenting the layers across all scans. Third, we develop a method for segmentation of microcystic macular edema, which appears as small, fluid-filled, cystoid spaces within the retina. Our approach again uses an RF classifier to produce a robust segmentation. Finally, we present the development of macular flatspace (MFS), a computational domain used to put data from different subjects in a common coordinate system where each layer appears flat, thereby simplifying any automated processing. We present two applications of MFS: inhomogeneity correction to normalize the intensities within each layer, and layer segmentation by adapting and simplifying a graph formulation used previously.
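
    A minimal sketch of the general idea behind the first contribution follows: a random forest produces per-pixel boundary probabilities for one retinal surface, and a column-wise dynamic program extracts the surface that maximizes total probability while limiting how far the boundary may move between neighbouring A-scans. The feature set, the number of trees and the smoothness limit max_jump are assumptions for the example, not the published pipeline.

        # Illustrative sketch, not the published method: RF boundary
        # probabilities plus a dynamic-programming surface refinement.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def boundary_probabilities(features, labels, features_test):
            # features: (n_pixels, n_features); labels: 1 at annotated boundary
            # pixels, 0 elsewhere. Returns the boundary probability per pixel;
            # reshape to the B-scan grid (rows, cols) before refinement.
            rf = RandomForestClassifier(n_estimators=100, random_state=0)
            rf.fit(features, labels)
            return rf.predict_proba(features_test)[:, 1]

        def refine_surface(prob, max_jump=2):
            # prob: (rows, cols) boundary probability map for one surface.
            rows, cols = prob.shape
            score = np.full((rows, cols), -np.inf)
            back = np.zeros((rows, cols), dtype=int)
            score[:, 0] = np.log(prob[:, 0] + 1e-9)
            for c in range(1, cols):
                for r in range(rows):
                    lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
                    prev = score[lo:hi, c - 1]
                    best = int(np.argmax(prev))
                    score[r, c] = prev[best] + np.log(prob[r, c] + 1e-9)
                    back[r, c] = lo + best
            # Trace back the maximizing surface: one row index per column.
            surface = np.zeros(cols, dtype=int)
            surface[-1] = int(np.argmax(score[:, -1]))
            for c in range(cols - 1, 0, -1):
                surface[c - 1] = back[surface[c], c]
            return surface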

    Recent Advances in Signal Processing

    Signal processing is a critical issue in the majority of new technological inventions and challenges, across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five different areas depending on the application at hand. These five categories address, in order, image processing, speech processing, communication systems, time-series analysis, and educational packages. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.