
    Estimation of edges in magnetic resonance images


    An analysis of surface area estimates of binary volumes under three tilings

    Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997. Includes bibliographical references (leaves 77-79). By Erik G. Miller.

    Some discrete approximations to a variational method for image segmentation

    Cover title. Includes bibliographical references (p. 12-13). Research supported by the U.S. Army Research Office (DAAL03-86-K-0171), the Air Force Office of Scientific Research (AFOSR 89-0276), and the Department of the Navy under an Air Force contract (F19628-90-C-0002). S.R. Kulkarni and S.K. Mitter.

    Digital 3D documentation of cultural heritage sites based on terrestrial laser scanning


    Piecewise smooth reconstruction of normal vector field on digital data

    We propose a novel method to regularize a normal vector field defined on a digital surface (the boundary of a set of voxels). When the digital surface is a digitization of a piecewise smooth manifold, our method localizes sharp features (edges) while simultaneously regularizing the input normal vector field. It relies on the optimisation of a variant of the Ambrosio-Tortorelli functional, originally defined for denoising and contour extraction in image processing [AT90]. We adapt this functional to digital surface processing by means of discrete calculus operators. Experiments show that the output normal field is very robust to digitization artifacts and noise, and fairly independent of the sampling resolution. The method allows the user to choose independently the amount of smoothing and the length of the set of discontinuities. Sharp and vanishing features are correctly delineated even on severely damaged data. Finally, our method can be used to considerably enhance the output of state-of-the-art normal field estimators such as the Voronoi Covariance Measure [MOG11] or the Randomized Hough Transform [BM12].
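    For context, a minimal sketch of the classical Ambrosio-Tortorelli functional that the method builds on is given below; the symbols u (regularized normal field), g (input normals), v (scalar discontinuity indicator), and the weights alpha, lambda, epsilon are generic notation assumed here, and the discrete-calculus variant actually optimized on the digital surface may differ.

```latex
% Classical Ambrosio-Tortorelli functional (sketch).
% u: regularized field, g: input normal field, v: edge indicator in [0,1],
% alpha / lambda / epsilon: fidelity, discontinuity-length and relaxation weights.
AT_\varepsilon(u, v) = \int_M \alpha \,\lVert u - g \rVert^2
  + v^2 \,\lVert \nabla u \rVert^2
  + \lambda \varepsilon \,\lVert \nabla v \rVert^2
  + \frac{\lambda}{4\varepsilon} (1 - v)^2 \,\mathrm{d}x
```

    As epsilon decreases, v stays close to 1 on smooth regions and drops toward 0 along sharp features, which is how minimizing the functional both smooths the field and localizes the discontinuity set.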

    Quantitative assessment of the discrimination potential of class and randomly acquired characteristics for crime scene quality shoeprints

    Footwear evidence has tremendous forensic value; it can focus a criminal investigation, link suspects to scenes, help reconstruct a series of events, or otherwise provide information vital to the successful resolution of a case. When considering the specific utility of a linkage, the strength of the connection between the source footwear and an impression left at the scene of a crime varies with the known rarity of the shoeprint itself, which is a function of the class characteristics as well as the complexity, clarity, and quality of randomly acquired characteristics (RACs) available for analysis. To help elucidate the discrimination potential of footwear as a source of forensic evidence, the aim of this research was three-fold.

    The first (and most time-consuming) obstacle of this study was data acquisition. In order to efficiently process footwear exemplar inputs and extract meaningful data, including information about randomly acquired characteristics, a semi-automated image processing chain was developed. To date, 1,000 shoes have been fully processed, yielding a total of 57,426 RACs characterized in terms of position (theta, r, rnorm), shape (circle, line/curve, triangle, irregular), and complex perimeter (e.g., Fourier descriptor). A plot of each feature versus position allowed for the creation of a heat map detailing coincidental RAC co-occurrence in position and shape. Results indicate that random chance association is as high as 1:756 for lines/curves and as low as 1:9,571 for triangular-shaped features. However, when a detailed analysis of the RAC's geometry is evaluated, each feature is distinguishable.

    The second goal of this project was to ascertain the baseline performance of an automated footwear classification algorithm. A brief literature review reveals more than a dozen different approaches to automated shoeprint classification over the last decade. Unfortunately, despite the multitude of options and reports on algorithm inter-comparisons, few studies have assessed accuracy for crime-scene-like prints. To remedy this deficit, this research quantitatively assessed the baseline performance of a single metric, known as Phase Only Correlation (POC), on both high quality and crime-scene-like prints. The objective was to determine the baseline performance for high quality exemplars with high signal-to-noise ratios, and then determine the degree to which this performance declined as a function of variations in mixed media (blood and dust), transfer mechanisms (gel lifters), enhancement techniques (digital and chemical), and substrates (ceramic tiles, vinyl tiles, and paper). The results indicate probabilities greater than 0.850 (and as high as 0.989) that known matches will exhibit stochastic dominance, and probabilities of 0.99 with high quality exemplars (Handiprints or outsole edge images).

    The third and final aim of this research was to mathematically evaluate the frequency and similarity of RACs in high quality exemplars versus crime-scene-like impressions as a function of RAC shape, perimeter, and area. This was accomplished using wet-residue impressions (created in the laboratory, but generated in a manner intended to replicate crime-scene-like prints). These impressions were processed in the same manner as their high quality exemplar mates, allowing for the determination of RAC loss and correlation of the entire RAC map between crime scene and high quality images. Results show that the unpredictable nature of crime scene print deposition causes RAC loss ranging from 33-100%, with an average loss of 85%, and that up to 10% of the crime scene impressions fully lacked any identifiable RACs. Despite the loss of features present in the crime-scene-like impressions, there was a 0.74 probability that the actual shoe's high quality RAC map would rank higher in an ordered list than a known non-match map when queried with the crime-scene-like print. Moreover, this was true despite the fact that 64% of the crime-scene-like impressions exhibited 10 or fewer RACs.
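    Since the second aim hinges on Phase Only Correlation, a minimal sketch of a standard POC computation is shown below, assuming two same-sized grayscale NumPy arrays; the function name, the epsilon guard, and the use of the peak height as a similarity score are illustrative choices, not the study's implementation.

```python
import numpy as np

def phase_only_correlation(f, g, eps=1e-12):
    """Phase Only Correlation (POC) surface between two same-sized grayscale images.

    A sharp peak indicates a likely match; the peak location gives the
    translational offset between the two prints.
    """
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    cross = F * np.conj(G)
    r = cross / (np.abs(cross) + eps)   # keep phase, discard magnitude
    poc = np.fft.ifft2(r).real
    return np.fft.fftshift(poc)         # center the peak for inspection

# Illustrative usage: the peak height can serve as the similarity score.
# score = phase_only_correlation(scene_print, exemplar).max()
```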

    A Better Looking Brain: Image Pre-Processing Approaches for fMRI Data

    Researchers in the field of functional neuroimaging have faced a long-standing problem: pre-processing low spatial resolution data without losing the meaningful details within. Commonly, brain function is recorded by a technique known as echo-planar imaging, which represents the measure of blood flow (the BOLD signal) through a particular location in the brain as an array of intensity values changing over time. This approach to recording a movie of blood flow in the brain is known as fMRI. Neural activity is then studied from the temporal correlation patterns existing within the fMRI time series. However, the resulting images are noisy and contain low spatial detail, making it imperative to pre-process them appropriately to derive meaningful activation patterns. Two of the several standard preprocessing steps employed just before the analysis stage are denoising and normalization. Fundamentally, it is difficult to perfectly remove noise from an image without making assumptions about the signal and noise distributions. A convenient and commonly used alternative is to smooth the image with a Gaussian filter, but this method suffers from obvious drawbacks, primarily loss of spatial detail. A greater challenge arises when we attempt to derive average activation patterns from fMRI images acquired from a group of individuals. The brain of one individual differs from others both structurally and functionally. Commonly, inter-individual differences in anatomical structure are compensated for by co-registering each subject's data to a common space, a step known as spatial normalization. However, there are no existing methods to compensate for differences in the functional organization of the brain. This work presents first steps towards data-driven, robust algorithms for fMRI image denoising and multi-subject image normalization that utilize the inherent information within fMRI data. In addition, a new validation approach based on the spatial shape of the activation regions is presented to quantify the effects of preprocessing and to serve as a tool for recording differences in activation patterns between individual subjects or between two groups such as healthy controls and patients with mental illness. Qualitative and quantitative results of the proposed framework compare favorably against existing and widely used model-driven approaches such as Gaussian smoothing and structure-based spatial normalization. This work is intended to provide neuroscience researchers with tools to derive more meaningful activation patterns, accurately identify imaging biomarkers for various neurodevelopmental diseases, and maximize the specificity of a diagnosis.
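    As a point of reference for the Gaussian-smoothing baseline that the abstract contrasts against, a minimal sketch is given below; the 4D array layout, the FWHM-to-sigma conversion, and the parameter defaults are assumptions, and the thesis's data-driven denoising method is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_fmri(volume_4d, fwhm_mm=6.0, voxel_size_mm=3.0):
    """Baseline spatial smoothing of a 4D fMRI array (x, y, z, time).

    Each volume is convolved with an isotropic Gaussian; the FWHM is converted
    to the sigma expected by scipy. Smoothing raises SNR but blurs spatial
    detail, which is the drawback the data-driven approach targets.
    """
    sigma_vox = (fwhm_mm / voxel_size_mm) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    # sigma = 0 on the time axis so no temporal blurring is introduced
    return gaussian_filter(volume_4d, sigma=(sigma_vox, sigma_vox, sigma_vox, 0.0))
```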

    Continuous Modeling of 3D Building Rooftops From Airborne LIDAR and Imagery

    In recent years, a number of mega-cities have provided 3D photorealistic virtual models to support the decision-making processes involved in maintaining the cities' infrastructure and environment more effectively. 3D virtual city models are static snapshots of the environment and represent the status quo at the time of their data acquisition. However, cities are dynamic systems that continuously change over time. Accordingly, their virtual representations need to be regularly updated in a timely manner to allow for accurate analyses and simulation results upon which decisions are based. The concept of "continuous city modeling" is to progressively reconstruct city models by accommodating changes recognized in the spatio-temporal domain, while preserving unchanged structures. However, developing a universal intelligent machine enabling continuous modeling remains a challenging task. Therefore, this thesis proposes a novel research framework for continuously reconstructing 3D building rooftops using multi-sensor data. To achieve this goal, we first propose a 3D building rooftop modeling method using airborne LiDAR data. The main focus is on the implementation of an implicit regularization method which imposes a data-driven building regularity on the noisy boundaries of roof planes when reconstructing 3D building rooftop models. The implicit regularization process is implemented in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). Secondly, we propose a context-based geometric hashing method to align newly acquired image data with existing building models. The novelty is the use of context features to achieve robust and accurate matching results. Thirdly, the existing building models are refined by a newly proposed sequential fusion method. The main advantage of the proposed method is its ability to progressively refine modeling errors frequently observed in LiDAR-driven building models. The refinement process is conducted in the framework of MDL combined with HAT, and Markov Chain Monte Carlo (MCMC) coupled with Simulated Annealing (SA) is employed to perform a global optimization. The results demonstrate that the proposed continuous rooftop modeling methods show promise for supporting various critical decisions by not only reconstructing 3D rooftop models accurately, but also updating the models using multi-sensor data.
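    To illustrate the MCMC-with-Simulated-Annealing optimization mentioned above, a generic sketch of an annealing loop is shown below; the `propose` and `mdl_score` callables stand in for the thesis's actual hypothesize-and-test moves and MDL criterion, and all names and cooling parameters are hypothetical.

```python
import math
import random

def simulated_annealing(initial_model, propose, mdl_score,
                        t_start=1.0, t_end=1e-3, cooling=0.995):
    """Generic MCMC-style simulated annealing loop.

    `propose` perturbs a rooftop model hypothesis and `mdl_score` returns the
    description length to minimize; both are placeholders for the actual
    hypothesize-and-test moves and MDL criterion.
    """
    current, current_cost = initial_model, mdl_score(initial_model)
    best, best_cost = current, current_cost
    t = t_start
    while t > t_end:
        candidate = propose(current)
        cost = mdl_score(candidate)
        # Always accept downhill moves; accept uphill moves with Boltzmann probability.
        if cost < current_cost or random.random() < math.exp((current_cost - cost) / t):
            current, current_cost = candidate, cost
            if cost < best_cost:
                best, best_cost = candidate, cost
        t *= cooling
    return best
```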

    Nuclei/Cell Detection in Microscopic Skeletal Muscle Fiber Images and Histopathological Brain Tumor Images Using Sparse Optimizations

    Nuclei/cell detection is usually a prerequisite procedure in many computer-aided biomedical image analysis tasks. In this thesis we propose two automatic nuclei/cell detection frameworks, one for nuclei detection in skeletal muscle fiber images and the other for brain tumor histopathological images. For skeletal muscle fiber images, the major challenges include: i) shape and size variations of the nuclei, ii) overlapping nuclear clumps, and iii) a series of z-stack images with out-of-focus regions. We propose a novel automatic detection algorithm consisting of the following components: 1) The original z-stack images are first converted into one all-in-focus image. 2) A sufficient number of hypothetical ellipses are then generated for each nucleus contour. 3) Next, a set of representative training samples and discriminative features are selected by a two-stage sparse model. 4) A classifier is trained using the refined training data. 5) Final nuclei detection is obtained by mean-shift clustering based on the inner distance. The proposed method was tested on a set of images containing over 1,500 nuclei, and the results outperform current state-of-the-art approaches. For brain tumor histopathological images, the major challenges are to handle significant variations in cell appearance and to split touching cells. The proposed automatic cell detection consists of: 1) sparse reconstruction for splitting touching cells, and 2) adaptive dictionary learning for handling cell appearance variations. The proposed method was extensively tested on a data set with over 2,000 cells; the results outperform other state-of-the-art algorithms with an F1 score of 0.96.
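    As an illustration of the final clustering step, a minimal sketch using standard Euclidean mean-shift from scikit-learn is given below; the inner-distance-based variant described in the abstract is not reproduced, and the function name, bandwidth, and input format are assumptions.

```python
import numpy as np
from sklearn.cluster import MeanShift

def cluster_candidate_centers(candidate_centers, bandwidth=15.0):
    """Collapse overlapping nucleus hypotheses into final detections.

    `candidate_centers` is an (N, 2) array of (row, col) centers produced by
    the classifier stage; standard Euclidean mean-shift is used here as a
    stand-in for the inner-distance-based clustering described above.
    """
    ms = MeanShift(bandwidth=bandwidth)
    labels = ms.fit_predict(np.asarray(candidate_centers, dtype=float))
    return ms.cluster_centers_, labels
```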