
    Simulation of Ground-Truth Validation Data Via Physically- and Statistically-Based Warps

    The problem of scarcity of ground-truth expert delineations of medical image data is a serious one that impedes the training and validation of medical image analysis techniques. We develop an algorithm for the automatic generation of large databases of annotated images from a single reference data set. We provide a web-based interface through which users can upload a reference data set (an image and its corresponding segmentation and landmark points), provide custom settings of parameters, and, following server-side computations, generate and download an arbitrary number of novel ground-truth data, including segmentations, displacement vector fields, intensity non-uniformity maps, and point correspondences. To produce realistic simulated data, we use variational (statistically-based) and vibrational (physically-based) spatial deformations, nonlinear radiometric warps mimicking imaging non-homogeneity, and additive random noise with different underlying distributions. We outline the algorithmic details, present sample results, and provide the web address to readers for immediate evaluation and usage.
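    As a rough illustration of the kind of simulation described above, the following Python sketch warps a reference image and its segmentation with a smooth random displacement field, applies a slowly varying multiplicative intensity non-uniformity, and adds noise. The function name and parameters are hypothetical, and the simple Gaussian-smoothed field only stands in for the paper's variational and vibrational deformation models.

        # Minimal sketch: simulate one annotated variant from a reference
        # image + segmentation (smooth random spatial warp, multiplicative
        # intensity non-uniformity, additive noise). Illustrative only; the
        # paper's variational/vibrational models are not reproduced here.
        # Assumes 2D NumPy arrays for `image` and `labels`.
        import numpy as np
        from scipy.ndimage import gaussian_filter, map_coordinates

        def simulate_variant(image, labels, warp_scale=5.0, smooth=20.0,
                             bias_strength=0.2, noise_sigma=0.02, seed=0):
            rng = np.random.default_rng(seed)
            h, w = image.shape
            # Smooth random displacement field (stand-in for the paper's warps)
            dx = gaussian_filter(rng.standard_normal((h, w)), smooth)
            dy = gaussian_filter(rng.standard_normal((h, w)), smooth)
            dx *= warp_scale / (np.abs(dx).max() + 1e-8)
            dy *= warp_scale / (np.abs(dy).max() + 1e-8)
            yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
            coords = np.stack([yy + dy, xx + dx])
            warped_img = map_coordinates(image, coords, order=1, mode="nearest")
            warped_lab = map_coordinates(labels, coords, order=0, mode="nearest")
            # Slowly varying multiplicative bias field (intensity non-uniformity)
            bias = 1.0 + bias_strength * gaussian_filter(
                rng.standard_normal((h, w)), smooth * 2)
            noisy = warped_img * bias + rng.normal(0, noise_sigma, (h, w))
            return noisy, warped_lab, (dx, dy)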

    A Benchmark and Evaluation of Non-Rigid Structure from Motion

    Non-rigid structure from motion (NRSfM) is a long-standing and central problem in computer vision, allowing us to obtain 3D information from multiple images when the scene is dynamic. A main issue regarding the further development of this important computer vision topic is the lack of high-quality data sets. We address this issue by presenting a data set compiled for this purpose, which is made publicly available and is considerably larger than the previous state of the art. To validate the applicability of this data set, and to provide an investigation into the state of the art of NRSfM, including potential directions forward, we present a benchmark and a scrupulous evaluation using this data set. This benchmark evaluates 16 different methods with available code, which we argue reasonably span the state of the art in NRSfM. We also hope that the presented public data set and evaluation will provide benchmark tools for further development in this field.

    Efficient transfer entropy analysis of non-stationary neural time series

    Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate the associated probability density functions. To obtain these observations, available estimators assume stationarity of processes to allow pooling of observations over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that deals with the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the computationally heaviest aspects of the ensemble method. We test the performance and robustness of our implementation on data from simulated stochastic processes and demonstrate the method's applicability to magnetoencephalographic data. While we mainly evaluate the proposed method for neuroscientific data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems. Comment: 27 pages, 7 figures, submitted to PLOS ONE.
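    The ensemble idea can be illustrated with a toy estimator: transfer entropy at a fixed time index is estimated by pooling observations across trials rather than across time, so stationarity over time is not required. The sketch below uses a simple plug-in (histogram) estimator for clarity; the work above uses a nearest-neighbour estimator with a GPU implementation, and the function and its parameters here are purely illustrative.

        # Toy ensemble estimator of transfer entropy X -> Y at time index t,
        # pooling observations across trials instead of across time.
        import numpy as np

        def ensemble_transfer_entropy(x, y, t, lag=1, bins=4):
            """x, y: arrays of shape (n_trials, n_samples); t: target time index."""
            # Discretize each variable into (roughly) equiprobable bins across trials
            def disc(v):
                edges = np.quantile(v, np.linspace(0, 1, bins + 1)[1:-1])
                return np.digitize(v, edges)
            yt = disc(y[:, t])          # future of target
            yp = disc(y[:, t - lag])    # past of target
            xp = disc(x[:, t - lag])    # past of source
            def H(*vars_):
                # Plug-in joint entropy of discrete variables
                joint = np.stack(vars_, axis=1)
                _, counts = np.unique(joint, axis=0, return_counts=True)
                p = counts / counts.sum()
                return -np.sum(p * np.log(p))
            # TE = H(yt, yp) - H(yt, yp, xp) + H(yp, xp) - H(yp)
            return H(yt, yp) - H(yt, yp, xp) + H(yp, xp) - H(yp)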

    Visibility in underwater robotics: Benchmarking and single image dehazing

    Dealing with underwater visibility is one of the most important challenges in autonomous underwater robotics. Light transmission in the water medium degrades images, making the interpretation of the scene difficult and consequently compromising the whole intervention. This thesis contributes by analysing the impact of underwater image degradation on commonly used vision algorithms through benchmarking. An online framework for underwater research that makes it possible to analyse results under different conditions is presented. Finally, motivated by the results of experimentation with the developed framework, a deep learning solution is proposed that is capable of dehazing a degraded image in real time, restoring the original colors of the image.
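    Purely as an illustration of the kind of image-to-image model such a real-time dehazing step might use (the thesis's actual architecture and training setup are not given here), a minimal PyTorch sketch could look like the following; all layer sizes are hypothetical.

        # Hypothetical tiny dehazing network: maps a hazy RGB image to a
        # restored RGB image. Layer sizes are illustrative only.
        import torch
        import torch.nn as nn

        class TinyDehazeNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # restored RGB
                )
            def forward(self, hazy):
                return self.net(hazy)

        # Usage sketch: restored = TinyDehazeNet()(torch.rand(1, 3, 240, 320))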

    Designing image segmentation studies: Statistical power, sample size and reference standard quality.

    Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of fewer than 4 subjects and errors in the detectable accuracy difference of less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study.
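    For orientation, a textbook normal-approximation sample-size calculation for comparing two proportions (here, per-voxel accuracies of two algorithms) is sketched below. It ignores the paired design, voxel correlations, and the reference-standard-quality adjustment derived in the paper, so it only illustrates the general form of such a calculation; the function name and defaults are assumptions.

        # Standard two-proportion sample-size calculation (normal approximation).
        # Illustrative only; not the paper's derived formula.
        import math
        from scipy.stats import norm

        def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.8):
            """Subjects per group to detect accuracy p1 vs p2 (two-sided test)."""
            z_a = norm.ppf(1 - alpha / 2)
            z_b = norm.ppf(power)
            p_bar = (p1 + p2) / 2
            num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                   + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
            return math.ceil(num / (p1 - p2) ** 2)

        # e.g. detecting 95% vs 96% mean voxel-wise accuracy:
        # sample_size_two_proportions(0.95, 0.96)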

    Deep Learning Methods for Estimation of Elasticity and Backscatter Quantitative Ultrasound

    Ultrasound (US) imaging is increasingly attracting the attention of both academic and industrial researchers due to being a real-time and nonionizing imaging modality. It is also less expensive and more portable compared to other medical imaging techniques. However, its granular appearance complicates the interpretation of US images, hindering their wider adoption. This granular appearance (also referred to as speckles) arises from the backscattered echo from microstructural components smaller than the ultrasound wavelength, which are called scatterers. While significant effort has been undertaken to reduce the appearance of speckles, they carry scatterer properties that are highly correlated with the microstructure of the tissue and can be employed to diagnose different types of disease. Many clinically valuable properties can be extracted from speckles, such as the elasticity and organization of scatterers. Analyzing the motion of scatterers in the presence of an internal or external force can be used to obtain the elastic properties of the tissue; this technique, called elastography, has been widely used to characterize tissue. Estimating the scatterer organization (scatterer number density and coherent-to-diffuse scattering power) is also crucial, as it provides information about tissue microstructure and potentially aids in disease diagnosis and treatment monitoring. This thesis proposes several deep learning-based methods to facilitate and improve the estimation of speckle motion and scatterer properties, potentially simplifying the interpretation of US images. In particular, we propose new methods for displacement estimation in Chapters 2 to 6 and introduce novel techniques in Chapters 7 to 11 to quantify scatterer number density and organization.
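    As a point of reference for the displacement-estimation problem addressed with deep learning in Chapters 2 to 6, the sketch below shows a classical one-dimensional block-matching baseline using normalized cross-correlation between pre- and post-deformation ultrasound lines; the window, search range, and step sizes are illustrative choices, not values from the thesis.

        # Classical baseline sketch: 1-D block matching by normalized
        # cross-correlation between pre- and post-deformation signals.
        import numpy as np

        def block_match_displacement(pre, post, win=64, search=10, step=32):
            """pre, post: 1-D signals (one RF/B-mode line); returns axial shifts."""
            shifts = []
            for start in range(search, len(pre) - win - search, step):
                ref = pre[start:start + win]
                ref = (ref - ref.mean()) / (ref.std() + 1e-8)
                best, best_ncc = 0, -np.inf
                for d in range(-search, search + 1):
                    cand = post[start + d:start + d + win]
                    cand = (cand - cand.mean()) / (cand.std() + 1e-8)
                    ncc = float(np.dot(ref, cand)) / win
                    if ncc > best_ncc:
                        best, best_ncc = d, ncc
                shifts.append(best)
            return np.array(shifts)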

    Analysis and Strategies to Enhance Intensity-Based Image Registration.

    The availability of numerous complementary imaging modalities allows us to obtain a detailed picture of the body and its functioning. To aid diagnostics and surgical planning, all available information can be presented by visually aligning images from different modalities using image registration. This dissertation investigates strategies to improve the performance of image registration algorithms that use intensity-based similarity metrics. Nonrigid warp estimation using intensity-based registration can be very time consuming. We develop a novel framework based on importance sampling and stochastic approximation techniques to accelerate nonrigid registration methods while preserving their accuracy. Registration results for simulated brain MRI data and human lung CT data demonstrate the efficacy of the proposed framework. Functional MRI (fMRI) is used to non-invasively detect brain activation by acquiring a series of brain images, called a time-series, while the subject performs tasks designed to stimulate parts of the brain. Consequently, these studies are plagued by subject head motion. Mutual information (MI) based slice-to-volume (SV) registration algorithms used to estimate time-series motion are less accurate for end-slices (i.e., slices near the top of the head scans), where a loss in image complexity yields noisy MI estimates. We present a strategy, dubbed SV-JP, to improve SV registration accuracy for time-series end-slices by using joint pdf priors derived from successfully registered high-complexity slices near the middle of the head scans to bolster noisy MI estimates. Although fMRI time-series registration can estimate head motion, this motion also spawns extraneous intensity fluctuations called spin saturation artifacts. These artifacts hamper brain-activation detection. We describe spin saturation using mathematical expressions and develop a weighted-average spin saturation (WASS) correction scheme. An algorithm to identify time-series voxels affected by spin saturation and to implement WASS correction is outlined. The performance of registration methods is dependent on the tuning parameters used to implement their similarity metrics. To facilitate finding optimal tuning parameters, we develop a computationally efficient linear approximation of the (co)variance of MI-based registration estimates. However, empirically, our approximation was satisfactory only for a simple mono-modality registration example and broke down for realistic multi-modality registration where the MI metric becomes strongly nonlinear. Ph.D. Electrical Engineering: Systems. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/61552/1/rbhagali_1.pd
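    At the core of such intensity-based registration is a similarity metric like mutual information estimated from a joint intensity histogram. The sketch below computes MI over a uniformly sampled subset of voxels; the dissertation's importance sampling and stochastic approximation framework builds on, but is not shown by, this basic computation, and the function here is illustrative only.

        # Mutual information between two images from a joint histogram,
        # evaluated on a random subset of voxels (uniform sampling here).
        import numpy as np

        def mutual_information(fixed, moving, bins=32, n_samples=20000, seed=0):
            rng = np.random.default_rng(seed)
            idx = rng.integers(0, fixed.size, n_samples)   # sampled voxel subset
            a, b = fixed.ravel()[idx], moving.ravel()[idx]
            joint, _, _ = np.histogram2d(a, b, bins=bins)
            pxy = joint / joint.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            outer = px[:, None] * py[None, :]
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / outer[nz])))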