
    Coherence Filtering to Enhance the Mandibular Canal in Cone-Beam CT data

    Segmenting the mandibular canal from cone-beam CT data is difficult due to low edge contrast and high image noise. We introduce 3D coherence filtering as a method to close the interrupted edges and denoise the structure of the mandibular canal. Coherence filtering is an anisotropic, non-linear, tensor-based diffusion algorithm for edge-enhancing image filtering. We test different numerical schemes for the tensor diffusion equation: non-negative discretization, standard discretization, and the rotation-invariant scheme of Weickert [1]. Only the scheme of Weickert did not blur the high spherical image frequencies along the image diagonals of our test volume, so this scheme was chosen to enhance the small, curved structure of the mandibular canal. The best choice of the diffusion-equation parameters c1 and c2 depends on the image noise. Coherence filtering on the CBCT scan works well: the noise in the mandibular canal is removed and the edges are connected. Because the algorithm is tensor-based, it cannot handle edge joints or splits and is therefore less suited to more complex image structures.
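
    To make the method concrete, below is a minimal 2D sketch of coherence-enhancing diffusion in Python (NumPy/SciPy). It is not the thesis implementation: the thesis works on 3D volumes and favours Weickert's rotation-invariant scheme, whereas this illustration uses a simple explicit update with standard discretization; the parameters c1 and c2 play the roles described above, and all numerical values are illustrative assumptions.

```python
# Minimal 2D sketch of coherence-enhancing (tensor-based) diffusion.
# Illustrative only: explicit time stepping, standard discretization.
import numpy as np
from scipy.ndimage import gaussian_filter

def coherence_diffusion(u, c1=0.001, c2=1.0, sigma=1.0, rho=4.0, tau=0.1, steps=20):
    u = u.astype(float)
    for _ in range(steps):
        # Gradient of the smoothed image (noise scale sigma)
        us = gaussian_filter(u, sigma)
        ux, uy = np.gradient(us)
        # Structure tensor, averaged at integration scale rho
        Jxx = gaussian_filter(ux * ux, rho)
        Jxy = gaussian_filter(ux * uy, rho)
        Jyy = gaussian_filter(uy * uy, rho)
        # Eigenvalues of the 2x2 structure tensor (closed form)
        tmp = np.sqrt((Jxx - Jyy) ** 2 + 4 * Jxy ** 2)
        mu1, mu2 = (Jxx + Jyy + tmp) / 2, (Jxx + Jyy - tmp) / 2
        # Diffusivities: weak across edges (lam1), strong along coherent flow (lam2)
        coh = (mu1 - mu2) ** 2
        lam1 = np.full_like(u, c1)
        lam2 = c1 + (1 - c1) * np.exp(-c2 / (coh + 1e-12))
        # Orientation of the dominant eigenvector; build D = V diag(lam1, lam2) V^T
        theta = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)
        c, s = np.cos(theta), np.sin(theta)
        Dxx = lam1 * c * c + lam2 * s * s
        Dxy = (lam1 - lam2) * c * s
        Dyy = lam1 * s * s + lam2 * c * c
        # Explicit update of du/dt = div(D grad u)
        ux, uy = np.gradient(u)
        jx = Dxx * ux + Dxy * uy
        jy = Dxy * ux + Dyy * uy
        u = u + tau * (np.gradient(jx, axis=0) + np.gradient(jy, axis=1))
    return u
```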

    Radial Basis Functions: Biomedical Applications and Parallelization

    A radial basis function (RBF) is a real-valued function whose values depend only on the distances between an interpolation point and a set of user-specified points called centers. RBF interpolation is one of the primary methods for reconstructing functions from multi-dimensional scattered data. Its ability to generalize to arbitrary space dimensions and to provide spectral accuracy has made it particularly popular in different application areas, including, but not limited to, finding numerical solutions of partial differential equations (PDEs), image processing, computer vision and graphics, and deep learning and neural networks. The present thesis discusses three applications of RBF interpolation in biomedical engineering: (1) calcium dynamics modeling, in which we numerically solve a set of PDEs using meshless numerical methods and RBF-based interpolation techniques; (2) image restoration and transformation, where an image is restored from its triangular mesh representation or transformed from its original form under translation, rotation, scaling, etc.; and (3) porous structure design, in which RBF interpolation is used to reconstruct a 3D volume containing porous structures from a set of regularly or randomly placed points inside a user-provided surface shape. All three applications have been investigated and their effectiveness supported with numerous experimental results. In particular, we utilize anisotropic distance metrics to define the distance in RBF interpolation and apply them to the second and third applications, which shows significant improvement in preserving image features and capturing connected porous structures over the isotropic distance-based RBF method. Besides the algorithm designs and their applications in biomedical areas, we also explore several common parallelization techniques (including OpenMP and CUDA-based GPU programming) to accelerate the present algorithms. In particular, we analyze how parallel programming can help RBF interpolation speed up the meshless PDE solver as well as image processing. While RBF methods have been widely used in various science and engineering fields, the present thesis is expected to spark further interest from computational scientists and students in this fast-growing area, and specifically in applying these techniques to biomedical problems such as the ones investigated in the present work.
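
    As a concrete illustration of the interpolation scheme described above, here is a minimal NumPy sketch of Gaussian RBF interpolation that also accepts a metric matrix M, in the spirit of the anisotropic distances used in the thesis. The kernel, the metric and the toy data are illustrative assumptions, not the thesis code.

```python
# Minimal Gaussian RBF interpolation with an optional anisotropic metric.
import numpy as np

def rbf_fit(centers, values, eps=1.0, M=None):
    """Solve for RBF weights; M is an optional SPD matrix defining the metric."""
    if M is None:
        M = np.eye(centers.shape[1])
    diff = centers[:, None, :] - centers[None, :, :]
    r2 = np.einsum('ijk,kl,ijl->ij', diff, M, diff)   # squared (anisotropic) distances
    A = np.exp(-(eps ** 2) * r2)                      # Gaussian RBF matrix
    return np.linalg.solve(A, values)

def rbf_eval(x, centers, weights, eps=1.0, M=None):
    if M is None:
        M = np.eye(centers.shape[1])
    diff = x[:, None, :] - centers[None, :, :]
    r2 = np.einsum('ijk,kl,ijl->ij', diff, M, diff)
    return np.exp(-(eps ** 2) * r2) @ weights

# Toy usage: reconstruct f(x, y) = sin(x) * cos(y) from 200 scattered samples.
rng = np.random.default_rng(0)
centers = rng.uniform(0, np.pi, size=(200, 2))
values = np.sin(centers[:, 0]) * np.cos(centers[:, 1])
w = rbf_fit(centers, values)
test = rng.uniform(0, np.pi, size=(5, 2))
print(rbf_eval(test, centers, w))
print(np.sin(test[:, 0]) * np.cos(test[:, 1]))
```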

    Extracting the Structure and Conformations of Biological Entities from Large Datasets

    In biology, structure determines function, which often proceeds via changes in conformation. Efficient means for determining structure exist, but mapping conformations continues to present a serious challenge. Single-particle approaches, such as cryogenic electron microscopy (cryo-EM) and emerging diffract-and-destroy X-ray techniques, are, in principle, ideally positioned to overcome these challenges. But the algorithmic ability to extract information from large heterogeneous datasets consisting of unsorted snapshots - each emanating from an unknown orientation of an object in an unknown conformation - remains elusive. It is the objective of this thesis to describe and validate a powerful suite of manifold-based algorithms able to extract structural and conformational information from large datasets. These computationally efficient algorithms offer a new approach to determining the structure and conformations of viruses and macromolecules. After an introduction, we demonstrate a distributed, exact k-Nearest Neighbor Graph (k-NNG) construction method, in order to establish a firm algorithmic basis for manifold-based analysis. The proposed algorithm uses Graphics Processing Units (GPUs), exploits multiple levels of parallelism in a distributed computational environment, and is scalable to different cluster sizes, with each compute node in the cluster containing multiple GPUs. Next, we present applications of manifold-based analysis in determining structure and conformational variability. Using the Diffusion Map algorithm, a new approach is presented that is capable of determining the structure of symmetric objects, such as viruses, to 1/100th of the object diameter using low-signal diffraction snapshots. This is demonstrated by means of a successful 3D reconstruction of the Satellite Tobacco Necrosis Virus (STNV) to atomic resolution from simulated diffraction snapshots with and without noise. We next present a new approach for determining discrete conformational changes of the enzyme Adenylate kinase (ADK) from very large datasets of up to 20 million snapshots, each with ~10^4 pixels. This exceeds by an order of magnitude the largest dataset previously analyzed. Finally, we present a theoretical framework and an algorithmic pipeline for capturing continuous conformational changes of the ribosome from ultralow-signal (-12 dB) experimental cryo-EM data. Our analysis shows a smooth, concerted change in molecular structure in two-dimensional projection, which might be indicative of the way the ribosome functions as a molecular machine. The thesis ends with a summary and future prospects.
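
    As background for the manifold-based analysis, the following is a generic NumPy sketch of the Diffusion Map embedding applied to a matrix of flattened snapshots. It is a single-node, dense illustration; the distributed GPU k-NNG pipeline and the thesis' specific kernel choices are not reproduced here, and epsilon and n_components are user-chosen assumptions.

```python
# Generic Diffusion Map embedding of snapshot data (rows of X are flattened snapshots).
import numpy as np

def diffusion_map(X, epsilon, n_components=3):
    # Pairwise squared Euclidean distances between snapshots
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    # Gaussian kernel and density normalization (alpha = 1)
    K = np.exp(-d2 / epsilon)
    q = K.sum(axis=1)
    K = K / np.outer(q, q)
    # Row-normalize to obtain a Markov transition matrix
    d = K.sum(axis=1)
    P = K / d[:, None]
    # Leading non-trivial eigenvectors give the diffusion coordinates
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1]
```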

    TorchIO: A Python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images in deep learning

    Background and objective: Processing of medical images such as MRI or CT presents different challenges compared to RGB images typically used in computer vision. These include a lack of labels for large datasets, high computational costs, and the need for metadata to describe the physical properties of voxels. Data augmentation is used to artificially increase the size of the training datasets. Training with image subvolumes or patches decreases the need for computational power. Spatial metadata needs to be carefully taken into account in order to ensure a correct alignment and orientation of volumes. Methods: We present TorchIO, an open-source Python library to enable efficient loading, preprocessing, augmentation and patch-based sampling of medical images for deep learning. TorchIO follows the style of PyTorch and integrates standard medical image processing libraries to efficiently process images during training of neural networks. TorchIO transforms can be easily composed, reproduced, traced and extended. Most transforms can be inverted, making the library suitable for test-time augmentation and estimation of aleatoric uncertainty in the context of segmentation. We provide multiple generic preprocessing and augmentation operations as well as simulation of MRI-specific artifacts. Results: Source code, comprehensive tutorials and extensive documentation for TorchIO can be found at http://torchio.rtfd.io/. The package can be installed from the Python Package Index (PyPI) by running pip install torchio. It includes a command-line interface which allows users to apply transforms to image files without using Python. Additionally, we provide a graphical user interface within a TorchIO extension in 3D Slicer to visualize the effects of transforms. Conclusion: TorchIO was developed to help researchers standardize medical image processing pipelines and allow them to focus on their deep learning experiments. It encourages good open-science practices, as it supports experiment reproducibility and is version-controlled so that the software can be cited precisely. Due to its modularity, the library is compatible with other frameworks for deep learning with medical images.
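
    A short usage sketch of the library follows. The transform and class names follow TorchIO's documented API, while the file names and the particular composition are placeholders chosen for illustration; consult the documentation linked above for the authoritative interface.

```python
# Illustrative TorchIO pipeline: load a subject and apply composed transforms.
import torchio as tio

subject = tio.Subject(
    t1=tio.ScalarImage('t1.nii.gz'),          # placeholder path to an MRI volume
    seg=tio.LabelMap('segmentation.nii.gz'),  # placeholder path to a label map
)

transform = tio.Compose([
    tio.ToCanonical(),                        # reorient using the spatial metadata
    tio.RescaleIntensity(out_min_max=(0, 1)), # generic preprocessing
    tio.RandomAffine(),                       # generic spatial augmentation
    tio.RandomBiasField(),                    # MRI-specific artifact simulation
    tio.RandomNoise(),
])

transformed = transform(subject)              # labels stay aligned with the image
```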

    Towards a data-driven treatment of epilepsy: computational methods to overcome low-data regimes in clinical settings

    Epilepsy is the most common neurological disorder, affecting around 1% of the population. One third of patients with epilepsy are drug-resistant. If the epileptogenic zone (EZ) can be localized precisely, curative resective surgery may be performed. However, only 40 to 70% of patients remain seizure-free after surgery. Presurgical evaluation, which in part aims to localize the EZ, is a complex multimodal process that requires subjective clinical decisions, often relying on a multidisciplinary team's experience. Thus, the clinical pathway could benefit from data-driven methods for clinical decision support. In the last decade, deep learning has seen great advancements due to the improvement of graphics processing units (GPUs), the development of new algorithms, and the large amounts of data that have become available for training. However, using deep learning in clinical settings is challenging, as large datasets are rare due to privacy concerns and expensive annotation processes. Methods to overcome the lack of data are especially important in the context of presurgical evaluation of epilepsy, as only a small proportion of patients with epilepsy end up undergoing surgery, which limits the availability of data to learn from. This thesis introduces computational methods that pave the way towards integrating data-driven methods into the clinical pathway for the treatment of epilepsy, overcoming the challenge presented by the relatively small datasets available. We used transfer learning from general-domain human action recognition to characterize epileptic seizures from video-telemetry data. We developed a software framework to predict the location of the EZ given seizure semiologies, based on retrospective information from the literature. We trained deep learning models using self-supervised and semi-supervised learning to perform quantitative analysis of resective surgery by segmenting resection cavities on brain magnetic resonance images (MRIs). Throughout our work, we shared datasets and software tools that will accelerate research in medical image computing, particularly in the field of epilepsy.
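
    To illustrate the first of these methods, the following is a hedged sketch of transfer learning from general-domain action recognition to seizure characterization: a pretrained 3D video backbone is frozen and only a new classification head is trained. The backbone (torchvision's r3d_18), the number of classes, and the dummy data are assumptions made for illustration, not the thesis' exact configuration.

```python
# Sketch: fine-tune a pretrained action-recognition backbone on video clips.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

num_classes = 2                         # placeholder number of semiology classes
model = r3d_18(weights='DEFAULT')       # 3D ResNet pretrained for action recognition

# Freeze the pretrained feature extractor; train only the new head.
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)  # replace classification head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy clip batch of shape (N, C, T, H, W).
clips = torch.randn(2, 3, 16, 112, 112)
labels = torch.tensor([0, 1])
loss = criterion(model(clips), labels)
loss.backward()
optimizer.step()
```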

    Gradient light interference microscopy for imaging strongly scattering samples

    A growing interest in three-dimensional cellular systems has raised new challenges for light microscopy. The fundamental difficulty is the tendency of the optical field to scramble when interacting with turbid media, leading to low-contrast images. In this work, we outline the development of an instrument that uses broadband optical fields in conjunction with phase-shifting interferometry to extract high-resolution and high-contrast structures from otherwise cloudy images. We construct our system from a differential interference contrast (DIC) microscope, demonstrating the new modality in transmission and reflection geometries. We call this modality Gradient Light Interference Microscopy (GLIM), as the image measures the gradient of the object's scattering potential. To facilitate complex experiments, we develop high-throughput acquisition software and propose several ways to analyze this new kind of data using deep convolutional neural networks. This new proposal, termed phase imaging with computational specificity (PICS), allows for non-destructive yet chemically motivated annotation of microscopy images. The results presented in this dissertation provide templates that are readily extendible to other quantitative phase imaging modalities.
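
    As an illustration of the phase-shifting step, the following is a textbook four-frame estimator that recovers a wrapped phase from intensity frames acquired at phase offsets of 0, pi/2, pi and 3*pi/2. It is a generic sketch of phase-shifting interferometry, not the acquisition software developed in the dissertation.

```python
# Textbook four-step phase-shifting estimator (wrapped phase from four frames).
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Return the wrapped phase from intensities at shifts 0, pi/2, pi, 3*pi/2."""
    return np.arctan2(i3 - i1, i0 - i2)

# Toy usage with a synthetic phase map
phi = np.linspace(0, 2 * np.pi, 256)[None, :] * np.ones((256, 1))
frames = [1 + 0.5 * np.cos(phi + k * np.pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)   # wrapped estimate of phi
```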

    Turbulence: Numerical Analysis, Modelling and Simulation

    The problem of accurate and reliable simulation of turbulent flows is a central and intractable challenge that crosses disciplinary boundaries. As the need for accuracy increases and applications expand beyond flows for which extensive data are available for calibration, the importance of a sound mathematical foundation that addresses the needs of practical computing grows. This Special Issue is directed at this crossroads of rigorous numerical analysis, the physics of turbulence, and the practical needs of turbulent flow simulations. It seeks papers that provide a broad understanding of the current status of the problem considered and of the open problems that constitute further steps.

    Aeronautical engineering: A continuing bibliography with indexes (supplement 266)

    This bibliography lists 645 reports, articles, and other documents introduced into the NASA scientific and technical information system in May 1991. Subject coverage includes: design, construction and testing of aircraft and aircraft engines; aircraft components, equipment and systems; ground support systems; and theoretical and applied aspects of aerodynamics and general fluid dynamics.

    Data Acquisition Applications

    Data acquisition systems have numerous applications. This book, comprising 13 chapters, is divided into three sections: Industrial applications, Medical applications and Scientific experiments. The chapters are written by experts from around the world, and the targeted audience includes professionals who design or research data acquisition systems. Faculty members and graduate students could also benefit from the book.