
    Evaluating Algorithms Used For Fetal Brain Scan Segmentation

    The goal of this project was to segment a fetal brain scan (fetal scan) using the algorithms provided by the program 3D Slicer. To better understand the hurdles that arise when segmenting a fetal scan, we first look at the segmentation of an adult brain scan, which shows how straightforward brain segmentation is when a high-quality, high-resolution volume with distinct structures is available. We then turn to the segmentation of the fetal scan, examining the algorithms used and the methods followed. Finally, we discuss the outcomes and issues of the fetal scan segmentations and their corresponding algorithms.
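
    Such a workflow can be scripted from 3D Slicer's Python console. The following is a minimal, hypothetical sketch of scripted segmentation: the file name and intensity thresholds are placeholders, and the Threshold effect stands in for whichever of the evaluated algorithms is being run.

```python
# Minimal sketch of scripted segmentation in 3D Slicer's Python console.
# "fetal_scan.nrrd" and the threshold range are placeholders, not project data.
import slicer

volumeNode = slicer.util.loadVolume("fetal_scan.nrrd")

segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
segmentationNode.CreateDefaultDisplayNodes()
segmentationNode.SetReferenceImageGeometryParameterFromVolumeNode(volumeNode)
segmentationNode.GetSegmentation().AddEmptySegment("brain")

# Standard Segment Editor setup for scripted use
segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
segmentEditorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentEditorNode")
segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
segmentEditorWidget.setSegmentationNode(segmentationNode)
segmentEditorWidget.setSourceVolumeNode(volumeNode)  # setMasterVolumeNode in older Slicer

segmentEditorWidget.setActiveEffectByName("Threshold")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("MinimumThreshold", "60")   # placeholder intensity range
effect.setParameter("MaximumThreshold", "255")
effect.self().onApply()
```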

    Automated modeling of brain bioelectric activity within the 3D Slicer environment

    Electrocorticography (ECoG), or intracranial electroencephalography (iEEG), monitors electric potential directly on the surface of the brain and, when paired with numerical modeling, can be used to inform treatment planning for epilepsy surgery. For solving the inverse problem in epilepsy seizure-onset localization, accurate solution of the iEEG forward problem is critical, which in turn requires an accurate representation of the patient's brain geometry and tissue electrical conductivity. In this study, we present an automatic framework for constructing the brain volume-conductor model for solving the iEEG forward problem and visualizing the brain bioelectric field on a deformed patient-specific brain model within the 3D Slicer environment. We solve the iEEG forward problem on the predicted postoperative geometry using the finite element method (FEM), which accounts for patient-specific inhomogeneity and anisotropy of tissue conductivity. We use an epilepsy case study to illustrate the workflow of our framework, developed and integrated within 3D Slicer.
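
    The governing physics can be illustrated with a toy example. The sketch below is not the authors' FEM implementation: it solves the quasi-static volume-conductor equation div(sigma grad phi) = -I by finite differences on a 2D grid with homogeneous, isotropic conductivity, whereas the paper's solver uses patient-specific, inhomogeneous and anisotropic conductivity on a 3D finite-element mesh.

```python
# Toy 2D finite-difference forward solve of div(sigma * grad(phi)) = -I
# with constant conductivity and a grounded (phi = 0) boundary.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, h, sigma = 66, 1e-3, 0.33        # grid points per side, spacing (m), S/m
m = n - 2                           # interior unknowns per side

I_src = np.zeros((m, m))            # dipole-like source/sink pair (arbitrary units)
I_src[20, 20], I_src[44, 44] = 1.0, -1.0

# Standard 5-point Laplacian on the interior grid via Kronecker products
T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(m, m))
A = sp.kron(sp.identity(m), T) + sp.kron(T, sp.identity(m))

# sigma * (A phi) / h^2 = -I  =>  A phi = -I * h^2 / sigma
phi = spla.spsolve(A.tocsr(), -I_src.ravel() * h**2 / sigma)
phi = phi.reshape(m, m)             # potential map, ready for visualization
print("potential range (V):", phi.min(), phi.max())
```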

    Smoothing Module for Optimization Cranium Segmentation Using 3D Slicer

    Anatomy is an essential course in health and medical education, covering the parts of the human body and their functions. Cadavers are the traditional medium medical students use to study anatomy, but limited access to cadavers and their high cost make it necessary to develop alternative anatomical teaching media, one of which is the use of 3D printing to produce anatomical models. Before 3D printing the cranium, a segmentation step is necessary, and the segmentation result is often not good enough and contains a lot of noise. The purpose of this research is to optimize a 3D cranium model based on DICOM (Digital Imaging and Communications in Medicine) data processing using the smoothing modules in 3D Slicer. The method is to process the cranium DICOM data in 3D Slicer while varying the five types of smoothing modules. With default parameters, the fill holes and median modules give better results than the others. Kernel size variations were then performed for the fill holes and median modules: fill holes gives optimal segmentation results with a kernel size of 3 mm, and median with 5 mm.
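
    These smoothing methods are scriptable through 3D Slicer's Segment Editor. The snippet below is a hedged sketch of applying the paper's two best-performing methods with their reported kernel sizes; the volume file name and segment name are placeholders, and in the Smoothing effect "fill holes" corresponds to morphological closing.

```python
# Sketch: apply "fill holes" (morphological closing, 3 mm) and median (5 mm)
# smoothing from 3D Slicer's Segment Editor. File/segment names are placeholders.
import slicer

volumeNode = slicer.util.loadVolume("cranium_ct.nrrd")
segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
segmentationNode.CreateDefaultDisplayNodes()
segmentationNode.SetReferenceImageGeometryParameterFromVolumeNode(volumeNode)
segmentationNode.GetSegmentation().AddEmptySegment("cranium")

segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
segmentEditorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentEditorNode")
segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
segmentEditorWidget.setSegmentationNode(segmentationNode)
segmentEditorWidget.setSourceVolumeNode(volumeNode)

for method, kernel_mm in [("MORPHOLOGICAL_CLOSING", 3), ("MEDIAN", 5)]:
    segmentEditorWidget.setActiveEffectByName("Smoothing")
    effect = segmentEditorWidget.activeEffect()
    effect.setParameter("SmoothingMethod", method)
    effect.setParameter("KernelSizeMm", str(kernel_mm))
    effect.self().onApply()
```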

    Advanced Imaging Techniques for Point-Measurement Analysis of Pharmaceutical Materials

    Drugs are an essential element protecting human lives from many diseases such as cancer, diabetes, and cardiovascular disorders. One of the highlights of drug development in recent years is the establishment of rational drug design: a collection of multi-disciplinary approaches that, at their core, focus on designing molecules with specific properties for identified targets and for biomolecules with known functional roles and structural information. The candidate molecules then go through a series of examinations to characterize their physicochemical properties, and an iterative process is used to improve the design of the drug to achieve desirable attributes. The time-consuming and highly expensive nature of drug development constantly calls for new analytical techniques with higher throughput, faster analysis, richer chemical and structural information, and lower risk and cost. Conventional analytical methods for pharmaceutical materials, such as X-ray diffraction analysis and Raman spectroscopy, often suffer from prolonged measurement times; in many cases, identifying regions of interest within the sample is non-trivial in itself. Nonlinear optical imaging techniques, including second harmonic generation (SHG) microscopy and two-photon excited ultraviolet fluorescence (TPE-UVF) microscopy, were developed as fast, real-time, and non-destructive methods for selective identification and characterization of crystalline materials present in pharmaceutical samples. These techniques were integrated with synchrotron X-ray diffraction analysis and Raman spectroscopy to significantly reduce the overall measurement time of these structure-characterization techniques. Meanwhile, with the increased speed of measurement, the amount of experimental data acquired per unit time has also drastically increased, and the rate at which data are analyzed, digested, and interpreted is becoming the bottleneck in data-driven decision-making. Novel electronics that collect data only at the most information-rich time points were employed to significantly increase the signal-to-noise ratio (SNR) during acquisition, reducing the total amount of data needed for material characterization. Advanced sampling algorithms that reduce the number of measurements required for perfect reconstruction of the data space, automated programs for data acquisition and analysis, and efficient machine-learning-based data analysis algorithms were developed to accelerate data processing for nonlinear optical imaging analysis, Raman spectrum processing, and X-ray diffraction indexing.
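
    The measurement-reduction idea can be illustrated generically (this is not the author's algorithm): a signal that is sparse can be recovered from far fewer random linear measurements than its length. The sketch below uses orthogonal matching pursuit from scikit-learn on synthetic data.

```python
# Generic sparse-recovery illustration: recover a k-sparse signal of length n
# from m << n random measurements via orthogonal matching pursuit.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                      # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # k-sparse signal

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix
y = A @ x                                  # the m measurements actually taken

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(A, y)
err = np.linalg.norm(omp.coef_ - x) / np.linalg.norm(x)
print(f"relative reconstruction error: {err:.2e}")
```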

    A Deep Learning-Based Fully Automated Pipeline for Regurgitant Mitral Valve Anatomy Analysis From 3D Echocardiography

    Three-dimensional transesophageal echocardiography (3DTEE) is the recommended imaging technique for the assessment of mitral valve (MV) morphology and lesions in cases of mitral regurgitation (MR) requiring surgical or transcatheter repair. Such assessment is key to thorough intervention planning and to intraprocedural guidance. However, it requires segmentation from 3DTEE images, which is time-consuming, operator-dependent, and often merely qualitative. In the present work, a novel workflow to quantify the patient-specific MV geometry from 3DTEE is proposed. The developed approach relies on a 3D multi-decoder residual convolutional neural network (CNN) with a U-Net architecture for multi-class segmentation of the MV annulus and leaflets. The CNN was trained and tested on a dataset comprising 55 3DTEE examinations of MR-affected patients. After training, the CNN is embedded into a fully automatic, and hence fully repeatable, pipeline that refines the predicted segmentation, detects MV anatomical landmarks, and quantifies MV morphology. The trained 3D CNN achieves an average Dice score of 0.82 ± 0.06, mean surface distance of 0.43 ± 0.14 mm, and 95% Hausdorff distance (HD) of 3.57 ± 1.56 mm before segmentation refinement, outperforming a state-of-the-art baseline residual U-Net architecture, and provides an unprecedented multi-class segmentation of the annulus and the anterior and posterior leaflets. The automatic 3D linear morphological measurements of the annulus and leaflets, specifically diameters and lengths, differ by less than 1.45 mm from ground-truth values and show strong overall agreement with analyses conducted by semi-automated commercial software. The whole process requires minimal user interaction and takes approximately 15 seconds.
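
    For reference, the Dice score reported above measures volumetric overlap between predicted and ground-truth labels. A minimal sketch follows, with an assumed (not stated in the paper) label mapping for the three classes.

```python
# Minimal Dice overlap metric for one class of a multi-class segmentation.
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, label: int) -> float:
    """Dice = 2|P & T| / (|P| + |T|) over voxels of one class label."""
    p, t = pred == label, truth == label
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

# Assumed mapping: 1 = annulus, 2 = anterior leaflet, 3 = posterior leaflet
# scores = [dice_score(pred, truth, c) for c in (1, 2, 3)]
```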

    Software tool for visualization of a probabilistic map of the epileptogenic zone from seizure semiologies

    Around one third of epilepsies are drug-resistant. For these patients, seizures may be reduced or cured by surgically removing the epileptogenic zone (EZ), the portion of the brain giving rise to seizures. If noninvasive data are not sufficiently lateralizing or localizing, the EZ may need to be localized by precise implantation of intracranial electroencephalography (iEEG) electrodes. The choice of iEEG targets is influenced by clinicians' experience and personal knowledge of the literature, which leads to substantial variation in implantation strategies across epilepsy centers. The clinical diagnostic pathway for surgical planning could be supported and standardized by an objective tool that suggests EZ locations based on the outcomes of retrospective clinical cases reported in the literature. We present an open-source software tool that gives clinicians an intuitive, data-driven visualization for inferring the location of the symptomatogenic zone, which may overlap with the EZ. Given a list of seizure semiologies observed in a specific patient, the likely EZ is represented as a probabilistic map overlaid on the patient's images. We demonstrate a case study on retrospective data from a patient treated in our unit who underwent resective epilepsy surgery and achieved one-year seizure freedom. The resected brain structures identified as the EZ location overlapped with the regions highlighted by our tool, demonstrating its potential utility.
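
    Schematically, such a map can be built by combining per-semiology likelihoods over brain regions. The sketch below assumes independence between semiologies and uses invented numbers; the tool's actual statistical model and region parcellation may differ.

```python
# Schematic combination of per-semiology region likelihoods into one
# probabilistic EZ map. All values here are invented for illustration.
import numpy as np

regions = ["temporal_lobe", "insula", "frontal_lobe", "cingulate"]

# Hypothetical P(region | semiology), e.g. pooled from published cases
likelihoods = {
    "epigastric_aura":  np.array([0.55, 0.25, 0.10, 0.10]),
    "oral_automatisms": np.array([0.60, 0.15, 0.15, 0.10]),
}

observed = ["epigastric_aura", "oral_automatisms"]
combined = np.prod([likelihoods[s] for s in observed], axis=0)
combined /= combined.sum()               # normalize to a probability map

for region, p in sorted(zip(regions, combined), key=lambda rp: -rp[1]):
    print(f"{region}: {p:.2f}")
```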

    Open-Full-Jaw: An open-access dataset and pipeline for finite element models of human jaw

    Developing computational models of the human jaw from cone-beam computed tomography (CBCT) scans is time-consuming and labor-intensive. Moreover, quantitative comparison is not attainable in the literature due to the manual tasks involved and the lack of surface/volumetric meshes. We share an open-access repository of 17 patient-specific finite-element (FE) models of human jaws acquired from CBCT scans, together with the pipeline used to generate them. The proposed pipeline minimizes model-generation time and the potential errors caused by human intervention. It takes dense surface meshes as input and produces reduced, conformal surface/volumetric meshes suitable for FE analysis. We have quantified the geometrical variations of the developed models and assessed their accuracy from different aspects: (1) the maximum deviations from the input meshes, (2) the mesh quality, and (3) the simulation results. Our results indicate that the developed computational models are precise and have quality meshes suitable for various FE scenarios. We therefore believe this dataset will pave the way for future population studies.
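
    The first accuracy check, deviation of the simplified FE surface from the dense input mesh, can be sketched with the trimesh library; the file names below are placeholders, not files from the dataset.

```python
# Sketch: maximum/mean deviation of a reduced FE surface mesh from the dense
# input mesh, via nearest-point distances. File names are placeholders.
import trimesh

dense = trimesh.load("jaw_input_dense.stl")     # hypothetical dense input mesh
reduced = trimesh.load("jaw_fe_reduced.stl")    # hypothetical reduced FE mesh

# Distance from each vertex of the reduced mesh to the dense surface
closest, dist, _ = trimesh.proximity.closest_point(dense, reduced.vertices)
print(f"max deviation:  {dist.max():.3f} (mesh units)")
print(f"mean deviation: {dist.mean():.3f} (mesh units)")
```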