
    An Introductory Module in Medical Image Segmentation for BME Students

    To support recent trends toward the use of patient-specific anatomical models derived from medical imaging data, we present a learning module for the undergraduate BME curriculum that introduces image segmentation, the process of partitioning digital images to isolate specific anatomical features. Five commercially available software packages, ITK-SNAP, 3D Slicer, OsiriX, Mimics, and Amira, were evaluated on perceived learning curve, ease of use, segmentation and rendering tools, special tools, and cost. After selecting the package best suited to a stand-alone course module on medical image segmentation, instructional materials were developed that include a general introduction to imaging, a tutorial guiding students step by step through extracting a skull from a provided stack of CT images, and a culminating assignment in which students extract a different body part from clinical imaging data. The module was implemented in three different engineering courses, reaching more than 150 students, and student achievement of the learning goals was assessed. ITK-SNAP was identified as the best software package for this application because it is free, the easiest to learn, and includes a powerful semi-automated segmentation tool. After completing the developed module based on ITK-SNAP, all students attained sufficient mastery of the image segmentation process to independently apply the technique to extract a new body part from clinical imaging data. This stand-alone module provides a low-cost, flexible way to bring the clinical and industry trends combining medical image segmentation, CAD, and 3D printing into the undergraduate BME curriculum.
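    The core skill the module teaches, isolating a radiodense structure such as the skull from a CT stack, can be sketched in a few lines. The sketch below is an illustrative simplification (plain HU thresholding plus largest-connected-component selection), not ITK-SNAP's actual semi-automated active-contour tool; the threshold value and the toy volume are assumptions:

    ```python
    import numpy as np
    from scipy import ndimage

    def segment_bone(ct_volume, hu_threshold=300):
        """Threshold a CT volume at a bone-level HU value and keep the
        largest connected component: a crude stand-in for the interactive
        segmentation students perform in a tool like ITK-SNAP."""
        mask = ct_volume >= hu_threshold              # bone is radiodense (high HU)
        labels, n = ndimage.label(mask)               # connected-component labelling
        if n == 0:
            return np.zeros_like(mask)
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        return labels == (np.argmax(sizes) + 1)       # largest component = "skull"

    # Toy "CT": a hollow spherical shell of bone-density voxels in soft tissue.
    z, y, x = np.ogrid[-16:16, -16:16, -16:16]
    r = np.sqrt(x**2 + y**2 + z**2)
    ct = np.full((32, 32, 32), 40.0)                  # soft tissue, roughly 40 HU
    ct[(r > 10) & (r < 13)] = 1000.0                  # "skull" shell, roughly 1000 HU
    skull = segment_bone(ct)
    ```

    On real clinical data the threshold would be tuned per scanner and per structure, which is exactly the kind of judgement the module's culminating assignment exercises.
    
    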


    DEEP LEARNING IN COMPUTER-ASSISTED MAXILLOFACIAL SURGERY


    AUTOMATED MIDLINE SHIFT DETECTION ON BRAIN CT IMAGES FOR COMPUTER-AIDED CLINICAL DECISION SUPPORT

    Midline shift (MLS), the displacement of the brain’s midline from its normal symmetric position due to illness or injury, is an important index for clinicians assessing the severity of traumatic brain injury (TBI). In this dissertation, an automated computer-aided midline shift estimation system is proposed. First, a CT slice selection algorithm (SSA) automatically selects a subset of appropriate CT slices from a large number of raw images for MLS detection. Next, ideal midline detection is implemented based on skull bone anatomical features and a global rotation assumption. For actual midline detection, a window selection algorithm (WSA) is first applied to confine the region of interest; the variational level set method then segments the image and extracts the ventricle contours. With a ventricle identification algorithm (VIA), the position of the actual midline is detected from the identified right and left lateral ventricle contours. Finally, the brain midline shift is calculated from the positions of the detected ideal and actual midlines. One important application of midline shift in clinical decision making is estimating intracranial pressure (ICP); ICP monitoring is a standard procedure in the care of severe TBI patients. An automated ICP level prediction model based on machine learning is also proposed in this work. Multiple features, including midline shift, intracranial air cavities, ventricle size, texture patterns, and blood amount, are used in ICP level prediction. Finally, the results are evaluated to assess the effectiveness of the proposed method for ICP level prediction.
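    The final MLS computation described above reduces to a simple distance once the two midlines are located. The sketch below shows only that last step under a strong simplifying assumption (head roughly upright, ideal midline taken as the line halfway between the leftmost and rightmost skull voxels); the dissertation's SSA/WSA/level-set pipeline for finding the actual midline is far more elaborate and is not reproduced here:

    ```python
    import numpy as np

    def ideal_midline_x(skull_mask):
        """Estimate the ideal midline column as the midpoint between the
        leftmost and rightmost skull voxels on an axial slice (assumes no
        head rotation; the dissertation corrects for rotation separately)."""
        cols = np.where(skull_mask.any(axis=0))[0]
        return (cols.min() + cols.max()) / 2.0

    def midline_shift_mm(ideal_x, actual_x, pixel_spacing_mm):
        """MLS = displacement between ideal and actual midlines, in mm."""
        return abs(actual_x - ideal_x) * pixel_spacing_mm

    # Toy slice: skull walls at columns 10 and 90; suppose the ventricle-based
    # actual midline was detected at column 53, with 0.5 mm pixels.
    slice_mask = np.zeros((100, 100), dtype=bool)
    slice_mask[:, 10] = True
    slice_mask[:, 90] = True
    ix = ideal_midline_x(slice_mask)         # midpoint of columns 10 and 90 -> 50.0
    mls = midline_shift_mm(ix, 53.0, 0.5)    # 3 px * 0.5 mm/px = 1.5 mm
    ```
    
    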

    A 3D environment for surgical planning and simulation

    The use of Computed Tomography (CT) images and their three-dimensional (3D) reconstruction has spread over the last decade in implantology and surgery. Acquired CT datasets are commonly handled by dedicated software that provides a working context for preoperative planning. Such software exploits image processing and computer graphics techniques to provide the information needed to operate safely and to minimize the surgeon’s possible errors during the operation. However, most of these systems have shortcomings that compromise the precision and added safety their use should provide. The research carried out during my PhD concerned the development of optimized software for surgical preoperative planning. To this end, the state of the art was analyzed and its main deficiencies identified. To produce practical solutions, these shortcomings were then studied in one medical field in particular: oral implantology, chosen because of the available support of a pool of implantologists. It emerged that most software systems for oral implantology, which are based on a multi-view approach often accompanied by a 3D rendered model, suffer from the following problems: unreliable measurements computed on misleading views (such as the panoramic view), suboptimal use of the 3D environment, significant planning errors induced by the software’s working context (incorrect cross-sectional planes), and no automatic recognition of fundamental anatomical structures (such as the mandibular canal). A fully 3D approach was therefore defined, and in particular a planning software system in which image processing and computer graphics techniques create a smooth, user-friendly, completely 3D environment for oral implant planning and simulation. 
Interpolation of the axial slices produces a continuous radiographic volume with isotropic voxels, giving a correct working context. Interpolation and texture generation give the freedom to choose, arbitrarily during the planning phase, the cross-sectional plane best suited to correct measurements. The correct orientation of the planned implants is also easily computed by exploiting a radiological mask with radio-opaque markers, worn by the patient during the CT scan, and by reconstructing the cross-sectional images along the preferred directions. The mandibular canal is automatically recognised by a purpose-built adaptive algorithm based on surface extraction and statistical segmentation. To complete the approach, the software was interfaced with an anthropomorphic robot so that the planning can be transferred to a surgical guide, using an appropriate change of coordinates and a physical reference frame embedded in the radiological stent. Finally, every software feature was evaluated and validated, statistically or clinically; the precision achieved outperforms that reported in the literature.
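    The isotropic-voxel step mentioned above is what makes measurements valid along arbitrary oblique planes: axial CT stacks typically have coarser spacing between slices than within them. A minimal sketch of such resampling, using `scipy.ndimage.zoom` with trilinear interpolation as a stand-in for whatever interpolation scheme the thesis actually implements (the spacings and target resolution here are assumptions):

    ```python
    import numpy as np
    from scipy import ndimage

    def resample_isotropic(volume, spacing_zyx, target_mm=1.0):
        """Interpolate an axial CT stack onto an isotropic grid so that
        distances measured along any cross-sectional plane use the same
        voxel size in every direction."""
        factors = [s / target_mm for s in spacing_zyx]
        return ndimage.zoom(volume, factors, order=1)  # trilinear interpolation

    # Toy stack: 10 slices 2.0 mm apart, 1.0 mm in-plane pixels.
    vol = np.random.rand(10, 64, 64)
    iso = resample_isotropic(vol, spacing_zyx=(2.0, 1.0, 1.0), target_mm=1.0)
    # Slice axis is doubled (2.0 mm -> 1.0 mm); in-plane size is unchanged.
    ```
    
    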

    Imaging : making the invisible visible : proceedings of the symposium, 18 May 2000, Technische Universiteit Eindhoven


    Automatic Construction of Immobilisation Masks for use in Radiotherapy Treatment of Head-and-Neck Cancer

    Immobilisation for patients undergoing brain or head-and-neck radiotherapy is normally achieved using Perspex or thermoplastic shells moulded to the patient’s anatomy during a visit to the mould room. The shells are “made to measure”, and the methods currently used to make them require patients to visit the mould room. This visit can be distressing, and some patients find the process particularly unpleasant. In some cases, as treatment progresses, the tumour may shrink and further mould room visits may be needed. Modern manufacturing and rapid prototyping open the possibility of determining the shape of the shells directly from the patient’s CT scan, alleviating the need to make physical moulds from the patient’s head. However, extracting such a surface model remains a challenge and is the focus of this thesis. The aim of this work is to develop an automatic pipeline capable of creating physical models of immobilisation shells directly from CT scans. The work includes an investigation of a number of image segmentation techniques for segmenting the skin/air interface in CT images. To evaluate the developed pipeline quantitatively, we compared the 3D model generated from the CT data to ground truth obtained from 3D laser scans of masks produced by the mould room as part of a clinical trial. This involved automatically removing image artefacts caused by fixation devices from the CT imagery, automatically aligning (registering) two meshes, measuring the degree of similarity between two 3D volumes, and automatically evaluating segmentation accuracy. This thesis has raised and addressed many challenges within this pipeline, and each stage has been examined and evaluated separately. The outcomes of the pipeline as a whole are currently being evaluated in a clinical trial (IRAS ID: 209119, REC Ref.: 16/YH/0485). 
Early results from the trial indicate that the approach is viable.
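    One of the evaluation steps above, measuring the similarity between two 3D volumes, is commonly scored with the Dice coefficient. The sketch below shows that standard metric on toy binary volumes; it is offered as an illustration of the kind of measure involved, not as the specific similarity measure the thesis adopts:

    ```python
    import numpy as np

    def dice(a, b):
        """Dice similarity coefficient between two binary volumes:
        2|A intersect B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
        a, b = a.astype(bool), b.astype(bool)
        inter = np.logical_and(a, b).sum()
        denom = a.sum() + b.sum()
        return 2.0 * inter / denom if denom else 1.0

    # Toy volumes: each marks 32 voxels True, overlapping in 16.
    a = np.zeros((4, 4, 4), dtype=bool); a[:2] = True
    b = np.zeros((4, 4, 4), dtype=bool); b[1:3] = True
    score = dice(a, b)   # 2 * 16 / (32 + 32) = 0.5
    ```

    A segmentation that perfectly matched the laser-scanned ground truth would score 1.0; in practice a threshold on such a score gives an automatic pass/fail criterion for each pipeline stage.
    
    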