3,306 research outputs found

    Three-dimensional medical imaging: Algorithms and computer systems

    This paper presents an introduction to the field of three-dimensional medical imaging. It presents medical imaging terms and concepts, summarizes the basic operations performed in three-dimensional medical imaging, and describes sample algorithms for accomplishing these operations. The paper contains a synopsis of the architectures and algorithms used in eight machines to render three-dimensional medical images, with particular emphasis on their distinctive contributions. It compares the performance of the machines along several dimensions, including image resolution, elapsed time to form an image, imaging algorithms used in the machine, and the degree of parallelism in the architecture. The paper concludes with general trends for future developments in this field and references on three-dimensional medical imaging.

    The History, Development and Impact of Computed Imaging in Neurological Diagnosis and Neurosurgery: CT, MRI, and DTI

    A steady series of advances in physics, mathematics, computers and clinical imaging science has progressively transformed the diagnosis and treatment of neurological and neurosurgical disorders in the 115 years between the discovery of the X-ray and the advent of high-resolution diffusion-based functional MRI. The story of this progress in human terms, with its battles for priority, forgotten advances, competing claims, public battles for Nobel Prizes, and patent priority litigation, brings alive the human drama of this remarkable collective achievement in computed medical imaging.

    First results from the LUCID-Timepix spacecraft payload onboard the TechDemoSat-1 satellite in Low Earth Orbit

    The Langton Ultimate Cosmic ray Intensity Detector (LUCID) is a payload onboard the TechDemoSat-1 satellite, used to study the radiation environment in Low Earth Orbit (~635 km). LUCID operated from 2014 to 2017, collecting over 2.1 million frames of radiation data from its five onboard Timepix detectors. LUCID is one of the first uses of Timepix detector technology in open space, and the data provide useful insight into the performance of this technology in new environments. It provides high-sensitivity imaging measurements of the mixed radiation field, with a wide dynamic range in terms of spectral response, particle type and direction. The data have been analysed using computing resources provided by GridPP, with a new machine learning algorithm built on the TensorFlow framework. This algorithm provides a new approach to processing Medipix data: trained on a set of human-labelled tracks, it achieves greater particle classification accuracy than other algorithms. For managing the LUCID data, we have developed an online platform called Timepix Analysis Platform at School (TAPAS), which provides a swift and simple way for users to analyse data they collect using Timepix detectors, from both LUCID and other experiments. We also present some possible future uses of the LUCID data and Medipix detectors in space. Comment: Accepted for publication in Advances in Space Research
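
    As a rough illustration of the track-classification approach described above, the following is a minimal sketch of a small TensorFlow/Keras convolutional network trained on human-labelled Timepix frames. The frame size, the four example particle classes, and all layer sizes are assumptions made for illustration; this is not the actual LUCID/TAPAS implementation.

    # Hypothetical sketch: a small CNN that classifies 256x256 Timepix pixel
    # frames into particle-track classes, trained on human-labelled tracks.
    # Class count, input size and layer sizes are illustrative assumptions.
    import tensorflow as tf

    NUM_CLASSES = 4  # assumed number of track categories (e.g. alpha, beta, gamma, muon)

    def build_track_classifier(input_shape=(256, 256, 1)):
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=input_shape),
            tf.keras.layers.Conv2D(16, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # Usage (hypothetical arrays of labelled frames and integer labels):
    # model = build_track_classifier()
    # model.fit(labelled_frames, labels, epochs=10)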

    Ultrasonography in the assessment of tendon disease: methodology and diagnosis


    InterNAV3D: A Navigation Tool for Robot-Assisted Needle-Based Intervention for the Lung

    Lung cancer is one of the leading causes of cancer deaths in North America. Recent advances in cancer treatment techniques can treat cancerous tumors but require a real-time imaging modality to provide intraoperative assistive feedback. Ultrasound (US) imaging is one such modality. Its application to the lungs has been limited, however, because of the deterioration of US image quality caused by the presence of air in the lungs; recent work has shown that appropriate lung deflation can improve the quality sufficiently to enable intraoperative, US-guided, robotics-assisted techniques to be used. The work described in this thesis focuses on this approach. The thesis describes a project undertaken at Canadian Surgical Technologies and Advanced Robotics (CSTAR) that utilizes image processing techniques to further enhance US images and implements an advanced 3D virtual visualization software approach. The application considered is minimally invasive lung cancer treatment using procedures such as brachytherapy and microwave ablation, taking advantage of the accuracy and teleoperation capabilities of surgical robots to gain higher dexterity and more precise control over the therapy tools (needles and probes). A number of modules and widgets are developed and explained that improve the visibility of the physical features of interest in the treatment and help the clinician achieve more reliable and accurate control of the treatment. Finally, the developed tools are validated with extensive experimental evaluations, and future developments are suggested to enhance the scope of the applications.

    Sixth Annual Users' Conference

    Conference papers and presentation outlines addressing the use of the Transportable Applications Executive (TAE) and its various applications programs are compiled. Emphasis is given to the design of the user interface and the image processing workstation in general. Alternate ports of TAE and TAE subsystems are also covered.

    SkullGAN: Synthetic Skull CT Generation with Generative Adversarial Networks

    Deep learning offers potential for various healthcare applications involving the human skull but requires extensive datasets of curated medical images. To overcome this challenge, we propose SkullGAN, a generative adversarial network (GAN), to create large datasets of synthetic skull CT slices, reducing reliance on real images and accelerating the integration of machine learning into healthcare. In our method, CT slices of 38 subjects were fed to SkullGAN, a neural network comprising over 200 million parameters. The synthetic skull images generated were evaluated based on three quantitative radiological features: skull density ratio (SDR), mean thickness, and mean intensity. They were further analyzed using t-distributed stochastic neighbor embedding (t-SNE) and by applying the SkullGAN discriminator as a classifier. The results showed that SkullGAN-generated images demonstrated key quantitative radiological features similar to those of real skulls. A more definitive analysis applied the SkullGAN discriminator as a classifier: it classified 56.5% of a test set of real skull images and 55.9% of the SkullGAN-generated images as real (the theoretical optimum being 50%), demonstrating that the SkullGAN-generated skull set is indistinguishable from the real skull set, within the limits of our nonlinear classifier. SkullGAN therefore makes it possible to generate the large numbers of synthetic skull CT segments needed to train neural networks for medical applications involving the human skull, mitigating challenges associated with preparing large, high-quality training datasets, such as access, capital, time, and the need for domain expertise. Comment: The first two authors contributed equally
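
    The discriminator-as-classifier check reported above can be expressed as a short calculation: score real and synthetic slices with the trained discriminator and compare the fraction of each set labelled "real" against the 50% optimum. The sketch below assumes a Keras-style discriminator with sigmoid outputs and an illustrative model path; it is not the authors' code.

    # Hypothetical sketch of the discriminator-as-classifier evaluation.
    # A fraction near 50% for both the real and synthetic sets suggests the
    # synthetic slices are indistinguishable to this (nonlinear) classifier.
    import numpy as np
    import tensorflow as tf  # used only for loading the assumed Keras discriminator

    def fraction_classified_real(discriminator, images, threshold=0.5):
        """Return the fraction of images the discriminator scores above threshold."""
        scores = discriminator.predict(images, verbose=0).ravel()
        return float(np.mean(scores > threshold))

    # Usage (assumed file name and image arrays):
    # discriminator = tf.keras.models.load_model("skullgan_discriminator.h5")
    # real_frac = fraction_classified_real(discriminator, real_ct_slices)
    # fake_frac = fraction_classified_real(discriminator, synthetic_ct_slices)
    # print(f"real: {real_frac:.1%}, synthetic: {fake_frac:.1%} (optimum ~50%)")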