Pattern recognition systems design on parallel GPU architectures for breast lesions characterisation employing multimodality images
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London. The aim of this research was to address the computational complexity of designing multimodality Computer-Aided Diagnosis (CAD) systems for characterising breast lesions, by harnessing the general-purpose computational potential of consumer-level Graphics Processing Units (GPUs) through parallel programming methods. The complexity of designing such systems lies in the increased dimensionality of the problem, due to the multiple imaging modalities involved; in the inherent complexity of optimal design methods for securing high precision; and in assessing the performance of the design prior to deployment in a clinical environment, employing unbiased system evaluation methods. For the purposes of this research, a Pattern Recognition (PR) system was designed to provide the highest possible precision by programming in parallel the multiprocessors of NVIDIA GPU cards (GeForce 8800GT or 580GTX), using the CUDA programming framework and C++. The PR system was built around the Probabilistic Neural Network (PNN) classifier, and its performance was evaluated by the re-substitution method, for estimating the system's highest accuracy, and by the external cross-validation method, for assessing the PR system's unbiased accuracy on new data, "unseen" by the system. Data comprised images of patients with histologically verified (benign or malignant) breast lesions who underwent both ultrasound (US) and digital mammography (DM). Lesions were outlined on the images by an experienced radiologist, and textural features were calculated. Regarding breast lesion classification, the accuracies for discriminating malignant from benign lesions were 85.5% using US features alone, 82.3% employing DM features alone, and 93.5% combining US and DM features. Mean accuracy on new "unseen" data for the combined US and DM features was 81%.
Those classification accuracies were about 10% higher than the accuracies achieved on a single CPU using sequential programming methods, and the design process was 150-fold faster. In addition, benign lesions were found to be smoother, more homogeneous, and to contain larger structures. Additionally, the PR-system design was adapted to tackle other medical problems, as proof of its generalisation. These included classification of rare brain tumours (achieving 78.6% overall accuracy (OA) and 73.8% estimated generalisation accuracy (GA), with 267-fold design acceleration), discrimination of patients with micro-ischemic and multiple sclerosis lesions (90.2% OA and 80% GA, with 32-fold design acceleration), classification of normal and pathological knee cartilages (93.2% OA and 89% GA, with 257-fold design acceleration), and separation of low- from high-grade laryngeal cancer cases (93.2% OA and 89% GA, with 130-fold design acceleration). The proposed PR system improves breast-lesion discrimination accuracy; it may be redesigned on site when new verified data are incorporated in its depository, and it may serve as a second-opinion tool in a clinical environment.
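At the core of the PR system described above is the Probabilistic Neural Network, which scores each class by summing Gaussian kernel responses over that class's training exemplars. The following is a minimal CPU sketch of that classification rule; the function name and the smoothing parameter `sigma` are illustrative, and the thesis parallelises this computation over GPU multiprocessors rather than looping as here.

```python
import numpy as np

def pnn_classify(X_train, y_train, X_test, sigma=0.5):
    """Probabilistic Neural Network: assign each test sample to the class
    whose training exemplars give the largest summed Gaussian kernel
    response (a Parzen-window estimate of the class-conditional density)."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        # squared Euclidean distances to every training exemplar
        d2 = np.sum((X_train - x) ** 2, axis=1)
        k = np.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian kernel responses
        # mean response per class approximates the class density at x
        scores = [k[y_train == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)
```

Because every kernel evaluation is independent, this pattern-layer computation maps naturally onto the GPU parallelism the thesis exploits.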
Knowledge-based in silico models and dataset for the comparative evaluation of mammography AI for a range of breast characteristics, lesion conspicuities and doses
To generate evidence regarding the safety and efficacy of artificial
intelligence (AI) enabled medical devices, AI models need to be evaluated on a
diverse population of patient cases, some of which may not be readily
available. We propose an evaluation approach for testing medical imaging AI
models that relies on in silico imaging pipelines in which stochastic digital
models of human anatomy (in object space) with and without pathology are imaged
using a digital replica imaging acquisition system to generate realistic
synthetic image datasets. Here, we release M-SYNTH, a dataset of cohorts with
four breast fibroglandular density distributions imaged at different exposure
levels using Monte Carlo x-ray simulations with the publicly available Virtual
Imaging Clinical Trial for Regulatory Evaluation (VICTRE) toolkit. We utilize
the synthetic dataset to analyze AI model performance and find that model
performance decreases with increasing breast density and increases with higher
mass density, as expected. As exposure levels decrease, AI model performance
drops, with the highest performance achieved at exposure levels lower than the
nominal recommended dose for the breast type. Comment: NeurIPS 2023 Datasets and Benchmarks Track
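The cohort-wise evaluation described above (performance stratified by breast density and exposure level) can be sketched with a rank-based AUC computed per cohort. This is an illustrative sketch only: the abstract does not specify AUC as the metric, and the function names and cohort labels are assumptions.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def cohort_auc(scores, labels, density, exposure):
    """Stratify AUC by (fibroglandular density, exposure level) cohort,
    mirroring the per-cohort analysis of the synthetic dataset."""
    results = {}
    for d in np.unique(density):
        for e in np.unique(exposure):
            m = (density == d) & (exposure == e)
            if m.any():
                results[(d, e)] = auc(scores[m], labels[m])
    return results
```

Plotting these per-cohort values against exposure would reproduce the kind of trend analysis the abstract reports.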
NiftyNet: a deep-learning platform for medical imaging
Medical image analysis and computer-assisted intervention problems are
increasingly being addressed with deep-learning-based solutions. Established
deep-learning platforms are flexible but do not provide specific functionality
for medical image analysis and adapting them for this application requires
substantial implementation effort. Thus, there has been substantial duplication
of effort and incompatible infrastructure developed across many research
groups. This work presents the open-source NiftyNet platform for deep learning
in medical imaging. The ambition of NiftyNet is to accelerate and simplify the
development of these solutions, and to provide a common mechanism for
disseminating research outputs for the community to use, adapt and build upon.
NiftyNet provides a modular deep-learning pipeline for a range of medical
imaging applications including segmentation, regression, image generation and
representation learning applications. Components of the NiftyNet pipeline
including data loading, data augmentation, network architectures, loss
functions and evaluation metrics are tailored to, and take advantage of, the
idiosyncrasies of medical image analysis and computer-assisted intervention.
NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D
and 3D images and computational graphs by default.
We present 3 illustrative medical image analysis applications built using
NiftyNet: (1) segmentation of multiple abdominal organs from computed
tomography; (2) image regression to predict computed tomography attenuation
maps from brain magnetic resonance images; and (3) generation of simulated
ultrasound images for specified anatomical poses.
NiftyNet enables researchers to rapidly develop and distribute deep learning
solutions for segmentation, regression, image generation and representation
learning applications, or extend the platform to new applications. Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6 figures; update includes additional applications, updated author list and formatting for journal submission
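The modular pipeline described above (interchangeable data loading, augmentation, network, loss, and metric components) can be pictured abstractly as follows. This is a generic sketch of the design pattern, not NiftyNet's actual API; all names are illustrative.

```python
import numpy as np

class Pipeline:
    """Illustrative modular training pipeline in the spirit of NiftyNet's
    design: each stage is an interchangeable component, so swapping a loss
    or an augmentation does not disturb the rest of the pipeline."""
    def __init__(self, loader, augment, network, loss):
        self.loader, self.augment = loader, augment
        self.network, self.loss = network, loss

    def step(self):
        image, target = self.loader()       # data loading component
        image = self.augment(image)         # augmentation component
        pred = self.network(image)          # network component
        return self.loss(pred, target)      # loss/metric component

# toy components standing in for real ones (a CNN, a NIfTI reader, ...)
loader = lambda: (np.ones((4, 4)), np.zeros((4, 4)))
augment = lambda img: np.flip(img, axis=0)   # e.g. a spatial flip
network = lambda img: img * 0.5              # stand-in for a trained model
mse = lambda p, t: float(np.mean((p - t) ** 2))

pipe = Pipeline(loader, augment, network, mse)
```

Each `step()` call yields a scalar loss; segmentation, regression, and generation applications differ only in which components are plugged in.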
GPU-based ultra-fast direct aperture optimization for online adaptive radiation therapy
Online adaptive radiation therapy (ART) has great promise to significantly
reduce normal tissue toxicity and/or improve tumor control through real-time
treatment adaptations based on the current patient anatomy. However, the major
technical obstacle for clinical realization of online ART, namely the inability
to achieve real-time efficiency in treatment re-planning, has yet to be solved.
To overcome this challenge, this paper presents our work on the implementation
of an intensity modulated radiation therapy (IMRT) direct aperture optimization
(DAO) algorithm on graphics processing unit (GPU) based on our previous work on
CPU. We formulate the DAO problem as a large-scale convex programming problem,
and use an exact method called column generation approach to deal with its
extremely large dimensionality on GPU. Five 9-field prostate and five 5-field
head-and-neck IMRT clinical cases, with 5×5 mm² beamlet size and
2.5×2.5×2.5 mm³ voxel size, were used to evaluate our algorithm on
GPU. It takes only 0.7–2.5 seconds for our implementation to generate optimal
treatment plans using 50 MLC apertures on an NVIDIA Tesla C1060 GPU card. Our
work has therefore solved a major problem in developing ultra-fast
(re-)planning technologies for online ART.
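The column generation approach mentioned above alternates a pricing step (construct one new aperture that can reduce the objective) with a master problem (re-optimise the weights of all apertures found so far). The following toy sketch illustrates that loop on a least-squares dose objective; it omits MLC deliverability constraints entirely, and all names, parameters, and the projected-gradient master solver are illustrative, not the paper's implementation.

```python
import numpy as np

def column_generation_dao(D, d_target, n_apertures=5, iters=2000, lr=0.01):
    """Toy column-generation loop for direct aperture optimization.
    D: (n_voxels, n_beamlets) dose-deposition matrix; d_target: prescribed dose.
    Pricing: open every beamlet whose objective gradient is negative
    (MLC deliverability constraints are omitted for brevity).
    Master: projected gradient for nonnegative aperture weights."""
    n_vox = D.shape[0]
    apertures = []
    A = np.zeros((n_vox, 0))               # columns = dose from each aperture
    weights = np.zeros(0)
    for _ in range(n_apertures):
        dose = A @ weights
        g = D.T @ (dose - d_target)        # gradient w.r.t. beamlet intensities
        new_ap = (g < 0).astype(float)     # pricing: beamlets that lower the objective
        if not new_ap.any():
            break                          # no improving aperture remains
        apertures.append(new_ap)
        A = np.column_stack([A, D @ new_ap])
        weights = np.zeros(A.shape[1])
        for _ in range(iters):             # master: nonnegative least squares
            weights -= lr * (A.T @ (A @ weights - d_target))
            weights = np.maximum(weights, 0.0)
    return apertures, weights, A @ weights
```

In the paper both the pricing and master steps are the expensive, highly parallel parts (dose and gradient evaluations over all voxels and beamlets), which is what makes the method amenable to GPU acceleration.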
Real-time Knowledge-based Fuzzy Logic Model for Soft Tissue Deformation
In this research, an improved mass spring model (MSM) is presented to simulate human liver deformation. The underlying MSM is redesigned so that fuzzy knowledge-based approaches determine the stiffness values. Results show that the fuzzy approaches are in very good agreement with the benchmark model. The novelty of this research is that, for liver deformation in particular, no specific contributions exist in the literature reporting on a real-time knowledge-based fuzzy MSM for liver deformation.
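The idea of replacing fixed spring constants with fuzzy-determined stiffness can be sketched as follows: membership functions over the local strain blend a "soft" and a "stiff" rule, and the defuzzified stiffness drives an ordinary mass-spring update. All rule shapes, constants, and function names here are illustrative assumptions, not the paper's tuned model.

```python
import numpy as np

def fuzzy_stiffness(strain, k_soft=50.0, k_stiff=500.0):
    """Toy fuzzy rule base: blend 'soft' and 'stiff' spring constants by a
    triangular membership of the local strain (all values illustrative)."""
    mu_stiff = np.clip(strain / 0.2, 0.0, 1.0)   # degree of 'large strain'
    mu_soft = 1.0 - mu_stiff
    # weighted-average (Sugeno-style) defuzzification
    return (mu_soft * k_soft + mu_stiff * k_stiff) / (mu_soft + mu_stiff)

def msm_step(x, v, rest_len, dt=1e-3, mass=0.01, damping=0.5):
    """One explicit-Euler step of a 1-D chain mass-spring model whose
    per-spring stiffness is set by the fuzzy rule base above."""
    strain = np.abs(np.diff(x)) / rest_len - 1.0
    k = fuzzy_stiffness(np.abs(strain))
    f_spring = k * (np.diff(x) - rest_len)  # force on the left node of each spring
    f = np.zeros_like(x)
    f[:-1] += f_spring
    f[1:] -= f_spring
    v = v + dt * (f / mass - damping * v)
    return x + dt * v, v
```

Because each spring's stiffness is a cheap function evaluation rather than a finite-element solve, this kind of model retains the real-time performance the abstract emphasises.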
A Survey on Nature-Inspired Medical Image Analysis: A Step Further in Biomedical Data Integration
Towards Real-time Remote Processing of Laparoscopic Video
Laparoscopic surgery is a minimally invasive technique in which surgeons insert a small video camera into the patient's body to visualize internal organs and use small tools to perform procedures. However, the benefit of small incisions comes with the disadvantage of limited visualization of subsurface tissues. Image-guided surgery (IGS) uses pre-operative and intra-operative images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic system is the daVinci-si robotic surgical vision system. Its video streams generate approximately 360 megabytes of data per second, demonstrating a trend toward increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this large stream of data in real time on a bedside PC (a single- or dual-node setup) may be challenging, and a high-performance computing (HPC) environment is not typically available at the point of care. To process this data on remote HPC clusters at the typical rate of 30 frames per second (fps), each 11.9 MB (1080p) video frame must be processed by a server and returned within the time this frame is displayed, or 1/30th of a second. The ability to acquire, process, and visualize data in real time is essential for the performance of complex tasks as well as for minimizing risk to the patient. We have implemented and compared the performance of compression, segmentation, and registration algorithms on Clemson's Palmetto supercomputer using dual Nvidia graphics processing units (GPUs) per node and the compute unified device architecture (CUDA) programming model. We developed three separate applications that run simultaneously: video acquisition, image processing, and video display. The image processing application allows several algorithms to run simultaneously on different cluster nodes and transfers images through the message passing interface (MPI).
Our segmentation and registration algorithms achieved acceleration factors of around 2 and 8 times, respectively. To achieve a higher frame rate, we also resized images to reduce the overall processing time. As a result, using a high-speed network to access computing clusters with GPUs to run these algorithms in parallel will improve surgical procedures by providing real-time processing of laparoscopic image data.
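The real-time constraint quoted above reduces to simple arithmetic: the per-frame round-trip budget at 30 fps and the sustained stream rate follow from the frame size. The pixel layout below (3 channels at 16 bits per pixel) is an assumption reverse-engineered to match the abstract's 11.9 MB figure, not a documented property of the daVinci-si system.

```python
# Back-of-the-envelope check of the figures quoted in the abstract.
bytes_per_frame = 1920 * 1080 * 3 * 2   # assumed 3 channels x 16 bits/pixel
frame_mib = bytes_per_frame / 2**20     # ~11.9 MiB per 1080p frame
fps = 30
budget_ms = 1000.0 / fps                # ~33.3 ms round-trip budget per frame
stream_mib_s = frame_mib * fps          # ~356 MiB/s, i.e. roughly 360 MB/s
```

Any remote processing path (network transfer out, GPU computation, transfer back) must therefore fit inside roughly 33 ms per frame to keep pace with display.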
Ono: an open platform for social robotics
In recent times, the focal point of research in robotics has shifted from industrial robots toward robots that interact with humans in an intuitive and safe manner. This evolution has resulted in the subfield of social robotics, which pertains to robots that function in a human environment and that can communicate with humans in an intuitive way, e.g. with facial expressions. Social robots have the potential to impact many different aspects of our lives, but one particularly promising application is the use of robots in therapy, such as the treatment of children with autism. Unfortunately, many of the existing social robots are suited neither for practical use in therapy nor for large-scale studies, mainly because they are expensive, one-of-a-kind robots that are hard to modify to suit a specific need. We created Ono, a social robotics platform, to tackle these issues. Ono is composed entirely of off-the-shelf components and cheap materials, and can be built at a local FabLab at a fraction of the cost of other robots. Ono is also entirely open source, and its modular design further encourages modification and reuse of parts of the platform.