A Survey on Artificial Intelligence Techniques for Biomedical Image Analysis in Skeleton-Based Forensic Human Identification
This paper presents the first survey on the application of AI techniques to the analysis
of biomedical images with forensic human identification purposes. Human identification is of
great relevance in today's society and, in particular, in medico-legal contexts. As a consequence,
any technological advance introduced in this field can help meet the growing need
for accurate and robust tools for establishing and verifying human identity. We first
describe the importance and applicability of forensic anthropology in many identification scenarios.
Later, we present the main trends related to the application of computer vision, machine learning
and soft computing techniques to the estimation of the biological profile, the identification through
comparative radiography and craniofacial superimposition, traumatism and pathology analysis,
as well as facial reconstruction. The potentialities and limitations of the employed approaches are
described, and we conclude with a discussion about methodological issues and future research.
Funding: Spanish Ministry of Science, Innovation and Universities / European Union (EU), grant PGC2018-101216-B-I00; Regional Government of Andalusia, grant EXAISFI P18-FR-4262; Instituto de Salud Carlos III / European Union (EU), grant DTS18/00136; European Commission H2020-MSCA-IF-2016, Skeleton-ID Marie Curie Individual Fellowship, grant 746592; Spanish Ministry of Science, Innovation and Universities-CDTI, Neotec program 2019, grant EXP-00122609/SNEO-20191236; European Union (EU) / Xunta de Galicia, grant ED431G 2019/01; European Union (EU), grant RTI2018-095894-B-I0
ADVANCED INTRAOPERATIVE IMAGE REGISTRATION FOR PLANNING AND GUIDANCE OF ROBOT-ASSISTED SURGERY
Robot-assisted surgery offers improved accuracy, precision, safety, and workflow for a variety of surgical procedures spanning different surgical contexts (e.g., neurosurgery, pulmonary interventions, orthopaedics). These systems can assist with implant placement, drilling, bone resection, and biopsy while reducing human errors (e.g., hand tremors and limited dexterity) and easing the workflow of such tasks. Furthermore, such systems can reduce radiation dose to the clinician in fluoroscopically-guided procedures since many robots can perform their task in the imaging field-of-view (FOV) without the surgeon.
Robot-assisted surgery requires (1) a preoperative plan defined relative to the patient that instructs the robot to perform a task, (2) intraoperative registration of the patient to transform the planning data into the intraoperative space, and (3) intraoperative registration of the robot to the patient to guide the robot to execute the plan. However, despite the operational improvements achieved using robot-assisted surgery, there are geometric inaccuracies and significant challenges to workflow associated with (1-3) that impact widespread adoption.
This thesis aims to address these challenges by using image registration to plan and guide robot-assisted surgical (RAS) systems to encourage greater adoption of robotic assistance across surgical contexts (in this work, spinal neurosurgery, pulmonary interventions, and orthopaedic trauma). The proposed methods will also be compatible with diverse imaging and robotic platforms (including low-cost systems) to improve the accessibility of RAS systems for a wide range of hospital and use settings.
This dissertation advances important components of image-guided, robot-assisted surgery, including: (1) automatic target planning using statistical models and surgeon-specific atlases for application in spinal neurosurgery; (2) intraoperative registration and guidance of a robot to the planning data using 3D-2D image registration (i.e., an "image-guided robot") for assisting pelvic orthopaedic trauma; (3) advanced methods for intraoperative registration of planning data in deformable anatomy for guiding pulmonary interventions; and (4) extension of image-guided robotics in a piecewise rigid, multi-body context in which the robot directly manipulates anatomy for assisting ankle orthopaedic trauma.
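The 3D-2D registration in (2) is typically driven by optimizing an image similarity metric between the intraoperative radiograph and a digitally reconstructed radiograph (DRR) rendered from preoperative CT at a candidate pose. As an illustrative sketch (not the thesis implementation), one commonly used metric, normalized cross-correlation, can be computed as:

```python
import numpy as np

def ncc(fixed: np.ndarray, moving: np.ndarray) -> float:
    """Normalized cross-correlation between two equal-shape images.

    Scores near 1.0 indicate strong agreement; a 3D-2D registration
    framework searches over the 6-DOF pose that maximizes this score
    between the intraoperative radiograph (fixed) and the DRR rendered
    at the candidate pose (moving).
    """
    f = fixed.astype(float).ravel()
    m = moving.astype(float).ravel()
    f -= f.mean()
    m -= m.mean()
    denom = np.linalg.norm(f) * np.linalg.norm(m)
    if denom == 0.0:
        return 0.0
    return float(np.dot(f, m) / denom)

# Identical images correlate perfectly; inverted images anti-correlate.
img = np.array([[0, 1, 2], [3, 4, 5]])
print(round(ncc(img, img), 6))   # 1.0
print(round(ncc(img, -img), 6))  # -1.0
```

In practice the metric is embedded in an iterative optimizer over translation and rotation parameters; the function above only illustrates the inner similarity computation.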
The impact of AI on radiographic image reporting – perspectives of the UK reporting radiographer population
Background: It is predicted that medical imaging services will be greatly impacted by AI in the future. Developments in computer vision have allowed AI to be used for assisted reporting. Studies have investigated radiologists' opinions of AI for image interpretation (Huisman et al., 2019 a/b), but there remains a paucity of information on reporting radiographers' opinions on this topic. Method: A survey was developed by AI expert radiographers and promoted via LinkedIn/Twitter and professional networks for radiographers from all specialities in the UK. A sub-analysis was performed for reporting radiographers only. Results: 411 responses were gathered to the full survey (Rainey et al., 2021), with 86 responses from reporting radiographers included in the data analysis. 10.5% of respondents were using AI tools as part of their reporting role. 59.3% and 57% would not be confident in explaining an AI decision to other healthcare practitioners and 'patients and carers', respectively. 57% felt that an affirmation from AI would increase confidence in their diagnosis. Only 3.5% would not seek a second opinion following disagreement from AI. A moderate level of trust in AI was reported: mean score = 5.28 (0 = no trust; 10 = absolute trust). 'Overall performance/accuracy of the system', 'visual explanation (heatmap/ROI)', and 'indication of the confidence of the system in its diagnosis' were suggested as measures to increase trust. Conclusion: AI may impact reporting professionals' confidence in their diagnoses. Respondents are not confident in explaining an AI decision to key stakeholders. UK radiographers do not yet fully trust AI. Improvements are suggested.
An evaluation of a training tool and study day in chest image interpretation
Background: Using expert consensus, a digital tool was developed by the research team which proved useful when teaching radiographers how to interpret chest images. The training tool included (A) a search strategy training tool and (B) an educational tool to communicate the search strategies using eye-tracking technology. This training tool has the potential to improve interpretation skills for other healthcare professionals. Methods: To investigate this, 31 healthcare professionals (i.e., nurses and physiotherapists) were recruited, and participants were randomised to receive access to the training tool (intervention group) or not to have access to the training tool (control group) for a period of 4-6 weeks. Participants were asked to interpret different sets of 20 chest images before and after the intervention period. A study day was then provided to all participants, following which participants were again asked to interpret a different set of 20 chest images (n=1860). Each participant was asked to complete a questionnaire on their perceptions of the training provided. Results: Data analysis is in progress. 50% of participants did not have experience in image interpretation prior to the study. The study day and training tool were useful in improving image interpretation skills. Participants' perceptions of the usefulness of the tool to aid image interpretation skills varied among respondents. Conclusion: This training tool has the potential to improve patient diagnosis and reduce healthcare costs.
A Survey on Evolutionary Computation for Computer Vision and Image Analysis: Past, Present, and Future Trends
Computer vision (CV) is a large and important field of artificial intelligence covering a wide range of applications. Image analysis is a major task in CV aiming to extract, analyse and understand the visual content of images. However, image-related tasks are very challenging due to many factors, e.g., high variations across images, high dimensionality, domain expertise requirements, and image distortions. Evolutionary computation (EC) approaches have been widely used for image analysis with significant achievements. However, there is no comprehensive survey of existing EC approaches to image analysis. To fill this gap, this paper provides a comprehensive survey covering all essential EC approaches to important image analysis tasks including edge detection, image segmentation, image feature analysis, image classification, object detection, and others. This survey aims to provide a better understanding of evolutionary computer vision (ECV) by discussing the contributions of different approaches and exploring how and why EC is used for CV and image analysis. The applications, challenges, issues, and trends associated with this research field are also discussed and summarised to provide further guidelines and opportunities for future research.
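As a toy illustration of the EC-for-image-analysis paradigm the survey covers (not an example from the paper itself), the sketch below evolves a binarization threshold with a simple (1+λ) evolution strategy, using Otsu's between-class variance as the fitness function; all names and parameter values are arbitrary choices for the demo:

```python
import random

def between_class_variance(pixels, t):
    """Otsu criterion: variance between the two classes split at threshold t."""
    lo = [p for p in pixels if p < t]
    hi = [p for p in pixels if p >= t]
    if not lo or not hi:
        return 0.0  # degenerate split: all pixels on one side
    w0, w1 = len(lo) / len(pixels), len(hi) / len(pixels)
    m0 = sum(lo) / len(lo)
    m1 = sum(hi) / len(hi)
    return w0 * w1 * (m0 - m1) ** 2

def evolve_threshold(pixels, generations=60, offspring=8, seed=0):
    """(1+lambda) evolution strategy: mutate the parent threshold, keep the best."""
    rng = random.Random(seed)
    parent = rng.uniform(min(pixels), max(pixels))
    best_fit = between_class_variance(pixels, parent)
    for _ in range(generations):
        for _ in range(offspring):
            child = parent + rng.gauss(0, 10)  # Gaussian mutation
            fit = between_class_variance(pixels, child)
            if fit > best_fit:
                parent, best_fit = child, fit
    return parent

# Toy bimodal "image": dark pixels near 40, bright pixels near 200.
pixels = [35, 40, 45, 38, 42, 195, 200, 205, 198, 202]
t = evolve_threshold(pixels)
print(45 < t <= 195)  # True: the evolved threshold separates the two modes
```

Real ECV systems apply the same evolve-evaluate-select loop to far richer representations (filter banks, segmentation rules, feature descriptors), which is what distinguishes EC from fixed analytic methods such as exhaustive Otsu search.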
IMAGE ANALYSIS FOR SPINE SURGERY: DATA-DRIVEN DETECTION OF SPINE INSTRUMENTATION & AUTOMATIC ANALYSIS OF GLOBAL SPINAL ALIGNMENT
Spine surgery is a therapeutic modality for treatment of spine disorders, including spinal deformity, degeneration, and trauma. Such procedures benefit from accurate localization of surgical targets, precise delivery of instrumentation, and reliable validation of surgical objectives – for example, confirming that the surgical implants are delivered as planned and desired changes to the global spinal alignment (GSA) are achieved. Recent advances in surgical navigation have helped to improve the accuracy and precision of spine surgery, including intraoperative imaging integrated with real-time tracking and surgical robotics. This thesis aims to develop two methods for improved image-guided surgery using image analytic techniques. The first provides a means for automatic detection of pedicle screws in intraoperative radiographs – for example, to streamline intraoperative assessment of implant placement. The algorithm achieves a precision and recall of 0.89 and 0.91, respectively, with localization accuracy within ~10 mm. The second develops two algorithms for automatic assessment of GSA in computed tomography (CT) or cone-beam CT (CBCT) images, providing a means to quantify changes in spinal curvature and reduce the variability in GSA measurement associated with manual methods. The algorithms demonstrate GSA estimates with 93.8% of measurements within a 95% confidence interval of manually defined truth. Such methods support the goals of safe, effective spine surgery and provide a means for more quantitative intraoperative quality assurance. In turn, the ability to quantitatively assess instrument placement and changes in GSA could represent important elements of retrospective analysis of large image datasets, improved clinical decision support, and improved patient outcomes.
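The reported precision and recall follow the standard detection-metric definitions. The counts below are hypothetical, chosen only to reproduce figures of similar magnitude to those quoted, and are not the thesis data:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple:
    """Precision = TP / (TP + FP); recall = TP / (TP + FN).

    TP: detections matched to a true screw (e.g., within a distance
    threshold); FP: spurious detections; FN: screws that were missed.
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical counts for illustration: 89 screws found correctly,
# 11 false detections, 9 screws missed.
p, r = precision_recall(tp=89, fp=11, fn=9)
print(round(p, 2), round(r, 2))  # 0.89 0.91
```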
Deep learning in medical imaging and radiation therapy