36 research outputs found

    Dynamic Measurement of Three-Dimensional Motion from Single-Perspective Two-Dimensional Radiographic Projections

    The digital evolution of the x-ray imaging modality has spurred the development of numerous clinical and research tools. This work focuses on the design, development, and validation of dynamic radiographic imaging and registration techniques to address two distinct medical applications: tracking during image-guided interventions, and the measurement of musculoskeletal joint kinematics. Fluoroscopy is widely employed to provide intra-procedural image guidance. However, its planar images provide limited information about the location of surgical tools and targets in three-dimensional space. To address this limitation, registration techniques, which extract three-dimensional tracking and image-guidance information from planar images, were developed and validated in vitro. The ability to accurately measure joint kinematics in vivo is an important tool in studying both normal joint function and pathologies associated with injury and disease; however, it remains a clinical challenge. A technique to measure joint kinematics from single-perspective x-ray projections was developed and validated in vitro using clinically available radiography equipment.
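    The core difficulty described above can be illustrated with an idealized point-source (pinhole) model of radiographic projection. The geometry values below are assumptions chosen for illustration, not parameters from the thesis: every point along a given x-ray beam maps to the same detector location, which is why a single planar image cannot recover depth without a registration technique.

```python
# Hypothetical sketch: ideal point-source model of a single-perspective
# radiographic projection. Geometry values are illustrative assumptions.

def project(point, sid=1000.0, sdd=1500.0):
    """Project a 3D point (x, y, z) onto a 2D detector.

    sid: source-to-isocenter distance (mm); sdd: source-to-detector
    distance (mm). The source sits on the z axis at z = -sid, and the
    detector plane is perpendicular to z.
    """
    x, y, z = point
    mag = sdd / (sid + z)  # geometric magnification along the ray
    return (x * mag, y * mag)

# Two points on the same ray project to the same detector position,
# so depth along the beam is lost in a single planar image.
p_near = (10.0, 5.0, 0.0)
p_far = (20.0, 10.0, 1000.0)  # farther from the source, larger offset
print(project(p_near))  # (15.0, 7.5)
print(project(p_far))   # (15.0, 7.5)
```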

    Quantitative Analysis of Three-Dimensional Cone-Beam Computed Tomography Using Image Quality Phantoms

    In the clinical setting, weight-bearing static 2D radiographic imaging and supine 3D radiographic imaging modalities are used to evaluate radiographic changes such as joint space narrowing, subchondral sclerosis, and osteophyte formation. These respective imaging modalities cannot distinguish between tissues with similar densities (2D imaging) and do not accurately represent functional joint loading (supine 3D imaging). Recent advances in cone-beam CT (CBCT) have allowed for scanner designs that can obtain weight-bearing 3D volumetric scans. The purpose of this thesis was to analyze, design, and implement advanced imaging techniques to quantify image quality parameters of reconstructed image volumes generated by a commercially available CBCT scanner and a novel ceiling-mounted CBCT scanner. In addition, imperfections during rotation of the novel ceiling-mounted CBCT scanner were characterized using a 3D printed calibration object with a modification to the single marker bead method, and prospective geometric calibration matrices.

    Development and Assessment of a Micro-CT Based System for Quantifying Loaded Knee Joint Kinematics and Tissue Mechanics

    Although anterior cruciate ligament (ACL) reconstruction is a highly developed surgical procedure, sub-optimal treatment outcomes persist. This can be partially attributed to an incomplete understanding of knee joint kinematics and regional tissue mechanical properties. A system for minimally-invasive investigation of knee joint kinematics and tissue mechanics under clinically relevant joint loads was developed to address this gap in understanding. A five degree-of-freedom knee joint motion simulator capable of dynamically loading intact human cadaveric knee joints to within 1% of user-defined multi-axial target loads was developed. This simulator was uniquely designed to apply joint loads to a joint centered within the field of view of a micro-CT scanner. The use of micro-CT imaging and tissue-embedded radiopaque beads demonstrated high-resolution strain measurement, distinguishing differences in inter-bead distances as low as 0.007 mm. Inter-bead strain measurement was highly accurate and repeatable, with no significant error introduced from cyclic joint loading. Finally, regional strain was repeatably measured using radiopaque markers in four intact, human cadaveric knees to within 0.003 strain in response to multi-directional joint loads. This novel combination of dynamic knee joint motion simulation, tissue-embedded radiopaque markers, and micro-CT imaging provides the opportunity to increase our understanding of the kinematics and tissue mechanics of the knee, with the potential to improve ACL reconstruction outcomes.
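    The inter-bead strain idea can be sketched in a few lines: strain along a bead pair is the change in inter-bead distance divided by the reference (unloaded) distance. The coordinates below are made-up values for illustration, not data from the thesis.

```python
import math

# Hypothetical sketch of inter-bead engineering strain from radiopaque
# marker coordinates. Bead positions are illustrative, not thesis data.

def dist(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def bead_strain(ref_a, ref_b, load_a, load_b):
    """Engineering strain for one bead pair: (d_loaded - d_ref) / d_ref."""
    d0 = dist(ref_a, ref_b)    # unloaded (reference) inter-bead distance
    d1 = dist(load_a, load_b)  # inter-bead distance under joint load
    return (d1 - d0) / d0

# A 10.000 mm bead pair stretching to 10.030 mm gives 0.003 strain,
# the repeatability bound reported in the abstract above.
print(bead_strain((0, 0, 0), (10.0, 0, 0), (0, 0, 0), (10.03, 0, 0)))
```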

    Augmented Reality and Artificial Intelligence in Image-Guided and Robot-Assisted Interventions

    In minimally invasive orthopedic procedures, the surgeon places wires, screws, and surgical implants through the muscles and bony structures under image guidance. These interventions require alignment of the pre- and intra-operative patient data, the intra-operative scanner, surgical instruments, and the patient. Suboptimal interaction with patient data and challenges in mastering 3D anatomy based on ill-posed 2D interventional images are essential concerns in image-guided therapies. State-of-the-art approaches often support the surgeon by using external navigation systems or ill-conditioned image-based registration methods that both have certain drawbacks. Augmented reality (AR) has been introduced in the operating room in the last decade; however, in image-guided interventions, it has often been considered only a visualization device improving traditional workflows. Consequently, the technology has not yet gained the maturity it requires to redefine new procedures, user interfaces, and interactions. This dissertation investigates the applications of AR, artificial intelligence, and robotics in interventional medicine. Our solutions were applied to a broad spectrum of problems for various tasks, namely improving imaging and acquisition, image computing and analytics for registration and image understanding, and enhancing the interventional visualization. The benefits of these approaches were also demonstrated in robot-assisted interventions. We revealed how exemplary workflows are redefined via AR by taking full advantage of head-mounted displays when entirely co-registered with the imaging systems and the environment at all times. The proposed AR landscape is enabled by co-localizing the users and the imaging devices via the operating room environment and exploiting all involved frustums to move spatial information between different bodies.
The system's awareness of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. We also leveraged the principles governing image formation and combined them with deep learning and RGBD sensing to fuse images and reconstruct interventional data. We hope that our holistic approaches towards improving the interface of surgery and enhancing the usability of interventional imaging not only augment the surgeon's capabilities but also improve the surgical team's experience in carrying out an effective intervention with reduced complications.

    Computer assistance in orthopaedic surgery


    An Image-Based Tool to Examine Joint Congruency at the Elbow

    Post-traumatic osteoarthritis commonly occurs as a result of a traumatic event to the articulation. Although the majority of this type of arthritis is preventable, the sequence and mechanism of the interaction between joint injury and the development of osteoarthritis (OA) is not well understood. It is hypothesized that alterations to joint alignment can cause excessive and damaging wear to the cartilage surfaces, resulting in OA. The lack of understanding of both the cause and progression of OA has contributed to the slow development of interventions that can modify the course of the disease. To date, no techniques have been reported for examining the relationship between joint injury and joint alignment. Therefore, the objective of this thesis was to develop a non-invasive image-based technique that can be used to assess joint congruency and alignment of joints undergoing physiologic motion. An inter-bone distance algorithm was developed and validated to measure joint congruency at the ulnohumeral joint of the elbow. Subsequently, a registration algorithm was created and its accuracy was assessed. This registration algorithm registered 3D reconstructed bone models obtained using x-ray CT to motion capture data of cadaveric upper extremities undergoing simulated elbow flexion. In this way, the relative position and orientation of the 3D bone models could be visualized for any frame of motion. The effect of radial head arthroplasty was used to illustrate the utility of this technique. Once this registration was refined, the inter-bone distance algorithm was integrated to visualize the joint congruency of the ulnohumeral joint undergoing simulated elbow flexion. The effect of collateral ligament repair was examined. This technique proved sensitive enough to detect large changes in joint congruency in spite of only small changes in the motion pathways of the ulnohumeral joint following simulated ligament repair.
Efforts were also made in this thesis to translate this research into a clinical environment by examining CT scanning protocols that could reduce the amount of radiation exposure required to image patients' joints. For this study, the glenohumeral joint of the shoulder was examined, as this joint is particularly sensitive to the potentially harmful effects of radiation due to its proximity to highly radiosensitive organs. Using the CT scanning techniques examined in this thesis, the effective dose applied to the shoulder was reduced by almost 90% compared to standard clinical CT imaging. In summary, these studies introduced a technique that can be used to non-invasively and three-dimensionally examine joint congruency. The accuracy of this technique was assessed, and its ability to predict regions of joint surface interactions was validated against a gold-standard casting approach. Using the techniques developed in this thesis, the complex relationship between injury, loading, and mal-alignment as contributors to the development and progression of osteoarthritis in the upper extremity can be examined.
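    The inter-bone distance idea can be sketched as a nearest-neighbour proximity map: for each vertex on one articular surface, find the distance to the closest vertex on the opposing surface, with small values indicating close (congruent) regions. The point sets and function names below are toy assumptions, not the thesis's bone models; a real surface mesh would use a spatial index such as a KD-tree instead of brute force.

```python
import math

# Hypothetical sketch of an inter-bone distance (joint congruency) map.
# Point sets are illustrative toy data, not reconstructed bone models.

def nearest_distance(p, surface):
    """Distance from point p to its nearest neighbour on a surface."""
    return min(math.dist(p, q) for q in surface)

def proximity_map(surface_a, surface_b):
    """Per-vertex nearest-neighbour distance from surface_a to surface_b;
    smaller values indicate closer (more congruent) joint regions."""
    return [nearest_distance(p, surface_b) for p in surface_a]

ulna = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
humerus = [(0.0, 0.0, 2.0), (5.0, 0.0, 3.0)]
print(proximity_map(ulna, humerus))  # [2.0, 3.0]
```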

    The impact of AI on radiographic image reporting – perspectives of the UK reporting radiographer population

    Background: It is predicted that medical imaging services will be greatly impacted by AI in the future. Developments in computer vision have allowed AI to be used for assisted reporting. Studies have investigated radiologists' opinions of AI for image interpretation (Huisman et al., 2019 a/b), but there remains a paucity of information on reporting radiographers' opinions on this topic. Method: A survey was developed by AI expert radiographers and promoted via LinkedIn/Twitter and professional networks to radiographers from all specialities in the UK. A sub-analysis was performed for reporting radiographers only. Results: 411 responses were gathered to the full survey (Rainey et al., 2021), with 86 responses from reporting radiographers included in the data analysis. 10.5% of respondents were using AI tools as part of their reporting role. 59.3% and 57% would not be confident in explaining an AI decision to other healthcare practitioners and to patients and carers, respectively. 57% felt that an affirmation from AI would increase confidence in their diagnosis. Only 3.5% would not seek a second opinion following disagreement from AI. A moderate level of trust in AI was reported: mean score = 5.28 (0 = no trust; 10 = absolute trust). 'Overall performance/accuracy of the system', 'visual explanation (heatmap/ROI)', and 'indication of the confidence of the system in its diagnosis' were suggested as measures to increase trust. Conclusion: AI may impact reporting professionals' confidence in their diagnoses. Respondents are not confident in explaining an AI decision to key stakeholders. UK radiographers do not yet fully trust AI. Improvements are suggested.

    An evaluation of a training tool and study day in chest image interpretation

    Background: Using expert consensus, a digital tool was developed by the research team that proved useful when teaching radiographers how to interpret chest images. The training tool included A) a search strategy training tool and B) an educational tool to communicate the search strategies using eye tracking technology. This training tool has the potential to improve interpretation skills for other healthcare professionals. Methods: To investigate this, 31 healthcare professionals (nurses and physiotherapists) were recruited, and participants were randomised either to receive access to the training tool (intervention group) or not to have access to it (control group) for a period of 4-6 weeks. Participants were asked to interpret different sets of 20 chest images before and after the intervention period. A study day was then provided to all participants, following which participants were again asked to interpret a different set of 20 chest images (n=1860). Each participant was asked to complete a questionnaire on their perceptions of the training provided. Results: Data analysis is in progress. 50% of participants did not have experience in image interpretation prior to the study. The study day and training tool were useful in improving image interpretation skills. Participants' perceptions of the usefulness of the tool to aid image interpretation skills varied among respondents. Conclusion: This training tool has the potential to improve patient diagnosis and reduce healthcare costs.