
    TINA manual landmarking tool: software for the precise digitization of 3D landmarks

    Background: Interest in the placement of landmarks and subsequent morphometric analyses of shape for 3D data has increased as computed tomography (CT) scanners have become more accessible. However, current computer programs for this task suffer from various practical drawbacks. We present here a free software tool that overcomes many of these problems. Results: The TINA Manual Landmarking Tool was developed for the digitization of 3D data sets. It enables the generation of a modifiable 3D volume rendering display plus matching orthogonal 2D cross-sections from DICOM files. The object can be rotated, and axes can be defined and fixed. Predefined lists of landmarks can be loaded and the landmarks identified within any of the representations. Output files are stored in various established formats, depending on the preferred evaluation software. Conclusions: The software tool presented here provides several options facilitating the placement of landmarks on 3D objects, including volume rendering from DICOM files; definition and fixation of meaningful axes; easy import, placement, control, and export of landmarks; and handling of large datasets. The TINA Manual Landmarking Tool runs under Linux and can be obtained for free from http://www.tina-vision.net/tarballs/
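
    The abstract describes a digitization workflow: build a voxel volume from a DICOM series, identify named landmarks from a predefined list in the rendered views, and export them for downstream evaluation software. As a rough illustration only (this is not the TINA tool's code; pydicom, the file layout, and the output format are all assumptions), the workflow could be sketched as:

```python
# Minimal sketch of a DICOM-based landmark digitization workflow.
# NOT the TINA Manual Landmarking Tool itself; library choices and
# the export format are illustrative assumptions.
from pathlib import Path

import numpy as np
import pydicom  # assumed available for DICOM parsing


def load_volume(series_dir: str) -> np.ndarray:
    """Stack a DICOM series into a (z, y, x) voxel volume."""
    slices = [pydicom.dcmread(p) for p in sorted(Path(series_dir).glob("*.dcm"))]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))  # order along z
    return np.stack([s.pixel_array for s in slices])


# Landmarks from a predefined list, placed by the user in any view and
# stored as voxel coordinates keyed by name.
landmarks: dict[str, tuple[int, int, int]] = {}


def place_landmark(name: str, z: int, y: int, x: int) -> None:
    landmarks[name] = (z, y, x)


def export_landmarks(path: str) -> None:
    """Write landmarks in a simple tab-separated format for evaluation software."""
    with open(path, "w") as f:
        for name, (z, y, x) in landmarks.items():
            f.write(f"{name}\t{x}\t{y}\t{z}\n")
```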

    Quantitative shape analysis with weighted covariance estimates for increased statistical efficiency

    BACKGROUND: The introduction and statistical formalisation of landmark-based methods for analysing biological shape has made a major impact on comparative morphometric analyses. However, a satisfactory solution for including information from 2D/3D shapes represented by ‘semi-landmarks’ alongside well-defined landmarks in such analyses is still missing, and a statistical treatment of measurement error has yet to be integrated into current approaches. RESULTS: We propose a procedure based upon the description of landmarks with measurement covariance, which extends statistical linear modelling processes to semi-landmarks for further analysis. Our formulation is based upon a self-consistent approach to the construction of likelihood-based parameter estimation and includes corrections for parameter bias induced by the degrees of freedom within the linear model. The method has been implemented and tested on measurements from 2D fly wing, 2D mouse mandible and 3D mouse skull data. We use these data to explore possible advantages and disadvantages over the use of standard Procrustes/PCA analysis via a combination of Monte Carlo studies and quantitative statistical tests. In the process we show how appropriate weighting provides not only greater stability but also more efficient use of the available landmark data. The set of new landmarks generated in our procedure (‘ghost points’) can then be used in any further downstream statistical analysis. CONCLUSIONS: Our approach provides a consistent way of including different forms of landmarks in an analysis and reduces instabilities due to poorly defined points. Our results suggest that the method has the potential to be utilised for the analysis of 2D/3D data and, in particular, for the inclusion of information from surfaces represented by multiple landmark points.
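
    The key statistical ingredient here, weighting each point by the inverse of its measurement covariance, is generalized least squares. A toy sketch (generic GLS, not the authors' implementation; the design matrix and covariances below are invented) shows how a noisy semi-landmark is automatically down-weighted:

```python
# Generic generalized least squares: beta = (X' W X)^-1 X' W y with
# W = Sigma^-1, the inverse measurement covariance. Illustrative only.
import numpy as np


def gls_fit(X: np.ndarray, y: np.ndarray, Sigma: np.ndarray) -> np.ndarray:
    W = np.linalg.inv(Sigma)          # precise points get large weights
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)


# Two well-defined landmarks plus one noisy 'semi-landmark' (invented data).
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.02, 1.98, 3.50])
Sigma = np.diag([0.01, 0.01, 1.0])    # large variance => low influence
print(gls_fit(X, y, Sigma))           # close to [1.02, 1.98]
```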

    Artificial intelligence projects in healthcare: 10 practical tips for success in a clinical environment

    There is much discussion concerning ‘digital transformation’ in healthcare and the potential of artificial intelligence (AI) in healthcare systems. Yet it remains rare to find AI solutions deployed in routine healthcare settings. This is in part due to the numerous challenges inherent in delivering an AI project in a clinical environment. In this article, several UK healthcare professionals and academics reflect on the challenges they have faced in building AI solutions using routinely collected healthcare data. These personal reflections are summarised as 10 practical tips. In our experience, these are essential considerations for an AI healthcare project to succeed. They are organised into four phases: conceptualisation, data management, AI application and clinical deployment. There is a focus on conceptualisation, reflecting our view that initial set-up is vital to success. We hope that our personal experiences will provide useful insights to others looking to improve patient care through optimal data use.

    Opportunistic diagnosis of osteoporosis, fragile bone strength and vertebral fractures from routine CT scans: a review of approved technology systems and pathways to implementation

    Osteoporosis causes bones to become weak, porous and fracture more easily. While a vertebral fracture is the archetypal fracture of osteoporosis, it is also the most difficult to diagnose clinically. Patients often suffer further spine or other fractures, deformity, height loss and pain before diagnosis. There were an estimated 520,000 fragility fractures in the United Kingdom (UK) in 2017 (costing £4.5 billion), a figure set to increase 30% by 2030. One way to improve both vertebral fracture identification and the diagnosis of osteoporosis is to assess a patient’s spine or hips during routine computed tomography (CT) scans. Patients attend routine CT for diagnosis and monitoring of various medical conditions, but the skeleton can be overlooked as radiologists concentrate on the primary reason for scanning. More than half a million CT scans done each year in the National Health Service (NHS) could potentially be screened for osteoporosis (a number increasing 5% annually). If CT-based screening became embedded in practice, then the technique could have a positive clinical impact in the identification of fragility fracture and/or low bone density. Several companies have developed software methods to diagnose osteoporosis/fragile bone strength and/or identify vertebral fractures in CT datasets, using various methods that include image processing, computational modelling, artificial intelligence and biomechanical engineering concepts. Technology to evaluate Hounsfield units is used to calculate bone density, but not necessarily bone strength. In this rapid evidence review, we summarise the current literature underpinning approved technologies for opportunistic screening of routine CT images to identify fractures, bone density or strength information. We highlight how other new software technologies have become embedded in NHS clinical practice (having overcome barriers to implementation) and how the novel osteoporosis technologies could follow suit. We define the key unanswered questions where further research is needed to enable the adoption of these technologies for maximal patient benefit.
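
    To make the Hounsfield-unit idea concrete: opportunistic screening tools typically measure mean attenuation in a trabecular region of a vertebral body and flag values below a published threshold (figures around 100-110 HU have been proposed in the literature, e.g. by Pickhardt and colleagues). The sketch below is a generic illustration under those assumptions, not any of the approved products, and, as the review notes, it estimates density rather than strength:

```python
# Illustrative Hounsfield-unit check for opportunistic CT screening.
# The ROI handling and the 100 HU cut-off are assumptions based on
# published thresholds; this is not an approved diagnostic product.
import numpy as np


def mean_hu(ct_volume: np.ndarray, roi: tuple[slice, slice, slice]) -> float:
    """Mean attenuation (HU) in a cuboid trabecular ROI of a calibrated CT volume."""
    return float(ct_volume[roi].mean())


def flag_low_bone_density(ct_volume: np.ndarray,
                          roi: tuple[slice, slice, slice],
                          threshold_hu: float = 100.0) -> bool:
    """True if mean trabecular attenuation suggests possible osteoporosis.

    Note: this reflects bone density, not necessarily bone strength.
    """
    return mean_hu(ct_volume, roi) < threshold_hu
```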

    Risk propensity in the foreign direct investment location decision of emerging multinationals

    A distinguishing feature of emerging economy multinationals is their apparent tolerance for host country institutional risk. Employing behavioral decision theory and quasi-experimental data, we find that managers’ domestic experience satisfaction increases their relative risk propensity regarding controllable risk (legally protectable loss), but decreases their tendency to accept non-controllable risk (e.g., political instability). In contrast, firms’ potential slack reduces relative risk propensity regarding controllable risk, yet amplifies the tendency to take non-controllable risk. We suggest that these counterbalancing effects might help explain the observation that risk-taking in FDI location decisions is influenced by firm experience and context. The study provides a new understanding of why firms exhibit heterogeneous responses to host country risks, and of the varying effects of institutions.

    Computer-Aided Diagnostic Systems for Osteoporotic Vertebral Fracture Detection: Opportunities and Challenges.

    In the current issue of JBMR, Kolanu and colleagues evaluate a computer-aided diagnostic (CAD) system designed to identify osteoporotic vertebral fractures (VFs) visualized opportunistically from computed tomography (CT) images (1). The system, developed by Zebra Medical Vision (Shefayim, Israel; www.zebramed.com), extracts a virtual sagittal section visualizing the spinal midplane and identifies VFs using machine-learning algorithms. It outputs the probability that the volume contains a VF and a heat map indicating the probable locations of VFs in the sagittal image. In a single-site study involving thoracic CT scans from 1696 patients with a VF prevalence of 24%, the system achieved a sensitivity, specificity, and accuracy of 54%, 92%, and 83%, respectively.
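
    The reported figures are internally consistent: with 24% prevalence, a sensitivity of 54% and a specificity of 92% imply roughly 83% accuracy. A quick back-of-envelope check (counts rounded; not the study's exact confusion matrix):

```python
# Sanity-check the reported metrics (approximate; counts are rounded).
n, prevalence = 1696, 0.24
sens, spec = 0.54, 0.92

positives = n * prevalence           # ~407 patients with a vertebral fracture
negatives = n - positives            # ~1289 without
tp = sens * positives                # ~220 correctly flagged
tn = spec * negatives                # ~1186 correctly cleared
print(f"accuracy ~ {(tp + tn) / n:.2f}")   # ~0.83, matching the paper
```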

    Cerebrospinal Fluid Pulsatility: Design and Validation in Healthy Normals

    During the cardiac cycle a complex series of fluid shifts occurs within the skull in order to protect the brain from the pressure variations which occur in the cerebral arteries. The extracerebral intracranial arteries dilate during systole, generating a pressure wave within the cerebrospinal fluid (CSF). This pressure wave is dissipated by flow of CSF into the compliant spinal subarachnoid space and by direct transmission to the cerebral venous sinuses. This mechanism reduces the pulsatility of the pressure wave to which the brain is exposed during the cardiac cycle. Failure of this mechanism has been implicated in a number of cerebral diseases. The mechanism can be investigated using quantitative magnetic resonance phase imaging, but results can be difficult to interpret due to the complexity of the interactions. We present a novel physiological model of this mechanism based on the concept of electrical equivalence. The model allows derivation of seven parameters which are not directly measurable: 1) arterial compliance, 2) brain compliance, 3) ventricular compliance, 4) venous compliance, 5) arterial impedance, 6) brain impedance and 7) impedance of the cerebral aqueduct. We tested the model in a group of 24 healthy normal volunteers. Analysis of individual subjects showed that the data contained adequate information for reliable fitting. Groupwise analysis showed that the model described all of the statistically significant variation in the data. We conclude that this model forms a basis for the analysis of CSF flow studies, although it will require …
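
    The electrical-equivalence idea can be made concrete with a single compartment: a compliance behaves like a capacitor and a flow impedance like a resistor, giving C dP/dt = Q_in - P/Z. The toy simulation below (all parameter values invented; the paper's model couples seven such elements and fits them to phase-contrast flow data) shows how compliance damps an arterial-style pressure pulse:

```python
# One-compartment electrical analogue: compliance C (capacitor) with
# outflow impedance Z (resistor). Parameter values invented for illustration.
import numpy as np

C, Z = 1.0, 0.5                            # compliance and impedance (arbitrary units)
dt = 1e-3
t = np.arange(0.0, 2.0, dt)                # two 1 Hz 'cardiac' cycles

q_in = 1.0 + 0.8 * np.sin(2 * np.pi * t)   # pulsatile arterial inflow
p = np.zeros_like(t)                       # compartment pressure

for i in range(1, len(t)):                 # forward-Euler step of C dP/dt = q - p/Z
    p[i] = p[i - 1] + dt * (q_in[i - 1] - p[i - 1] / Z) / C

# Relative pulsatility (max - min) / mean: inflow is 1.6; the pressure value
# is far lower, i.e. the compliant compartment has damped the pulse.
steady = p[1000:]                          # discard the first cycle (transient)
print((steady.max() - steady.min()) / steady.mean())
```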