17 research outputs found

    StenUNet: Automatic Stenosis Detection from X-ray Coronary Angiography

    Full text link
    Coronary angiography continues to serve as the primary method for diagnosing coronary artery disease (CAD), the leading global cause of mortality. The severity of CAD is quantified by the location, degree of narrowing (stenosis), and number of arteries involved. In current practice, this quantification is performed manually by visual inspection and thus suffers from poor inter- and intra-rater reliability. The MICCAI grand challenge Automatic Region-based Coronary Artery Disease diagnostics using the X-ray angiography imagEs (ARCADE) curated a dataset with stenosis annotations, with the goal of creating an automated stenosis detection algorithm. Using a combination of machine learning and other computer vision techniques, we propose the architecture and algorithm StenUNet to accurately detect stenosis from X-ray coronary angiography. Our submission to the ARCADE challenge placed 3rd among all teams. We achieved an F1 score of 0.5348 on the test set, 0.0005 lower than the 2nd-place team. Comment: 12 pages, 5 figures, 1 table
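    As a point of reference for the scores reported above, the F1 metric combines precision and recall over matched stenosis detections. The sketch below is illustrative only; it assumes true-positive, false-positive, and false-negative counts have already been tallied by a detection-matching step that the abstract does not describe.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 from detection counts: harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 100 stenoses detected correctly, 40 false alarms, 60 missed.
print(f1_score(tp=100, fp=40, fn=60))  # ~0.667
```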

    YOLO-Angio: An Algorithm for Coronary Anatomy Segmentation

    Full text link
    Coronary angiography remains the gold standard for diagnosis of coronary artery disease, the most common cause of death worldwide. Although this procedure is performed more than 2 million times annually, few methods exist for fast and accurate automated measurement of disease and localization of coronary anatomy. Here, we present our solution to the Automatic Region-based Coronary Artery Disease diagnostics using X-ray angiography images (ARCADE) challenge held at MICCAI 2023. For the artery segmentation task, our three-stage approach combines preprocessing and feature selection by classical computer vision to enhance vessel contrast, followed by an ensemble model based on YOLOv8 that proposes possible vessel candidates by generating a vessel map. A final segmentation step reconstructs the coronary tree using a logic-based, graph-based sorting method. Our entry to the ARCADE challenge placed 3rd overall. Using the official evaluation metric, we achieved F1 scores of 0.422 and 0.4289 on the validation and hold-out sets, respectively. Comment: MICCAI Conference ARCADE Grand Challenge, YOLO, Computer Vision
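    A minimal sketch of the first two stages described above, contrast enhancement followed by YOLOv8-based candidate proposal, is given below. The input file name, the weight file, and the use of CLAHE for vessel contrast are assumptions for illustration; the paper's exact preprocessing, ensembling, and graph-based reconstruction are not reproduced here.

```python
import cv2
from ultralytics import YOLO

# Stage 1 (assumed): enhance vessel contrast on the grayscale angiogram with CLAHE.
frame = cv2.imread("angiogram.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(frame)

# Stage 2 (assumed): a YOLOv8 segmentation model proposes vessel-segment candidates.
model = YOLO("yolov8n-seg.pt")  # placeholder weights, not the challenge-trained ensemble
results = model(cv2.cvtColor(enhanced, cv2.COLOR_GRAY2BGR))

# Collect candidate masks; the paper then merges such proposals into a vessel map and
# reconstructs the coronary tree with a graph-based sorting step (not shown here).
for r in results:
    if r.masks is not None:
        print(r.masks.data.shape)  # (num_candidates, H, W)
```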

    Multimodal Imaging of Cortical Networks Controlling Lower Limb Locomotion: Towards the Development of Brain-Computer Interfaces

    No full text
    In 2015 the National Spinal Cord Injury Association of Canada reported that 30,000 Canadians suffer from paralysis in two or more limbs. In many cases this takes away the fundamental ability to walk. Walking, an intricate sensorimotor task, involves the interaction of both dynamic and balancing neurological processes. Brain-computer interfaces (BCIs) aim to bridge this gap, allowing persons with compromised mobility to interact with the world by controlling prosthetic devices that can ‘act’ using solely neural input (i.e., thoughts). The goal of this thesis was to aid in the development of a BCI for lower limb locomotion by identifying similarities and differences between the cortical activity associated with executed and imagined left and right lower limb movements, using electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). Data from 16 participants showed that it was possible to differentiate between right and left executed and imagined thought processes for lower limb locomotion using information from EEG alone, and that these patterns of brain activity were generalizable across time points and trials. fMRI further showed that the areas of brain activation in the executed and imagined conditions overlapped in some regions but also included unique activation areas. A novel paradigm to co-register EEG and fMRI data was developed that can easily be utilized in other contexts. Finally, combining EEG and fMRI data yielded an efficient model for a machine learning paradigm that successfully predicted left versus right lower limb movement. This research adds to the existing body of knowledge of the psychomotor brain activity associated with the thought coordination processes involved in the task of walking in normal persons, represented by algorithmic patterns.
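    The abstract does not name the classifier or features used, so the following is only an illustrative sketch of the kind of machine learning paradigm described: predicting left versus right lower limb movement from per-trial EEG features (placeholder band-power values here) with cross-validation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: 160 trials x 64 channels of band-power features.
# Labels: 0 = left, 1 = right lower limb movement. Real features would come
# from preprocessed EEG epochs (e.g. mu/beta band power over motor cortex).
X = rng.normal(size=(160, 64))
y = rng.integers(0, 2, size=160)

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print("Mean cross-validated accuracy:", scores.mean())
```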

    Medical Image Deidentification, Cleaning and Compression Using Pylogik

    Full text link
    Leveraging medical record information in the era of big data and machine learning comes with the caveat that data must be cleaned and de-identified. Facilitating data sharing and harmonization for multi-center collaborations is particularly difficult when protected health information (PHI) is contained or embedded in image meta-data. We propose a novel library in the Python framework, called PyLogik, to help alleviate this issue for ultrasound images, which are particularly challenging because PHI is frequently included directly on the images. PyLogik processes image volumes through a series of steps: text detection/extraction, filtering, thresholding, and morphological and contour comparisons. This methodology de-identifies the images, reduces file sizes, and prepares image volumes for applications in deep learning and data sharing. To evaluate its effectiveness in processing ultrasound data, a random sample of 50 cardiac ultrasounds (echocardiograms) was processed through PyLogik, and the outputs were compared with manual segmentations by an expert user. The Dice coefficient between the two approaches averaged 0.976. Next, an investigation was conducted to ascertain the degree of information compression achieved by the algorithm. Resultant data were found to be on average ~72% smaller after processing by PyLogik. Our results suggest that PyLogik is a viable methodology for data cleaning and de-identification, region-of-interest (ROI) determination, and file compression, which will facilitate efficient storage, use, and dissemination of ultrasound data. Variants of the pipeline have also been created for use with other medical imaging data types. Comment: updates needed to manuscript
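    For reference, the Dice coefficient reported above measures overlap between two binary masks, 2|A∩B| / (|A| + |B|). The sketch below is a generic illustration of that comparison between an automated mask and a manual segmentation, not PyLogik's own API.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two boolean masks: 2|A∩B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / total

# Toy example: compare an automated ROI mask with an expert's manual segmentation.
auto = np.zeros((4, 4), dtype=bool); auto[1:3, 1:3] = True
manual = np.zeros((4, 4), dtype=bool); manual[1:3, 1:4] = True
print(dice_coefficient(auto, manual))  # 0.8
```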