
    Lumbar Scoliosis Analysis Using Deep Learning Based Technique

    Automating medical image interpretation saves physicians time and improves diagnostic confidence. Most medical imaging diagnosis is still performed manually or semi-automatically, and the results vary when the same task is performed by different physicians. This thesis covers mid-sagittal lumbar spine magnetic resonance imaging (MRI) images together with labeling and spinal metrics. Two sets of annotations were made: professional radiologists created pixel-wise masks of the vertebral bodies (VBs) in each image, which a panel of spine surgeons subsequently reviewed. An enhanced method (VBSeg) is proposed, and its segmentation performance is compared with traditional and deep-learning architectures. A novel computerized spinal misalignment evaluation method may help spine surgeons make objective decisions about critical surgeries. Angular deviation classifies spondylolisthesis with 89% accuracy, while the area inside the enclosed lumbar curve zone classifies lumbar lordosis (LL) adequacy/inadequacy with 93% accuracy.
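    The abstract does not spell out how angular deviation is computed from the segmentations, so the following is only a minimal sketch of deriving an angular-deviation feature from vertebral body mask centroids and thresholding it; the function names and the 10-degree threshold are illustrative assumptions, not the thesis's method.

    ```python
    # Hypothetical sketch: an angular-deviation feature from vertebral body (VB) mask
    # centroids. The geometry and the 10-degree threshold are illustrative assumptions,
    # not the measurements defined in the thesis.
    import numpy as np

    def vb_centroids(masks):
        """Return (row, col) centroids for a list of binary VB masks (2D numpy arrays)."""
        return [np.argwhere(m).mean(axis=0) for m in masks]

    def angular_deviation(centroids):
        """Angle changes (degrees) between successive centroid-to-centroid segments."""
        vecs = np.diff(np.asarray(centroids), axis=0)
        angles = np.degrees(np.arctan2(vecs[:, 1], vecs[:, 0]))
        return np.abs(np.diff(angles))

    def flag_spondylolisthesis(masks, threshold_deg=10.0):
        """Flag a scan if any inter-segment deviation exceeds an (assumed) threshold."""
        return bool(np.any(angular_deviation(vb_centroids(masks)) > threshold_deg))
    ```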

    Simultaneous lesion and neuroanatomy segmentation in Multiple Sclerosis using deep neural networks

    Segmentation of both white matter lesions and deep grey matter structures is an important task in the quantification of magnetic resonance imaging in multiple sclerosis. Typically these tasks are performed separately; in this paper we present a single segmentation solution based on convolutional neural networks (CNNs) for providing fast, reliable segmentations of multimodal magnetic resonance images into lesion classes and normal-appearing grey- and white-matter structures. We show substantial, statistically significant improvements in both Dice coefficient and in lesion-wise specificity and sensitivity, compared to previous approaches, and agreement with individual human raters in the range of human inter-rater variability. The method is trained on data gathered from a single centre; nonetheless, it performs well on data from centres, scanners and field-strengths not represented in the training dataset. A retrospective study found that the classifier successfully identified lesions missed by the human raters. Lesion labels were provided by human raters, while weak labels for other brain structures (including CSF, cortical grey matter, cortical white matter, cerebellum, amygdala, hippocampus, subcortical GM structures and choroid plexus) were provided by FreeSurfer 5.3. The segmentations of these structures compared well, not only with FreeSurfer 5.3, but also with FSL-FIRST and FreeSurfer 6.0.
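    Since the Dice coefficient is the headline overlap metric, a minimal reference implementation of its standard definition for binary masks is sketched below; this is the textbook formula, not code from the paper.

    ```python
    # Standard Dice coefficient for binary segmentation masks (not code from the paper).
    import numpy as np

    def dice_coefficient(pred, target, eps=1e-7):
        """Dice = 2*|A ∩ B| / (|A| + |B|) for 0/1 numpy arrays of equal shape."""
        pred = pred.astype(bool)
        target = target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

    # Toy example with two 4x4 masks
    a = np.array([[1, 1, 0, 0]] * 4)
    b = np.array([[1, 0, 0, 0]] * 4)
    print(dice_coefficient(a, b))  # ~0.667
    ```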

    Estimating animal pose using deep learning: a trained deep learning model outperforms morphological analysis

    INTRODUCTION: Analyzing animal behavior helps researchers understand animals' decision-making processes, and helper tools are rapidly becoming an indispensable part of many interdisciplinary studies. However, researchers are often challenged to estimate animal pose because of the limitations of the tools and their vulnerability to specific environments. Over the years, deep learning has been introduced as an alternative solution to overcome these challenges. OBJECTIVES: This study investigates how deep learning models can be applied for the accurate prediction of animal behavior, compared with traditional morphological analysis based on image pixels. METHODS: The Transparent Omnidirectional Locomotion Compensator (TOLC), a tracking device, is used to record videos with a wide range of animal behavior. Recorded videos contain two insects: a walking red imported fire ant (Solenopsis invicta) and a walking fruit fly (Drosophila melanogaster). Body parts such as the head, legs, and thorax are estimated using an open-source deep-learning toolbox. A deep learning model, ResNet-50, is trained to predict the body parts of the fire ant and the fruit fly, respectively. 500 image frames for each insect were annotated by humans and then compared with the predictions of the deep learning model as well as the points generated from the morphological analysis. RESULTS: The experimental results show that the average distance between the deep learning-predicted centroids and the human-annotated centroids is 2.54, while the average distance between the morphological analysis-generated centroids and the human-annotated centroids is 6.41 over the 500 frames of the fire ant. For the fruit fly, the average distance between the deep learning-predicted and the human-annotated centroids is 2.43, while the average distance between the morphological analysis-generated and the human-annotated centroids is 5.06 over the 477 image frames. CONCLUSION: In this paper, we demonstrate that the deep learning model outperforms traditional morphological analysis in estimating animal pose in a series of video frames.
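    The reported numbers are mean distances between predicted and human-annotated centroids over a set of frames. A minimal sketch of that comparison is shown below; the array names, shapes, and toy data are assumptions, not the authors' code.

    ```python
    # Illustrative sketch of the reported comparison: mean Euclidean distance between
    # predicted and human-annotated centroids across frames (stand-in data only).
    import numpy as np

    def mean_centroid_error(predicted, annotated):
        """predicted, annotated: arrays of shape (n_frames, 2) holding (x, y) centroids."""
        predicted = np.asarray(predicted, dtype=float)
        annotated = np.asarray(annotated, dtype=float)
        return np.linalg.norm(predicted - annotated, axis=1).mean()

    # Toy usage with random data standing in for 500 annotated frames
    rng = np.random.default_rng(0)
    gt = rng.uniform(0, 100, size=(500, 2))
    pred = gt + rng.normal(0, 2, size=(500, 2))
    print(mean_centroid_error(pred, gt))
    ```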

    A Capsule Network-Based Automatic Measurement System for Scoliosis

    Scoliosis is a disease that deforms the overall structure of the spine as the spine curves. Although various methods exist for the diagnosis and treatment of scoliosis, the main goal centres on reducing the severity of scoliosis by decreasing the curvature angle known as the Cobb angle. Cobb angle measurement is essentially performed manually by a specialist on spinal X-ray films. Automating this process with an artificial intelligence approach such as deep learning would, however, provide great convenience and precision for both the patient and the specialist. Based on these considerations, this study first reviews the current state of the literature on scoliosis and deep-learning-focused work, and then automates Cobb angle measurement with a Capsule Network (CapsNet)-based solution. Comparison of the CapsNet solution with ConvNet, BoostNet, RFR and ResNet-50 models showed that the CapsNet model gave the best results.
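    For context, the Cobb angle is conventionally the angle between the endplates of the two most tilted vertebrae of a curve. A minimal sketch of that final angle computation from two endplate landmark pairs follows; the landmark detection itself, which the paper automates with CapsNet, is not shown, and the function below is an illustrative assumption rather than the paper's implementation.

    ```python
    # Minimal sketch: Cobb angle from two endplate direction vectors (2D landmarks).
    # Obtaining the endplate landmarks is the paper's contribution and is not shown here.
    import numpy as np

    def cobb_angle(endplate_top, endplate_bottom):
        """Each endplate is a pair of (x, y) landmark points defining its line.
        Returns the angle between the two endplate lines in degrees."""
        v1 = np.asarray(endplate_top[1], float) - np.asarray(endplate_top[0], float)
        v2 = np.asarray(endplate_bottom[1], float) - np.asarray(endplate_bottom[0], float)
        cos = abs(np.dot(v1, v2)) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

    # Toy example: two endplates tilted 20 degrees apart -> prints ~20.0
    print(cobb_angle(((0, 0), (1, 0)),
                     ((0, 0), (np.cos(np.radians(20)), np.sin(np.radians(20))))))
    ```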

    Artificial Intelligence: Development and Applications in Neurosurgery

    The last decade has witnessed a significant increase in the relevance of artificial intelligence (AI) in neuroscience. Gaining prominence for its potential to revolutionize medical decision making, data analytics, and clinical workflows, AI is poised to be increasingly implemented into neurosurgical practice. However, certain considerations pose significant challenges to its immediate and widespread implementation. Hence, this chapter explores current developments in AI as they pertain to the field of clinical neuroscience, with a primary focus on neurosurgery. Also included is a brief discussion of important economic and ethical considerations related to the feasibility and implementation of AI-based technologies in the neurosciences, including future horizons such as the operational integration of human and non-human capabilities.

    Dragline excavation simulation, real-time terrain recognition and object detection

    The contribution of coal to global energy is expected to remain above 30% through 2030. Draglines are the preferred excavation equipment in most surface coal mines. Recently, studies of dragline excavation efficiency have focused on two specific areas. The first area is dragline bucket studies, where the goal is to develop new designs that perform better than conventional buckets. Drawbacks of the current approach include operator inconsistencies and the inability to physically test every proposed design. Previous simulation models used the Distinct Element Method (DEM) but over-predict excavation forces by 300% to 500%. In this study, a DEM-based simulation model has been developed that predicts bucket payloads to within 16.55% error. The excavation model includes a novel method for calibrating formation parameters, which combines DEM-based tri-axial material testing with the XGBoost machine learning algorithm to achieve prediction accuracies between 80.6% and 95.54%. The second area is dragline vision studies toward efficient dragline operation. Current dragline vision models use image segmentation methods that are neither scalable nor multi-purpose. In this study, a scalable and multi-purpose vision model has been developed for draglines using Convolutional Neural Networks. This vision system achieves an 87.32% detection rate, 80.9% precision and 91.3% recall across multiple operation tasks. The main novelties of this research are the bucket payload prediction accuracy, the formation parameter calibration method, and the vision system's accuracy, precision and recall toward improving dragline operating efficiency.
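    As a rough illustration of the calibration idea (a regressor mapping simulated tri-axial test responses to formation parameters), a hedged sketch using XGBoost on stand-in data is given below; the features, targets, and hyperparameters are assumptions and do not reproduce the dissertation's pipeline.

    ```python
    # Hedged sketch of ML-based parameter calibration: fit a regressor that maps simulated
    # tri-axial test responses to a formation parameter, then apply it to a measured response.
    # All data here is synthetic stand-in data; not the dissertation's actual pipeline.
    import numpy as np
    from xgboost import XGBRegressor

    rng = np.random.default_rng(42)
    # Each row: summary features of a simulated tri-axial response; target: a DEM parameter.
    X_sim = rng.uniform(0, 1, size=(200, 6))
    y_param = 3.0 * X_sim[:, 0] + 0.5 * X_sim[:, 1] ** 2 + rng.normal(0, 0.05, 200)

    model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.1)
    model.fit(X_sim, y_param)

    measured_response = rng.uniform(0, 1, size=(1, 6))  # stand-in for a lab measurement
    print(model.predict(measured_response))             # calibrated parameter estimate
    ```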

    Air Force Institute of Technology Research Report 2019

    This Research Report presents the FY19 research statistics and contributions of the Graduate School of Engineering and Management (EN) at AFIT. AFIT research interests and faculty expertise cover a broad spectrum of technical areas related to USAF needs, as reflected by the range of topics addressed in the faculty and student publications listed in this report. In most cases, the research work reported herein is directly sponsored by one or more USAF or DOD agencies. AFIT welcomes the opportunity to conduct research on additional topics of interest to the USAF, DOD, and other federal organizations when adequate manpower and financial resources are available and/or provided by a sponsor. In addition, AFIT provides research collaboration and technology transfer benefits to the public through Cooperative Research and Development Agreements (CRADAs). Interested individuals may discuss ideas for new research collaborations, potential CRADAs, or research proposals with individual faculty using the contact information in this document.