
    QU-BraTS: MICCAI BraTS 2020 challenge on quantifying uncertainty in brain tumor segmentation -- analysis of ranking metrics and benchmarking results

    Get PDF
    Deep learning (DL) models have provided the state-of-the-art performance in a wide variety of medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder the translation of DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties, could enable clinical review of the most uncertain regions, thereby building trust and paving the way towards clinical translation. Recently, a number of uncertainty estimation methods have been introduced for DL medical image segmentation tasks. Developing metrics to evaluate and compare the performance of uncertainty measures will assist the end-user in making more informed decisions. In this study, we explore and evaluate a metric developed during the BraTS 2019-2020 task on uncertainty quantification (QU-BraTS), and designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This metric (1) rewards uncertainty estimates that produce high confidence in correct assertions, and those that assign low confidence levels at incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentages of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent participating teams of QUBraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, and hence highlight the need for uncertainty quantification in medical image analyses. Finally, in favor of transparency and reproducibility our evaluation code is made publicly available at https://github.com/RagMeh11/QU-BraTSResearch reported in this publication was partly supported by the Informatics Technology for Cancer Research (ITCR) program of the National Cancer Institute (NCI) of the National Institutes of Health (NIH), under award numbers NIH/NCI/ITCR:U01CA242871 and NIH/NCI/ITCR:U24CA189523. It was also partly supported by the National Institute of Neurological Disorders and Stroke (NINDS) of the NIH, under award number NIH/NINDS:R01NS042645.Document signat per 92 autors/autores: Raghav Mehta1 , Angelos Filos2 , Ujjwal Baid3,4,5 , Chiharu Sako3,4 , Richard McKinley6 , Michael Rebsamen6 , Katrin D¨atwyler6,53, Raphael Meier54, Piotr Radojewski6 , Gowtham Krishnan Murugesan7 , Sahil Nalawade7 , Chandan Ganesh7 , Ben Wagner7 , Fang F. Yu7 , Baowei Fei8 , Ananth J. Madhuranthakam7,9 , Joseph A. Maldjian7,9 , Laura Daza10, Catalina Gómez10, Pablo Arbeláez10, Chengliang Dai11, Shuo Wang11, Hadrien Raynaud11, Yuanhan Mo11, Elsa Angelini12, Yike Guo11, Wenjia Bai11,13, Subhashis Banerjee14,15,16, Linmin Pei17, Murat AK17, Sarahi Rosas-González18, Illyess Zemmoura18,52, Clovis Tauber18 , Minh H. Vu19, Tufve Nyholm19, Tommy L¨ofstedt20, Laura Mora Ballestar21, Veronica Vilaplana21, Hugh McHugh22,23, Gonzalo Maso Talou24, Alan Wang22,24, Jay Patel25,26, Ken Chang25,26, Katharina Hoebel25,26, Mishka Gidwani25, Nishanth Arun25, Sharut Gupta25 , Mehak Aggarwal25, Praveer Singh25, Elizabeth R. Gerstner25, Jayashree Kalpathy-Cramer25 , Nicolas Boutry27, Alexis Huard27, Lasitha Vidyaratne28, Md Monibor Rahman28, Khan M. 
Iftekharuddin28, Joseph Chazalon29, Elodie Puybareau29, Guillaume Tochon29, Jun Ma30 , Mariano Cabezas31, Xavier Llado31, Arnau Oliver31, Liliana Valencia31, Sergi Valverde31 , Mehdi Amian32, Mohammadreza Soltaninejad33, Andriy Myronenko34, Ali Hatamizadeh34 , Xue Feng35, Quan Dou35, Nicholas Tustison36, Craig Meyer35,36, Nisarg A. Shah37, Sanjay Talbar38, Marc-Andr Weber39, Abhishek Mahajan48, Andras Jakab47, Roland Wiest6,46 Hassan M. Fathallah-Shaykh45, Arash Nazeri40, Mikhail Milchenko140,44, Daniel Marcus40,44 , Aikaterini Kotrotsou43, Rivka Colen43, John Freymann41,42, Justin Kirby41,42, Christos Davatzikos3,4 , Bjoern Menze49,50, Spyridon Bakas∗3,4,5 , Yarin Gal∗2 , Tal Arbel∗1,51 // 1Centre for Intelligent Machines (CIM), McGill University, Montreal, QC, Canada, 2Oxford Applied and Theoretical Machine Learning (OATML) Group, University of Oxford, Oxford, England, 3Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA, 4Department of Radiology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA, 5Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA, 6Support Center for Advanced Neuroimaging (SCAN), University Institute of Diagnostic and Interventional Neuroradiology, University of Bern, Inselspital, Bern University Hospital, Bern, Switzerland, 7Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX, USA, 8Department of Bioengineering, University of Texas at Dallas, Texas, USA, 9Advanced Imaging Research Center, University of Texas Southwestern Medical Center, Dallas, TX, USA, 10Universidad de los Andes, Bogotá, Colombia, 11Data Science Institute, Imperial College London, London, UK, 12NIHR Imperial BRC, ITMAT Data Science Group, Imperial College London, London, UK, 13Department of Brain Sciences, Imperial College London, London, UK, 14Machine Intelligence Unit, Indian Statistical Institute, Kolkata, India, 15Department of CSE, University of Calcutta, Kolkata, India, 16 Division of Visual Information and Interaction (Vi2), Department of Information Technology, Uppsala University, Uppsala, Sweden, 17Department of Diagnostic Radiology, The University of Pittsburgh Medical Center, Pittsburgh, PA, USA, 18UMR U1253 iBrain, Université de Tours, Inserm, Tours, France, 19Department of Radiation Sciences, Ume˚a University, Ume˚a, Sweden, 20Department of Computing Science, Ume˚a University, Ume˚a, Sweden, 21Signal Theory and Communications Department, Universitat Politècnica de Catalunya, BarcelonaTech, Barcelona, Spain, 22Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand, 23Radiology Department, Auckland City Hospital, Auckland, New Zealand, 24Auckland Bioengineering Institute, University of Auckland, New Zealand, 25Athinoula A. 
Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA, 26Massachusetts Institute of Technology, Cambridge, MA, USA, 27EPITA Research and Development Laboratory (LRDE), France, 28Vision Lab, Electrical and Computer Engineering, Old Dominion University, Norfolk, VA 23529, USA, 29EPITA Research and Development Laboratory (LRDE), Le Kremlin-Bicˆetre, France, 30School of Science, Nanjing University of Science and Technology, 31Research Institute of Computer Vision and Robotics, University of Girona, Spain, 32Department of Electrical and Computer Engineering, University of Tehran, Iran, 33School of Computer Science, University of Nottingham, UK, 34NVIDIA, Santa Clara, CA, US, 35Biomedical Engineering, University of Virginia, Charlottesville, USA, 36Radiology and Medical Imaging, University of Virginia, Charlottesville, USA, 37Department of Electrical Engineering, Indian Institute of Technology - Jodhpur, Jodhpur, India, 38SGGS ©2021 Mehta et al.. License: CC-BY 4.0. arXiv:2112.10074v1 [eess.IV] 19 Dec 2021 Mehta et al. Institute of Engineering and Technology, Nanded, India, 39Institute of Diagnostic and Interventional Radiology, Pediatric Radiology and Neuroradiology, University Medical Center, 40Department of Radiology, Washington University, St. Louis, MO, USA, 41Leidos Biomedical Research, Inc, Frederick National Laboratory for Cancer Research, Frederick, MD, USA, 42Cancer Imaging Program, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA, 43Department of Diagnostic Radiology, University of Texas MD Anderson Cancer Center, Houston, TX, USA, 44Neuroimaging Informatics and Analysis Center, Washington University, St. Louis, MO, USA, 45Department of Neurology, The University of Alabama at Birmingham, Birmingham, AL, USA, 46Institute for Surgical Technology and Biomechanics, University of Bern, Bern, Switzerland, 47Center for MR-Research, University Children’s Hospital Zurich, Zurich, Switzerland, 48Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, India, 49Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland, 50Department of Informatics, Technical University of Munich, Munich, Germany, 51MILA - Quebec Artificial Intelligence Institute, Montreal, QC, Canada, 52Neurosurgery department, CHRU de Tours, Tours, France, 53 Human Performance Lab, Schulthess Clinic, Zurich, Switzerland, 54 armasuisse S+T, Thun, Switzerland.Preprin
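    The scoring scheme described in this abstract can be made concrete in a few lines. Below is a minimal, illustrative Python sketch of the thresholded-filtering idea, assuming binary masks and voxel-wise uncertainties normalized to [0, 1]; the function name and threshold grid are assumptions, and the mean over thresholds stands in for the area-under-curve aggregation used in the challenge. The official evaluation code is in the linked repository.

```python
import numpy as np

def dice(pred, gt):
    """Dice overlap between two binary masks."""
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom > 0 else 1.0

def uncertainty_score(pred, gt, unc, thresholds=(0.25, 0.5, 0.75, 1.0)):
    """Illustrative QU-BraTS-style score for one binary segmentation.

    pred, gt: boolean masks; unc: voxel-wise uncertainty in [0, 1].
    """
    tp0 = np.logical_and(pred, gt).sum()    # unfiltered true positives
    tn0 = np.logical_and(~pred, ~gt).sum()  # unfiltered true negatives
    dices, ftp, ftn = [], [], []
    for t in thresholds:
        keep = unc <= t  # retain only voxels the model is confident about
        dices.append(dice(pred[keep], gt[keep]))
        # fraction of correct assertions filtered out at this threshold
        ftp.append((tp0 - np.logical_and(pred, gt)[keep].sum()) / max(tp0, 1))
        ftn.append((tn0 - np.logical_and(~pred, ~gt)[keep].sum()) / max(tn0, 1))
    # reward confident correctness, penalize discarding correct assertions
    return (np.mean(dices) + (1 - np.mean(ftp)) + (1 - np.mean(ftn))) / 3.0
```

    A well-calibrated uncertainty map drives the Dice term up as uncertain voxels are filtered out, while keeping the filtered-TP and filtered-TN ratios low.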

    Interpretation of natural tibio-femoral kinematics critically depends upon the kinematic analysis approach: A survey and comparison of methodologies

    No full text
    While there is general agreement on the transverse plane knee joint motion for loaded flexion activities, its kinematics during functional movements such as level walking remain controversial. One possible cause of this controversy is the interpretation of kinematics based on different analysis approaches. In order to understand the impact of these approaches on the interpretation of tibio-femoral motion, a set of dynamic videofluoroscopy data presenting continuous knee bending and complete cycles of walking in ten subjects was analysed using six different kinematic analysis approaches. Use of a functional flexion axis resulted in significantly smaller ranges of condylar translation compared to anatomical axes and contact approaches. All contact points were located significantly more anteriorly than the femur-fixed axes after 70° of flexion, but also during the early/mid stance and late swing phases of walking. Overall, a central to medial transverse plane centre of rotation was found for both activities using all six kinematic analysis approaches, although individual subjects exhibited lateral centres of rotation using certain approaches. The results of this study clearly show that deviations from the true functional axis of rotation result in kinematic crosstalk, suggesting that functional axes should be reported in preference to anatomical axes. Contact approaches, on the other hand, can provide additional information on the local tibio-femoral contact conditions. To allow a more standardised comparison and interpretation of tibio-femoral kinematics, results should therefore be reported using at least a functionally determined axis and possibly also a contact point approach.
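    The kinematic crosstalk described above can be reproduced numerically: decomposing a pure flexion rotation about an axis that deviates from the true functional flexion axis yields spurious rotations in the other planes. A minimal sketch, assuming an illustrative 10° axis misalignment (the value is not from the study):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Pure 90 degree flexion about the true functional flexion axis (here: x).
pure_flexion = R.from_rotvec(np.deg2rad(90) * np.array([1.0, 0.0, 0.0]))

# Anatomical frame misaligned from the functional frame by an assumed
# 10 degrees about the longitudinal (z) axis.
frame = R.from_euler('z', 10, degrees=True)

# Express the same joint motion in the misaligned anatomical frame.
in_anatomical = frame.inv() * pure_flexion * frame
fe, aa, ie = in_anatomical.as_euler('xyz', degrees=True)
print(f"flexion/extension:            {fe:6.1f} deg")
print(f"ab/adduction (crosstalk):     {aa:6.1f} deg")
print(f"int/ext rotation (crosstalk): {ie:6.1f} deg")
```

    Even this small misalignment turns a pure flexion movement into apparent ab/adduction and axial rotation, which is why the study argues for reporting kinematics about a functionally determined axis.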

    Videofluoroscopic Evaluation of the Influence of a Gradually Reducing Femoral Radius on Joint Kinematics During Daily Activities in Total Knee Arthroplasty

    No full text
    Background: Paradoxical anterior translation in midflexion is reduced in total knee arthroplasties (TKAs) with a gradually reducing femoral radius, when compared to a dual-radii design. This reduction has been shown in finite element model simulations, in vitro tests, intraoperatively, and recently also in vivo during a lunge and unloaded flexion-extension. However, TKA kinematics are task dependent, and this reduction has not been tested for gait activities.
    Methods: Thirty good-outcome subjects (≥1 year postoperatively) with a unilateral cruciate-retaining TKA with a gradually reducing (n = 15) or dual-radii (n = 15) femoral radius design were assessed during 5 complete cycles of level walking, stair descent (0.18-m steps), deep knee bend, and sitting down onto and standing up from a chair, using a moving fluoroscope (25 Hz, 1 ms shutter time). Kinematic data were extracted by 2D/3D image registration.
    Results: Tibiofemoral ranges of motion for flexion-extension, abduction-adduction, internal-external rotation, and anteroposterior (AP) translation were similar for both groups, whereas the pattern of AP translation-flexion coupling differed. The subjects with the dual-radii design showed a sudden change in direction of AP translation around 30° of flexion, which was not present in the subjects with the gradually reducing femoral radius design.
    Conclusion: Through the unique ability of moving fluoroscopy, the present study confirmed that the gradually reducing femoral radius eliminated the paradoxical sudden anterior translation at 30° present in the dual-radii design in vivo during daily activities, including gait and stair descent.
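    The coupling pattern reported here, a sudden reversal of anteroposterior translation near 30° of flexion, can be screened for in a measured trace by looking for sign changes in the AP increments. A minimal sketch on synthetic data (the trace shape, slopes, and noise floor are illustrative assumptions, not study data):

```python
import numpy as np

def ap_reversals(flexion_deg, ap_mm, noise_mm=0.05):
    """Return flexion angles at which AP translation reverses direction."""
    d = np.diff(np.asarray(ap_mm, dtype=float))
    d[np.abs(d) < noise_mm] = 0.0                # suppress measurement noise
    s = np.sign(d)
    idx = np.flatnonzero(s != 0)                 # steps with a clear direction
    flips = idx[1:][s[idx[1:]] != s[idx[:-1]]]   # sign differs from previous step
    return np.asarray(flexion_deg)[flips]

# Synthetic dual-radii-like trace: posterior drift, then a sudden
# anterior jump around 30 degrees of flexion.
flex = np.linspace(0, 60, 121)
ap = np.where(flex < 30, -0.2 * flex, -3.0 + 0.15 * (flex - 30))
print(ap_reversals(flex, ap))                    # reports a reversal near 30 deg
```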

    Development of VariLeg, an exoskeleton with variable stiffness actuation: first results and user evaluation from the CYBATHLON 2016

    No full text
    Background: Powered exoskeletons are a promising approach to restore the ability to walk after spinal cord injury (SCI). However, current exoskeletons remain limited in their walking speed and ability to support tasks of daily living, such as stair climbing or overcoming ramps. Moreover, training progress for such advanced mobility tasks is rarely reported in the literature. The work presented here aims to demonstrate the basic functionality of the VariLeg exoskeleton and its ability to enable people with motor complete SCI to perform mobility tasks of daily life.
    Methods: VariLeg is a novel powered lower limb exoskeleton that enables adjustments to the compliance in the leg, with the objective of improving the robustness of walking on uneven terrain. This is achieved by an actuation system with variable mechanical stiffness in the knee joint, which was validated through test bench experiments. The feasibility and usability of the exoskeleton were tested with two paraplegic users with motor complete thoracic lesions at Th4 and Th12. The users trained three times a week, in 60 min sessions over four months, with the aim of participating in the CYBATHLON 2016 competition, which served as a field test for the usability of the exoskeleton. The progress on basic walking skills and on advanced mobility tasks such as incline walking and stair climbing is reported. Within this first study, the exoskeleton was used with a constant knee stiffness.
    Results: Test bench evaluation of the variable stiffness actuation system demonstrated that the stiffness could be rendered with an error lower than 30 Nm/rad. During training with the exoskeleton, both users acquired proficient skills in basic balancing, walking and slalom walking. In advanced mobility tasks, such as climbing ramps and stairs, only basic (needing support) to intermediate (able to perform the task independently in 25% of the attempts) skill levels were achieved. After 4 months of training, one user competed at the CYBATHLON 2016 and was able to complete 3 (stand-sit-stand, slalom and tilted path) out of the 6 obstacles of the track. No adverse events occurred during the training or the competition.
    Conclusion: The study demonstrated the applicability of the VariLeg exoskeleton to restore ambulation for people with motor complete SCI. The CYBATHLON highlighted the importance of training and of gaining experience in piloting an exoskeleton, which were just as important as the technical realization of the robot.
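    The reported test-bench figure (stiffness rendered with an error below 30 Nm/rad) boils down to comparing the commanded rotational stiffness with the slope of the measured torque-deflection relation. A minimal sketch on synthetic measurements (the commanded value, deflection range, and noise level are illustrative assumptions, not VariLeg data):

```python
import numpy as np

def stiffness_rendering_error(deflection_rad, torque_nm, commanded_nm_per_rad):
    """Absolute error between commanded and rendered (fitted) stiffness."""
    # Rendered stiffness = least-squares slope of the torque-deflection line.
    rendered_slope, _intercept = np.polyfit(deflection_rad, torque_nm, 1)
    return abs(rendered_slope - commanded_nm_per_rad)

# Synthetic bench data: commanded 200 Nm/rad, actual rendering ~190 Nm/rad.
rng = np.random.default_rng(0)
deflection = np.linspace(-0.2, 0.2, 50)                       # rad
torque = 190.0 * deflection + rng.normal(0.0, 0.5, deflection.size)
print(stiffness_rendering_error(deflection, torque, 200.0))   # ~10 Nm/rad < 30
```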

    QU-BraTS: MICCAI BraTS 2020 Challenge on Quantifying Uncertainty in Brain Tumor Segmentation - Analysis of Ranking Scores and Benchmarking Results

    Get PDF
    Deep learning (DL) models have provided state-of-the-art performance in a wide variety of medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder the translation of DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way towards clinical translation. Recently, a number of uncertainty estimation methods have been introduced for DL medical image segmentation tasks. Developing scores to evaluate and compare the performance of uncertainty measures will assist the end-user in making more informed decisions. In this study, we explore and evaluate a score developed during the BraTS 2019-2020 task on uncertainty quantification (QU-BraTS), designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This score (1) rewards uncertainty estimates that produce high confidence in correct assertions and low confidence in incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by the 14 independent teams participating in QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, and hence highlight the need for uncertainty quantification in medical image analyses. Our evaluation code is made publicly available at https://github.com/RagMeh11/QU-BraTS.