61 research outputs found

    Notch pathway inhibition controls myeloma bone disease in the murine MOPC315.BM model

    Despite evidence that deregulated Notch signalling is a master regulator of multiple myeloma (MM) pathogenesis, its contribution to myeloma bone disease remains to be resolved. Notch promotes survival of human MM cells and triggers human osteoclast activity in vitro. Here, we show that inhibition of Notch through the γ-secretase inhibitor XII (GSI XII) induces apoptosis of murine MOPC315.BM myeloma cells with high Notch activity. GSI XII impairs murine osteoclast differentiation of receptor activator of NF-κB ligand (RANKL)-stimulated RAW264.7 cells in vitro. In the murine MOPC315.BM myeloma model, GSI XII has potent anti-MM activity and reduces osteolytic lesions, as evidenced by diminished myeloma-specific monoclonal immunoglobulin (Ig)-A serum levels and by quantitative assessment of bone structure changes via high-resolution microcomputed tomography scans. Thus, we suggest that Notch inhibition through GSI XII controls myeloma bone disease mainly by targeting Notch in MM cells and possibly in osteoclasts in their microenvironment. We conclude that Notch inhibition is a valid therapeutic strategy in MM.

    Common Limitations of Image Processing Metrics: A Picture Story

    While the importance of automatic image analysis is continuously increasing, recent meta-research has revealed major flaws with respect to algorithm validation. Performance metrics are particularly key for meaningful, objective, and transparent performance assessment and validation of the automatic algorithms used, but relatively little attention has been given to the practical pitfalls of using specific metrics for a given image analysis task. These are typically related to (1) the disregard of inherent metric properties, such as the behaviour in the presence of class imbalance or small target structures, (2) the disregard of inherent data set properties, such as the non-independence of the test cases, and (3) the disregard of the actual biomedical domain interest that the metrics should reflect. This living, dynamically updated document illustrates important limitations of performance metrics commonly applied in the field of image analysis. In this context, it focuses on biomedical image analysis problems that can be phrased as image-level classification, semantic segmentation, instance segmentation, or object detection tasks. The current version is based on a Delphi process on metrics conducted by an international consortium of image analysis experts from more than 60 institutions worldwide. Comment: This is a dynamic paper on limitations of commonly used metrics. The current version discusses metrics for image-level classification, semantic segmentation, object detection and instance segmentation. For missing use cases, comments or questions, please contact [email protected] or [email protected]. Substantial contributions to this document will be acknowledged with a co-authorship.
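    The class-imbalance pitfall noted under (1) can be made concrete with a small sketch (hypothetical numbers in plain Python, not code from the paper): on a test set with 95% negatives, a classifier that always predicts the majority class reaches 95% accuracy while detecting no positives at all, whereas balanced accuracy exposes it as chance-level.

```python
# Sketch of the class-imbalance pitfall: plain accuracy rewards a trivial
# majority-class predictor, balanced accuracy (mean of per-class recalls)
# does not. The 95/5 split below is an illustrative assumption.

def accuracy(y_true, y_pred):
    # fraction of correctly classified cases
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def balanced_accuracy(y_true, y_pred):
    # mean of per-class recalls; insensitive to class prevalence
    recalls = []
    for c in set(y_true):
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

y_true = [0] * 95 + [1] * 5   # imbalanced ground truth: 95% negatives
y_pred = [0] * 100            # "classifier" that always predicts negative

print(accuracy(y_true, y_pred))           # 0.95 -- looks strong
print(balanced_accuracy(y_true, y_pred))  # 0.5  -- chance level
```

    The same evaluation thus yields "95% correct" or "no better than guessing" depending on the metric, which is exactly the kind of property-disregard the document catalogues.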

    Understanding metric-related pitfalls in image analysis validation

    Validation metrics are key for the reliable tracking of scientific progress and for bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately in relation to the underlying research problem. This could be attributed to a lack of accessibility of metric-related knowledge: while taking into account the individual strengths, weaknesses, and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multi-stage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides the first reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Focusing on biomedical image analysis but with the potential of transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. To facilitate comprehension, illustrations and specific examples accompany each pitfall. As a structured body of information accessible to researchers of all levels of expertise, this work enhances global comprehension of a key topic in image analysis validation. Comment: Shared first authors: Annika Reinke, Minu D. Tizabi; shared senior authors: Paul F. Jäger, Lena Maier-Hein.
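    One pitfall family covered by such taxonomies concerns small target structures in segmentation. As a hypothetical sketch (not an example from the paper): the widely used Dice score, 2|A∩B| / (|A| + |B|), moves far more for a fixed absolute error on a tiny structure than on a large one, so identical pixel-level mistakes produce very different scores.

```python
# Sketch of the small-structure pitfall for the Dice score. Masks are
# modelled as sets of pixel coordinates; structure sizes are illustrative.

def dice(pred, gt):
    # Dice similarity coefficient between two pixel sets
    inter = len(pred & gt)
    return 2 * inter / (len(pred) + len(gt)) if (pred or gt) else 1.0

small_gt = {(0, c) for c in range(4)}                       # 4-pixel target
small_pred = set(sorted(small_gt)[:2])                      # misses 2 pixels

large_gt = {(r, c) for r in range(20) for c in range(20)}   # 400-pixel target
large_pred = large_gt - set(sorted(large_gt)[:2])           # misses 2 pixels

print(dice(small_pred, small_gt))  # ~0.67: 2 missed pixels are punished hard
print(dice(large_pred, large_gt))  # ~0.997: same 2 pixels barely register
```

    The same two-pixel error thus costs a third of the score on the small structure but is negligible on the large one, which is why metric choice must reflect the size of the structures that matter clinically.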

    Why is the Winner the Best?

    International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multicenter study with all 80 competitions that were conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses performed on comprehensive descriptions of the submitted algorithms, linked to their rank as well as the underlying participation strategies, revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and post-processing (66%). The "typical" lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly ranked teams: the reflection of the metrics in the method design and the focus on analyzing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art, but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve algorithm development strategies when approaching new problems, and (2) focus on open research questions revealed by this work.

    Biomarker candidates of neurodegeneration in Parkinson’s disease for the evaluation of disease-modifying therapeutics

    Reliable biomarkers that can be used for early diagnosis and tracking disease progression are the cornerstone of the development of disease-modifying treatments for Parkinson’s disease (PD). The German Society of Experimental and Clinical Neurotherapeutics (GESENT) has convened a Working Group to review the current status of proposed biomarkers of neurodegeneration according to the following criteria and to develop a consensus statement on biomarker candidates for the evaluation of disease-modifying therapeutics in PD. The proposed criteria are that the biomarker should be linked to fundamental features of PD neuropathology and to the mechanisms underlying neurodegeneration in PD, should correlate with disease progression as assessed by clinical rating scales, should monitor the actual disease status, should be pre-clinically validated, and should be confirmed by at least two independent studies conducted by qualified investigators with the results published in peer-reviewed journals. To date, available data have not yet revealed one reliable biomarker to detect early neurodegeneration in PD and to detect and monitor the effects of drug candidates on the disease process, but some promising biomarker candidates exist, such as antibodies against neuromelanin, pathological forms of α-synuclein, DJ-1, and patterns of gene expression, metabolomic and protein profiling. Almost none of the biomarker candidates has been investigated in relation to treatment effects, validated in experimental models of PD, and confirmed in independent studies.

    Effective condition monitoring and assessment for more sophisticated asset management systems

    No full text
    Abstract not available.