    Vilhelm Lundstedt’s ‘Legal Machinery’ and the Demise of Juristic Practice

    This article aims to contribute to the academic debate on the general crisis faced by law schools and the legal professions by discussing why juristic practice is a matter of experience rather than knowledge. Through a critical contextualisation of Vilhelm Lundstedt’s thought under processes of globalisation and transnationalism, it is argued that the demise of the jurist’s function is related to law’s scientification as brought about by the metaphysical construction of reality. The suggested roadmap will in turn reveal that the current voiding of juristic practice and its teaching is part of the crisis regarding what makes us human.

    The Medical Segmentation Decathlon

    International challenges have become the de facto standard for comparative assessment of image analysis algorithms given a specific task. Segmentation is so far the most widely investigated medical image processing task, but the various segmentation challenges have typically been organized in isolation, such that algorithm development was driven by the need to tackle a single specific clinical problem. We hypothesized that a method capable of performing well on multiple tasks will generalize well to a previously unseen task and potentially outperform a custom-designed solution. To investigate this hypothesis, we organized the Medical Segmentation Decathlon (MSD), a biomedical image analysis challenge in which algorithms compete across a multitude of both tasks and modalities. The underlying data set was designed to explore the axes of difficulty typically encountered when dealing with medical images, such as small data sets, unbalanced labels, multi-site data and small objects. The MSD challenge confirmed that algorithms with consistently good performance on a set of tasks preserved their good average performance on a different set of previously unseen tasks. Moreover, by monitoring the MSD winner for two years, we found that this algorithm continued to generalize well to a wide range of other clinical problems, further confirming our hypothesis. Three main conclusions can be drawn from this study: (1) state-of-the-art image segmentation algorithms are mature, accurate, and generalize well when retrained on unseen tasks; (2) consistent algorithmic performance across multiple tasks is a strong surrogate of algorithmic generalizability; (3) the training of accurate AI segmentation models is now commoditized to non-AI experts.
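The second conclusion rests on comparing per-task segmentation scores and ranking methods by their average. A minimal sketch of the standard Dice overlap metric and the cross-task comparison it implies (method names and score values are hypothetical, not from the challenge):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Sørensen–Dice coefficient for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return 2.0 * intersection / denom

# Hypothetical per-task Dice scores for two methods across three tasks:
scores = {
    "method_a": [0.91, 0.88, 0.55],  # strong on two tasks, weak on one
    "method_b": [0.85, 0.84, 0.83],  # consistent across all tasks
}
# Ranking by mean Dice rewards the consistent method.
mean_dice = {m: sum(s) / len(s) for m, s in scores.items()}
```

Under this toy ranking, the consistent `method_b` (mean 0.84) beats the erratic `method_a` (mean 0.78), illustrating why cross-task consistency serves as a surrogate for generalizability.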

    All-flavor constraints on nonstandard neutrino interactions and generalized matter potential with three years of IceCube DeepCore data

    We report constraints on nonstandard neutrino interactions (NSI) from the observation of atmospheric neutrinos with IceCube, limiting all individual coupling strengths from a single dataset. Furthermore, IceCube is the first experiment to constrain flavor-violating and nonuniversal couplings simultaneously. Hypothetical NSI are generically expected to arise due to the exchange of a new heavy mediator particle. Neutrinos propagating in matter scatter off fermions in the forward direction with negligible momentum transfer. Hence the study of the matter effect on neutrinos propagating in the Earth is sensitive to NSI independently of the energy scale of new physics. We present constraints on NSI obtained with an all-flavor event sample of atmospheric neutrinos based on three years of IceCube DeepCore data. The analysis uses neutrinos arriving from all directions, with reconstructed energies between 5.6 GeV and 100 GeV. We report constraints on the individual NSI coupling strengths considered singly, allowing for complex phases in the case of flavor-violating couplings. This demonstrates that IceCube is sensitive to the full NSI flavor structure at a level competitive with limits from the global analysis of all other experiments. In addition, we investigate a generalized matter potential, whose overall scale and flavor structure are also constrained.
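The matter potential discussed above is conventionally parametrized by adding NSI couplings ε_αβ to the standard MSW term in the propagation Hamiltonian. A common textbook form is sketched below; normalization conventions (e.g. which fermion density the ε couplings are referred to) vary between analyses and may differ from this paper's:

```latex
H = \frac{1}{2E}\, U \,\mathrm{diag}\!\left(0,\ \Delta m^2_{21},\ \Delta m^2_{31}\right) U^\dagger
  + \sqrt{2}\, G_F N_e
    \begin{pmatrix}
      1 + \varepsilon_{ee}   & \varepsilon_{e\mu}      & \varepsilon_{e\tau} \\
      \varepsilon_{e\mu}^{*} & \varepsilon_{\mu\mu}    & \varepsilon_{\mu\tau} \\
      \varepsilon_{e\tau}^{*} & \varepsilon_{\mu\tau}^{*} & \varepsilon_{\tau\tau}
    \end{pmatrix}
```

The off-diagonal (flavor-violating) couplings are in general complex, which is why the analysis quotes limits allowing for complex phases; the diagonal (nonuniversal) couplings are real.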

    Searching for High-energy Neutrino Emission from Galaxy Clusters with IceCube

    Galaxy clusters have the potential to accelerate cosmic rays (CRs) to ultrahigh energies via accretion shocks or embedded CR acceleration sites. The CRs with energies below the Hillas condition will be confined within the cluster and eventually interact with the intracluster medium gas to produce secondary neutrinos and gamma rays. Using 9.5 yr of muon neutrino track events from the IceCube Neutrino Observatory, we report the results of a stacking analysis of 1094 galaxy clusters with masses ≳10^{14} M⊙ and redshifts between 0.01 and ∼1 detected by the Planck mission via the Sunyaev–Zel’dovich effect. We find no evidence for significant neutrino emission and report upper limits on the cumulative unresolved neutrino flux from massive galaxy clusters after accounting for the completeness of the catalog up to a redshift of 2, assuming three different weighting scenarios for the stacking and three different power-law spectra. Weighting the sources according to mass and distance, we set upper limits at a 90% confidence level that constrain the flux of neutrinos from massive galaxy clusters (≳10^{14} M⊙) to be no more than 4.6% of the diffuse IceCube observations at 100 TeV, assuming an unbroken E^{−2.5} power-law spectrum.
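One of the weighting scenarios mentioned above ties each cluster's expected contribution to its mass and distance. A minimal sketch of such a stacking weight, assuming the expected flux scales as w_i ∝ M_i / d_i² (a simple luminosity-tracks-mass assumption; the paper's actual weighting schemes may differ, and the numbers below are made up):

```python
import numpy as np

# Hypothetical cluster sample: masses in solar masses, luminosity
# distances in Mpc (illustrative values, not from the Planck catalog).
masses = np.array([2.0e14, 5.0e14, 1.0e15])
distances = np.array([50.0, 200.0, 900.0])

# Mass/distance weighting: expected flux ~ mass / distance^2,
# so nearby massive clusters dominate the stacked expectation.
weights = masses / distances**2
weights /= weights.sum()  # normalize so the weights sum to 1
```

With these illustrative numbers the nearest cluster dominates the stack despite being the least massive, which is the qualitative behavior a mass/distance weighting encodes.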