
    Lesion Search with Self-supervised Learning

    Content-based image retrieval (CBIR) with self-supervised learning (SSL) accelerates clinicians' interpretation of similar images without requiring manual annotations. We develop a CBIR system based on the contrastive learning framework SimCLR, incorporating generalized-mean (GeM) pooling followed by L2 normalization to classify lesion types and retrieve similar images ahead of clinicians' analysis. Results show improved performance. We additionally build an open-source application for image analysis and retrieval. The application is easy to integrate, relieving manual effort and suggesting the potential to support clinicians' everyday activities.
    Comment: ICLR 2023 Tiny Papers
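    The GeM pooling and L2 normalization steps mentioned above can be sketched in a few lines (a minimal NumPy illustration, not the authors' implementation; the power `p=3` and the toy feature map are assumptions):

    ```python
    import numpy as np

    def gem_pool(features, p=3.0, eps=1e-6):
        """Generalized-mean (GeM) pooling over the spatial dims of a (C, H, W)
        feature map. p=1 recovers average pooling; large p approaches max pooling."""
        clamped = np.clip(features, eps, None)            # keep bases positive
        return (clamped ** p).mean(axis=(1, 2)) ** (1.0 / p)

    def l2_normalize(v, eps=1e-12):
        """L2-normalize the pooled descriptor so retrieval can rank candidates
        by cosine similarity (a dot product of unit vectors)."""
        return v / (np.linalg.norm(v) + eps)

    # toy feature map: 4 channels over an 8x8 spatial grid
    fmap = np.abs(np.random.default_rng(0).normal(size=(4, 8, 8)))
    descriptor = l2_normalize(gem_pool(fmap))
    assert np.isclose(np.linalg.norm(descriptor), 1.0)    # unit-length descriptor
    ```

    In a retrieval setting, each database image would be reduced to such a descriptor once, and a query's descriptor compared against them by dot product.
    
    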

    Guided Proofreading of Automatic Segmentations for Connectomics

    Automatic cell image segmentation methods in connectomics produce merge and split errors, which require correction through proofreading. Previous research has identified the visual search for these errors as the bottleneck in interactive proofreading. To aid error correction, we develop two classifiers that automatically recommend candidate merges and splits to the user. These classifiers use a convolutional neural network (CNN) trained on errors in automatic segmentations against expert-labeled ground truth. Our classifiers detect potentially erroneous regions by considering a large context region around a segmentation boundary. Corrections can then be performed by a user with simple yes/no decisions, which reduces the variation of information 7.5x faster than previous proofreading methods. We also present a fully automatic mode that uses a probability threshold to make merge/split decisions. Extensive experiments with the automatic approach, and comparisons between novice and expert users, demonstrate that our method performs favorably against state-of-the-art proofreading methods on different connectomics datasets.
    Comment: Supplemental material available at http://rhoana.org/guidedproofreading/supplemental.pd
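    The fully-automatic mode described above reduces to thresholding classifier scores. The following is an illustrative sketch only; the candidate IDs, probabilities, and the 0.95 threshold are stand-ins for real CNN outputs, not values from the paper:

    ```python
    def auto_proofread(candidates, threshold=0.95):
        """Accept a candidate merge/split correction when the classifier's
        predicted error probability meets the threshold; defer the rest
        to a human for a yes/no decision."""
        accepted, deferred = [], []
        for region_id, p_error in candidates:
            (accepted if p_error >= threshold else deferred).append(region_id)
        return accepted, deferred

    # toy candidates: (boundary id, predicted probability that it is an error)
    candidates = [("b1", 0.99), ("b2", 0.40), ("b3", 0.97)]
    accepted, deferred = auto_proofread(candidates)
    # accepted -> ["b1", "b3"], deferred -> ["b2"]
    ```

    The threshold trades off automation against safety: raising it sends more ambiguous boundaries to the human reviewer.
    
    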

    Promoting Sustainability through Next-Generation Biologics Drug Development

    The fourth industrial revolution, announced in 2011, aims to transform traditional manufacturing processes. As part of this revolution, disruptive innovations in drug development and data science approaches have the potential to optimize CMC (chemistry, manufacturing, and controls). Real-time simulation of processes using “digital twins” can maximize efficiency while improving sustainability. In this review, we investigate how the United Nations’ 17 sustainability goals can apply to next-generation drug development. We analyze state-of-the-art laboratory leadership, inclusive personnel recruiting, the latest therapy approaches, and intelligent process automation. We also outline how modern data science techniques and machine learning tools for CMC help shorten drug development time, reduce failure rates, and minimize resource usage. Finally, we systematically analyze existing approaches and compare them with our experiences at the high-throughput laboratory KIWI-biolab at TU Berlin. We describe a sustainable business model that accelerates scientific innovation and supports global action toward a sustainable future.
    Funding: BMBF, 01DD20002A, joint project: International Future Lab for AI-supported Bioprocess Development “KIWI-biolab”; subproject: coordination and establishment of an AI center of excellence

    MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer Vision

    Prior to the deep learning era, shape was commonly used to describe objects. Nowadays, state-of-the-art (SOTA) algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This divergence is evident from the numerous shape-related publications in premier vision conferences as well as the growing popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915 models). For the medical domain, we present a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments, called MedShapeNet, created to facilitate the translation of data-driven vision algorithms to medical applications and to adapt SOTA vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. As of today, MedShapeNet includes 23 datasets with more than 100,000 shapes paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. As examples, we present use cases in the classification of brain tumors, facial and skull reconstruction, multi-class anatomy completion, education, and 3D printing. In the future, we will extend the data and improve the interfaces. The project pages are: https://medshapenet.ikim.nrw/ and https://github.com/Jianningli/medshapenet-feedback
    Comment: 16 pages
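    A discriminative (classification) benchmark like those mentioned above starts by pairing each shape with a class label. The sketch below assumes a hypothetical local directory layout (one folder per anatomical class, shapes stored as `.ply` files) after downloading from the web interface; it does not use MedShapeNet's actual Python API:

    ```python
    from pathlib import Path

    def index_shapes(root):
        """Pair each shape file with a class label taken from its parent
        directory name, e.g. shapes/liver/case1.ply -> ("...", "liver").
        The folder-per-class layout is an assumption for illustration."""
        return [(path, path.parent.name)
                for path in sorted(Path(root).rglob("*.ply"))]
    ```

    The resulting (file, label) pairs can then feed any point-cloud or mesh classifier.
    
    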

    Evaluating ‘Graphical Perception’ with CNNs


    SlicerTMS: Interactive Real-time Visualization of Transcranial Magnetic Stimulation using Augmented Reality and Deep Learning

    Transcranial magnetic stimulation (TMS) is a non-invasive neuromodulation approach that effectively treats various brain disorders. One of the critical factors in the success of TMS treatment is accurate coil placement, which can be challenging, especially when targeting specific brain areas for individual patients. Calculating the optimal coil placement and the resulting electric field on the brain surface can be expensive and time-consuming. We introduce SlicerTMS, a simulation method that allows real-time visualization of the TMS electromagnetic field within the medical imaging platform 3D Slicer. Our software leverages a 3D deep neural network, supports cloud-based inference, and includes augmented reality visualization using WebXR. We evaluate the performance of SlicerTMS with multiple hardware configurations and compare it against the existing TMS visualization application SimNIBS. All our code, data, and experiments are openly available: https://github.com/lorifranke/SlicerTMS
    Comment: 11 pages, 3 figures, 2 tables, MICCAI