
    TuNet: End-to-end Hierarchical Brain Tumor Segmentation using Cascaded Networks

    Glioma is one of the most common types of brain tumors; it arises from the glial cells of the human brain and spinal cord. In addition to having a high mortality rate, glioma treatment is also very expensive. Hence, automatic and accurate segmentation and measurement from the early stages are critical to improving patient survival rates and reducing treatment costs. In the present work, we propose a novel end-to-end cascaded network for semantic segmentation that exploits the hierarchical structure of the tumor sub-regions, with ResNet-like blocks and Squeeze-and-Excitation modules after each convolution and concatenation block. By utilizing cross-validation, an average ensemble technique, and a simple post-processing technique, we obtained Dice scores of 88.06, 80.84, and 80.29, and Hausdorff distances (95th percentile) of 6.10, 5.17, and 2.21 for the whole tumor, tumor core, and enhancing tumor, respectively, on the online test set. Comment: Accepted at MICCAI BrainLes 201
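    The Squeeze-and-Excitation modules mentioned above recalibrate feature maps channel-wise through a squeeze (global pooling) step followed by an excitation (bottleneck MLP) step. As a rough illustration only, and not the authors' implementation, a minimal PyTorch sketch of a 3D SE block might look as follows; the channel count and the reduction ratio of 8 are assumptions.

```python
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    """Channel-wise Squeeze-and-Excitation for 3D feature maps (illustrative sketch)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)          # squeeze: global spatial average
        self.fc = nn.Sequential(                     # excitation: bottleneck MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        w = self.pool(x).view(b, c)                  # (B, C) channel descriptor
        w = self.fc(w).view(b, c, 1, 1, 1)           # per-channel weights in [0, 1]
        return x * w                                 # rescale the feature maps

if __name__ == "__main__":
    feats = torch.randn(2, 32, 16, 16, 16)           # toy batch of 3D feature maps
    print(SEBlock3D(32)(feats).shape)                # torch.Size([2, 32, 16, 16, 16])
```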

    UBQLN4 Represses Homologous Recombination and Is Overexpressed in Aggressive Tumors

    Genomic instability can be a hallmark of both human genetic disease and cancer. We identify a deleterious UBQLN4 mutation in families with an autosomal recessive syndrome reminiscent of genome instability disorders. UBQLN4 deficiency leads to increased sensitivity to genotoxic stress and delayed DNA double-strand break (DSB) repair. The proteasomal shuttle factor UBQLN4 is phosphorylated by ATM and interacts with ubiquitylated MRE11 to mediate early steps of homologous recombination-mediated DSB repair (HRR). Loss of UBQLN4 leads to chromatin retention of MRE11, promoting non-physiological HRR activity in vitro and in vivo. Conversely, UBQLN4 overexpression represses HRR and favors non-homologous end joining. Moreover, we find UBQLN4 overexpressed in aggressive tumors. In line with an HRR defect in these tumors, UBQLN4 overexpression is associated with PARP1 inhibitor sensitivity. UBQLN4 therefore curtails HRR activity through removal of MRE11 from damaged chromatin and thus offers a therapeutic window for PARP1 inhibitor treatment in UBQLN4-overexpressing tumors.

    3D U-Net Based Brain Tumor Segmentation and Survival Days Prediction

    The past few years have witnessed the prevalence of deep learning in many application scenarios, among which is medical image processing. Diagnosis and treatment of brain tumors require an accurate and reliable segmentation of brain tumors as a prerequisite. However, such work conventionally demands a significant amount of a brain surgeon's time. Computer vision techniques could relieve surgeons of this tedious marking procedure. In this paper, a 3D U-Net based deep learning model has been trained with the help of brain-wise normalization and patching strategies for the brain tumor segmentation task in the BraTS 2019 competition. Dice coefficients for the enhancing tumor, tumor core, and whole tumor are 0.737, 0.807 and 0.894, respectively, on the validation dataset; the corresponding values on the test dataset are 0.778, 0.798 and 0.852. Furthermore, numerical features including the ratio of tumor size to brain size and the area of the tumor surface, as well as the age of the subjects, are extracted from the predicted tumor labels and used for the overall survival days prediction task. The prediction accuracy reached 0.448 on the validation dataset and 0.551 on the final test dataset. Comment: Third place award of the 2019 MICCAI BraTS challenge survival task [BraTS 2019](https://www.med.upenn.edu/cbica/brats2019.html)
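    The survival-prediction step relies on simple hand-crafted features (tumor-to-brain size ratio, tumor surface area, subject age). A hypothetical NumPy sketch of how such features could be derived from a predicted label map is shown below; the label convention (any non-zero label counts as tumor) and the face-neighbor surface estimate are assumptions, not the authors' code.

```python
import numpy as np

def extract_survival_features(pred_labels: np.ndarray,
                              brain_mask: np.ndarray,
                              age: float) -> np.ndarray:
    """Toy feature vector from a predicted tumor label map (assumed: labels > 0 are tumor)."""
    tumor = pred_labels > 0
    # ratio of tumor volume to brain volume
    size_ratio = tumor.sum() / max(brain_mask.sum(), 1)
    # crude surface-area proxy: tumor voxels with at least one non-tumor 6-neighbor
    padded = np.pad(tumor, 1)
    neighbors = sum(np.roll(padded, shift, axis=ax)[1:-1, 1:-1, 1:-1]
                    for ax in range(3) for shift in (-1, 1))
    surface = np.logical_and(tumor, neighbors < 6).sum()
    return np.array([size_ratio, surface, age], dtype=float)

# toy example with a random volume standing in for a predicted segmentation
labels = (np.random.rand(32, 32, 32) > 0.95).astype(int)
mask = np.ones_like(labels)
print(extract_survival_features(labels, mask, age=60.0))
```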

    Sampling Once…Using Data Multiple Times.

    Marine ecosystem variability shows large fluctuations on a wide variety of scales, from seconds to millennia and from local to global. This limits our ability to observe these systems and to develop good tools to predict how changes in the environment may affect their physical and biological properties. It also limits our ability to differentiate anthropogenic from natural processes. An example is how difficult it is to compare data collected in different sampling locations and at different times. Time series data help resolve both short- and longer-term scales of variability and provide context for traditional process-oriented studies. Time series projects focusing on biogeochemical and ecological observations have yielded important scientific results. They have helped to: (i) evaluate the statistical significance of the ranges of variability of many environmental variables and biological communities, and (ii) quantify the interactions between key physical/chemical oceanographic processes and biological rates in plankton communities. As a result, time series are helping to estimate warming rates and trends as well as the effects of global change on biota. They have established reference baselines to evaluate the magnitude of environmental perturbations and to estimate recovery times of the biodiversity and productivity of specific trophic levels. In spite of their scientific value, marine time series are difficult to maintain over time because of costs and the availability of trained personnel; only a few survive beyond a decade. There is great potential in sharing and combining marine data sets from different time series programs from around the world. This allows for comparisons of changes occurring in distant locations, helps detect changes that occur at broad, perhaps even global, scales, and helps distinguish them from local imbalances or fluctuations. Sharing data can also have important economic and social benefits. For instance, efficient use of existing marine data represents a significant cost saving relative to the roughly 2 billion Euro currently spent each year in the EU on collecting and accessing marine data. From the social point of view, the demand from different stakeholders for answers to the challenges posed by changes in the marine environment is growing rapidly. Sharing and accessing time series data would reduce the uncertainties in the management of marine resources and ecosystem services. The UNESCO IOC advocates that: (i) an observation not made today is lost forever, (ii) existing observations are lost if not made accessible, (iii) the collective value of data sets is greater than their dispersed value, and (iv) open access to standardised time series data must be pursued as a common, coordinated international goal.

    Hetero-Modal Variational Encoder-Decoder for Joint Modality Completion and Segmentation

    We propose a new deep learning method for tumour segmentation when dealing with missing imaging modalities. Instead of producing one network for each possible subset of observed modalities or using arithmetic operations to combine feature maps, our hetero-modal variational 3D encoder-decoder independently embeds all observed modalities into a shared latent representation. Missing data and the tumour segmentation can then be generated from this embedding. In our scenario, the input is a random subset of modalities. We demonstrate that the optimisation problem can be seen as a mixture sampling. In addition to this, we introduce a new network architecture building upon both the 3D U-Net and the Multi-Modal Variational Auto-Encoder (MVAE). Finally, we evaluate our method on BraTS2018 using subsets of the imaging modalities as input. Our model outperforms the current state-of-the-art method for dealing with missing modalities and achieves similar performance to the subset-specific equivalent networks. Comment: Accepted at MICCAI 201
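    MVAE-style models commonly fuse the Gaussian posteriors of the observed modalities into a single shared latent via a product of experts, which is one way to realize the "shared latent representation" described above. The sketch below illustrates only that fusion step; the tensor layout and the omission of an explicit prior expert are assumptions rather than details taken from the paper.

```python
import torch

def product_of_gaussians(mus: torch.Tensor, logvars: torch.Tensor):
    """Fuse per-modality Gaussian posteriors of shape (M, B, D) into one (B, D) latent.

    Each observed modality contributes one expert; precisions add, means are
    precision-weighted. A unit-Gaussian prior expert could be appended by the caller.
    """
    precisions = torch.exp(-logvars)                 # 1 / sigma^2 per expert
    fused_var = 1.0 / precisions.sum(dim=0)          # combined variance
    fused_mu = fused_var * (mus * precisions).sum(dim=0)
    return fused_mu, torch.log(fused_var)

# toy example: 3 observed modalities, batch of 2, latent dimension 4
mus = torch.randn(3, 2, 4)
logvars = torch.zeros(3, 2, 4)
mu, logvar = product_of_gaussians(mus, logvars)
print(mu.shape, logvar.shape)                        # torch.Size([2, 4]) torch.Size([2, 4])
```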

    The DWD climate predictions website: Towards a seamless outlook based on subseasonal, seasonal and decadal predictions

    The climate predictions website of the Deutscher Wetterdienst (DWD, https://www.dwd.de/climatepredictions) presents a consistent operational outlook for the coming weeks, months and years, focusing on the needs of German users. At global scale, subseasonal predictions from the European Centre for Medium-Range Weather Forecasts as well as seasonal and decadal predictions from the DWD are used. Statistical downscaling is applied to achieve high resolution over Germany. Lead-time dependent bias correction is performed on all time scales. Additionally, decadal predictions are recalibrated. The website offers ensemble mean and probabilistic predictions for temperature and precipitation combined with their skill (mean squared error skill score, ranked probability skill score). Two levels of complexity are offered: basic climate predictions display simple, regionally averaged information for Germany, German regions and cities as maps, time series and tables, with the skill presented as a traffic light. Expert climate predictions show complex, gridded predictions for Germany (at high resolution), Europe and the world as maps and time series, with the skill displayed as the size of dots whose color is related to the signal in the prediction. The website was developed in cooperation with users from different sectors via surveys, workshops and meetings to guarantee its understandability and usability. The users realize the potential of climate predictions, but some need advice in using probabilistic predictions and skill. Future activities will include the further development of predictions to improve skill (multi-model ensembles, teleconnections), the introduction of additional products (data provision, extremes) and the further clarification of the information (interactivity, video clips).
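    The mean squared error skill score quoted above compares forecast errors against a reference forecast, conventionally as 1 - MSE_forecast / MSE_reference. The short sketch below shows that calculation against a climatology reference; the reference choice and the toy numbers are illustrative assumptions, not part of DWD's processing chain.

```python
import numpy as np

def msess(forecast: np.ndarray, observed: np.ndarray, reference: np.ndarray) -> float:
    """Mean squared error skill score: 1 is perfect, 0 means no better than the reference."""
    mse_fc = np.mean((forecast - observed) ** 2)
    mse_ref = np.mean((reference - observed) ** 2)
    return 1.0 - mse_fc / mse_ref

# toy example: seasonal temperature anomalies vs. a zero-anomaly climatology reference
obs = np.array([0.4, -0.2, 0.9, 0.1])
fc = np.array([0.3, -0.1, 0.7, 0.2])
clim = np.zeros_like(obs)
print(round(msess(fc, obs, clim), 3))                # positive value -> skill over climatology
```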

    [Work in progress] Scalable, out-of-the box segmentation of individual particles from mineral samples acquired with micro CT

    Minerals are indispensable for a functioning modern society, yet their supply is limited, creating a need to optimize their exploration and extraction both from ores and from recyclable materials. Typically, these processes must be meticulously adapted to the precise properties of the processed particles, requiring an extensive characterization of their shapes and appearances as well as of the overall material composition. Current approaches perform this analysis based on bulk segmentation and characterization of particles imaged with a micro CT, and rely on rudimentary postprocessing techniques to separate touching particles. However, because they cannot reliably perform this separation and must be retrained or reconfigured for each new image, these approaches leave considerable potential untapped. Here, we propose ParticleSeg3D, an instance segmentation method that is able to extract individual particles from large micro CT images of mineral samples embedded in an epoxy matrix. Our approach is based on the powerful nnU-Net framework, introduces a particle size normalization, makes use of a border-core representation to enable instance segmentation, and is trained with a large dataset containing particles of numerous different materials and minerals. We demonstrate that ParticleSeg3D can be applied out of the box to a large variety of particle types, including materials and appearances that were not part of the training set. Thus, no further manual annotations or retraining are required when applying the method to new mineral samples, enabling substantially higher scalability of experiments than existing methods. Our code and dataset are made publicly available.
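    The border-core representation turns instance segmentation into a semantic problem: the cores of touching particles remain disconnected, so individual instances can be recovered by labelling core components and growing them back over the border class. The scipy sketch below illustrates that recovery step; the label convention and the nearest-core assignment are simplifying assumptions, not the ParticleSeg3D postprocessing.

```python
import numpy as np
from scipy import ndimage

CORE, BORDER = 1, 2   # assumed label convention for the semantic prediction

def border_core_to_instances(semantic: np.ndarray) -> np.ndarray:
    """Recover particle instances from a border-core semantic map (illustrative sketch)."""
    core_labels, _ = ndimage.label(semantic == CORE)    # disconnected cores -> instance ids
    foreground = semantic > 0
    # assign every voxel to its nearest labelled core, then keep foreground voxels only
    _, indices = ndimage.distance_transform_edt(core_labels == 0, return_indices=True)
    instances = core_labels[tuple(indices)]
    instances[~foreground] = 0
    return instances

# toy example: two touching "particles" whose cores are separated by a border layer
sem = np.zeros((1, 10, 5), dtype=int)
sem[0, 1:4, 1:4] = CORE
sem[0, 4:6, 1:4] = BORDER
sem[0, 6:9, 1:4] = CORE
print(np.unique(border_core_to_instances(sem)))          # expected: [0 1 2]
```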