
    LatentAugment: Data Augmentation via Guided Manipulation of GAN's Latent Space

    Data Augmentation (DA) is a technique to increase the quantity and diversity of the training data, thereby alleviating overfitting and improving generalisation. However, standard DA produces synthetic data with limited diversity. Generative Adversarial Networks (GANs) may unlock additional information in a dataset by generating synthetic samples that have the appearance of real images. However, generative models struggle to simultaneously address three key requirements: fidelity and high-quality samples; diversity and mode coverage; and fast sampling. GANs generate high-quality samples rapidly, but have poor mode coverage, limiting their adoption in DA applications. We propose LatentAugment, a DA strategy that overcomes the low diversity of GANs, opening them up for use in DA applications. Without external supervision, LatentAugment modifies latent vectors and moves them into latent-space regions that maximise the synthetic images' diversity and fidelity. It is also agnostic to the dataset and the downstream task. A wide set of experiments shows that LatentAugment improves the generalisation of a deep model translating from MRI to CT, beating both standard DA and GAN-based sampling. Moreover, compared with GAN-based sampling, LatentAugment synthetic samples show superior mode coverage and diversity. Code is available at: https://github.com/ltronchin/LatentAugment
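
    The guided manipulation can be pictured as gradient steps on the latent code that trade off a fidelity term (e.g., a discriminator score) against a diversity term (e.g., pairwise distance within the synthetic batch). The sketch below is a hypothetical illustration of that idea in PyTorch, assuming a pretrained generator G and discriminator D; it is not the authors' implementation, which is available at the repository linked above.

        # Hypothetical sketch of guided latent-space manipulation for augmentation
        # (not the authors' code); G and D are an assumed pretrained GAN pair.
        import torch

        def latent_augment(G, D, z_init, steps=20, lr=0.05, w_fid=1.0, w_div=1.0):
            """Move latent codes towards regions of higher fidelity and diversity.

            G, D   : pretrained generator / discriminator (torch.nn.Module)
            z_init : (B, latent_dim) starting latent codes, B > 1
            """
            z = z_init.clone().requires_grad_(True)
            opt = torch.optim.Adam([z], lr=lr)
            for _ in range(steps):
                x = G(z)                                 # synthetic images
                fidelity = D(x).mean()                   # discriminator "realness" proxy
                feats = x.flatten(1)                     # crude per-sample features
                pair = torch.cdist(feats, feats)         # (B, B) pairwise distances
                diversity = pair.sum() / (pair.numel() - len(pair))  # mean off-diagonal
                loss = -(w_fid * fidelity + w_div * diversity)       # maximise both terms
                opt.zero_grad()
                loss.backward()
                opt.step()
            return G(z.detach())                         # augmented synthetic samples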

    Simulated Data for Linear Regression with Structured and Sparse Penalties: Introducing pylearn-simulate

    A currently very active field of research is how to incorporate structure and prior knowledge in machine learning methods. This has led to numerous developments in the field of non-smooth convex minimization. With recently developed methods it is possible to perform an analysis in which the computed model can be linked to a given structure of the data while simultaneously doing variable selection to find a few important features in the data. However, there is still no way to unambiguously simulate data to test proposed algorithms, since the exact solutions to such problems are unknown. The main aim of this paper is to present a theoretical framework for generating simulated data. These simulated data are appropriate when comparing optimization algorithms in the context of linear regression problems with sparse and structured penalties. Additionally, this approach allows the user to control the signal-to-noise ratio, the correlation structure of the data and the optimization problem to which they are the solution. The traditional approach is to simulate random data without taking into account the actual model that will be fit to the data, but with such an approach it is not possible to know the exact solution of the underlying optimization problem. With our contribution, it is possible to know the exact theoretical solution of a penalized linear regression problem, and it is thus possible to compare algorithms without the need to use, e.g., cross-validation. We also present our implementation, the Python package pylearn-simulate, available at https://github.com/neurospin/pylearn-simulate and released under the BSD 3-clause license. We describe the package and give examples at the end of the paper.
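
    The central idea, constructing data for which the exact minimiser of the penalized problem is known in advance, can be made concrete for ridge regression, where the closed-form solution makes the construction explicit. The sketch below illustrates that idea with NumPy only; it does not use the pylearn-simulate API, and the penalty is simplified to the ridge case.

        # Construct (X, y) so that a chosen beta_star is the EXACT solution of
        #   argmin_beta ||y - X beta||^2 + lam * ||beta||^2   (ridge regression).
        # This mirrors the paper's idea of simulating data with a known solution,
        # here specialised to a smooth penalty for brevity.
        import numpy as np

        rng = np.random.default_rng(0)
        n, p, lam = 50, 10, 2.0

        X = rng.standard_normal((n, p))
        beta_star = np.zeros(p)
        beta_star[:3] = [1.5, -2.0, 0.5]          # the solution we want to recover

        # The ridge solution satisfies (X^T X + lam I) beta = X^T y.  Choosing
        #   y = X (X^T X)^{-1} (X^T X + lam I) beta_star
        # makes this hold exactly for beta_star.
        XtX = X.T @ X
        y = X @ np.linalg.solve(XtX, (XtX + lam * np.eye(p)) @ beta_star)

        # Verify: solving the ridge problem recovers beta_star to machine precision.
        beta_hat = np.linalg.solve(XtX + lam * np.eye(p), X.T @ y)
        print(np.max(np.abs(beta_hat - beta_star)))   # ~1e-15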

    Localization Network and End-to-End Cascaded U-Nets for Kidney Tumor Segmentation

    Kidney tumor segmentation emerges as a new frontier of computer vision in medical imaging, partly due to its challenging manual annotation and its great medical impact. Within the scope of the Kidney Tumor Segmentation Challenge 2019, which aims at combined kidney and tumor segmentation, this work proposes a novel combination of 3D U-Nets, collectively denoted TuNet, that utilizes the resulting kidney masks for the consecutive tumor segmentation. The proposed method achieves a Sørensen-Dice coefficient of 0.902 for the kidney and 0.408 for the tumor segmentation, computed from a five-fold cross-validation on the 210 patients available in the data.
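
    The cascade can be pictured as two networks: the first predicts a kidney mask from the image, and the second receives the image concatenated with that mask and predicts the tumor. The toy sketch below illustrates such an end-to-end cascade with tiny 3D CNNs in PyTorch; it is not the TuNet architecture, which uses full 3D U-Nets.

        # Toy illustration of an end-to-end segmentation cascade (not TuNet itself):
        # stage 1 segments the kidney, stage 2 segments the tumor from image + mask.
        import torch
        import torch.nn as nn

        def tiny_3d_cnn(in_ch):
            return nn.Sequential(
                nn.Conv3d(in_ch, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(8, 1, kernel_size=1),              # single-channel logits
            )

        class CascadedSegmenter(nn.Module):
            def __init__(self):
                super().__init__()
                self.kidney_net = tiny_3d_cnn(in_ch=1)       # image -> kidney logits
                self.tumor_net = tiny_3d_cnn(in_ch=2)        # image + kidney mask -> tumor logits

            def forward(self, image):
                kidney_logits = self.kidney_net(image)
                kidney_mask = torch.sigmoid(kidney_logits)   # soft mask keeps the cascade differentiable
                tumor_logits = self.tumor_net(torch.cat([image, kidney_mask], dim=1))
                return kidney_logits, tumor_logits

        def dice(pred, target, eps=1e-6):
            """Soft Sørensen-Dice coefficient for binary masks."""
            inter = (pred * target).sum()
            return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

        # Forward pass on a random volume: (batch, channel, depth, height, width)
        model = CascadedSegmenter()
        volume = torch.randn(1, 1, 16, 32, 32)
        kidney_logits, tumor_logits = model(volume)
        print(kidney_logits.shape, tumor_logits.shape)       # both (1, 1, 16, 32, 32)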

    Modelling human musculoskeletal functional movements using ultrasound imaging

    Background: A widespread and fundamental assumption in the health sciences is that muscle functions are related to a wide variety of conditions, for example pain, ischemic and neurological disorders, exercise and injury. It is therefore highly desirable to study musculoskeletal contributions in clinical applications such as the treatment of muscle injuries, post-surgery evaluations, monitoring of progressive degeneration in neuromuscular disorders, and so on. The spatial image resolution in ultrasound systems has improved tremendously in the last few years and nowadays provides detailed information about tissue characteristics. It is now possible to study skeletal muscles in real-time during activity. Methods: The ultrasound images are transformed to be congruent and are effectively compressed and stacked in order to be analysed with multivariate techniques. The method is applied to a relevant clinical orthopaedic research field, namely to describe the dynamics in the Achilles tendon and the calf during real-time movements. Results: This study introduces a novel method to medical applications that can be used to examine ultrasound image sequences and to detect, visualise and quantify skeletal muscle dynamics and functions. Conclusions: This new objective method is a powerful tool for visualising tissue activity and dynamics in musculoskeletal ultrasound registrations.
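
    The pipeline described here, registering the frames so they are congruent, flattening and stacking them into a matrix, and applying a multivariate decomposition, can be sketched with standard tools. The example below uses synthetic frames and PCA from scikit-learn as a stand-in for the multivariate techniques; it illustrates the shape of the analysis, not the authors' implementation.

        # Sketch of the stack-and-decompose idea: each (registered) ultrasound frame
        # is flattened to a row vector, the sequence becomes a (frames x pixels)
        # matrix, and PCA extracts dominant modes of tissue dynamics over time.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        n_frames, h, w = 120, 64, 64
        frames = rng.random((n_frames, h, w))        # stand-in for registered image frames

        X = frames.reshape(n_frames, -1)             # stack: one row per frame
        pca = PCA(n_components=5)
        scores = pca.fit_transform(X)                # per-frame scores: dynamics over time
        modes = pca.components_.reshape(-1, h, w)    # spatial modes of variation

        print(scores.shape, modes.shape, pca.explained_variance_ratio_)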

    Fractal Geometry, Graph and Tree Constructions

    In the 18th and 19th centuries the branch of mathematics that would later be known as fractal geometry was developed. It was the ideas of Benoît Mandelbrot that made the area expand as rapidly as it has done recently, and since the publication of his works, fractals, and most commonly the estimation of the fractal dimension, have found uses in the most diverse applications. Fractal geometry has been used in information theory, economics, flow dynamics and image analysis, among many other areas. This thesis covers the foundations of fractal geometry and gives most of the fundamental definitions and theorems needed to understand the area. Concepts such as measure and dimension are explained thoroughly, especially the Hausdorff dimension and the Box-counting dimension. An account of the graph-theoretic approach, which is a more general way to describe self-similar sets, is given, as well as a tree-construction method that is shown to be equivalent to the graph-theoretic approach.
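
    Of the dimensions treated in the thesis, the box-counting dimension lends itself most directly to computation: cover the set with boxes of side length eps, count the occupied boxes N(eps), and estimate the slope of log N(eps) against log(1/eps). The sketch below estimates it for a binary image with NumPy; it is the standard textbook estimator, not code from the thesis.

        # Box-counting dimension estimate for a binary image:
        #   dim ~ slope of log N(eps) versus log(1/eps),
        # where N(eps) is the number of eps-sized boxes that contain part of the set.
        import numpy as np

        def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16, 32)):
            counts = []
            for s in sizes:
                # Trim so the image tiles exactly into s x s boxes, then count non-empty boxes.
                h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
                boxes = img[:h, :w].reshape(h // s, s, w // s, s)
                counts.append((boxes.sum(axis=(1, 3)) > 0).sum())
            # Fit log N(eps) = -dim * log(eps) + c, so the dimension is minus the slope.
            slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
            return -slope

        # Example: a filled square has box-counting dimension ~2.
        img = np.zeros((256, 256), dtype=bool)
        img[64:192, 64:192] = True
        print(box_counting_dimension(img))            # close to 2.0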

    Denoising and uncertainty estimation in parameter mapping with approximate Bayesian deep image priors

    Purpose: To mitigate the problem of noisy parameter maps with high uncertainties by casting parameter mapping as a denoising task based on Deep Image Priors. Methods: We extend the concept of denoising with a Deep Image Prior (DIP) to parameter mapping by treating the output of an image-generating network as a parametrization of tissue parameter maps. The method implicitly denoises the parameter mapping process by filtering low-level image features with an untrained convolutional neural network (CNN). Our implementation includes uncertainty estimation from Bernoulli approximate variational inference, implemented with MC dropout, which provides model uncertainty in each voxel of the denoised parameter maps. The method is modular, so the specifics of different applications (e.g., T1 mapping) are separated into application-specific signal equation blocks. We evaluate the method on variable flip angle T1 mapping, multi-echo T2 mapping, and apparent diffusion coefficient mapping. Results: The deep image prior adapts successfully to several applications in parameter mapping. In all evaluations, the method produces noise-reduced parameter maps with decreased uncertainty compared to conventional methods. The downsides of the proposed method are the long computational time and the introduction of some bias from the denoising prior. Conclusion: DIP successfully denoises the parameter mapping process and applies to several applications with limited hyperparameter tuning. Furthermore, it is easy to implement, since DIP methods do not require network training data. Although time-consuming, uncertainty information from MC dropout makes the method more robust and provides useful information when properly calibrated.
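
    The modular structure, an untrained CNN that parametrises the maps, followed by an application-specific signal-equation block, with MC dropout providing voxel-wise uncertainty, can be sketched compactly. The example below is a hypothetical 2D toy for variable flip angle T1 mapping with a simplified SPGR signal equation and tiny network sizes; it is not the authors' implementation.

        # Sketch of DIP-style parameter mapping with MC-dropout uncertainty.
        # An untrained CNN maps a fixed noise input to (M0, T1) maps; a signal-equation
        # block turns the maps into predicted variable-flip-angle signals that are fit
        # to the noisy measurements.
        import torch
        import torch.nn as nn

        TR = 0.015                                     # repetition time in seconds (illustrative)
        flip_angles = torch.deg2rad(torch.tensor([2., 5., 10., 15., 20.]))

        def spgr_signal(m0, t1):
            """Simplified SPGR (variable flip angle) signal, one channel per flip angle."""
            e1 = torch.exp(-TR / t1)
            fa = flip_angles.view(1, -1, 1, 1)
            return m0 * torch.sin(fa) * (1 - e1) / (1 - torch.cos(fa) * e1)

        class DIPNet(nn.Module):
            def __init__(self, p_drop=0.1):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(), nn.Dropout2d(p_drop),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(), nn.Dropout2d(p_drop),
                    nn.Conv2d(32, 2, 1),               # two output channels: M0 and T1
                )

            def forward(self, z):
                out = self.body(z)
                m0 = nn.functional.softplus(out[:, :1])        # M0 > 0
                t1 = 0.1 + 3.0 * torch.sigmoid(out[:, 1:])     # T1 roughly in [0.1, 3.1] s
                return m0, t1

        # Noisy "measurements" from a known toy ground truth (32 x 32 maps).
        gt_m0, gt_t1 = torch.ones(1, 1, 32, 32), 1.2 * torch.ones(1, 1, 32, 32)
        measured = spgr_signal(gt_m0, gt_t1) + 0.01 * torch.randn(1, 5, 32, 32)

        net, z = DIPNet(), torch.randn(1, 8, 32, 32)   # fixed noise input acts as the prior
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        for _ in range(200):                           # early stopping acts as regularisation
            m0, t1 = net(z)
            loss = ((spgr_signal(m0, t1) - measured) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()

        # MC dropout: keep dropout active at inference, average several forward passes;
        # the per-voxel standard deviation is the uncertainty estimate.
        net.train()
        with torch.no_grad():
            t1_samples = torch.stack([net(z)[1] for _ in range(20)])
        print(t1_samples.mean(0).mean().item(), t1_samples.std(0).mean().item())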

    Safety-critical computer vision : an empirical survey of adversarial evasion attacks and defenses on computer vision systems

    Given the growing prominence of production-level AI, adversarial attacks that can poison a machine learning model against a certain label, evade classification, or reveal sensitive information about the model and its training data pose fundamental problems to machine learning systems. Furthermore, much research has focused on the inverse relationship between robustness and accuracy, which raises problems for real-time and safety-critical systems in particular, since they are governed by legal constraints under which software changes must be explainable and every change must be thoroughly tested. While many defenses have been proposed, they are often computationally expensive and tend to reduce model accuracy. We have therefore conducted a large survey of attacks and defenses and present a simple and practical framework for analyzing any machine-learning system from a safety-critical perspective, using adversarial noise to find an upper bound on the failure rate. Using this method, we conclude that all tested configurations of the ResNet architecture fail to meet any reasonable definition of ‘safety-critical’ when tested on even small-scale benchmark data. We examine state-of-the-art defenses and attacks against computer vision systems with a focus on safety-critical applications in autonomous driving, industrial control, and healthcare. By testing combinations of attacks and defenses, their efficacy, and their run-time requirements, we provide substantial empirical evidence that modern neural networks consistently fail to meet established safety-critical standards by a wide margin.
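
    The analysis boils down to perturbing inputs with adversarial noise of bounded magnitude and measuring the resulting failure rate at each perturbation budget. The sketch below illustrates this with a one-step FGSM attack in PyTorch on a placeholder model and random data; it is a generic illustration, not the authors' framework.

        # Sketch of measuring a failure rate under adversarial noise (one-step FGSM).
        # The attack perturbs each input by eps in the direction that increases the loss;
        # the fraction of perturbed inputs the model misclassifies is the failure rate
        # at that perturbation budget.
        import torch
        import torch.nn as nn

        def fgsm(model, x, y, eps):
            """One-step FGSM perturbation of x with L-infinity budget eps."""
            x = x.clone().requires_grad_(True)
            loss = nn.functional.cross_entropy(model(x), y)
            loss.backward()
            return (x + eps * x.grad.sign()).detach()

        def failure_rate(model, x, y, eps):
            """Fraction of adversarially perturbed inputs the model gets wrong."""
            model.eval()
            x_adv = fgsm(model, x, y, eps)
            with torch.no_grad():
                pred = model(x_adv).argmax(dim=1)
            return (pred != y).float().mean().item()

        # Placeholder demo with a tiny untrained classifier and random data; in practice
        # the model would be e.g. a trained ResNet and (x, y) a labelled benchmark set.
        model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
        x, y = torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,))
        for eps in (0.0, 0.01, 0.03, 0.1):
            print(f"eps={eps:.2f}  failure rate={failure_rate(model, x, y, eps):.2f}")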