
    GAN-Based Super-Resolution And Segmentation Of Retinal Layers In Optical Coherence Tomography Scans

    Optical Coherence Tomography (OCT) has been identified as a noninvasive and cost-effective imaging modality for identifying potential biomarkers for Alzheimer's diagnosis and progression detection. Current hypotheses indicate that retinal layer thickness, which can be assessed via OCT scans, is an efficient biomarker for identifying Alzheimer's disease. Due to factors such as speckle noise, a small target region, and unfavorable imaging conditions, manual segmentation of retinal layers is a challenging task. Therefore, as a reasonable first step, this study focuses on automatically segmenting retinal layers to separate them for subsequent investigations. Another common challenge is the lack of clarity of the layer boundaries in retinal OCT scans, which motivates super-resolving the images for improved clarity. Deep learning pipelines have stimulated substantial progress on segmentation tasks. Generative adversarial networks (GANs) are a prominent branch of deep learning that has achieved remarkable performance in semantic segmentation. Conditional adversarial networks, as a general-purpose solution to image-to-image translation problems, not only learn the mapping from the input image to the output image but also learn a loss function to train this mapping. We propose a GAN-based segmentation model and evaluate incorporating popular networks, namely U-Net and ResNet, into the GAN architecture, with additional blocks of transposed convolution and sub-pixel convolution for the task of upscaling OCT images from low to high resolution by a factor of four. We also incorporate the Dice loss as an additional reconstruction loss term to improve the performance of this joint optimization task. Our best model configuration empirically achieved a Dice coefficient of 0.867 and an mIoU of 0.765.
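
    The abstract above combines an adversarial objective with a Dice reconstruction term for the segmentation generator. The following is a minimal PyTorch sketch of such a joint loss; the function names, the lambda_dice weight, and the smoothing constant are illustrative assumptions, not the authors' exact implementation.

        import torch
        import torch.nn.functional as F

        def dice_loss(pred_logits, target, smooth=1.0):
            """Soft Dice loss on sigmoid probabilities (binary masks)."""
            probs = torch.sigmoid(pred_logits).flatten(1)
            target = target.flatten(1)
            intersection = (probs * target).sum(dim=1)
            union = probs.sum(dim=1) + target.sum(dim=1)
            dice = (2.0 * intersection + smooth) / (union + smooth)
            return 1.0 - dice.mean()

        def generator_loss(disc_fake_logits, pred_logits, target, lambda_dice=10.0):
            """Non-saturating adversarial term plus weighted Dice reconstruction term."""
            adv = F.binary_cross_entropy_with_logits(
                disc_fake_logits, torch.ones_like(disc_fake_logits))
            return adv + lambda_dice * dice_loss(pred_logits, target)

    In this kind of joint optimization, the Dice term keeps the generator anchored to the ground-truth masks while the adversarial term encourages realistic boundary structure; the relative weight (here the assumed lambda_dice) balances the two.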

    Bi-objective Motion Planning Approach for Safe Motions: Application to a Collaborative Robot

    Accepted version freely available here: [ http://bit.ly/2qlyjJ6 ] Online version via SpringerLink: [ http://link.springer.com/article/10.1007/s10846-019-01110-1 ] Abstract: This paper presents a new bi-objective safety-oriented path planning strategy for robotic manipulators. Integrated into a sampling-based algorithm, our approach can successfully enhance task safety by guiding the expansion of the path towards the safest configurations. Our safety notion consists of avoiding dangerous situations, e.g. being very close to obstacles; human awareness, e.g. staying as much as possible within the human's field of vision; and ensuring human safety by keeping as far as possible from the human, with a hierarchical priority between human body parts. Experimental validations are conducted in simulation and on the real Baxter research robot. They reveal the efficiency of the proposed method, mainly in the case of a collaborative robot sharing its workspace with humans.
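
    The safety notion described above can be read as a weighted cost over obstacle clearance, human visibility, and distance to prioritized body parts. Below is a minimal Python sketch of such a cost used to bias the expansion of a sampling-based planner; the weights, function names, and per-body-part priorities are assumptions for illustration, not the paper's actual formulation.

        # Hypothetical priorities: higher weight = stronger preference to stay away.
        BODY_PART_WEIGHTS = {"head": 3.0, "torso": 2.0, "arms": 1.0}

        def safety_cost(obstacle_clearance, part_distances, in_human_view,
                        w_obs=1.0, w_view=0.5, w_human=2.0):
            """Lower is safer: combines obstacle proximity, human awareness,
            and priority-weighted proximity to human body parts (distances in meters)."""
            obs_term = w_obs / max(obstacle_clearance, 1e-3)
            view_term = 0.0 if in_human_view else w_view
            human_term = w_human * sum(
                weight / max(part_distances[part], 1e-3)
                for part, weight in BODY_PART_WEIGHTS.items()
            )
            return obs_term + view_term + human_term

        def pick_safest(candidate_configs, evaluate):
            """Guide tree expansion toward the safest sampled configuration;
            evaluate(q) returns (clearance, part_distances, in_view) for configuration q."""
            return min(candidate_configs,
                       key=lambda q: safety_cost(*evaluate(q)))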

    Learning Theory and Approximation

    The main goal of this workshop – the third one of this type at the MFO – has been to blend mathematical results from statistical learning theory and approximation theory, to strengthen both disciplines and use synergistic effects to work on current research questions. Learning theory aims at modeling unknown function relations and data structures from samples in an automatic manner. Approximation theory is naturally used for, and closely connected to, the further development of learning theory, in particular for the exploration of new useful algorithms and for the theoretical understanding of existing methods. Conversely, the study of learning theory also gives rise to interesting theoretical problems for approximation theory, such as the approximation and sparse representation of functions or the construction of rich reproducing kernel Hilbert spaces on general metric spaces. This workshop has concentrated on the following recent topics: pitchfork bifurcation of dynamical systems arising from the mathematical foundations of cell development; regularized kernel-based learning in the Big Data situation; deep learning; convergence rates of learning and online learning algorithms; numerical refinement algorithms for learning; and statistical robustness of regularized kernel-based learning.

    Mathematical Foundations of Machine Learning (hybrid meeting)

    Machine learning has achieved remarkable successes in various applications, but there is wide agreement that a mathematical theory for deep learning is missing. Recently, first mathematical results have been derived in different areas such as mathematical statistics and statistical learning. Any mathematical theory of machine learning will have to combine tools from different fields such as nonparametric statistics, high-dimensional statistics, empirical process theory, and approximation theory. The main objective of the workshop was to bring together leading researchers contributing to the mathematics of machine learning. A focus of the workshop was theory for deep neural networks. Mathematically speaking, neural networks define function classes with a rich mathematical structure that are extremely difficult to analyze because of the non-linearity in the parameters. Until very recently, most existing theoretical results could not cope with many of the distinctive characteristics of deep networks, such as multiple hidden layers or the ReLU activation function. Other topics of the workshop were procedures for quantifying the uncertainty of machine learning methods and the mathematics of data privacy.

    Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow

    We present rectified flow, a surprisingly simple approach to learning (neural) ordinary differential equation (ODE) models to transport between two empirically observed distributions π_0 and π_1, hence providing a unified solution to generative modeling and domain transfer, among various other tasks involving distribution transport. The idea of rectified flow is to learn the ODE to follow the straight paths connecting points drawn from π_0 and π_1 as much as possible. This is achieved by solving a straightforward nonlinear least squares optimization problem, which can be easily scaled to large models without introducing extra parameters beyond standard supervised learning. The straight paths are special and preferred because they are the shortest paths between two points and can be simulated exactly without time discretization, hence yielding computationally efficient models. We show that the procedure of learning a rectified flow from data, called rectification, turns an arbitrary coupling of π_0 and π_1 into a new deterministic coupling with provably non-increasing convex transport costs. In addition, recursively applying rectification allows us to obtain a sequence of flows with increasingly straight paths, which can be simulated accurately with coarse time discretization in the inference phase. In empirical studies, we show that rectified flow performs superbly on image generation, image-to-image translation, and domain adaptation. In particular, on image generation and translation, our method yields nearly straight flows that give high-quality results even with a single Euler discretization step.
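
    A minimal sketch of the training objective described above: a velocity field v_theta(x, t) is fit by nonlinear least squares to the straight-line displacement x_1 - x_0 along linear interpolations x_t = t·x_1 + (1 - t)·x_0, and samples are then generated by Euler simulation of the learned ODE. The small MLP architecture and step counts here are placeholder assumptions, not the paper's actual setup.

        import torch
        import torch.nn as nn

        class VelocityField(nn.Module):
            """Small MLP v_theta(x, t); a stand-in for any architecture."""
            def __init__(self, dim, hidden=128):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(dim + 1, hidden), nn.SiLU(),
                    nn.Linear(hidden, hidden), nn.SiLU(),
                    nn.Linear(hidden, dim),
                )

            def forward(self, x, t):
                return self.net(torch.cat([x, t], dim=-1))

        def rectified_flow_step(v, x0, x1, optimizer):
            """One least-squares step: match v(x_t, t) to the straight-path velocity x1 - x0."""
            t = torch.rand(x0.shape[0], 1)
            xt = t * x1 + (1.0 - t) * x0      # linear interpolation between the couplings
            target = x1 - x0                  # constant velocity of the straight path
            loss = ((v(xt, t) - target) ** 2).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return loss.item()

        @torch.no_grad()
        def sample(v, x0, steps=100):
            """Euler simulation of dx/dt = v(x, t) from t = 0 to t = 1."""
            x, dt = x0.clone(), 1.0 / steps
            for i in range(steps):
                t = torch.full((x.shape[0], 1), i * dt)
                x = x + dt * v(x, t)
            return x

    Because the learned flow is pushed toward straight paths, the sampler above can use very few Euler steps (in the limit, a single step) with little loss in quality, which is the computational advantage the abstract highlights.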

    Deep learning based single image super-resolution : a survey

    Single image super-resolution has attracted increasing attention and has a wide range of applications in satellite imaging, medical imaging, computer vision, security surveillance imaging, remote sensing, object detection, and recognition. Recently, deep learning techniques have emerged and blossomed, producing state-of-the-art results in many domains. Owing to their capability in feature extraction and mapping, they are very helpful for predicting the high-frequency details lost in low-resolution images. In this paper, we give an overview of recent advances in deep learning-based models and methods that have been applied to single image super-resolution tasks. We also summarize, compare, and discuss various models from the past and present for a comprehensive understanding, and finally provide open problems and possible directions for future research.
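
    As a concrete instance of the kind of deep super-resolution model surveyed here, below is a minimal sub-pixel-convolution (ESPCN-style) network in PyTorch that upscales by a factor of 4; the layer sizes and channel counts are illustrative assumptions rather than any specific surveyed model.

        import torch
        import torch.nn as nn

        class TinySubPixelSR(nn.Module):
            """Minimal single-image SR net: feature extraction + 4x PixelShuffle upsampling."""
            def __init__(self, channels=1, scale=4, features=64):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(channels, features, 5, padding=2), nn.ReLU(inplace=True),
                    nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
                    # Predict scale^2 * channels maps, then rearrange them into a larger image.
                    nn.Conv2d(features, channels * scale ** 2, 3, padding=1),
                    nn.PixelShuffle(scale),
                )

            def forward(self, lr):
                return self.body(lr)

        # Usage: a 32x32 low-resolution patch becomes a 128x128 output.
        model = TinySubPixelSR()
        hr = model(torch.randn(1, 1, 32, 32))   # -> shape (1, 1, 128, 128)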