51 research outputs found

    A learned joint depth and intensity prior using Markov Random fields

    Get PDF
    We present a joint prior that takes intensity and depth information into account. The prior is defined using a flexible Field-of-Experts model and is learned from a database of natural images. It is a generative model with an efficient sampling method. We use sampling from the model to perform inpainting and upsampling of depth maps when intensity information is available. We show that including the intensity information in the prior improves the results obtained from the model. We also compare to another two-channel inpainting approach and show superior results.
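    A rough sketch of the kind of model described above: a joint Field-of-Experts energy over an intensity/depth pair with Student-t-style experts. This is illustrative only, not the authors' learned prior; the filter bank and expert weights below are random placeholders standing in for parameters that would be learned from natural images.

        # Minimal joint Field-of-Experts energy sketch (illustrative, untrained).
        import numpy as np
        from scipy.signal import convolve2d

        rng = np.random.default_rng(0)
        # Hypothetical two-channel filters: f[0] acts on intensity, f[1] on depth.
        filters = [rng.standard_normal((2, 3, 3)) for _ in range(8)]
        alphas = np.abs(rng.standard_normal(8))  # placeholder expert weights

        def foe_energy(intensity, depth):
            """E(I, D) = sum_k alpha_k * sum_p log(1 + 0.5 * r_k(p)^2), with joint responses r_k."""
            energy = 0.0
            for f, a in zip(filters, alphas):
                r = (convolve2d(intensity, f[0], mode="valid")
                     + convolve2d(depth, f[1], mode="valid"))
                energy += a * np.sum(np.log1p(0.5 * r ** 2))
            return energy

        # Toy usage: evaluate the prior energy of a random intensity/depth pair.
        print(foe_energy(rng.random((32, 32)), rng.random((32, 32))))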

    High-Quality Facial Photo-Sketch Synthesis Using Multi-Adversarial Networks

    Full text link
    Synthesizing face sketches from real photos and its inverse have many applications. However, photo/sketch synthesis remains a challenging problem because photos and sketches have different characteristics. In this work, we treat this task as an image-to-image translation problem and explore the recently popular generative adversarial networks (GANs) to generate high-quality realistic photos from sketches and sketches from photos. Recent GAN-based methods have shown promising results on image-to-image translation problems and photo-to-sketch synthesis in particular; however, they are known to have limited ability to generate high-resolution realistic images. To this end, we propose a novel synthesis framework called Photo-Sketch Synthesis using Multi-Adversarial Networks (PS2-MAN), which iteratively generates images from low to high resolution in an adversarial way. The hidden layers of the generator are supervised to first generate lower-resolution images, followed by implicit refinement in the network to generate higher-resolution images. Furthermore, since photo-sketch synthesis is a coupled/paired translation problem, we leverage the pair information using the CycleGAN framework. Both Image Quality Assessment (IQA) and photo-sketch matching experiments demonstrate the superior performance of our framework compared to existing state-of-the-art solutions. Code available at: https://github.com/lidan1/PhotoSketchMAN. Comment: Accepted by the 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018) (Oral).
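    The multi-adversarial idea lends itself to a compact sketch: intermediate layers of the generator emit progressively higher-resolution outputs, each of which can be paired with its own discriminator and loss. The PyTorch module below is a minimal illustration under that assumption, not the released PS2-MAN code; layer sizes are arbitrary.

        import torch
        import torch.nn as nn

        class MultiResGenerator(nn.Module):
            """Toy generator whose hidden layers emit RGB outputs at several scales."""
            def __init__(self, in_ch=3, base=32):
                super().__init__()
                self.enc = nn.Sequential(
                    nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True))
                self.up1 = nn.Sequential(
                    nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(inplace=True))
                self.up2 = nn.Sequential(
                    nn.ConvTranspose2d(base, base, 4, stride=2, padding=1), nn.ReLU(inplace=True))
                # 1x1 heads turning hidden features into images at each scale,
                # so each scale can receive its own adversarial supervision.
                self.to_rgb_low = nn.Conv2d(base * 2, 3, 1)
                self.to_rgb_mid = nn.Conv2d(base, 3, 1)
                self.to_rgb_high = nn.Conv2d(base, 3, 1)

            def forward(self, x):
                h = self.enc(x)                        # H/4 x W/4 features
                low = torch.tanh(self.to_rgb_low(h))   # low-resolution output
                h = self.up1(h)                        # H/2 x W/2
                mid = torch.tanh(self.to_rgb_mid(h))
                h = self.up2(h)                        # H x W
                high = torch.tanh(self.to_rgb_high(h))
                return low, mid, high                  # one discriminator/loss per scale

        # Toy usage: a 3x64x64 photo yields 16x16, 32x32 and 64x64 outputs.
        print([o.shape for o in MultiResGenerator()(torch.randn(1, 3, 64, 64))])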

    Toward more scalable structured models

    Get PDF
    While deep learning has achieved huge success across disciplines, from computer vision and natural language processing to computational biology and the physical sciences, training such models is known to require significant amounts of data. One possible reason is that the structural properties of the data and the problem are not modeled explicitly. Effectively exploiting this structure can help build more efficient and better-performing models. The complexity of the structure requires models with sufficient representational capacity; however, increased structured-model complexity usually leads to increased inference complexity and trickier learning procedures. Making progress on real-world applications also requires learning paradigms that circumvent the limitation of evaluating the partition function and that scale to high-dimensional datasets. In this dissertation, we develop more scalable structured models, i.e., models with inference procedures that can handle complex dependencies between variables efficiently, and learning algorithms that operate in high-dimensional spaces. First, we extend Gaussian conditional random fields, traditionally unimodal and only capturing pairwise variable interactions, to model multi-modal distributions with high-order dependencies between the output-space variables, while enabling exact inference and incorporating external constraints at runtime. We show compelling results on the task of diverse gray-image colorization. Then, we introduce a reinforcement learning-based method for solving inference in models with general higher-order potentials that are intractable with traditional techniques, and show promising results on semantic segmentation. Finally, we propose a new loss, max-sliced score matching (MSSM), for learning structured models at scale. We assess our approach on the estimation of densities and scores for implicit distributions in Variational and Wasserstein auto-encoders.
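    The abstract does not spell out the MSSM objective, but the sliced score matching family it belongs to is easy to illustrate: project the model score and its Jacobian onto directions v and penalize the projected terms. The PyTorch sketch below implements plain sliced score matching with random projections (a max-sliced variant would instead search for worst-case directions); the score network and data are toy placeholders.

        import torch
        import torch.nn as nn

        def sliced_score_matching_loss(score_fn, x, n_projections=1):
            """E_v[ v^T J_s(x) v + 0.5 * (v^T s(x))^2 ] averaged over random directions v."""
            x = x.detach().requires_grad_(True)
            s = score_fn(x)                                   # (batch, dim) model score
            loss = torch.zeros(x.shape[0])
            for _ in range(n_projections):
                v = torch.randn_like(x)
                # v^T J_s(x) v obtained via autograd on the scalar (s . v)
                grad_sv = torch.autograd.grad((s * v).sum(), x, create_graph=True)[0]
                loss = loss + (grad_sv * v).sum(dim=1) + 0.5 * (s * v).sum(dim=1) ** 2
            return (loss / n_projections).mean()

        # Toy usage: a small MLP score network on 2-D data.
        score_net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))
        print(sliced_score_matching_loss(score_net, torch.randn(16, 2), n_projections=4))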

    Bayesian Optimization for Image Segmentation, Texture Flow Estimation and Image Deblurring

    Get PDF
    Ph.D. thesis

    Filter-Based Probabilistic Markov Random Field Image Priors: Learning, Evaluation, and Image Analysis

    Get PDF
    Markov random fields (MRFs) based on linear filter responses are one of the most popular forms for modeling image priors, due to their rigorous probabilistic interpretation and versatility in various applications. In this dissertation, we propose an application-independent method to quantitatively evaluate MRF image priors using model samples. To this end, we develop an efficient auxiliary-variable Gibbs sampler for a general class of MRFs with flexible potentials. We find that the popular pairwise and high-order MRF priors capture image statistics quite roughly and exhibit poor generative properties. We further develop new learning strategies and obtain high-order MRFs that capture the statistics of the built-in features well, thus being true maximum-entropy models, as well as other important statistical properties of natural images, outlining the capabilities of MRFs. We suggest a multi-modal extension of MRF potentials which not only allows training more expressive priors, but also helps to reveal more insights into MRF variants; based on this, we are able to train compact, fully convolutional restricted Boltzmann machines (RBMs) that model visual repetitive textures even better than more complex and deeper models. The learned high-order MRFs allow us to develop new methods for various real-world image analysis problems. For denoising of natural images and deconvolution of microscopy images, the MRF priors are employed in a purely generative setting. We propose efficient sampling-based methods to infer Bayesian minimum mean squared error (MMSE) estimates, which substantially outperform maximum a-posteriori (MAP) estimates and can compete with state-of-the-art discriminative methods. For non-rigid registration of live cell nuclei in time-lapse microscopy images, we propose a global optical-flow-based method. The statistics of noise in fluorescence microscopy images are studied to derive an adaptive weighting scheme that increases model robustness. High-order MRFs are also employed to train image filters for extracting important features of cell nuclei, and the deformations of nuclei are then estimated in the learned feature spaces. The developed method outperforms previous approaches in terms of both registration accuracy and computational efficiency.
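    One concrete piece of the pipeline, sampling-based MMSE estimation, can be sketched compactly: draw posterior samples and average them instead of taking the single MAP mode. The numpy sketch below uses a plain Langevin-style sampler with a simple smooth pairwise prior as a stand-in for the auxiliary-variable Gibbs sampler and learned high-order MRF of the thesis; all parameters are illustrative.

        import numpy as np

        def prior_energy_grad(x, lam=0.1):
            """Gradient of a toy pairwise prior energy lam * sum log(1 + d^2) over neighbor differences d."""
            gx = np.zeros_like(x)
            dh = np.diff(x, axis=1); dv = np.diff(x, axis=0)
            th = 2 * dh / (1 + dh ** 2); tv = 2 * dv / (1 + dv ** 2)
            gx[:, 1:] += th; gx[:, :-1] -= th
            gx[1:, :] += tv; gx[:-1, :] -= tv
            return lam * gx

        def mmse_denoise(y, sigma=0.1, n_samples=200, burn_in=100, step=1e-3, seed=0):
            """Posterior mean under a Gaussian likelihood and the toy prior, via Langevin sampling."""
            rng = np.random.default_rng(seed)
            x, acc = y.copy(), np.zeros_like(y)
            for t in range(burn_in + n_samples):
                grad_log_post = -(x - y) / sigma ** 2 - prior_energy_grad(x)
                x = x + 0.5 * step * grad_log_post + np.sqrt(step) * rng.standard_normal(x.shape)
                if t >= burn_in:
                    acc += x
            return acc / n_samples  # MMSE estimate = average of posterior samples

        # Toy usage: denoise a noisy constant image and report the residual error.
        rng = np.random.default_rng(1)
        noisy = np.ones((16, 16)) + 0.1 * rng.standard_normal((16, 16))
        print(np.abs(mmse_denoise(noisy) - 1.0).mean())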

    A Semi-Automated Approach to Medical Image Segmentation using Conditional Random Field Inference

    Full text link
    Medical image segmentation plays a crucial role in delivering effective patient care across diagnostic and treatment modalities. Manual delineation of target volumes and all critical structures is a tedious, highly time-consuming process and introduces uncertainty into patients' treatment outcomes. Fully automatic methods hold great promise for reducing cost and time while improving accuracy and eliminating expert variability, yet great challenges remain. Legally and ethically, human oversight must be integrated with "smart tools", favoring a semi-automatic technique that can leverage the best aspects of both human and computer. In this work, we formulate the segmentation problem as an energy minimization problem in a Conditional Random Field (CRF), yielding a semi-automatic framework. We show that human input can be used as adaptive training data to condition a probabilistic boundary term modeled for the heterogeneous boundary characteristics of anatomical structures. We demonstrate that our method can effortlessly adapt to multiple structures and image modalities using a single CRF framework and tools to learn the probabilistic terms interactively. To tackle the more difficult multi-class segmentation problem, we develop a new ensemble one-vs-rest graph cut algorithm. Each graph in the ensemble performs a simple and efficient bi-class (a target class vs. the rest of the classes) segmentation, and the final segmentation is obtained by majority vote. Our algorithm is both faster and more accurate than the prior multi-class method, which iteratively swaps classes. In this thesis, we also include novel volumetric segmentation algorithms that employ deep learning and indicate how to synthesize our CRF framework with convolutional neural networks (CNNs), which would allow incorporating user guidance into CNN-based deep learning for this task. We believe a deep learning-based method interactively guided by a human expert is the ideal solution for medical image segmentation.
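    The ensemble one-vs-rest step is easy to illustrate in isolation: each class gets its own binary "target vs. rest" segmentation, and the per-class decisions are then combined into a single label map. In the numpy sketch below the binary segmenter is a thresholding stub standing in for a bi-class graph cut, and the combination rule is an argmax over claimed pixels rather than the thesis' exact voting scheme; names and defaults are hypothetical.

        import numpy as np

        def binary_segment(prob_map, threshold=0.5):
            """Stub for a bi-class (target vs. rest) graph cut: a mask plus a score map."""
            return prob_map > threshold, prob_map

        def one_vs_rest_ensemble(class_prob_maps):
            """class_prob_maps: (n_classes, H, W) per-class probabilities -> (H, W) labels."""
            n_classes, H, W = class_prob_maps.shape
            scores = np.zeros((n_classes, H, W))
            for c in range(n_classes):
                mask, score = binary_segment(class_prob_maps[c])
                scores[c] = np.where(mask, score, 0.0)  # only pixels claimed by class c count
            return scores.argmax(axis=0)                # combine the per-class decisions

        # Toy usage with random per-class probability maps for 4 classes.
        rng = np.random.default_rng(0)
        print(one_vs_rest_ensemble(rng.random((4, 8, 8))))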

    Computer Vision

    Full text link

    Extracting structured information from 2D images

    Get PDF
    Convolutional neural networks can handle an impressive array of supervised learning tasks while relying on a single backbone architecture, suggesting that one solution fits all vision problems. But for many tasks, we can directly exploit the problem structure within neural networks to deliver more accurate predictions. In this thesis, we propose novel deep learning components that exploit the structured output space of an increasingly complex set of problems. We start from Optical Character Recognition (OCR) in natural scenes and leverage the constraints imposed by the spatial outline of letters and by language. Conventional OCR systems do not work well in natural scenes due to distortions, blur, and letter variability. We introduce a new attention-based model, equipped with extra information about neuron positions to guide its focus across characters sequentially; it beats the previous state of the art by a significant margin. We then turn to dense labeling tasks employing encoder-decoder architectures. We start with an experimental study that documents the drastic impact decoder design can have on task performance. Rather than optimizing one decoder per task separately, we propose new robust layers for upsampling high-dimensional encodings and show that these better suit the structured per-pixel output across all tasks. Finally, we turn to the problem of urban scene understanding. There is elaborate structure in both the input space (multi-view recordings, aerial and street-view scenes) and the output space (multiple fine-grained attributes for holistic building understanding). We design new models that benefit from the relatively simple, cuboid-like geometry of buildings to create a single unified representation from multiple views. To benchmark our model, we build a new large-scale multi-view dataset of building images and fine-grained attributes, and show systematic improvements over a broad range of strong CNN-based baselines.
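    The position-aware attention idea from the OCR part can be sketched in a few lines: explicit position embeddings are added to the keys of an attention read over flattened CNN features, so the model can reason about where each character sits in the image. The PyTorch module below is a minimal illustration under that assumption, not the thesis architecture; dimensions are arbitrary.

        import torch
        import torch.nn as nn

        class PositionalAttentionReader(nn.Module):
            """One attention step over a feature map with learned position embeddings."""
            def __init__(self, feat_dim=64, hidden=128, height=8, width=32):
                super().__init__()
                self.pos = nn.Parameter(torch.randn(height * width, feat_dim))  # learned positions
                self.key = nn.Linear(feat_dim, hidden)
                self.query = nn.Linear(hidden, hidden)
                self.score = nn.Linear(hidden, 1)

            def forward(self, feats, state):
                # feats: (B, H*W, feat_dim) flattened CNN features; state: (B, hidden) decoder state
                keys = self.key(feats + self.pos)                                   # position-aware keys
                e = self.score(torch.tanh(keys + self.query(state).unsqueeze(1)))   # (B, H*W, 1)
                alpha = torch.softmax(e, dim=1)                                     # attention map
                glimpse = (alpha * feats).sum(dim=1)                                # (B, feat_dim)
                return glimpse, alpha

        # Toy usage: one attention step for a batch of 2 images.
        reader = PositionalAttentionReader()
        g, a = reader(torch.randn(2, 8 * 32, 64), torch.randn(2, 128))
        print(g.shape, a.shape)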