
    A Non-Local Low-Rank Approach to Enforce Integrability

    We propose a new approach to enforce integrability using recent advances in non-local methods. Our formulation combines a sparse gradient data-fitting term, which handles outliers, with a gradient-domain non-local low-rank prior. This regularization has two main advantages: 1) the low-rank prior enforces similarity between non-local gradient patches, which helps recover high-quality clean patches from severe outlier corruption; 2) the low-rank prior efficiently reduces dense noise, as has been shown in recent image restoration work. We propose an efficient solver for the resulting optimization problem based on alternating minimization. Experiments show that the new method yields a substantial improvement over previous optimization methods and can efficiently handle outliers and dense noise mixed together.
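    The core non-local low-rank step in approaches of this kind is the nuclear-norm proximal operator, i.e. soft-thresholding of the singular values of a matrix whose rows are similar gradient patches. The sketch below illustrates only that building block on synthetic data; it is not the paper's solver, and the patch stack, threshold tau, and noise level are illustrative assumptions.

```python
import numpy as np

def sv_soft_threshold(patch_matrix, tau):
    """Singular-value soft-thresholding: the proximal operator of the nuclear
    norm, the standard building block of low-rank patch priors."""
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

# Toy stack of 20 "similar" gradient patches (flattened to 64-vectors): a rank-1
# matrix corrupted by dense noise. The low-rank projection pulls the stack back
# toward the shared structure.
rng = np.random.default_rng(0)
clean = np.outer(np.ones(20), rng.standard_normal(64))
noisy = clean + 0.3 * rng.standard_normal((20, 64))
denoised = sv_soft_threshold(noisy, tau=4.0)
# Compare reconstruction errors before and after the low-rank projection.
print(np.linalg.norm(noisy - clean), np.linalg.norm(denoised - clean))
```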

    Scene Analysis under Variable Illumination using Gradient Domain Methods

    The goal of this research is to develop algorithms for the reconstruction and manipulation of gradient fields for scene analysis from intensity images captured under variable illumination. These methods use gradients, or differential measurements of intensity and depth, for scene analysis tasks such as estimating shape and intrinsic images and suppressing edges under variable illumination. The differential measurements lead to robust reconstruction from gradient fields in the presence of outliers and avoid hard thresholds and smoothness assumptions when manipulating image gradient fields. Reconstruction from gradient fields is important in several applications, including shape extraction using Photometric Stereo and Shape from Shading, image editing and matting, retinex, mesh smoothing, and phase unwrapping. In these applications, a non-integrable gradient field is available and needs to be integrated to obtain the final image or surface. Previous approaches to enforcing integrability have focused on least-squares solutions, which do not work well in the presence of outliers and do not locally confine errors during reconstruction. I present a generalized equation that represents a continuum of surface reconstructions of a given non-integrable gradient field. This equation is used to derive new types of feature-preserving surface reconstructions in the presence of noise and outliers; the range of solutions is related to the degree of anisotropy of the weights applied to the gradients in the integration process. Traditionally, image gradient fields have been manipulated using hard thresholds to recover reflectance/illumination maps or to remove illumination effects such as shadows, and smoothness of the reflectance/illumination maps is often assumed in such scenarios. By analyzing the direction of intensity gradient vectors in images captured under different illumination conditions, I present a framework for edge suppression that avoids hard thresholds and smoothness assumptions. This framework can be used to manipulate image gradient fields to synthesize computationally useful and visually pleasing images, and is based on two approaches: (a) gradient projection and (b) affine transformation of gradient fields using cross-projection tensors. These approaches are demonstrated in the context of several applications, such as removing shadows and glass reflections and recovering reflectance/illumination maps and foreground layers under varying illumination.
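    As a concrete reference point for the least-squares baseline that this thesis generalizes, the sketch below assembles sparse forward-difference operators Dx and Dy and solves min_z ||Dx z - gx||^2 + ||Dy z - gy||^2. The operators, the toy paraboloid, and the injected outlier are illustrative assumptions, not the author's implementation.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

def integrate_least_squares(gx, gy):
    """Least-squares integration of a (possibly non-integrable) gradient field:
    find z minimizing ||Dx z - gx||^2 + ||Dy z - gy||^2. The surface is only
    determined up to an additive constant; lsqr returns one valid solution."""
    h, w = gx.shape
    n = h * w
    Dx = sparse.lil_matrix((n, n))
    Dy = sparse.lil_matrix((n, n))
    for i in range(h):
        for j in range(w):
            k = i * w + j
            if j + 1 < w:                      # forward difference in x
                Dx[k, k], Dx[k, k + 1] = -1.0, 1.0
            if i + 1 < h:                      # forward difference in y
                Dy[k, k], Dy[k, k + w] = -1.0, 1.0
    A = sparse.vstack([Dx.tocsr(), Dy.tocsr()])
    b = np.concatenate([gx.ravel(), gy.ravel()])
    return lsqr(A, b)[0].reshape(h, w)

# Toy usage: gradients of a paraboloid with one outlier spike; plain least
# squares smears the outlier's error over a neighborhood instead of confining it.
h, w = 32, 32
y, x = np.mgrid[0:h, 0:w].astype(float)
z_true = 0.01 * ((x - w / 2) ** 2 + (y - h / 2) ** 2)
gx = np.diff(z_true, axis=1, append=z_true[:, -1:])
gy = np.diff(z_true, axis=0, append=z_true[-1:, :])
gx[10, 10] += 50.0
z_rec = integrate_least_squares(gx, gy)
```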

    Sparse Regularization for the Integration Problem in Image Processing

    Reconstructing a surface or an image from a corrupted gradient field is a key step in several image processing applications. Such a field may contain noise and outliers that degrade the quality of the reconstruction. In this paper, we propose using sparsity to regularize the problem, together with an efficient method for estimating a good solution of the resulting optimization problem. Experiments show that the proposed method considerably improves the quality of the reconstruction compared to previous methods.
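    The sparse regularization amounts to replacing the quadratic data term of the least-squares baseline with an outlier-robust penalty on the gradient residuals. Below is a minimal sketch of that idea using SciPy's generic robust least-squares solver with a soft-L1 loss, rather than the paper's own (more efficient) optimization method; the grid size, loss, and f_scale are illustrative choices.

```python
import numpy as np
from scipy.optimize import least_squares

def robust_integrate(gx, gy):
    """Robust gradient-field integration: penalize gradient residuals with a
    soft-L1 loss so isolated outliers in (gx, gy) do not contaminate the whole
    reconstruction (still determined only up to an additive constant)."""
    h, w = gx.shape

    def residuals(z_flat):
        z = z_flat.reshape(h, w)
        rx = (np.diff(z, axis=1) - gx[:, :-1]).ravel()
        ry = (np.diff(z, axis=0) - gy[:-1, :]).ravel()
        return np.concatenate([rx, ry])

    res = least_squares(residuals, np.zeros(h * w), loss="soft_l1", f_scale=0.1)
    return res.x.reshape(h, w)

# Toy usage: a plane with unit slopes and a single large outlier in gx.
h, w = 16, 16
z_true = np.add.outer(np.arange(h, dtype=float), np.arange(w, dtype=float))
gx, gy = np.ones((h, w)), np.ones((h, w))
gx[5, 5] = 100.0
z_rec = robust_integrate(gx, gy)
# Compare to the ground truth up to the unknown constant offset.
print(np.abs((z_rec - z_rec.mean()) - (z_true - z_true.mean())).max())
```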

    Gradient-Based Surface Reconstruction and the Application to Wind Waves

    New gradient-based surface reconstruction techniques are presented: regularized least-absolute-deviations methods using common discrete differential operators, and spline-based methods. All new methods are formulated in the same mathematical framework as convex optimization problems and can handle non-rectangular domains. For the spline-based methods, either common P-splines or P1-splines can be used. Extensive reconstruction error analysis shows that the new P1-spline-based method is superior to conventional methods for gradient fields corrupted with outliers. In the analysis, both spline-based methods provide the lowest reconstruction errors for reconstructions from incomplete gradient fields. Furthermore, the pre-processing of gradient fields is investigated: median-filter pre-processing offers a computationally efficient method that is robust to outliers. After the reconstruction error analysis, selected reconstruction methods are applied to imaging slope gauge data measured in the wind-wave facility Aeolotron in Heidelberg. Using newly developed segmentation methods, it is possible to detect different coordinate system orientations of gradient field data and reconstruction algorithms. In addition, the use of a zero-slope correction for reconstructions from the provided imaging slope gauge data is justified. The impact of light-refracting bubbles on reconstructions from this data is also examined in this thesis. Finally, water surface reconstructions are shown for measurement conditions with different fetch lengths at the same wind speed in the Aeolotron.
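    Of the pre-processing options discussed above, median filtering of the gradient components is the simplest to reproduce. The sketch below is an illustrative stand-in (array sizes and outlier pattern are invented), not the thesis code.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess_gradients(gx, gy, size=3):
    """Median-filter pre-processing of a gradient (slope) field: a cheap,
    outlier-robust step applied before any reconstruction method."""
    return median_filter(gx, size=size), median_filter(gy, size=size)

# Toy slope field with scattered outliers, e.g. mimicking specular glints.
rng = np.random.default_rng(1)
gx, gy = 0.1 * np.ones((64, 64)), np.zeros((64, 64))
idx = rng.integers(0, 64, size=(30, 2))
gx[idx[:, 0], idx[:, 1]] = 10.0
gx_f, gy_f = preprocess_gradients(gx, gy)
print(gx.max(), gx_f.max())   # isolated outliers are removed by the 3x3 median
```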

    Living at the Edge: A Large Deviations Approach to the Outage MIMO Capacity

    Using a large deviations approach, we calculate the probability distribution of the mutual information of MIMO channels in the limit of large antenna numbers. In contrast to previous methods that only focused on the distribution close to its mean (thus obtaining an asymptotically Gaussian distribution), we calculate the full distribution, including its tails, which strongly deviate from the Gaussian behavior near the mean. The resulting distribution interpolates seamlessly between the Gaussian approximation for rates R close to the ergodic value of the mutual information and the approach of Zheng and Tse for large signal-to-noise ratios ρ. This calculation provides us with a tool to obtain outage probabilities analytically at any point in the (R, ρ, N) parameter space, as long as the number of antennas N is not too small. In addition, this method also yields the probability distribution of eigenvalues constrained to the subspace where the mutual information per antenna is fixed to R for a given ρ. Quite remarkably, this eigenvalue density is of the form of the Marcenko-Pastur distribution with square-root singularities, and it depends on the values of R and ρ.
    Comment: Accepted for publication, IEEE Transactions on Information Theory (2010). Part of this work appears in the Proc. IEEE Information Theory Workshop, June 2009, Volos, Greece.
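    For orientation, the quantity studied is the mutual information I = log det(I_N + (ρ/N) H H†) of an N x N i.i.d. Rayleigh channel. The Monte Carlo sketch below only illustrates the object of study: it estimates an outage probability empirically and compares it with a Gaussian fit around the mean, the regime where the two differ being exactly what the large-deviations analysis targets analytically. The values of N, ρ, the rate threshold, and the trial count are arbitrary illustrative choices, not from the paper.

```python
import numpy as np
from scipy.stats import norm

def mutual_info_samples(N, rho, trials, seed=0):
    """Monte Carlo samples of the per-antenna mutual information
    I/N = (1/N) log det(I + (rho/N) H H^H) for an N x N i.i.d. Rayleigh channel."""
    rng = np.random.default_rng(seed)
    out = np.empty(trials)
    for t in range(trials):
        H = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
        _, logdet = np.linalg.slogdet(np.eye(N) + (rho / N) * (H @ H.conj().T))
        out[t] = logdet / N          # nats per antenna
    return out

samples = mutual_info_samples(N=8, rho=10.0, trials=20000)
R = samples.mean() - 3 * samples.std()          # a rate threshold in the lower tail
p_mc = (samples < R).mean()                     # empirical outage probability
p_gauss = norm.cdf(R, loc=samples.mean(), scale=samples.std())
print(p_mc, p_gauss)
```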

    Three-dimensional modeling of the human jaw/teeth using optics and statistics.

    Object modeling is a fundamental problem in engineering, drawing on computer-aided design, computational geometry, computer vision, and advanced manufacturing. The process of object modeling takes three stages: sensing, representation, and analysis. Various sensors may be used to capture information about objects; optical cameras and laser scanners are common with rigid objects, while X-ray, CT, and MRI are common with biological organs. These sensors may provide direct or indirect inference about the object, requiring a geometric representation in the computer that is suitable for subsequent use. Geometric representations that are compact, i.e., that capture the main features of the objects with a minimal number of data points or vertices, fall into the domain of computational geometry. Once a compact object representation is in the computer, various analysis steps can be conducted, including recognition, coding, and transmission. The subject of this dissertation is object reconstruction from a sequence of optical images using shape from shading (SFS) and SFS with shape priors, with dentistry as the application domain. Most SFS approaches focus on the computational part of the SFS problem, i.e., the numerical solution. As a result, the imaging model in most conventional SFS algorithms has been simplified under three simple but restrictive assumptions: (1) the camera performs an orthographic projection of the scene, (2) the surface has Lambertian reflectance, and (3) the light source is a single point source at infinity. Unfortunately, such assumptions no longer hold when reconstructing real objects such as human teeth in an intra-oral imaging environment. In this work, we introduce a more realistic formulation of the SFS problem by considering the components of image formation: the camera, the light source, and the surface reflectance. This dissertation proposes a non-Lambertian SFS algorithm under perspective projection which benefits from camera calibration parameters. The attenuation of illumination due to near-field imaging is taken into account. The surface reflectance is modeled using the Oren-Nayar-Wolff model, which accounts for the retro-reflection case. In this context, a new variational formulation is proposed that relates an evolving surface model to image information, taking into consideration that the image is taken by a perspective camera with known parameters. A new energy functional is formulated to incorporate brightness, smoothness, and integrability constraints. In addition, to further improve the accuracy and practicality of the results, 3D shape priors are incorporated in the proposed SFS formulation. This strategy is motivated by the fact that humans rely on strong prior information about the 3D world around them in order to perceive 3D shape; such information is statistically extracted from training 3D models of human teeth. The proposed SFS algorithms have been used in two different frameworks in this dissertation: (a) holistic, which stitches a sequence of images in order to cover the entire jaw and then applies SFS, and (b) piece-wise, which focuses on a specific tooth or a segment of the human jaw and applies SFS using the physical illumination characteristics of the teeth. To augment the visible portion, and in order to reconstruct the entire jaw without the use of CT, MRI, or even X-rays, prior information gathered from a database of human jaws was added. This database has been constructed from an adult population with variations in tooth size, degradation, and alignment, and it contains both shape and albedo information for the population. Using this database, a novel statistical shape from shading (SSFS) approach has been created. Extending the work on human teeth analysis, Finite Element Analysis (FEA) is adapted for analyzing and calculating stresses and strains of dental structures. Previous Finite Element (FE) studies used approximate 2D models; in this dissertation, an accurate three-dimensional CAD model is proposed, and 3D stress and displacement analyses of different tooth types are successfully carried out using the newly developed open-source finite element solver Finite Elements for Biomechanics (FEBio). The limitations of the experimental and analytical approaches used for stress and displacement analysis are overcome by exploiting the ability of FEA tools to handle complex geometry and complex loading conditions.
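    As a rough illustration of two of the image-formation ingredients described above, the sketch below evaluates a simplified Oren-Nayar reflectance with near-field (inverse-square) light attenuation at a single surface point. It is not the dissertation's Oren-Nayar-Wolff formulation or its variational solver; the geometry, albedo, and roughness values are invented for illustration.

```python
import numpy as np

def oren_nayar_radiance(n, light_pos, view_pos, point, albedo=0.8, sigma=0.3):
    """Simplified Oren-Nayar shading with near-field (inverse-square) light
    attenuation -- the kind of per-point image-formation term a non-Lambertian
    SFS brightness constraint evaluates. sigma=0 reduces to Lambertian."""
    l_vec = light_pos - point
    v_vec = view_pos - point
    r2 = np.dot(l_vec, l_vec)                      # near-field attenuation term
    l_hat, v_hat = l_vec / np.sqrt(r2), v_vec / np.linalg.norm(v_vec)
    cos_i = np.clip(np.dot(n, l_hat), 0.0, 1.0)
    cos_r = np.clip(np.dot(n, v_hat), 0.0, 1.0)
    theta_i, theta_r = np.arccos(cos_i), np.arccos(cos_r)
    # Azimuthal difference of light/view directions projected onto the surface.
    l_proj = l_hat - cos_i * n
    v_proj = v_hat - cos_r * n
    denom = np.linalg.norm(l_proj) * np.linalg.norm(v_proj)
    cos_phi = np.dot(l_proj, v_proj) / denom if denom > 1e-12 else 0.0
    s2 = sigma ** 2
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha, beta = max(theta_i, theta_r), min(theta_i, theta_r)
    brdf = (albedo / np.pi) * (A + B * max(cos_phi, 0.0) * np.sin(alpha) * np.tan(beta))
    return brdf * cos_i / r2

# Toy usage: one surface point facing up, camera and light slightly offset above it.
n = np.array([0.0, 0.0, 1.0])
print(oren_nayar_radiance(n, light_pos=np.array([0.1, 0.0, 2.0]),
                          view_pos=np.array([0.0, 0.0, 2.0]),
                          point=np.zeros(3)))
```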