188 research outputs found

    Efficient Reconstruction of Piecewise Constant Images Using Nonsmooth Nonconvex Minimization

    We consider the restoration of piecewise constant images where the number of regions and their values are not fixed in advance, and where neighboring regions take well-separated constant values, from noisy data obtained at the output of a linear operator (e.g., a blurring kernel or a Radon transform). We thus also address the generic problem of unsupervised segmentation in the context of linear inverse problems. The segmentation and restoration tasks are solved jointly by minimizing an objective function (an energy) composed of a quadratic data-fidelity term and a nonsmooth nonconvex regularization term. The pertinence of such an energy is ensured by the analytical properties of its minimizers. However, its practical interest used to be limited by the difficulty of the computational stage, which requires a nonsmooth nonconvex minimization. Indeed, the existing methods are unsatisfactory since they (implicitly or explicitly) involve a smooth approximation of the regularization term and often get stuck in shallow local minima. The goal of this paper is to design a method that efficiently handles the nonsmooth nonconvex minimization. More precisely, we propose a continuation method in which one tracks the minimizers along a sequence of approximate nonsmooth energies {Jε}, the first of which is strictly convex and the last of which is the original energy to minimize. Since the nonsmoothness of the regularization term is essential for the segmentation task, each Jε is nonsmooth and is expressed as the sum of an l1 regularization term and a smooth nonconvex function. Furthermore, the local minimization of each Jε is reformulated as the minimization of a smooth function subject to a set of linear constraints. The latter problem is solved by a modified primal-dual interior point method, which guarantees a descent direction at each step. Experimental results demonstrate the effectiveness and efficiency of the proposed method. A comparison with simulated annealing methods further shows the advantage of our method.
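The continuation idea can be sketched in a much simplified setting. The snippet below tracks minimizers along a family of penalties φ_a(t) = λ|t| / (1 + a|t|), which is the convex l1 penalty at a = 0 and becomes increasingly nonconvex as a grows; each stage is minimized by iteratively reweighted soft thresholding (a majorize-minimize scheme) warm-started from the previous stage. This is an illustrative stand-in for the continuation principle, not the paper's interior-point algorithm, and `gnc_restore` and its parameters are hypothetical names.

```python
import numpy as np

def gnc_restore(A, y, lam, alphas, n_iter=200):
    """Continuation over the penalty family phi_a(t) = lam*|t|/(1+a*|t|):
    a = 0 gives a convex l1 energy, larger a makes it nonconvex.
    Each stage is minimized by iteratively reweighted ISTA and
    warm-starts the next one. Illustrative sketch only."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the data term
    u = np.zeros(A.shape[1])
    for a in alphas:                           # track minimizers along the family
        for _ in range(n_iter):
            g = A.T @ (A @ u - y)              # gradient of 1/2 ||Au - y||^2
            z = u - g / L
            w = lam / (1.0 + a * np.abs(u)) ** 2   # MM weight of phi_a at current u
            u = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)  # soft threshold
    return u
```

Warm-starting each stage from the previous minimizer is what lets the scheme escape the shallow local minima that plague direct minimization of the final nonconvex energy.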

    Efficient Algorithms for Mumford-Shah and Potts Problems

    In this work, we consider Mumford-Shah and Potts models and their higher order generalizations. Mumford-Shah and Potts models are among the most well-known variational approaches to edge-preserving smoothing and partitioning of images. Though their formulations are intuitive, their application is not straightforward as it corresponds to solving challenging, particularly non-convex, minimization problems. The main focus of this thesis is the development of new algorithmic approaches to Mumford-Shah and Potts models, which is to this day an active field of research. We start by considering the situation for univariate data. We find that switching to higher order models can overcome known shortcomings of the classical first order models when applied to data with steep slopes. Though the existing approaches to the first order models could be applied in principle, they are slow or become numerically unstable for higher orders. Therefore, we develop a new algorithm for univariate Mumford-Shah and Potts models of any order and show that it solves the models in a stable way in O(n^2). Furthermore, we develop algorithms for the inverse Potts model. The inverse Potts model can be seen as an approach to jointly reconstructing and partitioning images that are only available indirectly on the basis of measured data. Further, we give a convergence analysis for the proposed algorithms. In particular, we prove the convergence to a local minimum of the underlying NP-hard minimization problem. We apply the proposed algorithms to numerical data to illustrate their benefits. Next, we apply the multi-channel Potts prior to the reconstruction problem in multi-spectral computed tomography (CT). To this end, we propose a new superiorization approach, which perturbs the iterates of the conjugate gradient method towards better results with respect to the Potts prior. 
In numerical experiments, we illustrate the benefits of the proposed approach by comparing it to the existing Potts model approach from the literature as well as to existing total variation type methods. Hereafter, we consider the second order Mumford-Shah model for edge-preserving smoothing of images, which, similarly to the univariate case, improves upon the classical Mumford-Shah model for images with linear color gradients. Based on reformulations in terms of Taylor jets, i.e., specific fields of polynomials, we derive discrete second order Mumford-Shah models for which we develop an efficient algorithm using an ADMM scheme. We illustrate the potential of the proposed method by comparing it with existing methods for the second order Mumford-Shah model. Further, we illustrate its benefits in connection with edge detection. Finally, we consider the affine-linear Potts model for the image partitioning problem. As many images possess linear trends within homogeneous regions, the classical Potts model frequently leads to oversegmentation. The affine-linear Potts model accounts for this problem by allowing linear trends within segments. We lift the corresponding minimization problem to the jet space and again develop an ADMM approach. In numerical experiments, we show that the proposed algorithm achieves lower energy values as well as faster runtimes than the comparison method, which is based on the iterative application of the graph cut algorithm (with α-expansion moves).
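For the univariate case, the classical first order Potts model, γ·(#jumps) + Σ(u_i − y_i)², can be solved exactly by the well-known O(n²) dynamic program that grows the last segment leftwards while updating its squared error in O(1). The sketch below illustrates that first order, L2 baseline, not the thesis's higher-order algorithm; `potts_1d` is an illustrative name.

```python
import numpy as np

def potts_1d(y, gamma):
    """Exact minimizer of the first order Potts functional
    gamma*(#jumps) + sum_i (u_i - y_i)^2 by dynamic programming."""
    n = len(y)
    B = np.empty(n + 1)                # B[r] = optimal energy of y[0:r]
    B[0] = -gamma                      # cancels the +gamma of the first segment
    jump = np.zeros(n + 1, dtype=int)  # start index of the optimal last segment
    for r in range(1, n + 1):
        mean, sse = 0.0, 0.0
        best, arg = np.inf, r - 1
        # grow the last segment leftwards, updating its SSE in O(1) (Welford)
        for l in range(r, 0, -1):
            k = r - l + 1
            delta = y[l - 1] - mean
            mean += delta / k
            sse += delta * (y[l - 1] - mean)
            cand = B[l - 1] + gamma + sse
            if cand < best:
                best, arg = cand, l - 1
        B[r] = best
        jump[r] = arg
    # backtrack the piecewise constant estimate
    u = np.empty(n)
    r = n
    while r > 0:
        l = jump[r]
        u[l:r] = y[l:r].mean()
        r = l
    return u
```

The two nested loops give the O(n²) complexity mentioned in the abstract; the incremental mean/SSE update is what keeps the inner step constant-time and numerically stable.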

    Contributions to unsupervised and supervised learning with applications in digital image processing

    This thesis covers a broad period of research activities with a common thread: learning processes and their application to image processing. The two main categories of learning algorithms, supervised and unsupervised, have been touched upon across these years. The main body of initial works was devoted to unsupervised learning neural architectures, especially the Self Organizing Map. Our aim was to study its convergence properties from empirical and analytical viewpoints. From the digital image processing point of view, we have focused on two basic problems: color quantization and filter design. Both problems have been addressed in the context of Vector Quantization performed by Competitive Neural Networks. Processing of non-stationary data is an interesting paradigm that had not been explored with Competitive Neural Networks. We have stated the problem of Non-stationary Clustering and the related Adaptive Vector Quantization in the context of image sequence processing, where we naturally have a Frame Based Adaptive Vector Quantization. This approach treats the problem as a sequence of stationary, almost-independent clustering problems. We have also developed some new computational algorithms for Vector Quantization design. The works on supervised learning have been sparsely distributed in time and direction. First, we worked on the use of the Self Organizing Map for the independent modeling of skin and non-skin color distributions for color-based face localization. Second, we collaborated in the realization of a supervised learning system for tissue segmentation in Magnetic Resonance Imaging data. Third, we worked on the development, implementation and experimentation with High Order Boltzmann Machines, which are a very different learning architecture. Finally, we have been working on the application of Sparse Bayesian Learning to a new kind of classification system based on Dendritic Computing. This last research line is an open research track at the time of writing this thesis.
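The codebook design by Competitive Neural Networks mentioned above can be sketched with the simplest winner-take-all rule: each training vector attracts only its nearest code vector. This is a minimal illustration of competitive Vector Quantization, not any of the thesis's algorithms; `competitive_vq` and its parameters are hypothetical.

```python
import numpy as np

def competitive_vq(data, n_codes, epochs=10, lr=0.1):
    """Winner-take-all competitive learning for vector quantization:
    for each input, only the nearest code vector (the winner) is
    moved a fraction lr toward the input."""
    codebook = data[:n_codes].astype(float).copy()  # seed with first samples
    for _ in range(epochs):
        for x in data:
            # winner = code vector closest in Euclidean distance
            i = np.argmin(np.sum((codebook - x) ** 2, axis=1))
            codebook[i] += lr * (x - codebook[i])   # move winner toward x
    return codebook
```

For color quantization, `data` would hold image pixels as RGB vectors and the learned codebook becomes the reduced palette; frame-based adaptive variants re-run this update on each frame, warm-started from the previous codebook.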

    The analytic edge - image reconstruction from edge data via the Cauchy Integral

    A novel image reconstruction algorithm from edges (image gradients) follows from the Sokhotski-Plemelj Theorem of complex analysis, an elaboration of the standard Cauchy (Singular) Integral. This algorithm demonstrates the application of Singular Integral Equation methods to image processing, extending the more common use of Partial Differential Equations (e.g., those based on variants of the Diffusion or Poisson equations). The Cauchy Integral approach has a deep connection to, and sheds light on, the (linear and non-linear) diffusion equation, the retinex algorithm and energy-based image regularization. It extends the commonly understood local definition of an edge to a global, complex analytic structure, the analytic edge: the contrast weighted kernel of the Cauchy Integral. Superposition of the set of analytic edges provides a "filled-in" image which is the piece-wise analytic image corresponding to the edge (gradient) data supplied. This is a fully parallel operation which avoids the time penalty associated with iterative solutions and thus is compatible with the short time (about 150 milliseconds) that is biologically available for the brain to construct a perceptual image from edge data. Although this algorithm produces an exact reconstruction of a filled-in image from the gradients of that image, slight modifications of it produce images which correspond to perceptual reports of human observers when presented with a wide range of "visual contrast illusion" images.
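The abstract contrasts its Cauchy integral route with the more common PDE route. As a reference point only (this is not the paper's method), the standard Poisson approach to reconstructing an image from its gradient field, with periodic boundaries, can be written in a few lines with FFTs; `poisson_reconstruct` is a hypothetical name.

```python
import numpy as np

def poisson_reconstruct(gx, gy):
    """Recover an image (up to an additive constant) from forward
    periodic differences (gx, gy) by solving the Poisson equation
    via FFT diagonalization of the periodic discrete Laplacian."""
    h, w = gx.shape
    # backward-difference divergence of the gradient = discrete Laplacian of the image
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    # eigenvalues of the periodic discrete Laplacian, per frequency
    lx = 2.0 * np.cos(2.0 * np.pi * np.fft.fftfreq(w)) - 2.0
    ly = 2.0 * np.cos(2.0 * np.pi * np.fft.fftfreq(h)) - 2.0
    denom = lx[None, :] + ly[:, None]
    denom[0, 0] = 1.0          # the DC term (image mean) is undetermined; pin it
    F = np.fft.fft2(div) / denom
    F[0, 0] = 0.0              # return the zero-mean solution
    return np.real(np.fft.ifft2(F))
```

Note that this FFT solve is also non-iterative, so the paper's parallelism argument is best read as contrasting the Cauchy superposition with iterative PDE relaxation schemes rather than with spectral solvers.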

    Model-based Fault Diagnosis and Fault Accommodation for Space Missions : Application to the Rendezvous Phase of the MSR Mission

    The work addressed in this thesis draws on expertise from actions undertaken between the European Space Agency (ESA), the industrial partner Thales Alenia Space (TAS) and the IMS laboratory (laboratoire de l'Intégration du Matériau au Système), which develop new generations of integrated Guidance, Navigation and Control (GNC) units with fault detection and tolerance capabilities. The reference mission is ESA's Mars Sample Return (MSR) mission. The presented work focuses on the terminal rendezvous sequence of the MSR mission, which corresponds to the last few hundred meters until capture. The chaser vehicle is the MSR Orbiter, while the passive target is a spherical container. The objective at control level is to achieve capture with an accuracy better than a few centimeters. The research work addressed in this thesis concerns the development of model-based Fault Detection and Isolation (FDI) and Fault Tolerant Control (FTC) approaches that could significantly increase the operational and functional autonomy of the chaser during rendezvous and, more generally, of spacecraft involved in deep-space missions. Since redundancy exists in the sensors and since the reaction wheels are not used during the rendezvous phase, the work presented in this thesis focuses only on the thruster-based propulsion system. The investigated faults have been defined in accordance with ESA and TAS requirements and following their experience. The presented FDI/FTC approaches rely on hardware redundancy in sensors, control redirection and control re-allocation methods, and a hierarchical FDI scheme including signal-based approaches at sensor level, model-based approaches for thruster fault detection/isolation, and trajectory safety monitoring. Carefully selected performance and reliability indices, together with Monte Carlo simulation campaigns using a high-fidelity industrial simulator, demonstrate the viability of the proposed approaches.
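A schematic of the model-based layer for thruster fault detection: predict the chaser's acceleration from the commanded thruster forces through a thruster configuration matrix, and flag samples whose residual against the measured acceleration exceeds a threshold. This is a toy sketch with hypothetical names, far simpler than the thesis's hierarchical FDI scheme.

```python
import numpy as np

def thruster_residual(u_cmd, a_meas, B, mass, threshold):
    """Toy model-based FDI residual. u_cmd: commanded thruster forces
    (n_thrusters x T), B: thruster configuration matrix (3 x n_thrusters),
    a_meas: measured accelerations (3 x T). Returns the residual norms
    and a boolean fault flag per sample."""
    a_pred = (B @ u_cmd) / mass                    # model-predicted acceleration
    r = np.linalg.norm(a_meas - a_pred, axis=0)    # residual norm per sample
    return r, r > threshold
```

In practice the threshold would be tuned against sensor noise and model uncertainty (hence the Monte Carlo campaigns mentioned above), and isolation would follow by correlating the residual direction with the columns of B.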

    Approximate Spatial Layout Processing in the Visual System: Modeling Texture-Based Segmentation and Shape Estimation

    Moving through the environment, grasping objects, orienting oneself, and countless other tasks all require information about spatial organization. This in turn requires determining where surfaces, objects and other elements of a scene are located and how they are arranged. Humans and other animals can extract spatial organization from vision rapidly and automatically. To better understand this capability, it would be useful to know how the visual system can make an initial estimate of the spatial layout. Without time or opportunity for a more careful analysis, a rough estimate may be all that the system can extract. Nevertheless, rough spatial information may be sufficient for many purposes, even if it is devoid of details that are important for tasks such as object recognition. The human visual system uses many sources of information for estimating layout. Here I focus on one source in particular: visual texture. I present a biologically reasonable, computational model of how the system can exploit patterns of texture for performing two basic tasks in spatial layout processing: locating possible surfaces in the visual input, and estimating their approximate shapes. Separately, these two tasks have been studied extensively, but they have not previously been examined together in the context of a model grounded in neurophysiology and psychophysics. I show that by integrating segmentation and shape estimation, a system can share information between these processes, allowing the processes to constrain and inform each other as well as save on computations. The model developed here begins with the responses of simulated complex cells of the primary visual cortex, and combines a weak membrane/functional minimization approach to segmentation with a shape estimation method based on tracking changes in the average dominant spatial frequencies across a surface. It includes mechanisms for detecting untextured areas and flat areas in an input image. 
In support of the model, I present a software simulation that can perform texture-based segmentation and shape estimation on images containing multiple, curved, textured surfaces.
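The model's front end is built on simulated complex cells, commonly modeled as the phase-invariant energy of a quadrature pair of Gabor filters; comparing such energies across a bank of frequencies yields the local dominant spatial frequency used for shape estimation. The sketch below shows one energy computation under the standard energy-model assumptions; the function name and parameters are illustrative, not from the dissertation.

```python
import numpy as np

def complex_cell_energy(img, freq, sigma=4.0, size=21):
    """Phase-invariant energy of a quadrature Gabor pair (a standard
    complex cell model) at one horizontal spatial frequency.
    Convolution is done via FFT with periodic boundaries, for brevity."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    env = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))   # Gaussian envelope
    even = env * np.cos(2.0 * np.pi * freq * xx)        # quadrature pair
    odd = env * np.sin(2.0 * np.pi * freq * xx)

    def conv(kernel):
        K = np.zeros_like(img, dtype=float)
        K[:size, :size] = kernel
        return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(K)))

    return conv(even) ** 2 + conv(odd) ** 2             # phase-invariant energy
```

A surface slanting away from the viewer compresses its texture, raising the dominant frequency; tracking that change across the image is the basis of the shape estimation described above.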

    Modeling and imaging of the vocal fold vibration for voice health.


    Data-driven approaches for interactive appearance editing

    This thesis proposes several techniques for interactive editing of digital content and fast rendering of virtual 3D scenes. Editing digital content, such as images or 3D scenes, is difficult and requires artistic talent and technical expertise. To alleviate these difficulties, we exploit data-driven approaches that use easily accessible Internet data (e.g., images, videos, materials) to develop new tools for digital content manipulation. Our proposed techniques allow casual users to achieve high-quality edits by interactively exploring the manipulations without the need to understand the underlying physical models of appearance. First, the thesis presents a fast algorithm for realistic image synthesis of virtual 3D scenes. This serves as the core framework for a new method that allows artists to fine-tune the appearance of a rendered 3D scene. Here, artists directly paint the final appearance and the system automatically solves for the material parameters that best match the desired look. Along this line, an example-based material assignment approach is proposed, where the 3D models of a virtual scene can be "materialized" simply by giving a guidance source (image/video). Next, the thesis proposes shape and color subspaces of an object that are learned from a collection of exemplar images. These subspaces can be used to constrain image manipulations to valid shapes and colors, or to provide suggestions for manipulations. Finally, data-driven color manifolds, which contain the colors of a specific context, are proposed. Such color manifolds can be used to improve color picking performance, color stylization, compression and white balancing.
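The paint-to-parameters step described above is an inverse problem. Under the simplifying assumption that the rendered image is linear in the material parameters (which does not hold for the general appearance models treated in the thesis), it reduces to a least-squares fit; `fit_material` and its arguments are hypothetical names.

```python
import numpy as np

def fit_material(basis_renders, target):
    """Least-squares fit of material parameters so the rendered result
    matches a painted target. Each column of basis_renders is the image
    (flattened) rendered with one parameter set to 1 and the rest to 0;
    by linearity the full render is basis_renders @ params."""
    params, *_ = np.linalg.lstsq(basis_renders, target, rcond=None)
    return params
```

For nonlinear appearance models, the same idea applies locally: linearize the renderer around the current parameters and iterate the least-squares step (Gauss-Newton), which is one common way such paint-driven optimization loops are built.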