
    C-HiLasso: A Collaborative Hierarchical Sparse Modeling Framework

    Sparse modeling is a powerful framework for data analysis and processing. Traditionally, encoding in this framework is performed by solving an L1-regularized linear regression problem, commonly referred to as Lasso or Basis Pursuit. In this work we combine the sparsity-inducing property of the Lasso model at the individual feature level with the block-sparsity property of the Group Lasso model, where sparse groups of features are jointly encoded, obtaining a hierarchically structured sparsity pattern. This results in the Hierarchical Lasso (HiLasso), which shows important practical modeling advantages. We then extend this approach to the collaborative case, where a set of simultaneously coded signals share the same sparsity pattern at the higher (group) level, but not necessarily at the lower (within-group) level, obtaining the collaborative HiLasso model (C-HiLasso). Such signals then share the same active groups, or classes, but not necessarily the same active set. This model is very well suited for applications such as source identification and separation. An efficient optimization procedure, which guarantees convergence to the global optimum, is developed for these new models. The presentation of the new framework and optimization approach is complemented with experimental examples and theoretical results regarding recovery guarantees for the proposed models.
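    To make the hierarchy concrete, the penalty structure can be sketched in standard Lasso/Group Lasso notation (notation and weighting assumed here; the paper's exact normalizations may differ). For a signal x with dictionary D and code a partitioned into groups g in a set G, HiLasso solves

        \min_{a} \; \tfrac{1}{2}\|x - Da\|_2^2 \;+\; \lambda_2 \sum_{g \in G} \|a[g]\|_2 \;+\; \lambda_1 \|a\|_1 ,

    while C-HiLasso couples the group-level penalty across a set of jointly coded signals X = [x_1, ..., x_n] with code matrix A:

        \min_{A} \; \tfrac{1}{2}\|X - DA\|_F^2 \;+\; \lambda_2 \sum_{g \in G} \|A[g]\|_F \;+\; \lambda_1 \sum_{j=1}^{n} \|a_j\|_1 .

    The Frobenius group term drives whole groups to zero simultaneously for all signals, while the per-signal L1 term keeps the within-group sparsity patterns individual.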

    Second generation sparse models

    Sparse data models, where data is assumed to be well represented as a linear combination of a few elements from a learned dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many applications. The success of these models is largely attributed to two critical features: the use of sparsity as a robust mechanism for regularizing the linear coefficients that represent the data, and the flexibility provided by overcomplete dictionaries that are learned from the data. These features are controlled by two critical hyper-parameters: the desired sparsity of the coefficients, and the size of the dictionaries to be learned. However, lacking theoretical guidelines for selecting these critical parameters, applications based on sparse models often resort to hand-tuning and cross-validation for each application and each data set. This can be both inefficient and ineffective. On the other hand, there are multiple scenarios in which imposing additional constraints on the produced representations, including the sparse codes and the dictionary itself, can result in further improvements. This thesis is about improving and/or extending current sparse models by addressing the two issues discussed above, providing the elements for a new generation of more powerful and flexible sparse models. First, we seek to gain a better understanding of sparse models as data modeling tools, so that critical parameters can be selected automatically, efficiently, and in a principled way. Second, we explore new sparse modeling formulations for effectively exploiting the prior information present in different scenarios. In order to achieve these goals, we combine ideas and tools from information theory, statistics, machine learning, and optimization theory. The theoretical contributions are complemented with applications in audio, image, and video processing.
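    As a concrete illustration of the two hyper-parameters in question, here is a minimal sketch of classical sparse coding with a learned dictionary, using scikit-learn as an assumed stand-in (the thesis develops principled selection criteria rather than hand-tuning these values):

        # Classical sparse modeling with its two hand-tuned hyper-parameters:
        # dictionary size (n_components) and sparsity weight (alpha).
        # Data and parameter values are illustrative assumptions only.
        import numpy as np
        from sklearn.decomposition import DictionaryLearning

        X = np.random.randn(200, 64)          # 200 synthetic signals of dimension 64
        model = DictionaryLearning(
            n_components=128,                 # dictionary size, usually cross-validated
            alpha=1.0,                        # sparsity level, usually cross-validated
            transform_algorithm="lasso_lars", # L1-regularized (Lasso) encoding
        )
        codes = model.fit_transform(X)        # sparse codes, shape (200, 128)
        dictionary = model.components_        # learned overcomplete dictionary, shape (128, 64)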

    A Hierarchical Algorithm for Multiphase Texture Image Segmentation


    Von Pixeln zu Regionen: Partielle Differentialgleichungen in der Bildanalyse (From Pixels to Regions: Partial Differential Equations in Image Analysis)

    This work deals with applications of partial differential equations in image analysis, with a focus on applications that can be used for image segmentation. This includes, among other topics, nonlinear diffusion, motion analysis, and image segmentation itself. From each chapter to the next, the methods are directed more and more toward image segmentation. While Chapter 2 presents general denoising and simplification techniques, Chapter 4 addresses the more specialized task of extracting texture and motion from images; the resulting features are then employed for the partitioning of images in Chapter 5. Thus the thread of this work runs clearly from the raw image data, the pixels, to the more abstract description of images by means of regions. That image processing techniques can also be useful in research areas beyond conventional images is shown in Chapter 3, where they are used to improve numerical methods for conservation laws in physics. Conceptually, the work emphasizes using as many different features as possible for segmentation. Besides image-driven features like texture and motion, this includes the knowledge-based information of a three-dimensional object model. The basic idea of this concept is to give the separation of object regions as broad an information basis as possible, and thus to increase the number of situations in which the method yields satisfactory segmentation results. A further basic concept pursued in this thesis is the use of coarse-to-fine strategies. They are employed both for motion estimation in Chapter 4 and for segmentation in Chapter 5. In both cases one has to deal with optimization problems that contain many local optima, so conventional local optimization usually leads to results whose quality heavily depends on the initialization. This situation can often be eased if the optimization problem is first significantly simplified; one then tries to solve the original problem by continuously increasing the problem complexity. Apart from this, the work contains several essential technical novelties. In Chapter 2, nonlinear diffusion with unbounded diffusivities is considered, which includes total variation flow (TV flow). A thorough analysis of TV flow leads to an analytic solution which makes it possible to show that, in the space-discrete, one-dimensional setting, TV flow is exactly identical to the corresponding variational approach, TV regularization. Moreover, various numerical methods are investigated to determine their suitability for diffusion filters with unbounded diffusivities. TV flow can be regarded as an alternative to Gaussian smoothing, with the significant difference that TV flow preserves discontinuities. By replacing Gaussian smoothing with TV flow, one can develop new discontinuity-preserving versions of well-known operators such as the structure tensor. TV flow is also employed in Chapter 3, where the goal is to improve numerical schemes for the approximation of hyperbolic conservation laws by means of image processing techniques. The role of TV flow in this context is to remove the oscillations of a second-order method. In an alternative approach, the approximation performance of a first-order method is improved by a nonlinear inverse diffusion filter; the underlying concept is to remove exactly the amount of numerical diffusion that actually stabilizes the scheme.
By means of an appropriate stabilization of the inverse diffusion process, it is possible to preserve the positive stability properties of the original method. Chapter 4 is separated into two parts: the first deals with the extraction of texture features, the second with motion estimation. The goal of the texture extraction method is to derive a feature space that is as low-dimensional as possible while still providing very good discrimination properties. The basic framework of this feature space is the TV-flow-based structure tensor presented in Chapter 2. It captures the orientation, magnitude, and homogeneity of a texture and thus already provides very important features for texture discrimination. Additionally, a region-based local scale measure is developed that adds the size of texture elements to the feature space. This feature space is later used in Chapter 5 for texture segmentation. Two motion estimation methods are introduced in Chapter 4. One is based on the structure tensor from Chapter 2 and improves existing local methods. The other is based on a global variational approach and differs from usual variational approaches in its use of a gradient constancy assumption. This assumption enables the method to yield good estimates even in the presence of small local or global variations of illumination. Besides this novelty, the combination of non-linearized constancy assumptions and a coarse-to-fine strategy yields a numerical scheme that provides, for the first time, a well-founded theory for the very successful warping methods. The described technique leads to results that are generally more accurate than any presented in the literature so far. As already mentioned, the goal of the image segmentation approach in Chapter 5 is mainly to integrate the features derived in Chapter 4 and to utilize a coarse-to-fine strategy. This is done in the framework of region-based, implicit active contour models built on the concept of level sets. The region models involved are extended with nonparametric as well as local region statistics. A further novelty is the extension of the level set concept to multiple regions, where the optimal number of regions is estimated by a hierarchical approach; this is a considerable extension of conventional active contour models, which are usually restricted to two regions. Moreover, the idea of using three-dimensional object knowledge for segmentation is presented: the proposed method uses the extracted contour to estimate the pose of the object, while in return the projected object model supports the segmentation. The implementation of this idea as described in this thesis is only at an early stage; many interesting aspects remain to be derived from this concept and investigated in the future.
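    For reference, the two formulations whose equivalence the thesis establishes in the space-discrete, one-dimensional setting can be sketched in standard (continuous) notation, assumed here rather than quoted from the thesis. TV flow evolves the image u, with initial condition u(·, 0) = f, under

        \partial_t u = \operatorname{div}\!\left( \frac{\nabla u}{|\nabla u|} \right),

    while TV regularization returns the minimizer of the energy

        E(u) = \frac{1}{2} \int_\Omega (u - f)^2 \, dx \;+\; \alpha \int_\Omega |\nabla u| \, dx .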
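    The discontinuity-preserving structure tensor mentioned above is obtained by replacing the Gaussian smoothing in the classical linear structure tensor with TV flow. Below is a minimal sketch of the classical baseline being modified (scipy-based, purely illustrative; the TV-flow variant is the thesis's contribution and is not reproduced here):

        # Classical linear structure tensor: outer product of the image gradient,
        # smoothed component-wise with a Gaussian. The thesis swaps the Gaussian
        # for TV flow to preserve discontinuities; this shows only the baseline.
        import numpy as np
        from scipy.ndimage import gaussian_filter, sobel

        def structure_tensor(img: np.ndarray, sigma: float = 2.0):
            ix = sobel(img, axis=1)               # horizontal derivative
            iy = sobel(img, axis=0)               # vertical derivative
            j11 = gaussian_filter(ix * ix, sigma)
            j12 = gaussian_filter(ix * iy, sigma)
            j22 = gaussian_filter(iy * iy, sigma)
            return j11, j12, j22                  # per-pixel entries of the 2x2 tensor

        img = np.random.rand(64, 64)              # synthetic test image
        j11, j12, j22 = structure_tensor(img)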

    Shape and Topology Constrained Image Segmentation with Stochastic Models

    The central theme of this thesis has been to develop robust algorithms for the task of image segmentation. All segmentation techniques proposed in this thesis are based on sound modeling of the image formation process. This approach to image partitioning enables the derivation of objective functions that make all modeling assumptions explicit. Based on the Parametric Distributional Clustering (PDC) technique, improved variants have been derived which explicitly incorporate topological assumptions in the corresponding cost functions. The questions of robustness and generalizability of segmentation solutions have been addressed in an empirical manner, with comprehensive example sets for both problems. It has been shown that the PDC framework is indeed capable of producing highly robust image partitions. In the context of PDC-based segmentation, a probabilistic representation of shape has been constructed, and likelihood maps for given objects of interest have been derived from the PDC cost function. Interpreted as a prior for the segmentation task, the shape information has been combined with the likelihoods in a Bayesian setting. The resulting posterior probability for the occurrence of an object of a specified semantic category has been demonstrated to achieve excellent segmentation quality on very hard testbeds of images from the Corel gallery.
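    The Bayesian combination described above follows the standard pattern (notation assumed for illustration): with the PDC-derived likelihood p(I | c) of the image features I under an object class c, and the probabilistic shape model serving as the prior p(c), the posterior used for segmentation is

        p(c \mid I) \;\propto\; p(I \mid c)\, p(c) .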

    NONLINEAR OPERATORS FOR IMAGE PROCESSING: DESIGN, IMPLEMENTATION AND MODELING TECHNIQUES FOR POWER ESTIMATION

    In the past few years, multimedia applications have been growing very fast, finding use in a large variety of fields. Applications such as video conferencing, medical diagnostics, mobile telephony, and military systems require handling large amounts of data at high rates. Image and voice processing are therefore very important and have been the subject of considerable effort to find ever faster and more effective algorithms. Among image processing algorithms, we believe that rational operators play an important role, thanks to their versatility and effectiveness in data processing. In recent years several such algorithms have been proposed, demonstrating that these operators can be very suitable for different applications, with very good results.
The aim of this work is to implement some of these algorithms and thereby demonstrate that rational filters, in particular, can be implemented without requiring large systems and can operate at very high frequencies. Once the basic building block of a rational-operator-based system has been implemented, it can be successfully reused in many other applications. From the designer's point of view, it is important to have a general framework that makes it possible to study various configurations of the system to be implemented and to analyse the trade-offs among the design variables. In particular, to meet the need for versatile tools for power estimation, we developed a new macro-modelling technique which allows the designer to estimate the power dissipated by a circuit quickly and accurately. The thesis is organized as follows. In Chapter 1 we present some of the algorithms that have been studied for implementation; only a brief overview is given, with references to the literature for the interested reader. In Chapter 2 we discuss the basic architectures used for the implementations; pipelined structures have mainly been used, but an overview of the approaches available today for timing optimization is also given. In Chapter 3 we present two of the implementations designed for this thesis; the approaches followed are ASIC-driven and FPGA-driven, requiring different techniques and solutions for the design of the system, so it is interesting to see what can be done in both cases. Finally, in Chapter 4, we describe our macro-modelling technique for power estimation, giving a brief overview of the techniques proposed so far and showing the advantages our method brings to the design.
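    Macro-modelling for power estimation, in its generic form, fits a fast analytical model of dissipated power to a small set of circuit-level characterization runs, so that the cheap model can replace costly re-simulation during design exploration. Below is a minimal sketch of that generic idea only (synthetic data and a plain linear fit, purely illustrative; not the specific macro-model developed in this thesis):

        # Generic power macro-model sketch: fit average power as a function of
        # input signal statistics, then reuse the fitted model for fast estimates.
        # Features, coefficients, and data are illustrative assumptions only.
        import numpy as np

        rng = np.random.default_rng(0)
        # Characterization data: (input activity, transition density) -> measured power
        stats = rng.uniform(0.0, 1.0, size=(50, 2))
        power = 0.3 * stats[:, 0] + 1.2 * stats[:, 1] + 0.05 * rng.standard_normal(50)

        # Least-squares fit of a linear macro-model P ~ c0 + c1*activity + c2*density
        A = np.column_stack([np.ones(len(stats)), stats])
        coeffs, *_ = np.linalg.lstsq(A, power, rcond=None)

        def estimate_power(activity: float, density: float) -> float:
            """Fast power estimate from the fitted macro-model."""
            return float(coeffs @ np.array([1.0, activity, density]))

        print(estimate_power(0.5, 0.4))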

    Inférence bayésienne dans des problèmes inverses, myopes et aveugles en traitement du signal et des images (Bayesian Inference in Inverse, Myopic, and Blind Problems in Signal and Image Processing)

    The research activities presented concern the resolution of inverse, myopic, and blind problems encountered in signal and image processing. The preferred resolution methods rely on a Bayesian inference approach, which offers a generic framework for regularizing these generally ill-posed problems by exploiting the constraints inherent in the observation models. The parameters of interest are estimated using Monte Carlo algorithms that explore the space of admissible solutions. One of the application domains targeted by this work is hyperspectral imaging and, more specifically, spectral unmixing. The second piece of work presented concerns the reconstruction of sparse images acquired with an MRFM microscope.
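    As a generic illustration of the Monte Carlo exploration described above, here is a minimal random-walk Metropolis-Hastings sketch for a toy linear inverse problem with a Gaussian prior (illustrative assumptions throughout; the samplers and models used in these works are considerably more elaborate):

        # Random-walk Metropolis-Hastings for a toy inverse problem y = H x + noise.
        # Gaussian likelihood and Gaussian prior; purely illustrative.
        import numpy as np

        rng = np.random.default_rng(1)
        H = rng.standard_normal((20, 5))                       # known observation operator
        x_true = rng.standard_normal(5)
        y = H @ x_true + 0.1 * rng.standard_normal(20)         # noisy observations

        def log_posterior(x):
            log_lik = -0.5 * np.sum((y - H @ x) ** 2) / 0.1**2  # Gaussian likelihood
            log_prior = -0.5 * np.sum(x**2)                     # Gaussian prior
            return log_lik + log_prior

        x = np.zeros(5)
        samples = []
        for _ in range(5000):
            prop = x + 0.05 * rng.standard_normal(5)            # random-walk proposal
            if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(x):
                x = prop                                        # accept
            samples.append(x)
        print(np.mean(samples[1000:], axis=0))                  # posterior-mean estimate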