Robust Algorithms for Low-Rank and Sparse Matrix Models
Data in statistical signal processing problems is often inherently matrix-valued, and a natural first step in working with such data is to impose a model with structure that captures the distinctive features of the underlying data. Under the right model, one can design algorithms that can reliably tease weak signals out of highly corrupted data. In this thesis, we study two important classes of matrix structure: low-rankness and sparsity. In particular, we focus on robust principal component analysis (PCA) models that decompose data into the sum of low-rank and sparse (in an appropriate sense) components. Robust PCA models are popular because they describe practical data well and because efficient algorithms exist for solving them.
This thesis focuses on developing new robust PCA algorithms that advance the state-of-the-art in several key respects. First, we develop a theoretical understanding of the effect of outliers on PCA and the extent to which one can reliably reject outliers from corrupted data using thresholding schemes. We apply these insights and other recent results from low-rank matrix estimation to design robust PCA algorithms with improved low-rank models that are well-suited for processing highly corrupted data. On the sparse modeling front, we use sparse signal models like spatial continuity and dictionary learning to develop new methods with important adaptive representational capabilities. We also propose efficient algorithms for implementing our methods, including an extension of our dictionary learning algorithms to the online or sequential data setting. The underlying theme of our work is to combine ideas from low-rank and sparse modeling in novel ways to design robust algorithms that produce accurate reconstructions from highly undersampled or corrupted data. We consider a variety of application domains for our methods, including foreground-background separation, photometric stereo, and inverse problems such as video inpainting and dynamic magnetic resonance imaging.
PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies
https://deepblue.lib.umich.edu/bitstream/2027.42/143925/1/brimoor_1.pd
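The low-rank-plus-sparse decomposition described above can be illustrated with the classic principal component pursuit formulation, solved here by a minimal inexact augmented-Lagrangian iteration. This is a generic sketch of the standard technique, not the thesis's own algorithms; the penalty schedule and the default `lam` are common heuristics assumed for illustration.

```python
import numpy as np

def soft_threshold(X, tau):
    """Entrywise shrinkage: prox of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular-value shrinkage: prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * soft_threshold(s, tau)) @ Vt

def robust_pca(M, lam=None, n_iter=200):
    """Decompose M ~ L + S with L low-rank and S sparse
    (inexact augmented-Lagrangian sketch of principal component pursuit)."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = 1.25 / np.linalg.norm(M, 2)      # heuristic initial penalty
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                  # dual variable for M = L + S
    for _ in range(n_iter):
        L = svd_threshold(M - S + Y / mu, 1.0 / mu)
        S = soft_threshold(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)          # dual ascent on the constraint
        mu = min(mu * 1.05, 1e7)          # gradually tighten the penalty
    return L, S
```

The two prox steps alternate between shrinking singular values (promoting low rank in L) and shrinking entries (promoting sparsity in S), while the dual update enforces M = L + S.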
An introduction to continuous optimization for imaging
A large number of imaging problems reduce to the optimization of a cost function with typical structural properties. The aim of this paper is to describe the state of the art in continuous optimization methods for such problems, and to present the most successful approaches and their interconnections. We place particular emphasis on optimal first-order schemes that can deal with typical non-smooth and large-scale objective functions used in imaging problems. We illustrate and compare the different algorithms using classical non-smooth problems in imaging, such as denoising and deblurring. Moreover, we present applications of the algorithms to more advanced problems, such as magnetic resonance imaging, multilabel image segmentation, optical flow estimation, stereo matching, and classification.
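As a concrete instance of the non-smooth first-order schemes such a survey covers, a minimal proximal-gradient (ISTA) iteration for an l1-regularized least-squares problem might look like the following. The operator `A`, penalty `lam`, and step-size rule are illustrative assumptions, not the paper's specific experiments.

```python
import numpy as np

def ista(A, b, lam, n_iter=300):
    """Proximal gradient (ISTA) for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - b))   # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox of lam*||.||_1
    return x
```

Each iteration costs two matrix-vector products plus an entrywise shrinkage, which is what makes first-order schemes attractive for large-scale imaging objectives.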
Neuroinformatics in Functional Neuroimaging
This Ph.D. thesis proposes methods for information retrieval in functional neuroimaging through automatic computerized authority identification, and through searching and cleaning in a neuroscience database. Authorities are found through cocitation analysis of the citation pattern among scientific articles. Based on data from a single scientific journal, it is shown that multivariate analyses are able to determine group structure that is interpretable as particular "known" subgroups in functional neuroimaging. Methods for text analysis are suggested that use a combination of content and links, in the form of the terms in scientific documents and scientific citations, respectively. These include context-sensitive author ranking and automatic labeling of axes and groups in connection with multivariate analyses of link data. Talairach foci from the BrainMap™ database are modeled with conditional probability density models useful for exploratory functional volume modeling. A further application is shown with conditional outlier detection, where abnormal entries in the BrainMap™ database are spotted using kernel density modeling and the redundancy between anatomical labels and spatial Talairach coordinates. This represents a combination of simple term and spatial modeling. The specific outliers found in the BrainMap™ database included, among others, entry errors, errors in the articles, and unusual terminology.
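The kernel-density outlier detection described above can be illustrated with a minimal leave-one-out Gaussian kernel density scorer: points with low estimated density relative to the rest of the data are flagged. This is a generic sketch under an assumed Gaussian kernel and fixed bandwidth, not the thesis's actual BrainMap model.

```python
import numpy as np

def kde_scores(X, bandwidth=1.0):
    """Leave-one-out Gaussian kernel density estimate at each point.
    Low scores flag candidate outliers."""
    n, d = X.shape
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # pairwise squared distances
    K = np.exp(-sq / (2.0 * bandwidth ** 2))
    np.fill_diagonal(K, 0.0)           # leave each point out of its own estimate
    norm = (n - 1) * (np.sqrt(2.0 * np.pi) * bandwidth) ** d
    return K.sum(axis=1) / norm
```

Leaving each point out of its own estimate matters: otherwise every point contributes a kernel centered on itself and even gross outliers receive a non-negligible density.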
Advanced Methods and Algorithms for Computer-Aided Geodetic Data Analysis
Advancing digitalization, with its innovative technologies, places growing demands on business, society, and public administration. Digital data are considered a key resource and impose high requirements on data processing, such as high speed and reliability. Digital data with a spatial reference are of particular importance. In geodesy and geoinformatics, digital data originate from multi-sensor systems, satellite missions, smartphones, technical devices, computers, or from the databases of a wide variety of institutions and authorities. "Big Data" is the trend, and the enormous volumes of data must be used as broadly and effectively as possible and evaluated with the help of computer-aided tools, for example based on artificial intelligence. To statistically evaluate and analyze these large volumes of data, new models and algorithms must be continuously developed, tested, and validated. Algorithms have been making geodesists' lives easier for decades: they estimate, decide, select, and assess the analyses performed.
In geodetic-statistical data analysis, observations are used together with domain knowledge to develop a model for investigating and better understanding a data-generating process. Data analysis is used to refine the model, or possibly to select a different model, to determine suitable values for model terms, and to use the model to make statements about the process. Advances in statistics in recent years are not limited to theory but also include the development of novel computer-aided methods. Advances in computing power have enabled newer and more elaborate statistical methods, and a multitude of alternative representations of the data and of models can be investigated.
When certain statistical models are not mathematically tractable, approximation methods must be applied, which are often based on asymptotic inference. Advances in computing power and developments in theory have made computer-based inference a practicable and useful alternative to the standard methods of asymptotic inference in traditional statistics. Computer-based inference rests on the simulation of statistical models.
This habilitation thesis presents the results of the author's research activities in the field of statistical and simulation-based inference for geodetic data analysis, published during the author's time as a postdoctoral researcher at the Geodetic Institute of the Gottfried Wilhelm Leibniz Universität Hannover from 2009 to 2019. The research foci of this work concern the development of mathematical-statistical models, estimation procedures, and computer-aided algorithms for adjusting, both recursively and non-recursively, spatio-temporal and possibly incomplete data characterized by random, systematic, outlier-contaminated, and correlated measurement errors. Challenges lie in the accurate, reliable, and efficient estimation of the unknown model parameters, in the derivation of quality measures for the estimates, and in the statistical, simulation-based assessment of the estimation results. The research foci have found various applications in the fields of engineering geodesy and real-estate valuation.
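The simulation-based inference described above can be illustrated with a minimal parametric bootstrap: simulate the fitted model many times and read confidence limits off the quantiles of the re-estimated statistic. The normal model and the mean as the target parameter are illustrative assumptions, not the thesis's actual geodetic models.

```python
import numpy as np

def parametric_bootstrap_ci(data, n_boot=2000, alpha=0.05, seed=0):
    """Parametric bootstrap confidence interval for the mean under a
    fitted normal model (simulation-based alternative to asymptotic inference)."""
    rng = np.random.default_rng(seed)
    mu, sigma = data.mean(), data.std(ddof=1)       # fit the assumed model
    stats = np.array([rng.normal(mu, sigma, data.size).mean()
                      for _ in range(n_boot)])      # re-estimate on simulated data
    return np.quantile(stats, [alpha / 2.0, 1.0 - alpha / 2.0])
```

The same simulate-and-re-estimate loop extends to estimators with no tractable asymptotic distribution, which is precisely where computer-based inference replaces the standard asymptotic methods.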
Local Deformation Modelling for Non-Rigid Structure from Motion
PhD
Reconstructing the 3D geometry of scenes based on monocular image sequences is
a long-standing problem in computer vision. Structure from motion (SfM) aims at a
data-driven approach without requiring a priori models of the scene. When the scene is
rigid, SfM is a well understood problem with solutions widely used in industry. However,
if the scene is non-rigid, monocular reconstruction without additional information
is an ill-posed problem and no satisfactory solution has yet been found.
Current non-rigid SfM (NRSfM) methods typically aim at modelling deformable
motion globally, and most focus on cases where deformable motion is seen as
small variations from a mean shape. As a result, these methods fail at
reconstructing highly deformable objects such as a flag waving in the wind.
Moreover, reconstructions typically consist of low-detail, sparse point-cloud
representations of objects.
In this thesis we aim at reconstructing highly deformable surfaces by modelling
them locally. In line with a recent trend in NRSfM, we propose a piecewise approach
which reconstructs local overlapping regions independently. These reconstructions are
merged into a global object by imposing 3D consistency of the overlapping regions.
We propose our own local model, the Quadratic Deformation model, and show
how patch division and reconstruction can be formulated in a principled approach by
alternately minimizing a single geometric cost: the image re-projection error of
the reconstruction. Moreover, we extend our approach to dense NRSfM, where
reconstructions are performed at the pixel level, improving the detail of
state-of-the-art reconstructions.
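The alternating minimization of a single re-projection cost can be sketched, in a drastically simplified rigid-factorization form, by alternating least-squares updates of cameras and shape. This generic sketch omits the thesis's quadratic deformation model, patch division, and merging, and assumes exactly factorizable tracks.

```python
import numpy as np

def alternate_min(W, n_iter=50, seed=0):
    """Alternating least squares for W ~ R @ S: fix the shape S and solve for
    the cameras R, then fix R and solve for S, each step decreasing the
    re-projection error ||W - R S||."""
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((3, W.shape[1]))   # random 3D shape initialization
    for _ in range(n_iter):
        R = W @ np.linalg.pinv(S)              # least-squares camera update
        S = np.linalg.pinv(R) @ W              # least-squares shape update
    err = np.linalg.norm(W - R @ S)
    return R, S, err
```

Because each half-step is an exact least-squares solve, the shared cost never increases, which is the property that makes alternation over a single geometric cost a principled strategy.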
Finally, we show how our principled approach can be used to perform simultaneous
segmentation and reconstruction of articulated motion, recovering meaningful
segments which provide a coarse 3D skeleton of the object.
Fundação para a Ciência e a Tecnologia (FCT) under Doctoral Grant SFRH/BD/70312/2010; European Research Council under ERC Starting Grant agreement 204871-HUMANI