Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now-standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. Popular examples of such
priors include sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low rank (as a natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop for understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum, including: (i) recovery
guarantees and stability to noise, both in terms of ℓ2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
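As a concrete illustration, the sparsity instance of this forward-backward scheme is the classic iterative soft-thresholding algorithm. The following is a minimal sketch, not the chapter's own code; the operator A, the weight lam, and the iteration count are illustrative.

```python
import numpy as np

def forward_backward_lasso(A, y, lam, n_iter=500):
    """Forward-backward splitting for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    Soft-thresholding is the proximal map of the l1 norm, the canonical
    low-complexity (sparsity) regularizer discussed in the chapter.
    """
    # A step size below 1/L, with L the Lipschitz constant of the
    # gradient of the data-fidelity term, guarantees convergence.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)        # forward (explicit gradient) step
        z = x - step * grad
        # backward (implicit proximal) step: soft-thresholding
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x
```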
Applied Harmonic Analysis and Sparse Approximation
Efficiently analyzing functions, in particular multivariate functions, is a key problem in applied mathematics. The area of applied harmonic analysis has a significant impact on this problem by providing methodologies both for theoretical questions and for a wide range of applications in technology and science, such as image processing. Approximation theory, in particular the branch of the theory of sparse approximations, is closely intertwined with this area, with many recent exciting developments at the intersection of the two. Research topics typically also involve related areas such as convex optimization, probability theory, and Banach space geometry. The workshop was the continuation of a first event in 2012 and was intended to bring together world-leading experts in these areas, to report on recent developments, and to foster new developments and collaborations.
Screened Poisson hyperfields for shape coding
We present a novel perspective on shape characterization using the screened Poisson equation. We discuss that the effect of the screening parameter is a change of measure of the underlying metric space. Screening also indicates a conditioned random walker biased by the choice of measure. A continuum of shape fields is created by varying the screening parameter or, equivalently, the bias of the random walker. In addition to creating a regional encoding of the diffusion with a different bias, we further break down the influence of boundary interactions by considering a number of independent random walks, each emanating from a certain boundary point, whose superposition yields the screened Poisson field. Probing the screened Poisson equation from these two complementary perspectives leads to a high-dimensional hyperfield: a rich characterization of the shape that encodes global, local, interior, and boundary interactions. To extract particular shape information as needed in a compact way from the hyperfield, we apply various decompositions either to unveil parts of a shape or parts of a boundary or to create consistent mappings. The latter technique involves lower-dimensional embeddings, which we call screened Poisson encoding maps (SPEM). The expressive power of the SPEM is demonstrated via illustrative experiments as well as a quantitative shape retrieval experiment over a public benchmark database, on which the SPEM method shows high-ranking performance among existing state-of-the-art shape retrieval methods.
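For intuition, below is a minimal finite-difference sketch of one member of such a family of fields, assuming the screened form (Laplacian - lam) u = -1 inside the shape with zero values outside; the Jacobi iteration, the grid discretization, and the value of lam are illustrative choices, not the authors' implementation.

```python
import numpy as np

def screened_poisson_field(mask, lam=0.05, n_iter=2000):
    """Jacobi iteration for (Laplacian - lam) u = -1 on a binary shape
    mask (1 inside, 0 outside), with u = 0 outside the shape.

    Varying lam changes the screening, i.e. the bias of the conditioned
    random walker, producing a continuum of shape fields.
    """
    u = np.zeros_like(mask, dtype=float)
    for _ in range(n_iter):
        # Sum of the 4 neighbours (5-point Laplacian stencil).
        nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
              np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = mask * (nb + 1.0) / (4.0 + lam)  # screened update, zero outside
    return u
```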
Applications of Multi-view Learning Approaches for Software Comprehension
Program comprehension concerns the ability of an individual to develop an
understanding of an existing software system in order to extend or transform it.
Software systems comprise data that are noisy and incomplete, which makes
program understanding even more difficult. A software system consists of
various views, including the module dependency graph, execution logs,
evolutionary information, and the vocabulary used in the source code, which
collectively define the software system. Each of these views contains unique
and complementary information; together, they can more accurately describe the
data. In this paper, we investigate various techniques for combining different
sources of information to improve the performance of a program comprehension
task. We employ state-of-the-art machine learning techniques to (1) find a suitable
similarity function for each view, and (2) compare different multi-view learning
techniques to decompose a software system into high-level units and give
component-level recommendations for refactoring of the system, as well as
cross-view source code search. The experiments conducted on 10 relatively large
Java software systems show that by fusing knowledge from different views, we
can guarantee a lower bound on the quality of the modularization and even
improve upon it. We proceed by integrating different sources of information to
give a set of high-level recommendations as to how to refactor the software
system. Furthermore, we demonstrate how learning a joint subspace allows for
performing cross-modal retrieval across views, yielding results that are more
aligned with what the user intends by the query. The multi-view approaches
outlined in this paper can be employed for addressing problems in software
engineering that can be encoded in terms of a learning problem, such as
software bug prediction and feature location.
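As a rough sketch of the fusion idea (not the paper's pipeline; the per-view features, cosine similarity, and spectral clustering are stand-in choices), one can average per-view affinities and cluster the fused graph into high-level units:

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity

def fuse_and_cluster(views, n_units=8):
    """Late fusion of views into a decomposition of the system.

    views: list of (n_entities x d_v) feature matrices, one per view
    (e.g. dependency, lexical, evolutionary features); all names and
    parameters here are illustrative.
    """
    fused = sum(cosine_similarity(v) for v in views) / len(views)
    fused = np.clip(fused, 0.0, None)  # keep affinities non-negative
    model = SpectralClustering(n_clusters=n_units, affinity="precomputed")
    return model.fit_predict(fused)    # cluster label per entity
```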
Natural Image Statistics for Digital Image Forensics
We describe a set of natural image statistics that are built upon two multi-scale image decompositions: the quadrature mirror filter pyramid decomposition and the local angular harmonic decomposition. These image statistics consist of first- and higher-order statistics that capture certain statistical regularities of natural images. We propose to apply these image statistics, together with classification techniques, to three problems in digital image forensics: (1) differentiating photographic images from computer-generated photorealistic images, (2) generic steganalysis, and (3) rebroadcast image detection. We also apply these image statistics to traditional art authentication, for forgery detection and identification of artists in an artwork. For each application we show the effectiveness of these image statistics and analyze their sensitivity and robustness.
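A minimal sketch of the first-order part of such features follows, with PyWavelets' db4 filters standing in for the quadrature mirror filters and the higher-order (cross-subband prediction error) statistics omitted:

```python
import numpy as np
import pywt

def subband_statistics(image, wavelet="db4", levels=3):
    """Mean, variance, skewness and kurtosis of each detail subband of a
    multi-scale decomposition; a stand-in for the QMF-pyramid features."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    feats = []
    for detail in coeffs[1:]:            # (horizontal, vertical, diagonal)
        for band in detail:
            b = band.ravel()
            m, s = b.mean(), b.std() + 1e-12
            z = (b - m) / s
            feats += [m, s ** 2, (z ** 3).mean(), (z ** 4).mean()]
    return np.asarray(feats)             # feed to any classifier
```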
Learning Laplacian Matrix in Smooth Graph Signal Representations
The construction of a meaningful graph plays a crucial role in the success of
many graph-based representations and algorithms for handling structured data,
especially in the emerging field of graph signal processing. However, a
meaningful graph is not always readily available from the data, nor easy to
define depending on the application domain. In particular, it is often
desirable in graph signal processing applications that a graph is chosen such
that the data admit certain regularity or smoothness on the graph. In this
paper, we address the problem of learning graph Laplacians, which is equivalent
to learning graph topologies, such that the input data form graph signals with
smooth variations on the resulting topology. To this end, we adopt a factor
analysis model for the graph signals and impose a Gaussian probabilistic prior
on the latent variables that control these signals. We show that the Gaussian
prior leads to an efficient representation that favors the smoothness property
of the graph signals. We then propose an algorithm for learning graphs that
enforces such property and is based on minimizing the variations of the signals
on the learned graph. Experiments on both synthetic and real-world data
demonstrate that the proposed graph learning framework can efficiently infer
meaningful graph topologies from signal observations under the smoothness
prior.
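The smoothness notion at the heart of the paper is the Laplacian quadratic form; a few lines make it concrete (the constraint set named in the docstring is an assumption of this sketch, phrased after the abstract's description of the learning problem):

```python
import numpy as np

def graph_smoothness(X, L):
    """tr(X^T L X) = 0.5 * sum_ij W_ij (x_i - x_j)^2 for graph signals in
    the columns of X: small when strongly connected nodes carry similar
    values. The paper learns L by minimizing this quantity (plus
    regularization) over valid graph Laplacians, e.g. matrices with zero
    row sums, non-positive off-diagonal entries, and a fixed trace.
    """
    return np.trace(X.T @ L @ X)
```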
Matrix and Tensor-based ESPRIT Algorithm for Joint Angle and Delay Estimation in 2D Active Broadband Massive MIMO Systems and Analysis of Direction of Arrival Estimation Algorithms for Basal Ice Sheet Tomography
In this thesis, we apply and analyze three direction of arrival (DoA) algorithms to tackle two distinct problems: one belongs to wireless communication, the other to radar signal processing. Though the essence of both problems is DoA estimation, their formulation, underlying assumptions, and application scenarios are totally different. Hence, we treat them separately, with the ESPRIT algorithm the focus of Part I and MUSIC and MLE detailed in Part II.

For the wireless communication scenario, mobile data traffic is expected to grow exponentially in the future. In order to meet this challenge as well as the form-factor limitation on the base station, 2D "massive MIMO" has been proposed as one of the enabling technologies to significantly increase the spectral efficiency of a wireless system. In "massive MIMO" systems, a base station relies on the uplink sounding signals from mobile stations to extract the spatial information needed to perform MIMO beamforming. Accordingly, multi-dimensional parameter estimation of a ray-based multi-path wireless channel becomes crucial for such systems to realize the predicted capacity gains. In Part I, we study joint angle and delay estimation for 2D "massive MIMO" systems in mobile wireless communications. Specifically, we first introduce a low-complexity time delay and 2D DoA estimation algorithm based on a unitary transformation, along with some closed-form results and a capacity analysis. Furthermore, matrix and tensor-based 3D ESPRIT-like algorithms are applied to jointly estimate angles and delay, yielding significant performance improvements in our communication scheme. Finally, we find that azimuth estimation is more vulnerable than elevation estimation, and the results suggest that the dimension of the antenna array at the base station plays an important role in determining the estimation performance. These insights will be useful for designing practical "massive MIMO" systems in future mobile wireless communications.

For the problem of radar remote sensing of ice sheet topography, one of the key requirements for deriving more realistic ice sheet models is to obtain a good set of basal measurements that enables accurate estimation of bed roughness and conditions. For this purpose, 3D tomography of the ice bed has been successfully implemented with the help of DoA algorithms such as MUSIC and MLE. These methods have enabled fine resolution in the cross-track dimension using synthetic aperture radar (SAR) images obtained from single-pass multichannel data. In Part II, we analyze and compare the results obtained from the spectral MUSIC algorithm and an alternating projection (AP) based MLE technique. While the MUSIC algorithm is computationally more attractive than MLE, the performance of the latter is known to be superior in most situations. The SAR-focused datasets provide a good case study for exploring the performance of these two techniques in ice sheet bed elevation estimation. For the antenna array geometry and sample support used in our tomographic application, MUSIC initially performs better in a cross-over analysis, where the estimated topography from crossing flightlines is compared for consistency. However, after several improvements to MLE (replacing ideal steering vector generation with measured steering vectors, automatically determining the number of scatter sources, smoothing the 3D tomography to obtain a more accurate height estimate, and introducing a quality metric for the estimated signals), MLE outperforms MUSIC. This confirms that MLE is indeed the optimal estimator for our particular ice bed tomographic application. We observe that the spatial bottom smoothing, which aims to remove artifacts produced by the MLE algorithm, is the most essential step in the post-processing procedure. The 3D tomography we obtained lays a good foundation for further analysis and modeling of ice sheets.
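For reference, a minimal spectral-MUSIC sketch for a uniform linear array is given below; the array geometry, half-wavelength spacing, and scan grid are illustrative (the thesis's improved MLE uses measured rather than ideal steering vectors):

```python
import numpy as np

def music_spectrum(R, n_sources, angles_deg, d=0.5):
    """MUSIC pseudospectrum from an M x M array covariance R: project
    candidate steering vectors onto the noise subspace; peaks over the
    angle grid indicate directions of arrival. d is the element
    spacing in wavelengths."""
    M = R.shape[0]
    _, eigvecs = np.linalg.eigh(R)      # eigenvalues in ascending order
    En = eigvecs[:, :M - n_sources]     # noise subspace (smallest M-K)
    n = np.arange(M)
    p = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d * n * np.sin(theta))  # ideal steering vector
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.asarray(p)
```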
Spatial and temporal background modelling of non-stationary visual scenes
The prevalence of electronic imaging systems in everyday life has become increasingly apparent
in recent years. Applications are to be found in medical scanning, automated manufacture, and
perhaps most significantly, surveillance. Metropolitan areas, shopping malls, and road traffic
management all employ and benefit from an unprecedented quantity of video cameras for monitoring
purposes. But the high cost and limited effectiveness of employing humans as the final
link in the monitoring chain have driven scientists to seek solutions based on machine vision techniques.
Whilst the field of machine vision has enjoyed consistent rapid development in the last
20 years, some of the most fundamental issues still remain to be solved in a satisfactory manner.
Central to a great many vision applications is the concept of segmentation, and in particular,
most practical systems perform background subtraction as one of the first stages of video
processing. This involves separation of ‘interesting foreground’ from the less informative but
persistent background. But the definition of what is ‘interesting’ is somewhat subjective, and
liable to be application specific. Furthermore, the background may be interpreted as including
the visual appearance of normal activity of any agents present in the scene, human or otherwise.
Thus a background model might be called upon to absorb lighting changes, moving trees and
foliage, or normal traffic flow and pedestrian activity, in order to effect what might be termed in
‘biologically-inspired’ vision as pre-attentive selection. This challenge is one of the Holy Grails
of the computer vision field, and consequently the subject has received considerable attention.
This thesis sets out to address some of the limitations of contemporary methods of background
segmentation by investigating methods of inducing local mutual support amongst pixels
in three starkly contrasting paradigms: (1) locality in the spatial domain, (2) locality in the
short-term time domain, and (3) locality in the domain of cyclic repetition frequency.
Conventional per pixel models, such as those based on Gaussian Mixture Models, offer no
spatial support between adjacent pixels at all. At the other extreme, eigenspace models impose
a structure in which every image pixel bears the same relation to every other pixel. But Markov
Random Fields permit definition of arbitrary local cliques by construction of a suitable graph, and
are used here to facilitate a novel structure capable of exploiting probabilistic local co-occurrence
of adjacent Local Binary Patterns. The result is a method exhibiting strong sensitivity to multiple
learned local pattern hypotheses, whilst relying solely on monochrome image data.
Many background models enforce temporal consistency constraints on a pixel in an attempt to
confirm background membership before it is accepted as part of the model, and typically some
control over this process is exercised by a learning rate parameter. But in busy scenes, a true
background pixel may be visible for a relatively small fraction of the time and in a temporally
fragmented fashion, thus hindering such background acquisition. However, support in terms of
temporal locality may still be achieved by using Combinatorial Optimization to derive
short-term background estimates which induce a similar consistency, but are considerably more robust
to disturbance. A novel technique is presented here in which the short-term estimates act as
‘pre-filtered’ data from which a far more compact eigen-background may be constructed.
Many scenes entail elements exhibiting repetitive periodic behaviour. Some road junctions
employing traffic signals are among these, yet little is to be found amongst the literature regarding
the explicit modelling of such periodic processes in a scene. Previous work focussing on gait
recognition has demonstrated approaches based on recurrence of self-similarity by which local
periodicity may be identified. The present work harnesses and extends this method in order
to characterize scenes displaying multiple distinct periodicities by building a spatio-temporal
model. The model may then be used to highlight abnormality in scene activity. Furthermore, a
Phase Locked Loop technique with a novel phase detector is detailed, enabling such a model to
maintain correct synchronization with scene activity in spite of noise and drift of periodicity.
This thesis contends that these three approaches are all manifestations of the same broad
underlying concept: local support in each of the space, time and frequency domains, and furthermore,
that the support can be harnessed practically, as will be demonstrated experimentally.
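For context, the per-pixel Gaussian-mixture baseline that the thesis takes as its point of departure can be run in a few lines with OpenCV; the file name and parameter values below are illustrative:

```python
import cv2

cap = cv2.VideoCapture("scene.avi")
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500,         # temporal horizon for background acquisition
    varThreshold=16,     # squared Mahalanobis gate for foreground
    detectShadows=False)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # 255 marks 'interesting foreground'
    # Note: each pixel is modelled independently; no spatial support.
cap.release()
```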
Stable Feature Selection for Biomarker Discovery
Feature selection techniques have been used as the workhorse in biomarker
discovery applications for a long time. Surprisingly, the stability of feature
selection with respect to sampling variations has long been under-considered.
It is only recently that this issue has received more and more attention.
In this article, we review existing stable feature selection methods for
biomarker discovery using a generic hierarchical framework. We have two
objectives: (1) providing an overview of this new yet fast-growing topic for
convenient reference; (2) categorizing existing methods under an expandable
framework for future research and development.
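One concrete way to quantify the stability under sampling variation that the article surveys is the average pairwise overlap of feature subsets selected on bootstrap resamples; a sketch follows (SelectKBest is a placeholder for whatever selector is under study, and all parameters are illustrative):

```python
import numpy as np
from itertools import combinations
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.utils import resample

def selection_stability(X, y, k=20, n_boot=30, seed=0):
    """Mean pairwise Jaccard overlap of the feature subsets chosen on
    bootstrap resamples of (X, y); 1.0 means perfectly stable selection."""
    subsets = []
    for i in range(n_boot):
        Xb, yb = resample(X, y, random_state=seed + i)
        sel = SelectKBest(f_classif, k=k).fit(Xb, yb)
        subsets.append(set(np.flatnonzero(sel.get_support())))
    return float(np.mean([len(a & b) / len(a | b)
                          for a, b in combinations(subsets, 2)]))
```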