Autoregressive process parameters estimation from Compressed Sensing measurements and Bayesian dictionary learning
The main contribution of this thesis is the introduction of new techniques that allow signal processing operations to be performed on signals represented by means of compressed sensing. Exploiting autoregressive modeling of the original signal, we obtain a compact yet representative description of the signal which can be estimated directly in the compressed domain. This is the key concept on which the applications we introduce rely.
In fact, thanks to the proposed framework it is possible to gain information about the original signal given compressed sensing measurements. This is done by means of autoregressive modeling, which describes a signal through a small number of parameters. We develop a method to estimate these parameters from the compressed measurements by using an ad-hoc sensing matrix design and two different coupled estimators that can be used in different scenarios. This enables centralized and distributed estimation of the covariance matrix of a process given the compressed sensing measurements in an efficient way and at low communication cost.
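The abstract does not reproduce the compressed-domain estimator itself, but as a point of reference, in the original (uncompressed) domain the classical way to describe a signal through a small number of AR parameters is the Yule-Walker method, which solves a Toeplitz system built from sample autocovariances. A minimal sketch (the AR(1) signal and model order below are illustrative, not from the thesis):

```python
import numpy as np

def yule_walker(x, p):
    """Estimate AR(p) coefficients from sample autocovariances."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased autocovariance estimates r[0..p]
    r = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])
    # Solve the Toeplitz system R a = r[1:]
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

# Synthetic AR(1) process: x[t] = 0.8 x[t-1] + white noise
rng = np.random.default_rng(0)
x = np.zeros(10_000)
for t in range(1, len(x)):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()
a = yule_walker(x, 1)   # close to [0.8]
```

The small parameter vector `a` is exactly the kind of compact description the thesis then recovers directly from compressed measurements instead of from `x`.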
Next, we use the characterization of the original signal by means of a few autoregressive parameters to improve compressive imaging. In particular, we use these parameters as a proxy for the complexity of a block of a given image. This allows us to introduce a novel compressive imaging system in which the number of allocated measurements is adapted to each block depending on its complexity, i.e., its spatial smoothness. The result is that a careful allocation of the measurements improves the recovery process, reaching higher recovery quality at the same compression ratio in comparison to state-of-the-art compressive image recovery techniques.
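The abstract does not give the allocation rule, so the following is only a generic sketch of the idea: split a total measurement budget across blocks in proportion to a per-block complexity score (in the thesis, derived from the estimated AR parameters), with a small floor so even smooth blocks get sampled. The function name, floor value, and scores are all illustrative:

```python
import numpy as np

def allocate_measurements(complexities, total_m, m_min=4):
    """Split a measurement budget across blocks, proportional to complexity."""
    c = np.asarray(complexities, dtype=float)
    # Guarantee m_min per block, spread the remainder by complexity share
    m = m_min + (total_m - m_min * len(c)) * c / c.sum()
    return np.floor(m).astype(int)

# Three blocks: one smooth, two textured; budget of 100 measurements
m = allocate_measurements([0.1, 0.5, 0.4], total_m=100)
```

Flooring keeps the allocation within budget; a real system would also cap each block at the block's dimension.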
Interestingly, the parameters we are able to estimate directly in the compressed domain not only improve the recovery but can also be used as feature vectors for classification. In fact, we also propose to use these parameters as more general feature vectors that make classification possible in the compressed domain. Remarkably, this method reaches high classification performance, comparable with that obtained in the original domain, but at a lower cost in terms of dataset storage.
In the second part of this work, we focus on sparse representations, since a better sparsifying dictionary can improve Compressed Sensing recovery performance. At first, we work in the original domain, so no dimensionality reduction by means of Compressed Sensing is considered. In particular, we develop a Bayesian technique which performs dictionary learning in a fully automated fashion. In more detail, by exploiting the uncertainties arising from atom selection in the sparse representation step, this technique outperforms state-of-the-art dictionary learning techniques. We then also address image denoising and inpainting tasks using the aforementioned technique, with excellent results.
Next, we move to the compressed domain, where a better dictionary is expected to provide improved recovery. We show how the Bayesian dictionary learning model can be adapted to the compressive case and which assumptions must be made when considering random projections. Lastly, numerical experiments confirm the superiority of this technique when compared to other compressive dictionary learning techniques.
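The thesis's Bayesian scheme is more elaborate than any short sketch, but classical dictionary learning alternates two steps: sparse-code the training data with the current dictionary, then refit the dictionary. A sketch of the least-squares dictionary update (the MOD step; the random "codes" below stand in for the output of a real sparse coding step and are purely illustrative):

```python
import numpy as np

def mod_update(Y, X):
    """MOD dictionary update: D = argmin ||Y - D X||_F^2, atoms renormalized."""
    D = Y @ X.T @ np.linalg.pinv(X @ X.T)
    return D / np.linalg.norm(D, axis=0, keepdims=True)

rng = np.random.default_rng(0)
Y = rng.standard_normal((8, 100))   # training signals as columns
X = rng.standard_normal((5, 100))   # current (here: dummy) sparse codes
D = mod_update(Y, X)                # 8x5 dictionary with unit-norm atoms
```

In the compressive setting described above, the same update has to be reformulated because only random projections of `Y` are observed, which is where the extra assumptions come in.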
Discrete Wavelet Transforms
The discrete wavelet transform (DWT) has a firm position in signal processing across several areas of research and industry. As the DWT provides both octave-scale frequency and spatial timing of the analyzed signal, it is constantly used to solve increasingly advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in discrete wavelet transform algorithms and applications. The book covers a wide range of methods (e.g. lifting, shift invariance, multi-scale analysis) for constructing DWTs. The book chapters are organized into four major parts. Part I describes progress in hardware implementations of DWT algorithms. Applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image processing algorithms such as a multiresolution approach for edge detection, low-bit-rate image compression, low-complexity implementation of CQF wavelets, and compression of multi-component images. Part III focuses on watermarking DWT algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise, and an application of the wavelet Galerkin method. The chapters of the present book consist of both tutorial and highly advanced material. Therefore, the book is intended as a reference text for graduate students and researchers seeking state-of-the-art knowledge on specific applications.
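As a concrete instance of the octave-scale split the book builds on, a single level of the Haar DWT (the simplest wavelet) maps a signal to half-rate approximation and detail coefficients. This sketch is illustrative and not taken from the book:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    s = (x[0::2] + x[1::2]) / np.sqrt(2)  # low-pass: octave-scale averages
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-pass: local detail
    return s, d

s, d = haar_dwt([4.0, 4.0, 2.0, 0.0])
# The constant pair (4, 4) yields zero detail; the edge (2, 0) does not.
```

Because the transform is orthonormal, signal energy is preserved across the two bands, which is what makes coefficient thresholding and bit allocation (as in SPIHT-style codecs) well behaved.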
Fatigue reliability of ship structures
Today we are sitting on a huge wealth of structural reliability theory, but its application in ship design and construction lags far behind. Researchers and practitioners face the daunting task of dovetailing the theoretical achievements into the established processes of the industry. This research aims to create a computational framework to facilitate fatigue reliability analysis of ship structures. Modeling, transformation and optimization, the three key elements underlying the success of computational mechanics, are adopted as the basic methodology throughout the research. The whole work is presented in a way that is most suitable for software development.
The foundation of the framework consists of reliability methods at the component level. Looking at second-moment reliability theory from a minimum-distance point of view, the author derives a generic set of formulations that incorporates all major first- and second-order reliability methods (FORM, SORM). Practical ways to treat correlation and non-Gaussian variables are discussed in detail. Monte Carlo simulation (MCS) also accounts for a significant part of the research, with emphasis on variance reduction techniques in a proposed Markov chain kernel method. Existing response surface methods (RSM) are reviewed and improved, with much weight given to sampling techniques and determination of the quadratic form. The time-variant problem is touched upon, and methods to convert it to nested reliability problems are discussed.
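To make the minimum-distance view concrete: in standard normal space, the reliability index β is the distance from the origin to the limit-state surface, and FORM approximates the failure probability as Φ(−β). For a linear limit state FORM is exact, which a crude Monte Carlo run can confirm (the limit state and sample size below are illustrative):

```python
import numpy as np
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Linear limit state in standard normal space: g(u) = beta - u,
# failure when g(u) <= 0.  Here FORM is exact: pf = Phi(-beta).
beta = 2.0
pf_form = phi(-beta)

# Crude Monte Carlo estimate of the same failure probability
rng = np.random.default_rng(1)
u = rng.standard_normal(200_000)
pf_mcs = np.mean(beta - u <= 0)
```

For curved limit states the two estimates diverge, which is exactly where SORM corrections and the variance-reduction techniques mentioned above earn their keep.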
In the upper layer of the framework, common fatigue damage models are compared. Random process simulation and rain-flow counting are used to study the effect of wide-banded non-Gaussian processes. At the center of this layer is spectral fatigue analysis based on the S-N curve and first-principle stress and hydrodynamic analysis. Pseudo-excitation is introduced to obtain a linear equivalent stress RAO in the non-linear ship-wave system. Finally, the response surface method is applied to this model to calculate the probability of failure and design sensitivity in case studies of a double hull oil tanker and a bulk carrier.
Spatial and temporal background modelling of non-stationary visual scenes
The prevalence of electronic imaging systems in everyday life has become increasingly apparent
in recent years. Applications are to be found in medical scanning, automated manufacture, and
perhaps most significantly, surveillance. Metropolitan areas, shopping malls, and road traffic
management all employ and benefit from an unprecedented quantity of video cameras for monitoring
purposes. But the high cost and limited effectiveness of employing humans as the final
link in the monitoring chain has driven scientists to seek solutions based on machine vision techniques.
Whilst the field of machine vision has enjoyed consistent rapid development in the last
20 years, some of the most fundamental issues still remain to be solved in a satisfactory manner.
Central to a great many vision applications is the concept of segmentation, and in particular,
most practical systems perform background subtraction as one of the first stages of video
processing. This involves separation of ‘interesting foreground’ from the less informative but
persistent background. But the definition of what is ‘interesting’ is somewhat subjective, and
liable to be application specific. Furthermore, the background may be interpreted as including
the visual appearance of normal activity of any agents present in the scene, human or otherwise.
Thus a background model might be called upon to absorb lighting changes, moving trees and
foliage, or normal traffic flow and pedestrian activity, in order to effect what might be termed in
‘biologically-inspired’ vision as pre-attentive selection. This challenge is one of the Holy Grails
of the computer vision field, and consequently the subject has received considerable attention.
This thesis sets out to address some of the limitations of contemporary methods of background
segmentation by investigating methods of inducing local mutual support amongst pixels
in three starkly contrasting paradigms: (1) locality in the spatial domain, (2) locality in the short-term time domain, and (3) locality in the domain of cyclic repetition frequency.
Conventional per pixel models, such as those based on Gaussian Mixture Models, offer no
spatial support between adjacent pixels at all. At the other extreme, eigenspace models impose
a structure in which every image pixel bears the same relation to every other pixel. But Markov
Random Fields permit definition of arbitrary local cliques by construction of a suitable graph, and
are used here to facilitate a novel structure capable of exploiting probabilistic local co-occurrence
of adjacent Local Binary Patterns. The result is a method exhibiting strong sensitivity to multiple
learned local pattern hypotheses, whilst relying solely on monochrome image data.
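For reference, the conventional per-pixel baseline discussed above can be sketched as a running Gaussian per pixel, a one-component version of the Gaussian Mixture Model; the learning rate, threshold, and frame values below are illustrative:

```python
import numpy as np

def update_background(mean, var, frame, alpha=0.05, k=2.5):
    """Per-pixel running Gaussian: flag foreground, then blend the model."""
    fg = np.abs(frame - mean) > k * np.sqrt(var)            # foreground mask
    mean = (1 - alpha) * mean + alpha * frame               # running mean
    var = (1 - alpha) * var + alpha * (frame - mean) ** 2   # running variance
    return mean, np.maximum(var, 1e-6), fg

mean = np.full((4, 4), 100.0)               # learned background intensity
var = np.full((4, 4), 4.0)
frame = mean.copy()
frame[0, 0] = 180.0                          # one bright "foreground" pixel
mean, var, fg = update_background(mean, var, frame)
```

Note that every pixel is treated independently here, which is precisely the lack of spatial support the MRF-based structure above is designed to remedy.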
Many background models enforce temporal consistency constraints on a pixel in an attempt to confirm background membership before it is accepted as part of the model, and typically some
control over this process is exercised by a learning rate parameter. But in busy scenes, a true
background pixel may be visible for a relatively small fraction of the time and in a temporally
fragmented fashion, thus hindering such background acquisition. However, support in terms of
temporal locality may still be achieved by using Combinatorial Optimization to derive short-term background estimates which induce a similar consistency, but are considerably more robust
to disturbance. A novel technique is presented here in which the short-term estimates act as
‘pre-filtered’ data from which a far more compact eigen-background may be constructed.
Many scenes entail elements exhibiting repetitive periodic behaviour. Some road junctions
employing traffic signals are among these, yet little is to be found amongst the literature regarding
the explicit modelling of such periodic processes in a scene. Previous work focussing on gait
recognition has demonstrated approaches based on recurrence of self-similarity by which local
periodicity may be identified. The present work harnesses and extends this method in order
to characterize scenes displaying multiple distinct periodicities by building a spatio-temporal
model. The model may then be used to highlight abnormality in scene activity. Furthermore, a
Phase Locked Loop technique with a novel phase detector is detailed, enabling such a model to
maintain correct synchronization with scene activity in spite of noise and drift of periodicity.
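The recurrence-of-self-similarity idea can be illustrated with a simple autocorrelation-based period estimate on a single pixel's intensity trace; the signal below (an event every 25 frames, loosely mimicking a traffic-signal cycle) and the lag floor are illustrative:

```python
import numpy as np

def dominant_period(signal, min_lag=2):
    """Estimate cycle length from the strongest autocorrelation peak."""
    x = np.asarray(signal, dtype=float) - np.mean(signal)
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]   # lags 0..n-1
    return min_lag + int(np.argmax(ac[min_lag:]))       # skip the lag-0 peak

t = np.arange(200)
x = (t % 25 == 0).astype(float)   # a scene event once every 25 frames
p = dominant_period(x)            # recovers the 25-frame cycle
```

A per-pixel model of this kind yields the phase and period estimates that a Phase Locked Loop can then track against noise and drift.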
This thesis contends that these three approaches are all manifestations of the same broad
underlying concept: local support in each of the space, time and frequency domains, and furthermore,
that the support can be harnessed practically, as will be demonstrated experimentally.
Strategies for Devising Automatic Signal Recognition Algorithms in a Shared Radio Environment
In an increasingly congested and complex radio environment, interference is to be expected, which poses problems for Automatic Signal Recognition (ASR) systems.
This thesis explores strategies for improving ASR performance in the presence of interference. The thesis breaks the overall research question down into a number of sub-questions and explores each of these in turn. A Phase-symmetric Cross Recurrence Plot is developed and used to show how a radio signal can be manipulated to separate information about the modulation from the information being carried. The Logarithmic Cyclic frequency Domain Profile is introduced to illustrate how a logarithmic representation can be used for analysing mixtures of signals with very different cyclic frequencies. After defining a canonical ASR system architecture, the concepts of an Ideal Feature and Interference Selectivity are introduced and applied to typical features used in ASR processing. Finally, it is shown how these algorithmic developments can be combined in a Bayesian chain implementation that can accommodate a wide variety of feature extraction algorithms.
It is concluded that future ASR systems will require features that can handle a wide range of signal types with much higher levels of interference selectivity if they are to achieve acceptable performance in shared spectrum bands. Intelligent segmentation is shown to be a requirement for future ASR systems unless features can be developed that have near-ideal performance.
Sparse Modeling for Image and Vision Processing
In recent years, a large amount of multi-disciplinary research has been
conducted on sparse models and their applications. In statistics and machine
learning, the sparsity principle is used to perform model selection---that is,
automatically selecting a simple model among a large collection of them. In
signal processing, sparse coding consists of representing data with linear
combinations of a few dictionary elements. Subsequently, the corresponding
tools have been widely adopted by several scientific communities such as
neuroscience, bioinformatics, or computer vision. The goal of this monograph is
to offer a self-contained view of sparse modeling for visual recognition and
image processing. More specifically, we focus on applications where the
dictionary is learned and adapted to data, yielding a compact representation
that has been successful in various contexts. Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision.
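Sparse coding as described here, i.e. representing data with linear combinations of a few dictionary elements, can be made concrete with Orthogonal Matching Pursuit, one standard greedy algorithm; the toy dictionary and signal below are illustrative:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms, refit each step."""
    residual, support = y.astype(float), []
    for _ in range(k):
        # Atom most correlated with what is still unexplained
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Least-squares refit on the selected atoms
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Overcomplete dictionary: the canonical basis plus one extra unit-norm atom
D = np.hstack([np.eye(4), np.ones((4, 1)) / 2.0])
y = np.array([2.0, 0.0, -1.5, 0.0])   # 2-sparse in the dictionary
x = omp(D, y, k=2)                    # recovers atoms 0 and 2 exactly
```

Learning the dictionary `D` itself from data, rather than fixing it, is precisely the adaptation step this monograph surveys.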