Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur, Belgium, from Wednesday, August 27th, to Friday, August 29th,
2014. The workshop was conveniently located in "The Arsenal" building, within
walking distance of both hotels and the town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application, and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low-dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy trade-offs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
Machine Learning and Bayesian Statistics for Seismic Compressive Sensing
Seismic surveys involve an artificial source of waves and a grid of receivers at the surface. Often, receivers are missing, either because they malfunctioned or could not be placed in certain locations; a local source of noise may also render a receiver's output unusable. These gaps in the data cause problems in later stages of the seismic signal processing workflow, via aliasing or incoherent noise, and thus signal reconstruction is necessary. Modern algorithms utilise the principle of Compressive Sensing (CS) for reconstruction, which relies on the assumption that the signal of interest is sparse, either in its natural domain or in some other basis. Most algorithms are designed with the sole aim of filling gaps in the data, without any consideration of learning bases or quantifying the uncertainty of their predictions.
In this thesis, we approach the seismic CS problem using probabilistic data-driven models that are adaptable to seismic data. We propose to use algorithms from Bayesian statistics and machine learning that allow the construction of models using probability distributions over random variables. This allows the modelling of sparsity and provides flexibility by adding or removing basis functions from the model. It also provides a framework for learning new dictionaries of bases, associating uncertainty with each prediction, and denoising seismic signals. More specifically, we utilise two Bayesian algorithms for seismic CS: the Relevance Vector Machine (RVM) and Beta Process Factor Analysis (BPFA).
The RVM uses a sparsity-promoting distribution over the coefficients of a linear combination of basis functions. By learning the appropriate parameters, the algorithm infers a predictive mean and a predictive variance, which are used to predict receivers' values and to quantify uncertainty. Experiments and comparisons on various seismic data show the effectiveness of the RVM, with state-of-the-art reconstruction accuracy. Furthermore, its predictive variance is used, with modifications, to create uncertainty maps with varying levels of correlation between the uncertainty and the corresponding reconstruction error of the receivers.
On the other hand, BPFA uses an alternative approach to enforce sparsity, yielding exactly zero coefficients, as opposed to the RVM. Another advantage is that it also learns the bases from the available data and provides denoising of seismic signals. Experiments and comparisons on seismic data show that BPFA obtains state-of-the-art reconstruction accuracy in various domains. In addition, the learned bases can be used by other algorithms to improve their performance. An analysis of BPFA's inference procedure is given, along with insights for reducing its computational cost. We also exploit the probabilistic nature of BPFA and calculate the variance of the receivers' predictions obtained during inference. Using this, we create uncertainty maps that are highly correlated with the reconstruction error, obtaining better results than the RVM's predictive variance. Finally, an analysis of seismic signals with different levels of variance is undertaken in order to provide guidance on the best choice of algorithm per region.
The amount of seismic data available is growing, but quantity does not directly translate to quality. This creates the challenge of analysing the data and extracting as much information and insight as possible. Using probabilistic data-driven models, we show how to achieve this by reconstructing seismic signals from under-sampled data, learning features from training data, denoising signals, and creating uncertainty maps for predictions in seismic surveys.
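The RVM approach described above can be sketched with a toy 1-D example. This is a minimal illustration, not the thesis's actual setup: the DCT dictionary, the synthetic trace, the sampling pattern, and the use of scikit-learn's ARDRegression as a stand-in for an RVM are all assumptions made here for the sketch. It shows the two ingredients the abstract highlights: a sparsity-promoting Bayesian fit of basis coefficients, and a predictive variance usable for uncertainty maps.

```python
# Toy sketch: RVM-style reconstruction of an under-sampled 1-D "trace".
# All modelling choices here (DCT basis, synthetic signal, 60% sampling,
# ARDRegression as an RVM stand-in) are illustrative assumptions.
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
n = 128

# Orthonormal DCT dictionary: column k is the k-th DCT basis vector.
basis = idct(np.eye(n), axis=0, norm="ortho")

# Synthetic signal that is exactly sparse in this dictionary.
coef_true = np.zeros(n)
coef_true[[3, 7, 15]] = [1.0, -0.7, 0.4]
signal = basis @ coef_true

# Simulate missing receivers: keep a random 60% of the samples.
keep = np.sort(rng.choice(n, size=int(0.6 * n), replace=False))
obs = signal[keep] + 0.01 * rng.normal(size=keep.size)  # slight noise

# Sparsity-promoting Bayesian linear model fit on observed rows only.
model = ARDRegression(fit_intercept=False)
model.fit(basis[keep], obs)

# Predictive mean reconstructs the full trace; the predictive std gives
# a per-sample uncertainty, the raw material for an uncertainty map.
mean, std = model.predict(basis, return_std=True)
err = np.max(np.abs(mean - signal))
print(f"max reconstruction error: {err:.3f}")
```

In this sketch the per-sample `std` plays the role of the thesis's predictive variance: larger values flag samples whose reconstruction is less trustworthy.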
Robust density modelling using the student's t-distribution for human action recognition
The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly reduce recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM which uses mixtures of t-distributions as observation probabilities, and experiments on two well-known datasets (Weizmann, MuHAVi) show a remarkable improvement in classification accuracy. © 2011 IEEE
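The robustness the abstract relies on can be shown with a small numerical illustration (this is not the paper's experiment, and the data below are synthetic): estimating the location of a contaminated sample, the Gaussian maximum-likelihood estimate (the mean) is dragged toward the outliers, while the Student's t fit stays near the true centre.

```python
# Illustrative comparison (synthetic data, not the paper's experiment):
# location estimates under gross outliers, Gaussian vs Student's t.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
inliers = rng.normal(loc=0.0, scale=1.0, size=200)
outliers = np.full(10, 25.0)  # e.g. badly extracted feature values
data = np.concatenate([inliers, outliers])

gauss_loc = data.mean()                  # Gaussian MLE of the location
df, t_loc, t_scale = stats.t.fit(data)   # Student's t MLE (df, loc, scale)

print(f"Gaussian location estimate:  {gauss_loc:.2f}")
print(f"Student-t location estimate: {t_loc:.2f}")
```

The heavy tails of the t-distribution let it explain the outliers without shifting its centre, which is exactly why t-mixture observation densities degrade more gracefully than Gaussian ones inside an HMM.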
A comparison of the CAR and DAGAR spatial random effects models with an application to diabetics rate estimation in Belgium
When hierarchically modelling an epidemiological phenomenon on a finite collection of sites in space, one must always take a latent spatial effect into account in order to capture the correlation structure that links the phenomenon to the territory. In this work, we compare two autoregressive spatial models that can be used for this purpose: the classical CAR model and the more recent DAGAR model. Unlike the former, the latter has a desirable property: its ρ parameter can be naturally interpreted as the average neighbor-pair correlation and, in addition, this parameter can be directly estimated when the effect is modelled using a DAGAR rather than a CAR structure. As an application, we model the diabetics rate in Belgium in 2014 and show the adequacy of these models in predicting the response variable when no covariates are available.
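For concreteness, the proper CAR prior referenced above is commonly written with precision matrix Q = τ(D − ρW), where W is the sites' adjacency matrix and D its diagonal degree matrix. The sketch below builds Q for a toy four-site chain graph (an illustrative assumption, not the Belgian municipality graph used in the paper) and checks that it defines a valid Gaussian prior.

```python
# Toy proper-CAR precision matrix Q = tau * (D - rho * W) on a
# 4-site chain graph 0-1-2-3 (illustrative; not the paper's map).
import numpy as np

W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # site adjacency
D = np.diag(W.sum(axis=1))                  # neighbor counts

tau, rho = 1.0, 0.9   # precision scale and spatial-dependence parameter
Q = tau * (D - rho * W)

# For |rho| < 1 the proper CAR precision is positive definite, so the
# latent spatial effect gets a valid multivariate normal prior N(0, Q^-1).
eigvals = np.linalg.eigvalsh(Q)
print("min eigenvalue:", eigvals.min())

cov = np.linalg.inv(Q)  # implied covariance between sites
```

Note that in the CAR parametrisation, ρ has no direct interpretation as an average neighbor-pair correlation; that interpretability is precisely the advantage the abstract attributes to the DAGAR model's ρ.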
A Statistical Approach to the Alignment of fMRI Data
Multi-subject functional Magnetic Resonance Imaging (fMRI) studies are critical; since anatomical and functional structure varies across subjects, image alignment is necessary. We define a probabilistic model to describe functional alignment. By imposing a prior distribution, such as the matrix von Mises-Fisher distribution, on the orthogonal transformation parameter, anatomical information is embedded in the estimation of the parameters, i.e., by penalizing combinations of spatially distant voxels. Real applications show an improvement in the classification and interpretability of the results compared to various functional alignment methods.
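The core operation behind such functional alignment, estimating an orthogonal transformation that maps one subject's responses onto another's, can be sketched with plain orthogonal Procrustes. This is a simplification made for illustration: the sketch uses synthetic data and no prior, whereas the paper's model places a matrix von Mises-Fisher prior on the transformation to inject anatomical information.

```python
# Toy functional alignment via orthogonal Procrustes (synthetic data;
# a prior-free simplification of the probabilistic model above).
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(2)

# Two "subjects": time points x voxels; subject 2 is a rotated copy
# of subject 1 plus a little noise.
X1 = rng.normal(size=(100, 20))
R_true, _ = np.linalg.qr(rng.normal(size=(20, 20)))
X2 = X1 @ R_true + 0.01 * rng.normal(size=(100, 20))

# Estimate the orthogonal map that aligns subject 2 back onto subject 1.
R_hat, _ = orthogonal_procrustes(X2, X1)
aligned = X2 @ R_hat

before = np.linalg.norm(X2 - X1)
after = np.linalg.norm(aligned - X1)
print(f"misalignment before: {before:.2f}, after: {after:.2f}")
```

In the paper's setting the same rotation is instead estimated under a matrix von Mises-Fisher prior, which penalizes transformations that mix spatially distant voxels.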
- …