Locally adaptive factor processes for multivariate time series
In modeling multivariate time series, it is important to allow time-varying
smoothness in the mean and covariance process. In particular, there may be
certain time intervals exhibiting rapid changes and others in which changes are
slow. If such time-varying smoothness is not accounted for, one can obtain
misleading inferences and predictions, with over-smoothing across erratic time
intervals and under-smoothing across times exhibiting slow variation. This can
lead to mis-calibration of predictive intervals, which can be substantially too
narrow or wide depending on the time. We propose a locally adaptive factor
process for characterizing multivariate mean-covariance changes in continuous
time, allowing locally varying smoothness in both the mean and covariance
matrix. This process is constructed utilizing latent dictionary functions
evolving in time through nested Gaussian processes and linearly related to the
observed data with a sparse mapping. Using a differential equation
representation, we bypass usual computational bottlenecks in obtaining MCMC and
online algorithms for approximate Bayesian inference. The performance is
assessed in simulations and illustrated in a financial application.
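As a rough illustration of the motivating problem (not the authors' nested-GP construction), a kernel smoother with a time-varying bandwidth shows how locally adaptive smoothness avoids over-smoothing erratic intervals and under-smoothing slow ones; the signal, noise level, and bandwidth schedule below are invented for the sketch:

```python
import numpy as np

def local_kernel_smooth(t, y, bandwidths):
    """Gaussian kernel smoother whose bandwidth varies with time.

    A small bandwidth tracks erratic intervals closely; a large one
    averages over slowly varying intervals. This only illustrates
    locally varying smoothness, not the latent factor process itself.
    """
    y_hat = np.empty_like(y, dtype=float)
    for i, (ti, h) in enumerate(zip(t, bandwidths)):
        w = np.exp(-0.5 * ((t - ti) / h) ** 2)
        y_hat[i] = np.sum(w * y) / np.sum(w)
    return y_hat

t = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(0)
# erratic first half, slowly varying second half (toy example)
signal = np.where(t < 0.5, np.sin(40 * t), np.sin(4 * t))
y = signal + 0.1 * rng.standard_normal(t.size)
# adapt the bandwidth: narrow where the signal changes rapidly
h = np.where(t < 0.5, 0.01, 0.05)
y_hat = local_kernel_smooth(t, y, h)
```

A single global bandwidth would have to compromise between the two regimes; the adaptive schedule lets each interval be smoothed at its own scale.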
4D Seismic History Matching Incorporating Unsupervised Learning
The work discussed and presented in this paper focuses on the history
matching of reservoirs by integrating 4D seismic data into the inversion
process using machine learning techniques. A new integrated scheme for the
reconstruction of petrophysical properties with a modified Ensemble Smoother
with Multiple Data Assimilation (ES-MDA) in a synthetic reservoir is proposed.
The permeability field inside the reservoir is parametrised with an
unsupervised learning approach, namely K-means with Singular Value
Decomposition (K-SVD). This is combined with the Orthogonal Matching Pursuit
(OMP) technique, which is commonly used in sparsity-promoting regularisation
schemes. Moreover, seismic attributes, in particular acoustic impedance, are
parametrised with the Discrete Cosine Transform (DCT). This novel combination
of techniques from machine learning, sparsity regularisation, seismic imaging
and history matching aims to address the ill-posedness of the inversion of
historical production data efficiently using ES-MDA. In the numerical
experiments provided, I demonstrate that these sparse representations of the
petrophysical properties and the seismic attributes enable better matches to
the true production data and a better quantification of the propagating
waterfront than more traditional methods that do not use comparable
parametrisation techniques.
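The sparsity-promoting step can be sketched with a minimal Orthogonal Matching Pursuit (OMP) over a dictionary; the random Gaussian dictionary and 3-sparse signal below are invented for the sketch and stand in for the learned K-SVD dictionary and permeability parametrisation, not the full ES-MDA workflow:

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal Matching Pursuit: greedily select dictionary atoms.

    D: (m, k) dictionary with unit-norm columns; y: (m,) signal.
    Returns a k-vector of coefficients with at most n_nonzero nonzeros.
    """
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the selected support
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(1)
m, k = 64, 128
D = rng.standard_normal((m, k))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x_true = np.zeros(k)
x_true[[5, 40, 99]] = [1.5, -2.0, 0.7]  # 3-sparse ground truth
y = D @ x_true
x_hat = omp(D, y, n_nonzero=3)
```

In the noiseless, well-separated case sketched here, OMP recovers the sparse code exactly; with noisy field data the same greedy selection acts as the regulariser described above.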
Confident Kernel Sparse Coding and Dictionary Learning
In recent years, kernel-based sparse coding (K-SRC) has received particular
attention due to its efficient representation of nonlinear data structures in
the feature space. Nevertheless, the existing K-SRC methods suffer from the
lack of consistency between their training and test optimization frameworks. In
this work, we propose a novel confident K-SRC and dictionary learning algorithm
(CKSC) which focuses on the discriminative reconstruction of the data based on
its representation in the kernel space. CKSC reconstructs each data sample
from weighted contributions that lie confidently in its corresponding class of
data. We employ novel discriminative terms to apply this scheme to
both training and test frameworks in our algorithm. This specific design
increases the consistency of these optimization frameworks and improves the
discriminative performance in the recall phase. In addition, CKSC directly
employs the supervised information in its dictionary learning framework to
enhance the discriminative structure of the dictionary. For empirical
evaluations, we implement our CKSC algorithm on multivariate time-series
benchmarks such as DynTex++ and UTKinect. We justify our claims regarding the
superior performance of the proposed algorithm by comparing its classification
results to state-of-the-art K-SRC algorithms.
Comment: 10 pages, ICDM 2018 conference
Parallel data compression
Data compression schemes remove data redundancy in communicated and stored data and increase the effective capacities of communication and storage devices. Parallel algorithms and implementations for textual data compression are surveyed. Related concepts from parallel computation and information theory are briefly discussed. Static and dynamic methods for codeword construction and transmission on various models of parallel computation are described. Included are parallel methods which boost system speed by coding data concurrently, and approaches which employ multiple compression techniques to improve compression ratios. Theoretical and empirical comparisons are reported and areas for future research are suggested.
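The simplest concurrent-coding scheme the survey describes is block parallelism: split the input, compress the blocks independently, and accept a small ratio penalty because cross-block redundancy goes unexploited. A minimal sketch, using the standard-library zlib and a thread pool (the chunk count and sample data are arbitrary):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_parallel(data: bytes, n_chunks: int = 4) -> list[bytes]:
    """Split the input into chunks and compress them concurrently.

    Each block is an independent zlib stream, so blocks can also be
    decompressed in parallel; lengths are kept implicitly by the list.
    """
    size = -(-len(data) // n_chunks)  # ceiling division
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        return list(pool.map(zlib.compress, chunks))

def decompress_parallel(blocks: list[bytes]) -> bytes:
    with ThreadPoolExecutor() as pool:
        return b"".join(pool.map(zlib.decompress, blocks))

data = b"abracadabra " * 1000
blocks = compress_parallel(data)
```

zlib releases the GIL while compressing large buffers, so threads suffice here; a production system would also record block offsets to allow random access into the compressed stream.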
A Unified approach to concurrent and parallel algorithms on balanced data structures
Concurrent and parallel algorithms are different. However, in the case of dictionaries, both kinds of algorithms share many
common points. We present a unified approach emphasizing these points. It is based on a careful analysis of the sequential
algorithm, extracting from it the more basic facts, encapsulated later on as local rules. We apply the method to the
insertion algorithms in AVL trees. All the concurrent and parallel insertion algorithms have two main phases. A
percolation phase, moving the keys to be inserted down, and a rebalancing phase. Finally, some other algorithms and
balanced structures are discussed.
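The sequential AVL insertion that the unified approach starts from already exhibits the two phases named above: a percolation phase that moves the key down to its leaf position, and a rebalancing phase on the way back up. A compact sequential sketch (the concurrent and parallel variants replace the recursive unwinding with local rules, which this sketch does not show):

```python
class Node:
    __slots__ = ("key", "left", "right", "height")
    def __init__(self, key):
        self.key, self.left, self.right, self.height = key, None, None, 1

def h(n): return n.height if n else 0
def update(n): n.height = 1 + max(h(n.left), h(n.right))
def balance(n): return h(n.left) - h(n.right)

def rot_right(y):
    x = y.left
    y.left, x.right = x.right, y
    update(y); update(x)
    return x

def rot_left(x):
    y = x.right
    x.right, y.left = y.left, x
    update(x); update(y)
    return y

def insert(root, key):
    # phase 1: percolate the key down to its leaf position
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    # phase 2: rebalance while unwinding back toward the root
    update(root)
    b = balance(root)
    if b > 1 and key < root.left.key:        # left-left case
        return rot_right(root)
    if b > 1:                                # left-right case
        root.left = rot_left(root.left)
        return rot_right(root)
    if b < -1 and key >= root.right.key:     # right-right case
        return rot_left(root)
    if b < -1:                               # right-left case
        root.right = rot_right(root.right)
        return rot_left(root)
    return root

root = None
for k in [5, 2, 8, 1, 3, 7, 9, 4, 6, 0]:
    root = insert(root, k)

def inorder(n):
    return inorder(n.left) + [n.key] + inorder(n.right) if n else []
```

Isolating the rotations as local rules is what makes the concurrent versions possible: each rotation touches only a constant-size neighbourhood of the tree.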
Dynamic Graphs on the GPU
We present a fast dynamic graph data structure for the GPU. Our dynamic graph structure uses one hash table per vertex to store adjacency lists and achieves 3.4–14.8x faster insertion rates over the state of the art across a diverse set of large datasets, as well as deletion speedups up to 7.8x. The data structure supports queries and dynamic updates through both edge and vertex insertion and deletion. In addition, we define a comprehensive evaluation strategy based on operations, workloads, and applications that we believe better characterize and evaluate dynamic graph data structures.
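The hash-table-per-vertex layout can be sketched on the CPU with a dictionary mapping each vertex to a hash set of neighbours, giving expected O(1) edge insertion, deletion, and membership queries; this is only a serial analogue of the layout, not the GPU implementation, and the undirected-graph API below is an assumption of the sketch:

```python
class DynamicGraph:
    """CPU sketch of a hash-table-per-vertex dynamic graph."""

    def __init__(self):
        self.adj = {}                        # vertex -> set of neighbours

    def add_vertex(self, v):
        self.adj.setdefault(v, set())

    def remove_vertex(self, v):
        # drop v and unlink it from every neighbour's table
        for u in self.adj.pop(v, set()):
            self.adj[u].discard(v)

    def add_edge(self, u, v):
        self.add_vertex(u); self.add_vertex(v)
        self.adj[u].add(v); self.adj[v].add(u)

    def remove_edge(self, u, v):
        self.adj.get(u, set()).discard(v)
        self.adj.get(v, set()).discard(u)

    def has_edge(self, u, v):
        return v in self.adj.get(u, set())

g = DynamicGraph()
g.add_edge("a", "b"); g.add_edge("a", "c")
g.remove_vertex("b")
```

Because each vertex owns its own table, updates to different vertices touch disjoint memory, which is the property the GPU version exploits for massively parallel batched updates.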