High Dimensional Low Rank plus Sparse Matrix Decomposition
This paper is concerned with the problem of low rank plus sparse matrix
decomposition for big data. Conventional algorithms for matrix decomposition
use the entire data to extract the low-rank and sparse components, and are
based on optimization problems with complexity that scales with the dimension
of the data, which limits their scalability. Furthermore, existing randomized
approaches mostly rely on uniform random sampling, which is quite inefficient
for many real world data matrices that exhibit additional structures (e.g.
clustering). In this paper, a scalable subspace-pursuit approach that
transforms the decomposition problem into a subspace learning problem is
proposed. The decomposition is carried out using a small data sketch formed
from sampled columns/rows. Even when the data is sampled uniformly at random,
it is shown that the sufficient number of sampled columns/rows is roughly
O(rμ), where μ is the coherence parameter and r is the rank of the low-rank
component. In addition, adaptive sampling algorithms are proposed to address
the problem of column/row sampling from structured data. We provide an analysis
of the proposed method with adaptive sampling and show that adaptive sampling
makes the required number of sampled columns/rows invariant to the distribution
of the data. The proposed approach is amenable to online implementation, and an
online scheme is proposed. Comment: IEEE Transactions on Signal Processing
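The sketch-based pipeline the abstract describes — sample a few columns, learn the low-rank column subspace from the sketch, then decompose the full matrix — can be illustrated with a minimal numpy sketch. This assumes the sparse corruptions are mild enough that a plain truncated SVD of the sketch recovers the subspace; the paper's subspace-pursuit method is more robust than this toy version, and the sizes below are arbitrary:

```python
import numpy as np

def sketch_decompose(M, r, n_cols, rng):
    """Estimate the low-rank/sparse split of M from a small column sketch.

    Sample n_cols columns uniformly at random, take the rank-r left
    singular subspace of the sketch as the column space of the low-rank
    part, project M onto it, and read the residual as the sparse part.
    """
    idx = rng.choice(M.shape[1], size=n_cols, replace=False)
    U, _, _ = np.linalg.svd(M[:, idx], full_matrices=False)
    U_r = U[:, :r]
    L_hat = U_r @ (U_r.T @ M)   # projection onto the learned subspace
    return L_hat, M - L_hat

rng = np.random.default_rng(0)
# Rank-3 component plus very sparse, small-magnitude corruptions.
L = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 300))
S = np.zeros((200, 300))
S[rng.random((200, 300)) < 0.001] = 0.1
L_hat, S_hat = sketch_decompose(L + S, r=3, n_cols=30, rng=rng)
rel_err = np.linalg.norm(L_hat - L) / np.linalg.norm(L)
```

Only the 30-column sketch is factorized; the full 200×300 matrix is touched once, by a single projection, which is the source of the scalability claim.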
Adaptive processing with signal contaminated training samples
We consider the adaptive beamforming or adaptive detection problem in the case of signal-contaminated training samples, i.e., when the latter may contain a signal-like component. Since this results in a significant degradation of the signal-to-interference-and-noise ratio at the output of the adaptive filter, we investigate a scheme to jointly detect the contaminated samples and subsequently take this information into account for estimation of the disturbance covariance matrix. Towards this end, a Bayesian model is proposed, parameterized by binary variables indicating the presence or absence of signal-like components in the training samples. These variables, together with the signal amplitudes and the disturbance covariance matrix, are jointly estimated using a minimum mean-square error (MMSE) approach. Two strategies are proposed to implement the MMSE estimator. First, a stochastic Markov chain Monte Carlo method based on Gibbs sampling is presented. Then, a computationally more efficient scheme based on variational Bayesian analysis is proposed. Numerical simulations attest to the improvement achieved by this method compared to conventional methods such as diagonal loading. A successful application to real radar data is also presented
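For contrast with the diagonal-loading baseline the abstract mentions, here is a minimal numpy sketch (not the paper's Bayesian MMSE scheme) of an MVDR beamformer trained on contaminated snapshots. The steering vector, loading level, and contamination model are illustrative assumptions:

```python
import numpy as np

def mvdr_weights(R, s, loading=0.0):
    """MVDR beamformer w = R^{-1} s / (s^H R^{-1} s), with optional
    diagonal loading of the estimated covariance matrix."""
    Rl = R + loading * np.eye(R.shape[0])
    v = np.linalg.solve(Rl, s)
    return v / (s.conj() @ v)

rng = np.random.default_rng(1)
n, k = 8, 40                                  # sensors, training snapshots
s = np.ones(n, dtype=complex)                 # steering vector (assumed)
noise = (rng.standard_normal((n, k))
         + 1j * rng.standard_normal((n, k))) / np.sqrt(2)
# Roughly 30% of the snapshots carry a signal-like component.
hits = rng.random(k) < 0.3
contam = 2.0 * np.outer(s, hits * rng.standard_normal(k))
X = noise + contam
R_hat = X @ X.conj().T / k                    # contaminated sample covariance
w_plain = mvdr_weights(R_hat, s)              # sample covariance only
w_loaded = mvdr_weights(R_hat, s, loading=10.0)  # diagonal loading
```

Both weight vectors satisfy the distortionless constraint w^H s = 1 by construction; the paper's contribution is to detect which snapshots are contaminated rather than loading indiscriminately.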
Recommended from our members
Stochastic Yield Analysis of Rare Failure Events in High-Dimensional Variation Space
As the semiconductor industry keeps shrinking feature sizes to the nanometer scale, circuit reliability has become an area of growing concern due to the uncertainty introduced by process variations. For highly replicated standard cells, the failure event for each individual component must be extremely rare in order to maintain a sufficiently high yield rate. Existing yield analysis approaches work well in low dimensions, but become less effective either when there are a large number of circuit parameters or when the failure samples are distributed across multiple regions. In this thesis, four novel high-sigma analysis approaches are proposed. First, we propose an adaptive importance sampling (AIS) algorithm. Whereas existing methods pre-decide a static sampling distribution, AIS adjusts its sampling region over several iterations. At each iteration, AIS generates samples from the current proposal distribution, then carefully assigns a weight to each sample based on its tilted occurrence probability between the failure region and the current failure-region distribution. We then design two adaptive frameworks, based on resampling and population Metropolis-Hastings (MH), to iteratively search for failure regions. Second, we develop an Adaptive Clustering and Sampling (ACS) method to estimate the failure rate of high-dimensional, multi-failure-region circuit cases. The basic idea of the algorithm is to cluster failure samples and build a global sampling distribution at each iteration. Specifically, in the clustering step, we propose a multi-cone clustering method, which partitions the parametric space and clusters failure samples. A global sampling distribution is then constructed from a set of weighted Gaussian distributions. Next, we calculate an importance weight for each sample based on the discrepancy between the sampling distribution and the target distribution. The failure probability is updated at the end of each iteration.
This clustering and sampling procedure proceeds iteratively until all the failure regions are covered. Moreover, two meta-model-based approaches are proposed for high-sigma analysis. The Low-Rank Tensor Approximation (LRTA) approach formulates the meta-model in tensor space by representing a multi-way tensor as a finite sum of rank-one tensors. The polynomial degree of our LRTA model grows linearly with circuit dimension, which makes it especially promising for high-dimensional circuit problems. We then solve the LRTA model efficiently with a robust greedy algorithm and calibrate it iteratively with an adaptive sampling method. The meta-model-based importance sampling (MIS) method uses a Gaussian-process meta-model to construct a quasi-optimal importance sampling distribution, and performs Markov chain Monte Carlo (MCMC) simulation to generate new samples from the proposed distribution. By updating the global importance sampling estimator in an iterated framework, MIS achieves better efficiency and higher accuracy than traditional importance sampling methods. Experimental results validate that the proposed approaches are three orders of magnitude faster than Monte Carlo, and more accurate than both academic solutions, such as importance sampling and classification-based methods, and industrial solutions, such as the mixture IS used by Intel
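The adaptive-importance-sampling loop described above can be illustrated on a toy rare-event problem. This is a generic cross-entropy-style sketch, not the thesis's AIS/ACS algorithms; the threshold, sample counts, and proposal-update rule are illustrative assumptions:

```python
import numpy as np

def adaptive_is(threshold, n_iter=6, n=2000, rho=0.1, rng=None):
    """Estimate P(x > threshold) for x ~ N(0,1) with an adaptively
    re-centred Gaussian proposal: each iteration moves the proposal
    mean toward the importance-weighted mean of the current elite
    (near-failure) samples, then a final pass gives an unbiased
    importance-sampling estimate."""
    rng = rng or np.random.default_rng(0)
    mu = 0.0
    for _ in range(n_iter):
        x = rng.normal(mu, 1.0, n)
        # Intermediate level: the (1-rho)-quantile, capped at the target.
        level = min(threshold, np.quantile(x, 1 - rho))
        w = np.exp(-0.5 * x**2 + 0.5 * (x - mu)**2)   # p(x) / q(x)
        elite = x >= level
        mu = np.sum(w[elite] * x[elite]) / np.sum(w[elite])
    # Final unbiased estimate from the adapted proposal.
    x = rng.normal(mu, 1.0, n)
    w = np.exp(-0.5 * x**2 + 0.5 * (x - mu)**2)
    return float(np.mean(w * (x > threshold)))

p_hat = adaptive_is(4.0)   # true value is about 3.17e-5
```

A crude Monte Carlo run of the same size would see on the order of 0.06 failures and estimate zero; the adapted proposal concentrates samples near the failure region, which is the efficiency argument the thesis makes in many dimensions.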
Implementation of elastic prestack reverse-time migration using an efficient finite-difference scheme
Elastic reverse-time migration (RTM) can reflect subsurface elastic information more comprehensively than single-component P-wave migration. One of the most important requirements of elastic RTM is solving the wave equations; the imaging accuracy and efficiency of RTM depend heavily on the algorithms used to solve them. In this paper, we propose an efficient staggered-grid finite-difference (SFD) scheme based on a sampling approximation method with adaptive variable difference operator lengths to implement elastic prestack RTM. Numerical dispersion analysis and wavefield extrapolation results show that the sampling-approximation SFD scheme is more accurate than the conventional Taylor-series expansion SFD scheme. We also test the elastic RTM algorithm on theoretical models and a field data set. The experiments demonstrate that elastic RTM using the proposed SFD scheme generates better images than RTM using the Taylor-series expansion SFD scheme, particularly for PS images. Furthermore, the use of adaptive variable difference operator lengths effectively improves the computational efficiency of elastic RTM
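As a minimal illustration of the staggered-grid idea, here is a 1-D acoustic velocity-stress scheme with standard second-order Taylor coefficients — not the paper's sampling-approximation coefficients or its elastic multi-component scheme; the grid, wavelet, and material parameters are illustrative assumptions:

```python
import numpy as np

def staggered_fd_1d(nt, nx, dx, dt, vp, rho, src):
    """Second-order 1-D velocity-stress wave propagation on a staggered
    grid: velocity v lives on integer points, stress tau on half-points,
    so each first derivative is centred over a single dx."""
    v = np.zeros(nx)
    tau = np.zeros(nx - 1)
    mod = rho * vp**2                        # P-wave modulus on the grid
    mod_half = 0.5 * (mod[1:] + mod[:-1])    # modulus at half-points
    for it in range(nt):
        tau += dt * mod_half * np.diff(v) / dx          # stress update
        v[1:-1] += dt / rho[1:-1] * np.diff(tau) / dx   # velocity update
        v[nx // 2] += src[it]                # inject source at the centre
    return v

nx, dx = 201, 5.0
vp = np.full(nx, 2000.0)                     # homogeneous 2000 m/s model
rho = np.full(nx, 1000.0)
dt = 0.4 * dx / vp.max()                     # CFL number 0.4 < 1: stable
nt = 300
t = np.arange(nt) * dt
f0 = 25.0                                    # Ricker source wavelet
arg = (np.pi * f0 * (t - 1.0 / f0))**2
src = (1 - 2 * arg) * np.exp(-arg)
snapshot = staggered_fd_1d(nt, nx, dx, dt, vp, rho, src)
```

The operator length here is fixed at two points; the paper's contribution is to choose longer, sampling-approximation operators adaptively where the wavefield demands them, trading stencil width against dispersion error.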
Not a COINcidence: Sub-Quadratic Asynchronous Byzantine Agreement WHP
King and Saia were the first to break the quadratic word complexity bound for
Byzantine Agreement in synchronous systems against an adaptive adversary, and
Algorand broke this bound with near-optimal resilience (first in the
synchronous model and then with eventual synchrony). Yet the question of
asynchronous sub-quadratic Byzantine Agreement remained open. To the best of
our knowledge, we are the first to answer this question in the affirmative. A
key component of our solution is a shared coin algorithm based on a VRF. A
second essential ingredient is VRF-based committee sampling, which we formalize
and utilize in the asynchronous model for the first time. Our algorithms work
against a delayed-adaptive adversary, which cannot perform after-the-fact
removals but has full control of Byzantine processes and full information about
communication in earlier rounds. Using committee sampling and our shared coin,
we solve Byzantine Agreement with high probability, with sub-quadratic word
complexity and constant expected time, breaking the quadratic bit barrier
for asynchronous Byzantine Agreement
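The committee-sampling ingredient can be sketched with a hash standing in for the VRF. The seed string, sizes, and selection rule below are illustrative; a real deployment needs an actual verifiable random function so that each process's selection comes with a proof others can verify:

```python
import hashlib

def in_committee(proc_id, seed, n, committee_size):
    """Self-select into a committee with expected size committee_size.

    A hash of (seed, id) stands in for a VRF output: it maps to a
    pseudo-uniform value in [0, 1), and the process joins if the value
    falls below committee_size / n.  Crucially, each process evaluates
    this locally, so an adaptive adversary cannot predict the committee
    before its members speak.
    """
    digest = hashlib.sha256(f"{seed}:{proc_id}".encode()).digest()
    u = int.from_bytes(digest, "big") / 2**256
    return u < committee_size / n

n, k = 1000, 100
committee = [p for p in range(n) if in_committee(p, "round-7", n, k)]
```

Because only the expected-size-k committee sends messages in a round, the per-round word complexity scales with k rather than n, which is the route to sub-quadratic totals.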
Modeling and Analyzing Adaptive User-Centric Systems in Real-Time Maude
Pervasive user-centric applications are systems which are meant to sense the
presence, mood, and intentions of users in order to optimize user comfort and
performance. Building such applications requires not only state-of-the-art
techniques from artificial intelligence but also sound software engineering
methods for facilitating modular design, runtime adaptation and verification of
critical system requirements.
In this paper we focus on high-level design and analysis, and use the
algebraic rewriting language Real-Time Maude for specifying applications in a
real-time setting. We propose a generic component-based approach for modeling
pervasive user-centric systems and we show how to analyze and prove crucial
properties of the system architecture through model checking and simulation.
For proving time-dependent properties we use Metric Temporal Logic (MTL) and
present analysis algorithms for model checking two subclasses of MTL formulas:
time-bounded response and time-bounded safety MTL formulas. The underlying idea
is to extend the Real-Time Maude model with suitable clocks, to transform the
MTL formulas into LTL formulas over the extended specification, and then to use
the LTL model checker of Maude. It is shown that these analyses are sound and
complete for maximal time sampling. The approach is illustrated by a simple
adaptive advertising scenario in which an adaptive advertisement display can
react to actions of the users in front of the display. Comment: In Proceedings RTRTS 2010, arXiv:1009.398
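The clock-extension idea behind the time-bounded response check can be illustrated on a plain timed trace. This is a Python sketch of the semantics, not Real-Time Maude code; the event names and the finite-trace treatment of open obligations are assumptions:

```python
def bounded_response(trace, bound):
    """Check the time-bounded response formula  [] (p -> <>_{<= bound} q)
    on a finite timed trace [(time, event), ...].  Mirrors the
    clock-extension idea: a clock starts whenever p occurs, and some q
    must arrive before any such clock exceeds the bound."""
    clocks = []                       # start times of unanswered p's
    for t, ev in trace:
        if any(t - t0 > bound for t0 in clocks):
            return False              # a clock expired before a q arrived
        if ev == "q":
            clocks.clear()            # this q answers every pending p
        if ev == "p":
            clocks.append(t)
    # Obligations whose deadlines lie beyond the finite trace are not
    # counted as violations here.
    return True

ok = bounded_response([(0, "p"), (2, "q"), (5, "p"), (7, "q")], bound=3)
bad = bounded_response([(0, "p"), (4, "tick"), (5, "q")], bound=3)
```

In the paper's setting the same reduction happens at the model level: the added clocks turn the metric bound into an ordinary state predicate, so Maude's untimed LTL model checker can decide the property.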