Model-free Nonconvex Matrix Completion: Local Minima Analysis and Applications in Memory-efficient Kernel PCA
This work studies low-rank approximation of a positive semidefinite matrix
from partial entries via nonconvex optimization. We characterize how well
local-minimum based low-rank factorization approximates a fixed positive
semidefinite matrix without any assumptions on the rank-matching, the condition
number or eigenspace incoherence parameter. Furthermore, under certain
assumptions on rank-matching and well-boundedness of condition numbers and
eigenspace incoherence parameters, a corollary of our main theorem improves the
state-of-the-art sampling rate results for nonconvex matrix completion with no
spurious local minima in Ge et al. [2016, 2017]. In addition, we investigate
when the proposed nonconvex optimization results in accurate low-rank
approximations even in presence of large condition numbers, large incoherence
parameters, or rank mismatching. We also propose to apply the nonconvex
optimization to memory-efficient Kernel PCA. Compared to the well-known
Nystr\"{o}m methods, numerical experiments indicate that the proposed nonconvex
optimization approach yields more stable results in both low-rank approximation
and clustering.
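As a rough illustration of this setting (not the paper's exact algorithm), the sketch below runs plain gradient descent on the factored objective ||P_Omega(UU^T - A)||_F^2 with a spectral initialization; the matrix size, rank, sampling rate, and step size are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 60, 3
Z = rng.standard_normal((n, r))
A = Z @ Z.T                                  # ground-truth PSD matrix of rank r
mask = rng.random((n, n)) < 0.5
mask = np.triu(mask) | np.triu(mask, 1).T    # symmetric sampling pattern Omega
p = mask.mean()                              # empirical sampling rate

# Spectral initialization from the zero-filled, rescaled observations
vals, vecs = np.linalg.eigh((mask * A) / p)
U = vecs[:, -r:] * np.sqrt(np.clip(vals[-r:], 0, None))

step = 2.5e-4
for _ in range(1500):
    R = mask * (U @ U.T - A)    # residual on observed entries only
    U -= step * 4 * R @ U       # gradient of ||P_Omega(U U^T - A)||_F^2
rel_err = np.linalg.norm(U @ U.T - A) / np.linalg.norm(A)
```

With 50% sampling and a well-conditioned rank-3 target, the local minimum reached this way closely approximates the full matrix while only ever storing the n-by-r factor, which is the memory saving exploited in the kernel-PCA application.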
Solutions of the (2+1)-dimensional KP, SK and KK equations generated by gauge transformations from non-zero seeds
By using gauge transformations, we manage to obtain new solutions of
(2+1)-dimensional Kadomtsev-Petviashvili (KP), Kaup-Kupershmidt (KK) and
Sawada-Kotera (SK) equations from non-zero seeds. For each of the preceding
equations, a Galilean type transformation between these solutions and the
previously known solutions generated from zero seed is given. We
present several explicit formulas of the single-soliton solutions, and further
point out their two main differences under the same parameter values, i.e., the
height and the location of the peak line, which are demonstrated visibly in
three figures.
Comment: 18 pages, 6 figures, to appear in Journal of Nonlinear Mathematical Physics
Subspace Perspective on Canonical Correlation Analysis: Dimension Reduction and Minimax Rates
Canonical correlation analysis (CCA) is a fundamental statistical tool for
exploring the correlation structure between two sets of random variables. In
this paper, motivated by the recent success of applying CCA to learn
low-dimensional representations of high-dimensional objects, we propose to quantify
the estimation loss of CCA by the excess prediction loss defined through a
prediction-after-dimension-reduction framework. This framework suggests viewing
CCA estimation as estimating the subspaces spanned by the canonical variates.
Interestingly, the proposed error metrics derived from the excess prediction
loss turn out to be closely related to the principal angles between the
subspaces spanned by the population and sample canonical variates respectively.
We characterize the non-asymptotic minimax rates under the proposed metrics,
especially the dependency of the minimax rates on the key quantities including
the dimensions, the condition number of the covariance matrices, the canonical
correlations and the eigen-gap, with minimal assumptions on the joint
covariance matrix. To the best of our knowledge, this is the first finite
sample result that captures the effect of the canonical correlations on the
minimax rates.
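A minimal sketch of the classical sample-CCA computation the abstract builds on (whiten each block, then take the SVD of the whitened cross-covariance); the data-generating model with a shared latent signal and all dimensions are illustrative assumptions, and the subspace/prediction-loss analysis itself is beyond this snippet:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 5000, 4, 3
z = rng.standard_normal((n, 1))    # shared latent signal inducing correlation
X = z @ rng.standard_normal((1, p)) + rng.standard_normal((n, p))
Y = z @ rng.standard_normal((1, q)) + rng.standard_normal((n, q))

Xc, Yc = X - X.mean(0), Y - Y.mean(0)
Sxx, Syy = Xc.T @ Xc / n, Yc.T @ Yc / n
Sxy = Xc.T @ Yc / n

# Whiten both blocks, then SVD the whitened cross-covariance
Lx, Ly = np.linalg.cholesky(Sxx), np.linalg.cholesky(Syy)
T = np.linalg.solve(Lx, Sxy) @ np.linalg.inv(Ly).T
U, rho, Vt = np.linalg.svd(T, full_matrices=False)
A = np.linalg.solve(Lx.T, U)       # canonical directions for X
B = np.linalg.solve(Ly.T, Vt.T)    # canonical directions for Y
```

The singular values rho are the sample canonical correlations, and the column spans of A and B are exactly the estimated subspaces whose principal angles to the population version drive the minimax rates discussed above.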
Cooperative Change Detection for Online Power Quality Monitoring
This paper considers real-time power quality monitoring in power grid
systems. The goal is to detect the occurrence of disturbances in the nominal
sinusoidal voltage/current signal as quickly as possible such that protection
measures can be taken in time. Based on an autoregressive (AR) model for the
disturbance, we propose a generalized local likelihood ratio (GLLR) detector
which processes meter readings sequentially and alarms as soon as the test
statistic exceeds a prescribed threshold. The proposed detector not only reacts
to a wide range of disturbances, but also achieves lower detection delay
compared to the conventional block processing method. Then we further propose
to deploy multiple meters to monitor the power signal cooperatively. The
distributed meters communicate wirelessly to a central meter, where the data
fusion and detection are performed. In light of the limited bandwidth of
wireless channels, we develop a level-triggered sampling scheme, where each
meter transmits only one-bit each time asynchronously. The proposed multi-meter
scheme features substantially lower communication overhead, while its performance
is close to that of the ideal case where distributed meter readings are
perfectly available at the central meter.
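To give a flavor of sequential change detection of this kind, here is a CUSUM detector for a known mean shift, a deliberately simpler cousin of the GLLR detector for AR disturbances described above; the signal model, change time, shift size, and threshold are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(600)       # nominal: unit-variance Gaussian residuals
x[300:] += 1.5                     # disturbance: mean shift starting at t = 300

mu, threshold = 1.5, 10.0          # assumed post-change mean, alarm threshold
stat, alarm = 0.0, None
for t, sample in enumerate(x):
    # CUSUM recursion: accumulate the log-likelihood ratio, floored at zero
    stat = max(0.0, stat + mu * sample - mu ** 2 / 2)
    if stat > threshold:
        alarm = t
        break
```

Because the statistic is updated per sample and alarms as soon as it crosses the threshold, the detection delay after the change is only a handful of samples, which is the advantage over block processing the abstract points to.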
Solving Quadratic Equations via PhaseLift when There Are About As Many Equations As Unknowns
This note shows that we can recover a complex vector x in C^n exactly from on
the order of n quadratic equations of the form |<a_i, x>|^2 = b_i, i = 1, ...,
m, by using a semidefinite program known as PhaseLift. This improves upon
earlier bounds in [3], which required the number of equations to be at least on
the order of n log n. We also demonstrate optimal recovery results from noisy
quadratic measurements; these results are much sharper than previously known
results.
Comment: 6 pages
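The key idea behind PhaseLift is that the quadratic constraints become linear after lifting x to the rank-one matrix X = x x^*. The snippet below verifies that identity numerically (solving the actual semidefinite program would require a convex solver; dimensions and data here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 8, 30
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

b = np.abs(A @ x) ** 2             # quadratic measurements |<a_i, x>|^2
X = np.outer(x, x.conj())          # lifted rank-one matrix X = x x^*
# Each measurement is linear in the lifted variable X
b_lifted = np.einsum('ij,jk,ik->i', A, X, A.conj()).real
```

PhaseLift then searches for a PSD matrix consistent with these linear measurements; the note's result is that about n such equations already pin down the rank-one solution.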
Controlling electron propagation on a topological insulator surface via proximity interactions
The possibility of electron beam guiding is theoretically explored on the
surface of a topological insulator through the proximity interaction with a
magnetic material. The electronic band modification induced by the exchange
coupling at the interface defines the path of electron propagation in analogy
to the optical fiber for photons. Numerical simulations indicate that the
guiding efficiency is much higher than that in the "waveguide" formed by an
electrostatic potential barrier such as a p-n junction. Further, the results illustrate
effective flux control and beam steering that can be realized by altering the
magnetization/spin texture of the adjacent magnetic materials. Specifically,
the feasibility of switching on/off and making a large-angle turn is
demonstrated under realistic conditions. Potential implementation in logic and
interconnect applications is also examined in connection with electrically
controlled magnetization switching.
Sequential Hypothesis Test with Online Usage-Constrained Sensor Selection
This work investigates the sequential hypothesis testing problem with online
sensor selection and sensor usage constraints. That is, in a sensor network,
the fusion center sequentially acquires samples by selecting one "most
informative" sensor at each time until a reliable decision can be made. In
particular, the sensor selection is carried out in an online fashion since it
depends on all the previous samples at each time. Our goal is to develop the
sequential test (i.e., stopping rule and decision function) and sensor
selection strategy that minimize the expected sample size subject to the
constraints on the error probabilities and sensor usages. To this end, we first
recast the usage-constrained formulation into a Bayesian optimal stopping
problem with different sampling costs for the usage-constrained sensors. The
Bayesian problem is then studied under both finite- and infinite-horizon
setups, based on which, the optimal solution to the original usage-constrained
problem can be readily established. Moreover, by capitalizing on the structures
of the optimal solution, a lower bound is obtained for the optimal expected
sample size. In addition, we also propose algorithms to approximately evaluate
the parameters in the optimal sequential test so that the sensor usage and
error probability constraints are satisfied. Finally, numerical experiments are
provided to illustrate the theoretical findings, and compare with the existing
methods.
Comment: 33 pages
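As a toy illustration only (the paper's policy comes from the Bayesian optimal-stopping analysis, not this heuristic), the sketch below runs an SPRT whose stopping rule and decision function are the classic thresholds, with a myopic sensor-selection rule that trades expected log-likelihood-ratio drift against a usage cost; all means, costs, and thresholds are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
# Two sensors: observations ~ N(mu_k, 1) under H1 and N(0, 1) under H0
mus = np.array([1.0, 0.5])         # sensor 0 is more informative...
cost = np.array([0.30, 0.05])      # ...but also more expensive to use
truth = 1                          # simulate data under H1

a, b = np.log(99), -np.log(99)     # SPRT thresholds for ~1% error rates
llr, used = 0.0, []
while b < llr < a:
    # Myopic selection: expected LLR drift (KL divergence mu^2/2) minus cost
    k = int(np.argmax(mus ** 2 / 2 - cost))
    used.append(k)
    s = rng.standard_normal() + (mus[k] if truth else 0.0)
    llr += mus[k] * s - mus[k] ** 2 / 2     # Gaussian log-likelihood ratio
decision = 1 if llr >= a else 0
```

The expected number of samples before stopping is the quantity the paper minimizes; encoding sensor-usage constraints as per-sensor sampling costs, as done loosely above, mirrors the cost-based reformulation in the abstract.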
Decentralized Sequential Composite Hypothesis Test Based on One-Bit Communication
This paper considers the sequential composite hypothesis test with multiple
sensors. The sensors observe random samples in parallel and communicate with a
fusion center, which makes the global decision based on the sensor inputs. On one
hand, in the centralized scenario, where local samples are precisely
transmitted to the fusion center, the generalized sequential probability ratio
test (GSPRT) is shown to be asymptotically optimal in terms of the expected
sample size as error rates tend to zero. On the other hand, for systems with
limited power and bandwidth resources, decentralized solutions that only send a
summary of local samples (we particularly focus on a one-bit communication
protocol) to the fusion center are of great importance. To this end, we first
consider a decentralized scheme where sensors send their one-bit quantized
statistics every fixed period of time to the fusion center. We show that such a
uniform sampling and quantization scheme is strictly suboptimal and its
suboptimality can be quantified by the KL divergence of the distributions of
the quantized statistics under both hypotheses. We then propose a decentralized
GSPRT based on level-triggered sampling. That is, each sensor runs its own
GSPRT repeatedly and reports its local decision to the fusion center
asynchronously. We show that this scheme is asymptotically optimal as the local
thresholds and global thresholds grow large at different rates. Lastly, two
particular models and their associated applications are studied to compare the
centralized and decentralized approaches. Numerical results are provided to
demonstrate that the proposed level-triggered sampling based decentralized
scheme aligns closely with the centralized scheme with substantially lower
communication overhead, and significantly outperforms the uniform sampling and
quantization based decentralized scheme.
Comment: 39 pages
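A simplified sketch of the one-bit level-triggered idea (simple-vs-simple hypotheses rather than composite, and a fixed local horizon rather than the full sequential stopping rule; every numeric value is illustrative): each sensor emits a +1 or -1 bit whenever its local log-likelihood ratio moves by a fixed level delta, and the fusion center reconstructs an approximate global statistic from the delta-weighted bits.

```python
import numpy as np

rng = np.random.default_rng(5)
mu, truth = 0.8, 1    # H1: N(mu, 1), H0: N(0, 1); simulate data under H1
delta = 1.0           # local level-triggered threshold
A = 4.0               # global decision threshold at the fusion center

def sensor_bits(n_samples):
    """One sensor: emit +1/-1 each time its local LLR moves by +/- delta."""
    llr, bits = 0.0, []
    for _ in range(n_samples):
        s = rng.standard_normal() + (mu if truth else 0.0)
        llr += mu * s - mu ** 2 / 2
        while llr >= delta:
            bits.append(+1); llr -= delta
        while llr <= -delta:
            bits.append(-1); llr += delta
    return bits

# Fusion center: sum delta-weighted bits from 3 sensors, compare to threshold
global_stat = delta * sum(sum(sensor_bits(200)) for _ in range(3))
decision = 1 if global_stat >= A else 0
```

Each transmission is a single bit sent asynchronously, yet the reconstructed statistic tracks the centralized sum of local LLRs to within delta per sensor, which is why the scheme can stay close to centralized performance at a fraction of the communication cost.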
Phase Retrieval from Coded Diffraction Patterns
This paper considers the question of recovering the phase of an object from
intensity-only measurements, a problem which naturally appears in X-ray
crystallography and related disciplines. We study a physically realistic setup
where one can modulate the signal of interest and then collect the intensity of
its diffraction pattern, each modulation thereby producing a sort of coded
diffraction pattern. We show that PhaseLift, a recent convex programming
technique, recovers the phase information exactly from a number of random
modulations, which is polylogarithmic in the number of unknowns. Numerical
experiments with noiseless and noisy data complement our theoretical analysis
and illustrate our approach.
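The coded diffraction measurement model is easy to state in code: modulate the signal with a random mask and record the intensity (squared magnitude) of its DFT, once per mask. The sketch below only simulates the forward model (signal length, number of masks, and the uniform random-phase masks are illustrative assumptions; recovery itself needs PhaseLift or a similar method):

```python
import numpy as np

rng = np.random.default_rng(6)
n, L = 64, 8                      # signal length, number of modulation patterns
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Random phase modulations of the signal, one per coded diffraction pattern
masks = np.exp(2j * np.pi * rng.random((L, n)))
# Each pattern: intensity of the DFT of the modulated signal
Y = np.abs(np.fft.fft(masks * x, axis=1)) ** 2
```

Note that the intensities are invariant to a global phase of x, so any recovery can only be up to that phase; the paper's result is that polylogarithmically many such random patterns suffice for exact recovery modulo this ambiguity.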
Phase Retrieval via Wirtinger Flow: Theory and Algorithms
We study the problem of recovering the phase from magnitude measurements;
specifically, we wish to reconstruct a complex-valued signal x in C^n about
which we have phaseless samples of the form y_r = |<a_r, x>|^2, r = 1,2,...,m
(knowledge of the phase of these samples would yield a linear system). This
paper develops a non-convex formulation of the phase retrieval problem as well
as a concrete solution algorithm. In a nutshell, this algorithm starts with a
careful initialization obtained by means of a spectral method, and then refines
this initial estimate by iteratively applying novel update rules, which have
low computational complexity, much like in a gradient descent scheme. The main
contribution is that this algorithm is shown to rigorously allow the exact
retrieval of phase information from a nearly minimal number of random
measurements. Indeed, the sequence of successive iterates provably converges to
the solution at a geometric rate so that the proposed scheme is efficient both
in terms of computational and data resources. In theory, a variation on this
scheme leads to a near-linear time algorithm for a physically realizable model
based on coded diffraction patterns. We illustrate the effectiveness of our
methods with various experiments on image data. Underlying our analysis are
insights for the analysis of non-convex optimization schemes that may have
implications for computational problems beyond phase retrieval.
Comment: IEEE Transactions on Information Theory, Vol. 64 (4), Feb. 201
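The two-stage recipe described above, a spectral initialization followed by gradient-like (Wirtinger) updates, can be sketched as follows for Gaussian measurements; the oversampling factor, iteration count, and the fixed step size (the paper uses a ramped step) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 16, 160                    # heavily oversampled for a quick demo
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
y = np.abs(A @ x) ** 2            # phaseless samples y_r = |<a_r, x>|^2

# Spectral initialization: leading eigenvector of (1/m) sum_r y_r a_r a_r^*
Y = (A.conj().T * y) @ A / m
z = np.linalg.eigh(Y)[1][:, -1] * np.sqrt(y.mean())

step = 0.1 / np.linalg.norm(z) ** 2
for _ in range(800):
    Az = A @ z
    # Wirtinger gradient of f(z) = (1/2m) sum_r (|<a_r, z>|^2 - y_r)^2
    z -= step * (A.conj().T @ ((np.abs(Az) ** 2 - y) * Az)) / m

phase = np.vdot(z, x) / abs(np.vdot(z, x))    # align the global phase
rel_err = np.linalg.norm(x - phase * z) / np.linalg.norm(x)
```

Each iteration costs two matrix-vector products, and the error is measured up to a global phase, the inherent ambiguity of phaseless measurements; the geometric convergence claimed in the abstract is what makes the total cost near-minimal in both data and computation.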