The restricted sumsets in
Let be a positive integer. For any subset , let be the set of the elements of
which are sums of distinct elements of .
In this paper, we obtain some new results on and
. For example, we show that if and is odd, then ; under
some conditions, if is even and is close to , then
GMNN: Graph Markov Neural Networks
This paper studies semi-supervised object classification in relational data,
which is a fundamental problem in relational data modeling. The problem has
been extensively studied in the literature of both statistical relational
learning (e.g. relational Markov networks) and graph neural networks (e.g.
graph convolutional networks). Statistical relational learning methods can
effectively model the dependency of object labels through conditional random
fields for collective classification, whereas graph neural networks learn
effective object representations for classification through end-to-end
training. In this paper, we propose the Graph Markov Neural Network (GMNN) that
combines the advantages of both worlds. A GMNN models the joint distribution of
object labels with a conditional random field, which can be effectively trained
with the variational EM algorithm. In the E-step, one graph neural network
learns effective object representations for approximating the posterior
distributions of object labels. In the M-step, another graph neural network is
used to model the local label dependency. Experiments on object classification,
link classification, and unsupervised node representation learning show that
GMNN achieves state-of-the-art results. Comment: ICML 201
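The E-step/M-step alternation described in the abstract can be sketched on a toy graph. This is not the authors' implementation: the two graph neural networks are replaced here by a simple mean-field belief update (E-step) and a crude moment-matching re-fit of a single CRF coupling weight (M-step); the chain graph, labels, and update rules are all assumed for illustration.

```python
# Minimal sketch of the variational EM alternation from the GMNN abstract
# (illustrative stand-ins, not the paper's neural networks).
import math

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]     # a small chain graph (assumed)
labeled = {0: 1, 4: 0}                       # observed labels (semi-supervised)
nodes = range(5)
neighbors = {v: [] for v in nodes}
for u, v in edges:
    neighbors[u].append(v)
    neighbors[v].append(u)

# q[v] = current belief that node v has label 1 (the variational posterior)
q = {v: (labeled[v] if v in labeled else 0.5) for v in nodes}
w = 1.0  # CRF coupling weight, re-estimated in the M-step

for _ in range(20):
    # E-step: mean-field update of each posterior from neighboring beliefs
    for v in nodes:
        if v in labeled:
            continue
        s = w * sum(2 * q[u] - 1 for u in neighbors[v])  # net neighbor vote
        q[v] = 1.0 / (1.0 + math.exp(-s))
    # M-step: re-fit the coupling to the expected neighbor agreement
    agree = sum(q[u] * q[v] + (1 - q[u]) * (1 - q[v]) for u, v in edges) / len(edges)
    w = max(0.1, math.log(agree / (1 - agree)))  # crude moment-matching update

predictions = {v: int(q[v] > 0.5) for v in nodes}
```

In this toy run the beliefs propagate from the two labeled endpoints, so the node adjacent to the label-1 endpoint is predicted 1 and the node adjacent to the label-0 endpoint is predicted 0.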
Optimal Control with State Constraints for Stochastic Evolution Equation with Jumps in Hilbert Space
This paper studies a stochastic optimal control problem with state
constraint, where the state equation is described by a controlled stochastic
evolution equation with jumps in Hilbert Space and the control domain is
assumed to be convex. By means of the Ekeland variational principle, combined with
the convex variation method and the duality technique, necessary conditions for
optimality are derived in the form of stochastic maximum principles.
Partial Information Stochastic Differential Games for Backward Stochastic Systems Driven By L\'{e}vy Processes
In this paper, we consider a partial information two-person zero-sum
stochastic differential game problem where the system is governed by a backward
stochastic differential equation driven by Teugels martingales associated with
a L\'{e}vy process and an independent Brownian motion. One sufficient condition
(a verification theorem) and one necessary condition for the existence of optimal
controls are proved. To illustrate the general results, a linear quadratic
stochastic differential game problem is discussed.
The Limits of Error Correction with lp Decoding
An unknown vector f in R^n can be recovered from corrupted measurements y =
Af + e, where A is an m x n coding matrix (m > n), provided the unknown error
vector e is sparse. We investigate the relationship between the fraction of
errors and the recovery ability of lp-minimization (0 < p <= 1), which returns
a vector x minimizing the "lp-norm" of y - Ax. We give sharp thresholds of the fraction of
errors that determine the successful recovery of f. If e is an arbitrary
unknown vector, the threshold strictly decreases from 0.5 to 0.239 as p
increases from 0 to 1. If e has fixed support and fixed signs on the support,
the threshold is 2/3 for all p in (0, 1), while the threshold is 1 for
l1-minimization. Comment: 5 pages, 1 figure. ISIT 201
Stochastic Evolution Equation Driven by Teugels Martingale and Its Optimal Control
The paper is concerned with a class of stochastic evolution equations in
Hilbert space with random coefficients, driven by Teugels martingales and an
independent multi-dimensional Brownian motion, and with the associated optimal
control problem. Here Teugels martingales are a family of pairwise strongly
orthonormal martingales associated with L\'evy processes (see Nualart and
Schoutens). There are three major ingredients. The first is to prove the
existence and uniqueness of the solutions by a continuous dependence theorem
for solutions combined with the parameter extension method. The second is to
establish the stochastic maximum principle and verification theorem for our
optimal control problem by the classic convex variation method and duality
technique. The third is to present an example of a Cauchy problem for a
controlled stochastic partial differential equation driven by Teugels
martingales to which our theoretical results apply. Comment: arXiv admin note: text overlap with arXiv:1610.0491
On the Performance of Sparse Recovery via L_p-minimization (0<=p <=1)
It is known that a high-dimensional sparse vector x* in R^n can be recovered
from low-dimensional measurements y = Ax*, where A is an m x n matrix (m < n).
In this paper, we investigate the recovery ability of l_p-minimization
(0 <= p <= 1) as p varies, where l_p-minimization returns a vector with the
least l_p "norm" among all the vectors x satisfying Ax = y. Besides analyzing
the performance of strong
recovery where l_p-minimization needs to recover all the sparse vectors up to
certain sparsity, we also for the first time analyze the performance of
"weak" recovery of l_p-minimization (0<=p<1) where the aim is to recover all
the sparse vectors on one support with fixed sign pattern. When m/n goes to 1,
we provide sharp thresholds of the sparsity ratio that differentiates the
success and failure via l_p-minimization. For strong recovery, the threshold
strictly decreases from 0.5 to 0.239 as p increases from 0 to 1. Surprisingly,
for weak recovery, the threshold is 2/3 for all p in [0,1), while the threshold
is 1 for l_1-minimization. We also explicitly demonstrate that l_p-minimization
(p<1) can return a denser solution than l_1-minimization. For any m/n<1, we
provide bounds of sparsity ratio for strong recovery and weak recovery
respectively below which l_p-minimization succeeds with overwhelming
probability. Our bound of strong recovery improves on the existing bounds when
m/n is large. Regarding the recovery threshold, l_p-minimization has a higher
threshold with smaller p for strong recovery; the threshold is the same for all
p for sectional recovery; and l_1-minimization can outperform l_p-minimization
for weak recovery. These results are in contrast to the conventional wisdom that
l_p-minimization has better sparse recovery ability than l_1-minimization since
it is closer to l_0-minimization. We provide an intuitive explanation for our
findings and use numerical examples to illustrate the theoretical predictions.
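The l_p-minimization problem the abstract studies can be illustrated on a tiny assumed instance (not from the paper): with a 2 x 3 matrix the solution set of Ax = y is a line, so the least-l_p solution can be found by scanning that line directly. In this easy instance both p = 0.5 and p = 1 recover the sparse vector.

```python
# Sketch of l_p-minimization for sparse recovery: among all x with Ax = y,
# pick the x with least l_p "norm" (a quasi-norm for p < 1). Assumed toy data.
A = [(1, 0, 1), (0, 1, 1)]        # 2x3 measurement matrix, m < n
x_star = (0.0, 0.0, 1.0)          # sparse vector to recover
y = tuple(sum(a * x for a, x in zip(row, x_star)) for row in A)   # (1.0, 1.0)

def lp_norm(x, p):
    """The l_p "norm" sum_i |x_i|^p."""
    return sum(abs(xi) ** p for xi in x)

# Every solution of Ax = y has the form x(t) = x_star + t * (1, 1, -1),
# since (1, 1, -1) spans the null space of A.
def solution(t):
    return (t, t, 1.0 - t)

ts = [(i - 100) / 100 for i in range(201)]        # scan t in [-1, 1]
results = {}
for p in (0.5, 1.0):
    results[p] = min(ts, key=lambda t: lp_norm(solution(t), p))
# results[0.5] == results[1.0] == 0.0: both objectives pick x_star here.
```

Moving along the null-space direction trades one nonzero entry for two, which increases both the l_1 norm and the l_0.5 quasi-norm; the abstract's point is that for harder instances the two objectives can disagree, and not always in l_p's favor.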
Maximum Principle of Forward-Backward Stochastic Differential System of Mean-Field Type with Observation Noise
This paper is concerned with the partial information optimal control problem
of mean-field type under partial observation, where the system is given by a
controlled mean-field forward-backward stochastic differential equation with
correlated noises between the system and the observation; moreover, the
observation coefficients may depend not only on the control process but
also on its probability distribution. Under standard assumptions on the
coefficients, necessary and sufficient conditions for optimality of the control
problem in the form of Pontryagin's maximum principles are established in a
unified way. Comment: arXiv admin note: substantial text overlap with arXiv:1708.0300
Non-zero Sum Stochastic Differential Games of Fully Coupled Forward-Backward Stochastic Systems
In this paper, an open-loop two-person non-zero sum stochastic differential
game is considered for forward-backward stochastic systems. More precisely, the
controlled systems are described by a fully coupled nonlinear
multi-dimensional forward-backward stochastic differential equation driven by
a multi-dimensional Brownian motion. One sufficient condition (a verification
theorem) and one necessary condition for the existence of open-loop Nash
equilibrium points for the corresponding two-person non-zero sum stochastic
differential game are proved. The control domain needs to be convex, and the
admissible controls for both players are allowed to appear in both the drift
and diffusion of the state equations.
Collision statistics of clusters: From Poisson model to Poisson mixtures
Clusters traverse a gas and collide with gas particles. The gas particles are
adsorbed and the clusters become hosts. If the clusters are size selected, the
number of guests will be Poisson distributed. We review this by showcasing four
laboratory procedures that all rely on the validity of the Poisson model. The
effects of a statistical distribution of the cluster sizes in a beam of
clusters are discussed. We derive the average collision rates. Additionally, we
present Poisson mixture models that also involve standard deviations. We derive
the collision statistics for common size distributions of hosts and also for
some generalizations thereof. The models can be applied to large noble gas
clusters traversing a doping gas. While outlining how to fit a generalized
Poisson distribution to the statistics, we find that even these models are often
insufficient. Comment: 22 pages, 4 figures, to appear in Chin Phys
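The Poisson-mixture idea can be sketched numerically with assumed parameters (the size distribution, rate law, and prefactor below are illustrative, not the paper's): if host sizes vary, the guest count follows a mixture of Poissons, which is overdispersed (variance exceeds the mean), unlike the single Poisson that holds for size-selected clusters.

```python
# Guest-number statistics for a mixture of Poisson distributions over
# host cluster sizes (all numbers assumed for illustration).
import math

def poisson_pmf(k, lam):
    """P(k) for a Poisson distribution with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

sizes = [50, 100, 200]           # host cluster sizes present in the beam
weights = [0.25, 0.5, 0.25]      # their probabilities

def rate(n):
    # Collision rate scales with the geometric cross section, ~ n^(2/3);
    # the prefactor 0.02 is an assumed stand-in for gas density and speed.
    return 0.02 * n ** (2 / 3)

kmax = 20
mixture = [sum(w * poisson_pmf(k, rate(n)) for n, w in zip(sizes, weights))
           for k in range(kmax + 1)]

mean = sum(k * p for k, p in enumerate(mixture))
var = sum(k * k * p for k, p in enumerate(mixture)) - mean ** 2
# A single Poisson has var == mean; the size mixture gives var > mean.
```

The excess variance equals the variance of the per-size rates, which is one way a measured guest distribution can reveal that the host beam was not size selected.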