Partial Coherence Estimation via Spectral Matrix Shrinkage under Quadratic Loss
Partial coherence is an important quantity derived from spectral or precision
matrices and is used in seismology, meteorology, oceanography, neuroscience and
elsewhere. If the number of complex degrees of freedom only slightly exceeds
the dimension of the multivariate stationary time series, spectral matrices are
poorly conditioned and shrinkage techniques suggest themselves. When the true
partial coherencies are large, we show empirically that, for shrinkage
estimators of the diagonal-weighting kind, minimizing risk under quadratic
loss (QL) yields oracle partial coherence estimators superior to those
obtained by minimizing risk under Hilbert-Schmidt (HS) loss. When the true
partial coherencies are small, the two approaches behave similarly. We derive
two new QL estimators for spectral matrices, and new QL and HS estimators for
precision matrices. In addition, for the full (non-oracle) estimation case,
where certain trace expressions must also be estimated, we examine the
behaviour of three different QL estimators; the precision-matrix one proves
particularly robust and reliable. For the empirical study we carry out exact
simulations derived from real EEG data for two individuals, one having large,
and the other small, partial coherencies. This ensures that our study covers
cases of real-world relevance.
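As an informal illustration of the quantities involved, here is a minimal sketch of diagonal-weighting shrinkage and of partial coherence computed from a simulated Wishart-type spectral matrix estimate. The dimensions, the shrinkage weight rho, and the simulation design are illustrative choices, not the paper's estimators:

```python
import numpy as np

rng = np.random.default_rng(0)

def shrink_spectral_matrix(S, rho):
    """Diagonal-weighting shrinkage: blend S with a scaled identity target."""
    p = S.shape[0]
    target = (np.trace(S).real / p) * np.eye(p)
    return (1.0 - rho) * S + rho * target

def partial_coherence(S):
    """Partial coherences |P_jk|^2 / (P_jj P_kk) from the inverse P of S."""
    P = np.linalg.inv(S)
    d = np.abs(np.diag(P))
    G = np.abs(P) ** 2 / np.outer(d, d)
    np.fill_diagonal(G, 0.0)
    return G

# Multitaper-style spectral matrix estimate with few complex degrees of
# freedom (K) relative to the dimension (p), so the estimate is ill-conditioned.
p, K = 8, 10
A = rng.standard_normal((p, p))
S_true = A @ A.T + p * np.eye(p)      # true (real, positive definite) spectral matrix
L = np.linalg.cholesky(S_true)
Z = (rng.standard_normal((p, K)) + 1j * rng.standard_normal((p, K))) / np.sqrt(2)
S_hat = (L @ Z) @ (L @ Z).conj().T / K  # Wishart-type estimate with K dof

G_true = partial_coherence(S_true)
G_raw = partial_coherence(S_hat)
G_shrunk = partial_coherence(shrink_spectral_matrix(S_hat, rho=0.2))

err_raw = np.linalg.norm(G_raw - G_true)
err_shrunk = np.linalg.norm(G_shrunk - G_true)
print(err_raw, err_shrunk)
```

With the degrees of freedom only slightly exceeding the dimension, the raw inverse is noisy, and even this crude shrinkage target typically reduces the error in the estimated partial coherencies.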
A theory of L1-dissipative solvers for scalar conservation laws with discontinuous flux
We propose a general framework for the study of L1-contractive semigroups
of solutions to conservation laws with discontinuous flux. Developing the ideas
of a number of preceding works we claim that the whole admissibility issue is
reduced to the selection of a family of "elementary solutions", which are
certain piecewise constant stationary weak solutions. We refer to such a family
as a "germ". It is well known that the conservation law (CL) admits many
different L1-contractive semigroups, some of which reflect different physical
applications. We revisit
a number of the existing admissibility (or entropy) conditions and identify the
germs that underlie these conditions. We devote specific attention to the
"vanishing viscosity" germ, which is a way to express the "Γ-condition" of
Diehl. For any given germ, we formulate "germ-based" admissibility conditions
in the form of a trace condition on the flux discontinuity line (in the
spirit of Vol'pert) and in the form of a family of global entropy inequalities
(following Kruzhkov and Carrillo). We characterize those germs that lead to the
L1-contraction property for the associated admissible solutions. Our
approach offers a streamlined and unifying perspective on many of the known
entropy conditions, making it possible to recover earlier uniqueness results
under weaker conditions than before, and to provide new results for other less
studied problems. Several strategies for proving the existence of admissible
solutions are discussed, and existence results are given for fluxes satisfying
some additional conditions. These are based on convergence results either for
the vanishing viscosity method (with standard viscosity or with specific
viscosities "adapted" to the choice of a germ), or for specific germ-adapted
finite volume schemes.
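A sketch of the setting in the notation standard in this literature (the flux symbols and germ notation below are illustrative, not quoted from the paper): the model problem (CL) is a scalar conservation law whose flux jumps across the line x = 0,

```latex
u_t + \mathfrak{f}(x,u)_x = 0, \qquad
\mathfrak{f}(x,u) =
  \begin{cases}
    f^{l}(u), & x < 0,\\
    f^{r}(u), & x > 0,
  \end{cases}
\tag{CL}
```

and a germ-based global entropy inequality in the Kruzhkov style requires, for every elementary pair (c^l, c^r) in the germ G, with c(x) = c^l for x < 0 and c(x) = c^r for x > 0,

```latex
\partial_t \,|u - c(x)|
+ \partial_x \Big[ \operatorname{sign}\big(u - c(x)\big)\,
   \big(\mathfrak{f}(x,u) - \mathfrak{f}(x,c(x))\big) \Big] \;\le\; 0
\qquad \text{in } \mathcal{D}'.
```

The point of the germ is that for pairs (c^l, c^r) taken from it, no extra remainder term is needed on the discontinuity line.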
Local exclusion and Lieb-Thirring inequalities for intermediate and fractional statistics
In one and two spatial dimensions there is a logical possibility for
identical quantum particles different from bosons and fermions, obeying
intermediate or fractional (anyon) statistics. We consider applications of a
recent Lieb-Thirring inequality for anyons in two dimensions, and derive new
Lieb-Thirring inequalities for intermediate statistics in one dimension with
implications for models of Lieb-Liniger and Calogero-Sutherland type. These
inequalities follow from a local form of the exclusion principle valid for such
generalized exchange statistics. Comment: Revised and accepted version. 49 pages, 2 figures.
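A hedged sketch of the shape of such a bound (the constants and the density notation here follow the general pattern of Lieb-Thirring inequalities; the precise anyonic statement is in the paper): a kinetic-energy Lieb-Thirring inequality in d dimensions bounds the kinetic energy of an N-particle state Ψ from below by an integral of its one-body density ρ_Ψ,

```latex
\langle \Psi, \hat{T}\,\Psi \rangle \;\ge\; C_d \int_{\mathbb{R}^{d}} \rho_\Psi(x)^{\,1 + 2/d}\, dx,
```

which for d = 2 involves the square of the density, with a constant depending on the statistics parameter. Such bounds follow from a local exclusion principle applied on a partition of space into cubes.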
Equidistribution estimates for eigenfunctions and eigenvalue bounds for random operators
We discuss properties of eigenfunctions of Schrödinger operators and
elliptic partial differential operators. The focus is on unique
continuation principles and equidistribution properties. We review recent
results and announce new ones. Comment: Keywords: scale-free unique
continuation property, equidistribution property, observability estimate,
uncertainty relation, Carleman estimate, Schrödinger operator, elliptic
differential equation.
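A sketch of the type of scale-free unique continuation estimate meant here (the symbols Λ_L, W_δ(L), C, and K are illustrative notation, not quoted from the paper): for an eigenfunction ψ of a Schrödinger operator on the cube Λ_L of side L, and W_δ(L) a union of δ-balls with one ball placed in each unit subcube,

```latex
\int_{W_\delta(L)} |\psi|^{2}\, dx \;\ge\; C\,\delta^{K} \int_{\Lambda_L} |\psi|^{2}\, dx,
```

with C and K independent of the scale L (hence "scale-free"). Estimates of this kind express equidistribution of eigenfunctions and, via perturbation arguments, yield eigenvalue bounds for random operators.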
Super-Resolution of Positive Sources: the Discrete Setup
In single-molecule microscopy it is necessary to locate point sources with
high precision from noisy observations of the signal's spectrum at
frequencies capped by a fixed cutoff, which is just about the frequency of natural
light. This paper rigorously establishes that this super-resolution problem can
be solved via linear programming in a stable manner. We prove that the quality
of the reconstruction crucially depends on the Rayleigh regularity of the
support of the signal; that is, on the maximum number of sources that can occur
within a square of side length about the inverse of the frequency cutoff. The theoretical performance
guarantee is complemented with a converse result showing that our simple convex
program is nearly optimal. Finally, numerical experiments illustrate our
methods. Comment: 31 pages, 7 figures.
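A minimal sketch of the convex-programming approach on a discrete grid (the grid size, cutoff, and source positions below are illustrative choices in a noiseless toy setting; the paper's noisy, rigorously analyzed setup is richer): nonnegative sources are recovered from low-frequency Fourier coefficients by a linear program.

```python
import numpy as np
from scipy.optimize import linprog

# Grid of n candidate locations; spectrum observed only up to cutoff f_c.
n, f_c = 64, 8
x_true = np.zeros(n)
x_true[[10, 30, 50]] = [1.0, 2.0, 0.5]   # well-separated positive sources

# Low-frequency DFT measurement operator (frequencies -f_c .. f_c).
freqs = np.arange(-f_c, f_c + 1)
F = np.exp(-2j * np.pi * np.outer(freqs, np.arange(n)) / n)
y = F @ x_true

# Stack real and imaginary parts to get a real equality system.
A_eq = np.vstack([F.real, F.imag])
b_eq = np.concatenate([y.real, y.imag])

# min sum(x) s.t. A_eq x = b_eq, x >= 0 -- essentially a feasibility LP,
# since the zero-frequency row already pins down sum(x).
res = linprog(c=np.ones(n), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n, method="highs")
x_hat = res.x
print(np.max(np.abs(x_hat - x_true)))
```

Because the positive sources are few and well separated relative to the cutoff, the nonnegative feasible set collapses to a single point, so the LP recovers the sources up to solver tolerance.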
Organization and Inequality in a Knowledge Economy
We present a theory of the organization of work in an economy where knowledge is an essential input in production: a knowledge economy. In this economy a continuum of agents with heterogeneous skills must choose how much knowledge to acquire and may produce on their own or in organizations. Our theory generates an assignment of workers to positions, a wage structure, and a continuum of knowledge-based hierarchies. Organization allows low skill agents to ask others for directions. Thus, they acquire less knowledge than in isolation. In contrast, organization allows high skill agents to leverage their knowledge through large teams. Hence, they acquire more knowledge than on their own. As a result, organization decreases wage inequality within workers, but increases income inequality among the highest skill agents. We also show that equilibrium assignments and earnings can be interpreted as the outcome of alternative market institutions such as firms, or consulting and referral markets. We use our theory to study the impact of information and communication technology, and contrast its predictions with US evidence.
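A toy numerical sketch in the spirit of knowledge hierarchies (the exponential difficulty distribution, the helping-time parameter h, the learning cost c, and the per-capita objective are illustrative assumptions, not the paper's model): workers pass problems they cannot solve up to a manager, and the optimum features workers learning less, and managers more, than an agent working alone.

```python
import numpy as np

c, h = 0.2, 0.3                  # learning cost per unit knowledge; manager time per referred problem
F = lambda z: 1.0 - np.exp(-z)   # fraction of Exp(1)-difficulty problems solved with knowledge z

z = np.linspace(0.01, 6.0, 600)

# In isolation: maximize F(z) - c*z (optimum at z = ln(1/c)).
z_self = z[np.argmax(F(z) - c * z)]

# Two-layer hierarchy: a manager of knowledge z_m supports n = 1/(h*e^{-z_w})
# workers (manager's time constraint); grid-search the per-capita team surplus.
ZW, ZM = np.meshgrid(z, z, indexing="ij")
N = 1.0 / (h * np.exp(-ZW))
per_capita = (N * F(ZM) - c * (N * ZW + ZM)) / (N + 1.0)
i, j = np.unravel_index(np.argmax(per_capita), per_capita.shape)
z_w_opt, z_m_opt = z[i], z[j]

print(z_w_opt, z_self, z_m_opt)  # workers learn less, managers more, than in isolation
```

In this toy calibration the team's optimal worker knowledge falls below the self-employment level while the manager's rises above it, mirroring the leverage mechanism described in the abstract.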
High-resolution distributed sampling of bandlimited fields with low-precision sensors
The problem of sampling a discrete-time sequence of spatially bandlimited
fields with a bounded dynamic range, in a distributed,
communication-constrained, processing environment is addressed. A central unit,
having access to the data gathered by a dense network of fixed-precision
sensors, operating under stringent inter-node communication constraints, is
required to reconstruct the field snapshots to maximum accuracy. Both
deterministic and stochastic field models are considered. For stochastic
fields, results are established in the almost-sure sense. The feasibility of
having a flexible tradeoff between the oversampling rate (sensor density) and
the analog-to-digital converter (ADC) precision, while achieving an exponential
accuracy in the number of bits per Nyquist-interval per snapshot is
demonstrated. This exposes an underlying "conservation of bits" principle:
the bit-budget per Nyquist-interval per snapshot (the rate) can be distributed
along the amplitude axis (sensor-precision) and space (sensor density) in an
almost arbitrary discrete-valued manner, while retaining the same (exponential)
distortion-rate characteristics. Achievable information scaling laws for field
reconstruction over a bounded region are also derived: with N one-bit sensors
per Nyquist-interval, a fixed number of Nyquist-intervals, and a suitably
constrained total network bitrate, the maximum pointwise distortion can be
driven to zero as N grows. This is shown to be possible
with only nearest-neighbor communication, distributed coding, and appropriate
interpolation algorithms. For a fixed, nonzero target distortion, the number of
fixed-precision sensors and the network rate needed is always finite. Comment:
17 pages, 6 figures; paper withdrawn from IEEE Transactions on Signal
Processing and re-submitted to the IEEE Transactions on Information Theory.
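The density-precision tradeoff can be illustrated with a toy sketch (the staggered-threshold layout and the [-1, 1] dynamic range are illustrative assumptions, not the paper's scheme, which additionally uses distributed coding and interpolation): N one-bit comparators with staggered thresholds emulate a single log2(N)-bit ADC.

```python
import numpy as np

def one_bit_array_quantize(v, N):
    """Emulate N one-bit sensors with staggered thresholds covering [-1, 1].

    Each sensor reports a single comparator bit; counting the bits
    recovers v to resolution about 1/N, so N one-bit sensors stand in
    for one log2(N)-bit ADC -- the 'conservation of bits' idea.
    """
    thresholds = -1.0 + 2.0 * (np.arange(N) + 0.5) / N
    bits = (v > thresholds).astype(int)
    k = bits.sum()                  # number of thresholds below v
    return -1.0 + 2.0 * k / N       # midpoint-style reconstruction

rng = np.random.default_rng(1)
for N in (8, 64, 512):
    vs = rng.uniform(-1, 1, size=1000)
    err = max(abs(one_bit_array_quantize(v, N) - v) for v in vs)
    print(N, err)   # worst-case error shrinks like 1/N
```

Doubling the sensor density buys one extra effective bit of amplitude resolution, which is the simplest instance of trading oversampling for ADC precision.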