Conformal invariants from nodal sets. I. Negative Eigenvalues and Curvature Prescription
In this paper, we study conformal invariants that arise from nodal sets and
negative eigenvalues of conformally covariant operators; more specifically, the
GJMS operators, which include the Yamabe and Paneitz operators. We give several
applications to curvature prescription problems. We establish a version in
conformal geometry of Courant's Nodal Domain Theorem. We also show that on any
manifold of dimension $n \geq 3$, there exist many metrics for which our
invariants are nontrivial. We prove that the Yamabe operator can have an
arbitrarily large number of negative eigenvalues on any manifold of dimension
$n \geq 3$. We obtain similar results for some higher order GJMS operators on
some Einstein and Heisenberg manifolds. We describe the invariants arising from
the Yamabe and Paneitz operators associated to left-invariant metrics on
Heisenberg manifolds. Finally, in the appendix, the second named author and Andrea
Malchiodi study the $Q$-curvature prescription problems for non-critical
$Q$-curvatures.
Comment: v3: final version. To appear in IMRN. 31 pages.
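For reference, the Yamabe operator appearing above is the conformal Laplacian. In one common normalization on an $n$-dimensional Riemannian manifold $(M, g)$ with $n \geq 3$ (a standard definition, not specific to this paper),

$$ L_g = -\frac{4(n-1)}{n-2}\,\Delta_g + R_g, \qquad L_{\hat g}\,\varphi = u^{-\frac{n+2}{n-2}}\, L_g(u\,\varphi) \quad \text{for } \hat g = u^{\frac{4}{n-2}} g,\ u > 0, $$

where $R_g$ is the scalar curvature. The covariance law implies that the number of negative eigenvalues of $L_g$ depends only on the conformal class, which is the kind of conformal invariant at issue here.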
WKB Approximation to the Power Wall
We present a semiclassical analysis of the quantum propagator of a particle
confined on one side by a steeply, monotonically rising potential. The models
studied in detail have potentials proportional to $x^\alpha$ for $x > 0$; the
limit $\alpha \to \infty$ would reproduce a perfectly reflecting boundary, but at
present we concentrate on the cases $\alpha = 1$ and 2, for which exact
solutions in terms of well known functions are available for comparison. We
classify the classical paths in this system by their qualitative nature and
calculate the contributions of the various classes to the leading-order
semiclassical approximation: For each classical path we find the action $S$,
the amplitude function $A$, and the Laplacian of $A$. (The Laplacian is of
interest because it gives an estimate of the error in the approximation and is
needed for computing higher-order approximations.) The resulting semiclassical
propagator can be used to rewrite the exact problem as a Volterra integral
equation, whose formal solution by iteration (Neumann series) is a
semiclassical, not perturbative, expansion. We thereby test, in the context of
a concrete problem, the validity of the two technical hypotheses in a previous
proof of the convergence of such a Neumann series in the more abstract setting
of an arbitrary smooth potential. Not surprisingly, we find that the hypotheses
are violated when caustics develop in the classical dynamics; this opens up the
interesting future project of extending the methods to momentum space.
Comment: 30 pages, 8 figures. Minor corrections in v.
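Schematically, the Volterra integral equation and its Neumann-series solution mentioned above take the standard form (generic notation, not the paper's specific kernel):

$$ \psi(t) = \psi_0(t) + \int_0^t K(t,s)\,\psi(s)\,ds, \qquad \psi = \sum_{k=0}^{\infty} \mathcal{K}^k \psi_0, \quad (\mathcal{K}\psi)(t) = \int_0^t K(t,s)\,\psi(s)\,ds, $$

where, in the setting of the abstract, the kernel is built from the semiclassical propagator, so each term of the series is a semiclassical (rather than perturbative) correction; the series converges when the kernel satisfies suitable bounds, which is where the two technical hypotheses enter.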
A machine learning pipeline for discriminant pathways identification
Motivation: Identifying the molecular pathways more prone to disruption
during a pathological process is a key task in network medicine and, more
generally, in systems biology.
Results: In this work we propose a pipeline that couples a machine learning
solution for molecular profiling with a recent network comparison method. The
pipeline can identify changes occurring between specific sub-modules of
networks built in a case-control biomarker study, discriminating key groups of
genes whose interactions are modified by an underlying condition. The proposal
is independent of the classification algorithm used. Three applications on
genomewide data are presented regarding children susceptibility to air
pollution and two neurodegenerative diseases: Parkinson's and Alzheimer's.
Availability: Details about the software used for the experiments discussed
in this paper are provided in the Appendix.
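As a rough illustration of the kind of case-control workflow the abstract describes (classifier-based gene ranking followed by a comparison of condition-specific networks), the Python sketch below is minimal and hypothetical: the L1-penalized classifier, the thresholded co-expression networks, and the Laplacian-spectrum distance are illustrative stand-ins, not the authors' pipeline or their network comparison method.

# Minimal, hypothetical sketch of a case-control "discriminant pathway" workflow:
# 1) rank genes with a sparse classifier, 2) build case/control co-expression
# networks on the selected genes, 3) compare the two networks spectrally.
# Illustrative only; not the pipeline described in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rank_genes(X, y, n_top=20):
    """Rank genes by absolute weight of an L1-penalized logistic classifier."""
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
    clf.fit(X, y)
    return np.argsort(-np.abs(clf.coef_[0]))[:n_top]

def coexpression_network(X, genes, threshold=0.5):
    """Thresholded absolute-correlation network over the selected genes."""
    corr = np.corrcoef(X[:, genes], rowvar=False)
    adj = (np.abs(corr) > threshold).astype(float)
    np.fill_diagonal(adj, 0.0)
    return adj

def spectral_distance(a, b):
    """Euclidean distance between the Laplacian spectra of two adjacency matrices."""
    lap = lambda m: np.diag(m.sum(axis=1)) - m
    return np.linalg.norm(np.sort(np.linalg.eigvalsh(lap(a)))
                          - np.sort(np.linalg.eigvalsh(lap(b))))

# Toy data: 100 samples x 500 genes, binary case/control labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))
y = rng.integers(0, 2, size=100)

top = rank_genes(X, y)
net_case = coexpression_network(X[y == 1], top)
net_ctrl = coexpression_network(X[y == 0], top)
print("network dissimilarity on discriminant genes:", spectral_distance(net_case, net_ctrl))

The point of the sketch is the separation of concerns the abstract emphasizes: the gene-ranking step can be swapped for any classifier without touching the network-comparison step.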
A study of the classification of low-dimensional data with supervised manifold learning
Supervised manifold learning methods learn data representations by preserving
the geometric structure of data while enhancing the separation between data
samples from different classes. In this work, we propose a theoretical study of
supervised manifold learning for classification. We consider nonlinear
dimensionality reduction algorithms that yield linearly separable embeddings of
training data and present generalization bounds for this type of algorithm. A
necessary condition for satisfactory generalization performance is that the
embedding allow the construction of a sufficiently regular interpolation
function in relation to the separation margin of the embedding. We show that
for supervised embeddings satisfying this condition, the classification error
decays at an exponential rate with the number of training samples. Finally, we
examine the separability of supervised nonlinear embeddings that aim to
preserve the low-dimensional geometric structure of data based on graph
representations. The proposed analysis is supported by experiments on several
real data sets.
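To make the setup concrete, here is a minimal Python sketch of the two ingredients discussed above: a supervised embedding of the training data, followed by an interpolation function on the embedded points whose largest class score classifies new samples. The choice of LDA as the embedding and of an RBF interpolator are placeholder assumptions for illustration, not the algorithms analyzed in the paper.

# Illustrative sketch: supervised embedding + regular interpolation function
# used as the classifier (LDA and RBF interpolation are placeholder choices).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from scipy.interpolate import RBFInterpolator

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Supervised embedding: project the training data to a low-dimensional space
# in which the classes are (approximately) linearly separable.
emb = LinearDiscriminantAnalysis(n_components=2).fit(X_tr, y_tr)
Z_tr, Z_te = emb.transform(X_tr), emb.transform(X_te)

# Interpolation function on the embedded training points: one smooth score
# function per class (one-hot targets), regularized by the smoothing term.
onehot = np.eye(3)[y_tr]
interp = RBFInterpolator(Z_tr, onehot, kernel="thin_plate_spline", smoothing=1e-3)

# Classify test points by the largest interpolated class score.
y_pred = interp(Z_te).argmax(axis=1)
print("test accuracy:", (y_pred == y_te).mean())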
Convergence Rate Analysis of Distributed Gossip (Linear Parameter) Estimation: Fundamental Limits and Tradeoffs
The paper considers gossip-based distributed estimation of a (static) distributed
random field (a.k.a. a large-scale unknown parameter vector) observed by
sparsely interconnected sensors, each of which only observes a small fraction
of the field. We consider linear distributed estimators whose structure
combines the information \emph{flow} among sensors (the \emph{consensus} term
resulting from the local gossiping exchange among sensors when they are able to
communicate) and the information \emph{gathering} measured by the sensors (the
\emph{sensing} or \emph{innovations} term). This leads to mixed time-scale
algorithms: one time scale associated with the consensus and the other with the
innovations. The paper establishes a distributed observability condition
(global observability plus mean connectedness) under which the distributed
estimates are consistent and asymptotically normal. We introduce a
distributed notion equivalent to the (centralized) Fisher information rate,
which is a bound on the mean square error reduction rate of any distributed
estimator; we show that under the appropriate modeling and structural network
communication conditions (gossip protocol) the distributed gossip estimator
attains this distributed Fisher information rate, asymptotically achieving the
performance of the optimal centralized estimator. Finally, we study the
behavior of the distributed gossip estimator when the measurements fade (noise
variance grows) with time; in particular, we characterize the maximum rate at
which the noise variance can grow while the distributed estimator remains
consistent, showing that, as long as the centralized estimator is
consistent, the distributed estimator remains consistent as well.
Comment: Submitted for publication, 30 pages.
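As a schematic illustration of the mixed time-scale structure described above (a consensus term driven by neighbor exchanges plus an innovations term driven by local measurements, each with its own decaying weight sequence), the Python sketch below simulates a simple linear update of that general form; the ring network, weight sequences, and observation model are toy assumptions, not the setting analyzed in the paper.

# Toy simulation of a consensus + innovations style distributed estimator:
# each sensor i updates its estimate with a consensus term (neighbor differences)
# and an innovations term (its own local, partial observation of theta).
# All modeling choices here are illustrative, not the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(1)
n_sensors, dim, n_steps = 10, 5, 2000

theta = rng.normal(size=dim)                                # unknown static field
H = [rng.normal(size=(1, dim)) for _ in range(n_sensors)]   # each sensor sees one scalar
# Ring topology: each sensor gossips with its two neighbors.
neighbors = [((i - 1) % n_sensors, (i + 1) % n_sensors) for i in range(n_sensors)]

x = np.zeros((n_sensors, dim))                              # local estimates
for t in range(1, n_steps + 1):
    beta = 0.5 / t ** 0.6                                   # consensus weight (slower decay)
    alpha = 1.0 / t                                         # innovations weight (faster decay)
    z = [H[i] @ theta + 0.1 * rng.normal() for i in range(n_sensors)]
    x_new = x.copy()
    for i in range(n_sensors):
        consensus = sum(x[i] - x[j] for j in neighbors[i])
        innovation = H[i].T @ (z[i] - H[i] @ x[i])
        x_new[i] = x[i] - beta * consensus + alpha * innovation
    x = x_new

print("mean estimation error:", np.linalg.norm(x - theta, axis=1).mean())

Even though each sensor observes only a scalar projection of theta, the consensus exchanges let the network as a whole exploit global observability, which is the mechanism behind the distributed Fisher-rate result summarized in the abstract.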