Potential Theory on Trees, Graphs and Ahlfors Regular Metric Spaces
We investigate connections between potential theories on an Ahlfors-regular
metric space X, on a graph G associated with X, and on the tree T obtained by
removing the "horizontal edges" in G. Applications to the calculation of set
capacity are given. Comment: 45 pages; presentation improved based on referee comments.
The Dirichlet space: A Survey
In this paper we survey many results on the Dirichlet space of analytic
functions. Our focus is more on the classical Dirichlet space on the disc and
not the potential generalizations to other domains or several variables.
Additionally, we focus mainly on certain function theoretic properties of the
Dirichlet space and omit covering the interesting connections between this
space and operator theory. The results discussed in this survey show what is
known about the Dirichlet space and compares it with the related results for
the Hardy space. Comment: 35 pages, typos corrected, some open problems added.
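For context, the classical Dirichlet space consists of functions analytic on the unit disc with finite Dirichlet integral; one common choice of norm (one of several equivalent normalizations) is

```latex
\|f\|_{\mathcal{D}}^2 \;=\; |f(0)|^2 \;+\; \int_{\mathbb{D}} |f'(z)|^2 \, dA(z),
```

where the Dirichlet integral measures the area of the image of the disc under f, counting multiplicity.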
Two-weight norm inequalities for potential type and maximal operators in a metric space
We characterize two-weight norm inequalities for potential type integral
operators in terms of Sawyer-type testing conditions. Our result is stated in a
space of homogeneous type with no additional geometric assumptions, such as
group structure or non-empty annulus property, which appeared in earlier works
on the subject. One of the new ingredients in the proof is the use of a finite
collection of adjacent dyadic systems recently constructed by the author and T.
Hytönen. We further extend the previous Euclidean characterization of
two-weight norm inequalities for fractional maximal functions into spaces of
homogeneous type. Comment: 33 pages, v8 (some typos corrected; clarified the
relationship between the different constants present in the several steps of
the proof of the main result; Lemma 6.18 modified; examples of spaces and
operators included; fixed some technical details; Definition 2.14 and Lemma
2.15 modified; Lemma 6.17 corrected; measures allowed with point masses; some
imprecise arguments clarified.)
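Schematically, in the model Euclidean case of a fractional integral operator, a Sawyer-type characterization of the two-weight inequality $\|T_\alpha(f\sigma)\|_{L^q(\omega)} \lesssim \|f\|_{L^p(\sigma)}$ (for $p \le q$) says the norm inequality holds if and only if testing conditions hold uniformly over cubes $Q$:

```latex
\int_Q T_\alpha(\mathbf{1}_Q\,\sigma)^q \, d\omega \;\le\; C\,\sigma(Q)^{q/p}
\qquad \text{and} \qquad
\int_Q T_\alpha^{*}(\mathbf{1}_Q\,\omega)^{p'} \, d\sigma \;\le\; C\,\omega(Q)^{p'/q'}.
```

This is only a sketch of the Euclidean prototype; the result described in the abstract replaces Euclidean cubes with adjacent dyadic systems in a general space of homogeneous type.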
Hypothesis Testing For Network Data in Functional Neuroimaging
In recent years, it has become common practice in neuroscience to use
networks to summarize relational information in a set of measurements,
typically assumed to be reflective of either functional or structural
relationships between regions of interest in the brain. One of the most basic
tasks of interest in the analysis of such data is the testing of hypotheses, in
answer to questions such as "Is there a difference between the networks of
these two groups of subjects?" In the classical setting, where the unit of
interest is a scalar or a vector, such questions are answered through the use
of familiar two-sample testing strategies. Networks, however, are not Euclidean
objects, and hence classical methods do not directly apply. We address this
challenge by drawing on concepts and techniques from geometry and
high-dimensional statistical inference. Our work is based on a precise
geometric characterization of the space of graph Laplacian matrices and a
nonparametric notion of averaging due to Fréchet. We motivate and illustrate
our resulting methodologies for testing in the context of networks derived from
functional neuroimaging data on human subjects from the 1000 Functional
Connectomes Project. In particular, we show that this global test is
statistically more powerful than a mass-univariate approach. In addition, we
provide a method for visualizing the individual contribution of each edge to
the overall test statistic. Comment: 34 pages, 5 figures.
Extrinsic Methods for Coding and Dictionary Learning on Grassmann Manifolds
Sparsity-based representations have recently led to notable results in
various visual recognition tasks. In a separate line of research, Riemannian
manifolds have been shown useful for dealing with features and models that do
not lie in Euclidean spaces. With the aim of building a bridge between the two
realms, we address the problem of sparse coding and dictionary learning over
the space of linear subspaces, which form Riemannian structures known as
Grassmann manifolds. To this end, we propose to embed Grassmann manifolds into
the space of symmetric matrices by an isometric mapping. This in turn enables
us to extend two sparse coding schemes to Grassmann manifolds. Furthermore, we
propose closed-form solutions for learning a Grassmann dictionary, atom by
atom. Lastly, to handle non-linearity in data, we extend the proposed Grassmann
sparse coding and dictionary learning algorithms through embedding into Hilbert
spaces.
Experiments on several classification tasks (gender recognition, gesture
classification, scene analysis, face recognition, action recognition and
dynamic texture classification) show that the proposed approaches achieve
considerable improvements in discrimination accuracy, in comparison to
state-of-the-art methods such as kernelized Affine Hull Method and
graph-embedding Grassmann discriminant analysis. Comment: Appearing in the
International Journal of Computer Vision.
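A minimal sketch of the kind of embedding described above (assuming the standard projection embedding of the Grassmannian into symmetric matrices; the paper's own mapping and distances may differ in normalization):

```python
import numpy as np

def grassmann_embedding(U):
    """Map a p-dimensional subspace, given by an orthonormal basis U of
    shape (n, p), to its symmetric projection matrix U U^T."""
    return U @ U.T

def chordal_distance(U, V):
    """Distance induced on subspaces by the Frobenius norm of the
    embedded projection matrices (the chordal/projection metric)."""
    diff = grassmann_embedding(U) - grassmann_embedding(V)
    return np.linalg.norm(diff, "fro") / np.sqrt(2)
```

Because the embedded points are ordinary symmetric matrices, standard sparse-coding machinery (least squares with a sparsity penalty) can be applied to them directly, which is the bridge the abstract refers to.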