A Homology Theory for Etale Groupoids
Etale groupoids arise naturally as models for leaf spaces of foliations, for
orbifolds, and for orbit spaces of discrete group actions. In this paper we
introduce a sheaf homology theory for etale groupoids. We prove its invariance
under Morita equivalence, as well as Verdier duality between Haefliger
cohomology and this homology. We also discuss the relation to the cyclic and
Hochschild homologies of Connes' convolution algebra of the groupoid, and
derive some spectral sequences which serve as a tool for the computation of
these homologies.
Knowledge Engineering from Data Perspective: Granular Computing Approach
Rough set theory is a mathematical approach to uncertainty and vagueness in data analysis, introduced by Zdzislaw Pawlak in the 1980s. Rough set theory assumes that the underlying structure of knowledge is a partition. We extend Pawlak's concept of knowledge to coverings, taking a soft approach that regards any generalized subset as basic knowledge. We regard a covering as the basic knowledge from which theories of knowledge approximation and learning, knowledge dependency, and reducts are developed.
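As a minimal illustrative sketch (not code from the paper), the following shows the standard covering-based generalization of Pawlak's lower and upper approximations, in which overlapping blocks of a covering, rather than equivalence classes of a partition, serve as basic knowledge; all names are our own.

```python
def lower_approximation(covering, target):
    """Union of all blocks entirely contained in the target concept."""
    result = set()
    for block in covering:
        if block <= target:        # block certainly inside the concept
            result |= block
    return result

def upper_approximation(covering, target):
    """Union of all blocks that meet the target concept."""
    result = set()
    for block in covering:
        if block & target:         # block possibly in the concept
            result |= block
    return result

# Overlapping blocks make this a covering, not a partition.
covering = [{1, 2}, {2, 3}, {4, 5}]
X = {1, 2, 4}
print(lower_approximation(covering, X))  # {1, 2}
print(upper_approximation(covering, X))  # {1, 2, 3, 4, 5} -> X is rough
```

When the covering happens to be a partition, the two functions reduce to Pawlak's classical approximations.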
Probabilistic Kernel Support Vector Machines
We propose a probabilistic enhancement of standard kernel Support Vector
Machines for binary classification, in order to address the case when, along
with given data sets, a description of uncertainty (e.g., error bounds) may be
available on each datum. In the present paper, we specifically consider
Gaussian distributions to model uncertainty. Thereby, our data consist of pairs
$(\mu_i, \Sigma_i)$, $i = 1, \ldots, N$, along with an indicator
$y_i \in \{-1, +1\}$ declaring membership in one of two categories for each pair.
These pairs may be viewed as representing the mean and covariance, respectively,
of random vectors taking values in a suitable linear space (typically
$\mathbb{R}^n$). Thus, our setting may also be viewed as a modification of
Support Vector Machines to classify distributions, albeit, at present, only
Gaussian ones. We outline the formalism that allows computing suitable
classifiers via a natural modification of the standard "kernel trick." The main
contribution of this work is to point out a suitable kernel function for
applying Support Vector techniques to the setting of uncertain data for which a
detailed uncertainty description is also available (herein, "Gaussian points").
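As a concrete illustration of a kernel between Gaussian points, the sketch below computes the expected RBF kernel, a standard closed-form positive-definite kernel between Gaussian distributions. This is a plausible stand-in under our own assumptions, not necessarily the kernel proposed in the paper, and the function names are illustrative.

```python
import numpy as np

def expected_rbf_kernel(mu1, cov1, mu2, cov2, sigma=1.0):
    """Closed-form expectation of exp(-||x - y||^2 / (2 sigma^2)) for
    x ~ N(mu1, cov1), y ~ N(mu2, cov2).  Since x - y is Gaussian with
    mean mu1 - mu2 and covariance S = cov1 + cov2, the double integral
    reduces to
        det(I + S / sigma^2)^(-1/2)
        * exp(-1/2 (mu1 - mu2)^T (S + sigma^2 I)^(-1) (mu1 - mu2)).
    """
    d = len(mu1)
    S = cov1 + cov2
    m = np.asarray(mu1) - np.asarray(mu2)
    A = S + sigma**2 * np.eye(d)
    norm = np.linalg.det(np.eye(d) + S / sigma**2) ** -0.5
    return norm * np.exp(-0.5 * m @ np.linalg.solve(A, m))

# Two "Gaussian points" in R^2 with equal means, different uncertainty.
mu = np.zeros(2)
k_tight = expected_rbf_kernel(mu, 0.01 * np.eye(2), mu, 0.01 * np.eye(2))
k_loose = expected_rbf_kernel(mu, 1.0 * np.eye(2), mu, 1.0 * np.eye(2))
print(k_tight, k_loose)  # higher similarity for low-uncertainty points
```

A Gram matrix built from such a kernel can be passed to an off-the-shelf SVM solver, e.g. scikit-learn's SVC with kernel='precomputed'.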
An introduction to perverse sheaves
These notes aim to give a first introduction to intersection cohomology and
perverse sheaves with applications to representation theory or quantum groups
in mind.
Metrics for generalized persistence modules
We consider the question of defining interleaving metrics on generalized
persistence modules over arbitrary preordered sets. Our constructions are
functorial, which implies a form of stability for these metrics. We describe a
large class of examples, inverse-image persistence modules, which occur
whenever a topological space is mapped to a metric space. Several standard
theories of persistence and their stability can be described in this framework.
This includes the classical case of sublevelset persistent homology. We
introduce a distinction between 'soft' and 'hard' stability theorems. While our
treatment is direct and elementary, the approach can be explained abstractly in
terms of monoidal functors.
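For orientation, here is the standard ε-interleaving definition for $(\mathbb{R}, \le)$-indexed persistence modules, the classical special case that the paper's functorial construction generalizes to arbitrary preordered sets (a textbook definition, reproduced here for convenience, not quoted from the paper).

```latex
% An epsilon-interleaving of R-indexed modules M, N, with structure maps
% M(s \le t) : M_s \to M_t, is a pair of families of maps
\[
  f_t \colon M_t \to N_{t+\varepsilon}, \qquad
  g_t \colon N_t \to M_{t+\varepsilon},
\]
% natural in t, whose composites recover the internal shift maps:
\[
  g_{t+\varepsilon} \circ f_t = M(t \le t+2\varepsilon), \qquad
  f_{t+\varepsilon} \circ g_t = N(t \le t+2\varepsilon).
\]
% The interleaving distance is then
\[
  d_I(M,N) = \inf\{\varepsilon \ge 0 : M \text{ and } N
             \text{ are } \varepsilon\text{-interleaved}\}.
\]
```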
Maximal information component analysis: a novel non-linear network analysis method.
Background: Network construction and analysis algorithms provide scientists with the ability to sift through high-throughput biological outputs, such as transcription microarrays, for small groups of genes (modules) that are relevant for further research. Most of these algorithms ignore the important role of non-linear interactions in the data and the ability of genes to operate in multiple functional groups at once, despite clear evidence for both of these phenomena in observed biological systems.
Results: We have created a novel co-expression network analysis algorithm that incorporates both of these principles by combining the information-theoretic association measure of the maximal information coefficient (MIC) with an Interaction Component Model. We evaluate the performance of this approach on two datasets collected from a large panel of mice, one from macrophages and the other from liver, comparing the two measures on module entropy, Gene Ontology (GO) enrichment, and scale-free topology (SFT) fit. Our algorithm outperforms a widely used co-expression analysis method, weighted gene co-expression network analysis (WGCNA), on the macrophage data, while returning comparable results on the liver dataset under these criteria. We demonstrate that the macrophage data contain more non-linear interactions than the liver dataset, which may explain the increased performance of our method, termed Maximal Information Component Analysis (MICA), in that case.
Conclusions: In making our network algorithm more accurately reflect known biological principles, we are able to generate modules with improved relevance, particularly in networks with confounding factors such as gene-by-environment interactions.
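To make the association step concrete, the sketch below builds a MIC-based adjacency matrix using the minepy package (an assumed third-party dependency); the Interaction Component Model step of MICA is not reproduced here.

```python
import numpy as np
from minepy import MINE  # assumed dependency: pip install minepy

def mic_adjacency(expr):
    """Pairwise MIC matrix for an (n_samples x n_genes) expression array.
    This only builds the nonlinear association matrix that replaces
    Pearson correlation in WGCNA-style pipelines; module assignment
    (the Interaction Component Model) is a separate step.
    """
    n_genes = expr.shape[1]
    mine = MINE(alpha=0.6, c=15)      # minepy's usual parameter settings
    adj = np.eye(n_genes)
    for i in range(n_genes):
        for j in range(i + 1, n_genes):
            mine.compute_score(expr[:, i], expr[:, j])
            adj[i, j] = adj[j, i] = mine.mic()
    return adj

# Toy data: one linear and one quadratic (non-linear) relationship.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
expr = np.column_stack([x,
                        2 * x + rng.normal(scale=0.1, size=200),
                        x**2 + rng.normal(scale=0.1, size=200)])
print(np.round(mic_adjacency(expr), 2))  # MIC detects both relationships
```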
Methodological Fundamentalism: or why Batterman's Different Notions of "Fundamentalism" may not make a Difference
I argue that the distinctions Robert Batterman (2004) presents between "epistemically fundamental" versus "ontologically fundamental" theoretical approaches can be subsumed by methodologically fundamental procedures. I characterize precisely what is meant by a methodologically fundamental procedure, which involves, among other things, the use of multilinear graded algebras in a theory's formalism. For example, one such class of algebras I discuss is the Clifford (or Geometric) algebras. Aside from being touted by many as a "unified mathematical language for physics" (Hestenes (1984, 1986), Lasenby et al. (2000)), Finkelstein (2001, 2004) and others have demonstrated that the techniques of multilinear algebraic "expansion and contraction" exhibit a robust regularizability. That is to say, such regularization has been demonstrated to remove singularities that would otherwise appear in standard field-theoretic, mathematical characterizations of a physical theory. I claim that the existence of such methodologically fundamental procedures calls into question one of Batterman's central points: that "our explanatory physical practice demands that we appeal essentially to (infinite) idealizations" (2003, 7), exhibited, for example, by singularities in the case of modeling critical phenomena, like fluid droplet formation. By way of counterexample, in the field of computational fluid dynamics (CFD), I discuss the work of Mann & Rockwood (2003) and Gerik Scheuermann (2002). In the concluding section, I sketch a methodologically fundamental procedure potentially applicable to more general classes of critical phenomena appearing in fluid dynamics.
A new description of equivariant cohomology for totally disconnected groups
We consider smooth actions of totally disconnected groups on simplicial complexes and compare
different equivariant cohomology groups associated to such actions. Our main result is that the
bivariant equivariant cohomology theory introduced by Baum and Schneider can be described using
equivariant periodic cyclic homology.
This provides a new approach to the construction of Baum and Schneider as well
as a computation of equivariant periodic cyclic homology for a natural class of examples.
In addition we discuss the relation between cosheaf homology and equivariant
Bredon homology.
Since the theory of Baum and Schneider generalizes cosheaf homology we finally see
that all these approaches to equivariant cohomology for totally disconnected
groups are closely related.