Blending the physical and the digital through conceptual spaces
The rise of the Internet has facilitated ever-increasing growth of virtual, i.e. digital, spaces that co-exist with the physical environment, i.e. the physical space. This raises the question of how physical and digital spaces can interact synchronously. While sensors provide a means to continuously observe the physical space, several issues arise when mapping sensor data streams to digital spaces, for instance, structured linked data formally represented through symbolic Semantic Web (SW) standards such as OWL or RDF. The challenge is to bridge between symbolic knowledge representations and the measured data collected by sensors. In particular, one needs to map a given set of arbitrary sensor data to a particular set of symbolic knowledge representations, e.g. ontology instances. This task is particularly challenging due to the vast variety of possible sensor measurements. Conceptual Spaces (CS) provide a means to represent knowledge in geometrical vector spaces, enabling the computation of similarities between knowledge entities by means of distance metrics. We propose an approach that refines symbolic concepts as CS and grounds ontology instances to so-called prototypical members, which are vectors in the CS. By computing similarities, in terms of spatial distances, between a given set of sensor measurements and a finite set of CS members, the most similar instance can be identified. In this way, we provide a means to bridge between the physical space, as observed by sensors, and the digital space made up of symbolic representations.
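The nearest-prototype grounding described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the weather dimensions, the instance URIs (`ex:SunnyDay` etc.), and the use of plain Euclidean distance are all assumptions made for the example.

```python
import math

# Hypothetical prototypes: each ontology instance is grounded to a
# prototypical member, i.e. a vector in the conceptual space (CS).
# The dimensions (temperature in deg C, relative humidity in %) are
# illustrative assumptions, not taken from the paper.
PROTOTYPES = {
    "ex:SunnyDay": (28.0, 30.0),
    "ex:RainyDay": (15.0, 90.0),
    "ex:FoggyMorning": (8.0, 95.0),
}

def euclidean(p, q):
    """Distance metric of the conceptual space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def most_similar_instance(measurement, prototypes=PROTOTYPES):
    """Map a raw sensor measurement to the ontology instance whose
    prototypical member is spatially closest in the CS."""
    return min(prototypes, key=lambda uri: euclidean(measurement, prototypes[uri]))

# A reading of (14 deg C, 85 % humidity) lies closest to the rainy prototype.
print(most_similar_instance((14.0, 85.0)))
```

Any other distance metric of the CS (e.g. a weighted or city-block metric) can be substituted for `euclidean` without changing the grounding scheme.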
Prototypes, Poles, and Topological Tessellations of Conceptual Spaces
The aim of this paper is to present a topological method for constructing
discretizations (tessellations) of conceptual spaces. The method works for a class of
topological spaces that the Russian mathematician Pavel Alexandroff defined more than
80 years ago. Alexandroff spaces, as they are called today, have many interesting
properties that distinguish them from other topological spaces. In particular, they exhibit
a 1-1 correspondence between their specialization orders and their topological structures.
Recently, a special type of Alexandroff spaces was used by Ian Rumfitt to elucidate the
logic of vague concepts in a new way. According to his approach, conceptual spaces such
as the color spectrum give rise to classical systems of concepts that have the structure
of atomic Boolean algebras. More precisely, concepts are represented as regular open
regions of an underlying conceptual space endowed with a topological structure.
Something is subsumed under a concept iff it is represented by an element of the
conceptual space that is maximally close to the prototypical element p that defines that
concept. This topological representation of concepts comes along with a representation
of the familiar logical connectives of Aristotelian syllogistics in terms of natural set-theoretical operations that characterize regular open interpretations of classical Boolean propositional logic.
In the last 20 years, conceptual spaces have become a popular tool of dealing with a
variety of problems in the fields of cognitive psychology, artificial intelligence, linguistics
and philosophy, mainly due to the work of Peter Gärdenfors and his collaborators. By using
prototypes and metrics of similarity spaces, one obtains geometrical discretizations of
conceptual spaces by so-called Voronoi tessellations. These tessellations are extensionally
equivalent to topological tessellations that can be constructed for Alexandroff spaces.
Thereby, Rumfitt’s and Gärdenfors’s constructions turn out to be special cases of an
approach that works for a more general class of spaces, namely, for weakly scattered
Alexandroff spaces. This class of spaces provides a convenient framework for conceptual
spaces as used in epistemology and related disciplines in general. Alexandroff spaces are
useful for elucidating problems related to the logic of vague concepts, in particular they
offer a solution of the Sorites paradox (Rumfitt). Further, they provide a semantics for the
logic of clearness (Bobzien) that overcomes certain problems of the concept of higher-order vagueness. Moreover, these spaces help find a natural place for classical syllogistics
in the framework of conceptual spaces. The crucial role of order theory for Alexandroff
spaces can be used to refine the all-or-nothing distinction between prototypical and non-prototypical stimuli in favor of a more fine-grained, gradual distinction between more-or-less prototypical elements of conceptual spaces. The greater conceptual flexibility of the
topological approach helps avoid some inherent inadequacies of the geometrical approach,
for instance, the so-called “thickness problem” (Douven et al.) and problems of selecting
a unique metric for similarity spaces. Finally, it is shown that only the Alexandroff account can deal with an issue that is gaining more and more importance for the theory of conceptual spaces, namely, the role that digital conceptual spaces play in the area of artificial intelligence, computer science and related disciplines.
Keywords: Conceptual Spaces, Polar Spaces, Alexandroff Spaces, Prototypes, Topological Tessellations, Voronoi Tessellations, Digital Topology
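The 1-1 correspondence between specialization orders and Alexandroff topologies that the abstract appeals to can be checked concretely for a finite space. The sketch below is our illustration, not the paper's construction: the toy order on color shades is a made-up example. Open sets of the Alexandroff topology are the up-sets of the order, and the specialization order of that topology recovers the original order.

```python
# Toy preorder on color shades: (x, y) in LE means x <= y,
# read here as "y is at least as prototypical as x".
# The relation is listed reflexively and transitively closed.
LE = {
    ("pale_red", "pale_red"), ("red", "red"), ("crimson", "crimson"),
    ("pale_red", "red"), ("red", "crimson"), ("pale_red", "crimson"),
}
POINTS = {"pale_red", "red", "crimson"}

def up_set(x):
    """Minimal open neighborhood of x in the Alexandroff topology:
    the set of all points above x in the order."""
    return frozenset(y for y in POINTS if (x, y) in LE)

def specialization_leq(x, y):
    """x <= y in the specialization order iff every open set containing x
    also contains y -- equivalently, y lies in the minimal open set of x."""
    return y in up_set(x)

# Recover the order from the topology: it coincides with the original one.
recovered = {(x, y) for x in POINTS for y in POINTS if specialization_leq(x, y)}
print(recovered == LE)  # True
```

Because every point has a minimal open neighborhood (the defining feature of Alexandroff spaces), the order determines the topology and vice versa; this is the correspondence the tessellation method builds on.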
Topological exploration of artificial neuronal network dynamics
One of the paramount challenges in neuroscience is to understand the dynamics
of individual neurons and how they give rise to network dynamics when
interconnected. Historically, researchers have resorted to graph theory,
statistics, and statistical mechanics to describe the spatiotemporal structure
of such network dynamics. Our novel approach employs tools from algebraic
topology to characterize the global properties of network structure and
dynamics.
We propose a method based on persistent homology to automatically classify
network dynamics using topological features of spaces built from various
spike-train distances. We investigate the efficacy of our method by simulating
activity in three small artificial neural networks with different sets of
parameters, giving rise to dynamics that can be classified into four regimes.
We then compute three measures of spike train similarity and use persistent
homology to extract topological features that are fundamentally different from
those used in traditional methods. Our results show that a machine learning
classifier trained on these features can accurately predict the regime of the
network it was trained on and also generalize to other networks that were not
presented during training. Moreover, we demonstrate that using features
extracted from multiple spike-train distances systematically improves the
performance of our method.
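The pipeline above (spike-train distances, then persistent homology) can be sketched for its simplest topological feature. The code below is our illustration under stated assumptions: it uses a plain Euclidean distance on binned spike counts as a stand-in for the paper's spike-train metrics, and exploits the fact that the 0-dimensional persistence of a Vietoris-Rips filtration over a finite metric space has death times equal to the edge weights of a minimum spanning tree.

```python
import math

def binned_distance(a, b):
    """Euclidean distance between binned spike counts -- a simple stand-in
    for spike-train metrics such as Victor-Purpura or van Rossum."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def h0_death_times(dist):
    """H0 persistence of the Vietoris-Rips filtration of a finite metric
    space: each connected component dies when it merges with another, and
    the death times are exactly the minimum-spanning-tree edge weights
    (computed here with Prim's algorithm on the distance matrix)."""
    n = len(dist)
    in_tree = [False] * n
    best = [math.inf] * n   # cheapest connection of each point to the tree
    best[0] = 0.0
    deaths = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if best[u] > 0:      # the seed vertex itself records no death
            deaths.append(best[u])
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], dist[u][v])
    return sorted(deaths)

# Four toy spike trains, binned into spike counts per time window:
# two form one cluster, two form another.
trains = [(3, 0, 1), (3, 1, 1), (0, 4, 0), (0, 3, 1)]
dist = [[binned_distance(a, b) for b in trains] for a in trains]
print(h0_death_times(dist))  # two small within-cluster merges, one large merge
```

The vector of sorted death times is one example of a topological feature a classifier could be trained on; higher-dimensional features (H1 and above) require a full persistent-homology library.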
Multi-view Metric Learning in Vector-valued Kernel Spaces
We consider the problem of metric learning for multi-view data and present a
novel method for learning within-view as well as between-view metrics in
vector-valued kernel spaces, as a way to capture multi-modal structure of the
data. We formulate two convex optimization problems to jointly learn the metric
and the classifier or regressor in kernel feature spaces. An iterative
three-step multi-view metric learning algorithm is derived from the
optimization problems. In order to scale the computation to large training
sets, a block-wise Nyström approximation of the multi-view kernel matrix is
introduced. We justify our approach theoretically and experimentally, and show
its performance on real-world datasets against relevant state-of-the-art
methods.