Community-Aware Graph Signal Processing
The emerging field of graph signal processing (GSP) makes it possible to transpose
classical signal processing operations (e.g., filtering) to signals on graphs.
The GSP framework is generally built upon the graph Laplacian, which plays a
crucial role in studying graph properties and measuring graph signal smoothness.
Here instead, we propose the graph modularity matrix as the centerpiece of GSP,
in order to incorporate knowledge about graph community structure when
processing signals on the graph, but without the need for community detection.
We study this approach in several generic settings such as filtering, optimal
sampling and reconstruction, surrogate data generation, and denoising.
Feasibility is illustrated by a small-scale example and a transportation
network dataset, as well as one application in human neuroimaging where
community-aware GSP reveals relationships between behavior and brain features
that are not shown by Laplacian-based GSP. This work demonstrates how concepts
from network science can lead to new, meaningful operations on graph signals.
Comment: 21 pages, 4 figures. Accepted to Signal Processing Magazine: Special Issue on Graph Signal Processing: Foundations and Emerging Directions.
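The abstract's central proposal can be sketched in a few lines: replace the Laplacian with Newman's modularity matrix B = A - d d^T / (2m) and filter graph signals in its eigenbasis. This is a minimal illustration under assumed conventions, not the authors' implementation; the toy graph, function names, and the filter response h are made up here.

```python
import numpy as np

def modularity_matrix(A):
    """Newman's modularity matrix B = A - d d^T / (2m) for an
    undirected graph with adjacency matrix A."""
    d = A.sum(axis=1)          # degree vector
    two_m = d.sum()            # 2m = total degree
    return A - np.outer(d, d) / two_m

def community_aware_filter(A, x, h):
    """Filter graph signal x with spectral response h applied to the
    eigenvalues of the modularity matrix instead of the Laplacian."""
    B = modularity_matrix(A)
    lam, U = np.linalg.eigh(B)         # B is symmetric
    x_hat = U.T @ x                    # 'graph Fourier' coefficients
    return U @ (h(lam) * x_hat)        # shape response, transform back

# toy example: two triangles joined by a single edge
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
x = np.random.default_rng(0).standard_normal(6)
# keep only components aligned with positive-modularity (community) modes
y = community_aware_filter(A, x, lambda lam: (lam > 0).astype(float))
```

The rows of B sum to zero by construction, so the all-ones vector is always in its null space; an all-pass response h = 1 recovers x exactly.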
Unveiling Relations in the Industry 4.0 Standards Landscape based on Knowledge Graph Embeddings
Industry 4.0 (I4.0) standards and standardization frameworks have been
proposed with the goal of empowering interoperability in smart
factories. These standards enable the description and interaction of the main
components, systems, and processes inside of a smart factory. Due to the
growing number of frameworks and standards, there is an increasing need for
approaches that automatically analyze the landscape of I4.0 standards.
Standardization frameworks classify standards according to their functions into
layers and dimensions. However, similar standards can be classified differently
across the frameworks, thus producing interoperability conflicts among them.
Semantic-based approaches that rely on ontologies and knowledge graphs have
been proposed to represent standards, known relations among them, as well as
their classification according to existing frameworks. Albeit informative, the
structured modeling of the I4.0 landscape only provides the foundations for
detecting interoperability issues. Thus, graph-based analytical methods able to
exploit the knowledge encoded by these approaches are required to uncover
alignments among standards. We study the relatedness among standards and
frameworks based on community analysis to discover knowledge that helps to cope
with interoperability conflicts between standards. We use knowledge graph
embeddings to automatically create these communities exploiting the meaning of
the existing relationships. In particular, we focus on the identification of
similar standards, i.e., communities of standards, and analyze their properties
to detect unknown relations. We empirically evaluate our approach on a
knowledge graph of I4.0 standards using the Trans family of embedding
models for knowledge graph entities. Our results are promising and suggest that
relations among standards can be detected accurately.
Comment: 15 pages, 7 figures. DEXA 2020 Conference.
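The scoring idea behind the Trans family of models can be sketched with its simplest member, TransE: a triple (head, relation, tail) is considered plausible when the tail embedding sits near head + relation. The standard names and the triple below are purely illustrative, not taken from the evaluated I4.0 knowledge graph.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility score for a triple (head, relation, tail):
    a smaller ||h + r - t|| means a more plausible triple."""
    return np.linalg.norm(h + r - t)

# toy embeddings (in practice these are learned from the knowledge graph)
rng = np.random.default_rng(1)
emb = {name: rng.standard_normal(8)
       for name in ["OPC-UA", "MQTT", "classifiedIn", "CommLayer"]}

# hypothetical triple: (OPC-UA, classifiedIn, CommLayer)
s = transe_score(emb["OPC-UA"], emb["classifiedIn"], emb["CommLayer"])
```

Communities of similar standards can then be found by clustering the learned entity embeddings; standards that land in the same cluster but are classified in different framework layers are candidates for the unknown relations the paper looks for.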
Synthesis of Attributed Feature Models From Product Descriptions: Foundations
Feature modeling is a widely used formalism to characterize a set of products
(also called configurations). As a manual elaboration is a long and arduous
task, numerous techniques have been proposed to reverse engineer feature models
from various kinds of artefacts. But none of them synthesize feature attributes
(or constraints over attributes) despite the practical relevance of attributes
for documenting the different values across a range of products. In this
report, we develop an algorithm for synthesizing attributed feature models
given a set of product descriptions. We present sound, complete, and
parametrizable techniques for computing all possible hierarchies, feature
groups, placements of feature attributes, domain values, and constraints. We
perform a complexity analysis w.r.t. the number of features, attributes,
configurations, and domain size. We also evaluate the scalability of our
synthesis procedure using randomized configuration matrices. This report is a
first step that aims to describe the foundations for synthesizing attributed
feature models.
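One foundational step the report describes, computing all possible hierarchies, rests on feature implications: g can be placed above f in the hierarchy only if every product containing f also contains g. A minimal sketch over a boolean configuration matrix; the feature names and the matrix are invented for illustration.

```python
import numpy as np

def parent_candidates(M, names):
    """Given a boolean configuration matrix M (rows = products,
    columns = features), g is a candidate parent of f in the feature
    hierarchy only if f implies g across all products."""
    n = M.shape[1]
    pairs = []
    for f in range(n):
        for g in range(n):
            # column-wise implication: wherever f is selected, g is too
            if f != g and np.all(M[:, f] <= M[:, g]):
                pairs.append((names[f], names[g]))
    return pairs

# toy matrix: every product has Base; GPS only ever appears with Nav
names = ["Base", "Nav", "GPS"]
M = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 1, 1]], dtype=bool)
print(parent_candidates(M, names))
```

The full synthesis problem is harder than this sketch suggests: the candidate-parent relation only delimits the space of hierarchies, from which sound and complete enumeration (plus attribute placement and constraint mining) must still choose.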
Towards MKM in the Large: Modular Representation and Scalable Software Architecture
MKM has been defined as the quest for technologies to manage mathematical
knowledge. MKM "in the small" is well-studied, so the real problem is to scale
up to large, highly interconnected corpora: "MKM in the large". We contend that
advances in two areas are needed to reach this goal. We need representation
languages that support incremental processing of all primitive MKM operations,
and we need software architectures and implementations that implement these
operations scalably on large knowledge bases.
We present instances of both in this paper: the MMT framework for modular
theory-graphs that integrates meta-logical foundations, which forms the base of
the next OMDoc version; and TNTBase, a versioned storage system for XML-based
document formats. TNTBase becomes an MMT database by instantiating it with
special MKM operations for MMT.
Comment: To appear in The 9th International Conference on Mathematical Knowledge Management: MKM 2010.
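The idea of modular theory graphs can be illustrated by include-flattening: the symbols visible in a theory are its own plus those of all transitively included theories. This sketch is far simpler than MMT's actual representation (no morphisms, meta-theories, or URIs); the class names and the algebra example are illustrative only.

```python
class Theory:
    """A theory declares symbols and may include other theories,
    loosely mirroring modular theory graphs in the MMT sense."""
    def __init__(self, name, symbols, includes=()):
        self.name = name
        self.symbols = list(symbols)
        self.includes = list(includes)

def flatten(theory, seen=None):
    """Collect all symbols visible in a theory by transitively
    following includes; each included theory is processed once,
    so diamond-shaped include graphs cause no duplication."""
    if seen is None:
        seen = set()
    if theory.name in seen:
        return []
    seen.add(theory.name)
    out = []
    for inc in theory.includes:
        out.extend(flatten(inc, seen))
    out.extend(theory.symbols)
    return out

magma  = Theory("Magma",  ["op"])
monoid = Theory("Monoid", ["unit"], includes=[magma])
group  = Theory("Group",  ["inv"],  includes=[monoid])
print(flatten(group))   # ['op', 'unit', 'inv']
```

Flattening is exactly the kind of primitive MKM operation the paper wants to make incremental: on a large corpus one cannot afford to recompute it from scratch after every change.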
Mathematical Foundations of Consciousness
We employ the Zermelo-Fraenkel Axioms that characterize sets as mathematical
primitives. The Anti-foundation Axiom plays a significant role in our
development, since among other of its features, its replacement for the Axiom
of Foundation in the Zermelo-Fraenkel Axioms motivates Platonic
interpretations. These interpretations also depend on such allied notions for
sets as pictures, graphs, decorations, labelings and various mappings that we
use. A syntax and semantics of operators acting on sets is developed. Such
features enable construction of a theory of non-well-founded sets that we use
to frame mathematical foundations of consciousness. To do this we introduce a
supplementary axiomatic system that characterizes experience and consciousness
as primitives. The new axioms proceed through characterization of so-called
consciousness operators. The Russell operator plays a central role and is shown
to be one example of a consciousness operator. Neural networks supply striking
examples of non-well-founded graphs the decorations of which generate
associated sets, each with a Platonic aspect. Employing our foundations, we
show how the supervening of consciousness on its neural correlates in the brain
enables the framing of a theory of consciousness by applying appropriate
consciousness operators to the generated sets in question.
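The decorations mentioned above can be made concrete for the well-founded case: a decoration assigns to each node the set of decorations of its children (the Mostowski collapse). Python's frozenset can only model acyclic graphs, so this sketch stops exactly where the Anti-Foundation Axiom begins; the example graph is illustrative.

```python
def decorate(graph, node, memo=None):
    """Decoration of an acyclic pointed graph: each node is assigned
    the set of decorations of its children. For well-founded graphs
    this decoration exists and is unique; AFA extends uniqueness to
    arbitrary graphs, which hashable Python sets cannot represent."""
    if memo is None:
        memo = {}
    if node not in memo:
        memo[node] = frozenset(
            decorate(graph, c, memo) for c in graph.get(node, []))
    return memo[node]

# the graph 2 -> {1, 0}, 1 -> {0} decorates to the von Neumann ordinal 2
graph = {2: [1, 0], 1: [0], 0: []}
d = decorate(graph, 2)
# d is frozenset({frozenset(), frozenset({frozenset()})}), i.e. {∅, {∅}}
```

A cyclic graph such as `{0: [0]}` would decorate to the non-well-founded set Ω = {Ω} under AFA, but the recursion above would simply not terminate on it, which is the boundary the paper's set-theoretic machinery is built to cross.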