A 3d geoscience information system framework
Two-dimensional geographical information systems are extensively used in the geosciences to create and analyse maps. However, these systems are unable to represent the Earth's subsurface in three spatial dimensions. The objective of this thesis is to overcome this deficiency, to provide a general framework for a 3d geoscience information system (GIS), and to contribute to the public discussion about the development of an infrastructure for geological observation data, geomodels, and geoservices. Following this objective, the requirements for a 3d GIS are analysed. According to these requirements, new geologically sensible query functionality for geometrical, topological, and geological properties has been developed, and the integration of 3d geological modeling and data management system components in a generic framework has been accomplished. The 3d geoscience information system framework presented here is characterized by the following features:
- Storage of geological observation data and geomodels in an XML database server. According to a new data model, geological observation data can be referenced by a set of geomodels.
- Functionality for querying observation data and 3d geomodels based on their 3d geometrical, topological, material, and geological properties, developed and implemented as a plug-in for a 3d geomodeling user application.
- For database queries, the standard XML query language has been extended with 3d spatial operators. The spatial database query operations are computed by an XML application server developed for this specific purpose.
This technology allows sophisticated 3d spatial and geological database queries. Using the developed methods, queries like "Select all sandstone horizons which are intersected by the set of faults F" can be answered; this request combines a topological and a geological material parameter. The combination of queries with other GIS methods, such as visual and statistical analysis, enables geoscience investigations in a novel 3d GIS environment. More generally, a 3d GIS enables geologists to read and understand a 3d digital geomodel much as they read a conventional 2d geological map.
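As a rough illustration, here is a minimal Python sketch of the kind of combined query above. The class and function names are hypothetical, not the thesis' actual plug-in API, and the topological predicate "intersected by" is crudely reduced to 3d bounding-box overlap:

```python
# Illustrative sketch only: a combined topological + material query,
# with geometry reduced to 3d axis-aligned bounding boxes.
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[float, float, float, float, float, float]  # xmin, ymin, zmin, xmax, ymax, zmax

@dataclass
class Feature:
    name: str
    material: str
    bbox: Box

def boxes_intersect(a: Box, b: Box) -> bool:
    """Crude stand-in for a real 3d surface-intersection test."""
    return all(a[i] <= b[i + 3] and b[i] <= a[i + 3] for i in range(3))

def select_horizons(horizons: List[Feature], faults: List[Feature],
                    material: str = "sandstone") -> List[Feature]:
    """'Select all sandstone horizons which are intersected by the set of faults F'."""
    return [h for h in horizons
            if h.material == material
            and any(boxes_intersect(h.bbox, f.bbox) for f in faults)]
```

A real implementation would evaluate the intersection predicate against the triangulated model surfaces and push it down into the extended XML query language rather than filtering in application code.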
Point set signature and algorithm of classifications on its basis
At present there is a large number of problems in the automated processing of multi-dimensional data, for example classification, clustering, regression, and the control of complex objects, which creates a need to develop the mathematical and algorithmic machinery for solving them. The aim of this research is to develop classification algorithms for point sets based on their spatial distribution. We propose to treat data as points in a multi-dimensional metric space. The paper reviews approaches to describing the characteristics of point sets in high-dimensional spaces and proposes to describe a point set by its signature: a characterization of the occupancy of the point set built on an extension of the notion of spatial hashing. The generalized method for computing point-set signatures is to partition the space occupied by the set into a regular grid using spatial hashing, to evaluate geometric characteristics of the set in the resulting cells, and to determine the most occupied cells along each of the spatial dimensions. A new signature-based classification approach is proposed: signatures are computed for points whose class membership is known, and for a new point the distance from its hash cell to the signature of each known set is computed, from which the most probable class of the point is determined. The Euclidean distance and the Manhattan (city-block) metric are used, and a comparative analysis of these metrics with respect to classification accuracy is carried out. The advantages of the proposed approach are its computational simplicity and its high classification accuracy for uniformly distributed points. The algorithm is implemented as a software application in Python using the NumPy library. The use of the proposed approach for non-numerical data, such as string and Boolean values, is also considered; for such data the Hamming distance is proposed, and the experiments performed show that the algorithm remains workable for these data types.
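A compact sketch of one plausible reading of this procedure, in Python with NumPy as named in the abstract (the exact grid handling and signature layout are assumptions, not the authors' code):

```python
import numpy as np

def signature(points, cell_size=1.0):
    """Per-axis signature of a point set: the most occupied cell of a
    regular grid, obtained by spatial hashing of the points."""
    cells = np.floor(points / cell_size).astype(int)   # hash each point to a grid cell
    sig = np.empty(points.shape[1])
    for d in range(points.shape[1]):
        idx, counts = np.unique(cells[:, d], return_counts=True)
        sig[d] = idx[np.argmax(counts)] * cell_size    # most filled cell on axis d
    return sig

def classify(point, class_sets, cell_size=1.0, metric="euclidean"):
    """Assign a point to the class whose signature is nearest to its hash cell."""
    cell = np.floor(point / cell_size) * cell_size
    def dist(label):
        diff = cell - signature(class_sets[label], cell_size)
        return np.abs(diff).sum() if metric == "manhattan" else np.sqrt((diff ** 2).sum())
    return min(class_sets, key=dist)
```

For string or Boolean features, the Euclidean/Manhattan distance in `classify` would be swapped for the Hamming distance, as proposed above.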
Representing Edge Flows on Graphs via Sparse Cell Complexes
Obtaining sparse, interpretable representations of observable data is crucial
in many machine learning and signal processing tasks. For data representing
flows along the edges of a graph, an intuitively interpretable way to obtain
such representations is to lift the graph structure to a simplicial complex:
The eigenvectors of the associated Hodge Laplacian, together with the incidence
matrices of the corresponding simplicial complex, then induce a Hodge
decomposition, which can be used to represent the observed data in terms of
gradient, curl, and harmonic flows. In this paper, we generalize this approach
to cellular complexes and introduce the cell inference optimization problem,
i.e., the problem of augmenting the observed graph by a set of cells, such that
the eigenvectors of the associated Hodge Laplacian provide a sparse,
interpretable representation of the observed edge flows on the graph. We show
that this problem is NP-hard and introduce an efficient approximation algorithm
for its solution. Experiments on real-world and synthetic data demonstrate that
our algorithm outperforms current state-of-the-art methods while being
computationally efficient.
Comment: 9 pages, 6 figures (plus appendix). For evaluation code, see
https://anonymous.4open.science/r/edge-flow-repr-cell-complexes-11C
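For reference, a minimal NumPy sketch of the Hodge decomposition the paper builds on; B1 is the node-edge incidence matrix and B2 the edge-cell incidence matrix, both assumed inputs here (this is not the paper's evaluation code):

```python
import numpy as np

def hodge_decompose(f, B1, B2):
    """Split an edge flow f into gradient, curl, and harmonic parts.
    Assumes B1 @ B2 == 0, so the two image spaces are orthogonal."""
    g, *_ = np.linalg.lstsq(B1.T, f, rcond=None)   # node potentials
    c, *_ = np.linalg.lstsq(B2, f, rcond=None)     # cell circulations
    grad, curl = B1.T @ g, B2 @ c
    harm = f - grad - curl                         # kernel of the Hodge Laplacian
    return grad, curl, harm
```

The cell inference problem then asks which columns of B2 (i.e., which cells) to add so that such a decomposition represents the observed flows sparsely.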
A New Oscillating-Error Technique for Classifiers
This paper describes a new method for reducing the error in a classifier. It
uses an error correction update that includes the very simple rule of either
adding or subtracting the error adjustment, based on whether the variable value
is currently larger or smaller than the desired value. While a traditional
neuron would sum the inputs together and then apply a function to the total,
this new method can change the function decision for each input value. This
gives added flexibility to the convergence procedure, where through a series of
transpositions, variables that are far away can continue towards the desired
value, whereas variables that are originally much closer can oscillate from one
side to the other. Tests show that the method can successfully classify some
benchmark datasets. It can also work in a batch mode, with reduced training
times and can be used as part of a neural network architecture. Some
comparisons with an earlier wave shape paper are also made.
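A minimal sketch of one reading of this sign-based update rule (our illustration, not the paper's code):

```python
import numpy as np

def oscillating_error_step(values, desired, adjustment):
    """Move each variable by a fixed adjustment toward its target: far-off
    variables keep converging, while variables already close can oscillate
    from one side of the target to the other."""
    return values + np.sign(desired - values) * adjustment

# Toy usage: the first variable converges steadily, the second oscillates.
values, desired = np.array([0.0, 4.9]), np.array([5.0, 5.0])
for _ in range(10):
    values = oscillating_error_step(values, desired, adjustment=0.5)
```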
Dist2Cycle: A Simplicial Neural Network for Homology Localization
Simplicial complexes can be viewed as high dimensional generalizations of
graphs that explicitly encode multi-way ordered relations between vertices at
different resolutions, all at once. This concept is central to the detection
of higher-dimensional topological features of data, features to which graphs,
encoding only pairwise relationships, remain oblivious. While attempts have
been made to extend Graph Neural Networks (GNNs) to a simplicial complex
setting, the methods do not inherently exploit, or reason about, the underlying
topological structure of the network. We propose a graph convolutional model
for learning functions parametrized by the k-homological features of
simplicial complexes. By spectrally manipulating their combinatorial
k-dimensional Hodge Laplacians, the proposed model enables learning
topological features of the underlying simplicial complexes, specifically, the
distance of each k-simplex from the nearest "optimal" k-th homology
generator, effectively providing an alternative to homology localization.
Comment: 9 pages, 5 figures
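For context, a small NumPy sketch of the combinatorial k-dimensional Hodge Laplacian such models manipulate; Bk and Bk1 denote the k-th and (k+1)-th incidence/boundary matrices, assumed given (this is not the Dist2Cycle code):

```python
import numpy as np

def hodge_laplacian(Bk, Bk1):
    """L_k = Bk^T Bk + Bk1 Bk1^T; its kernel represents k-th homology."""
    return Bk.T @ Bk + Bk1 @ Bk1.T

def betti_number(Bk, Bk1, tol=1e-10):
    """Dimension of the (near-)null space of L_k = number of k-dim holes."""
    eigvals = np.linalg.eigvalsh(hodge_laplacian(Bk, Bk1))
    return int(np.sum(np.abs(eigvals) < tol))
```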
Graph-based Semi-Supervised & Active Learning for Edge Flows
We present a graph-based semi-supervised learning (SSL) method for learning
edge flows defined on a graph. Specifically, given flow measurements on a
subset of edges, we want to predict the flows on the remaining edges. To this
end, we develop a computational framework that imposes certain constraints on
the overall flows, such as (approximate) flow conservation. These constraints
render our approach different from classical graph-based SSL for vertex labels,
which posits that tightly connected nodes share similar labels and leverages
the graph structure accordingly to extrapolate from a few vertex labels to the
unlabeled vertices. We derive bounds for our method's reconstruction error and
demonstrate its strong performance on synthetic and real-world flow networks
from transportation, physical infrastructure, and the Web. Furthermore, we
provide two active learning algorithms for selecting informative edges on which
to measure flow, which has applications for optimal sensor deployment. The
first strategy selects edges to minimize the reconstruction error bound and
works well on flows that are approximately divergence-free. The second approach
clusters the graph and selects bottleneck edges that cross cluster-boundaries,
which works well on flows with global trends.
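A minimal sketch of the flow-conservation idea (our illustration, not the authors' framework): fix the measured flows and choose the remaining ones to minimize the divergence ||B f||^2, with a small ridge term for well-posedness.

```python
import numpy as np

def infer_flows(B, f_measured, measured_mask, mu=1e-3):
    """B: node-edge incidence matrix; f_measured: flow vector with zeros on
    unmeasured edges; measured_mask: boolean mask of measured edges."""
    U = ~measured_mask                                     # unknown edges
    rhs = -B[:, measured_mask] @ f_measured[measured_mask]
    A = B[:, U]
    # Normal equations for min ||A f_U - rhs||^2 + mu ||f_U||^2
    f_U = np.linalg.solve(A.T @ A + mu * np.eye(U.sum()), A.T @ rhs)
    f = f_measured.astype(float).copy()
    f[U] = f_U
    return f
```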
Local Dirac Synchronization on Networks
We propose Local Dirac Synchronization which uses the Dirac operator to
capture the dynamics of coupled nodes and link signals on an arbitrary network.
In Local Dirac Synchronization, the harmonic modes of the dynamics oscillate
freely while the other modes are interacting non-linearly, leading to a
collectively synchronized state when the coupling constant of the model is
increased. Local Dirac Synchronization is characterized by discontinuous
transitions and the emergence of a rhythmic coherent phase. In this rhythmic
phase, one of the two complex order parameters oscillates in the complex plane
at a slow frequency (called emergent frequency) in the frame in which the
intrinsic frequencies have zero average. Our theoretical results obtained
within the annealed approximation are validated by extensive numerical results
on fully connected networks and sparse Poisson and scale-free networks. Local
Dirac Synchronization on both random and real networks, such as the connectome
of Caenorhabditis elegans, reveals the interplay between topology (Betti
numbers and harmonic modes) and non-linear dynamics. This unveils how topology
might play a role in the onset of brain rhythms.
Comment: 17 pages, 16 figures + appendices
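For orientation, a minimal sketch of the network Dirac operator acting jointly on node and link signals, as used in this line of work; B is the node-edge incidence matrix, an assumption for illustration:

```python
import numpy as np

def dirac_operator(B):
    """Block off-diagonal Dirac operator coupling node and link signals."""
    n, m = B.shape
    return np.block([[np.zeros((n, n)), B],
                     [B.T, np.zeros((m, m))]])

def harmonic_modes(D, tol=1e-10):
    """(Near-)zero eigenvectors of D: the modes that oscillate freely."""
    eigvals, eigvecs = np.linalg.eigh(D)
    return eigvecs[:, np.abs(eigvals) < tol]
```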
Modelling and recognition of protein contact networks by multiple kernel learning and dissimilarity representations
Multiple kernel learning is a paradigm which employs a properly constructed chain of kernel functions able to simultaneously analyse different data or different representations of the same data. In this paper, we propose a hybrid classification system based on a linear combination of multiple kernels defined over multiple dissimilarity spaces. The core of the training procedure is the joint optimisation of kernel weights and representatives selection in the dissimilarity spaces. This equips the system with a two-fold knowledge discovery phase: by analysing the weights, it is possible to check which representations are more suitable for solving the classification problem, whereas the pivotal patterns selected as representatives can give further insights on the modelled system, possibly with the help of field-experts. The proposed classification system is tested on real proteomic data in order to predict proteins' functional role starting from their folded structure: specifically, a set of eight representations is drawn from the graph-based protein folded description. The proposed multiple kernel-based system has also been benchmarked against a clustering-based classification system that is likewise able to exploit multiple dissimilarities simultaneously. Computational results show remarkable classification capabilities, and the knowledge discovery analysis is in line with current biological knowledge, suggesting the reliability of the proposed system.
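The core construction can be sketched as follows (illustrative only, not the paper's training procedure): each dissimilarity representation induces a kernel, and the classifier operates on a convex combination whose weights are optimized jointly with the representative selection.

```python
import numpy as np

def dissimilarity_kernel(D, gamma=1.0):
    """RBF-style kernel built from a pairwise-dissimilarity matrix D."""
    return np.exp(-gamma * D ** 2)

def combined_kernel(dissimilarity_mats, weights):
    """Convex combination of kernels, one per dissimilarity space."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                   # normalize so the combination stays convex
    return sum(wi * dissimilarity_kernel(D) for wi, D in zip(w, dissimilarity_mats))
```

Inspecting the learned weights then indicates which of the eight protein representations contribute most to the classification.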
A statistical approach for fracture property realization and macroscopic failure analysis of brittle materials
Lacking energy-dissipative mechanics such as plastic deformation to rebalance localized stresses, as their ductile counterparts do, brittle fracture mechanics is associated with the catastrophic failure of purely brittle and quasi-brittle materials at immeasurable and measurable deformation scales, respectively. This failure, in the form of macroscale sharp cracks, is highly dependent on the composition of the material microstructure. Further, the complexity of this relationship and of the resulting crack patterns is exacerbated under highly dynamic loading conditions. A robust brittle material model must account for the multiscale inhomogeneity as well as the probabilistic distribution of the constituents which cause material heterogeneity and influence the complex mechanisms of the material's dynamic fracture response. Continuum-based homogenization is carried out via finite element-based micromechanical analysis of material neighborhoods, each geometrically described as a sampling window (i.e., a statistical volume element). These volume elements are defined such that they are representative of the material while propagating material randomness from the inherent microscale defects. Homogenization yields spatially defined elastic and fracture-related effective properties, which are used to statistically characterize the material. This spatial characterization is made possible by performing homogenization at prescribed spatial locations which collectively comprise a non-uniform spatial grid, allowing each effective material property to be mapped to an associated spatial location. Through stochastic decomposition of the derived empirical covariance of the sampled effective material properties, the Karhunen-Loeve method is used to generate realizations of a continuous and spatially correlated random-field approximation that preserves the statistics of the material from which it is derived. Aspects of modeling both isotropic and anisotropic brittle materials, from a statistical viewpoint, are investigated to determine how each influences the macroscale fracture response of these materials under highly dynamic conditions. The effects of modeling a material explicitly, by representations of discrete multiscale constituents, and/or implicitly, by a continuum representation of material properties, are studied to determine how each model influences the resulting fracture response. For the implicit material representations, both statistical white noise (i.e., Weibull-based, spatially uncorrelated) and colored noise (i.e., Karhunen-Loeve, spatially correlated) random fields are employed herein.
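A minimal sketch of the Karhunen-Loeve sampling step described above (illustrative; C is a discretized empirical covariance over the spatial grid points and mean the corresponding mean field):

```python
import numpy as np

def kl_realization(mean, C, n_modes, rng=None):
    """Draw one spatially correlated random-field realization from the
    truncated Karhunen-Loeve expansion of covariance C."""
    rng = np.random.default_rng() if rng is None else rng
    eigvals, eigvecs = np.linalg.eigh(C)               # ascending order
    eigvals = np.clip(eigvals[::-1][:n_modes], 0.0, None)
    eigvecs = eigvecs[:, ::-1][:, :n_modes]
    xi = rng.standard_normal(n_modes)                  # independent N(0,1) coefficients
    return mean + eigvecs @ (np.sqrt(eigvals) * xi)
```

Each realization preserves the second-order statistics of the sampled effective properties, which is what makes the implicit, spatially correlated material representation possible.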
- β¦