A Physics-Based Approach to Unsupervised Discovery of Coherent Structures in Spatiotemporal Systems
Given that observational and numerical climate data are being produced at
ever more prodigious rates, increasingly sophisticated and automated analysis
techniques have become essential. Deep learning is quickly becoming a standard
approach for such analyses and, while great progress is being made, major
challenges remain. Unlike the data in commercial applications, where deep
learning has led to surprising successes, scientific data is highly complex
and typically unlabeled. Moreover, interpretability and detecting new mechanisms are key to
scientific discovery. To enhance discovery we present a complementary
physics-based, data-driven approach that exploits the causal nature of
spatiotemporal data sets generated by local dynamics (e.g. hydrodynamic flows).
We illustrate how novel patterns and coherent structures can be discovered in
cellular automata and outline the path from them to climate data.
Comment: 4 pages, 1 figure; http://csc.ucdavis.edu/~cmg/compmech/pubs/ci2017_Rupe_et_al.ht
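As a flavor of the kind of analysis involved, the following Python sketch simulates an elementary cellular automaton and clusters the past light-cones of each spacetime point, a rough stand-in for the local causal-state reconstruction this line of work builds on. The rule number, light-cone depth, and cluster count are illustrative assumptions, not choices from the paper, and KMeans is a plain substitute for the authors' inference procedure.

```python
# Minimal sketch (not the authors' implementation): cluster local past
# light-cones of an elementary cellular automaton to surface coherent
# structures. Rule, depth, and cluster count are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def run_eca(rule, width=200, steps=200, seed=0):
    """Simulate an elementary cellular automaton with periodic boundaries."""
    rng = np.random.default_rng(seed)
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    field = np.empty((steps, width), dtype=np.uint8)
    field[0] = rng.integers(0, 2, width)
    for t in range(1, steps):
        left, mid, right = np.roll(field[t-1], 1), field[t-1], np.roll(field[t-1], -1)
        field[t] = table[4 * left + 2 * mid + right]
    return field

field = run_eca(rule=110)
depth = 3  # past light-cone depth (assumption)
cones = []
for t in range(depth, field.shape[0]):
    for x in range(field.shape[1]):
        # Flatten the past light-cone of site (t, x).
        cone = [field[t - d, (x + o) % field.shape[1]]
                for d in range(1, depth + 1) for o in range(-d, d + 1)]
        cones.append(cone)
labels = KMeans(n_clusters=5, n_init=10).fit_predict(np.array(cones))
states = labels.reshape(field.shape[0] - depth, field.shape[1])
# Spatially coherent regions of a single label flag candidate structures.
```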
Classification methods for Hilbert data based on surrogate density
Unsupervised and supervised classification approaches for Hilbert random
curves are studied. Both rest on the use of a surrogate of the probability
density which is defined, in a distribution-free mixture context, from an
asymptotic factorization of the small-ball probability. That surrogate density
is estimated by a kernel approach from the principal components of the data.
The focus is on illustrating the classification algorithms and their
computational implications, with particular attention to the tuning of the
parameters involved. Some asymptotic results are sketched. Applications to
simulated and real datasets show how the proposed methods work.
Comment: 33 pages, 11 figures, 6 tables
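A minimal sketch of the general idea, not the paper's algorithm: estimate each class's surrogate density with a kernel density estimate on the leading principal-component scores of the curves, then classify a new curve by the largest estimated density. The number of components and the bandwidth below stand in for the tuning parameters the paper discusses.

```python
# Sketch: surrogate-density classification via KDE on PCA scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KernelDensity

def fit_surrogate_classifier(curves, labels, n_components=3, bandwidth=0.5):
    pca = PCA(n_components=n_components).fit(curves)
    scores = pca.transform(curves)
    # One kernel density estimate per class on the PC scores.
    kdes = {c: KernelDensity(bandwidth=bandwidth).fit(scores[labels == c])
            for c in np.unique(labels)}
    return pca, kdes

def predict(pca, kdes, new_curves):
    s = pca.transform(new_curves)
    classes = sorted(kdes)
    logdens = np.column_stack([kdes[c].score_samples(s) for c in classes])
    return np.array(classes)[logdens.argmax(axis=1)]

# Toy data: two groups of noisy curves sampled on a common grid.
t = np.linspace(0, 1, 100)
X = np.vstack([np.sin(2 * np.pi * t) + 0.1 * np.random.randn(50, 100),
               np.cos(2 * np.pi * t) + 0.1 * np.random.randn(50, 100)])
y = np.repeat([0, 1], 50)
pca, kdes = fit_surrogate_classifier(X, y)
print(predict(pca, kdes, X[:5]))
```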
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these network-generated data and to make decisions
pertaining to the proper functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth in complexity that optical networks have faced in
recent years. This increase in complexity stems from the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy: to stimulate further work in this
area, we conclude the paper by proposing possible new research directions.
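As a toy illustration of the kind of network-data analysis such surveys cover (not an example taken from this one), the sketch below trains a classifier to predict whether a candidate lightpath meets a quality-of-transmission threshold from simple path features. The feature set and the synthetic labeling rule are assumptions made purely for demonstration.

```python
# Hypothetical QoT-estimation toy: features and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Assumed features: total length (km), span count, symbol rate (GBd),
# and modulation order (bits/symbol).
length = rng.uniform(50, 3000, n)
spans = np.ceil(length / 80)
baud = rng.choice([32, 64], n)
bits = rng.choice([2, 4, 6], n)
X = np.column_stack([length, spans, baud, bits])
# Synthetic ground truth: longer paths and denser constellations fail more.
ok = (length * bits * (baud / 32) + rng.normal(0, 500, n)) < 6000
Xtr, Xte, ytr, yte = train_test_split(X, ok, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
print(f"held-out accuracy: {clf.score(Xte, yte):.2f}")
```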
Background modeling for video sequences by stacked denoising autoencoders
Nowadays, the analysis and extraction of relevant information from visual
data flows is of paramount importance. These image sequences can last for
hours, which implies that the model must adapt to all kinds of circumstances
so that the performance of the system does not decay over time. In this paper
we propose a methodology for background modeling and foreground detection
whose main characteristic is its robustness against stationary noise. Stacked
denoising autoencoders are applied to generate a set of robust features for
each region or patch of the image, which serve as the input to a
probabilistic model that determines whether that region is background or
foreground. Evaluation on a set of heterogeneous sequences shows that,
although our proposal performs comparably to the classical methods in the
literature, the inclusion of noise in these sequences causes drastic
performance drops in the competing methods, while the performance of our
method holds steady or falls only slightly.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
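A minimal sketch of the pipeline described above, under stated assumptions rather than the paper's exact architecture: a single denoising autoencoder (the paper stacks several) learns per-patch features, and a diagonal Gaussian fit to the background codes scores incoming patches, flagging low-likelihood ones as foreground. Patch size, code size, noise level, and threshold are all illustrative.

```python
import torch
import torch.nn as nn

PATCH_DIM, CODE_DIM = 64, 16  # 8x8 grayscale patches (assumption)

class DenoisingAE(nn.Module):
    """One denoising autoencoder layer; the paper stacks several."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(PATCH_DIM, CODE_DIM), nn.ReLU())
        self.dec = nn.Linear(CODE_DIM, PATCH_DIM)

    def forward(self, x):
        return self.dec(self.enc(x))

def train_dae(patches, epochs=200, noise=0.2):
    model = DenoisingAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        noisy = patches + noise * torch.randn_like(patches)  # corrupt input
        loss = nn.functional.mse_loss(model(noisy), patches)  # reconstruct clean
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# Toy stand-in for background patches gathered from training frames.
background = torch.rand(500, PATCH_DIM)
model = train_dae(background)
with torch.no_grad():
    codes = model.enc(background)
    mu, sigma = codes.mean(0), codes.std(0) + 1e-6  # Gaussian over bg codes
    new_code = model.enc(torch.rand(1, PATCH_DIM))  # code of incoming patch
    z = ((new_code - mu) / sigma).abs().mean()      # normalized deviation
print("foreground" if z > 2.0 else "background")    # 2.0: arbitrary threshold
```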
Modeling sequences and temporal networks with dynamic community structures
In evolving complex systems such as air traffic and social organizations,
collective effects emerge from their many components' dynamic interactions.
While the dynamic interactions can be represented by temporal networks with
nodes and links that change over time, they remain highly complex. It is
therefore often necessary to use methods that extract the temporal networks'
large-scale dynamic community structure. However, such methods are subject to
overfitting or suffer from effects of arbitrary, a priori imposed timescales,
which should instead be extracted from data. Here we simultaneously address
both problems and develop a principled data-driven method that determines
relevant timescales and identifies patterns of dynamics that take place on
networks as well as shape the networks themselves. We base our method on an
arbitrary-order Markov chain model with community structure, and develop a
nonparametric Bayesian inference framework that identifies the simplest such
model that can explain temporal interaction data.
Comment: 15 pages, 6 figures, 2 tables
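To make the model-selection idea concrete, here is a minimal sketch that swaps the paper's nonparametric Bayesian evidence for a simple BIC score: it fits Markov chains of increasing order to a symbol sequence (standing in for a stream of temporal-network events) and keeps the simplest order the data support. The toy sequence and the scoring are assumptions for illustration only.

```python
# Sketch: Markov-order selection by BIC (a stand-in for Bayesian evidence).
import numpy as np
from collections import Counter

def bic_markov(seq, order, alphabet):
    """BIC of a fixed-order Markov chain fit by maximum likelihood."""
    ctx, trans = Counter(), Counter()
    for i in range(order, len(seq)):
        c = tuple(seq[i - order:i])
        ctx[c] += 1
        trans[c + (seq[i],)] += 1
    loglik = sum(n * np.log(n / ctx[k[:-1]]) for k, n in trans.items())
    n_params = len(alphabet) ** order * (len(alphabet) - 1)
    return -2 * loglik + n_params * np.log(len(seq) - order)

rng = np.random.default_rng(1)
# Toy second-order sequence: the next symbol usually depends on the last two.
seq = [0, 1]
for _ in range(5000):
    nxt = (seq[-1] + seq[-2]) % 3 if rng.random() < 0.9 else int(rng.integers(0, 3))
    seq.append(nxt)
scores = {k: bic_markov(seq, k, alphabet=[0, 1, 2]) for k in range(4)}
print(min(scores, key=scores.get))  # typically selects order 2
```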