From large deviations to semidistances of transport and mixing: coherence analysis for finite Lagrangian data
One way to analyze complicated non-autonomous flows is to understand their
transport behavior. In a quantitative, set-oriented approach to transport and
mixing, finite-time coherent sets play an important role.
These are time-parametrized families of sets with unlikely transport to and
from their surroundings under small or vanishing random perturbations of the
dynamics. Here we propose, as a measure of transport and mixing for purely
advective (i.e., deterministic) flows, (semi)distances that arise under
vanishing perturbations in the sense of large deviations. Analogously, for
given finite Lagrangian trajectory data we derive a discrete-time and space
semidistance that comes from the "best" approximation of the randomly perturbed
process conditioned on this limited information of the deterministic flow. It
can be computed as a shortest path in a graph with time-dependent weights.
Furthermore, we argue that coherent sets are regions of maximal farness in
terms of transport and mixing, hence they occur as extremal regions on a
spanning structure of the state space under this semidistance---in fact, under
any distance measure arising from the physical notion of transport. Based on
this notion we develop a tool to analyze the state space (or the finite
trajectory data at hand) and identify coherent regions. We validate our
approach on idealized prototypical examples and well-studied standard cases.
Comment: J Nonlinear Sci, 201
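The shortest-path computation mentioned in the abstract can be illustrated with a Dijkstra-style search over a time-expanded graph, where the cost of an edge depends on the time step at which it is traversed. This is a minimal sketch, not the authors' implementation; the node set, weight function, and unit time step are illustrative assumptions.

```python
import heapq

def shortest_path_cost(nodes, weight, source, target, t0=0, max_steps=100):
    """Dijkstra-style search with time-dependent edge weights.

    `weight(u, v, t)` returns the cost of moving from u to v at time step t,
    or None if that edge is absent at that time. Each move advances time by
    one step; `max_steps` bounds the search horizon.
    """
    # Priority queue entries: (accumulated cost, current node, current time).
    pq = [(0.0, source, t0)]
    best = {}
    while pq:
        cost, u, t = heapq.heappop(pq)
        if u == target:
            return cost
        if t - t0 >= max_steps:
            continue  # horizon reached on this branch
        if best.get((u, t), float("inf")) <= cost:
            continue  # already reached (u, t) more cheaply
        best[(u, t)] = cost
        for v in nodes:
            w = weight(u, v, t)
            if w is not None:
                heapq.heappush(pq, (cost + w, v, t + 1))
    return float("inf")
```

With a weight function that is constant in t, this reduces to ordinary Dijkstra; the time index matters only when edges appear, vanish, or change cost along the trajectory window.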
Platform Relative Sensor Abstractions across Mobile Robots using Computer Vision and Sensor Integration
Uniform sensor management and abstraction across different robot platforms is a difficult task due to the sheer diversity of sensing devices. However, because these sensors can be grouped into categories that in essence provide the same information, we can capture their similarities and create abstractions. An example would be distance data measured by an assortment of range sensors, or alternatively extracted from a camera using image processing. This paper describes how, using software components, it is possible to uniformly construct high-level abstractions of sensor information across various robots in a way that supports the portability of common code that uses these abstractions (e.g. obstacle avoidance, wall following). We demonstrate our abstractions on a number of robots using different configurations of range sensors and cameras.
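The kind of sensor abstraction described above can be sketched as a common interface with device-specific backends. All class and method names here are hypothetical stand-ins, not the paper's actual component API.

```python
from abc import ABC, abstractmethod

class RangeSensor(ABC):
    """Platform-neutral range-data abstraction. Concrete backends wrap a
    physical device (sonar ring, laser, camera plus image processing) but
    all expose the same interface, so client code such as obstacle
    avoidance or wall following can be written once."""

    @abstractmethod
    def distances(self):
        """Return a list of distances (metres), one per bearing."""

class SonarRing(RangeSensor):
    """Backend for a ring of sonar sensors reporting metres directly."""
    def __init__(self, raw_readings):
        self._raw = raw_readings

    def distances(self):
        return list(self._raw)

class VisionRange(RangeSensor):
    """Backend for distances extracted from camera images, e.g. by
    ground-plane projection of obstacle pixels (details are
    platform-specific; the linear scaling below is a toy model)."""
    def __init__(self, pixel_rows, metres_per_row):
        self._rows = pixel_rows
        self._scale = metres_per_row

    def distances(self):
        return [r * self._scale for r in self._rows]

def min_clearance(sensor: RangeSensor):
    """Portable client code: works unchanged with any backend."""
    return min(sensor.distances())
```

Because `min_clearance` depends only on the abstraction, the same obstacle-avoidance logic ports across robots whose range data comes from entirely different hardware.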
Fuzzy Supernova Templates I: Classification
Modern supernova (SN) surveys are now uncovering stellar explosions at rates
that far surpass what the world's spectroscopic resources can handle. In order
to make full use of these SN datasets, it is necessary to use analysis methods
that depend only on the survey photometry. This paper presents two methods for
utilizing a set of SN light curve templates to classify SN objects. In the
first case we present an updated version of the Bayesian Adaptive Template
Matching program (BATM). To address some shortcomings of that strictly Bayesian
approach, we introduce a method for Supernova Ontology with Fuzzy Templates
(SOFT), which utilizes Fuzzy Set Theory for the definition and combination of
SN light curve models. For well-sampled light curves with a modest
signal-to-noise ratio (S/N > 10), the SOFT method can correctly separate thermonuclear
(Type Ia) SNe from core collapse SNe with 98% accuracy. In addition, the SOFT
method has the potential to classify supernovae into sub-types, providing
photometric identification of very rare or peculiar explosions. The accuracy
and precision of the SOFT method are verified using Monte Carlo simulations as
well as real SN light curves from the Sloan Digital Sky Survey and the
SuperNova Legacy Survey. In a subsequent paper the SOFT method is extended to
address the problem of parameter estimation, providing estimates of redshift,
distance, and host galaxy extinction without any spectroscopy.
Comment: 26 pages, 12 figures. Accepted to Ap
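The fuzzy-template idea can be illustrated with a toy membership function: each template class gets a degree of membership for the observed light curve, and the class with the highest degree wins. The Gaussian form, width, and averaging rule below are illustrative assumptions, not SOFT's actual definitions.

```python
import math

def membership(obs, template, sigma=0.1):
    """Fuzzy membership of an observed light curve in a template class:
    per-epoch Gaussian memberships combined by their mean. A toy stand-in
    for a fuzzy-template comparison, not the SOFT formula."""
    degrees = [math.exp(-((o - m) ** 2) / (2 * sigma ** 2))
               for o, m in zip(obs, template)]
    return sum(degrees) / len(degrees)

def classify(obs, templates):
    """Return the label of the template with the highest fuzzy membership.
    `templates` maps class labels to model light curves."""
    return max(templates, key=lambda label: membership(obs, templates[label]))
```

Unlike a hard classifier, the membership degrees themselves can be reported, so an ambiguous light curve shows up as comparable degrees in several classes rather than a single forced label.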
A weak characterization of slow variables in stochastic dynamical systems
We present a novel characterization of slow variables for continuous Markov
processes that provably preserve the slow timescales. These slow variables are
known as reaction coordinates in molecular dynamics applications, where they
play a key role in system analysis and coarse graining. The defining
characteristic of these slow variables is that they parametrize a so-called
transition manifold, a low-dimensional manifold in a certain density function
space that emerges with progressive equilibration of the system's fast
variables. The existence of said manifold was previously predicted for certain
classes of metastable and slow-fast systems. However, in the original work, the
existence of the manifold hinges on the pointwise convergence of the system's
transition density functions towards it. We show in this work that a
convergence on average with respect to the system's stationary measure is
sufficient to yield reaction coordinates with the same key qualities. This
allows one to accurately predict the timescale preservation in systems where
the old theory is not applicable or would give overly pessimistic results.
Moreover, the new characterization is still constructive, in that it allows for
the algorithmic identification of a good slow variable. The improved
characterization, the error prediction and the variable construction are
demonstrated on a small metastable system.
Enabling Explainable Fusion in Deep Learning with Fuzzy Integral Neural Networks
Information fusion is an essential part of numerous engineering systems and
biological functions, e.g., human cognition. Fusion occurs at many levels,
ranging from the low-level combination of signals to the high-level aggregation
of heterogeneous decision-making processes. While the last decade has witnessed
an explosion of research in deep learning, fusion in neural networks has not
observed the same revolution. Specifically, most neural fusion approaches are
ad hoc, are not well understood, are distributed rather than localized, and/or
offer little explainability (if any at all). Herein, we prove that the fuzzy
Choquet integral (ChI), a powerful nonlinear aggregation function, can be
represented as a multi-layer network, referred to hereafter as ChIMP. We also
put forth an improved ChIMP (iChIMP) that leads to a stochastic gradient
descent-based optimization in light of the exponential number of ChI inequality
constraints. An additional benefit of ChIMP/iChIMP is that it enables
eXplainable AI (XAI). Synthetic validation experiments are provided and iChIMP
is applied to the fusion of a set of heterogeneous architecture deep models in
remote sensing. We show an improvement in model accuracy and our previously
established XAI indices shed light on the quality of our data, model, and its
decisions.
Comment: IEEE Transactions on Fuzzy System
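The discrete Choquet integral that ChIMP encodes as a network can be computed directly from its standard definition: sort the inputs in descending order and weight each by the increment of the fuzzy measure on the growing set of top sources. The measure values in the test below are made up for illustration.

```python
def choquet_integral(x, g):
    """Discrete Choquet integral of inputs x = {source: value} with respect
    to a fuzzy measure g: a dict mapping frozensets of sources to [0, 1],
    with g[frozenset()] = 0 and g monotone under set inclusion."""
    # Sort sources by descending input value.
    order = sorted(x, key=x.get, reverse=True)
    total, prev = 0.0, 0.0
    subset = frozenset()
    for s in order:
        subset = subset | {s}
        # Weight the s-th largest input by the measure increment.
        total += x[s] * (g[subset] - prev)
        prev = g[subset]
    return total
```

When g is additive, the integral reduces to a weighted average; a non-additive g is what lets the ChI model interactions between sources, at the cost of the exponentially many monotonicity constraints the abstract refers to.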
Evolving Ensemble Fuzzy Classifier
The concept of ensemble learning offers a promising avenue in learning from
data streams under complex environments because it addresses the bias and
variance dilemma better than its single model counterpart and features a
reconfigurable structure, which is well suited to the given context. While
various extensions of ensemble learning for mining non-stationary data streams
can be found in the literature, most of them are crafted under a static base
classifier and revisit preceding samples in the sliding window for a
retraining step. This feature causes computationally prohibitive complexity and
is not flexible enough to cope with rapidly changing environments. Their
complexity is often demanding because they involve a large collection of
offline classifiers, owing to the absence of a structural complexity reduction
mechanism and the lack of an online feature selection mechanism. A novel evolving
ensemble classifier, namely the Parsimonious Ensemble (pENsemble), is proposed in
this paper. pENsemble differs from existing architectures in that it
is built upon an evolving classifier from data streams, termed the Parsimonious
Classifier (pClass). pENsemble is equipped with an ensemble pruning mechanism,
which estimates a localized generalization error of a base classifier. A
dynamic online feature selection scenario is integrated into pENsemble,
allowing input features to be selected and deselected on the fly. pENsemble
adopts a dynamic ensemble structure to output a final classification decision,
featuring a novel drift detection scenario to grow the ensemble structure. The
efficacy of pENsemble has been demonstrated through rigorous numerical studies
with dynamic and evolving data streams, where it delivers the most encouraging
performance in attaining a tradeoff between accuracy and complexity.
Comment: this paper has been published in IEEE Transactions on Fuzzy System
A Possibilistic and Probabilistic Approach to Precautionary Saving
This paper proposes two mixed models to study a consumer's optimal saving in
the presence of two types of risk.
Comment: Panoeconomicus, 201
Analysing imperfect temporal information in GIS using the Triangular Model
Rough sets and fuzzy sets are two frequently used approaches for modelling and reasoning about imperfect time intervals. In this paper, we focus on imperfect time intervals that can be modelled by rough sets and use an innovative graphic model, the Triangular Model (TM), to represent this kind of imperfect time interval. This work shows that TM is potentially advantageous for visualizing and querying imperfect time intervals, and that its analytical power can be better exploited when it is implemented in a computer application with graphical user interfaces and interactive functions. Moreover, a probabilistic framework is proposed to handle uncertainty issues in temporal queries. We use a case study to illustrate how the unique insights gained from TM can assist a geographical information system in exploratory spatio-temporal analysis.
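In a common formulation of the Triangular Model, an interval [a, b] is drawn as the apex ((a+b)/2, (b-a)/2) of an isosceles triangle whose base is the interval on the time axis, and interval relations such as "during" become point-in-region tests. The sketch below assumes that formulation and crisp intervals; function names are ours, and the rough-set and probabilistic machinery of the paper is not modelled.

```python
def tm_point(start, end):
    """Triangular Model: represent the interval [start, end] as the single
    2-D point (midpoint, half-length), i.e. the apex of an isosceles
    triangle whose base is the interval on the time axis."""
    assert start <= end
    return ((start + end) / 2.0, (end - start) / 2.0)

def contained_in(candidate, query):
    """True if `candidate` lies within `query` (the 'during' relation,
    endpoints included). In TM coordinates this is a point-in-triangle
    test below the query interval's apex."""
    cx, cy = tm_point(*candidate)
    qx, qy = tm_point(*query)
    # The candidate apex must sit inside the downward 45-degree triangle
    # under the query apex: lower, and within its two bounding lines.
    return (cy <= qy) and (abs(cx - qx) <= qy - cy)
```

The point-in-triangle test is algebraically equivalent to the endpoint conditions start >= query_start and end <= query_end, which is what makes such queries easy to pose visually in TM.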