Neural Simplex Architecture
We present the Neural Simplex Architecture (NSA), a new approach to runtime
assurance that provides safety guarantees for neural controllers (obtained e.g.
using reinforcement learning) of autonomous and other complex systems without
unduly sacrificing performance. NSA is inspired by the Simplex control
architecture of Sha et al., but with some significant differences. In the
traditional approach, the advanced controller (AC) is treated as a black box;
when the decision module switches control to the baseline controller (BC), the
BC remains in control forever. There is relatively little work on switching
control back to the AC, and there are no techniques for correcting the AC's
behavior after it generates a potentially unsafe control input that causes a
failover to the BC. Our NSA addresses both of these limitations. NSA not only
provides safety assurances in the presence of a possibly unsafe neural
controller, but can also improve the safety of such a controller in an online
setting via retraining, without overly degrading its performance. To
demonstrate NSA's benefits, we have conducted several significant case studies
in the continuous control domain. These include a target-seeking ground rover
navigating an obstacle field, and a neural controller for an artificial
pancreas system.

Comment: 12th NASA Formal Methods Symposium (NFM 2020)
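The forward/reverse switching logic described above can be illustrated with a toy sketch. The one-dimensional dynamics, the two controllers, and the safety limit below are hypothetical stand-ins for illustration only, not the paper's rover or artificial-pancreas case studies, and the sketch omits NSA's online retraining component.

```python
# Toy sketch of a Simplex-style decision module with reverse switching.
# All dynamics, controllers, and thresholds here are illustrative assumptions.

SAFE_LIMIT = 10.0   # state must stay within [-SAFE_LIMIT, SAFE_LIMIT]
MARGIN = 2.0        # reverse-switch only when well inside the safe region

def advanced_controller(x):
    # stand-in for a neural controller: aggressive moves toward a target at 12,
    # which lies outside the safe set, so its proposals can be unsafe
    return 3.0 if x < 12 else -3.0

def baseline_controller(x):
    # verified conservative controller: small steps back toward the origin
    return -0.5 if x > 0 else 0.5

def is_safe(x, u):
    # forward-switching check: reject any action whose next state leaves the safe set
    return abs(x + u) <= SAFE_LIMIT

def run(x0, steps):
    x, in_bc, trace = x0, False, []
    for _ in range(steps):
        u_ac = advanced_controller(x)
        if not in_bc and is_safe(x, u_ac):
            u = u_ac
        else:
            in_bc = True
            u = baseline_controller(x)
            # reverse switch: hand control back to the AC once the state is
            # safely inside the margin and the AC's proposal is safe again
            if abs(x) <= SAFE_LIMIT - MARGIN and is_safe(x, u_ac):
                in_bc = False
        x += u
        trace.append(x)
    return trace

trace = run(0.0, 20)
assert all(abs(x) <= SAFE_LIMIT for x in trace)
```

The baseline controller takes over whenever the advanced controller's proposal would leave the safe set, and control returns to the advanced controller only after the baseline has restored a safety margin.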
Association Mining in Database Machine
Association rules are widely used across data mining technologies. The Apriori algorithm is the fundamental association rule mining algorithm. The FP-growth tree algorithm improves performance by reducing the generation of frequent item sets. The Simplex algorithm is an advanced FP-growth variant that uses a bitmap structure together with the simplex concept from geometry. The bitmap implementation is designed specifically for storing data in database machines, so that association rule mining can be computed in parallel.
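For reference, the Apriori algorithm the abstract names can be sketched as a candidate-generation and pruning loop over frequent itemsets. The transactions and support threshold below are illustrative, not from the paper.

```python
# Minimal Apriori sketch: level-wise frequent-itemset mining with the
# classic join and prune steps. Data and min_support are illustrative.
from itertools import combinations

def apriori(transactions, min_support):
    transactions = [frozenset(t) for t in transactions]
    items = {frozenset([i]) for t in transactions for i in t}

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / len(transactions)

    frequent = {}
    current = {s for s in items if support(s) >= min_support}
    k = 1
    while current:
        frequent.update({s: support(s) for s in current})
        # join step: merge frequent k-itemsets into (k+1)-candidates
        candidates = {a | b for a in current for b in current if len(a | b) == k + 1}
        # prune step: every k-subset of a surviving candidate must be frequent
        current = {c for c in candidates
                   if all(frozenset(sub) in frequent for sub in combinations(c, k))
                   and support(c) >= min_support}
        k += 1
    return frequent

txns = [{"bread", "milk"}, {"bread", "beer"}, {"bread", "milk", "beer"}, {"milk"}]
freq = apriori(txns, min_support=0.5)
assert frozenset(["bread", "milk"]) in freq
```

FP-growth avoids this explicit candidate generation by compressing the transactions into a prefix tree, which is the performance gain the abstract refers to.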
Quantitative toxicity prediction using topology based multi-task deep neural networks
The understanding of toxicity is of paramount importance to human health and
environmental protection. Quantitative toxicity analysis has become a new
standard in the field. This work introduces element specific persistent
homology (ESPH), an algebraic topology approach, for quantitative toxicity
prediction. ESPH retains crucial chemical information during the topological
abstraction of geometric complexity and provides a representation of small
molecules that cannot be obtained by any other method. To investigate the
representability and predictive power of ESPH for small molecules, ancillary
descriptors have also been developed based on physical models. Topological and
physical descriptors are paired with advanced machine learning algorithms, such
as deep neural network (DNN), random forest (RF) and gradient boosting decision
tree (GBDT), to facilitate their applications to quantitative toxicity
predictions. A topology based multi-task strategy is proposed to take the
advantage of the availability of large data sets while dealing with small data
sets. Four benchmark toxicity data sets that involve quantitative measurements
are used to validate the proposed approaches. Extensive numerical studies
indicate that the proposed topological learning methods are able to outperform
the state-of-the-art methods in the literature for quantitative toxicity
analysis. Our online server for computing element-specific topological
descriptors (ESTDs) is available at http://weilab.math.msu.edu/TopTox/

Comment: arXiv admin note: substantial text overlap with arXiv:1703.1095
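The multi-task strategy described above pairs a shared representation of the molecular descriptors with separate heads per toxicity endpoint, so large data sets can inform predictions on small ones. The forward-pass sketch below shows only this shared-trunk structure; the layer sizes, endpoint names, and random untrained weights are hypothetical, not the paper's architecture.

```python
# Shared-trunk multi-task network, forward pass only. Sizes, endpoint
# names, and weights are illustrative assumptions; nothing is trained.
import numpy as np

rng = np.random.default_rng(0)

W_shared = rng.normal(size=(5, 8))                 # shared descriptor encoder
heads = {"LD50": rng.normal(size=(8, 1)),          # one regression head
         "LC50": rng.normal(size=(8, 1))}          # per toxicity endpoint

def forward(x, task):
    h = np.tanh(x @ W_shared)   # shared representation used by every task
    return h @ heads[task]      # task-specific prediction

x = rng.normal(size=(3, 5))     # 3 molecules, 5 toy descriptors
y_ld50 = forward(x, "LD50")
y_lc50 = forward(x, "LC50")
assert y_ld50.shape == (3, 1) and y_lc50.shape == (3, 1)
```

Training would backpropagate each endpoint's loss through its own head and the shared trunk, so gradients from the large data sets shape the representation used by the small ones.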
Deep Divergence-Based Approach to Clustering
A promising direction in deep learning research consists in learning
representations and simultaneously discovering cluster structure in unlabeled
data by optimizing a discriminative loss function. As opposed to supervised
deep learning, this line of research is in its infancy, and how to design and
optimize suitable loss functions to train deep neural networks for clustering
is still an open question. Our contribution to this emerging field is a new
deep clustering network that leverages the discriminative power of
information-theoretic divergence measures, which have been shown to be
effective in traditional clustering. We propose a novel loss function that
incorporates geometric regularization constraints, thus avoiding degenerate
structures of the resulting clustering partition. Experiments on synthetic
benchmarks and real datasets show that the proposed network achieves
competitive performance with respect to other state-of-the-art methods, scales
well to large datasets, and does not require pre-training steps.
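One concrete member of the information-theoretic divergence family the abstract invokes is the Cauchy-Schwarz divergence, which can be estimated from samples with Gaussian kernels. The sketch below is a generic sample-based estimator, not the paper's loss function; the bandwidth and the synthetic clusters are illustrative.

```python
# Kernel estimate of the Cauchy-Schwarz divergence between two sample sets:
# D_CS(p, q) = -log( (∫pq)^2 / (∫p^2 ∫q^2) ). Bandwidth and data are illustrative.
import numpy as np

def gaussian_kernel_matrix(a, b, sigma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def cauchy_schwarz_divergence(a, b, sigma=1.0):
    pq = gaussian_kernel_matrix(a, b, sigma).mean()   # cross term
    pp = gaussian_kernel_matrix(a, a, sigma).mean()   # self terms
    qq = gaussian_kernel_matrix(b, b, sigma).mean()
    return -np.log(pq ** 2 / (pp * qq))

rng = np.random.default_rng(0)
near = rng.normal(0.0, 1.0, size=(50, 2))   # cluster around the origin
far = rng.normal(5.0, 1.0, size=(50, 2))    # well-separated cluster

# well-separated clusters yield a larger divergence than identical samples
assert cauchy_schwarz_divergence(near, far) > cauchy_schwarz_divergence(near, near)
```

Used as a clustering loss, maximizing such a divergence between cluster distributions pushes clusters apart, which is why an additional geometric regularizer is needed to rule out degenerate partitions.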