Toward Interpretable Deep Reinforcement Learning with Linear Model U-Trees
Deep Reinforcement Learning (DRL) has achieved impressive success in many
applications. A key component of many DRL models is a neural network
representing a Q function, to estimate the expected cumulative reward following
a state-action pair. The Q-function network thus encodes substantial implicit
knowledge about the RL problem, but this knowledge often remains unexamined and
uninterpreted. To our knowledge, this work develops the first mimic learning
framework for Q functions in DRL. We introduce Linear Model U-trees (LMUTs) to
approximate neural network predictions. An LMUT is learned using a novel
on-line algorithm that is well-suited for an active play setting, where the
mimic learner observes an ongoing interaction between the neural net and the
environment. Empirical evaluation shows that an LMUT mimics a Q function
substantially better than five baseline methods. The transparent tree structure
of an LMUT facilitates understanding the network's learned knowledge by
analyzing feature influence, extracting rules, and highlighting the
super-pixels in image inputs.
Comment: This paper is accepted by ECML-PKDD 201
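A minimal sketch of the mimic-learning setup, under stated assumptions: scikit-learn's DecisionTreeRegressor stands in for an LMUT (the paper's actual tree model is different), and q_network is a toy stand-in for a trained DRL network.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy stand-in for a trained Q network: any function from states to Q values.
def q_network(states):
    return np.sin(states[:, 0]) + 0.5 * states[:, 1] ** 2

# "Active play" data: states observed during interaction, labeled by the net.
states = rng.uniform(-2, 2, size=(5000, 2))
q_values = q_network(states)

# Mimic learner: an interpretable regressor fit to the network's predictions.
mimic = DecisionTreeRegressor(max_depth=4).fit(states, q_values)

# Fidelity of the mimic on held-out states.
test_states = rng.uniform(-2, 2, size=(1000, 2))
mse = np.mean((mimic.predict(test_states) - q_network(test_states)) ** 2)
print(round(mse, 3))
```

The fitted tree's splits and feature importances are then inspectable in exactly the way the abstract describes for LMUTs (feature influence, rule extraction).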
The Structured Process Modeling Method (SPMM): what is the best way for me to construct a process model?
More and more organizations turn to the construction of process models to support strategic and operational tasks. At the same time, reports indicate quality issues for a considerable part of these models, caused by modeling errors. Therefore, the research described in this paper investigates the development of a practical method to determine and train an optimal process modeling strategy that aims to decrease the number of cognitive errors made during modeling. Such cognitive errors originate in inadequate cognitive processing caused by the inherent complexity of constructing process models. The method helps modelers derive their personal cognitive profile and the related optimal cognitive strategy that minimizes these cognitive failures. The contribution of the research consists of the conceptual method and an automated modeling strategy selection and training instrument. These two artefacts are positively evaluated by a laboratory experiment covering multiple modeling sessions and involving a total of 149 master students at Ghent University.
Exact Failure Frequency Calculations for Extended Systems
This paper shows how the steady-state availability and failure frequency can
be calculated in a single pass for very large systems, when the availability is
expressed as a product of matrices. We apply the general procedure to
k-out-of-n:G and linear consecutive k-out-of-n:F systems, and to a
simple ladder network in which each edge and node may fail. We also give the
associated generating functions when the components have identical
availabilities and failure rates. For large systems, the failure rate of the
whole system is asymptotically proportional to its size. This paves the way to
ready-to-use formulae for various architectures, as well as a proof that the
differential operator approach to failure frequency calculations is both useful
and straightforward.
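As a hedged illustration of the quantities involved (closed-form identities for identical, independent two-state components, not the paper's matrix-product derivation): a k-out-of-n:G system is up when at least k components are up, and it leaves the up set only when exactly k components are up and one of them fails.

```python
from math import comb

def kofn_availability(n, k, p):
    """P(at least k of n components up); p = component steady-state availability."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def kofn_failure_frequency(n, k, lam, mu):
    """System failures per unit time, for component failure rate lam and
    repair rate mu: exit the up set from the boundary state (exactly k up)
    via one of the k possible component failures."""
    p = mu / (lam + mu)  # component availability
    return comb(n, k) * p**k * (1 - p)**(n - k) * k * lam

print(round(kofn_availability(3, 2, 0.9), 4))  # 2-out-of-3:G -> 0.972
```

For n = k = 1 the frequency reduces to the familiar single-component result p·λ = μλ/(λ+μ), a quick sanity check on the formula.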
Evolution of Network Architecture in a Granular Material Under Compression
As a granular material is compressed, the particles and forces within the system arrange to form complex and heterogeneous collective structures. Force chains are a prime example of such structures, and are thought to constrain bulk properties such as mechanical stability and acoustic transmission. However, capturing and characterizing the evolving nature of the intrinsic inhomogeneity and mesoscale architecture of granular systems can be challenging. A growing body of work has shown that graph theoretic approaches may provide a useful foundation for tackling these problems. Here, we extend the current approaches by utilizing multilayer networks as a framework for directly quantifying the progression of mesoscale architecture in a compressed granular system. We examine a quasi-two-dimensional aggregate of photoelastic disks, subject to biaxial compressions through a series of small, quasistatic steps. Treating particles as network nodes and interparticle forces as network edges, we construct a multilayer network for the system by linking together the series of static force networks that exist at each strain step. We then extract the inherent mesoscale structure from the system by using a generalization of community detection methods to multilayer networks, and we define quantitative measures to characterize the changes in this structure throughout the compression process. We separately consider the network of normal and tangential forces, and find that they display a different progression throughout compression. To test the sensitivity of the network model to particle properties, we examine whether the method can distinguish a subsystem of low-friction particles within a bath of higher-friction particles. We find that this can be achieved by considering the network of tangential forces, and that the community structure is better able to separate the subsystem than a purely local measure of interparticle forces alone. 
The results discussed throughout this study suggest that these network science techniques may provide a direct way to compare and classify data from systems under different external conditions or with different physical makeup.
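A hedged sketch of the multilayer construction only (the community-detection step is omitted, and all force values are illustrative random numbers, not experimental data): each layer is the force-weighted contact network at one strain step, and interlayer edges of weight omega link each particle to its own copy in adjacent layers, yielding a supra-adjacency matrix on which multilayer community detection can run.

```python
import numpy as np

n_particles, n_layers, omega = 4, 3, 1.0
rng = np.random.default_rng(1)

# One symmetric force network per strain step (zero diagonal).
layers = []
for _ in range(n_layers):
    w = np.triu(rng.uniform(0, 1, size=(n_particles, n_particles)), 1)
    layers.append(w + w.T)

# Supra-adjacency: block diagonal of the per-step force networks, plus
# identity coupling between copies of a particle in consecutive layers.
N = n_particles * n_layers
supra = np.zeros((N, N))
for l, w in enumerate(layers):
    supra[l*n_particles:(l+1)*n_particles, l*n_particles:(l+1)*n_particles] = w
for l in range(n_layers - 1):
    for i in range(n_particles):
        a, b = l*n_particles + i, (l+1)*n_particles + i
        supra[a, b] = supra[b, a] = omega

print(supra.shape)  # (12, 12)
```

The normal-force and tangential-force multilayer networks the abstract contrasts would simply use different edge weights in the per-layer blocks.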
Measuring economic complexity of countries and products: which metric to use?
Evaluating the economies of countries and their relations with products in
the global market is a central problem in economics, with far-reaching
implications for our theoretical understanding of international trade as
well as to practical applications, such as policy making and financial
investment planning. The recent Economic Complexity approach aims to quantify
the competitiveness of countries and the quality of the exported products based
on the empirical observation that the most competitive countries have
diversified exports, whereas developing countries export only a few low-quality
products -- typically those also exported by many other countries. Two different
metrics, Fitness-Complexity and the Method of Reflections, have been proposed
to measure country and product scores in the Economic Complexity framework. We
use international trade data and a recent ranking evaluation measure to
quantitatively compare the ability of the two metrics to rank countries and
products according to their importance in the network. The results show that
the Fitness-Complexity metric outperforms the Method of Reflections in both the
ranking of products and the ranking of countries. We also investigate a
generalization of the Fitness-Complexity metric and show that it can produce
improved rankings, provided that the input data are reliable.
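A hedged sketch of the Fitness-Complexity iteration as it is commonly stated (the country-product matrix M here is a three-line toy, not the international trade data used in the study): a country's fitness sums the complexities of its products, while a product's complexity is suppressed when low-fitness countries also export it.

```python
import numpy as np

# Toy binary country-product export matrix (illustrative only).
M = np.array([[1, 1, 1],    # diversified country: exports everything
              [1, 1, 0],
              [0, 1, 0]])   # exports only one ubiquitous product

F = np.ones(M.shape[0])     # country fitness
Q = np.ones(M.shape[1])     # product complexity
for _ in range(50):
    F_new = M @ Q                       # fitness: sum of product complexities
    Q_new = 1.0 / (M.T @ (1.0 / F))    # complexity: harmonic-style suppression
    F = F_new / F_new.mean()            # normalize at every iteration
    Q = Q_new / Q_new.mean()

print(np.argsort(-F))  # countries ranked fittest-first -> [0 1 2]
```

On this toy matrix the diversified country ends up fittest and the product exported only by that country ends up most complex, which is the qualitative behavior the metric is designed to capture.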
Linking component importance to optimisation of preventive maintenance policy
In reliability engineering, the time spent performing preventive maintenance (PM) on a component may reduce system availability if system operation must be stopped for the PM. To avoid such a reduction, one may adopt the following approach: when a component fails, PM is carried out on a number of the other components while the failed component is being repaired. This ensures that PM does not consume the system's operating time. However, it raises a question: which components should be selected for PM? This paper introduces an importance measure, called Component Maintenance Priority (CMP), which is used to select components for PM. The paper then compares the CMP with other importance measures and studies its properties. Numerical examples are given to show the validity of the CMP.
2D Reconstruction of Small Intestine's Interior Wall
Examining and interpreting a large number of wireless endoscopic images
from the gastrointestinal tract is a tiresome task for physicians. A practical
solution is to automatically construct a two dimensional representation of the
gastrointestinal tract for easy inspection. However, little has been done on
wireless endoscopic image stitching, let alone systematic investigation. The
proposed new wireless endoscopic image stitching method consists of two main
steps to improve the accuracy and efficiency of image registration. First, the
keypoints are extracted by the Principal Component Analysis-Scale Invariant
Feature Transform (PCA-SIFT) algorithm and refined with Maximum Likelihood
Estimation SAmple Consensus (MLESAC) outlier removal to find the most reliable
keypoints. Second, the transformation parameters obtained from the first
step are fed to the Normalised Mutual Information (NMI) algorithm as an initial
solution. With a modified Marquardt-Levenberg search strategy in a multiscale
framework, the NMI can find the optimal transformation parameters in the
shortest time. The proposed methodology has been tested on two different
datasets - one with real wireless endoscopic images and another with images
obtained from Micro-Ball (a new wireless cubic endoscopy system with six image
sensors). The results have demonstrated the accuracy and robustness of the
proposed methodology both visually and quantitatively.
Comment: Journal draft
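A hedged sketch of the Normalised Mutual Information (NMI) similarity used as the registration objective, computed on synthetic arrays rather than endoscopic images: NMI(A, B) = (H(A) + H(B)) / H(A, B), estimated from a joint intensity histogram.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def nmi(img_a, img_b, bins=32):
    """Normalised mutual information from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    return (entropy(pxy.sum(axis=1)) + entropy(pxy.sum(axis=0))) / entropy(pxy.ravel())

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
print(nmi(img, img))                       # perfect alignment scores 2.0
print(nmi(img, np.roll(img, 7, axis=1)))   # misalignment lowers the score
```

In the pipeline described above, the Marquardt-Levenberg search varies the transformation parameters to maximize this score, coarse-to-fine across the multiscale framework.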
Proceedings of MathSport International 2017 Conference
Proceedings of MathSport International 2017 Conference, held in the Botanical Garden of the University of Padua, June 26-28, 2017.
MathSport International organizes biennial conferences dedicated to all topics where mathematics and sport meet.
Topics include: performance measures, optimization of sports performance, statistics and probability models, mathematical and physical models in sports, competitive strategies, statistics and probability match outcome models, optimal tournament design and scheduling, decision support systems, analysis of rules and adjudication, econometrics in sport, analysis of sporting technologies, financial valuation in sport, e-sports (gaming), betting and sports