Temporal-Difference Learning to Assist Human Decision Making during the Control of an Artificial Limb
In this work we explore the use of reinforcement learning (RL) to help with
human decision making, combining state-of-the-art RL algorithms with an
application to prosthetics. Managing human-machine interaction is a problem of
considerable scope, and the simplification of human-robot interfaces is
especially important in the domains of biomedical technology and rehabilitation
medicine. For example, amputees who control artificial limbs are often required
to quickly switch between a number of control actions or modes of operation in
order to operate their devices. We suggest that by learning to anticipate
(predict) a user's behaviour, artificial limbs could take on an active role in
a human's control decisions so as to reduce the burden on their users.
Recently, we showed that RL in the form of general value functions (GVFs) could
be used to accurately detect a user's control intent prior to their explicit
control choices. In the present work, we explore the use of temporal-difference
learning and GVFs to predict when users will switch their control influence
between the different motor functions of a robot arm. Experiments were
performed using a multi-function robot arm that was controlled by muscle
signals from a user's body (similar to conventional artificial limb control).
Our approach was able to acquire and maintain forecasts about a user's
switching decisions in real time. It also provides an intuitive and reward-free
way for users to correct or reinforce the decisions made by the machine
learning system. We expect that when a system is certain enough about its
predictions, it can begin to take over switching decisions from the user to
streamline control and potentially decrease the time and effort needed to
complete tasks. This preliminary study therefore suggests a way to naturally
integrate human- and machine-based decision making systems.

Comment: 5 pages, 4 figures. This version to appear at the 1st Multidisciplinary Conference on Reinforcement Learning and Decision Making, Princeton, NJ, USA, Oct. 25-27, 201
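As a generic illustration of the temporal-difference machinery behind general value functions (not the paper's actual experimental setup), the sketch below runs TD(0) on a toy two-state process in which a binary "switch" signal serves as the pseudo-reward; the dynamics, learning rate, and discount are all invented for illustration:

```python
import random

# TD(0) sketch of a general value function (GVF) that learns to predict an
# upcoming binary "switch" signal. The two states, transition probabilities,
# and hyperparameters are toy assumptions, not the paper's robot-arm setup.

def td0_gvf(num_steps=5000, alpha=0.1, gamma=0.9, seed=0):
    rng = random.Random(seed)
    # State 0 = "user mid-movement", state 1 = "user about to switch".
    # The pseudo-reward (cumulant) is 1 on any step where a switch occurs.
    v = [0.0, 0.0]          # learned value estimate per state
    state = 0
    for _ in range(num_steps):
        # Toy dynamics: switches are far more likely from state 1.
        switch = rng.random() < (0.8 if state == 1 else 0.05)
        cumulant = 1.0 if switch else 0.0
        next_state = rng.choice([0, 1])
        # TD(0) update toward the one-step bootstrapped target.
        v[state] += alpha * (cumulant + gamma * v[next_state] - v[state])
        state = next_state
    return v

values = td0_gvf()
```

After training, the learned prediction should be clearly higher in the "about to switch" state, mirroring the kind of anticipatory signal the abstract describes.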
Scalable and Interpretable One-class SVMs with Deep Learning and Random Fourier features
The one-class support vector machine (OC-SVM) has long been one of the most effective anomaly detection methods and has been extensively adopted in both research and industrial applications. The biggest issue for OC-SVM, however, is its limited capability to operate on large, high-dimensional datasets due to optimization complexity. Those problems might be mitigated via dimensionality
reduction techniques such as manifold learning or autoencoder. However,
previous work often treats representation learning and anomaly prediction
separately. In this paper, we propose the autoencoder-based one-class support vector machine (AE-1SVM), which brings OC-SVM into a deep learning context with the aid of random Fourier features to approximate the radial basis function kernel: it combines OC-SVM with a representation learning architecture and jointly exploits stochastic gradient descent to enable end-to-end training. Interestingly, this also opens up the possibility of using gradient-based attribution methods to explain the decision making in anomaly detection, which has long been challenging as a result of the implicit mapping between the input space and the kernel space.
To the best of our knowledge, this is the first work to study the
interpretability of deep learning in anomaly detection. We evaluate our method
on a wide range of unsupervised anomaly detection tasks in which our end-to-end
training architecture achieves a performance significantly better than the
previous work using separate training.

Comment: Accepted at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD) 201
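The random-Fourier-feature approximation the abstract relies on can be sketched generically (following Rahimi and Recht's construction, not the AE-1SVM authors' implementation): the RBF kernel k(x, y) = exp(-γ‖x − y‖²) is approximated by an inner product of explicit cosine features.

```python
import math
import random

# Random Fourier features (RFF): approximate the RBF kernel by an explicit
# feature map z(x) so that z(x)·z(y) ≈ exp(-gamma * ||x - y||^2).
# Generic illustration; dimensions and gamma are arbitrary choices.

def make_rff(dim, num_features, gamma, seed=0):
    rng = random.Random(seed)
    # Frequencies drawn from N(0, 2*gamma*I); offsets uniform on [0, 2*pi).
    sigma = math.sqrt(2.0 * gamma)
    weights = [[rng.gauss(0.0, sigma) for _ in range(dim)]
               for _ in range(num_features)]
    offsets = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(num_features)]
    return weights, offsets

def rff_map(x, weights, offsets):
    d = len(weights)
    return [math.sqrt(2.0 / d) * math.cos(sum(wi * xi for wi, xi in zip(w, x)) + b)
            for w, b in zip(weights, offsets)]

gamma = 0.5
weights, offsets = make_rff(dim=3, num_features=2000, gamma=gamma, seed=1)
x, y = [1.0, 0.0, -1.0], [0.5, 0.2, -0.5]
zx, zy = rff_map(x, weights, offsets), rff_map(y, weights, offsets)
approx = sum(a * b for a, b in zip(zx, zy))
exact = math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))
```

Because the feature map is explicit, the OC-SVM decision function becomes a differentiable composition of ordinary operations, which is what makes joint SGD training and gradient-based attribution possible.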
Visual analysis of discrimination in machine learning
The growing use of automated decision-making in critical applications, such
as crime prediction and college admission, has raised questions about fairness
in machine learning. How can we decide whether different treatments are
reasonable or discriminatory? In this paper, we investigate discrimination in
machine learning from a visual analytics perspective and propose an interactive
visualization tool, DiscriLens, to support a more comprehensive analysis. To
reveal detailed information on algorithmic discrimination, DiscriLens
identifies a collection of potentially discriminatory itemsets based on causal
modeling and classification rules mining. By combining an extended Euler
diagram with a matrix-based visualization, we develop a novel set visualization
to facilitate the exploration and interpretation of discriminatory itemsets. A
user study shows that users can interpret the visually encoded information in
DiscriLens quickly and accurately. Use cases demonstrate that DiscriLens
provides informative guidance in understanding and reducing algorithmic
discrimination.
Multi Criteria Decision Making Approach For Product Aspect Extraction And Ranking In Aspect-Based Sentiment Analysis
Identifying product aspects in customer reviews can have a great influence on both business strategies and customers' decisions. Presently, most research focuses on machine learning, statistical, and Natural Language Processing (NLP) techniques to identify product aspects in customer reviews. The challenge taken up in this research is to formulate aspect identification as a decision-making problem. To this end, we propose a product aspect identification approach that combines multi-criteria decision-making (MCDM) with sentiment analysis. The suggested approach consists of two stages, namely product aspect extraction and product aspect ranking.
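A minimal sketch of the ranking stage, using a simple weighted-sum MCDM model: the criteria (mention frequency, mean sentiment, review coverage), their weights, and all numbers below are invented illustrations, and the paper's actual MCDM method may differ.

```python
# Rank extracted product aspects with a weighted-sum MCDM model.
# Aspects, criteria values, and weights are hypothetical examples.
aspects = {
    "battery": {"frequency": 120, "sentiment": 0.6, "coverage": 0.8},
    "screen":  {"frequency": 80,  "sentiment": 0.9, "coverage": 0.5},
    "price":   {"frequency": 150, "sentiment": 0.4, "coverage": 0.9},
}
weights = {"frequency": 0.5, "sentiment": 0.3, "coverage": 0.2}

def rank_aspects(aspects, weights):
    # Min-max normalize each criterion so they are comparable, then
    # aggregate with the criterion weights (all criteria are benefit-type here).
    criteria = list(weights)
    lo = {c: min(a[c] for a in aspects.values()) for c in criteria}
    hi = {c: max(a[c] for a in aspects.values()) for c in criteria}
    def score(vals):
        return sum(weights[c] * (vals[c] - lo[c]) / (hi[c] - lo[c])
                   for c in criteria)
    return sorted(aspects, key=lambda name: score(aspects[name]), reverse=True)

ranking = rank_aspects(aspects, weights)
```

With these toy numbers, "price" ranks first because its high frequency and coverage outweigh its weaker sentiment under the chosen weights; changing the weights changes the ranking, which is exactly the lever an MCDM formulation exposes.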
Improved Weighted Random Forest for Classification Problems
Several studies have shown that combining machine learning models in an
appropriate way will introduce improvements in the individual predictions made
by the base models. The key to a well-performing ensemble model is the diversity of the base models. Among the most common solutions for introducing diversity into decision trees are bagging and random forest. Bagging
enhances the diversity by sampling with replacement and generating many
training data sets, while random forest adds selecting a random number of
features as well. This has made the random forest a winning candidate for many
machine learning applications. However, assuming equal weights for all base
decision trees does not seem reasonable as the randomization of sampling and
input feature selection may lead to different levels of decision-making
abilities across base decision trees. Therefore, we propose several algorithms
that intend to modify the weighting strategy of regular random forest and
consequently make better predictions. The designed weighting frameworks include
optimal weighted random forest based on accuracy, optimal weighted random
forest based on the area under the curve (AUC), performance-based weighted
random forest, and several stacking-based weighted random forest models. The
numerical results show that the proposed models are able to introduce
significant improvements compared to regular random forest.
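The accuracy-based weighting idea can be sketched with plain weighted voting. The "trees" below are hand-made threshold rules standing in for base decision trees, and the validation data are made up; this is an illustration of the weighting principle, not the paper's algorithms.

```python
# Accuracy-weighted voting: weight each base learner by its validation
# accuracy, then take a weighted majority vote. Toy data and models.

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

def weighted_vote(models, weights, x):
    # Add each model's weight to the class it predicts; return the argmax class.
    tally = {}
    for model, w in zip(models, weights):
        label = model(x)
        tally[label] = tally.get(label, 0.0) + w
    return max(tally, key=tally.get)

trees = [
    lambda x: int(x > 0.5),   # an accurate rule
    lambda x: 0,              # a weak, biased learner
    lambda x: 0,              # another weak, biased learner
]
validation = [(0.1, 0), (0.6, 1), (0.7, 1), (0.8, 1), (0.9, 1)]

weights = [accuracy(t, validation) for t in trees]   # [1.0, 0.2, 0.2]
prediction = weighted_vote(trees, weights, 0.8)
```

On this example an equal-weight majority vote (the regular random forest rule) would predict 0, since the two weak learners outnumber the accurate one; accuracy weighting lets the reliable tree win, which is the behavior the proposed frameworks aim for.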
A Survey of Contextual Optimization Methods for Decision Making under Uncertainty
Recently there has been a surge of interest in operations research (OR) and
the machine learning (ML) community in combining prediction algorithms and
optimization techniques to solve decision-making problems in the face of
uncertainty. This gave rise to the field of contextual optimization, under
which data-driven procedures are developed to prescribe actions to the
decision-maker that make the best use of the most recently updated information.
A large variety of models and methods have been presented in both OR and ML
literature under a variety of names, including data-driven optimization,
prescriptive optimization, predictive stochastic programming, policy
optimization, (smart) predict/estimate-then-optimize, decision-focused
learning, (task-based) end-to-end learning/forecasting/optimization, etc.
Focusing on single and two-stage stochastic programming problems, this review
article identifies three main frameworks for learning policies from data and
discusses their strengths and limitations. We present the existing models and
methods under a uniform notation and terminology and classify them according to
the three main frameworks identified. Our objective with this survey is to both
strengthen the general understanding of this active field of research and
stimulate further theoretical and algorithmic advancements in integrating ML
and stochastic programming.
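One of the frameworks the survey names, predict-then-optimize, can be sketched on a toy newsvendor problem. The linear demand model, residual-based demand distribution, prices, and data below are all illustrative assumptions, not the survey's notation or any cited method.

```python
# Predict-then-optimize on a toy newsvendor: fit a demand model to
# (context, demand) history, then pick the order quantity that is optimal
# for the predicted demand distribution. All numbers are invented.

history = [(1.0, 14.0), (2.0, 26.0), (3.0, 33.0), (4.0, 46.0)]

def fit_linear(data):
    # Ordinary least squares for demand ≈ a * context + b (1-D closed form).
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    a = (sum((x - mx) * (y - my) for x, y in data)
         / sum((x - mx) ** 2 for x, _ in data))
    return a, my - a * mx

def decide_order(context, model, residuals, price=5.0, cost=3.0):
    # Stage 1 (predict): point forecast plus empirical residual distribution.
    a, b = model
    forecast = a * context + b
    samples = sorted(forecast + r for r in residuals)
    # Stage 2 (optimize): newsvendor critical fractile (price - cost) / price.
    q = (price - cost) / price
    return samples[min(len(samples) - 1, int(q * len(samples)))]

model = fit_linear(history)
residuals = [y - (model[0] * x + model[1]) for x, y in history]
order = decide_order(2.5, model, residuals)
```

The split is the point: the prediction step is trained without reference to the downstream profit objective, which is precisely the limitation that decision-focused and end-to-end approaches surveyed in the article try to remove.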
Decision Support Systems
Decision support systems (DSS) have evolved over the past four decades from theoretical concepts into real-world computerized applications. DSS architecture contains three key components: a knowledge base, a computerized model, and a user interface. DSS simulate the cognitive decision-making functions of humans based on artificial intelligence methodologies (including expert systems, data mining, machine learning, connectionism, logical reasoning, etc.) in order to perform decision support functions. The applications of DSS cover many domains, ranging from aviation monitoring, transportation safety, clinical diagnosis, weather forecasting, and business management to internet search strategy. By combining knowledge bases with inference rules, DSS are able to provide suggestions to end users to improve decisions and outcomes. This book is written as a textbook so that it can be used in formal courses examining decision support systems. It may be used by both undergraduate and graduate students from diverse computer-related fields. It will also be of value to established professionals as a text for self-study or for reference.
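The "knowledge base plus inference rules" pattern the description mentions can be sketched as simple forward chaining. The facts and rules below are invented clinical-style examples, not content from the book.

```python
# Forward-chaining inference: repeatedly fire any rule whose premises all
# hold until no new facts can be derived. Facts and rules are toy examples.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "high_risk_patient"}, "recommend_clinician_review"),
]
derived = forward_chain({"fever", "cough", "high_risk_patient"}, rules)
```

Chaining matters here: the second rule only fires because the first one adds "flu_suspected" to the fact base, which is how a rule-based DSS turns raw observations into an actionable suggestion.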