6,667 research outputs found
Toward Efficient Automation of Interpretable Machine Learning Boosting
Developing efficient automated methods for Interpretable Machine Learning (IML) is an important long-term goal in the field of Artificial Intelligence. Currently, the Machine Learning landscape is dominated by Neural Networks (NNs) and Support Vector Machines (SVMs), models which are often highly accurate. Despite their high accuracy, such models are essentially “black boxes” and are therefore too risky for situations like healthcare where real lives are at stake. In such situations, so-called “glass-box” models, such as Decision Trees (DTs), Bayesian Networks (BNs), and Logic Relational (LR) models, are often preferred; however, they can suffer from accuracy limitations. Unfortunately, having to choose between an algorithm that is accurate or interpretable, but not both, has become a major obstacle to the wider adoption of Machine Learning. Previous research has proposed increasing the interpretability of black-box models by reducing model complexity, often degrading accuracy as a consequence. By taking the opposite approach, improving the accuracy of interpretable models rather than the interpretability of accurate black-box models, it is possible to construct “competitive glass-boxes” via two novel algorithms proposed in this research: Dominance Classifier Predictor (DCP) and Reverse Prediction Pattern Recognition (RPPR). Experiments with DCP boosted by RPPR have been conducted on several benchmark datasets, successfully raising the accuracy of interpretable models to match that of black-box models.
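The DCP and RPPR algorithms are not specified in this abstract, so nothing below is the paper's method. As a generic, hypothetical illustration of the "competitive glass-box" idea (raising the accuracy of an interpretable base learner by boosting rather than simplifying a black box), here is a minimal pure-Python AdaBoost sketch over one-split decision stumps:

```python
import math

# Toy 1-D dataset: label +1 on the middle interval, -1 elsewhere.
# No single stump (one threshold) can separate it, but a boosted
# ensemble of stumps can.
X = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]
y = [-1, -1, 1, 1, -1, -1]

def stump_predict(threshold, polarity, x):
    # A one-split "glass-box" base learner.
    return polarity if x >= threshold else -polarity

def best_stump(X, y, w):
    # Pick the threshold/polarity pair minimising weighted error.
    best = None
    for t in X:
        for p in (1, -1):
            err = sum(wi for xi, yi, wi in zip(X, y, w)
                      if stump_predict(t, p, xi) != yi)
            if best is None or err < best[0]:
                best = (err, t, p)
    return best

def adaboost(X, y, rounds=5):
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, t, p = best_stump(X, y, w)
        err = max(err, 1e-10)  # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, p))
        # Re-weight: emphasise the examples this stump got wrong.
        w = [wi * math.exp(-alpha * yi * stump_predict(t, p, xi))
             for xi, yi, wi in zip(X, y, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(t, p, x) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

model = adaboost(X, y)
```

The ensemble remains inspectable: each member is a single human-readable threshold rule with a weight, which is the sense in which boosted simple learners can stay "glass-box" while gaining accuracy.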
The Grammar of Interactive Explanatory Model Analysis
The growing need for in-depth analysis of predictive models leads to a series
of new methods for explaining their local and global properties. Which of these
methods is the best? It turns out that this is an ill-posed question. One
cannot sufficiently explain a black-box machine learning model using a single
method that gives only one perspective. Isolated explanations are prone to
misunderstanding, which inevitably leads to wrong or simplistic reasoning. This
problem is known as the Rashomon effect and refers to diverse, even
contradictory interpretations of the same phenomenon. Surprisingly, the
majority of methods developed for explainable machine learning focus on a
single aspect of the model behavior. In contrast, we showcase the problem of
explainability as an interactive and sequential analysis of a model. This paper
presents how different Explanatory Model Analysis (EMA) methods complement each
other and why it is essential to juxtapose them. The introduced
process of Interactive EMA (IEMA) derives from the algorithmic side of
explainable machine learning and aims to embrace ideas developed in cognitive
sciences. We formalize the grammar of IEMA to describe potential human-model
dialogues. IEMA is implemented in a human-centered framework that adopts
interactivity, customizability, and automation as its main traits. Combined,
these methods enhance the responsible approach to predictive modeling.
Comment: 17 pages, 10 figures, 3 tables
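The abstract's core claim is that a single isolated explanation can mislead. As a generic illustration (not taken from the paper), here is a minimal pure-Python sketch in which two standard global explanations of the same toy model disagree: partial dependence suggests a feature is irrelevant, while permutation importance shows it matters, because the feature acts only through an interaction.

```python
import random

random.seed(0)

# Toy model with a pure interaction: f(x1, x2) = x1 * x2.
def f(x1, x2):
    return x1 * x2

# Sample inputs uniformly in [-1, 1].
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(2000)]

def partial_dependence_x1(v):
    # Explanation 1: average prediction as x1 is held at v.
    return sum(f(v, x2) for _, x2 in data) / len(data)

def permutation_importance_x1():
    # Explanation 2: rise in squared error after permuting x1.
    # Baseline error is 0 because we explain the model's own outputs.
    shuffled = data[37:] + data[:37]  # a fixed permutation of x1 values
    return sum((f(xs1, x2) - f(x1, x2)) ** 2
               for (x1, x2), (xs1, _) in zip(data, shuffled)) / len(data)

# The two views disagree: the PD curve is nearly flat (x1 looks
# irrelevant on average), yet permutation importance is clearly positive.
pd_range = max(abs(partial_dependence_x1(v)) for v in (-1.0, 0.0, 1.0))
imp = permutation_importance_x1()
```

Neither explanation is wrong; each answers a different question, which is why juxtaposing them, as the IEMA process advocates, guards against the Rashomon effect.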
A Survey on Explainable AI for 6G O-RAN: Architecture, Use Cases, Challenges and Research Directions
The recent O-RAN specifications promote the evolution of RAN architecture by
function disaggregation, adoption of open interfaces, and instantiation of a
hierarchical closed-loop control architecture managed by RAN Intelligent
Controller (RIC) entities. This paves the way for novel data-driven network
management approaches based on programmable logic. Aided by Artificial
Intelligence (AI) and Machine Learning (ML), novel solutions targeting
traditionally unsolved RAN management issues can be devised. Nevertheless, the
adoption of such smart and autonomous systems is limited by the current
inability of human operators to understand the decision process of such AI/ML
solutions, affecting their trust in such novel tools. eXplainable AI (XAI) aims
at solving this issue, enabling human users to better understand and
effectively manage the emerging generation of artificially intelligent schemes,
reducing the human-to-machine barrier. In this survey, we provide a summary of
the XAI methods and metrics before studying their deployment over the O-RAN
Alliance RAN architecture along with its main building blocks. We then present
various use cases and discuss the automation of XAI pipelines for O-RAN as well
as the underlying security aspects. We also review some projects/standards that
tackle this area. Finally, we identify different challenges and research
directions that may arise from the heavy adoption of AI/ML decision entities in
this context, focusing on how XAI can help to interpret, understand, and
improve trust in O-RAN operational networks.
Comment: 33 pages, 13 figures
More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch
For humans, the process of grasping an object relies heavily on rich tactile
feedback. Most recent robotic grasping work, however, has been based only on
visual input, and thus cannot easily benefit from feedback after initiating
contact. In this paper, we investigate how a robot can learn to use tactile
information to iteratively and efficiently adjust its grasp. To this end, we
propose an end-to-end action-conditional model that learns regrasping policies
from raw visuo-tactile data. This model -- a deep, multimodal convolutional
network -- predicts the outcome of a candidate grasp adjustment, and then
executes a grasp by iteratively selecting the most promising actions. Our
approach requires neither calibration of the tactile sensors, nor any
analytical modeling of contact forces, thus reducing the engineering effort
required to obtain efficient grasping policies. We train our model with data
from about 6,450 grasping trials on a two-finger gripper equipped with GelSight
high-resolution tactile sensors on each finger. Across extensive experiments,
our approach outperforms a variety of baselines at (i) estimating grasp
adjustment outcomes, (ii) selecting efficient grasp adjustments for quick
grasping, and (iii) reducing the amount of force applied at the fingers, while
maintaining competitive performance. Finally, we study the choices made by our
model and show that it has successfully acquired useful and interpretable
grasping behaviors.
Comment: 8 pages. Published in IEEE Robotics and Automation Letters (RAL).
Website: https://sites.google.com/view/more-than-a-feelin
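The selection loop the abstract describes, scoring candidate grasp adjustments with an outcome-prediction model and repeatedly executing the most promising one, can be sketched minimally as below. The `predict_success` surrogate here is a hypothetical stand-in for the paper's learned visuo-tactile network, and the 1-D "grasp position" is an assumption for illustration only:

```python
# Hypothetical sketch of action-conditional regrasping.
# predict_success stands in for a learned model that scores a
# candidate adjustment given the current (visuo-tactile) state.

def predict_success(grasp, action):
    # Toy surrogate: success probability peaks when the adjusted
    # grasp position nears the object centre at 0.
    adjusted = grasp + action
    return 1.0 / (1.0 + adjusted * adjusted)

def select_adjustment(grasp, candidate_actions):
    # Score each candidate adjustment, pick the most promising one.
    return max(candidate_actions, key=lambda a: predict_success(grasp, a))

def regrasp(grasp, candidate_actions, steps=3):
    # Iteratively execute the best-scoring adjustment, mirroring the
    # predict-then-act loop described in the abstract.
    for _ in range(steps):
        best = select_adjustment(grasp, candidate_actions)
        grasp = grasp + best  # "execute" the chosen adjustment
    return grasp
```

The key property this loop illustrates is that no analytical contact model is needed: only a learned (here, fake) outcome predictor and a candidate-action search.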