Unsupervised state representation learning with robotic priors: a robustness benchmark
Our understanding of the world depends highly on our capacity to produce
intuitive and simplified representations which can be easily used to solve
problems. We reproduce this simplification process using a neural network to
build a low dimensional state representation of the world from images acquired
by a robot. As in Jonschkowski et al. 2015, we learn in an unsupervised way
using prior knowledge about the world as loss functions called robotic priors
and extend this approach to richer, higher-dimensional images to learn a 3D
representation of the hand position of a robot from RGB images. We propose a
quantitative evaluation of the learned representation using nearest neighbors
in the state space, which allows us to assess its quality and shows both the
potential and limitations of robotic priors in realistic environments. We
increase the image size and add distractors and domain randomization, all crucial
components for achieving transfer learning to real robots. Finally, we also
contribute a new prior to improve the robustness of the representation. The
applications of such low dimensional state representation range from easing
reinforcement learning (RL) and knowledge transfer across tasks, to
facilitating learning from raw data with more efficient and compact high level
representations. The results show that the robotic prior approach is able to
extract high-level representations, such as the 3D position of an arm, and organize
them into a compact and coherent state space on a challenging dataset.
Comment: ICRA 2018 submission
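The robotic priors are plain loss terms over the learned state trajectory. As an illustrative sketch (not the paper's code), the temporal-coherence prior of Jonschkowski et al. can be written as a penalty on state changes between consecutive frames; all function and variable names here are hypothetical:

```python
def temporal_coherence_loss(states):
    """Temporal-coherence prior: consecutive observations should map to
    nearby points in state space, so we penalize the mean squared
    magnitude of the state change between successive time steps."""
    deltas = [
        sum((b - a) ** 2 for a, b in zip(s_t, s_next))
        for s_t, s_next in zip(states, states[1:])
    ]
    return sum(deltas) / len(deltas)

# A smooth trajectory incurs a far lower loss than an erratic one.
smooth = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.1)]
erratic = [(0.0, 0.0), (2.0, -1.0), (-1.5, 3.0)]
```

In Jonschkowski et al.'s formulation this term is combined with further priors (e.g., proportionality, causality, repeatability) and minimized jointly over the network's predicted states.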
Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification
Although Deep Neural Networks (DNNs) have great generalization and prediction capabilities, their
functioning does not allow a detailed explanation of their behavior. Opaque deep learning models are
increasingly used to make important predictions in critical environments, and the danger is that they make
and use predictions that cannot be justified or legitimized. Several eXplainable Artificial Intelligence (XAI)
methods that separate explanations from machine learning models have emerged, but have shortcomings
in faithfulness to the model's actual functioning and in robustness. As a result, there is widespread agreement
on the importance of endowing Deep Learning models with explanatory capabilities so that they can
themselves provide an answer to why a particular prediction was made. First, we address the problem
of the lack of universal criteria for XAI by formalizing what an explanation is. We also introduce a
set of axioms and definitions to clarify XAI from a mathematical perspective. Finally, we present the
Greybox XAI, a framework that composes a DNN and a transparent model thanks to the use of a symbolic
Knowledge Base (KB). We extract a KB from the dataset and use it to train a transparent model (i.e., a
logistic regression). An encoder-decoder architecture is trained on RGB images to produce an output
similar to the KB used by the transparent model. Once the two models are trained independently, they
are used compositionally to form an explainable predictive model. We show that this new architecture is
accurate and explainable on several datasets.
Funding: French ANRT (Association Nationale Recherche Technologie); SEGULA Technologies; Juan de la Cierva Incorporación grant MCIN/AEI "ESF Investing in your future" IJC2019-039152-I; Google Research Scholar Program; Department of Education of the Basque Government (Consolidated Research Group MATHMODE) IT1456-2
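As a minimal sketch of the compositional idea (not the paper's implementation), a perception stage predicts KB-style symbolic attributes and a transparent logistic regression classifies from them, so each weight is directly readable as an explanation; all names and values below are illustrative:

```python
import math

def predict_attributes(image):
    """Stand-in for the trained encoder-decoder: maps an image to
    KB-style symbolic attributes (here just read from a toy record)."""
    return image["attributes"]

def transparent_classifier(attributes, weights, bias):
    """Transparent stage: logistic regression over symbolic attributes,
    so each weight directly explains the contribution of one concept."""
    z = bias + sum(weights[k] * v for k, v in attributes.items())
    return 1.0 / (1.0 + math.exp(-z))

def greybox_predict(image, weights, bias):
    """Compose the two stages: a prediction plus an attribute-level explanation."""
    attrs = predict_attributes(image)
    score = transparent_classifier(attrs, weights, bias)
    explanation = {k: weights[k] * v for k, v in attrs.items()}
    return score, explanation

# Hypothetical attribute weights learned by the transparent model.
weights = {"has_wheels": 2.0, "has_wings": -1.0}
image = {"attributes": {"has_wheels": 1, "has_wings": 0}}
```

Because the classifier operates on human-readable concepts, the per-attribute contributions serve directly as the explanation of the prediction.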
Validation Techniques for Sensor Data in Mobile Health Applications
Mobile applications have become a must in every user's smart device, and many of these applications make use of the device's sensors
to achieve their goals. Nevertheless, it remains largely unknown to the user to what extent the data the applications use can be relied
upon and, therefore, to what extent the output of a given application is trustworthy. To help developers and researchers and
to provide a common ground of data validation algorithms and techniques, this paper presents a review of the most commonly
used data validation algorithms, along with their usage scenarios, and proposes a classification for these algorithms. This paper also
discusses the process of achieving statistical significance and trust for the desired output.
Funding: Portuguese Foundation for Science and Technology UID/EEA/50008/2013; COST Action Architectures, Algorithms and Protocols for Enhanced Living Environments (AAPELE) IC130
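Two common validation techniques of the kind such a review covers, range checks and local-average plausibility checks, can be sketched as follows (an illustrative toy implementation with hypothetical thresholds, not taken from the paper):

```python
def validate_range(samples, lo, hi):
    """Range check: flag whether each sample lies inside physically
    plausible bounds for the sensor."""
    return [lo <= x <= hi for x in samples]

def plausibility_flags(samples, window=3, threshold=2.0):
    """Local-average check: a sample is plausible if it stays within
    `threshold` of the mean of the preceding `window` samples."""
    flags = []
    for i, x in enumerate(samples):
        recent = samples[max(0, i - window):i] or [x]
        mean = sum(recent) / len(recent)
        flags.append(abs(x - mean) <= threshold)
    return flags

# Toy body-temperature stream with one implausible spike.
temps = [36.5, 36.6, 36.4, 45.0, 36.5]
```

Real deployments would combine several such checks and calibrate the bounds per sensor before trusting the stream.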
Explainability in Deep Reinforcement Learning
A large set of the explainable Artificial Intelligence (XAI) literature is
emerging on feature relevance techniques to explain a deep neural network (DNN)
output or explaining models that ingest image source data. However, assessing
how XAI techniques can help understand models beyond classification tasks, e.g.
for reinforcement learning (RL), has not been extensively studied. We review
recent works in the direction to attain Explainable Reinforcement Learning
(XRL), a relatively new subfield of Explainable Artificial Intelligence,
intended to be used in general public applications, with diverse audiences,
requiring ethical, responsible and trustworthy algorithms. In critical situations
where it is essential to justify and explain the agent's behaviour, better
explainability and interpretability of RL models could help gain scientific
insight on the inner workings of what is still considered a black box. We
evaluate mainly studies directly linking explainability to RL, and split these
into two categories according to the way the explanations are generated:
transparent algorithms and post-hoc explainability. We also review the most
prominent XAI works from the lenses of how they could potentially enlighten the
further deployment of the latest advances in RL, in the demanding present and
future of everyday problems.
Comment: Article accepted at Knowledge-Based Systems
Capabilities, Limitations and Challenges of Style Transfer with CycleGANs: A Study on Automatic Ring Design Generation
Rendering programs have changed the design process completely
as they make it possible to see how products will look before they are
fabricated. However, the rendering process is complicated and takes a
significant amount of time, not only in the rendering itself but in the
setting of the scene as well. Materials, lights and cameras need to be set
in order to get the best quality results. Nevertheless, the optimal output
may not be obtained in the first render. All of this makes rendering a
tedious process. Since Goodfellow et al. introduced Generative
Adversarial Networks (GANs) in 2014 [1], they have been used to generate
computer-generated synthetic data, from non-existing human faces
to medical data analysis or image style transfer. GANs have been used
to transfer image textures from one domain to another. However, paired
data from both domains was needed. When Zhu et al. introduced the
CycleGAN model, the elimination of this expensive constraint permitted
transforming one image from one domain into another, without the
need for paired data. This work validates the applicability of CycleGANs
on style transfer from an initial sketch to a final render in 2D that represents
a 3D design, a step that is paramount in every product design
process. We inquiry the possibilities of including CycleGANs as part of
the design pipeline, more precisely, applied to the rendering of ring designs.
Our contribution entails a crucial part of the process as it allows
the customer to see the final product before buying. This work sets a basis
for future research, showing the possibilities of GANs in design and
establishing a starting point for novel applications to approach crafts
design.
Funding: MCIN/AEI IJC2019-039152-I; ESF Investing in your future IJC2019-039152-I; Google Research Scholar Program; Basque Government ELKARTEK program (3KIA project) KK-2020/00049; research group MATHMODE T1294-1
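The key mechanism that removes the paired-data constraint is CycleGAN's cycle-consistency loss: mapping sketch to render and back should recover the input. A toy numeric sketch with stand-in generators (the real model uses neural networks and an adversarial term as well):

```python
def l1_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def cycle_consistency_loss(batch, g, f):
    """Cycle term: translating sketch -> render (g) and back
    render -> sketch (f) should recover the original image, so only
    the reconstruction error between f(g(x)) and x is needed,
    not paired examples."""
    return sum(l1_distance(x, f(g(x))) for x in batch) / len(batch)

# Toy generators that happen to be exact inverses of each other.
g = lambda x: [v * 2.0 for v in x]   # "sketch -> render"
f = lambda x: [v / 2.0 for v in x]   # "render -> sketch"
batch = [[1.0, 2.0], [3.0, 4.0]]
```

When the two generators invert each other the cycle loss vanishes; any mismatch between the forward and backward mappings is penalized.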
Unsupervised Understanding of Location and Illumination Changes in Egocentric Videos
Wearable cameras stand out as one of the most promising devices for the
upcoming years, and as a consequence, the demand of computer algorithms to
automatically understand the videos recorded with them is increasing quickly.
An automatic understanding of these videos is not an easy task, and their mobile
nature implies important challenges to be faced, such as the changing light
conditions and the unrestricted locations recorded. This paper proposes an
unsupervised strategy based on global features and manifold learning to endow
wearable cameras with contextual information regarding the light conditions and
the location captured. Results show that non-linear manifold methods can
capture contextual patterns from global features without requiring large
computational resources. The proposed strategy is used, as an application case,
as a switching mechanism to improve the hand-detection problem in egocentric
videos.
Comment: Submitted for publication
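A minimal sketch of the switching idea, leaving out the manifold-learning step and using a single hypothetical global feature: compute a global descriptor per frame and assign it to the nearest known context, which then selects the hand detector to apply. All names and values are illustrative:

```python
def global_feature(frame):
    """Toy global descriptor: mean pixel intensity of the frame (a
    stand-in for the global features fed to the manifold step)."""
    return sum(frame) / len(frame)

def nearest_context(feature, prototypes):
    """Assign a frame to the closest known context; the detected
    context then switches which hand-detection model is applied."""
    return min(prototypes, key=lambda name: abs(prototypes[name] - feature))

# Hypothetical illumination prototypes discovered from unlabelled video.
prototypes = {"dim indoor": 60.0, "bright outdoor": 180.0}
```

In the paper's setting, the contexts would emerge unsupervised from the manifold embedding rather than being hand-specified as here.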
Explaining Aha! moments in artificial agents through IKE-XAI: Implicit Knowledge Extraction for eXplainable AI
During the learning process, a child develops a mental representation of the task he or she is learning.
A machine learning algorithm likewise develops a latent representation of the task it learns. We investigate
the development of the knowledge construction of an artificial agent through the analysis of its
behavior, i.e., its sequences of moves while learning to perform the Tower of Hanoi (TOH) task. The TOH
is a well-known task used in experimental contexts to study problem-solving, one of the
fundamental processes of children’s knowledge construction about their world. We position ourselves
in the field of explainable reinforcement learning for developmental robotics, at the crossroads of
cognitive modeling and explainable AI. Our main contribution proposes a 3-step methodology named
Implicit Knowledge Extraction with eXplainable Artificial Intelligence (IKE-XAI) to extract the implicit
knowledge, in form of an automaton, encoded by an artificial agent during its learning. We showcase
this technique to solve and explain the TOH task when researchers have access only to moves that
represent observational behavior, as in human–machine interaction. Therefore, to extract the agent's
acquired knowledge at different stages of its training, our approach combines: first, a Q-learning
agent that learns to perform the TOH task; second, a trained recurrent neural network that encodes
an implicit representation of the TOH task; and third, an XAI process using a post-hoc implicit rule
extraction algorithm to extract finite state automata. We propose using graph representations as visual
and explicit explanations of the behavior of the Q-learning agent. Our experiments show that the IKE-XAI
approach helps to understand the development of the Q-learning agent's behavior by providing
a global explanation of its knowledge evolution during learning. IKE-XAI also allows researchers to
identify the agent’s Aha! moment by determining from what moment the knowledge representation
stabilizes and the agent no longer learns.
Funding: Région Bretagne; European Union via the FEDER program; Spanish Government Juan de la Cierva Incorporación MCIN/AEI IJC2019-039152-I; Google Research Scholar Grant
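The first component, the Q-learning agent whose move sequences feed the later RNN and rule-extraction stages, can be sketched on a toy chain task; this is an illustrative stand-in for the Tower of Hanoi, and all names and hyperparameters are hypothetical:

```python
import random

def q_learning(n_states=4, episodes=200, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a toy chain task (a stand-in for the TOH
    agent): actions 0/1 move left/right, reward 1 on reaching the goal."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    goal = n_states - 1
    for _ in range(episodes):
        s = 0
        while s != goal:
            if rng.random() < eps:          # epsilon-greedy exploration
                a = rng.choice([0, 1])
            else:
                a = max((0, 1), key=lambda act: q[s][act])
            s_next = max(0, s - 1) if a == 0 else min(goal, s + 1)
            r = 1.0 if s_next == goal else 0.0
            # Standard Q-learning temporal-difference update.
            q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
            s = s_next
    return q

def greedy_policy(q):
    """The move sequences that IKE-XAI would analyse come from this policy."""
    return [max((0, 1), key=lambda act: row[act]) for row in q]
```

In the full methodology, the recorded move sequences at different training stages would be fed to a recurrent network, from which finite state automata are then extracted post hoc.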
Extending Knowledge Graphs with Subjective Influence Networks for personalized fashion
This chapter presents Stitch Fix's industry case as an applied fashion application in cognitive cities. Fashion goes hand in hand with the economic development of better methods in smart and cognitive cities, leisure activities and consumption. However, extracting knowledge and actionable insights from fashion data still presents challenges due to the intrinsic subjectivity needed to effectively model the domain. Fashion ontologies help address this, but most existing ontologies of this kind are "clothing" ontologies, which consider only the physical attributes of garments or people and often model subjective judgements only as opaque categorizations of entities. We address this by proposing a supplementary ontological approach in the fashion domain based on subjective influence networks. We enumerate a set of use cases this approach is intended to address and discuss possible classes of prediction questions and machine learning experiments that could be executed to validate or refute the model. We also present a case study on business models and monetization strategies for digital fashion, a domain that is fast-changing and winning the battle in the digital domain.
Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence
Artificial intelligence (AI) is currently being utilized in a wide range of sophisticated applications, but the outcomes of many AI models are challenging to comprehend and trust due to their black-box nature. Usually, it is essential to understand the reasoning behind an AI model's decision-making. Thus, the need for eXplainable AI (XAI) methods for improving trust in AI models has arisen. XAI has become a popular research subject within the AI field in recent years. Existing survey papers have tackled the concepts of XAI, its general terms, and post-hoc explainability methods, but there have been no reviews covering assessment methods, available tools, XAI datasets, and other related aspects. Therefore, in this comprehensive study, we provide readers with an overview of the current research and trends in this rapidly emerging area, with a case study example. The study starts by explaining the background of XAI, common definitions, and summarizing recently proposed techniques in XAI for supervised machine learning. The review divides XAI techniques into four axes using a hierarchical categorization system: (i) data explainability, (ii) model explainability, (iii) post-hoc explainability, and (iv) assessment of explanations. We also introduce available evaluation metrics as well as open-source packages and datasets with future research directions. Then, the significance of explainability in terms of legal demands, user viewpoints, and application orientation is outlined, termed XAI concerns.
This paper advocates for tailoring explanation content to specific user types. An examination of XAI techniques and evaluation was conducted by looking at 410 critical articles, published between January 2016 and October 2022, in reputed journals and using a wide range of research databases as a source of information. The article is aimed at XAI researchers who are interested in making their AI models more trustworthy, as well as at researchers from other disciplines who are looking for effective XAI methods to complete tasks with confidence while communicating meaning from data.
Funding: National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A2C1011198); Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) under the ICT Creative Consilience Program (IITP-2021-2020-0-01821); AI Platform to Fully Adapt and Reflect Privacy-Policy Changes (No. 2022-0-00688)