A Study of Automatic Metrics for the Evaluation of Natural Language Explanations
As transparency becomes key for robotics and AI, it will be necessary to
evaluate the methods through which transparency is provided, including
automatically generated natural language (NL) explanations. Here, we explore
parallels between the generation of such explanations and the much-studied
field of evaluation of Natural Language Generation (NLG). Specifically, we
investigate which of the NLG evaluation measures map well to explanations. We
present the ExBAN corpus: a crowd-sourced corpus of NL explanations for
Bayesian Networks. We compute correlations between human subjective ratings and
automatic NLG measures. We find that embedding-based automatic NLG evaluation
methods, such as BERTScore and BLEURT, have a higher correlation with human
ratings, compared to word-overlap metrics, such as BLEU and ROUGE. This work
has implications for Explainable AI and transparent robotic and autonomous
systems.
Comment: Accepted at EACL 2021
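As a concrete illustration of this kind of analysis, here is a minimal sketch of correlating automatic metric scores with human ratings, assuming per-explanation scores are already available; all numbers below are invented placeholders, not ExBAN data.

```python
# Minimal sketch: rank correlation between human ratings and automatic
# NLG metric scores, one entry per generated explanation. The values are
# illustrative placeholders; real scores would come from BLEU/ROUGE/
# BERTScore/BLEURT runs over the corpus.
from scipy.stats import spearmanr

human_ratings = [4.2, 3.1, 4.8, 2.0, 3.7]  # e.g. 1-5 Likert averages

metric_scores = {
    "BLEU":      [0.31, 0.22, 0.35, 0.30, 0.25],
    "BERTScore": [0.88, 0.79, 0.93, 0.62, 0.84],
}

for name, scores in metric_scores.items():
    rho, p = spearmanr(human_ratings, scores)
    print(f"{name}: Spearman rho = {rho:.2f} (p = {p:.3f})")
```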
Evil from Within: Machine Learning Backdoors through Hardware Trojans
Backdoors pose a serious threat to machine learning, as they can compromise
the integrity of security-critical systems, such as self-driving cars. While
different defenses have been proposed to address this threat, they all rely on
the assumption that the hardware on which the learning models are executed
during inference is trusted. In this paper, we challenge this assumption and
introduce a backdoor attack that completely resides within a common hardware
accelerator for machine learning. Outside of the accelerator, neither the
learning model nor the software is manipulated, so that current defenses fail.
To make this attack practical, we overcome two challenges: First, as memory on
a hardware accelerator is severely limited, we introduce the concept of a
minimal backdoor that deviates as little as possible from the original model
and is activated by replacing a few model parameters only. Second, we develop a
configurable hardware trojan that can be provisioned with the backdoor and
performs a replacement only when the specific target model is processed. We
demonstrate the practical feasibility of our attack by implanting our hardware
trojan into the Xilinx Vitis AI DPU, a commercial machine-learning accelerator.
We configure the trojan with a minimal backdoor for a traffic-sign recognition
system. The backdoor replaces only 30 (0.069%) model parameters, yet it
reliably manipulates the recognition once the input contains a backdoor
trigger. Our attack expands the hardware circuit of the accelerator by 0.24%
and induces no run-time overhead, making detection extremely difficult. Given
the complex and highly distributed manufacturing process of current hardware,
our work points to a new threat in machine learning that is inaccessible to
current security mechanisms and calls for hardware to be manufactured only in
fully trusted environments.
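To make the notion of a minimal backdoor tangible, the sketch below counts how many parameters differ between an original and a tampered model, which is one way such a deviation could be quantified; the toy model and the perturbed weights are stand-ins, not the paper's DPU attack.

```python
# Sketch: quantify the deviation between an original and a tampered model
# by counting differing scalar parameters (cf. the paper's 30 parameters,
# i.e. 0.069% of the model). The Linear model here is only a stand-in.
import torch
import torch.nn as nn

def parameter_diff(model_a: nn.Module, model_b: nn.Module):
    """Return (number of differing scalar parameters, total parameters)."""
    differing, total = 0, 0
    for (_, pa), (_, pb) in zip(model_a.named_parameters(),
                                model_b.named_parameters()):
        differing += (pa != pb).sum().item()
        total += pa.numel()
    return differing, total

original = nn.Linear(100, 10)
tampered = nn.Linear(100, 10)
tampered.load_state_dict(original.state_dict())
with torch.no_grad():                        # perturb a handful of weights,
    tampered.weight.view(-1)[:30] += 1.0     # standing in for a replacement
changed, total = parameter_diff(original, tampered)
print(f"{changed} of {total} parameters differ ({100 * changed / total:.3f}%)")
```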
How Good Is NLP? A Sober Look at NLP Tasks through the Lens of Social Impact
Recent years have seen many breakthroughs in natural language processing
(NLP), transitioning it from a mostly theoretical field to one with many
real-world applications. Noting the rising number of applications of other
machine learning and AI techniques with pervasive societal impact, we
anticipate the growing importance of developing NLP technologies for social
good. Inspired by theories in moral philosophy and global priorities research,
we aim to promote a guideline for social good in the context of NLP. We lay the
foundations via the moral philosophy definition of social good, propose a
framework to evaluate the direct and indirect real-world impact of NLP tasks,
and adopt the methodology of global priorities research to identify priority
causes for NLP research. Finally, we use our theoretical framework to provide
some practical guidelines for future NLP research for social good. Our data and
code are available at http://github.com/zhijing-jin/nlp4sg_acl2021. In
addition, we curate a list of papers and resources on NLP for social good at
https://github.com/zhijing-jin/NLP4SocialGood_Papers.
Comment: Findings of ACL 2021; also accepted at the NLP for Positive Impact
workshop at ACL 2021
Goal reasoning for autonomous agents using automated planning
International Mention in the doctoral degree.
Automated planning deals with the task of finding a sequence of actions, namely
a plan, that achieves a goal from a given initial state. Most planning research
considers that goals are provided by an external user, and agents just have to
find a plan to achieve them. However, there exist many real-world domains where
agents should not only reason about their actions but also about their goals,
generating new ones or changing them according to the perceived environment.
In this thesis we aim at broadening the goal reasoning capabilities of planning-based
agents, both when acting in isolation and when operating in the same
environment as other agents.
In single-agent settings, we first explore a special type of planning task
where we aim at discovering states that fulfill certain cost-based requirements
with respect to a given set of goals. By computing these states, agents are able
to solve interesting tasks such as finding escape plans that move agents into
safe places, hiding their true goal from a potential observer, or anticipating
dynamically arriving goals. We also show how learning the environment's dynamics
may help agents to solve some of these tasks. Experimental results show that
these states can be quickly found in practice, making agents able to solve new
planning tasks and helping them in solving some existing ones.
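As a toy illustration of such cost-based state discovery, the sketch below searches a small grid world for states from which every goal is reachable within a given cost bound; the domain, goals, and bound are invented for illustration and are not taken from the thesis.

```python
# Toy sketch: find states that fulfill a cost-based requirement with respect
# to a set of goals -- here, grid cells from which *every* goal is reachable
# within a cost bound (unit action costs). All domain details are invented.
from collections import deque

GRID_W, GRID_H = 5, 5
WALLS = {(2, 1), (2, 2), (2, 3)}
GOALS = [(4, 0), (4, 4)]
COST_BOUND = 6

def reachable_costs(start):
    """BFS from `start`; return a dict mapping each state to its cheapest cost."""
    costs = {start: 0}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < GRID_W and 0 <= ny < GRID_H
                    and (nx, ny) not in WALLS and (nx, ny) not in costs):
                costs[(nx, ny)] = costs[(x, y)] + 1
                queue.append((nx, ny))
    return costs

# A state qualifies if all goals lie within COST_BOUND of it -- e.g. a spot
# from which dynamically arriving goals could be served quickly.
qualifying = [(x, y)
              for x in range(GRID_W) for y in range(GRID_H)
              if (x, y) not in WALLS
              and all(reachable_costs((x, y)).get(g, float("inf")) <= COST_BOUND
                      for g in GOALS)]
print(qualifying)
```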
In multi-agent settings, we study the automated generation of goals based on
other agents' behavior. We focus on competitive scenarios, where we are interested
in computing counterplans that prevent opponents from achieving their
goals. We frame these tasks as counterplanning, providing theoretical properties
of the counterplans that solve them. We also show how agents can benefit
from computing some of the states we propose in the single-agent setting to
anticipate their opponents' movements, thus increasing the odds of blocking
them. Experimental results show how counterplans can be found in different
environments ranging from competitive planning domains to real-time strategy
games.
Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. President: Eva Onaindía de la Rivaherrera. Secretary: Ángel García Olaya. Committee member: Mark Robert
Explainable methods for knowledge graph refinement and exploration via symbolic reasoning
Knowledge Graphs (KGs) have applications in many domains such as finance, manufacturing, and healthcare. While recent efforts have created large KGs, their content is far from complete and sometimes includes invalid statements. Therefore, it is crucial to refine the constructed KGs to enhance their coverage and accuracy via KG completion and KG validation; both tasks together are referred to as KG refinement. It is also vital to provide human-comprehensible explanations for such refinements, so that humans can trust the KG quality. Enabling KG exploration, by search and browsing, is also essential for users to understand the value and limitations of a KG for downstream applications. However, the large size of KGs makes KG exploration very challenging. While the type taxonomy of KGs is a useful asset along these lines, it remains insufficient for deep exploration.

In this dissertation we tackle the aforementioned challenges of KG refinement and KG exploration by combining logical reasoning over the KG with other techniques such as KG embedding models and text mining. Through such a combination, we introduce methods that provide human-understandable output. Concretely, we introduce methods to tackle KG incompleteness by learning exception-aware rules over the existing KG; the learned rules are then used to accurately infer missing links in the KG. Furthermore, we propose a framework for constructing human-comprehensible explanations for candidate facts from both the KG and text; the extracted explanations are used to ensure the validity of KG facts. Finally, to facilitate KG exploration, we introduce a method that combines KG embeddings with rule mining to compute informative entity clusters with explanations.

In detail, the dissertation makes the following contributions:
• For KG completion, we present ExRuL, a method that revises Horn rules by adding exception conditions to the rule bodies. The revised rules can infer new facts and thus close gaps in the KG. Experiments on large KGs show that this method substantially reduces errors in the derived facts and yields user-friendly explanations.
• With RuLES, we present a rule-learning method based on probabilistic representations of missing facts. The method iteratively extends rules induced from a KG by combining neural KG embeddings with information from text corpora, guided by new rule-quality metrics. Experiments show that RuLES substantially improves the quality of the learned rules and of their predictions.
• To support KG validation, we present ExFaKT, a framework for constructing explanations for candidate facts. Using rules, the method rewrites a candidate into a set of statements that are easier to find and to validate or refute. The output of ExFaKT is a set of semantic pieces of evidence for the candidate fact, extracted from text corpora and the KG. Experiments show that the rewritings significantly improve the yield and quality of the discovered explanations, which support both manual KG validation by curators and automatic validation.
• To support KG exploration, we present ExCut, a method that computes informative entity clusters with explanations, using KG embeddings and automatically induced rules. A cluster explanation is a combination of relations between the entities that identifies the cluster. ExCut jointly improves cluster quality and cluster explainability by iteratively interleaving the learning of embeddings and rules. Experiments show that ExCut computes high-quality clusters and that the cluster explanations are informative for users.