Toward a unifying model for Opinion, Sentiment and Emotion information extraction
This paper presents a logical formalization of a set of 20 semantic categories related to opinion, emotion and sentiment. Our formalization is based on the BDI model (Belief, Desire and Intention) and constitutes a first step toward a unifying model for subjective information extraction. The separability of the subjective classes that we propose was assessed both formally and on two subjective reference corpora.
Formal analysis of the communication of probabilistic knowledge
This paper discusses questions about the communication of probabilistic knowledge in the light of current theories of agent communication. It argues that there is a semantic gap between these theories and research areas related to probabilistic knowledge representation and communication, which creates serious theoretical problems if agents that reason probabilistically try to use the communication framework provided by these theories. The paper proposes a new formal model that generalizes current agent communication theories (at least the standard FIPA version of these theories) to handle probabilistic knowledge communication. We propose a new probabilistic logic as the basis for the model, together with new communication principles and communicative acts to support this kind of communication.
IFIP International Conference on Artificial Intelligence in Theory and Practice - Agents 1. Red de Universidades con Carreras en Informática (RedUNCI).
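The paper's probabilistic logic is not reproduced in this listing, but the kind of belief revision such communication must support can be illustrated with a standard mechanism from the literature (not necessarily the one the authors adopt): Jeffrey conditioning, where a hearer revises belief in A after a speaker reports that B holds only with some probability, rather than with certainty.

```python
def jeffrey_update(p_a_given_b, p_a_given_not_b, p_b):
    # Jeffrey conditioning: P'(A) = P(A|B)*P'(B) + P(A|not B)*(1 - P'(B)),
    # where P'(B) is the probability the speaker reports for B.
    return p_a_given_b * p_b + p_a_given_not_b * (1 - p_b)

# Hearer's conditionals, plus a reported P(B) = 0.8 from the speaker:
print(jeffrey_update(0.9, 0.2, 0.8))  # ≈ 0.76
```

Note that ordinary conditioning is the special case p_b = 1, which is all a certainty-only communication framework can express.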
Simulating a Human Cooperative Problem Solving
Location Awareness in Multi-Agent Control of Distributed Energy Resources
The integration of Distributed Energy Resource (DER) technologies such as heat pumps, electric vehicles and small-scale generation into the electricity grid at the household level is limited by technical constraints. This work argues that location is an important aspect for the control and integration of DER and that network topology can be inferred without the use of a centralised network model. It addresses DER integration challenges by presenting a novel approach that uses a decentralised multi-agent system where equipment controllers learn and use their location within the low-voltage section of the power system.
Models of electrical networks exhibiting technical constraints were developed. Through theoretical analysis and real network data collection, various sources of location data were identified and new geographical and electrical techniques were developed for deriving network topology using Global Positioning System (GPS) and 24-hour voltage logs. The multi-agent system paradigm and societal structures were examined as an approach to a multi-stakeholder domain and congregations were used as an aid to decentralisation in a non-hierarchical, non-market-based approach. Through formal description of the agent attitude INTEND2, the novel technique of Intention Transfer was applied to an agent congregation to provide an opt-in, collaborative system.
Test facilities for multi-agent systems were developed and culminated in a new embedded controller test platform that integrated a real-time dynamic electrical network simulator to provide a full-feedback system integrated with control hardware. Finally, a multi-agent control system was developed and implemented that used location data in providing demand-side response to a voltage excursion, with the goals of improving power quality, reducing generator disconnections, and deferring network reinforcement.
The resulting communicating and self-organising energy agent community, as demonstrated on a unique hardware-in-the-loop platform, provides an application model and test facility to inspire agent-based, location-aware smart grid applications across the power systems domain.
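The abstract does not spell out the topology-inference algorithm, but the core idea behind deriving topology from 24-hour voltage logs, that households on the same low-voltage feeder see correlated voltage profiles, can be sketched. The threshold, greedy grouping strategy, and synthetic data below are illustrative assumptions, not the thesis's method.

```python
import math

def correlation(xs, ys):
    # Pearson correlation between two equal-length voltage logs.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def group_by_feeder(logs, threshold=0.9):
    # Greedy grouping: a household joins the first group whose
    # representative log it correlates with above the threshold.
    groups = []  # each entry: (representative_id, [member_ids])
    for hid, log in logs.items():
        for rep, members in groups:
            if correlation(logs[rep], log) >= threshold:
                members.append(hid)
                break
        else:
            groups.append((hid, [hid]))
    return [members for _, members in groups]

# Two synthetic feeders: households on the same feeder share a base profile.
base_a = [230 + 2 * math.sin(t / 4) for t in range(24)]
base_b = [230 - 2 * math.sin(t / 4) for t in range(24)]
logs = {
    "h1": [v + 0.1 for v in base_a],
    "h2": [v - 0.1 for v in base_a],
    "h3": [v + 0.1 for v in base_b],
}
print(group_by_feeder(logs))  # → [['h1', 'h2'], ['h3']]
```

In practice GPS data would disambiguate households whose voltage profiles happen to correlate across feeders.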
A Computational Model of Non-Cooperation in Natural Language Dialogue
A common assumption in the study of conversation is that participants fully cooperate in order to maximise the effectiveness of the exchange and ensure communication flow. This assumption persists even in situations in which the private goals of the participants are at odds: they may act strategically pursuing their agendas, but will still adhere to a number of linguistic norms or conventions which are implicitly accepted by a community of language users.
However, in naturally occurring dialogue participants often depart from such norms, for instance, by asking inappropriate questions, by failing to provide adequate answers or by volunteering information that is not relevant to the conversation. These are examples of what we call linguistic non-cooperation.
This thesis presents a systematic investigation of linguistic non-cooperation in dialogue. Given a specific activity, in a specific cultural context and time, the method proceeds by making explicit which linguistic behaviours are appropriate. This results in a set of rules: the global dialogue game. Non-cooperation is then measured as instances in which the actions of the participants are not in accordance with these rules. The dialogue game is formally defined in terms of discourse obligations. These are actions that participants are expected to perform at a given point in the dialogue based on the dialogue history. In this context, non-cooperation amounts to participants failing to act according to their obligations.
We propose a general definition of linguistic non-cooperation and give a specific instance for political interview dialogues. Based on the latter, we present an empirical method which involves a coding scheme for the manual annotation of interview transcripts. The degree to which each participant cooperates is automatically determined by contrasting the annotated transcripts with the rules in the dialogue game for political interviews. The approach is evaluated on a corpus of broadcast political interviews and tested for correlation with human judgement on the same corpus.
Further, we describe a model of conversational agents that incorporates the concepts and mechanisms above as part of their dialogue manager. This allows for the generation of conversations in which the agents exhibit varying degrees of cooperation by controlling how often they favour their private goals instead of discharging their discourse obligations.
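As an illustration of the obligation-based measure (the act names and rules below are hypothetical, not the thesis's actual coding scheme), a dialogue game can be reduced to a map from each dialogue act to the act it obliges next, with non-cooperation scored as the fraction of obligated turns left undischarged:

```python
# Hypothetical sketch of the obligation-based measure: the dialogue game
# maps the previous dialogue act to the act the next speaker is obliged
# to perform; non-cooperation is the fraction of obligated turns that do
# not discharge the pending obligation.
GAME = {  # minimal game for a political interview
    "question": "answer",        # a question obliges an answer
    "request-clarify": "clarify",
}

def non_cooperation(turns):
    violations, obligated = 0, 0
    pending = None  # act currently obliged by the dialogue history
    for act in turns:
        if pending is not None:
            obligated += 1
            if act != pending:
                violations += 1
        pending = GAME.get(act)  # this act may create a new obligation
    return violations / obligated if obligated else 0.0

# Interviewee answers the first question but dodges the second:
turns = ["question", "answer", "question", "question"]
print(non_cooperation(turns))  # → 0.5
```

The annotated corpus described above would supply the `turns` sequences, and the per-participant score is what gets tested against human judgement.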
Rational Agents: Prioritized Goals, Goal Dynamics, and Agent Programming Languages with Declarative Goals
I introduce a specification language for modeling an agent's prioritized goals and their dynamics. I use the situation calculus along with Reiter's solution to the frame problem and predicates for describing agents' knowledge as my base formalism. I further enhance this language by introducing a new sort of infinite paths. Within this language, I discuss how to systematically specify prioritized goals and how to precisely describe the effects of actions on these goals. These actions include adoption and dropping of goals and subgoals. In this framework, an agent's intentions are formally specified as the prioritized intersection of her goals. The "prioritized" qualifier above means that the specification must respect the priority ordering of goals when choosing between two incompatible goals. I ensure that the agent's intentions are always consistent with each other and with her knowledge. I investigate two variants with different commitment strategies. Agents specified using the "optimizing" agent framework always try to optimize their intentions, while those specified in the "committed" agent framework will stick to their intentions even if opportunities to commit to higher priority goals arise when these goals are incompatible with their current intentions. For these, I study properties of prioritized goals and goal change. I also give a definition of subgoals, and prove properties about the goal-subgoal relationship.
As an application, I develop a model for a Simple Rational Agent Programming Language (SR-APL) with declarative goals. SR-APL is based on the "committed agent" variant of this rich theory, and combines elements from Belief-Desire-Intention (BDI) APLs and the situation calculus based ConGolog APL. Thus SR-APL supports prioritized goals and is grounded on a formal theory of goal change. It ensures that the agent's declarative goals and adopted plans are consistent with each other and with her knowledge. In doing this, I try to bridge the gap between agent theories and practical agent programming languages by providing a model and specification of an idealized BDI agent whose behavior is closer to what a rational agent does. I show that agents programmed in SR-APL satisfy some key rationality requirements.
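The "prioritized intersection" of goals described above can be sketched as a greedy walk over goals in priority order, adopting each goal that remains consistent with what has been adopted so far. The consistency check below is a toy stand-in for the situation-calculus machinery of the thesis, and the literal syntax is invented for illustration.

```python
def prioritized_intersection(goals, consistent):
    # Walk candidate goals from highest to lowest priority, adopting
    # each one that stays consistent with the intentions adopted so far.
    intentions = []
    for g in goals:  # goals assumed sorted by descending priority
        if consistent(intentions + [g]):
            intentions.append(g)
    return intentions

# Toy consistency check: a set of literals is consistent iff it never
# contains both p and "not p".
def consistent(literals):
    return not any(("not " + l) in literals for l in literals)

goals = ["at(home)", "not at(home)", "lights(off)"]
print(prioritized_intersection(goals, consistent))
# → ['at(home)', 'lights(off)']
```

This matches the guarantee quoted in the abstract: the lower-priority goal incompatible with an adopted one is skipped, so the resulting intentions are mutually consistent by construction.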
Human-Robot Interaction architecture for interactive and lively social robots
Society is experiencing a series of demographic changes that can result in an imbalance between
the active working and non-working age populations. One of the solutions considered to mitigate
this problem is the inclusion of robots in multiple sectors, including the service sector. But for
this to be a viable solution, among other features, robots need to be able to interact with humans
successfully. This thesis seeks to endow a social robot with the abilities required for natural human-robot interaction. The main objective is to contribute to the body of knowledge in the area of Human-Robot Interaction with a new, platform-independent, modular approach that focuses on giving roboticists the tools required to develop applications that involve interactions with humans. In particular, this thesis focuses on three problems that need to be addressed: (i) modelling interactions between a robot and a user; (ii) endowing the robot with the expressive capabilities required for successful communication; and (iii) giving the robot a lively appearance.
The approach to dialogue modelling presented in this thesis proposes to model dialogues as a sequence of atomic interaction units called Communicative Acts, or CAs. These can be parametrized at runtime to achieve different communicative goals, and are endowed with mechanisms for handling some of the uncertainties that arise during interaction. Two dimensions have been used to identify the required CAs: initiative (whether it lies with the robot or the user) and intention (whether to obtain information or to provide it). These basic CAs can be combined hierarchically to create more complex, reusable structures. This approach simplifies the creation of new interactions by allowing developers to focus exclusively on designing the flow of the dialogue, without having to re-implement functionalities that are common to all dialogues (such as error handling).
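A minimal sketch of how such CAs might compose (the class names and execution model are illustrative assumptions, not the thesis's implementation): each atomic act is one of the four initiative × intention combinations, and acts nest into reusable higher-level structures.

```python
# Hypothetical sketch of the CA taxonomy: each atomic act is one of
# four initiative/intention combinations, and acts compose into
# reusable higher-level structures that run as a sequence.
class CA:
    def __init__(self, initiative, intention, payload):
        assert initiative in ("robot", "user")
        assert intention in ("obtain", "provide")
        self.initiative, self.intention, self.payload = initiative, intention, payload

    def run(self):
        return [(self.initiative, self.intention, self.payload)]

class Sequence:
    # Higher-level structure: runs its children in order; a Sequence
    # can itself contain Sequences, giving the hierarchical composition.
    def __init__(self, *children):
        self.children = children

    def run(self):
        return [step for c in self.children for step in c.run()]

# A reusable "ask then acknowledge" exchange built from atomic CAs:
ask_name = Sequence(
    CA("robot", "obtain", "What is your name?"),
    CA("robot", "provide", "Nice to meet you."),
)
print(ask_name.run())
```

In the thesis's design, error handling and parametrization would live inside each CA, which is why developers composing them need only specify the dialogue flow.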
The expressiveness of the robot is based on a library of predefined multimodal gestures, or expressions, modelled as state machines. The module managing expressiveness receives requests for performing gestures, schedules their execution to avoid any conflict that might arise, loads them, and ensures that their execution completes without problems. The proposed approach can also generate expressions at runtime from a list of unimodal actions (an utterance, the motion of a limb, etc.). One of the key features of the proposed expressiveness management approach is the integration of a series of modulation techniques that can be used to modify the robot's expressions at runtime. This allows the robot to adapt them to the particularities of a given situation (which also increases the variability of the robot's expressiveness), and to display different internal states with the same expressions.

Considering that being recognized as a living being is a prerequisite for engaging in social encounters, the perception of a social robot as a living entity is key to fostering human-robot interactions. In this dissertation, two approaches have been proposed. The first
method generates actions for the different interfaces of the robot at certain intervals. The frequency and intensity of these actions are defined by a signal that represents the pulse of the robot, which can be adapted to the context of the interaction or the internal state of the robot. The second method enhances the robot's utterances by predicting the appropriate non-verbal expressions that should accompany them, according to the content of the robot's message as well as its communicative intention. A deep learning model receives the transcription of the robot's utterances, predicts which expressions should accompany them, and synchronizes them so that each selected gesture starts at the appropriate time. The model has been developed using a combination of a Long Short-Term Memory network-based encoder and a Conditional Random Field for generating the sequence of gestures that accompany the robot's utterance.
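The LSTM encoder is too large to sketch here, but the CRF decoding step it feeds can be: given per-word label scores (hand-set below; in the thesis they would come from the encoder) and transition scores between gesture labels, Viterbi decoding returns the best-scoring gesture sequence. The labels and scores are illustrative assumptions.

```python
def viterbi(emissions, transitions, labels):
    # emissions: per-word score for each label; transitions: score for
    # moving from one label to the next. Returns the best label sequence.
    best = [{l: (emissions[0][l], [l]) for l in labels}]
    for em in emissions[1:]:
        layer = {}
        for l in labels:
            prev, (score, path) = max(
                ((p, best[-1][p]) for p in labels),
                key=lambda kv: kv[1][0] + transitions[(kv[0], l)],
            )
            layer[l] = (score + transitions[(prev, l)] + em[l], path + [l])
        best.append(layer)
    return max(best[-1].values(), key=lambda sp: sp[0])[1]

labels = ["none", "beat", "point"]
transitions = {(a, b): (-1.0 if a == b == "beat" else 0.0)
               for a in labels for b in labels}  # discourage repeated beats
emissions = [  # hypothetical encoder scores for a 3-word utterance
    {"none": 0.1, "beat": 2.0, "point": 0.0},
    {"none": 0.5, "beat": 1.0, "point": 0.2},
    {"none": 0.2, "beat": 0.1, "point": 1.5},
]
print(viterbi(emissions, transitions, labels))  # → ['beat', 'none', 'point']
```

The transition scores are what the CRF adds over per-word classification: without the penalty on repeated beats, the second word would greedily take "beat" as well.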
All the elements presented above form the core of a modular Human-Robot Interaction architecture that has been integrated in multiple platforms and tested under different conditions.
Programa de Doctorado en Ingeniería Eléctrica, Electrónica y Automática, Universidad Carlos III de Madrid. Presidente: Fernando Torres Medina. Secretario: Concepción Alicia Monje Micharet. Vocal: Amirabdollahian Farshi.