COACHES Cooperative Autonomous Robots in Complex and Human Populated Environments
Public spaces in large cities are increasingly becoming complex and unwelcoming environments. They progressively grow more hostile and unpleasant to use because of overcrowding and the complex information on signboards. It is in the interest of cities to make their public spaces easier to use, friendlier to visitors, and safer for the growing elderly population and for citizens with disabilities. Meanwhile, the last decade has seen tremendous progress in the development of robots for dynamic, complex and uncertain environments. The new challenge for the near future is to deploy a network of robots in public spaces to accomplish services that can help humans. Inspired by these challenges, the COACHES project addresses fundamental issues related to the design of a robust system of self-directed autonomous robots with high-level skills in environment modelling and scene understanding, distributed autonomous decision-making, short-term interaction with humans, and robust and safe navigation in overcrowded spaces. To this end, COACHES will provide an integrated solution to new challenges in: (1) knowledge-based representation of the environment; (2) estimation of human activities and needs using Markov and Bayesian techniques; (3) distributed decision-making under uncertainty to collectively plan assistance, guidance and delivery tasks using Decentralized Partially Observable Markov Decision Processes, with efficient algorithms to improve their scalability; and (4) multi-modal, short-term human-robot interaction to exchange information and requests. The COACHES project will provide a modular architecture to be integrated into real robots. We deploy COACHES in the city of Caen, in a mall called “Rive de l’orne”. COACHES is a cooperative system consisting of fixed cameras and mobile robots. The fixed cameras perform object detection, tracking, and detection of abnormal events (objects or behaviour).
The robots combine this information with what they perceive via their own sensors to provide information through a multi-modal interface, guide people to their destinations, show tramway stations, transport goods for elderly people, and so on. The COACHES robots will use different modalities (speech and displayed information) to interact with mall visitors, shopkeepers and mall managers. The project has enlisted an important end-user (Caen la mer), providing the scenarios in which the COACHES robots and systems will be deployed, and gathers together universities with complementary competences in cognitive systems (SU), robust image/video processing (VUB, UNICAEN), semantic scene analysis and understanding (VUB), collective decision-making using decentralized partially observable Markov decision processes and multi-agent planning (UNICAEN, Sapienza), and multi-modal, short-term human-robot interaction (Sapienza, UNICAEN).
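The abstract above describes fusing fixed-camera detections with the robots' own sensing using Bayesian techniques. A minimal sketch of that idea, assuming independent binary detectors with illustrative hit and false-alarm rates (the numbers and function names are ours, not from the COACHES project):

```python
# Hypothetical sketch: fusing a fixed camera's and a robot's detections of an
# abnormal event with Bayes' rule, assuming conditionally independent sensors.
# Sensor models (hit / false-alarm rates) are illustrative values only.

def fuse(prior, readings):
    """Posterior P(event | readings) for independent binary sensors.

    readings: list of (observed, p_hit, p_false_alarm) tuples.
    """
    p_event, p_no_event = prior, 1.0 - prior
    for observed, p_hit, p_fa in readings:
        like_event = p_hit if observed else 1.0 - p_hit   # P(reading | event)
        like_clear = p_fa if observed else 1.0 - p_fa     # P(reading | no event)
        p_event *= like_event
        p_no_event *= like_clear
    return p_event / (p_event + p_no_event)

# Camera flags the event, the robot's own sensor does not.
posterior = fuse(0.05, [(True, 0.9, 0.1), (False, 0.7, 0.2)])
```

Each sensor's reading scales the two hypotheses by its likelihood; normalising at the end gives the fused belief the robot could act on.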
Principles For Aiding Complex Military Decision Making
Paper presented to the Second International Command and Control Research and Technology Symposium, Monterey, CA. The Tactical Decision Making Under Stress (TADMUS) program is being conducted to apply recent developments in decision theory and human-system interaction technology to the design of a decision support system for enhancing tactical decision making under the highly complex conditions involved in anti-air warfare scenarios in littoral environments. Our goal is to present decision support information in a format that minimizes any mismatches between the cognitive characteristics of the human decision maker and the design and response characteristics of the decision support system. Decision makers are presented with decision support tools which parallel the cognitive strategies they already employ, thus reducing the number of decision-making errors. Hence, prototype display development has been based on decision-making models postulated by naturalistic decision-making theory. Incorporating current human-system interaction design principles is expected to reduce cognitive processing demands and thereby mitigate decision errors caused by cognitive overload, which have been documented through research and experimentation. Topics include a discussion of: (1) the theoretical background for the TADMUS program; (2) a description of the cognitive tasks performed; (3) the decision support and human-system interaction design principles incorporated to reduce the cognitive processing load on the decision maker; and (4) a brief description of the types of errors made by decision makers and interpretations of the cause of these errors based on the cognitive psychology literature. Funding for the research cited in this paper was received from the Cognitive and Neural Science and Technology Division of the Office of Naval Research.
Autonomous Decision-Making based on Biological Adaptive Processes for Intelligent Social Robots
International Mention in the doctoral degree. The unceasing development of autonomous robots in many different scenarios is driving a new revolution to improve our quality of life. Recent advances in human-robot interaction and machine learning are extending robots to social scenarios, where these systems are intended to assist humans in diverse tasks. Social robots are thus becoming a reality in many applications such as education, healthcare, entertainment, and assistance. Complex environments demand that social robots present adaptive mechanisms to cope with different situations and successfully execute their tasks. Considering these ideas, making autonomous and appropriate decisions is essential to exhibiting reasonable behaviour and operating well in dynamic scenarios.
Decision-making systems provide artificial agents with the capacity to decide how to behave depending on input information from the environment. In recent decades, human decision-making has served researchers as an inspiration for endowing robots with similar deliberation. Especially in social robotics, where people expect to interact with machines with human-like capabilities, biologically inspired decision-making systems have demonstrated great potential and attracted considerable interest. It is therefore expected that these systems will continue to provide a solid biological grounding and to improve the naturalness of human-robot interaction, the usability, and the acceptance of social robots in the coming years.
This thesis presents a decision-making system for social robots that act autonomously in healthcare, entertainment, and assistance. The system’s goal is to provide robots with natural and fluid human-robot interaction while they carry out their tasks. The decision-making system integrates into an existing software architecture with different modules that manage human-robot interaction, perception, and expressiveness. Inside this architecture, the decision-making system decides which behaviour the robot has to execute after evaluating information received from the other modules. These modules provide structured data about planned activities, perceptions, and artificial biological processes that evolve over time and are the basis for natural behaviour. The robot’s natural behaviour arises from the evolution of biological variables that emulate processes occurring in humans. We also propose a Motivational model, a module that emulates human biological processes to generate an artificial physiological and psychological state that influences the robot’s decision-making. These processes emulate the natural biological rhythms of the human organism to produce biologically inspired decisions that improve the naturalness the robot exhibits during human-robot interactions. The robot’s decisions also depend on what the robot perceives from the environment, planned events listed in the robot’s agenda, and the unique features of the user interacting with the robot.
The robot’s decisions depend on many internal and external factors that influence how the robot behaves. Users are the most critical stimuli the robot perceives, since they are the cornerstone of interaction. Social robots have to focus on assisting people in their daily tasks, considering that each person has different features and preferences. Thus, a robot devised for social interaction has to adapt its decisions to the people who interact with it. The first step towards adapting to different users is identifying the user the robot is interacting with. Then, it has to gather as much information as possible and personalise the interaction. The information about each user has to be actively updated when necessary, since outdated information may lead the user to reject the robot. Considering these facts, this work tackles user adaptation in three different ways:
• The robot incorporates user profiling methods to continuously gather information about the user through direct and indirect feedback.
• The robot has a Preference Learning System that predicts and adjusts the user’s preferences for the robot’s activities during the interaction.
• An Action-based Learning System grounded in Reinforcement Learning is introduced as the origin of motivated behaviour.
The functionalities mentioned above define the inputs the decision-making system receives for adapting its behaviour. Our decision-making system has been designed to be integrated into different robotic platforms thanks to its flexibility and modularity. Finally, we carried out several experiments to evaluate the architecture’s functionalities in real human-robot interaction scenarios. In these experiments, we assessed:
• How to endow social robots with adaptive affective mechanisms to overcome interaction limitations.
• Active user profiling using face recognition and human-robot interaction.
• A Preference Learning System we designed to predict and adapt the user’s preferences towards the robot’s entertainment activities in order to adapt the interaction.
• A Behaviour-based Reinforcement Learning System that allows the robot to learn the effects of its actions so as to behave appropriately in each situation.
• The biologically inspired robot behaviour using emulated biological processes, and how the robot creates social bonds with each user.
• The robot’s expressiveness of affect (emotion and mood) and of autonomic functions such as heart rate or blinking frequency.
Doctoral Programme in Electrical, Electronic and Automatic Engineering, Universidad Carlos III de Madrid. Chair: Richard J. Duro Fernández. Secretary: Concepción Alicia Monje Micharet. Member: Silvia Ross
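The Action-based Learning System above is grounded in Reinforcement Learning, with the user's reaction to an activity acting as reward. A minimal sketch of that idea (not the thesis's code; the activity names, reward values, and hyperparameters are illustrative):

```python
import random

# Illustrative sketch of reinforcement learning over robot activities:
# a single-state (bandit-style) Q-table updated incrementally, where the
# reward stands in for the user's observed reaction to each activity.

def train(actions, reward_fn, episodes=500, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in actions}               # value estimate per activity
    for _ in range(episodes):
        if rng.random() < epsilon:              # explore occasionally
            a = rng.choice(actions)
        else:                                   # otherwise exploit the best estimate
            a = max(q, key=q.get)
        q[a] += alpha * (reward_fn(a) - q[a])   # incremental Q-update
    return q

# Hypothetical user who enjoys the "dance" activity most.
rewards = {"dance": 1.0, "quiz": 0.4, "story": 0.2}
q = train(list(rewards), rewards.get)
```

After training, the greedy choice over `q` selects the activity this user responds to best, which is the adaptation effect the thesis targets.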
Measuring Regret: Emotional Aspects of Auction Design
Recent research strengthens the conjecture that human decision-making stems from a complex interaction of rational judgment and emotional processes. A prominent example of the impact of emotions on economic decision-making is the effect of regret-related information feedback on bidding behaviour in first-price sealed-bid auctions. Revealing the “missed opportunity to win” upon losing an auction results in higher bids; revealing the “money left on the table” upon winning an auction results in lower bids. The common explanation for this pattern is winner and loser regret. However, this explanation is still hypothetical, and little is known about the actual emotional processes that underlie the phenomenon. This paper investigates these emotional processes in auctions with varying feedback information, combining an auction experiment with psychophysiological measures that indicate emotional involvement. Our economic results are in line with those of previous studies. Moreover, we show that loser regret produces a stronger emotional response than winner regret. Remarkably, loser regret is strong for high values of the “missed opportunity.” However, the pattern for different amounts of “money left on the table” is diametrically opposed to what winner regret theory suggests.
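The two feedback quantities in the abstract have simple definitions in a first-price sealed-bid auction, sketched below (the function names are ours; the formulas follow the standard usage in the regret literature):

```python
# The two regret-related feedback signals in a first-price sealed-bid auction.

def money_left_on_table(winning_bid, second_highest_bid):
    # Winner regret signal: the winner would still have won with any bid
    # above the second-highest, so this surplus was given away.
    return winning_bid - second_highest_bid

def missed_opportunity(own_value, winning_bid):
    # Loser regret signal: surplus forgone by not (just) outbidding the
    # winner; zero when winning would not have been profitable anyway.
    return max(0, own_value - winning_bid)
```

For example, a winner who bid 80 against a second-highest bid of 65 left 15 on the table, while a loser with value 90 facing a winning bid of 80 missed a surplus of 10.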
Cognitive Activity Support Tools: Design of the Visual Interface
This dissertation is broadly concerned with interactive computational tools that support the performance of complex cognitive activities, examples of which are analytical reasoning, decision making, problem solving, sense making, forecasting, and learning. Examples of tools that support such activities are visualization-based tools in the areas of: education, information visualization, personal information management, statistics, and health informatics. Such tools enable access to information and data and, through interaction, enable a human-information discourse. In a more specific sense, this dissertation is concerned with the design of the visual interface of these tools. This dissertation presents a large and comprehensive theoretical framework to support research and design. Issues treated herein include interaction design and patterns of interaction for cognitive and epistemic support; analysis of the essential properties of interactive visual representations and their influences on cognitive and perceptual processes; an analysis of the structural components of interaction and how different operational forms of interaction components affect the performance of cognitive activities; an examination of how the information-processing load should be distributed between humans and tools during the performance of complex cognitive activities; and a categorization of common visualizations according to their structure and function, and a discussion of the cognitive utility of each category. This dissertation also includes a chapter that describes the design of a cognitive activity support tool, as guided by the theoretical contributions that comprise the rest of the dissertation. Those that may find this dissertation useful include researchers and practitioners in the areas of data and information visualization, visual analytics, medical and health informatics, data science, journalism, educational technology, and digital games
Probabilistic Human-Robot Information Fusion
This thesis is concerned with combining the perceptual abilities of mobile robots and human operators to execute tasks cooperatively. It is generally agreed that a synergy of human and robotic skills offers an opportunity to enhance the capabilities of today’s robotic systems, while also increasing their robustness and reliability. Systems which incorporate both human and robotic information sources have the potential to build complex world models, essential for both automated and human decision making. In this work, humans and robots are regarded as equal team members who interact and communicate on a peer-to-peer basis. Human-robot communication is addressed using probabilistic representations common in robotics. While communication can in general be bidirectional, this work focuses primarily on human-to-robot information flow. More specifically, the approach advocated in this thesis is to let robots fuse their sensor observations with observations obtained from human operators. While robotic perception is well-suited for lower level world descriptions such as geometric properties, humans are able to contribute perceptual information on higher abstraction levels. Human input is translated into the machine representation via Human Sensor Models. A common mathematical framework for humans and robots reinforces the notion of true peer-to-peer interaction. Human-robot information fusion is demonstrated in two application domains: (1) scalable information gathering, and (2) cooperative decision making. Scalable information gathering is experimentally demonstrated on a system comprised of a ground vehicle, an unmanned air vehicle, and two human operators in a natural environment. Information from humans and robots was fused in a fully decentralised manner to build a shared environment representation on multiple abstraction levels. 
Results are presented in the form of information exchange patterns, qualitatively demonstrating the benefits of human-robot information fusion. The second application domain adds decision making to the human-robot task. Rational decisions are made based on the robots’ current beliefs, which are generated by fusing human and robotic observations. Since humans are considered a valuable resource in this context, operators are only queried for input when the expected benefit of an observation exceeds the cost of obtaining it. The system can be seen as adjusting its autonomy at run time based on the uncertainty in the robots’ beliefs. A navigation task is used to demonstrate the adjustable-autonomy system experimentally. Results from two experiments are reported: a quantitative evaluation of human-robot team effectiveness, and a user study comparing the system to classical teleoperation. Results show the superiority of the system with respect to performance, operator workload, and usability.
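The query-the-operator rule above (ask only when the expected benefit of a human observation exceeds its cost) can be sketched as follows. This is a hedged illustration, not the thesis's implementation: we use the binary entropy of the robot's current belief as an upper bound on what a perfect human answer could be worth, and the cost/benefit scale is arbitrary.

```python
import math

# Illustrative adjustable-autonomy rule: query the human operator only when
# the current uncertainty (binary entropy of the belief, in bits) times an
# assumed value-per-bit exceeds the cost of interrupting the operator.

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def should_query_human(belief, query_cost, benefit_per_bit=1.0):
    # A perfect answer removes at most the current entropy, so that bounds
    # the expected benefit of asking.
    expected_benefit = benefit_per_bit * binary_entropy(belief)
    return expected_benefit > query_cost
```

Under this rule the robot asks for help when its belief is near 0.5 (maximal uncertainty) and acts autonomously when it is already confident, mirroring the run-time autonomy adjustment described in the abstract.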
A decision making aid for evaluating total ship system effectiveness.
The aim of this study was to contribute to the knowledge of Total Systems theory and methodologies by developing an aid to decision making on the effectiveness of complex man-machine organisations. Sponsored by the Ministry of Defence (Navy) as a collaborative research project, the study was based on Royal Navy ships and linked with certain MOD(N) projects working on related effectiveness problems. Initial pre-feasibility and then feasibility studies established a simple model of Effectiveness as the combination of Availability, Performance and Human Factors, which was followed by a more thorough examination of the Availability function. The development of an Information System designed for the collection and analysis of reliability and maintainability data was central to this phase of the research. This culminated in a comprehensive description of the Phase I hardware, software requirements and information distribution network to be installed and operational from 1983. The Human Factors research was linked to two additional Ministry of Defence (Navy) projects, which made the Human Factors data available. This data, collected from five ships of the Type 42 Guided Weapons Destroyer class, was concerned with the Operations Room organisation. Using this database, a subjective analysis produced key indicators, which were used with a rating-scale technique to develop profiles. Following a systemic overview, three interacting indicators (Variable Disjunction of Information, Knowledge, and Information Processing) were used as the basis of an Information Transfer Function conceptual model. This model, when combined with Systems Interaction Diagrams, enabled a Methodology to be designed which was evaluated against a three-man functional element within a total Operations Room complement of 33 men.
On the premise that the Human Factors function could be transformed into metric data, the framework of a Human Factors model was developed, based on an existing Total Ship Availability Model, with the potential that the two could be combined to produce an Effectiveness model. The Information System, the proposed Methodology and the framework of a Ship Effectiveness Model were then incrementally and theoretically linked in order to develop the organisation of a decision-making aid for evaluating the effectiveness of complex man-machine systems. The research was not intended to test or validate the decision-making aid, as aspects of this will need to be approved by Ministry of Defence (Navy) authorities before proceeding to the next phase of implementing the results so far produced.
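The simple model established by the feasibility studies treats Effectiveness as the combination of Availability, Performance and Human Factors. A minimal sketch, assuming each factor is scored on [0, 1] and combined multiplicatively (the thesis leaves the exact combination rule to the full Effectiveness model, so the product is our illustrative assumption):

```python
# Hedged sketch of the simple effectiveness model: Effectiveness as the
# combination of Availability, Performance and Human Factors. Treating each
# as a [0, 1] score and multiplying them is an illustrative assumption.

def effectiveness(availability, performance, human_factors):
    for score in (availability, performance, human_factors):
        if not 0.0 <= score <= 1.0:
            raise ValueError("scores must lie in [0, 1]")
    return availability * performance * human_factors
```

A multiplicative rule captures the intuition that a weak link dominates: a ship that is rarely available, or whose crew performs poorly, is ineffective regardless of its other scores.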
A Quantitative Investigation into the Design Trade-offs in Decision Support Systems
Users frequently make decisions about which information systems to incorporate into their information analysis, and they abandon tools that they perceive as untrustworthy or ineffective. Decision support systems (automated agents built on complex algorithms) are often effective but opaque; meanwhile, simple tools are transparent and predictable but limited in their usefulness. Tool creators have responded by increasing the transparency (via explanation) and customizability (via control parameters) of complex algorithms, or by improving the effectiveness of simple algorithms (for example, adding personalization to keyword search). Unfortunately, requiring user input or attention consumes cognitive bandwidth, which can hurt performance in time-sensitive operations. At the same time, improving the performance of algorithms typically makes the underlying computations more complex, reducing predictability, increasing potential mistrust, and sometimes degrading user performance. Ideally, software engineers could create systems that accommodate human cognition; however, not all of the factors that affect decision making in human-agent interaction (HAI) are known. In this work, we conduct a quantitative investigation into the role of human insight, awareness of system operations, cognitive load, and trust in the context of decision support systems. We conduct several experiments with different task parameters that shed light on the relationship between human cognition and the availability of system explanation and control under varying degrees of algorithm error. Human decision-making behavior is quantified in terms of which information tools are used, which information is incorporated, and domain decision success. The measurement of intermediate cognitive variables allows for the testing of mediation effects, which facilitates the explanation of effects related to system explanation, control, and error.
Key findings are: (1) a simple, reliable, domain-independent profiling test can predict human decision behavior in the HAI context; (2) correct user beliefs about information systems mediate the effects of system explanations to predict adherence to advice; and (3) explanations from, and control over, complex algorithms increase trust, satisfaction, interaction, and adherence, but they also cause humans to form incorrect beliefs about data.
Recent advances of HCI in decision-making tasks for optimized clinical workflows and precision medicine.
The ever-increasing amount of biomedical data is enabling new large-scale studies, even though ad hoc computational solutions are required. The most recent Machine Learning (ML) and Artificial Intelligence (AI) techniques have been achieving outstanding performance and an important impact in clinical research, aiming at precision medicine as well as at improving healthcare workflows. However, the inherent heterogeneity and uncertainty of healthcare information sources pose new, compelling challenges for clinicians in their decision-making tasks. Only the proper combination of AI and human intelligence capabilities, explicitly taking into account effective and safe interaction paradigms, can permit the delivery of care that outperforms what either can do separately. Therefore, Human-Computer Interaction (HCI) plays a crucial role in the design of software oriented to decision-making in medicine. In this work, we systematically review and discuss several research fields strictly linked to HCI and clinical decision-making, subdividing the articles into six themes: Interfaces, Visualization, Electronic Health Records, Devices, Usability, and Clinical Decision Support Systems. These articles typically present overlaps among the themes, revealing that HCI interconnects multiple topics. With the goal of focusing on HCI and design aspects, the articles under consideration were grouped into four clusters. The advances in AI can effectively support the physicians' cognitive processes, which play a central role in decision-making tasks, because human mental behavior cannot be completely emulated and captured; the human mind might solve a complex problem even without a statistically significant amount of data by relying upon domain knowledge. For this reason, technology must focus on interactive solutions that support physicians effectively in their daily activities, exploiting their unique knowledge and evidence-based reasoning, as well as improving the various aspects highlighted in this review.