
    Designing for dynamic task allocation

    Future platforms are envisioned in which human-machine teams are able to share and trade tasks as situational demands change. It seems that human-machine coordination has not received the attention it deserves from past and present approaches to task allocation. In this paper, a simple way to make coordination requirements explicit is proposed, and a dual-route approach to dynamic task allocation is suggested. The advantages of adaptable automation, in which the human adjusts the way tasks are divided and shared, are complemented with those of adaptive automation, in which the machine allocates tasks. To support design for dynamic task allocation, a theory of task allocation decision making based on the modeling of trust is proposed. It is suggested that dynamic task allocation improves when information about the situational abilities of agents is provided and the cost of observing and re-directing agents is reduced.
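    The abstract's final claim can be read as a decision rule: allocate a task to the agent whose situational ability on that task, net of the cost of observing and re-directing that agent, is highest. The sketch below is an illustrative reading, not the paper's actual method; the function and agent names are hypothetical.

    ```python
    def allocate_task(task, agents, ability, observe_cost):
        """Pick the agent with the best net utility for this task.

        ability(agent, task) -> situational ability estimate in [0, 1]
        observe_cost(agent)  -> cost of observing/re-directing that agent
        (Both are hypothetical callables, not defined in the paper.)
        """
        return max(agents, key=lambda agent: ability(agent, task) - observe_cost(agent))
    ```

    Under this rule, a slightly less able agent can still win the allocation if it is much cheaper to observe and re-direct, which is the trade-off the abstract points at.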

    Towards Task Allocation Decision Support by means of Cognitive Modeling of Trust

    An important issue in research on human-machine cooperation concerns how tasks should be dynamically allocated within a human-machine team in order to improve team performance. The ability to support humans in task allocation decision making requires a thorough understanding of its underlying cognitive processes, and of relative trust more specifically. This paper presents a computational agent-based model of these cognitive processes and proposes an experiment design that can be used to validate theoretical aspects of this model.

    Liber

    In this paper a cognitive model for visual attention is introduced. The cognitive model is part of the design of a software agent that supports a naval warfare officer in the task of compiling a tactical picture of the situation in the field. An executable formal specification of the cognitive model is given, and a case study is described in which the model is used to simulate a human subject's attention. The foundation of the model is a formal specification of representation relations for attentional states, specifying their intended meaning. The model has been automatically verified against these relations. © 2006 IEEE

    Supporting Intelligence Analysts with a Trust-Based Question-Answering System

    Intelligence analysts have to work in highly demanding circumstances. This causes mistakes with severe consequences, which is why support systems for intelligence analysts have been developed. The support system proposed in this paper assists humans by offering support that improves their performance without restricting their freedom. This is done with a trust-based question-answering system (T-QAS). An important part of T-QAS is its set of trust models, which keep track of trust in each of the agents gathering information. Using these trust models, the system can support the intelligence analyst by: 1) helping to decide which agents are trusted enough to receive questions, 2) providing information about the reliability of each of the sources used, and 3) advising in making decisions based on information from possibly unreliable sources. An implementation of the last two capabilities of T-QAS is evaluated in an experiment in which participants perform a decision-making task with information from possibly unreliable sources. Results show that the proposed T-QAS support indeed helps participants improve their performance. We therefore expect that future intelligence analyst support systems can benefit from the inclusion of T-QAS.
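    As a rough illustration of what a per-source trust model might look like (the class, the exponential-moving-average update rule, and the threshold below are assumptions for exposition, not T-QAS's actual design), trust in each information-gathering agent can be tracked and used to decide who is trusted enough to receive questions:

    ```python
    class TrustModel:
        """Keeps a trust estimate per information source, updated from feedback."""

        def __init__(self, initial_trust=0.5, learning_rate=0.2):
            self.trust = {}
            self.initial_trust = initial_trust  # prior trust for unseen sources
            self.learning_rate = learning_rate  # weight of the newest observation

        def get(self, source):
            return self.trust.get(source, self.initial_trust)

        def update(self, source, outcome):
            """outcome: 1.0 if the source's information proved correct, 0.0 otherwise."""
            old = self.get(source)
            self.trust[source] = old + self.learning_rate * (outcome - old)

        def trusted_sources(self, threshold=0.6):
            """Sources trusted enough to receive questions (capability 1 above)."""
            return [s for s, t in self.trust.items() if t >= threshold]
    ```

    With such estimates in hand, the system can also annotate answers with each source's current reliability (capability 2) and weigh conflicting reports when advising on a decision (capability 3).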

    Disclosure with an emotional intelligent synthetic partner

    Talking and writing about one's feelings has a beneficial effect on one's physical and psychological health. More specifically, conversation that evokes disclosure of emotions and traumatic events, rather than chitchat, has a positive effect on one's health. Astronauts on a mission are exposed to stressful situations, without the presence of a therapist or even comfortable communication with home base. Given that it is important to be able to express one's feelings regularly, this situation clearly threatens the success of long-duration space missions. In this paper we discuss using an emotionally intelligent relational agent to help solve this problem.

    Personalisation of computational models of attention by simulated annealing parameter tuning

    In this paper it is explored whether personalisation of an existing computational model of attention can increase the model's validity. Computational models of attention are applied, for instance, in attention allocation support systems, which can benefit from this increased validity. Personalisation is done by tuning the model's parameters during a training phase using Simulated Annealing (SA). The adapted attention model is validated using a task varying in difficulty and attentional demand. Results show that the personalised attention model estimates an individual's attention more accurately than the model without personalisation. © 2010 IEEE
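    The parameter-tuning step described above can be illustrated with a generic simulated annealing loop. The loss function, Gaussian step size, and geometric cooling schedule below are illustrative assumptions, not the paper's actual settings; in the paper's setting, `loss` would measure the mismatch between the attention model's output and a subject's observed attention during the training phase.

    ```python
    import math
    import random

    def simulated_annealing(loss, init_params, steps=1000, t0=1.0,
                            cooling=0.995, step_size=0.1, seed=0):
        """Tune a parameter vector to minimise `loss` by simulated annealing."""
        rng = random.Random(seed)
        params = list(init_params)
        best = params[:]
        cur_loss = best_loss = loss(params)
        t = t0
        for _ in range(steps):
            # Propose a random perturbation of the current parameters.
            cand = [p + rng.gauss(0.0, step_size) for p in params]
            cand_loss = loss(cand)
            # Always accept improvements; accept worse candidates with
            # probability exp(-delta / t), which shrinks as t cools.
            if cand_loss < cur_loss or rng.random() < math.exp(-(cand_loss - cur_loss) / t):
                params, cur_loss = cand, cand_loss
                if cur_loss < best_loss:
                    best, best_loss = params[:], cur_loss
            t *= cooling
        return best, best_loss
    ```

    The temperature-controlled acceptance of worse candidates is what lets the tuner escape local minima early on, before the cooling schedule makes the search effectively greedy.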

    A framework for explaining reliance on decision aids

    This study presents a framework for understanding the task and psychological factors that affect reliance on advice from decision aids. The framework describes how informational asymmetries, in combination with rational, motivational and heuristic factors, explain human reliance behavior. To test hypotheses derived from the framework, 79 participants performed an uncertain pattern learning and prediction task. They received advice from a decision aid either before or after they expressed their own prediction, and received feedback about performance. When their prediction conflicted with that of the decision aid, participants had to choose to rely on their own prediction or on that of the decision aid. We measured reliance behavior, perceived and actual reliability of self and decision aid, responsibility felt for task outcomes, understandability of one's own reasoning and of the decision aid, and attribution of errors. We found evidence that (1) reliance decisions are based on relative trust, but only when advice is presented after people have formed their own prediction; (2) when people rely as much on themselves as on the decision aid, they still perceive the decision aid to be more reliable than themselves; (3) the less people perceive the decision aid's reasoning to be cognitively available and understandable, the less they rely on it; (4) the more people feel responsible for the task outcome, the more they rely on the decision aid; (5) when feedback about performance is provided, people underestimate both their own reliability and that of the decision aid; (6) underestimation of the reliability of the decision aid is more prevalent and more persistent than underestimation of one's own reliability; and (7) unreliability of the decision aid is less often attributed to temporary and uncontrollable (but not external) causes than one's own unreliability. These seven findings are potentially applicable to the design of improved decision aids and training procedures. © 2012 Elsevier Ltd. All rights reserved.