
    Known and unknown requirements in healthcare

    We report experience in requirements elicitation of domain knowledge from experts in clinical and cognitive neurosciences. The elicitation target was a causal model for early signs of dementia indicated by changes in user behaviour and errors apparent in logs of computer activity. A Delphi-style process consisting of workshops with experts followed by a questionnaire was adopted. The paper describes how the elicitation process had to be adapted to deal with problems encountered in terminology and limited consensus among the experts. In spite of the difficulties encountered, a partial causal model of user behavioural pathologies and errors was elicited. This informed requirements for configuring data- and text-mining tools to search for specific data patterns. Lessons learned for elicitation from experts are presented, and the implications for requirements are discussed as “unknown unknowns”, along with configuration requirements for directing data-/text-mining tools towards refining awareness requirements in healthcare applications.

    Certified Reputation - How an Agent Can Trust a Stranger

    Current computational trust models are usually built either on an agent's direct experience of an interaction partner (interaction trust) or on reports provided by third parties about their experiences with that partner (witness reputation). However, both of these approaches have their limitations. Models using direct experience often perform poorly until an agent has had a sufficient number of interactions to build up a reliable picture of a particular partner, and witness reports rely on self-interested agents being willing to freely share their experience. To this end, this paper presents Certified Reputation (CR), a novel model of trust that can overcome these limitations. Specifically, CR works by allowing agents to actively provide third-party references about their previous performance as a means of building up their potential interaction partners' trust in them. In this way, trust relationships can be established quickly and at very little cost to the parties involved. Here we empirically evaluate CR and show that it helps agents pick better interaction partners more quickly than models that do not incorporate this form of trust.
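
    The mechanism described above can be illustrated with a small Python sketch. This is a toy illustration, not the paper's actual CR model: the [-1, 1] rating scale, the "keep the best few references" policy, and the plain averaging are all assumptions. A provider collects ratings from past partners and presents them to a stranger, which averages them into an initial trust estimate.

        from dataclasses import dataclass
        from statistics import mean

        @dataclass
        class CertifiedRating:
            """A reference about a provider, issued by a past interaction partner."""
            rater: str      # who issued the rating
            provider: str   # who the rating is about
            value: float    # rated performance on an assumed [-1, 1] scale

        class Provider:
            def __init__(self, name: str, max_refs: int = 5):
                self.name = name
                self.max_refs = max_refs
                self.references: list[CertifiedRating] = []

            def receive_rating(self, rating: CertifiedRating) -> None:
                # A self-interested provider keeps only its best references;
                # evaluators therefore have to allow for this optimistic bias.
                self.references.append(rating)
                self.references.sort(key=lambda r: r.value, reverse=True)
                self.references = self.references[:self.max_refs]

            def present_references(self) -> list[CertifiedRating]:
                return list(self.references)

        def certified_reputation(provider: Provider) -> float:
            """Trust estimate for a stranger, based only on the references it presents."""
            refs = provider.present_references()
            return mean(r.value for r in refs) if refs else 0.0

        # Usage: a client that has never met "seller-42" can still form an estimate.
        p = Provider("seller-42")
        for v in (0.9, 0.7, 0.8):
            p.receive_rating(CertifiedRating(rater="past-client", provider=p.name, value=v))
        print(certified_reputation(p))   # 0.8 (rounded)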

    An integrated trust and reputation model for open multi-agent systems

    Trust and reputation are central to effective interactions in open multi-agent systems, in which agents owned by a variety of stakeholders continuously enter and leave the system. This openness means existing trust and reputation models cannot readily be used, since their performance suffers when there are various (unforeseen) changes in the environment. To this end, this paper presents FIRE, a trust and reputation model that integrates a number of information sources to produce a comprehensive assessment of an agent's likely performance in open systems. Specifically, FIRE incorporates interaction trust, role-based trust, witness reputation, and certified reputation to provide trust metrics in most circumstances. FIRE is empirically evaluated and is shown to help agents gain better utility (by effectively selecting appropriate interaction partners) than our benchmarks in a variety of agent populations. It is also shown that FIRE is able to respond effectively to changes that occur in an agent's environment.
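
    As a rough illustration of how several information sources can be folded into one assessment, the sketch below combines whichever components are available into a weighted average. The weights, the [-1, 1] scale, and the simple weighted mean are assumptions for illustration, not FIRE's published formulas.

        from typing import Optional

        # Assumed component weights -- illustrative only, not the values used in FIRE.
        WEIGHTS = {
            "interaction_trust": 2.0,
            "role_based_trust": 1.0,
            "witness_reputation": 1.0,
            "certified_reputation": 0.5,
        }

        def combined_trust(components: dict[str, Optional[float]]) -> Optional[float]:
            """Weighted combination of whichever trust components are available.

            Component values are assumed to lie in [-1, 1]; missing sources are
            simply skipped, so some assessment is possible in most circumstances.
            """
            num = den = 0.0
            for name, weight in WEIGHTS.items():
                value = components.get(name)
                if value is not None:
                    num += weight * value
                    den += weight
            return num / den if den else None   # None: no information at all

        # Usage: a newcomer with no interaction history can still be assessed from
        # its role and the certified references it presents.
        print(combined_trust({
            "interaction_trust": None,
            "role_based_trust": 0.4,
            "witness_reputation": None,
            "certified_reputation": 0.8,
        }))   # ~0.53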

    Ontologies as Facilitators for Repurposing Web Documents

    This paper investigates the role of ontologies as a central part of an architecture for repurposing existing material from the Web. A prototype system called ArtEquAKT is presented, which combines information extraction, knowledge management and consolidation techniques, and adaptive document generation. All of these components are co-ordinated using one central ontology, which provides a common vocabulary for describing information fragments as they are processed. Each component of the architecture is described in detail and an evaluation of the system is discussed. Conclusions are drawn as to the effectiveness of such an approach, and further challenges are outlined.
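
    The role of the central ontology can be illustrated with a small, purely hypothetical pipeline (the concept and property names below are placeholders, not ArtEquAKT's actual ontology or APIs): extraction keeps only fragments typed against the shared vocabulary, consolidation merges fragments about the same instance, and generation renders whatever the consolidated knowledge base holds.

        # Illustrative sketch only: the ontology and the pipeline functions here are
        # hypothetical. The point is that extraction, consolidation and generation
        # all speak one shared vocabulary.

        ONTOLOGY = {
            "Artist": ["name", "birth_date", "movement"],   # concept -> allowed properties
        }

        def extract(fragments):
            """Information extraction: keep only fragments typed against the ontology."""
            return [f for f in fragments
                    if f["concept"] in ONTOLOGY and f["property"] in ONTOLOGY[f["concept"]]]

        def consolidate(fragments):
            """Knowledge consolidation: merge fragments about the same (concept, instance)."""
            kb = {}
            for f in fragments:
                kb.setdefault((f["concept"], f["instance"]), {})[f["property"]] = f["value"]
            return kb

        def generate(kb, concept, instance):
            """Adaptive document generation: render whatever the ontology-typed KB holds."""
            facts = kb.get((concept, instance), {})
            return f"{instance}: " + "; ".join(f"{p} = {v}" for p, v in facts.items())

        raw = [
            {"concept": "Artist", "instance": "Renoir", "property": "birth_date", "value": "1841"},
            {"concept": "Artist", "instance": "Renoir", "property": "movement", "value": "Impressionism"},
            {"concept": "Artist", "instance": "Renoir", "property": "shoe_size", "value": "43"},  # dropped
        ]
        print(generate(consolidate(extract(raw)), "Artist", "Renoir"))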

    On Handling Inaccurate Witness Reports

    Witness reports are a key building block for reputation systems in open multi-agent systems, in which agents owned by a variety of stakeholders continuously enter and leave the system. However, in such open and dynamic environments, these reports can be inaccurate because of the differing views of the reporters. Moreover, owing to the conflicting interests that stem from the multiple stakeholders, some witnesses may deliberately provide false information to serve their own interests. In either case, if such inaccuracy is not recognised and dealt with, it will adversely affect the function of the reputation model. To this end, this paper presents a generic method that detects inaccuracy in witness reports and updates the witness's credibility accordingly, so that less credence is placed on its future reports. Our method is empirically evaluated and is shown to help agents effectively detect inaccurate witness reports in a variety of scenarios in which varying degrees of inaccuracy are introduced.
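
    One simple way to realise this idea, sketched below under assumed details rather than the paper's exact method, is to compare each witness's report with the performance the evaluating agent later observes first-hand, and to shrink or grow that witness's credibility weight accordingly, so its future reports count for less or more.

        class WitnessCredibility:
            """Toy credibility tracker: the update rule, prior and rating scale are
            assumptions, not the paper's actual mechanism."""

            def __init__(self, learning_rate: float = 0.2):
                self.learning_rate = learning_rate
                self.credibility: dict[str, float] = {}   # witness -> weight in [0, 1]

            def weight(self, witness: str) -> float:
                return self.credibility.get(witness, 0.5)  # neutral prior for strangers

            def update(self, witness: str, reported: float, observed: float) -> None:
                """Raise credibility when a report matches later first-hand experience
                (both on an assumed [-1, 1] scale) and lower it when it does not."""
                error = abs(reported - observed) / 2.0      # normalised to [0, 1]
                accuracy = 1.0 - error
                old = self.weight(witness)
                self.credibility[witness] = old + self.learning_rate * (accuracy - old)

            def weighted_reputation(self, reports: dict[str, float]) -> float:
                """Aggregate reports, giving less credence to low-credibility witnesses."""
                total = sum(self.weight(w) for w in reports)
                return sum(self.weight(w) * r for w, r in reports.items()) / total if total else 0.0

        # Usage: a lying witness gradually loses influence, an honest one gains it.
        wc = WitnessCredibility()
        for _ in range(10):
            wc.update("liar", reported=1.0, observed=-0.8)
            wc.update("honest", reported=-0.7, observed=-0.8)
        print(round(wc.weight("liar"), 2), round(wc.weight("honest"), 2))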

    Developing an integrated trust and reputation model for open multi-agent systems

    Trust and reputation are central to effective interactions in open multi-agent systems, in which agents owned by a variety of stakeholders can enter and leave the system at any time. This openness means existing trust and reputation models cannot readily be used. To this end, we present FIRE, a trust and reputation model that integrates a number of information sources to produce a comprehensive assessment of an agent's likely performance. Specifically, FIRE incorporates interaction trust, role-based trust, witness reputation, and certified reputation to provide a trust metric in most circumstances. FIRE is empirically benchmarked and is shown to help agents effectively select appropriate interaction partners.
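
    The partner-selection side of such a benchmark can be illustrated with the short sketch below. The greedy-with-occasional-exploration policy and the [-1, 1] trust scale are assumptions for illustration, not the selection strategy evaluated in the paper.

        import random
        from typing import Optional

        def select_partner(trust: dict[str, Optional[float]], explore_prob: float = 0.1) -> str:
            """Choose an interaction partner from trust assessments on an assumed [-1, 1] scale.

            Illustrative policy only: usually pick the most trusted known partner, but
            occasionally try an unassessed one so that newcomers can be discovered.
            """
            unknown = [agent for agent, t in trust.items() if t is None]
            known = {agent: t for agent, t in trust.items() if t is not None}
            if unknown and (not known or random.random() < explore_prob):
                return random.choice(unknown)
            return max(known, key=known.get)

        # Usage with hypothetical providers: "p2" is chosen most of the time.
        print(select_partner({"p1": 0.2, "p2": 0.7, "p3": None}))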

    Knowledge-based acquisition of tradeoff preferences of negotiating agents

    A wide range of algorithms have been developed for various types of automated negotiation. In developing such algorithms the main focus has been on their efficiency and effectiveness. However, this is only part of the picture. Agents typically negotiate on behalf of their owners, and for this to be effective the agent must be able to adequately represent the owner's preferences. However, the process by which such knowledge is acquired is typically left unspecified. To remove this shortcoming, we present a case study showing how the knowledge for a particular negotiation algorithm can be acquired. More precisely, from an analysis of the automated negotiation model, we identified that user trade-off preferences play a fundamental role in negotiation in general. This topic has also received little attention in research on user preference elicitation for general decision-making problems. In a previous paper, we proposed an exhaustive method for acquiring user trade-off preferences. In this paper, we develop another method that removes the limitation of the exhaustive method's high user workload. Although we cannot claim that it captures user trade-off preferences exactly, it models the main commonalities of trade-off relations and also reflects users' individual differences.
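
    As an illustration only, trade-off preferences over negotiation issues can be approximated by a weight vector fitted from a handful of the user's pairwise choices between offers, and then used to score candidate offers during negotiation. The issue names, the additive utility form, and the perceptron-style update below are assumptions, not the acquisition method the paper proposes.

        # Illustrative sketch: issues, scales and the update rule are assumptions.

        ISSUES = ["price", "delivery_time", "quality"]   # hypothetical negotiation issues

        def score(offer: dict[str, float], weights: dict[str, float]) -> float:
            """Additive utility: each issue value is assumed already normalised to [0, 1]."""
            return sum(weights[i] * offer[i] for i in ISSUES)

        def fit_tradeoff_weights(choices, step: float = 0.1, epochs: int = 50):
            """Fit weights from pairwise choices (preferred_offer, rejected_offer),
            i.e. from a handful of questions rather than an exhaustive elicitation."""
            weights = {i: 1.0 / len(ISSUES) for i in ISSUES}
            for _ in range(epochs):
                for preferred, rejected in choices:
                    if score(preferred, weights) <= score(rejected, weights):
                        for i in ISSUES:   # nudge weights toward the preferred offer
                            weights[i] += step * (preferred[i] - rejected[i])
            total = sum(weights.values())
            return {i: w / total for i, w in weights.items()}

        # Usage: the user repeatedly prefers the cheaper offer over the faster one.
        a = {"price": 0.9, "delivery_time": 0.2, "quality": 0.5}
        b = {"price": 0.3, "delivery_time": 0.8, "quality": 0.5}
        w = fit_tradeoff_weights([(a, b), (a, b)])
        print(max(w, key=w.get))   # "price" ends up with the largest weight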