
    No Grice: Computers that Lie, Deceive and Conceal

    In the future, our daily interactions with other people, with computers, robots, and smart environments will be recorded and interpreted by computers or by intelligence embedded in environments, furniture, robots, displays, and wearables. These sensors record our activities, our behavior, and our interactions. Fusing such information and reasoning about it makes it possible, using computational models of human behavior and activities, to provide context- and person-aware interpretations of human behavior and activities, including the determination of attitudes, moods, and emotions. Sensors include cameras, microphones, eye trackers, position and proximity sensors, and tactile or smell sensors. Sensors can be embedded in an environment, but they can also move around, for example as part of a mobile social robot, or as part of devices we carry with us or that are embedded in our clothes or body.

    Our daily behavior and interactions are thus recorded and interpreted. How can we use such environments, and how can such environments use us? Do we always want to cooperate with these environments, and do these environments always want to cooperate with us? In this paper we argue that there are many reasons why users, or rather the human partners of these environments, want to keep information about their intentions and emotions hidden from these smart environments. Conversely, their artificial interaction partners may have similar reasons not to give away all the information they have, or to treat their human partner as an opponent rather than as someone to be supported by smart technology.

    This is elaborated in this paper. We survey examples of human-computer interaction in which there is not necessarily a goal to be explicit about intentions and feelings. In subsequent sections we look at (1) the computer as a conversational partner, (2) the computer as a butler or diary companion, (3) the computer as a teacher or trainer acting in a virtual training environment (a serious game), (4) sports applications (which are not necessarily different from serious-game or educational environments), and (5) games and entertainment applications.

    A Direct Reputation Model for VO Formation

    We show that reputation is a basic ingredient in the Virtual Organisation (VO) formation process. Agents can use the experience gained in past direct interactions to model others' reputations and to decide whether to join a VO, or which set of partners is the most suitable. Reputation values are computed using a reinforcement learning algorithm, so agents can learn and adapt their reputation models of their partners according to the partners' recent behaviour. Our approach is especially powerful when an agent participates in a VO whose members can change their behaviour to exploit their partners. The reputation model presented in this paper addresses the issues of deception and fraud, which have been ignored in current models of VO formation.
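    The abstract above mentions computing reputation values with a reinforcement learning algorithm so that recent behaviour dominates. A minimal sketch of one such update rule is an exponential moving average over interaction outcomes; the class, field names, and learning rate below are illustrative assumptions, not the paper's actual model.

```python
class ReputationModel:
    """Tracks per-partner reputation learned from direct interaction outcomes."""

    def __init__(self, alpha=0.3, initial=0.5):
        self.alpha = alpha      # learning rate: weight given to recent behaviour
        self.initial = initial  # prior reputation for unknown partners
        self.scores = {}        # partner id -> reputation in [0, 1]

    def update(self, partner, outcome):
        """outcome in [0, 1]: 1.0 = fully cooperative, 0.0 = defection/fraud."""
        old = self.scores.get(partner, self.initial)
        # Reinforcement-style update: a partner that starts exploiting others
        # sees its reputation drop quickly, since recent outcomes dominate.
        self.scores[partner] = old + self.alpha * (outcome - old)
        return self.scores[partner]

    def best_partners(self, candidates, k):
        """Pick the k most reputable candidates when forming a new VO."""
        return sorted(candidates,
                      key=lambda p: self.scores.get(p, self.initial),
                      reverse=True)[:k]
```

    With such a model, an agent deciding whether to join a VO can rank candidate partners by learned reputation instead of trusting all of them equally.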

    CRiBAC: Community-centric role interaction based access control model

    As one of the most efficient solutions to complex and large-scale problems, multi-agent cooperation has been in the limelight for the past few decades. Recently, many research projects have focused on context-aware cooperation to dynamically provide complex services. As cooperation in multi-agent systems (MASs) becomes more common, guaranteeing the security of such cooperation takes on even greater importance. However, existing security models do not reflect agents' unique features, including cooperation and context-awareness. In this paper, we propose a Community-centric Role interaction-based Access Control (CRiBAC) model to allow secure cooperation in MASs. To do this, we refine and extend our preliminary RiBAC model, which was proposed earlier to support secure interactions among agents, by introducing the new concept of interaction permission, and then extend it to CRiBAC to support community-based cooperation among agents. We analyze potential problems related to interaction permissions and propose two approaches to address them. We also propose an administration model to facilitate the administration of CRiBAC policies. Finally, we present the implementation of a prototype system based on a sample scenario to assess the proposed work and show its feasibility.
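    The central idea above is the interaction permission: a role-to-role grant that controls which agent interactions are allowed within a community. A minimal sketch of such a check might look as follows; the class names, fields, and the rescue scenario are hypothetical illustrations, not CRiBAC's actual notation.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class InteractionPermission:
    """Allows an agent in `initiator_role` to invoke `action` on `target_role`."""
    initiator_role: str
    action: str
    target_role: str

@dataclass
class Community:
    """A cooperating group of agents with role assignments and permissions."""
    name: str
    role_of: dict = field(default_factory=dict)    # agent id -> role
    permissions: set = field(default_factory=set)  # set of InteractionPermission

    def is_allowed(self, initiator, action, target):
        """Check whether `initiator` may perform `action` on `target`."""
        # Authorization is decided on roles, not individual agent identities.
        p = InteractionPermission(self.role_of.get(initiator),
                                  action,
                                  self.role_of.get(target))
        return p in self.permissions

# Usage: a community in which scouts may report to a coordinator.
rescue = Community("rescue")
rescue.role_of.update({"agent1": "scout", "agent2": "coordinator"})
rescue.permissions.add(InteractionPermission("scout", "report", "coordinator"))
```

    Keeping permissions at the role level, as sketched here, is what lets an administration model manage policies without enumerating every pair of agents.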

    EDI and intelligent agents integration to manage food chains

    Electronic Data Interchange (EDI) is a type of inter-organizational information system that permits the automatic and structured communication of data between organizations. Although EDI is used for internal communication, its main application is in facilitating closer collaboration between organizational entities, e.g. suppliers, credit institutions, and transportation carriers. This study illustrates how agent technology can be used to solve real food supply chain inefficiencies and optimise the logistics network. For instance, we explain how agribusiness companies can use agent technology in association with EDI to collect data from retailers, group them into meaningful categories, and then perform different functions. As a result, the distribution chain can be managed more efficiently. Intelligent agents also make timely data available to inventory management, reducing stocks and tied-up capital. Intelligent agents are adaptive to change, so they are valuable in a dynamic environment where new products or partners have entered the supply chain. This flexibility gives agent technology a relative advantage which, for pioneer companies, can be a competitive advantage. The study concludes with recommendations and directions for further research.
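    The workflow described above, an agent collecting retailer data, grouping it into meaningful categories, and then acting on the result, can be sketched minimally as below. The record fields, category names, and reorder threshold are assumptions for illustration only and are not part of any EDI standard or of the study's actual system.

```python
from collections import defaultdict

def group_by_category(records):
    """Group raw retailer records into product categories."""
    grouped = defaultdict(list)
    for rec in records:
        grouped[rec["category"]].append(rec)
    return grouped

def reorder_candidates(grouped, threshold=10):
    """Return categories whose total stock fell below `threshold`,
    so the agent can propose a timely replenishment order."""
    return [cat for cat, recs in grouped.items()
            if sum(r["stock"] for r in recs) < threshold]

# Hypothetical records an agent might assemble from retailer EDI messages.
records = [
    {"category": "dairy",   "stock": 4},
    {"category": "dairy",   "stock": 3},
    {"category": "produce", "stock": 25},
]
low = reorder_candidates(group_by_category(records))
```

    In this toy run only the dairy category falls below the threshold, so the agent would flag it for replenishment; this is the kind of timely inventory signal the abstract credits to intelligent agents.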

    Dialogue with computers: dialogue games in action

    With the advent of digital personal assistants for mobile devices, systems that are marketed as engaging in (spoken) dialogue have reached a wider public than ever before. For a student of dialogue, this raises the question of to what extent such systems are genuine dialogue partners. To address this question, this study proposes to use the concept of a dialogue game as an analytical tool. Thus, we reframe the question as asking which dialogue games such systems play. Our analysis, as applied to a number of landmark systems and illustrated with dialogue extracts, leads to a fine-grained classification of such systems. Drawing on this analysis, we propose that the uptake of future generations of more powerful dialogue systems will depend on whether they are self-validating. A self-validating dialogue system can not only talk and do things, but can also discuss the why of what it says and does, and learn from such discussions.