
    Towards a Model of Open and Reliable Cognitive Multiagent Systems: Dealing with Trust and Emotions

    Open multiagent systems are those in which agents can enter or leave the system freely, so any entity with unknown intentions can occupy the environment. In this scenario, trust and reputation mechanisms should be used to choose partners when requesting services or delegating tasks. Trust and reputation models have been proposed in the multiagent systems area as a way to help agents select good partners and thus improve the interactions between them. However, most of the trust and reputation models proposed in the literature consider only their functional aspects, not how they affect the reasoning cycle of the agent. That is, from the agent's perspective, a trust model is usually just a "black box", and agents usually do not take their emotional state into account when making decisions, as humans often do. Like trust, agents' emotions have also been studied with the aim of making the actions and reactions of agents more like those of human beings, in order to imitate human reasoning and decision-making mechanisms. In this paper we analyse some models proposed in the literature and propose a BDI and multi-context based agent model that includes emotional reasoning to guide trust and reputation in open multiagent systems.
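    To make the role of emotions in such a reasoning cycle concrete, here is a minimal sketch, not the authors' BDI/multi-context model: all class names, weights, and thresholds are illustrative assumptions. It shows how an agent's emotional state could modulate a trust-based delegation decision instead of treating the trust model as a black box.
```python
# Illustrative sketch only: a deliberation step in which the emotional state
# modulates how much trust the agent requires before delegating a task.
from dataclasses import dataclass, field

@dataclass
class EmotionalState:
    anxiety: float = 0.2   # 0 (calm) .. 1 (very anxious)
    optimism: float = 0.5  # 0 (pessimistic) .. 1 (optimistic)

@dataclass
class Agent:
    name: str
    emotions: EmotionalState = field(default_factory=EmotionalState)
    experience: dict = field(default_factory=dict)  # direct experience per partner, in [0, 1]
    reputation: dict = field(default_factory=dict)  # third-party reputation per partner, in [0, 1]

    def trust(self, partner: str) -> float:
        """Blend own experience with reputation; optimism shifts weight toward reputation."""
        exp = self.experience.get(partner, 0.5)
        rep = self.reputation.get(partner, 0.5)
        w = 0.5 + 0.3 * self.emotions.optimism
        return (1 - w) * exp + w * rep

    def should_delegate(self, partner: str) -> bool:
        """Anxious agents demand a higher trust level before delegating."""
        threshold = 0.5 + 0.4 * self.emotions.anxiety
        return self.trust(partner) >= threshold

a = Agent("buyer", experience={"seller1": 0.7}, reputation={"seller1": 0.8})
print(a.should_delegate("seller1"))  # True with the default (calm) emotional state
a.emotions.anxiety = 0.9
print(a.should_delegate("seller1"))  # False: anxiety raises the required trust level
```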

    Analysing Trust Issues in Cloud Identity Environments

    Trust acts as a facilitator for decision making in environments where decisions are subject to risk and uncertainty. Security is one of the factors contributing to the trust model that service users require. In this paper we ask: what can be done to improve end users' trust when choosing a cloud identity provider? Security and privacy are central issues in a cloud identity environment, and it is the end user who determines the amount of trust they place in any identity system. This paper is an in-depth literature survey that evaluates identity service delivery in a cloud environment from the perspective of the service user.

    A methodology for maintaining trust in virtual environments

    The increasing interest in carrying out business in virtual environments has resulted in much research and discussion of trust establishment between the entities involved. Researchers have long acknowledged that the success of any transaction or interaction via the virtual medium is determined by the trust level between the trusting agent and the trusted agent. Numerous publications have attempted to address the various challenges of assigning a trust level and building trust in an interacting party. However, building and assigning a value of trust is neither easy nor quick; it involves considerable cost and effort. Hence, the ensuing research challenge is how to maintain the trust that has been established and assigned. Due to the dynamic nature of trust, trust evolution, and the fragility of trust in virtual environments, one of the most pressing challenges facing the research community is how trust can be maintained over time. This thesis is an effort in that direction. Specifically, the objective of this thesis is to propose a methodology for trust maintenance in virtual environments, which we term the "Trust Maintenance Methodology" (TMM). The methodology comprises five frameworks that together achieve the objective of trust maintenance: (a) a framework for third-party agent selection, (b) a framework for formalization and negotiation of service requirements, (c) a framework for proactive continuous performance monitoring, (d) a framework for an incentive mechanism, and (e) a framework for trust re-calibration.
    The framework for third-party agent selection is used for choosing a neutral agent who will supervise the interaction between the two parties; this is the first step of our methodology. The neutral agent is involved throughout the course of the interaction and takes a proactive-corrective role in continuous performance monitoring. Once both parties have chosen a neutral agent, they carry out a formalization and negotiation process of their service requirements using our proposed framework, in order to create an SLA that will guide the interaction. The framework for proactive continuous performance monitoring can then be used to evaluate the performance of both parties in delivering their services according to the SLA; if a performance gap occurs during the course of the transaction, the third-party agent takes action to help both parties close the gap in a timely manner. A salient feature of our continuous performance monitoring is that it is proactive-corrective. Additionally, we design a framework for providing incentives during the interaction to motivate both parties to perform as closely as possible to the terms of the mutual agreement or SLA. By the end of the interaction period, both parties are able to re-assess or re-calibrate their trust level using our proposed framework for trust re-calibration.
    Finally, in order to validate the proposed methodology, we engineered a multi-agent system to simulate the TMM. Numerous case studies are presented to elucidate the workings of the methodology, and we ran several experiments under various testing conditions, including boundary conditions. The results show that our methodology is effective in assisting the parties to maintain their trust level in virtual environments.
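    As an illustration of the monitoring and re-calibration steps described above, here is a minimal sketch, not the TMM itself: the SLA metric, the 10% tolerance, and the smoothing factor are illustrative assumptions. It shows a neutral monitor that flags performance gaps against an SLA during the interaction and blends observed compliance into the trust level at the end.
```python
# Illustrative sketch only: SLA-based gap detection plus end-of-interaction trust re-calibration.

class SLA:
    def __init__(self, targets):
        self.targets = targets  # e.g. {"uptime": 0.99}

class Monitor:
    def __init__(self, sla, tolerance=0.1):
        self.sla = sla
        self.tolerance = tolerance
        self.gaps = []

    def check(self, metric, delivered):
        """Proactive-corrective step: flag a gap as soon as delivery drifts from the SLA target."""
        target = self.sla.targets[metric]
        gap = max(0.0, (target - delivered) / target)
        self.gaps.append(gap)
        if gap > self.tolerance:
            print(f"corrective action needed on {metric}: gap {gap:.0%}")

    def recalibrate(self, prior_trust, alpha=0.5):
        """Blend the prior trust level with the observed SLA compliance over the interaction."""
        compliance = 1.0 - sum(self.gaps) / max(len(self.gaps), 1)
        return (1 - alpha) * prior_trust + alpha * compliance

sla = SLA({"uptime": 0.99})
m = Monitor(sla)
m.check("uptime", 0.95)  # small gap, within tolerance
m.check("uptime", 0.80)  # large gap, triggers a corrective-action message
print(m.recalibrate(prior_trust=0.8))  # trust value adjusted by observed compliance
```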

    Human Factors in Agile Software Development

    Through our four years of experiments on students' Scrum-based agile software development (ASD) process, we have gained a deep understanding of the human factors of agile methodology. We designed an agile project management tool, the HASE collaboration development platform, to support more than 400 students self-organized into 80 teams practising ASD. Based on our experiments, simulations, and analysis, this thesis contributes a series of solutions and insights, including: 1) a Goal Net based method to enhance goal and requirement management for the ASD process; 2) a novel Simple Multi-Agent Real-Time (SMART) approach to enhance intelligent task allocation for the ASD process; 3) a Fuzzy Cognitive Maps (FCMs) based method to enhance emotion and morale management for the ASD process; 4) the first large-scale, in-depth empirical insights on human factors in the ASD process, which have not yet been well studied by existing research; and 5) the first work to identify the ASD process as a human-computation system that exploits human effort to perform tasks that computers are not good at solving, while computers in turn assist human decision making in the ASD process.
    Comment: Book Draft
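    As a rough illustration of the Fuzzy Cognitive Maps idea mentioned in item 3, the sketch below is not the thesis' model: the concepts, weights, and update rule variant are illustrative assumptions. It iterates a small signed influence matrix over team-level concepts such as workload, morale, and productivity.
```python
# Illustrative FCM sketch: concepts influence each other through a signed weight
# matrix, and the state vector is iterated with a squashing function.
import numpy as np

concepts = ["workload", "morale", "productivity"]
# W[i, j] = influence of concept j on concept i (assumed values)
W = np.array([
    [0.0,  0.0, 0.0],   # workload: treated as an external driver here
    [-0.6, 0.0, 0.3],   # morale: pushed down by workload, up by productivity
    [0.2,  0.7, 0.0],   # productivity: helped by morale, slightly by workload
])

def step(state, W):
    """One FCM update: x <- sigmoid(x + W @ x), keeping concept values in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(state + W @ state)))

state = np.array([0.9, 0.5, 0.5])  # a high-workload sprint
for _ in range(5):
    state = step(state, W)
print(dict(zip(concepts, state.round(2))))  # settled concept activations
```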

    Enhancing trustability in MMOGs environments

    Massively Multiplayer Online Games (MMOGs; e.g., World of Warcraft), virtual worlds (VWs; e.g., Second Life), and social networks (e.g., Facebook) strongly demand more autonomous security and trust mechanisms, in a way similar to what humans do in real life. As is well known, this is a difficult matter, because trusting humans and organizations depends on the perception and experience of each individual, which is difficult to quantify or measure. In fact, these societal environments lack trust mechanisms similar to those involved in human-to-human interactions. Besides, interactions mediated by computing devices are constantly evolving, requiring trust mechanisms that keep pace with these developments and assess risk situations. In VWs/MMOGs, it is widely recognized that users develop trust relationships from their in-world interactions with others. However, these trust relationships end up not being represented in the data structures (or databases) of such virtual worlds, though they sometimes appear associated with reputation and recommendation systems. In addition, as far as we know, the user is not provided with a personal trust tool to sustain his/her decision making while interacting with other users in the virtual or game world. In order to solve this problem, as well as those mentioned above, we propose a formal representation of these personal trust relationships, based on avatar-avatar interactions. The leading idea is to provide each avatar-impersonated player with a personal trust tool that follows a distributed trust model, i.e., the trust data are distributed over the societal network of a given VW/MMOG. Representing, manipulating, and inferring trust from the user/player point of view is certainly a grand challenge. When someone meets an unknown individual, the question is "Can I trust him/her or not?". Clearly, this requires the user to have access to a representation of trust about others but, unless we are using an open-source VW/MMOG, it is difficult, not to say unfeasible, to get access to such data. Even in an open-source system, a number of users may refuse to share information about their friends, acquaintances, or others. Putting together their own data and data gathered from others, the avatar-impersonated player should be able to arrive at a trust result about the current trustee. For the trust assessment method used in this thesis, we use subjective logic operators and graph search algorithms to undertake such trust inference about the trustee. The proposed trust inference system has been validated using a number of OpenSimulator (opensimulator.org) scenarios, which showed an increase in accuracy when evaluating the trustability of avatars. Summing up, our proposal aims to introduce a trust theory for virtual worlds, with its trust assessment metrics (e.g., subjective logic) and trust discovery methods (e.g., graph search methods), on an individual basis, rather than relying on the usual centralized reputation systems. In particular, and unlike other trust discovery methods, our methods run at interactive rates.
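    To make the trust assessment concrete, the following sketch implements two standard subjective logic operators of the kind the thesis relies on (Jøsang's trust discounting and cumulative fusion); the toy avatar network and opinion values are illustrative assumptions, not the thesis' data or code.
```python
# Illustrative sketch: opinions (belief, disbelief, uncertainty) are discounted
# along referral paths found in the avatar network and fused at the end.
from dataclasses import dataclass

@dataclass
class Opinion:
    b: float  # belief
    d: float  # disbelief
    u: float  # uncertainty (b + d + u = 1)

def discount(ab: Opinion, bc: Opinion) -> Opinion:
    """A's opinion of C via referrer B (subjective logic trust-discounting operator)."""
    return Opinion(ab.b * bc.b, ab.b * bc.d, ab.d + ab.u + ab.b * bc.u)

def fuse(o1: Opinion, o2: Opinion) -> Opinion:
    """Cumulative fusion of two independent opinions about the same trustee."""
    k = o1.u + o2.u - o1.u * o2.u
    return Opinion((o1.b * o2.u + o2.b * o1.u) / k,
                   (o1.d * o2.u + o2.d * o1.u) / k,
                   (o1.u * o2.u) / k)

# Two referral paths from avatar A to the unknown trustee T: A -> B -> T and A -> C -> T.
path1 = discount(Opinion(0.8, 0.1, 0.1), Opinion(0.7, 0.2, 0.1))
path2 = discount(Opinion(0.6, 0.2, 0.2), Opinion(0.9, 0.0, 0.1))
print(fuse(path1, path2))  # A's derived opinion about T
```
    In a full system, the paths themselves would be found by graph search over the avatar network before the discounting and fusion steps are applied.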

    Three Essays on Law Enforcement and Emergency Response Information Sharing and Collaboration: An Insider Perspective

    This dissertation identifies what may be done to overcome barriers to information sharing among federal, tribal, state, and local law enforcement agencies and emergency responders. Social, technical, and policy factors related to information sharing and collaboration in the law enforcement and emergency response communities are examined, with the aim of improving information sharing and cooperation in this area. Policing in most societies exists in a state of dynamic tension between forces that tend to isolate it and those that tend to integrate its functioning with other social structures (Clark, 1965). Critical incidents and crimes today cross jurisdictions and involve multiple stakeholders and levels. Law enforcement and emergency response agencies at the federal, tribal, state, and local levels, including private sector entities, gather information and resources but do not effectively share them with each other. Despite mandates to improve information sharing and cooperation, gaps remain, perhaps because there is no clear understanding of what the barriers to information sharing are. Information sharing is examined using a multi-method, primarily qualitative, approach. A model for information sharing is presented that identifies social, technical, and policy factors as influencers. Facets of General Systems Theory, Socio-technical Theory, and Stakeholder Theory (among others) are considered in this context. Information sharing is the subject of the first essay of the dissertation: a theoretical piece arguing for a conceptual framework consisting of social, technical, and policy factors. Social, technology, and policy factors are investigated in the second essay, which introduces a new transformative technology, edgeware, that allows for unprecedented connectivity among devices; social and policy implications for crisis response are examined in light of reduced technological barriers to sharing resources. Human and other factors relevant to information sharing and collaboration are further examined through a case study of the Central New York Interoperable Communications Consortium (CNYICC) Network, a five-county collaboration involving law enforcement, public safety, government, and non-government participants. The three essays have a common focus vis-à-vis information sharing and collaboration in law enforcement and emergency response. The propositions examined are: (P1) information sharing is affected by social, technical, and policy factors, and this conceptualization frames the problem of information sharing in a way that can be commonly understood by government and non-government stakeholders; (P2) social and policy factors influence information sharing more than technical factors (assuming it is physically possible to connect and/or share); and (P3) social factors play the greatest role in creating and sustaining information sharing relationships. The findings provide a greater understanding of the forces that impact public safety agencies as they consider information sharing and will, it is hoped, lead to identifiable solutions to the problem from a new perspective.

    A Software Product Line Approach to Ontology-based Recommendations in E-Tourism Systems

    This study tackles two concerns of developers of Tourism Information Systems (TIS). The first is the need for more dependable recommendation services, given the intangible nature of the tourism product, where it is impossible for customers to physically evaluate the services on offer prior to practical experience. The second is the need to manage dynamic user requirements in tourism, driven by the advent of new technologies such as the semantic web and mobile computing, so that e-tourism systems (TIS) can evolve proactively with emerging user needs at minimal time and development cost, without performance trade-offs. TIS, however, have very predictable characteristics and are functionally identical in most cases, with minimal variations, which makes them attractive for software product line development. The Software Product Line Engineering (SPLE) paradigm enables the strategic and systematic reuse of common core assets in the development of a family of software products that share some degree of commonality, in order to realise a significant improvement in the cost and time of development. Hence, this thesis introduces a novel and systematic approach, called Product Line for Ontology-based Tourism Recommendation (PLONTOREC), focused on the creation of variants of TIS products within a product line. PLONTOREC tackles the aforementioned problems in an engineering-like way by hybridizing concepts from ontology engineering and software product line engineering. The approach is a systematic process model consisting of product line management, ontology engineering, domain engineering, and application engineering. The unique feature of PLONTOREC is that it allows common TIS product requirements to be defined, commonalities and differences of content in TIS product variants to be planned and limited in advance using a conceptual model, and variant TIS products to be created according to a construction specification. We demonstrate the novelty of this approach using a case study of product line development of e-tourism systems for three countries in the West African region.
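    As a rough illustration of the product line idea, and not of PLONTOREC itself, the sketch below derives country-specific TIS variants from a fixed common core plus validated optional features; all feature names and the three example configurations are assumptions for illustration.
```python
# Illustrative sketch: common core assets plus per-variant variation points,
# with each product derived from a small construction specification.

CORE_FEATURES = {"destination_catalogue", "ontology_based_recommender", "booking"}
OPTIONAL_FEATURES = {"mobile_client", "local_payment_gateway", "multilingual_ui"}

def derive_variant(name, selected):
    """Build one product variant: core features are always included, options are validated."""
    invalid = selected - OPTIONAL_FEATURES
    if invalid:
        raise ValueError(f"{name}: unknown variation points {invalid}")
    return {"product": name, "features": sorted(CORE_FEATURES | selected)}

# Three country-specific variants sharing the same core assets (placeholder names).
for country, options in {
    "TIS-CountryA": {"mobile_client"},
    "TIS-CountryB": {"mobile_client", "local_payment_gateway"},
    "TIS-CountryC": {"multilingual_ui"},
}.items():
    print(derive_variant(country, options))
```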

    The Impacts of the Relation between Users and Software Agents in Delegated Negotiation: A Control Perspective

    Software agents are increasingly being applied to e-commerce activities, including commercial negotiations. Agents can be used to conduct negotiation tasks on behalf of users. When users delegate negotiation tasks to agents, information technology plays a role in determining social affairs: the locus of control over social affairs partially shifts from human participants to technology. When this negotiation approach is adopted, an important question arises: how will users treat and assess their agents when they delegate negotiations to them? It is challenging to develop agents that are able to connect with users in meaningful ways. This thesis argues that users will not treat their negotiating agents in the same manner as they treat classical computer-enabled tools or aids, because of the agents' autonomy. When assessing agents, users will be heavily oriented towards their relationships with the agents. Drawing on several streams of literature, this thesis proposes that the notion of control helps to characterize the relationships between users and agents. Users' experienced control will influence their assessment and adoption of their negotiating agents, and experienced control can be traced to instrumental control, a set of means that empowers the interaction between users and agents. An experiment was conducted to test these propositions, and its results provide support for them.

    Multi-Agent Systems

    A multi-agent system (MAS) is a system composed of multiple interacting intelligent agents. Multi-agent systems can be used to solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Agent systems are open and extensible systems that allow for the deployment of autonomous and proactive software components. Multi-agent systems have been developed and applied in several application domains.
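    A minimal sketch of the idea, under the assumption of a toy message-passing task (all names are illustrative): several autonomous agents, each holding local state, exchange messages in rounds and converge on a result that no single agent could compute from its own data alone.
```python
# Illustrative sketch: a round-based interaction protocol among simple agents.

class Agent:
    def __init__(self, name, value):
        self.name = name
        self.value = value   # local knowledge only this agent holds
        self.inbox = []

    def send(self, other):
        other.inbox.append(self.value)

    def step(self):
        """Move the local value toward the average of the values received this round."""
        if self.inbox:
            self.value = (self.value + sum(self.inbox) / len(self.inbox)) / 2
            self.inbox.clear()

agents = [Agent("a1", 1.0), Agent("a2", 5.0), Agent("a3", 9.0)]
for _ in range(10):                 # ten interaction rounds
    for sender in agents:
        for receiver in agents:
            if receiver is not sender:
                sender.send(receiver)
    for agent in agents:
        agent.step()
print([round(a.value, 2) for a in agents])  # agents converge toward a shared consensus value
```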