
    A BDI agent architecture for the GAMA modeling and simulation platform

    With the increase of computing power and the development of user-friendly multi-agent simulation frameworks, social simulations have become increasingly realistic. However, most agent architectures in these simulations use simple reactive models. Indeed, cognitive agent architectures face two main obstacles: their complexity for the field-expert modeler, and their computational cost. In this paper, we propose a new cognitive agent architecture based on the BDI (Belief-Desire-Intention) paradigm, integrated into the GAMA modeling platform and its GAML modeling language. This architecture was designed to be simple to use for modelers, flexible enough to manage complex behaviors, and to have a low computational cost. An experiment carried out with different profiles of end-users shows that the architecture is indeed usable even by modelers who have little knowledge of programming and Artificial Intelligence.
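
    For illustration only, the sketch below shows a minimal BDI deliberation cycle in Python. It is a toy rendering of the general Belief-Desire-Intention loop, not the GAMA/GAML architecture described above; all class and plan names are hypothetical.

```python
# Minimal illustration of a BDI deliberation cycle (hypothetical names,
# not the actual GAMA/GAML implementation described in the paper).

class BDIAgent:
    def __init__(self, plan_library):
        self.beliefs = set()               # facts the agent currently holds
        self.desires = set()               # goals it would like to achieve
        self.intention = None              # the goal it is committed to
        self.plan_library = plan_library   # maps a desire to a list of actions

    def perceive(self, percepts):
        """Update beliefs from new percepts."""
        self.beliefs |= set(percepts)

    def deliberate(self):
        """Commit to one achievable desire (here: the first one with a plan)."""
        for desire in self.desires:
            if desire in self.plan_library:
                self.intention = desire
                return

    def act(self):
        """Return the plan attached to the current intention."""
        if self.intention is None:
            return []
        return self.plan_library[self.intention]


# Usage: an agent that wants to drink and knows a plan for it.
agent = BDIAgent(plan_library={"drink": ["go_to_well", "draw_water"]})
agent.desires.add("drink")
agent.perceive({"thirsty"})
agent.deliberate()
print(agent.act())  # ['go_to_well', 'draw_water']
```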

    User evaluation of a market-based recommender system

    Recommender systems have been developed for a wide variety of applications (ranging from books, to holidays, to web pages). These systems have used a number of different approaches, since no one technique is best for all users in all situations. Given this, we believe that to be effective, systems should incorporate a wide variety of such techniques, and then some form of overarching framework should be put in place to coordinate them so that only the best recommendations (from whatever source) are presented to the user. To this end, in our previous work, we detailed a market-based approach in which various recommender agents competed with one another to present their recommendations to the user. We showed through theoretical analysis and empirical evaluation with simulated users that an appropriately designed marketplace should be able to provide effective coordination. Building on this, we now report on the development of this multi-agent system and its evaluation with real users. Specifically, we show that our system is capable of consistently giving high-quality recommendations, that the best recommendations that could be put forward are actually put forward, and that the combination of recommenders performs better than any constituent recommender.
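
    To make the coordination idea concrete, here is a toy Python sketch of recommender agents bidding for display slots in a marketplace. The bidding policy, budgets and payment rule are invented simplifications, not the auction design evaluated in the paper.

```python
# Toy sketch of market-based coordination of recommender agents:
# each agent bids to place its recommendation and the marketplace shows
# only the top-k bids.  (Hypothetical simplification of the paper's mechanism.)

import random

class RecommenderAgent:
    def __init__(self, name, budget=10.0):
        self.name = name
        self.budget = budget

    def bid(self, item):
        # Bid a random fraction of the remaining budget (placeholder policy).
        return min(self.budget, random.uniform(0, 1) * self.budget)

def marketplace(agents, items, k=2):
    bids = [(agent.bid(item), agent, item) for agent, item in zip(agents, items)]
    bids.sort(key=lambda b: b[0], reverse=True)
    shown = bids[:k]
    for price, agent, item in shown:
        agent.budget -= price              # winners pay their bid
    return [(agent.name, item) for _, agent, item in shown]

agents = [RecommenderAgent("content"), RecommenderAgent("collaborative"),
          RecommenderAgent("popularity")]
items = ["book A", "book B", "book C"]
print(marketplace(agents, items))
```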

    Towards a ubiquitous end-user programming system for smart spaces

    This article presents a rule-based agent mechanism as the kernel of a ubiquitous, end-user, UI-independent programming system. The underlying goal of our work is to allow end-users to control and program their environments in a uniform, application-independent way. The heterogeneity of environments, users and programming skills, as well as the coexistence of different users and domains of automation in the same environment, are some of the main challenges analyzed. To this end, we present our system and describe some of the real environments, user studies and experiences we have had during the development process. This work has been partially funded by the following projects: HADA (Ministerio de Ciencia y Educación de España, TIN2007-64718), Vesta (Ministerio de Industria, Turismo y Comercio de España, TSI-020100-2009-828) and eMadrid (Comunidad de Madrid, S2009/TIC-1650).
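
    As a hypothetical illustration of the kind of end-user rule such a mechanism handles, the Python sketch below implements a minimal event-condition-action rule engine. The rule format and device names are invented and do not reflect the system's actual rule language.

```python
# Minimal event-condition-action (ECA) rule engine of the kind an end-user
# programming system for smart spaces might build on.  Rule format and
# device names are illustrative only.

rules = [
    {"event": "motion_detected",
     "condition": lambda env: env["lux"] < 50,
     "action": lambda env: env.update(light="on")},
    {"event": "door_opened",
     "condition": lambda env: env["occupants"] == 0,
     "action": lambda env: env.update(alarm="ringing")},
]

def handle_event(event, env):
    """Fire every rule whose event matches and whose condition holds."""
    for rule in rules:
        if rule["event"] == event and rule["condition"](env):
            rule["action"](env)
    return env

env = {"lux": 20, "occupants": 0, "light": "off", "alarm": "silent"}
print(handle_event("motion_detected", env))  # light switched on
```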

    CCCI metrics for the measurement of quality of e-service

    The growing development of web-based trust and reputation systems in the 21st century will have a powerful social and economic impact on all business entities, and will make transparent quality assessment and customer assurance realities in distributed, web-based, service-oriented environments. The growth in web-based trust and reputation systems will be the foundation for web intelligence in the future. Trust and reputation systems help capture business intelligence by establishing customer relationships, learning consumer behaviour, capturing market reaction to products and services, disseminating customer feedback, buyers' opinions and end-user recommendations, and revealing dishonest services, unfair trading, biased assessment, discriminatory actions, fraudulent behaviours, and untrue advertising. The continuing development of these technologies will help improve professional business behaviour, sales, and the reputation of sellers, providers, products and services. In this paper, we present a new methodology, known as CCCI (Correlation, Commitment, Clarity, and Influence), for measuring trustworthiness in trust and reputation systems. The methodology is based on determining the correlation between the services originally committed to and the services actually delivered by a Trusted Agent in a business interaction over service-oriented networks, in order to determine the trustworthiness of the Trusted Agent.
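
    The abstract does not give the CCCI formulas, so the Python sketch below only illustrates the general idea of scoring commitment versus delivery per criterion; the weighting scheme and the 1-5 scale are assumptions, not the paper's metrics.

```python
# Illustrative trustworthiness score comparing committed versus delivered
# service criteria.  The actual CCCI formulas are defined in the paper;
# the weights and 1-5 scale used here are assumptions.

def trustworthiness(criteria):
    """criteria: list of (committed, delivered, clarity, influence) tuples;
    committed/delivered/clarity on a 1-5 scale, influence a weight in [0, 1]."""
    corr_sum, max_sum = 0.0, 0.0
    for committed, delivered, clarity, influence in criteria:
        # How close delivery came to the commitment, weighted by how clearly
        # the criterion was specified and how much it matters.
        corr_sum += min(delivered / committed, 1.0) * clarity * influence
        max_sum += clarity * influence
    return 5.0 * corr_sum / max_sum if max_sum else 0.0

# One interaction assessed on three criteria.
print(round(trustworthiness([(5, 4, 5, 1.0), (3, 3, 4, 0.5), (4, 2, 3, 0.8)]), 2))
```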

    Modeling intelligent agents for web-based information gathering

    The recent emergence of intelligent agent technology and advances in information gathering have been important steps forward in efficiently managing and using the vast amount of information now available on the Web to make informed decisions. There are, however, still many problems that need to be overcome in the information gathering research arena to enable the delivery of relevant information required by end users. Good decisions cannot be made without sufficient, timely, and correct information. Traditionally it is said that knowledge is power; nowadays, however, sufficient, timely, and correct information is power. Gathering relevant information to meet user information needs is therefore the crucial step for making good decisions. The ideal goal of information gathering is to obtain only the information that users need (no more and no less). However, the volume of information available, the diverse formats of information, the uncertainties of information, and the distributed locations of information (e.g. the World Wide Web) hinder the process of gathering the right information to meet user needs. Specifically, two fundamental issues regarding the efficiency of information gathering are mismatch and overload. Mismatch means that some information that meets user needs has not been gathered (i.e., has been missed), whereas overload means that some gathered information is not what users need. Traditional information retrieval has been well developed over the past twenty years, and the introduction of the Web has changed people's perceptions of information retrieval. Usually, the task of information retrieval is considered to be leading the user to those documents that are relevant to his/her information needs; a related function is to filter out irrelevant documents (so-called information filtering). Research into traditional information retrieval has provided many retrieval models and techniques to represent documents and queries. Nowadays, information is becoming highly distributed and increasingly difficult to gather, and user information needs contain many uncertainties. These observations motivate the need for research in agent-based information gathering, and agent-based information systems have emerged in response. In these kinds of systems, intelligent agents obtain commitments from their users and act on the users' behalf to gather the required information. They can retrieve relevant information from highly distributed, uncertain environments because of their intelligence, autonomy and distribution. Current research on agent-based information gathering systems is divided into single-agent gathering systems and multi-agent gathering systems. In both research areas there are still open problems to be solved so that agent-based information gathering systems can retrieve uncertain information more effectively from highly distributed environments. The aim of this thesis is to research a theoretical framework for intelligent agents to gather information from the Web. This research integrates the areas of information retrieval and intelligent agents. The specific research areas in this thesis are the development of an information filtering model for single-agent systems, and the development of a dynamic belief model for information fusion for multi-agent systems.
The research results are also supported by the construction of real information gathering agents (e.g., a Job Agent) for the Internet that help users gather useful information stored in Web sites. In such a framework, information gathering agents have the ability to describe (or learn) user information needs, and act like users to retrieve, filter, and/or fuse the information. A rough-set-based information filtering model is developed to address the problem of overload. The new approach allows users to describe their information needs on user concept spaces rather than on document spaces, and it views a user information need as a rough set over the document space. Rough set decision theory is used to classify new documents into three regions: a positive region, a boundary region, and a negative region. Two experiments are presented to verify this model, and they show that the rough-set-based model provides an efficient approach to the overload problem. In this research, a dynamic belief model for information fusion in multi-agent environments is also developed. This model has polynomial time complexity, and it has been proven that the fusion results are belief (mass) functions. Using this model, a collection fusion algorithm for information gathering agents is presented. The difficult case for this research is where collections may be used by more than one agent; the algorithm uses cooperation between agents to provide a solution to this problem in distributed information retrieval systems. This thesis presents solutions to the theoretical problems in agent-based information gathering systems, including information filtering models, agent belief modeling, and collection fusion. It also presents solutions to some of the technical problems in agent-based information systems, such as document classification, the architecture of agent-based information gathering systems, and decision making in multi-agent environments. Such information gathering agents can gather relevant information from highly distributed, uncertain environments.
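
    Since the abstract names the three rough-set decision regions, the Python sketch below shows one simple way such a three-way split over documents could look. The scoring function and thresholds are placeholders, not the thesis model.

```python
# Toy three-region document classification in the spirit of rough-set based
# filtering: positive (accept), boundary (undecided), negative (reject).
# The scoring function and thresholds are placeholders, not the thesis model.

def score(document_terms, concept_terms):
    """Fraction of the user's concept terms that the document covers."""
    if not concept_terms:
        return 0.0
    return len(document_terms & concept_terms) / len(concept_terms)

def classify(document_terms, concept_terms, accept_at=0.7, reject_below=0.3):
    s = score(document_terms, concept_terms)
    if s >= accept_at:
        return "positive"    # clearly relevant
    if s >= reject_below:
        return "boundary"    # needs further evidence
    return "negative"        # clearly irrelevant

concept = {"agent", "information", "fusion", "belief"}
print(classify({"agent", "information", "fusion"}, concept))   # positive
print(classify({"agent", "belief"}, concept))                  # boundary
print(classify({"football", "league"}, concept))               # negative
```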

    Development of an autonomous distributed multiagent monitoring system for the automatic classification of end users

    The purpose of this study is to investigate the feasibility of constructing a software multi-agent based monitoring and classification system and utilizing it to provide automated and accurate classification of end users developing applications in the spreadsheet domain. The result is the Multi-Agent Classification System (MACS). Microsoft's .NET Windows Service based agents were utilized to develop the Monitoring Agents of MACS. These agents function autonomously to provide continuous and periodic monitoring of spreadsheet workbooks by content. .NET Windows Communication Foundation (WCF) Services technology was used together with the Service Oriented Architecture (SOA) approach to distribute the agents over the World Wide Web in order to satisfy the monitoring and classification of multiple developers. The Prometheus agent-oriented design methodology and its accompanying Prometheus Design Tool (PDT) were employed for specifying and designing the agents of MACS, and Visual Studio .NET 2008 for creating the agency using the Visual C# programming language. MACS was evaluated against classification criteria from the literature with the support of real-time data collected from a target group of Excel spreadsheet developers over a network. The Monitoring Agents were configured to execute automatically, without any user intervention, as Windows service processes in the .NET web server application of the system. These distributed agents listen to and read the contents of Excel spreadsheet development activities in terms of file and author properties, functions and formulas used, and Visual Basic for Applications (VBA) macro code constructs. Data gathered by the Monitoring Agents from various resources over a period of time was collected and filtered by a Database Updater Agent residing in the .NET client application of the system. This agent then transfers and stores the data in an Oracle server database via Oracle stored procedures for further processing that leads to the classification of the end-user developers. The Oracle data mining classification algorithms Naive Bayes, Adaptive Naive Bayes, Decision Trees, and Support Vector Machine were utilized to analyse the results of the data gathering process in order to automate the classification of Excel spreadsheet developers. The accuracy of the predictions achieved by the models was compared, and the comparison showed that the Naive Bayes classifier achieved the best results, with an accuracy of 0.978. Therefore, MACS can be utilized to provide a multi-agent based automated classification solution for spreadsheet developers with a high degree of accuracy.
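
    Purely as an illustration of the classifier-comparison step, the Python sketch below compares a few classifiers on synthetic data using scikit-learn, which is a stand-in; the study itself used Oracle Data Mining models on the collected spreadsheet data.

```python
# Stand-in illustration of the classifier-comparison step using scikit-learn
# on synthetic data; the study itself used Oracle Data Mining models
# (Naive Bayes, Adaptive Naive Bayes, Decision Trees, SVM) on spreadsheet data.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=10, n_classes=3,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("Naive Bayes", GaussianNB()),
                    ("Decision Tree", DecisionTreeClassifier(random_state=0)),
                    ("SVM", SVC())]:
    model.fit(X_train, y_train)
    print(name, round(accuracy_score(y_test, model.predict(X_test)), 3))
```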

    Close the Deal and Deliver the System: Sales Training for IS Developers

    The communication gap between IS developers and end users poses a key problem in IS development efforts. While a variety of tactics have been suggested to bring developers and users together to enhance communication and create a positive impact on system development efforts, the gap still remains. This paper proposes the implementation of an IS developer training program based on the fundamental principles of sales and sales training programs. The sales model is compared to models of change that closely represent the IS implementation process. Parallels are drawn between the roles of the IS developer and the sales agent, and the roles of the user and the sales customer. The sales training program is adapted to the environment of the IS developer, and recommendations are offered for the implementation of the program. Potential impacts of the training program for IS development efforts are also discussed.

    Semi-autonomous, context-aware, agent using behaviour modelling and reputation systems to authorize data operation in the Internet of Things

    In this paper, we address the issue of gathering the "informed consent" of an end user in the Internet of Things. We start by evaluating the legal importance of, and some of the problems linked with, this notion of informed consent in the specific context of the Internet of Things. From this assessment, we propose an approach based on a semi-autonomous, rule-based agent that centralizes all authorization decisions on the personal data of a user and that is able to take decisions on the user's behalf. We complete this initial agent by integrating context-awareness, behavior modeling and a community-based reputation system into the agent's algorithm. The resulting system is a "smart" application, the "privacy butler", that can handle data operations on behalf of the end user while keeping the user in control. We finally discuss some of the potential problems and improvements of the system. Comment: This work is currently supported by the BUTLER Project, co-financed under the 7th Framework Programme of the European Commission. Published in Internet of Things (WF-IoT), 2014 IEEE World Forum, 6-8 March 2014, Seoul, pp. 411-416, DOI: 10.1109/WF-IoT.2014.6803201, INSPEC: 1425565.
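
    A minimal Python sketch of how such a "privacy butler" might combine user-defined rules with a reputation threshold before authorizing a data operation is given below. The rule format, reputation scale and fallback behaviour are assumptions, not the algorithm from the paper.

```python
# Sketch of a semi-autonomous authorization agent ("privacy butler"):
# it applies the user's explicit rules first, then falls back on a reputation
# threshold, and defers to the user when it cannot decide.
# Rule format, reputation scale and thresholds are illustrative assumptions.

def authorize(request, rules, reputation, min_reputation=0.7):
    """request: dict with 'requester', 'data_type', 'purpose'."""
    for rule in rules:
        if (rule["data_type"] == request["data_type"]
                and rule["purpose"] == request["purpose"]):
            return rule["decision"]                 # explicit user rule wins
    if reputation.get(request["requester"], 0.0) >= min_reputation:
        return "allow"                              # trusted enough to allow
    return "ask_user"                               # defer to the end user

rules = [{"data_type": "location", "purpose": "advertising", "decision": "deny"}]
reputation = {"smart-thermostat.example": 0.9, "unknown-app": 0.2}

print(authorize({"requester": "smart-thermostat.example",
                 "data_type": "temperature", "purpose": "comfort"},
                rules, reputation))                              # allow
print(authorize({"requester": "unknown-app", "data_type": "location",
                 "purpose": "advertising"}, rules, reputation))  # deny
```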

    PC-BASED AND WEB-BASED SYSTEM FOR CONTROLLING XY ROBOT

    This dissertation presents the development of the PC-based system, the Web-based system and the Human Machine Interface for the project 'PC-Based and Web-Based System for Controlling XY Robot'. The objective of this project is to create systems that are capable of controlling an XY robot through a Human Machine Interface (HMI). To that end, the project is divided into three sections: the development of the PC-based system, the development of the Web-based system and the development of the Human Machine Interface. The PC-based system aims to control the robot through the HMI over a serial link, while the Web-based system aims to control the robot through the HMI over an Ethernet connection. For the Web-based system, the HMI is made into an interactive web page: users can access the server's web site to control the robot. The devices used in this project are mainly OMRON factory automation devices, so the software used comes from OMRON as well. The software essential for this project is: Visual Basic 6.0, ONC ActiveX Control, FA Components, Data Agent and SYSMAC Compolet for HMI and web application development; FinsGateway for device communication; and CX-Programmer for robot system programming. Both systems are developed by designing their respective system architectures, followed by device settings and configuration. The systems are deemed successful when communication is established between the end user and the PLC, which contains the program of the robot.
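
    As a hedged illustration of the PC-based half (the HMI sending a move command to the controller over a serial link), the sketch below uses Python with pyserial. The port name, baud rate and command string are invented placeholders; the real system was built with Visual Basic 6.0 and OMRON components.

```python
# Illustrative only: sending an X/Y move command from a PC "HMI" to a robot
# controller over a serial link using pyserial.  The port, baud rate and
# command format below are invented placeholders, not the OMRON protocol
# used in the dissertation.

import serial  # pip install pyserial

def move_xy(port, x, y):
    with serial.Serial(port, baudrate=9600, timeout=1) as link:
        command = f"MOVE {x} {y}\r\n".encode("ascii")   # hypothetical protocol
        link.write(command)
        return link.readline().decode("ascii").strip()  # controller's reply

if __name__ == "__main__":
    print(move_xy("COM3", 120, 45))
```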
