
    Multiagent negotiation for fair and unbiased resource allocation

    This paper proposes a novel solution to the n-agent cake-cutting (resource allocation) problem. We propose a negotiation protocol for dividing a resource among n agents and then provide an algorithm for allotting portions of the resource. We prove that this protocol enables a fair distribution of the resource among the n agents. The protocol lets agents choose portions based on their internal utility functions, which they do not have to reveal. In addition to being fair, the protocol has desirable features such as being unbiased and verifiable while allocating resources. In the case where the resource is two-dimensional (a circular cake) and uniform, it is shown that each agent can get close to 1/n of the whole resource.
    Keywords: Utility theory; Utility function; Bargaining; Artificial intelligence; Resource allocation; Multiagent system
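The abstract does not reveal the paper's protocol, but the flavor of proportional cake cutting can be illustrated with the classic Banach-Knaster "last diminisher" procedure, which likewise guarantees each of n agents a piece it privately values at 1/n without agents revealing their utility functions. This is a sketch of that classic protocol, not the paper's algorithm; the utility densities and numerical tolerances are illustrative assumptions.

```python
def make_agent(density):
    """An agent with a private utility density over the cake [0, 1]."""
    def value(a, b, steps=1000):
        # Numerically integrate the density over [a, b] (midpoint rule).
        if b <= a:
            return 0.0
        h = (b - a) / steps
        return sum(density(a + (i + 0.5) * h) for i in range(steps)) * h
    return value

def cut_for_value(value, a, target, hi=1.0):
    """Find x >= a such that value(a, x) ~= target, by bisection."""
    lo = a
    for _ in range(60):
        mid = (lo + hi) / 2
        if value(a, mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def last_diminisher(agents):
    """Each remaining agent may trim the current piece down to what it
    privately considers exactly 1/n of the whole cake; whoever trims
    last takes the piece and drops out."""
    n = len(agents)
    start, allocation = 0.0, {}
    remaining = list(range(n))
    while len(remaining) > 1:
        holder, right = remaining[0], 1.0
        for i in remaining:
            share = agents[i](0.0, 1.0) / n         # this agent's 1/n of the whole
            if agents[i](start, right) > share:     # piece looks too big: trim it
                right = cut_for_value(agents[i], start, share, right)
                holder = i
        allocation[holder] = (start, right)
        remaining.remove(holder)
        start = right
    allocation[remaining[0]] = (start, 1.0)         # last agent takes the remainder
    return allocation
```

With three agents holding uniform densities, each ends up with a contiguous piece of length close to 1/3, and no agent ever learns another's valuation.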

    Agent Teams: Building and Implementing Software

    Agents will become fundamental building blocks for general-purpose Internet-based software. The software may not display any explicitly agent-like characteristics, but it will exhibit the benefits of tolerance to errors, ease of maintenance, adaptability to change, and speed of construction that agents provide. Moreover, an agent-based approach to software development can lead to new types of software solutions that might not otherwise be obvious. The author considers how an approach based on teams of active, cooperative, and persistent software components, that is agents, shows special promise in enabling the rapid construction of robust and reusable software

    Agent Societies: Magnitude and Duration

    If you only need agents to search the Web for cheap CDs, scalability is not an issue. The Web can support numerous agents if each acts independently. In short order, however, billions of embedded agents that sense their environment and interact with us and other agents will fill our world, making the human environment friendlier and more efficient. These agents will need not only scalable infrastructures and communication services, but also scalable social services encompassing ethics and laws. Research projects are under way around the world to develop and deploy such services. The author takes a look at the critical relationship between scalability and intelligent agents

    The Sentient Web

    In a startling revelation, a team of university scientists has reported that a network of computers has become conscious and sentient, and is beginning to assume control of online information systems. In spite of the ominous tone typically chosen for dramatic effect, a sentient Web would be more helpful and much easier for people to use. An agent is an active, persistent software component that perceives, reasons, and acts, and whose actions include communication. Agents inherently take intentional actions based on sensory information and memories of past actions. All agents have the necessary communication ability, but they do not necessarily possess introspective capabilities or awareness of place and time. Four things characterize a conscious, sentient Web: (1) knowing, (2) having intentions, (3) introspecting, and (4) experiencing phenomena. For the first two, it is easy to show that most Web entities possess and demonstrate the use of knowledge, and other entities, including Web services, exhibit intentions. The last two, introspection and phenomenal experience, are facets of awareness and are not as obvious in current Web systems, so we consider them more thoroughly and conclude with future prospects

    Networking Embedded Agents

    Most of us will soon be managing an intranet in our homes, though we might not realize it. We might also be surprised at the devices that will be networked together. Just about every electrical device now contains one or more microprocessors. Designers typically find this a cost-effective way to provide device functionality, even when much of a processor's power is unnecessary or unused. For example, my coffee maker contains a processor, even though the appliance needn't be very smart and wastes most of its CPU cycles. Nevertheless, it is cheaper to include a general-purpose microprocessor than to incorporate custom logic devices. My kitchen, in fact, has at least six processors, in such appliances as the microwave, the dishwasher, and the toaster. These household devices are diverse and use their processors in quite different ways, but in the future they will share one important characteristic: each will contain an agent. The agent will provide an intelligent interface to the device and, most importantly, will communicate with other devices in my home. At present, my devices are not very agent-like, and it is not useful to think, “My toaster knows when the toast is done” or “My coffee pot knows when the coffee is ready.” However, once the devices are interconnected so that they can communicate, they can arrange to have my coffee and toast ready at approximately the same time. Then I may think of them in anthropomorphic terms. For example, when I shut off my alarm clock, I can imagine it telling my kitchen devices to prepare my breakfast. When devices talk to each other, they begin to seem more like agents. At this point my house becomes more than just a collection of processors; it becomes a multiagent system communicating over an intranet
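The coffee-and-toast coordination the column imagines can be sketched as a tiny scheduling exchange: the alarm clock broadcasts a shared finish time, and each appliance agent works backwards from it to its own start time. The device names and task durations below are invented for illustration; this is a toy, not any real home-networking protocol.

```python
class ApplianceAgent:
    """A kitchen device wrapped in a minimal agent interface."""

    def __init__(self, name, duration_min):
        self.name = name
        self.duration = duration_min  # minutes the device needs to finish its task

    def start_time(self, finish_at):
        # Work backwards from the shared finish time.
        return finish_at - self.duration

def schedule_breakfast(agents, wake_up):
    """The alarm clock broadcasts the wake-up time; each appliance
    replies with the time it must start so everything is ready together."""
    return {a.name: a.start_time(wake_up) for a in agents}
```

If the coffee maker needs 10 minutes and the toaster 3, and the alarm is set for minute 420 (7:00 a.m.), the coffee maker starts at 410 and the toaster at 417, so both finish as the alarm sounds.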

    Probability and Agents

    To make sense of the information that agents gather from the Web, they need to reason about it. If the information is precise and correct, they can use engines such as theorem provers to reason logically and derive correct conclusions. Unfortunately, the information is often imprecise and uncertain, which means they will need a probabilistic approach. More than 150 years ago, George Boole presented the logic that bears his name. There is concern that classical logic is not sufficient to model how people do or should reason. Adopting a probabilistic approach in constructing software agents and multiagent systems simplifies some thorny problems and exposes some difficult issues that you might overlook if you used purely logical approaches or (worse!) let procedural matters monopolize design concerns. Assessing the quality of the information received from another agent is a major problem in an agent system. The authors describe Bayesian networks and illustrate how you can use them for information quality assessment
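In the spirit of the column's use of Bayesian networks for information quality assessment, here is a minimal two-node example: a source's hidden reliability influences whether its reports turn out to be true, and Bayes' rule updates our belief in the source as outcomes are observed. The network structure and all probabilities are invented for illustration, not taken from the paper.

```python
# Network:  Reliable (R)  ->  ReportTrue (T)
# P(R) is the prior that the source is reliable; a reliable source's
# report is true with high probability, an unreliable one's rarely.

def posterior_reliable(prior_r, p_true_given_r, p_true_given_not_r, report_true):
    """Bayes' rule: update belief in the source's reliability after
    learning whether its latest report turned out to be true."""
    if report_true:
        like_r, like_not_r = p_true_given_r, p_true_given_not_r
    else:
        like_r, like_not_r = 1 - p_true_given_r, 1 - p_true_given_not_r
    evidence = prior_r * like_r + (1 - prior_r) * like_not_r
    return prior_r * like_r / evidence

# Start at 50/50; assume a reliable agent reports truthfully 90% of the
# time and an unreliable one only 30%.
belief = 0.5
for outcome in [True, True, False, True]:   # observed report outcomes
    belief = posterior_reliable(belief, 0.9, 0.3, outcome)
```

Belief in the source rises with each confirmed report and drops after the false one, which is exactly the kind of information-quality signal one agent can maintain about another.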

    Service-Oriented Computing: Key Concepts and Principles

    Traditional approaches to software development - the ones embodied in CASE tools and modeling frameworks - are appropriate for building individual software components, but they are not designed to face the challenges of open environments. Service-oriented computing provides a way to create a new architecture that reflects components' trends toward autonomy and heterogeneity. We thus emphasize SOC concepts instead of how to deploy Web services in accord with current standards. To begin the series, we describe the key concepts and abstractions of SOC and the elements of a corresponding engineering methodology

    Exploiting Expertise through Knowledge Networks

    The paper discusses the necessary capabilities of knowledge networks: categorizing (the ability to classify Web pages and other unstructured data automatically); hyperlinking (the ability to add to each item of information appropriate pointers to other relevant items of information); alerting (the automatic notification of users and agents to new information that might be of interest to them); and profiling (the construction of models of users and agents to describe their interests and expertise)
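Two of the listed capabilities, categorizing and alerting, can be sketched together in a few lines: documents are classified by keyword overlap with category profiles, and users whose interest profiles match the resulting category are notified. The category keywords, documents, and user profiles below are invented for illustration; real knowledge networks would use statistical classifiers rather than keyword sets.

```python
# Hypothetical category profiles (keyword sets) for the sketch.
CATEGORY_KEYWORDS = {
    "agents": {"agent", "multiagent", "negotiation"},
    "networking": {"intranet", "protocol", "embedded"},
}

def categorize(text):
    """Pick the category whose keyword set overlaps the document most."""
    words = set(text.lower().split())
    best = max(CATEGORY_KEYWORDS, key=lambda c: len(CATEGORY_KEYWORDS[c] & words))
    return best if CATEGORY_KEYWORDS[best] & words else None

def alert(doc, user_profiles):
    """Notify every user whose interest profile includes the document's category."""
    cat = categorize(doc)
    return [u for u, interests in user_profiles.items() if cat in interests]
```

Profiling, the fourth capability, would grow each user's interest set automatically from the documents they read, closing the loop between classification and notification.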

    Consensus Ontologies: Reconciling the Semantics of Web Pages and Agents

    As you build a Web site, it is worth asking: should I put my information where it belongs, or where people are most likely to look for it? Our recent research into improving searching through ontologies is providing some interesting results to answer this question. The techniques developed by our research bring organization to the information received and reconcile the semantics of each document. Our goal is to help users retrieve dynamically generated information that is tailored to their individual needs and preferences. We believe that it is easier for individuals or small groups to develop their own ontologies, regardless of whether global ones are available, and that these can be automatically and ex post facto related. We are working to determine the efficacy of local annotation for Web sources, as well as performing reconciliation that is qualified by measures of semantic distance. If successful, this research will enable software agents to resolve the semantic misconceptions that inhibit successful interoperation with other agents and that limit the effectiveness of searching distributed information sources
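The kind of reconciliation the abstract describes can be sketched with two independently built local ontologies merged into a consensus hierarchy, with a simple path-length measure of semantic distance relating their terms. The concept names, the naive merge on shared names, and the distance measure are illustrative assumptions, not the authors' actual algorithm.

```python
def ancestors(term, parent):
    """Chain of ancestors of a term in a child -> parent map."""
    chain = [term]
    while term in parent:
        term = parent[term]
        chain.append(term)
    return chain

def semantic_distance(a, b, parent):
    """Edges from a and b up to their closest common ancestor;
    None if the terms share no ancestor in the merged hierarchy."""
    ups_a, ups_b = ancestors(a, parent), ancestors(b, parent)
    common = [t for t in ups_a if t in ups_b]
    if not common:
        return None
    lca = common[0]                      # closest common ancestor
    return ups_a.index(lca) + ups_b.index(lca)

# Two small local ontologies built by different site owners.
site1 = {"laptop": "computer", "computer": "electronics"}
site2 = {"notebook-pc": "computer", "camera": "electronics",
         "computer": "electronics"}
consensus = {**site1, **site2}           # naive merge on shared concept names

d = semantic_distance("laptop", "notebook-pc", consensus)
```

Here "laptop" and "notebook-pc" come from different sites, yet the consensus hierarchy places both under "computer", giving them a small semantic distance; an agent could use such a measure to decide when two sites' terms refer to the same thing.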