
    Intrusiveness, Trust and Argumentation: Using Automated Negotiation to Inhibit the Transmission of Disruptive Information

    The question of how to promote the growth and diffusion of information has been extensively addressed by a wide research community. A common assumption underpinning most studies is that the information to be transmitted is useful and of high quality. In this paper, we endorse a complementary perspective. We investigate how the growth and diffusion of high-quality information can be managed and maximized by preventing, dampening and minimizing the diffusion of low-quality, unwanted information. To this end, we focus on the conflict between pervasive computing environments and the joint activities undertaken in parallel local social contexts. When technologies for distributed activities (e.g. mobile technology) develop, both artifacts and services that enable people to participate in non-local contexts are likely to intrude on local situations. As a mechanism for minimizing the intrusion of the technology, we develop a computational model of argumentation-based negotiation among autonomous agents. A key role in the model is played by trust: which arguments are used, and how they are evaluated, depends on how trustworthy the agents judge one another. To gain insight into the implications of the model, we conduct a number of virtual experiments. The results enable us to explore how intrusiveness is affected by trust, by the negotiation network and by the agents' ability to conduct argumentation.
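
    A minimal sketch, in Python, of the kind of trust-weighted argument evaluation the model relies on; the Agent class, trust scores and strength values below are illustrative assumptions, not the paper's actual formalism.

    class Agent:
        def __init__(self, name, trust):
            self.name = name
            self.trust = trust  # trust[sender] in [0, 1]

        def evaluate(self, argument, sender):
            # Discount the claimed strength of an incoming argument by how
            # much this agent trusts the sender (illustrative rule only).
            return argument["strength"] * self.trust.get(sender, 0.5)

        def concede(self, own_strength, argument, sender):
            # Concede (e.g. allow the interruption) only if the trust-weighted
            # counter-argument outweighs the agent's own case.
            return self.evaluate(argument, sender) > own_strength

    # Example: a highly trusted remote party argues for interrupting now.
    local = Agent("local", trust={"remote": 0.9})
    print(local.concede(own_strength=0.4,
                        argument={"strength": 0.6}, sender="remote"))  # True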

    Coalition Formation with Spatial and Temporal Constraints

    The coordination of emergency responders and robots to undertake a number of tasks in disaster scenarios is a grand challenge for multi-agent systems. Central to this endeavour is the problem of forming the best teams (coalitions) of responders to perform the various tasks in the area where the disaster has struck. Moreover, these teams may have to form, disband, and reform in different areas of the disaster region. This is because, first, there will in most cases be more tasks than agents, so agents need to schedule themselves to attempt each task in turn. Second, the tasks themselves can be very complex: they require the agents to work on them for different lengths of time and have deadlines by which they need to be completed. The problem is complicated still further when different coalitions perform tasks with different levels of efficiency. Given all these facets, we define this as the Coalition Formation with Spatial and Temporal constraints Problem (CFSTP). We show that this problem is NP-hard; in particular, it contains the well-known complex combinatorial problem of Team Orienteering as a special case. Based on this, we design a Mixed Integer Program to optimally solve small-scale instances of the CFSTP and develop new anytime heuristics that can, on average, complete 97% of the tasks for large problems (20 agents and 300 tasks). In so doing, our solutions represent the first results for the CFSTP.
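
    A minimal greedy sketch, in Python, in the spirit of the anytime heuristics mentioned above (not the paper's algorithm): tasks are visited in deadline order and each is given to the agent that can finish it earliest. The task fields, the travel-time model and the one-agent-per-task simplification are all illustrative assumptions.

    import math

    def greedy_cfstp(agents, tasks, speed=1.0):
        # Visit tasks in deadline order; send the agent that can finish each
        # task earliest, provided its deadline can still be met.
        completed = []
        for task in sorted(tasks, key=lambda t: t["deadline"]):
            best = None
            for agent in agents:
                travel = math.dist(agent["pos"], task["pos"]) / speed
                finish = agent["free_at"] + travel + task["workload"]
                if finish <= task["deadline"] and (best is None or finish < best[0]):
                    best = (finish, agent)
            if best is not None:
                finish, agent = best
                agent["free_at"], agent["pos"] = finish, task["pos"]
                completed.append((task["id"], finish))
        return completed

    agents = [{"pos": (0, 0), "free_at": 0.0}, {"pos": (5, 5), "free_at": 0.0}]
    tasks = [{"id": "t1", "pos": (1, 1), "workload": 2.0, "deadline": 10.0},
             {"id": "t2", "pos": (6, 5), "workload": 3.0, "deadline": 8.0}]
    print(greedy_cfstp(agents, tasks))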

    Human-agent collectives

    We live in a world where a host of computer systems, distributed throughout our physical and information environments, are increasingly implicated in our everyday actions. Computer technologies impact all aspects of our lives, and our relationship with the digital has fundamentally altered as computers have moved out of the workplace and away from the desktop. Networked computers, tablets, phones and personal devices are now commonplace, as are an increasingly diverse set of digital devices built into the world around us. Data and information are generated at unprecedented speeds and volumes from an increasingly diverse range of sources, and are then combined in unforeseen ways, limited only by human imagination. People’s activities and collaborations are becoming ever more dependent upon and intertwined with this ubiquitous information substrate. As these trends continue apace, it is becoming apparent that many endeavours involve the symbiotic interleaving of humans and computers. Moreover, the emergence of these close-knit partnerships is inducing profound change. Rather than issuing instructions to passive machines that wait until they are asked before doing anything, we will work in tandem with highly inter-connected computational components that act autonomously and intelligently (aka agents). As a consequence, greater attention needs to be given to the balance of control between people and machines. In many situations, humans will be in charge and agents will predominantly act in a supporting role. In other cases, however, the agents will be in control and humans will play the supporting role. We term this emerging class of systems human-agent collectives (HACs) to reflect the close partnership and the flexible social interactions between the humans and the computers. As well as exhibiting increased autonomy, such systems will be inherently open and social. This means the participants will need to continually and flexibly establish and manage a range of social relationships. Thus, depending on the task at hand, different constellations of people, resources, and information will need to come together, operate in a coordinated fashion, and then disband. The openness and presence of many distinct stakeholders mean that participation will be motivated by a broad range of incentives rather than by diktat. This article outlines the key research challenges involved in developing a comprehensive understanding of HACs. To illuminate this agenda, a nascent application in the domain of disaster response is presented.

    Correlation Clustering Based Coalition Formation For Multi-Robot Task Allocation

    In this paper, we study the multi-robot task allocation problem, in which a group of robots needs to be allocated to a set of tasks so that the tasks can be completed optimally. One task may need more than one robot to finish it, so the robots need to form coalitions to complete these tasks. Multi-robot coalition formation for task allocation is a well-known NP-hard problem. To solve it, we use a linear-programming-based graph partitioning approach, together with a region-growing strategy, that allocates (near-)optimal robot coalitions to tasks in a negligible amount of time. Our proposed algorithm is fast (taking only 230 seconds for 100 robots and 10 tasks) and finds a near-optimal solution (up to 97.66% of the optimal). We have empirically demonstrated that the proposed approach always finds a solution that is closer (by up to 9.1 times) to the optimal solution than the theoretical worst-case bound proved in earlier work.
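
    A simplified Python sketch of the region-growing idea only (not the LP-based graph-partitioning algorithm itself): each task greedily absorbs its nearest unassigned robots until its required coalition size is met. The robot positions and the "required" field are illustrative assumptions.

    import math

    def region_grow_coalitions(robots, tasks):
        # robots: name -> (x, y); tasks: list of dicts with id, pos, required
        unassigned = dict(robots)
        coalitions = {}
        for task in sorted(tasks, key=lambda t: t["required"], reverse=True):
            members = []
            while len(members) < task["required"] and unassigned:
                # Grow the coalition outwards from the task location.
                nearest = min(unassigned,
                              key=lambda r: math.dist(unassigned[r], task["pos"]))
                members.append(nearest)
                del unassigned[nearest]
            coalitions[task["id"]] = members
        return coalitions

    robots = {"r1": (0, 0), "r2": (1, 0), "r3": (9, 9)}
    tasks = [{"id": "clear_rubble", "pos": (0, 1), "required": 2},
             {"id": "scout", "pos": (8, 8), "required": 1}]
    print(region_grow_coalitions(robots, tasks))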

    Decentralized Coordination in RoboCup Rescue


    Managing Social Influences through Argumentation-Based Negotiation

    Social influences play an important part in the actions that an individual agent may perform within a multi-agent society. However, the incomplete knowledge and the diverse, conflicting influences present within such societies may stop an agent from abiding by all of its social influences. This may, in turn, lead to conflicts that the agents need to identify, manage, and resolve in order for the society to behave in a coherent manner. To this end, we present an empirical study of an argumentation-based negotiation (ABN) approach that allows agents to detect such conflicts, and then manage and resolve them through the use of argumentative dialogues. To test our theory, we map our ABN model to a multi-agent task allocation scenario. Our results show that using an argumentation approach allows agents to manage their social influences both efficiently and effectively, even under high degrees of incompleteness. Finally, we show that allowing agents to argue and resolve such conflicts early in the negotiation encounter increases their efficiency in managing social influences.
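
    A minimal Python sketch of the kind of conflict such dialogues must first detect: two social influences clash when they oblige the same agent over overlapping time windows. The obligation fields below are illustrative assumptions, not the paper's formal model.

    def conflicting(obligations):
        # Pairwise check: same agent obliged over overlapping time windows.
        clashes = []
        for i, a in enumerate(obligations):
            for b in obligations[i + 1:]:
                overlap = a["start"] < b["end"] and b["start"] < a["end"]
                if a["agent"] == b["agent"] and overlap:
                    clashes.append((a["task"], b["task"]))
        return clashes

    obligations = [{"agent": "a1", "task": "patrol", "start": 0, "end": 5},
                   {"agent": "a1", "task": "report", "start": 3, "end": 6}]
    print(conflicting(obligations))  # [('patrol', 'report')]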

    Agent-based homeostatic control for green energy in the smart grid

    With dwindling non-renewable energy reserves and the adverse effects of climate change, the development of the smart electricity grid is seen as key to solving global energy security issues and to reducing carbon emissions. In this respect, there is a growing need to integrate renewable (or green) energy sources in the grid. However, the intermittency of these energy sources requires that demand must also be made more responsive to changes in supply, and a number of smart grid technologies are being developed, such as high-capacity batteries and smart meters for the home, to enable consumers to be more responsive to conditions on the grid in real time. Traditional solutions based on these technologies, however, tend to ignore the fact that individual consumers will behave in the way that best satisfies their own preferences to use or store energy (as opposed to those of the supplier or the grid operator). Hence, in practice, it is unclear how these solutions will cope with large numbers of consumers using their devices in this way. Against this background, in this paper we develop novel control mechanisms based on the use of autonomous agents to better incorporate consumer preferences in managing demand. These agents, residing on consumers' smart meters, can both communicate with the grid and optimise their owner's energy consumption to satisfy their preferences. More specifically, we provide a novel control mechanism that models and controls a system comprising a green energy supplier operating within the grid and a number of individual homes (each possibly owning a storage device). This control mechanism is based on the concept of homeostasis, whereby control signals are sent to individual components of a system, based on their continuous feedback, in order to change their state so that the system may reach a stable equilibrium. Thus, we define a new carbon-based pricing mechanism for this green energy supplier that takes advantage of carbon-intensity signals available on the internet in order to provide real-time pricing. The pricing scheme is designed in such a way that it can be readily implemented using existing communication technologies and is easily understandable by consumers. Building upon this, we develop new control signals that the supplier can use to incentivise agents to shift demand (using their storage devices) to times when green energy is available. Moreover, we show how these signals can be adapted according to changes in supply and to various degrees of penetration of storage in the system. We empirically evaluate our system and show that, when all homes are equipped with storage devices, the supplier can significantly reduce its reliance on other, carbon-emitting power sources to cover its own shortfalls. By so doing, the supplier reduces the carbon emissions of the system by up to 25%, while consumers reduce their costs by up to 14.5%. Finally, we demonstrate that our homeostatic control mechanism is not sensitive to small prediction errors and that the supplier is incentivised to accurately predict its green production in order to minimise costs.
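
    A minimal Python sketch of a carbon-linked price signal and a storage agent's response to it; the tariff parameters and thresholds are made-up illustrative values, not the paper's actual pricing mechanism.

    def carbon_price(carbon_intensity, base=0.05, rate=0.0004):
        # Unit price grows linearly with the grid's real-time carbon
        # intensity (gCO2/kWh); parameters are illustrative only.
        return base + rate * carbon_intensity

    def storage_action(price, low=0.10, high=0.20):
        # Homeostatic-style response: charge the home battery when the
        # price signal is low (green energy plentiful), discharge when high.
        if price <= low:
            return "charge"
        if price >= high:
            return "discharge"
        return "idle"

    for intensity in (100, 300, 500):  # example gCO2/kWh readings
        p = carbon_price(intensity)
        print(intensity, round(p, 3), storage_action(p))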

    Learning from expert teams

    We studied the experimental adoption of an image-classification tool as an organisation planned its adoption within its teams of intelligence analysts. We identified that existing models of expert decision-making and function allocation can be employed to inform the design and adoption of these tools.

    Parallelisation and application of AD3 as a method for solving large scale combinatorial auctions

    Auctions, and combinatorial auctions (CAs) in particular, have been successfully employed to solve coordination problems in a wide range of application domains. However, the scale of CAs that can be optimally solved is small because of the complexity of the winner determination problem (WDP), namely that of finding the bids that maximise the auctioneer's revenue. One way of approximating the solution of a WDP is to solve its linear programming (LP) relaxation. The recently proposed Alternating Directions Dual Decomposition algorithm (AD3) has been shown to efficiently solve large-scale LP relaxations. Hence, in this paper we show how to encode the WDP so that it can be approximated by means of AD3. Moreover, we present PAR-AD3, the first parallel implementation of AD3. PAR-AD3 is shown to be up to 12.4 times faster than CPLEX in a single-thread execution, and up to 23 times faster than parallel CPLEX on an 8-core architecture. PAR-AD3 therefore becomes the algorithm of choice for solving large-scale WDP LP relaxations for hard instances. Furthermore, PAR-AD3 has potential for large-scale coordination problems that must be solved as optimisation problems. Research supported by MICINN projects TIN2011-28689-C02-01, TIN2013-45732-C4-4-P and TIN2012-38876-C02-01. Peer reviewed.
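
    A minimal sketch of the WDP's LP relaxation in Python, solved here with SciPy's linprog rather than AD3 or CPLEX: each bid offers a price for a bundle of items, the 0/1 decision per bid is relaxed to [0, 1], and each item may be sold at most once. The items and bids below are illustrative.

    from scipy.optimize import linprog

    items = ["i1", "i2", "i3"]
    bids = [({"i1", "i2"}, 5.0), ({"i2", "i3"}, 4.0), ({"i3"}, 2.0)]

    c = [-price for _, price in bids]                 # linprog minimises
    A = [[1.0 if item in bundle else 0.0 for bundle, _ in bids]
         for item in items]                           # one row per item
    b = [1.0] * len(items)                            # each item sold at most once

    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, 1)] * len(bids))
    print("fractional allocation:", res.x, "revenue:", -res.fun)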

    The effect of data visualisation quality and task density on human-swarm interaction

    Despite the advantages of having robot swarms, human supervision is required for real-world applications. The performance of the human-swarm system depends on several factors, including the availability of data to the human operators. In this paper, we study the human-factors aspect of human-swarm interaction and investigate how access to high-quality data can affect the performance of the human-swarm system: the number of tasks completed and the human trust level during operation. We designed an experiment in which a human operator is tasked with operating a swarm to identify casualties in an area within a given time period. One group of operators had the option to request high-quality pictures, while the other group had to base their decisions on the available low-quality images. We performed a user study with 120 participants and recorded their success rate (logged directly via the simulation platform) as well as their workload and trust level (measured through a questionnaire after completing a human-swarm scenario). The findings from our study indicated that the group granted access to high-quality data exhibited an increased workload and placed greater trust in the swarm, confirming our initial hypothesis. However, we also found that the number of accurately identified casualties did not vary significantly between the two groups, suggesting that data quality had no impact on successful task completion.