
    HAC-ER: a disaster response system based on human-agent collectives

    This paper proposes a novel disaster management system called HAC-ER that addresses some of the challenges faced by emergency responders by enabling humans and agents, using state-of-the-art algorithms, to collaboratively plan and carry out tasks in teams referred to as human-agent collectives. In particular, HAC-ER utilises crowdsourcing combined with machine learning to extract situational awareness information from large streams of reports posted by members of the public and trusted organisations. We then show how this information can inform human-agent teams in coordinating multi-UAV deployments as well as task planning for responders on the ground. Finally, HAC-ER incorporates a tool for tracking and analysing the provenance of information shared across the entire system. In summary, this paper describes a prototype system, validated by real-world emergency responders, that combines several state-of-the-art techniques for integrating humans and agents, and illustrates, for the first time, how such an approach can enable more effective disaster response operations.
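
    The abstract does not publish the extraction pipeline itself, so the following is a minimal, hypothetical sketch of one way crowdsourced reports could be triaged into situational-awareness categories with off-the-shelf machine learning. The category labels, example reports, and model choice are all illustrative assumptions, not HAC-ER's actual components.

    ```python
    # Hypothetical sketch only: HAC-ER's actual classifier, labels and data
    # are not described in the abstract; everything below is illustrative.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labelled reports standing in for the crowdsourced stream.
    reports = [
        "Bridge on Main St has collapsed, cars trapped underneath",
        "Family of four needs water and food at the north shelter",
        "Power lines down near the school, sparks visible",
        "Elderly man with chest pain needs medical help urgently",
    ]
    labels = ["infrastructure", "supplies", "infrastructure", "medical"]

    # TF-IDF features plus a linear classifier: one lightweight way to
    # route incoming reports to the relevant human-agent response team.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(reports, labels)

    print(model.predict(["Road blocked by fallen debris, ambulance cannot pass"]))
    ```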

    Human-agent collectives

    We live in a world where a host of computer systems, distributed throughout our physical and information environments, are increasingly implicated in our everyday actions. Computer technologies impact all aspects of our lives and our relationship with the digital has fundamentally altered as computers have moved out of the workplace and away from the desktop. Networked computers, tablets, phones and personal devices are now commonplace, as is an increasingly diverse set of digital devices built into the world around us. Data and information are generated at unprecedented speeds and volumes from an increasingly diverse range of sources. They are then combined in unforeseen ways, limited only by human imagination. People's activities and collaborations are becoming ever more dependent upon and intertwined with this ubiquitous information substrate. As these trends continue apace, it is becoming apparent that many endeavours involve the symbiotic interleaving of humans and computers. Moreover, the emergence of these close-knit partnerships is inducing profound change. Rather than issuing instructions to passive machines that wait until they are asked before doing anything, we will work in tandem with highly inter-connected computational components that act autonomously and intelligently (aka agents). As a consequence, greater attention needs to be given to the balance of control between people and machines. In many situations, humans will be in charge and agents will predominantly act in a supporting role. In other cases, however, the agents will be in control and humans will play the supporting role. We term this emerging class of systems human-agent collectives (HACs) to reflect the close partnership and the flexible social interactions between the humans and the computers. As well as exhibiting increased autonomy, such systems will be inherently open and social. This means the participants will need to continually and flexibly establish and manage a range of social relationships. Thus, depending on the task at hand, different constellations of people, resources, and information will need to come together, operate in a coordinated fashion, and then disband. The openness and presence of many distinct stakeholders means participation will be motivated by a broad range of incentives rather than by diktat. This article outlines the key research challenges involved in developing a comprehensive understanding of HACs. To illuminate this agenda, a nascent application in the domain of disaster response is presented.

    The social in the platform trap: Why a microscopic system focus limits the prospect of social machines

    “Filter bubble”, “echo chambers”, “information diet” – the metaphors to describe today’s information dynamics on social media platforms are fairly diverse. People use them to describe the impact of the viral spread of fake, biased or purposeless content online, as witnessed during the recent race for the US presidency or the latest outbreak of the Ebola virus (in the latter case a tasteless racist meme was drowning out any meaningful content). This unravels the potential envisioned to arise from emergent activities of human collectives on the World Wide Web, as exemplified by the Arab Spring mass movements or digital disaster response supported by the Ushahidi tool suite.

    Toward a Collectivist National Defense

    Most philosophers writing on the ethics of war endorse “reductivist individualism,” a view that holds both that killing in war is subject to the very same principles of ordinary morality; and that morality concerns individuals and their rights, and does not treat collectives as having any special status. I argue that this commitment to individualism poses problems for this view in the case of national defense. More specifically, I argue that the main strategies for defending individualist approaches to national defense either fail by their own lights or yield deeply counterintuitive implications. I then offer the foundations for a collectivist approach. I argue that such an approach must do justice to the collective goods that properly constituted states make possible and protect through certain acts of defensive war; and that any such picture of national defense must make room for some form of national partiality.

    Human–agent collaboration for disaster response

    In the aftermath of major disasters, first responders are typically overwhelmed with large numbers of spatially distributed search and rescue tasks, each with its own requirements. Moreover, responders have to operate in highly uncertain and dynamic environments where new tasks may appear and hazards may be spreading across the disaster space. Hence, rescue missions may need to be re-planned as new information comes in, tasks are completed, or new hazards are discovered. Finding an optimal allocation of resources to complete all the tasks is a major computational challenge. In this paper, we use decision-theoretic techniques to solve the task allocation problem posed by emergency response planning and then deploy our solution as part of an agent-based planning tool in real-world field trials. By so doing, we are able to study the interactional issues that arise when humans are guided by an agent. Specifically, we develop an algorithm based on a multi-agent Markov decision process representation of the task allocation problem and show that it outperforms standard baseline solutions. We then integrate the algorithm into a planning agent that responds to requests for tasks from participants in a mixed-reality location-based game, called AtomicOrchid, that simulates disaster response settings in the real world. We then run a number of trials of our planning agent and compare it against a purely human-driven system. Our analysis of these trials shows that human commanders adapt to the planning agent by taking on a more supervisory role, and that giving humans the flexibility to request plans from the agent allows them to perform more tasks more efficiently than allocating tasks through purely human interaction. We also discuss how such flexibility could lead to poor performance if left unchecked.
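
    The paper's multi-agent formulation is considerably richer than anything shown here, but a toy, single-responder, deterministic simplification conveys the decision-theoretic idea: treat allocation as a Markov decision process whose states are the remaining tasks and solve it by dynamic programming. The task names, values, and positions below are invented for illustration; this is a sketch of the general technique, not the paper's algorithm.

    ```python
    # Illustrative sketch, not the paper's algorithm: a single-responder,
    # deterministic task-allocation MDP solved by dynamic programming.
    from functools import lru_cache

    # Each (invented) task has a completion value and a position on a 1-D road.
    TASKS = {
        "rescue": (10.0, 4.0),
        "medical": (8.0, 1.0),
        "supplies": (5.0, 7.0),
    }

    @lru_cache(maxsize=None)
    def best_value(pos: float, remaining: frozenset) -> float:
        """Optimal total reward from position `pos` with `remaining` tasks left."""
        if not remaining:
            return 0.0
        totals = []
        for task in remaining:
            value, loc = TASKS[task]
            reward = value - abs(pos - loc)  # completing a task costs travel distance
            totals.append(reward + best_value(loc, remaining - {task}))
        return max(totals)

    # Value of the optimal task ordering for a responder starting at position 0.
    print(best_value(0.0, frozenset(TASKS)))
    ```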

    Collective Virtues: A Response to Mandevillian Morality

    Group Agency and Artificial Intelligence

    The aim of this exploratory paper is to discuss a sometimes recognized but still under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and a moral status? I will tentatively defend the (increasingly widely held) view that, under certain conditions, artificial intelligent systems, like corporate entities, might qualify as responsible moral agents and as holders of limited rights and legal personhood. I will further suggest that regulators should permit the use of autonomous artificial systems in high-stakes settings only if they are engineered to function as moral (not just intentional) agents and/or there is some liability-transfer arrangement in place. I will finally raise the possibility that if artificial systems ever became phenomenally conscious, there might be a case for extending a stronger moral status to them, but argue that, as of now, this remains very hypothetical.

    Real-Time Sensing of Trust in Human-Machine Interactions

    Human trust in automation plays an important role in successful interactions between humans and machines. To design intelligent machines that can respond to changes in human trust, real-time sensing of trust level is needed. In this paper, we describe an empirical trust sensor model that maps psychophysiological measurements to human trust level. The use of psychophysiological measurements is motivated by their ability to capture a human's response in real time. An exhaustive feature set is considered, and a rigorous statistical approach is used to determine a reduced set of ten features. Multiple classification methods are considered for mapping the reduced feature set to the categorical trust level. The results show that psychophysiological measurements can be used to sense trust in real time. Moreover, a mean accuracy of 71.57% is achieved using a combination of classifiers to model trust level in each human subject. Future work will consider the effect of human demographics on feature selection and modeling.
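
    The dataset and exact classifier ensemble are not given in the abstract, so the sketch below only illustrates the shape of such a pipeline on synthetic data: univariate selection down to ten features, followed by a soft-voting combination of classifiers. The feature count, stand-in labels, and specific model choices are all assumptions for illustration.

    ```python
    # Hedged sketch on synthetic data: the paper's features, labels and exact
    # classifier combination are not public; the choices below are assumptions.
    import numpy as np
    from sklearn.ensemble import VotingClassifier
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 40))               # 40 candidate psychophysiological features
    y = (X[:, :3].sum(axis=1) > 0).astype(int)   # stand-in trust / distrust labels

    pipeline = make_pipeline(
        SelectKBest(f_classif, k=10),            # reduce to a set of ten features
        VotingClassifier(                        # combine several classifiers
            [
                ("lr", LogisticRegression()),
                ("svm", SVC(probability=True)),
                ("tree", DecisionTreeClassifier(max_depth=3)),
            ],
            voting="soft",
        ),
    )
    print(cross_val_score(pipeline, X, y, cv=5).mean())  # per-fold accuracy, averaged
    ```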