An Abstract Formal Basis for Digital Crowds
Crowdsourcing, together with its related approaches, has become very popular
in recent years. All crowdsourcing processes involve the participation of a
digital crowd, a large number of people who access a single Internet platform
or shared service. In this paper we explore the possibility of applying formal
methods, typically used for the verification of software and hardware systems,
in analysing the behaviour of a digital crowd. More precisely, we provide a
formal description language for specifying digital crowds. We represent digital
crowds in which the agents do not directly communicate with each other. We
further show how this specification can provide the basis for sophisticated
formal methods, in particular formal verification.
Comment: 32 pages, 4 figures
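As a purely illustrative example (not drawn from the paper), a crowd-level requirement that such a specification language could express, and that formal verification could then check, might be written in a temporal logic as follows; the set Crowd and the predicates submit and acknowledged are hypothetical:

$$
\Box\, \forall a \in \mathit{Crowd}\, \big( \mathit{submit}(a) \rightarrow \Diamond\, \mathit{acknowledged}(a) \big)
$$

read as: whenever any member of the crowd submits a contribution through the shared platform, that contribution is eventually acknowledged.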
Towards Verifiably Ethical Robot Behaviour
Ensuring that autonomous systems work ethically is both complex and
difficult. However, the idea of having an additional 'governor' that assesses
the options available to the system and prunes them to select the most ethical
choices is well understood. Recent work has produced such a governor consisting
of a 'consequence engine' that assesses the likely future outcomes of actions
and then applies a Safety/Ethical logic to select actions. Although this is appealing,
it is impossible to be certain that the most ethical options are actually
taken. In this paper we extend and apply a well-known agent verification
approach to our consequence engine, allowing us to verify the correctness of
its ethical decision-making.
Comment: Presented at the 1st International Workshop on AI and Ethics, Sunday
25th January 2015, Hill Country A, Hyatt Regency Austin. Will appear in the
workshop proceedings published by AAAI.
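A minimal sketch of the 'governor' pattern described above, not the paper's implementation: a toy consequence engine predicts outcomes for each candidate action, and a Safety/Ethical ordering prunes the options. All names here (Outcome, CONSEQUENCES, ethical_rank, govern) are hypothetical.

```python
# Illustrative sketch only: governor = consequence engine + Safety/Ethical logic.

from dataclasses import dataclass

@dataclass(frozen=True)
class Outcome:
    harms_human: bool = False
    harms_robot: bool = False

# Toy consequence engine: map each candidate action to its predicted outcomes.
CONSEQUENCES = {
    "move_left":   [Outcome()],                   # nothing bad happens
    "move_right":  [Outcome(harms_robot=True)],   # the robot is damaged
    "stand_still": [Outcome(harms_human=True)],   # a human comes to harm
}

def ethical_rank(outcome: Outcome) -> int:
    """Safety/Ethical ordering: human harm (2) is worse than robot harm (1)."""
    return 2 if outcome.harms_human else 1 if outcome.harms_robot else 0

def govern(actions):
    """Keep only the actions whose worst predicted outcome is least severe."""
    worst = {a: max(ethical_rank(o) for o in CONSEQUENCES[a]) for a in actions}
    best = min(worst.values())
    return [a for a in actions if worst[a] == best]

print(govern(["move_left", "move_right", "stand_still"]))  # ['move_left']
```

Verifying such a governor then amounts to checking properties of the selection rule, for example that an action predicted to harm a human is never selected while a harmless alternative exists.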
Practical Challenges in Explicit Ethical Machine Reasoning
We examine implemented systems for ethical machine reasoning with a view to identifying the practical challenges (as opposed to philosophical challenges) posed by the area. We identify a need for complex ethical machine reasoning not only to be multi-objective, proactive, and scrutable, but also to draw on heterogeneous evidential reasoning. We also argue that, in many cases, it needs to operate in real time and be verifiable. We propose a general architecture involving a declarative ethical arbiter which draws upon multiple evidential reasoners, each responsible for a particular ethical feature of the system's environment. We claim that this architecture enables some separation of concerns among the practical challenges that ethical machine reasoning poses.
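A minimal sketch of such an architecture, under assumed interfaces that are not taken from the paper: each 'evidential reasoner' scores one ethical feature of a candidate action, and a simple declarative arbiter combines their verdicts. The reasoners and the threshold rule are hypothetical.

```python
# Illustrative sketch only: a declarative ethical arbiter over per-feature reasoners.

from typing import Callable, Dict, List

# A reasoner returns evidence in [0, 1] that an action is acceptable with
# respect to the single ethical feature it is responsible for.
Reasoner = Callable[[str], float]

def privacy_reasoner(action: str) -> float:
    return 0.2 if "record" in action else 0.9

def safety_reasoner(action: str) -> float:
    return 0.1 if "fast" in action else 0.8

REASONERS: Dict[str, Reasoner] = {
    "privacy": privacy_reasoner,
    "safety": safety_reasoner,
}

def arbiter(actions: List[str], threshold: float = 0.5) -> List[str]:
    """Declarative rule: permit an action only if every ethical feature
    reports sufficient evidence in its favour."""
    return [a for a in actions
            if all(reason(a) >= threshold for reason in REASONERS.values())]

print(arbiter(["move_slowly", "record_patient", "move_fast"]))  # ['move_slowly']
```

Because the arbiter's rule is separate from the individual reasoners, each component can be developed, swapped, and verified independently, which is the separation of concerns the abstract refers to.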
Agent Based Approaches to Engineering Autonomous Space Software
Current approaches to the engineering of space software such as satellite
control systems are based on the development of feedback controllers using
packages such as MATLAB's Simulink toolbox. These provide powerful tools for
engineering real-time systems that adapt to changes in the environment but are
limited when the controller itself needs to be adapted.
We are investigating ways in which ideas from temporal logics and agent
programming can be integrated with the use of such control systems to provide a
more powerful layer of autonomous decision making. This paper will discuss our
initial approaches to the engineering of such systems.
Comment: 3 pages, 1 figure, Formal Methods in Aerospace
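A minimal sketch of the kind of layering described above, with hypothetical names rather than the systems discussed in the paper: a small agent-style decision layer chooses which low-level feedback controller to run, so that the controller itself, and not just its feedback loop, can be adapted.

```python
# Illustrative sketch only: an agent layer selecting between feedback controllers.

def thrust_controller(state):
    """Toy proportional controller driving the altitude error to zero."""
    return -0.5 * (state["altitude_km"] - state["target_altitude_km"])

def safe_mode_controller(state):
    """Fallback controller: apply no thrust."""
    return 0.0

def agent_decide(state):
    """Agent-level decision: select a controller from beliefs about the
    situation, i.e. adapt which controller runs, not only its output."""
    if state["fuel_fraction"] < 0.05 or state["fault_detected"]:
        return safe_mode_controller
    return thrust_controller

state = {"altitude_km": 705.0, "target_altitude_km": 700.0,
         "fuel_fraction": 0.4, "fault_detected": False}
controller = agent_decide(state)
print(controller.__name__, controller(state))  # thrust_controller -2.5
```

The discrete, rule-like choice made in agent_decide is exactly the kind of behaviour that temporal-logic specification and agent verification techniques are suited to checking.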
Developing Multi-Agent Systems with Degrees of Neuro-Symbolic Integration [A Position Paper]
In this short position paper we highlight our ongoing work on verifiable
heterogeneous multi-agent systems and, in particular, the complex (and often
non-functional) issues that impact the choice of structure within each agent.
The Impact of COVID-19 on Primary Care Practitioners: Transformation, Upheaval and Uncertainty
Postprint