
    Image Recognition Applied to Security Systems: The Case of Burkina Faso

    In this article we propose a model composed of five convolutional layers, two max-pooling layers, and three fully connected layers, allowing image recognition to be applied to security systems in the case of Burkina Faso. The main contributions are: the establishment of a rapid and efficient aerial reconnaissance system; stable and fluid drone navigation through learned identification of simulated targets; and improved security in Burkina Faso. The results show that training and test accuracy increase with the number of epochs, reflecting that the model learns more information at each epoch. If accuracy decreases, the model needs more information to learn, so the number of epochs must be increased, and vice versa. Similarly, the training and validation error decrease with the number of epochs. Keywords: artificial intelligence, image, recognition, security, Burkina Faso. DOI: 10.7176/NMMC/102-04. Publication date: October 31st 202
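    The abstract does not give layer widths, input resolution, or the framework used; purely as an illustration, the following is a minimal Keras sketch of a network with five convolutional layers, two max-pooling layers, and three fully connected layers. The input size (128x128 RGB), filter counts, and NUM_CLASSES are assumptions, not values from the paper.

    # Minimal sketch (assumed hyperparameters) of a CNN with five convolutional
    # layers, two max-pooling layers, and three fully connected layers.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 4  # hypothetical number of target categories

    model = models.Sequential([
        layers.Input(shape=(128, 128, 3)),                 # assumed input resolution
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),                             # first of two pooling layers
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),                             # second pooling layer
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),              # fully connected layer 1
        layers.Dense(128, activation="relu"),              # fully connected layer 2
        layers.Dense(NUM_CLASSES, activation="softmax"),   # fully connected layer 3
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Training for more epochs lets the model learn more, as the abstract notes:
    # model.fit(train_images, train_labels, epochs=20, validation_split=0.2)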

    AERoS: Assurance of Emergent Behaviour in Autonomous Robotic Swarms

    The behaviours of a swarm are not explicitly engineered. Instead, they are an emergent consequence of the interactions of individual agents with each other and their environment. This emergent functionality poses a challenge to safety assurance. The main contribution of this paper is a process for the safety assurance of emergent behaviour in autonomous robotic swarms called AERoS, following the guidance on the Assurance of Machine Learning for use in Autonomous Systems (AMLAS). We explore our proposed process using a case study centred on a robot swarm operating a public cloakroom. Comment: 12 pages, 11 figures

    RV4JaCa - Runtime Verification for Multi-Agent Systems

    This paper presents a Runtime Verification (RV) approach for Multi-Agent Systems (MAS) using the JaCaMo framework. Our objective is to bring a layer of security to the MAS. This layer is capable of controlling events during the execution of the system without requiring a specific implementation in the behaviour of each agent to recognise those events. MAS have been used in the context of hybrid intelligence. This use requires communication between software agents and human beings. In some cases, communication takes place via natural language dialogues. However, this kind of communication raises a concern about controlling the flow of dialogue so that agents can prevent any change in the topic of discussion that could impair their reasoning. We demonstrate the implementation of a monitor that aims to control this dialogue flow in a MAS that communicates with the user through natural language to aid decision-making in hospital bed allocation.
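    RV4JaCa itself is built on JaCaMo; purely as a language-agnostic illustration of the monitoring idea, the sketch below shows a minimal runtime monitor that intercepts dialogue events and blocks messages whose topic falls outside an allowed set. The names (Message, DialogueMonitor, allowed_topics) are hypothetical and are not taken from the paper.

    # Hypothetical sketch of a runtime-verification monitor that sits between
    # agents and the user, checking that dialogue messages stay on an allowed
    # topic. This is not the RV4JaCa implementation, which runs on JaCaMo.
    from dataclasses import dataclass, field

    @dataclass
    class Message:
        sender: str
        receiver: str
        topic: str
        content: str

    @dataclass
    class DialogueMonitor:
        allowed_topics: set
        violations: list = field(default_factory=list)

        def check(self, msg: Message) -> bool:
            """Return True if the message may be delivered, False otherwise."""
            if msg.topic in self.allowed_topics:
                return True
            self.violations.append(msg)  # record the off-topic event
            return False

    # Usage: only messages about bed allocation are forwarded to the agents.
    monitor = DialogueMonitor(allowed_topics={"bed_allocation"})
    ok = monitor.check(Message("user", "assistant", "bed_allocation",
                               "Which ward has a free ICU bed?"))
    blocked = monitor.check(Message("user", "assistant", "weather",
                                    "Will it rain tomorrow?"))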

    On Specifying for Trustworthiness

    As autonomous systems (AS) increasingly become part of our daily lives, ensuring their trustworthiness is crucial. In order to demonstrate the trustworthiness of an AS, we first need to specify what is required for an AS to be considered trustworthy. This roadmap paper identifies key challenges for specifying for trustworthiness in AS, as identified during the "Specifying for Trustworthiness" workshop held as part of the UK Research and Innovation (UKRI) Trustworthy Autonomous Systems (TAS) programme. We look across a range of AS domains with consideration of the resilience, trust, functionality, verifiability, security, and governance and regulation of AS and identify some of the key specification challenges in these domains. We then highlight the intellectual challenges that are involved with specifying for trustworthiness in AS that cut across domains and are exacerbated by the inherent uncertainty involved with the environments in which AS need to operate. Comment: Accepted version of paper. 13 pages, 1 table, 1 figure

    AN INTRODUCTION TO FRAMEWORK ADAPTATIONS FOR ADDITIONAL ASSURANCE OF A DEEP NEURAL NETWORK WITHIN NAVAL TEST AND EVALUATION

    The complexity of modern warfare has rapidly outmatched the capacity of a human brain to accomplish the required tasks of a defined mission set. Shedding mundane tasks would prove immensely beneficial, freeing the warfighter to solve more complex issues; however, most tasks that a human might find menial, and shed-worthy, prove vastly abstract for a computer to solve. Advances in Deep Neural Network (DNN) technology have recently demonstrated extensive applications. As DNNs become more capable of accomplishing increasingly complex tasks, and the processors to run those neural nets continue to decrease in size, incorporation of DNN technology into legacy and next-generation aerial Department of Defense platforms has become eminently useful and advantageous. The assimilation of DNN-based systems using traditional testing methods and frameworks to produce artifacts in support of platform certification within Naval Airworthiness, however, proves prohibitive from a cost and time perspective, is not factored for agile development, and would provide an incomplete understanding of the capabilities and limitations of a neural network. The framework presented in this paper provides updated methodologies and considerations for the testing, evaluation, and assurance of neural networks in support of the Naval Test and Evaluation process. Commander, United States Navy. Approved for public release; distribution is unlimited.

    Surgical Subtask Automation for Intraluminal Procedures using Deep Reinforcement Learning

    Intraluminal procedures, which use flexible instruments to navigate through complex luminal structures of the body, have opened up a new sub-field of minimally invasive surgery, offering reduced invasiveness and improved patient benefits. One of the major challenges in this field is the accurate and precise control of the instrument inside the human body. Robotics has emerged as a promising solution to this problem. However, to achieve successful robotic intraluminal interventions, the control of the instrument needs to be automated to a large extent. The thesis first examines the state of the art in intraluminal surgical robotics and identifies the key challenges in this field, which include the need for safe and effective tool manipulation and the ability to adapt to unexpected changes in the luminal environment. To address these challenges, the thesis proposes several levels of autonomy that enable the robotic system to perform individual subtasks autonomously, while still allowing the surgeon to retain overall control of the procedure. The approach facilitates the development of specialized algorithms such as Deep Reinforcement Learning (DRL) for subtasks like navigation and tissue manipulation to produce robust surgical gestures. Additionally, the thesis proposes a safety framework that provides formal guarantees to prevent risky actions. The presented approaches are evaluated through a series of experiments using simulation and robotic platforms. The experiments demonstrate that subtask automation can improve the accuracy and efficiency of tool positioning and tissue manipulation, while also reducing the cognitive load on the surgeon. The results of this research have the potential to improve the reliability and safety of intraluminal surgical interventions, ultimately leading to better outcomes for patients and surgeons.

    Moral responsibility for unforeseen harms caused by autonomous systems

    Autonomous systems are machines which embody Artificial Intelligence and Machine Learning and which take actions in the world, independently of direct human control. Their deployment raises a pressing question, which I call the 'locus of moral responsibility' question: who, if anyone, is morally responsible for a harm caused directly by an autonomous system? My specific focus is moral responsibility for unforeseen harms. First, I set up the 'locus of moral responsibility' problem. Unforeseen harms from autonomous systems create a problem for what I call the Standard View, rooted in common sense, that human agents are morally responsible. Unforeseen harms give credence to the main claim of 'responsibility gap' arguments, namely that humans do not meet the control and knowledge conditions of responsibility sufficiently to warrant such an ascription. Second, I argue a delegation framework offers a powerful route for answering the 'locus of moral responsibility' question. I argue that responsibility as attributability traces to the human principals who made the decision to delegate to the system, notwithstanding a later suspension of control and knowledge. These principals would also be blameworthy if their decision to delegate did not serve a purpose that morally justified the subsequent risk imposition in the first place. Because I argue that different human principals share moral responsibility, I defend a pluralist Standard View. Third, I argue that, while today's autonomous systems do not meet the agential condition for moral responsibility, it is neither conceptually incoherent nor physically impossible that they might. Because I take it to be a contingent and not a necessary truth that human principals exclusively bear moral responsibility, I defend a soft, pluralist Standard View. Finally, I develop and sharpen my account in response to possible objections, and I explore its wider implications.

    Towards a Framework for Certification of Reliable Autonomous Systems

    A computational system is called autonomous if it is able to make its own decisions, or take its own actions, without human supervision or control. The capability and spread of such systems have reached the point where they are beginning to touch much of everyday life. However, regulators grapple with how to deal with autonomous systems, for example how could we certify an Unmanned Aerial System for autonomous use in civilian airspace? We here analyse what is needed in order to provide verified reliable behaviour of an autonomous system, analyse what can be done as the state-of-the-art in automated verification, and propose a roadmap towards developing regulatory guidelines, including articulating challenges to researchers, to engineers, and to regulators. Case studies in seven distinct domains illustrate the article. This article is published as Fisher, Michael, Viviana Mascardi, Kristin Yvonne Rozier, Bernd-Holger Schlingloff, Michael Winikoff, and Neil Yorke-Smith. "Towards a Framework for Certification of Reliable Autonomous Systems." Autonomous Agents and Multi-Agent Systems 35, no. 1 (2021): 8. DOI: 10.1007/s10458-020-09487-2. Posted with permission.