
    Artificial intelligence and UK national security: Policy considerations

    RUSI was commissioned by GCHQ to conduct an independent research study into the use of artificial intelligence (AI) for national security purposes. The aim of this project is to establish an independent evidence base to inform future policy development regarding national security uses of AI. The findings are based on in-depth consultation with stakeholders from across the UK national security community, law enforcement agencies, private sector companies, academic and legal experts, and civil society representatives. This was complemented by a targeted review of existing literature on the topic of AI and national security. The research has found that AI offers numerous opportunities for the UK national security community to improve the efficiency and effectiveness of existing processes. AI methods can rapidly derive insights from large, disparate datasets and identify connections that would otherwise go unnoticed by human operators. However, in the context of national security and the powers given to UK intelligence agencies, use of AI could give rise to additional privacy and human rights considerations which would need to be assessed within the existing legal and regulatory framework. For this reason, enhanced policy and guidance are needed to ensure the privacy and human rights implications of national security uses of AI are reviewed on an ongoing basis as new analysis methods are applied to data.

    [Subject benchmark statement]: computing


    Electronic health record standards

    Objectives: This paper seeks to provide an overview of the initiatives that are proceeding internationally to develop standards for the exchange of electronic health record (EHR) information between EHR systems. Methods: The paper reviews the clinical and ethico-legal requirements and research background on the representation and communication of EHR data, which primarily originates from Europe through a series of EU-funded Health Telematics projects over the past thirteen years. The major concepts that underpin the information models and knowledge models are summarised. These provide the requirements and the best evidential basis from which EHR communications standards should be developed. Results: The main focus of EHR communications standardisation is presently occurring at a European level, through the Committee for European Normalisation (CEN). The major constructs of the CEN 13606 model are outlined. Complementary activity is taking place in ISO and in HL7, and some of these efforts are also summarised. Conclusions: There is a strong prospect that a generic EHR interoperability standard can be agreed at a European (and hopefully international) level. Parts of the challenge of EHR interoperability cannot yet be standardised, because good solutions to the preservation of clinical meaning across heterogeneous systems remain to be explored. Further research and empirical projects are therefore also needed.

    Human-agent collectives

    We live in a world where a host of computer systems, distributed throughout our physical and information environments, are increasingly implicated in our everyday actions. Computer technologies impact all aspects of our lives and our relationship with the digital has fundamentally altered as computers have moved out of the workplace and away from the desktop. Networked computers, tablets, phones and personal devices are now commonplace, as are an increasingly diverse set of digital devices built into the world around us. Data and information are generated at unprecedented speeds and volumes from an increasingly diverse range of sources, and are then combined in unforeseen ways, limited only by human imagination. People’s activities and collaborations are becoming ever more dependent upon and intertwined with this ubiquitous information substrate. As these trends continue apace, it is becoming apparent that many endeavours involve the symbiotic interleaving of humans and computers. Moreover, the emergence of these close-knit partnerships is inducing profound change. Rather than issuing instructions to passive machines that wait until they are asked before doing anything, we will work in tandem with highly inter-connected computational components that act autonomously and intelligently (aka agents). As a consequence, greater attention needs to be given to the balance of control between people and machines. In many situations, humans will be in charge and agents will predominantly act in a supporting role. In other cases, however, the agents will be in control and humans will play the supporting role. We term this emerging class of systems human-agent collectives (HACs) to reflect the close partnership and the flexible social interactions between the humans and the computers. As well as exhibiting increased autonomy, such systems will be inherently open and social. This means the participants will need to continually and flexibly establish and manage a range of social relationships. Thus, depending on the task at hand, different constellations of people, resources, and information will need to come together, operate in a coordinated fashion, and then disband. The openness and presence of many distinct stakeholders means participation will be motivated by a broad range of incentives rather than diktat. This article outlines the key research challenges involved in developing a comprehensive understanding of HACs. To illuminate this agenda, a nascent application in the domain of disaster response is presented.

    A Case for Machine Ethics in Modeling Human-Level Intelligent Agents

    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, judgment, and decision-making. To date, different frameworks on how to arrive at these agents have been put forward. However, there seems to be no hard consensus as to which framework would likely yield a positive result. With the body of work that they have contributed to the study of moral agency, philosophers may contribute to the growing literature on artificial moral agency. While doing so, they could also think about how the said concept could affect other important philosophical concepts.

    Platform Advocacy and the Threat to Deliberative Democracy

    Businesses have long tried to influence political outcomes, but today, there is a new and potent form of corporate political power—Platform Advocacy. Internet-based platforms, such as Facebook, Google, and Uber, mobilize their user bases through direct solicitation of support and the more troubling exploitation of irrational behavior. Platform Advocacy helps platforms push policy agendas that create favorable legal environments for themselves, thereby strengthening their own dominance in the marketplace. This new form of advocacy will have radical effects on deliberative democracy. In the age of constant digital noise and uncertainty, it is more important than ever to detect and analyze new forms of political power. This Article will contribute to our understanding of one such new form and provide a way forward to ensure the exceptional power of platforms does not improperly influence consumers and, by extension, lawmakers.