Explanation and trust: what to tell the user in security and AI?
There is a common problem in artificial intelligence (AI) and information security. In AI, an expert system needs to be able to justify and explain a decision to the user. In information security, experts need to be able to explain to the public why a system is secure. In both cases, the goal of explanation is to acquire or maintain the users' trust. In this paper, we investigate the relation between explanation and trust in the context of computing science. The analysis draws on a literature study and concept analysis, using elements from system theory as well as actor-network theory. We apply the conceptual framework to both AI and information security, and show the benefit of the framework for both fields by means of examples. The main focus is on expert systems (AI) and electronic voting systems (security). Finally, we discuss the consequences of our analysis for ethics in terms of (un)informed consent and dissent, and the associated division of responsibilities.
Artificial intelligence and UK national security: Policy considerations
RUSI was commissioned by GCHQ to conduct an independent research study into the use of artificial intelligence (AI) for national security purposes. The aim of this project is to establish an independent evidence base to inform future policy development regarding national security uses of AI. The findings are based on in-depth consultation with stakeholders from across the UK national security community, law enforcement agencies, private sector companies, academic and legal experts, and civil society representatives. This was complemented by a targeted review of existing literature on the topic of AI and national security.
The research has found that AI offers numerous opportunities for the UK national security community to improve the efficiency and effectiveness of existing processes. AI methods can rapidly derive insights from large, disparate datasets and identify connections that would otherwise go unnoticed by human operators. However, in the context of national security and the powers given to UK intelligence agencies, the use of AI could give rise to additional privacy and human rights considerations that would need to be assessed within the existing legal and regulatory framework. For this reason, enhanced policy and guidance are needed to ensure that the privacy and human rights implications of national security uses of AI are reviewed on an ongoing basis as new analysis methods are applied to data.
Can Artificial Intelligence Alleviate Resource Scarcity?
During summer 2017, I explored the implications of the potential application of artificial intelligence (AI) to resource management at the Centre for the Study of Existential Risk (CSER) at the University of Cambridge in the United Kingdom. Alongside my mentor, Dr. Simon Beard, I sought to determine the most noteworthy risks and benefits associated with developing AI that could offer agricultural guidance and that could someday offer insight into more efficient, effective, and equitable resource distribution. My research, funded by a Summer Undergraduate Research Fellowship (SURF) grant, involved discussing AI-related issues in the context of resource scarcity with academics and experts in the fields of AI, climate science, data analytics, economics, ethics, and robotics. I found that while AI could present a solution to the problem of scarcity by harnessing data and algorithms to increase agricultural yield, the technology must also be considered in the context of risks, including bias and a lack of trustworthiness. If the positive potential and risks associated with AI for resource management are thoughtfully considered throughout development, the technology could improve food security and ultimately contribute to a better future.
