The Future of Cybercrime: AI and Emerging Technologies Are Creating a Cybercrime Tsunami
This paper reviews the impact of AI and emerging technologies on the future of cybercrime and the strategies needed to combat it effectively. Society faces a pressing challenge as cybercrime proliferates through AI and emerging technologies, while law enforcement and regulators struggle to keep up. Our primary challenge is raising awareness, as cybercrime operates within a distinct criminal ecosystem. We explore the hijacking of emerging technologies by criminals (CrimeTech) and their use in illicit activities, along with the tools and processes (InfoSec) that protect against future cybercrime. We also explore the role of AI and emerging technologies (DeepTech) in supporting law enforcement, regulation, and legal services (LawTech).
On Controllability of Artificial Intelligence
The invention of artificial general intelligence is predicted to cause a shift in the trajectory of human civilization. In order to reap the benefits and avoid the pitfalls of such a powerful technology, it is important to be able to control it. However, the possibility of controlling artificial general intelligence, and its more advanced version, superintelligence, has not been formally established. In this paper, we present arguments, as well as supporting evidence from multiple domains, indicating that advanced AI cannot be fully controlled. The consequences of the uncontrollability of AI are discussed with respect to the future of humanity, research on AI, and AI safety and security. This paper can serve as a comprehensive reference on the topic of uncontrollability.
Artificial Superintelligence: Coordination & Strategy
Attention in the AI safety community has increasingly come to include strategic considerations of coordination between relevant actors in the fields of AI and AI safety, in addition to the steadily growing work on the technical considerations of building safe AI systems. This shift has several reasons: multiplier effects, pragmatism, and urgency. Given the benefits of coordination between those working towards safe superintelligence, this book surveys promising research in this emerging field of AI safety. On a meta-level, the hope is that this book can serve as a map to inform those working on AI coordination about other promising efforts. While this book focuses on AI safety coordination, coordination is also important for most other known existential risks (e.g., biotechnology risks) and for future, human-made existential risks. Thus, while most coordination strategies in this book are specific to superintelligence, we hope that some insights yield “collateral benefits” for the reduction of other existential risks, by creating an overall civilizational framework that increases robustness, resiliency, and antifragility.
Manhattan_Project.exe: A Nuclear Option for the Digital Age
This article explores the possible implications and consequences arising from the use of an artificial intelligence construct as a weapon of mass destruction. The digital age has ushered in many technological advances, as well as certain dangers. Chief among these pitfalls is the lack of reliable security found in critical information technology systems. These security gaps can give cybercriminals unauthorized access to highly sensitive computer networks that control the very infrastructure of the United States. Cyberattacks are rising in both frequency and severity, and the response by the U.S. has been ineffective. A cyber-weapon of mass destruction (CWMD) implementing an artificial intelligence construct would operate on different fundamental principles than a kinetic WMD, but it would be no less effective in eliminating threats to the security of domestic information networks. This article will first examine the current state of artificial intelligence as it exists in both the private sector and in military and intelligence applications. Second, this article will discuss the distinctions between kinetic war and cyberwar and the deployment of WMDs; the capabilities and applications of a possible CWMD will be discussed at this point as well. Third, issues concerning international law will be addressed as applicable to artificial intelligence, automated warfare, and WMDs generally. Finally, this article will examine some dangers associated with the use of an artificial intelligence construct capable of learning, as well as the necessity of such a program.
Journey of Artificial Intelligence Frontier: A Comprehensive Overview
The field of Artificial Intelligence (AI) is a transformational force with limitless promise in an age of fast technological growth. This paper sets out on a thorough tour through the frontiers of AI, providing a detailed understanding of its complex environment. It begins with historical context, tracing the development of AI from its beginnings through its growth. Along this journey, fundamental ideas are explored, including Machine Learning, Neural Networks, and Natural Language Processing. Ethical issues and societal repercussions take center stage, emphasising the significance of responsible AI application. The voyage comes to a close by looking ahead to AI's potential for human-AI collaboration, ground-breaking discoveries, and the difficult obstacles that lie ahead. By thoroughly navigating this terrain, the paper provides a well-informed view of AI's past, present, and the unexplored regions it promises to explore.
Cyber threat: its origins and consequence and the use of qualitative and quantitative methods in cyber risk assessment
Purpose - Consumers increasingly rely on organisations for online services and data storage while these same institutions seek to digitise the information assets they hold to create economic value. Cybersecurity failures arising from malicious or accidental actions can lead to significant reputational and financial loss which organisations must guard against. Despite having some critical weaknesses, qualitative cybersecurity risk analysis is widely used in developing cybersecurity plans. This research explores these weaknesses, considers how quantitative methods might address the constraints, and seeks the insights and recommendations of leading cybersecurity practitioners on the use of qualitative and quantitative cyber risk assessment methods.
Design/methodology/approach - The study is based upon a literature review and thematic analysis of in-depth qualitative interviews with 16 senior cybersecurity practitioners representing financial services and advisory companies from across the world.
Findings - While most organisations continue to rely on qualitative methods for cybersecurity risk assessment, some are also actively using quantitative approaches to enhance their cybersecurity planning efforts. The recommendation of this paper is that organisations should adopt both a qualitative and quantitative cyber risk assessment approach.
Originality/value - This work provides the first insight into how senior practitioners are using and combining qualitative and quantitative cybersecurity risk assessment, and highlights the need for in-depth comparisons of these two different approaches.
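The abstract above contrasts qualitative and quantitative cyber risk assessment without detailing either. As an illustration only (not drawn from the paper itself), one common quantitative approach is a Monte Carlo estimate of annualized loss, modelling event frequency as Poisson and per-event severity as lognormal; every parameter value below is hypothetical.

```python
import math
import random
import statistics

def sample_poisson(lam, rng):
    # Knuth's algorithm: count uniform draws until their product falls below e^-lam
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_ale(lam, loss_mu, loss_sigma, trials=20_000, seed=1):
    """Monte Carlo estimate of annualized loss.

    lam        -- expected number of loss events per year (Poisson rate)
    loss_mu    -- log-scale mean of per-event loss (lognormal)
    loss_sigma -- log-scale std dev of per-event loss (lognormal)
    """
    rng = random.Random(seed)
    annual_losses = []
    for _ in range(trials):
        n_events = sample_poisson(lam, rng)
        annual_losses.append(
            sum(rng.lognormvariate(loss_mu, loss_sigma) for _ in range(n_events))
        )
    annual_losses.sort()
    return {
        "mean": statistics.mean(annual_losses),          # expected annual loss
        "p95": annual_losses[int(0.95 * trials)],        # 95th-percentile annual loss
    }

# Hypothetical parameters: ~2 incidents/year, median per-event loss ~$50k
result = simulate_ale(lam=2.0, loss_mu=math.log(50_000), loss_sigma=1.0)
```

Outputs like these (an expected annual loss plus a tail percentile) are what quantitative methods add over a qualitative high/medium/low rating, and are the kind of figures the paper's recommended combined approach would place alongside qualitative judgements.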