
    Reasoning about Cyber Threat Actors

    abstract: Reasoning about the activities of cyber threat actors is critical to defending against cyber attacks. However, this task is difficult for a variety of reasons. In simple terms, it is difficult to determine who the attacker is, what the attacker’s goals are, and how the attack will be carried out. These three questions essentially entail understanding the attacker’s use of deception, the capabilities available, and the intent behind launching the attack. These three issues are highly inter-related. If an adversary can hide their intent, they can better deceive a defender. If an adversary’s capabilities are not well understood, then determining their goals becomes difficult, as the defender is uncertain whether the adversary has the necessary tools to accomplish them. However, the understanding of these aspects is also mutually supportive. If we have a clear picture of capabilities, intent can be better deciphered. If we understand intent and capabilities, a defender may be able to see through deception schemes. In this dissertation, I present three pieces of work that tackle these questions to obtain a better understanding of cyber threats. First, we introduce a new reasoning framework to address deception. We evaluate the framework by building a dataset from a DEFCON capture-the-flag exercise and using it to identify the person or group responsible for a cyber attack. We demonstrate that the framework not only handles cases of deception but also provides transparent decision making in identifying the threat actor. The second piece of work uses a cognitive learning model to determine intent, i.e., the goals of the threat actor on the target system. The third looks at understanding the capabilities of threat actors to target systems by identifying at-risk systems from hacker discussions on darkweb websites. To achieve this, we gather discussions from more than 300 darkweb websites relating to malicious hacking.
Dissertation/Thesis: Doctoral Dissertation, Computer Engineering, 201
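The third task's pipeline can be pictured with a deliberately simplified sketch. Everything below (the inventory, the posts, and the substring-matching rule) is invented for illustration and stands in for the dissertation's far richer collection and analysis of 300+ darkweb sites:

```python
# Hypothetical sketch: flag at-risk systems by matching software names
# mentioned in hacker discussions against a defender's inventory.
# All data and names here are illustrative, not from the dissertation.

inventory = {
    "web01": ["apache httpd 2.4", "openssl 1.0"],
    "db01": ["mysql 5.7"],
}

posts = [
    "New exploit dropped for Apache httpd 2.4 path traversal",
    "Anyone selling access to mysql 5.7 boxes?",
]

def at_risk_systems(inventory, posts):
    """Return each system whose installed software appears in a discussion."""
    lowered = [p.lower() for p in posts]
    flagged = {}
    for system, software in inventory.items():
        hits = [s for s in software if any(s in p for p in lowered)]
        if hits:
            flagged[system] = hits
    return flagged

print(at_risk_systems(inventory, posts))
```

A real pipeline would replace the substring match with entity extraction and vulnerability linking, but the core idea, i.e., joining attacker chatter to defender assets, is the same.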

    Cybersecurity: mapping the ethical terrain

    This edited collection examines the ethical trade-offs involved in cybersecurity: between security and privacy; individual rights and the good of a society; and between the types of burdens placed on particular groups in order to protect others. Foreword Governments and society are increasingly reliant on cyber systems. Yet the more reliant we are upon cyber systems, the more vulnerable we are to serious harm should these systems be attacked or used in an attack. This problem of reliance and vulnerability is driving a concern with securing cyberspace. For example, a ‘cybersecurity’ team now forms part of the US Secret Service. Its job is to respond to cyber-attacks in specific environments such as elevators in a building that hosts politically vulnerable individuals, for example, state representatives. Cybersecurity aims to protect cyber-infrastructure from cyber-attacks; the concern is the serious harm to resources and people that damage to cyber-infrastructure can cause. These threats might simply target information and communication systems: a distributed denial of service (DDoS) attack on a government website does not harm the website in any direct way, but prevents its normal use by stifling the ability of users to connect to the site. Alternatively, cyber-attacks might disrupt physical devices or resources, such as the Stuxnet virus, which caused the malfunction and destruction of Iranian nuclear centrifuges. Cyber-attacks might also enhance activities that are enabled through cyberspace, such as the use of online media by extremists to recruit members and promote radicalisation. Cyber-attacks are diverse: as a result, cybersecurity requires a comparable diversity of approaches.
Cyber-attacks can have powerful impacts on people’s lives, and so—in liberal democratic societies at least—governments have a duty to ensure cybersecurity in order to protect the inhabitants within their own jurisdiction and, arguably, the people of other nations. But, as recent events following the revelations of Edward Snowden have demonstrated, there is a risk that the governmental pursuit of cybersecurity might overstep the mark and subvert fundamental privacy rights. Popular comment on these episodes advocates transparency of government processes, yet given that cybersecurity risks represent major challenges to national security, it is unlikely that simple transparency will suffice. Managing the risks of cybersecurity involves trade-offs: between security and privacy; individual rights and the good of a society; and the types of burdens placed on particular groups in order to protect others. These trade-offs are often ethical trade-offs, involving questions of how we act, what values we should aim to promote, and what means of anticipating and responding to the risks are reasonably—and publicly—justifiable. This Occasional Paper (prepared for the National Security College) provides a brief conceptual analysis of cybersecurity, demonstrates the relevance of ethics to cybersecurity, and outlines various ways in which to approach ethical decision-making when responding to cyber-attacks.

    Perspectives for Cyber Strategists on Law for Cyberwar

    The proliferation of martial rhetoric in connection with the release of thousands of pages of sensitive government documents by the WikiLeaks organization underlines how easily words that have legal meanings can be indiscriminately applied to cyber events in ways that can confuse decision makers and strategists alike. The WikiLeaks phenomenon is but the latest in a series of recent cyber-related incidents––ranging from cyber crises in Estonia and Georgia to reports of the Stuxnet cyberworm allegedly infecting Iranian computers––that have contributed to a growing perception that “cyberwar” is inevitable, if not already underway. All of this generates a range of legal questions, with popular wisdom being that the law is inadequate or lacking entirely. Lt Gen Keith B. Alexander, the first commander of US Cyber Command, told Congress at his April 2010 confirmation hearings that there was a “mismatch between our technical capabilities to conduct operations and the governing laws and policies.” Likewise, Jeffrey Addicott, a highly respected cyber-law authority, asserts that “international laws associated with the use of force are woefully inadequate in terms of addressing the threat of cyberwarfare.” This article takes a somewhat different tack concerning the ability of the law of armed conflict (LOAC) to address cyber issues. Specifically, it argues that while there is certainly room for improvement in some areas, the basic tenets of LOAC are sufficient to address the most important issues of cyberwar. Among other things, this article contends that very often the real difficulty with respect to the law and cyberwar is not any lack of “law,” per se, but rather the complexities that arise in determining the necessary facts that must be applied to the law to render legal judgments.

    Machine-assisted Cyber Threat Analysis using Conceptual Knowledge Discovery

    Over the last years, computer networks have evolved into highly dynamic and interconnected environments, involving multiple heterogeneous devices and providing a myriad of services on top of them. This complex landscape has made it extremely difficult for security administrators to maintain an accurate picture of their systems and protect them effectively against cyber threats. In this paper, we describe our vision and scientific posture on how artificial intelligence techniques and a smart use of security knowledge may assist system administrators in better defending their networks. To that end, we put forward a research roadmap involving three complementary axes, namely, (I) the use of mechanisms based on formal concept analysis (FCA) for managing configuration vulnerabilities, (II) the exploitation of knowledge representation techniques for automated security reasoning, and (III) the design of a cyber threat intelligence mechanism as a conceptual knowledge discovery in databases (CKDD) process. Then, we describe a machine-assisted process for cyber threat analysis which provides a holistic perspective of how these three research axes are integrated together.
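The FCA machinery behind axis (I) can be sketched in a few lines: given a binary context of objects and attributes, FCA enumerates formal concepts, i.e., maximal (extent, intent) pairs. The host/attribute context below is hypothetical, and the enumeration is naive and exponential; real FCA tools use far more efficient lattice-construction algorithms:

```python
from itertools import combinations

# A minimal sketch of formal concept analysis over a toy security context:
# hosts (objects) vs. configuration weaknesses (attributes). Data is invented.
objects = {
    "host1": {"ssh_open", "weak_cipher"},
    "host2": {"ssh_open"},
    "host3": {"weak_cipher", "default_creds"},
}

ALL_ATTRS = {a for attrs in objects.values() for a in attrs}

def intent_of(objs):
    """Attributes shared by every object in objs (all attributes if empty)."""
    sets = [objects[o] for o in objs]
    return set.intersection(*sets) if sets else set(ALL_ATTRS)

def extent_of(attrs):
    """Objects possessing every attribute in attrs."""
    return {o for o, a in objects.items() if attrs <= a}

def formal_concepts():
    """Enumerate all formal concepts as (frozen extent, frozen intent) pairs."""
    concepts = set()
    names = list(objects)
    for r in range(len(names) + 1):
        for objs in combinations(names, r):
            intent = intent_of(objs)       # close the object set upward
            extent = extent_of(intent)     # then back down: a valid concept
            concepts.add((frozenset(extent), frozenset(intent)))
    return concepts

for extent, intent in sorted(formal_concepts(), key=lambda c: len(c[0])):
    print(sorted(extent), "share", sorted(intent))
```

For this context the lattice has six concepts; for instance, ({host1, host2}, {ssh_open}) groups all hosts exposing SSH, which is the kind of grouping a configuration-vulnerability analysis would surface.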

    Artificial intelligence and UK national security: Policy considerations

    RUSI was commissioned by GCHQ to conduct an independent research study into the use of artificial intelligence (AI) for national security purposes. The aim of this project is to establish an independent evidence base to inform future policy development regarding national security uses of AI. The findings are based on in-depth consultation with stakeholders from across the UK national security community, law enforcement agencies, private sector companies, academic and legal experts, and civil society representatives. This was complemented by a targeted review of existing literature on the topic of AI and national security. The research has found that AI offers numerous opportunities for the UK national security community to improve the efficiency and effectiveness of existing processes. AI methods can rapidly derive insights from large, disparate datasets and identify connections that would otherwise go unnoticed by human operators. However, in the context of national security and the powers given to UK intelligence agencies, use of AI could give rise to additional privacy and human rights considerations which would need to be assessed within the existing legal and regulatory framework. For this reason, enhanced policy and guidance are needed to ensure the privacy and human rights implications of national security uses of AI are reviewed on an ongoing basis as new analysis methods are applied to data.