
    The Impact of Artificial Intelligence and Machine Learning on Organizations Cybersecurity

    As internet technologies proliferate in volume and complexity, the ever-evolving landscape of malicious cyberattacks presents unprecedented security risks in cyberspace. Cybersecurity challenges have been further exacerbated by the continuous growth in the prevalence and sophistication of cyberattacks. These threats have the capacity to disrupt business operations, erase critical data, and inflict reputational damage, constituting an existential threat to businesses, critical services, and infrastructure. The escalating threat is further compounded by the malicious use of artificial intelligence (AI) and machine learning (ML), which have increasingly become tools in the cybercriminal arsenal. In this dynamic landscape, the emergence of offensive AI introduces a new dimension to cyber threats. The current wave of attacks surpasses human capabilities, incorporating AI to outsmart and outpace traditional, rule-based detection tools. The advent of offensive AI allows cybercriminals to execute targeted attacks with unprecedented speed and scale, operating stealthily and evading conventional security measures. As offensive AI looms on the horizon, organizations face the imperative to adopt new, more sophisticated defenses. Human-driven responses to cyberattacks are struggling to match the speed and complexity of automated threats. In anticipation of this growing challenge, the implementation of advanced technologies, including AI-driven defenses, becomes crucial. This dissertation explored the profound impact of both AI and ML on cybersecurity in the United States. Through a qualitative, multiple case study combining a comprehensive literature review with insights from cybersecurity experts, the research identified key trends, challenges, and opportunities for utilizing AI in cybersecurity.

    Multi-aspect rule-based AI: Methods, taxonomy, challenges and directions towards automation, intelligence and transparent cybersecurity modeling for critical infrastructures

    Critical infrastructure (CI) typically refers to the essential physical and virtual systems, assets, and services that are vital for the functioning and well-being of a society, economy, or nation. However, the rapid proliferation and dynamism of today's cyber threats in digital environments may disrupt CI functionalities, which would have a debilitating impact on public safety, economic stability, and national security. This has led to much interest in effective cybersecurity solutions regarding automation and intelligent decision-making, where AI-based modeling is potentially significant. In this paper, we take into account "Rule-based AI" rather than other black-box solutions, since model transparency, i.e., human interpretation, explainability, and trustworthiness in decision-making, is an essential factor, particularly in cybersecurity application areas. This article provides an in-depth study on multi-aspect rule-based AI modeling considering human-interpretable decisions as well as security automation and intelligence for CI. We also provide a taxonomy of rule generation methods, taking into account not only knowledge-driven approaches based on human expertise but also data-driven approaches, i.e., extracting insights or useful knowledge from data, and their hybridization. This understanding can help security analysts and professionals comprehend how systems work, identify potential threats and anomalies, and make better decisions in various real-world application areas. We also cover how these techniques can address diverse cybersecurity concerns such as threat detection, mitigation, prediction, and diagnosis for root-cause findings in different CI sectors, such as energy, defence, transport, health, water, and agriculture.
We conclude this paper with a list of identified issues and opportunities for future research, as well as their potential solution directions for how researchers and professionals might tackle future-generation cybersecurity modeling in this emerging area of study.
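    The contrast the abstract draws between knowledge-driven, data-driven, and hybrid rule generation can be illustrated with a minimal sketch. All field names, thresholds, and events below are hypothetical, not taken from the paper; the point is only that both kinds of rules stay human-readable.

    ```python
    # Hypothetical sketch of knowledge-driven vs. data-driven rule generation
    # for an intrusion-detection setting. Field names and values are invented.

    # Knowledge-driven rule: encoded directly from analyst expertise.
    def expert_rule(event):
        """Flag logins from a blocklisted country outside business hours."""
        return event["country"] in {"XX"} and not (9 <= event["hour"] < 17)

    # Data-driven rule: derive a simple threshold from labelled history.
    def learn_threshold(history):
        """Smallest failed-login count never seen in benign events."""
        benign = [e["failed_logins"] for e in history if not e["malicious"]]
        return max(benign) + 1

    history = [
        {"failed_logins": 1, "malicious": False},
        {"failed_logins": 2, "malicious": False},
        {"failed_logins": 9, "malicious": True},
    ]
    threshold = learn_threshold(history)  # -> 3 on this toy history

    def hybrid_alert(event):
        # Hybridization: fire if either the expert rule or the learned rule matches.
        return expert_rule(event) or event["failed_logins"] >= threshold

    print(hybrid_alert({"country": "XX", "hour": 3, "failed_logins": 0}))   # True
    print(hybrid_alert({"country": "GB", "hour": 10, "failed_logins": 8}))  # True
    print(hybrid_alert({"country": "GB", "hour": 10, "failed_logins": 1}))  # False
    ```

    Because each fired rule can be printed back to the analyst ("blocklisted country at 03:00" or "failed_logins >= 3"), decisions remain interpretable in the sense the paper emphasizes.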

    Autonomic computing architecture for SCADA cyber security

    Cognitive computing relates to intelligent computing platforms that are based on the disciplines of artificial intelligence, machine learning, and other innovative technologies. These technologies can be used to design systems that mimic the human brain to learn about their environment and can autonomously predict an impending anomalous situation. IBM first used the term 'Autonomic Computing' in 2001 to combat the looming complexity crisis (Ganek and Corbi, 2003). The concept was inspired by the human biological autonomic system. An autonomic system is self-healing, self-regulating, self-optimising and self-protecting (Ganek and Corbi, 2003). Therefore, the system should be able to protect itself against both malicious attacks and unintended mistakes by the operator.
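    The self-protecting behaviour described above can be pictured as a small monitor-analyze-plan-execute loop. The sketch below is illustrative only; the class, thresholds, and "pump" sources are invented, and a real SCADA controller would be far more involved.

    ```python
    # Minimal sketch of an autonomic control loop for a SCADA-like setting:
    # monitor readings, detect an out-of-range value, and self-protect by
    # isolating the offending source. All names and limits are hypothetical.

    class AutonomicGuard:
        def __init__(self, limit):
            self.limit = limit      # knowledge: expected operating bound
            self.isolated = set()   # self-protecting state

        def monitor(self, source, value):
            # Monitor feeds each reading through analyze -> plan -> execute.
            if self.analyze(value):
                self.execute(self.plan(source))

        def analyze(self, value):
            # Anomaly if the reading leaves the expected operating range.
            return abs(value) > self.limit

        def plan(self, source):
            return ("isolate", source)

        def execute(self, action):
            kind, source = action
            if kind == "isolate":
                self.isolated.add(source)  # quarantine the misbehaving source

    guard = AutonomicGuard(limit=100)
    guard.monitor("pump-1", 42)    # normal reading, no action taken
    guard.monitor("pump-2", 512)   # anomalous reading triggers isolation
    print(guard.isolated)          # {'pump-2'}
    ```

    The loop reacts without operator intervention, which is the property that lets an autonomic system counter both attacks and operator mistakes at machine speed.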

    Global Risks 2015, 10th Edition.

    The 2015 edition of the Global Risks report completes a decade of highlighting the most significant long-term risks worldwide, drawing on the perspectives of experts and global decision-makers. Over that time, analysis has moved from risk identification to thinking through risk interconnections and the potentially cascading effects that result. Taking this effort one step further, this year's report underscores potential causes as well as solutions to global risks. Not only do we set out a view on 28 global risks in the report's traditional categories (economic, environmental, societal, geopolitical and technological), but we also consider the drivers of those risks in the form of 13 trends. In addition, we have selected initiatives for addressing significant challenges, which we hope will inspire collaboration among business, government and civil society communities.

    Artificial intelligence and UK national security: Policy considerations

    RUSI was commissioned by GCHQ to conduct an independent research study into the use of artificial intelligence (AI) for national security purposes. The aim of this project is to establish an independent evidence base to inform future policy development regarding national security uses of AI. The findings are based on in-depth consultation with stakeholders from across the UK national security community, law enforcement agencies, private sector companies, academic and legal experts, and civil society representatives. This was complemented by a targeted review of existing literature on the topic of AI and national security. The research has found that AI offers numerous opportunities for the UK national security community to improve the efficiency and effectiveness of existing processes. AI methods can rapidly derive insights from large, disparate datasets and identify connections that would otherwise go unnoticed by human operators. However, in the context of national security and the powers given to UK intelligence agencies, use of AI could give rise to additional privacy and human rights considerations which would need to be assessed within the existing legal and regulatory framework. For this reason, enhanced policy and guidance are needed to ensure the privacy and human rights implications of national security uses of AI are reviewed on an ongoing basis as new analysis methods are applied to data.

    Governing autonomous vehicles: emerging responses for safety, liability, privacy, cybersecurity, and industry risks

    The benefits of autonomous vehicles (AVs) are widely acknowledged, but there are concerns about the extent of these benefits and about AV risks and unintended consequences. In this article, we first examine AVs and the different categories of technological risks associated with them. We then explore strategies that can be adopted to address these risks, and examine emerging responses by governments for addressing AV risks. Our analyses reveal that, thus far, governments have in most instances avoided stringent measures in order to promote AV development, and the majority of responses are non-binding, focusing on creating councils or working groups to better explore AV implications. The US has been active in introducing legislation to address issues related to privacy and cybersecurity. The UK and Germany, in particular, have enacted laws to address liability issues; other countries mostly acknowledge these issues but have yet to implement specific strategies. To address privacy and cybersecurity risks, strategies ranging from the introduction or amendment of non-AV-specific legislation to the creation of working groups have been adopted. Much less attention has been paid to issues such as environmental and employment risks, although a few governments have begun programmes to retrain workers who might be negatively affected. Comment: Transport Reviews, 201

    The future of Cybersecurity in Italy: Strategic focus area

    This volume has been created as a continuation of the previous one, with the aim of outlining a set of focus areas and actions that the Italian national research community considers essential. The book touches many aspects of cybersecurity, ranging from the definition of the infrastructure and controls needed to organize cyberdefence to the actions and technologies to be developed to be better protected, and from the identification of the main technologies to be defended to the proposal of a set of horizontal actions for training, awareness raising, and risk management.