
    Algorithmic Diversity for Software Security

    Software diversity protects against modern exploits such as code-reuse attacks. When an attacker designs a code-reuse attack against an example executable, the attack relies on replicating the target environment. With software diversity, the attacker cannot reliably replicate the target, a security benefit that can be applied to massive-scale software distribution. When software is distributed to large communities, an invested attacker may analyse samples to improve the chances of a successful attack (M. Franz). We present a general NOP-insertion algorithm which can be expanded and customized for security, performance, or other costs. We demonstrate an improvement in security such that a code-reuse attack based on any one variant has minimal chance of success on another, and we analyse the costs of this method. Alternatively, the variants may be customized to meet performance or memory overhead constraints. Deterministic diversification allows the flexibility to balance these needs in a way that a random online method does not.
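
    As a concrete illustration of the idea, the Python sketch below performs deterministic, seeded NOP insertion over a toy x86-64 instruction stream. It is a minimal sketch of the general technique, not the authors' algorithm; the instruction bytes, seeds, and insertion probability are hypothetical values chosen for the example.

    import random

    NOP = b"\x90"  # single-byte x86 NOP

    def diversify(instructions, variant_seed, p_insert=0.25):
        """Insert NOPs deterministically, keyed by a per-variant seed.

        The same (instructions, variant_seed) pair always yields the same
        layout, so any variant can be regenerated offline; different seeds
        shift code addresses so gadget offsets differ between variants.
        """
        rng = random.Random(variant_seed)   # deterministic per-variant stream
        out = []
        for insn in instructions:
            if rng.random() < p_insert:     # tune p_insert against size/speed cost
                out.append(NOP)
            out.append(insn)
        return out

    # Two variants of the same code: identical semantics, different layout.
    # push rbp; mov rbp, rsp; pop rbp; ret
    code = [b"\x55", b"\x48\x89\xe5", b"\x5d", b"\xc3"]
    v1 = diversify(code, variant_seed=1)
    v2 = diversify(code, variant_seed=2)
    assert b"".join(v1).replace(NOP, b"") == b"".join(v2).replace(NOP, b"")

    Because each layout is a pure function of its seed, a distributor can trade insertion density against performance or memory overhead per variant, which is the flexibility the abstract contrasts with a random online method.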

    CEPS Task Force on Artificial Intelligence and Cybersecurity Technology, Governance and Policy Challenges Task Force Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report 22 January 2020

    The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market, technical, ethical and governance challenges posed by the intersection of AI and cybersecurity, focusing on both AI for cybersecurity and cybersecurity for AI. The Task Force is multi-stakeholder by design and composed of academics, industry players from various sectors, policymakers and civil society. The Task Force is currently discussing issues such as the state and evolution of the application of AI in cybersecurity and cybersecurity for AI; the debate on the role that AI could play in the dynamics between cyber attackers and defenders; the increasing need for sharing information on threats and how to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and possible EU policy measures to ease the adoption of AI in cybersecurity in Europe. As part of these activities, this report aims to assess the High-Level Expert Group (HLEG) on AI Ethics Guidelines for Trustworthy AI, presented on April 8, 2019. In particular, this report analyses and makes suggestions on the Trustworthy AI Assessment List (Pilot version), a non-exhaustive list aimed at helping the public and the private sector operationalise Trustworthy AI. The list is composed of 131 items intended to guide AI designers and developers throughout the process of design, development, and deployment of AI, although it is not intended as guidance for ensuring compliance with applicable laws. The list is in its piloting phase and is currently undergoing a revision that will be finalised in early 2020. This report seeks to contribute to that revision by addressing in particular the interplay between AI and cybersecurity. The evaluation has been made according to specific criteria: whether and how the items of the Assessment List refer to existing legislation (e.g. GDPR, EU Charter of Fundamental Rights); whether they refer to moral principles (but not laws); whether they consider that AI attacks are fundamentally different from traditional cyberattacks; whether they are compatible with different risk levels; whether they are flexible enough in terms of clear and easy measurement and implementation by AI developers and SMEs; and, overall, whether they are likely to create obstacles for the industry. The HLEG is a diverse group, with more than 50 members representing different stakeholders, such as think tanks, academia, EU Agencies, civil society, and industry, who were given the difficult task of producing a simple checklist for a complex issue. The public engagement exercise looks successful overall, in that more than 450 stakeholders have signed up and are contributing to the process. The next sections of this report present the items listed by the HLEG, followed by the analysis and suggestions raised by the Task Force (see the list of Task Force members in Annex 1).

    Artificial intelligence and UK national security: Policy considerations

    RUSI was commissioned by GCHQ to conduct an independent research study into the use of artificial intelligence (AI) for national security purposes. The aim of this project is to establish an independent evidence base to inform future policy development regarding national security uses of AI. The findings are based on in-depth consultation with stakeholders from across the UK national security community, law enforcement agencies, private sector companies, academic and legal experts, and civil society representatives, complemented by a targeted review of the existing literature on AI and national security. The research found that AI offers numerous opportunities for the UK national security community to improve the efficiency and effectiveness of existing processes. AI methods can rapidly derive insights from large, disparate datasets and identify connections that would otherwise go unnoticed by human operators. However, in the context of national security and the powers given to UK intelligence agencies, the use of AI could give rise to additional privacy and human rights considerations which would need to be assessed within the existing legal and regulatory framework. For this reason, enhanced policy and guidance are needed to ensure that the privacy and human rights implications of national security uses of AI are reviewed on an ongoing basis as new analysis methods are applied to data.

    Mechatronics & the cloud

    Conventionally, the engineering design process has assumed that the design team is able to exercise control over all elements of the design, either directly or, in the case of sub-systems, indirectly through their specifications. The introduction of Cyber-Physical Systems (CPS) and the Internet of Things (IoT) means that this is no longer the case, particularly as the actual system configuration may well be dynamically reconfigured in real time according to user (and vendor) context and need. Additionally, the integration of the Internet of Things with elements of Big Data means that information becomes a commodity to be autonomously traded by and between systems, again according to context and need, all of which has implications for the privacy of system users. The paper therefore considers the relationship between mechatronics and cloud-based technologies in relation to issues such as the distribution of functionality and user privacy.

    Transparent, explainable, and accountable AI for robotics

    To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.

    The future of computing beyond Moore's Law

    Moore's Law is a techno-economic model that has enabled the information technology industry to double the performance and functionality of digital electronics roughly every 2 years within a fixed cost, power and area. Advances in silicon lithography have enabled this exponential miniaturization of electronics, but, as transistors reach atomic scale and fabrication costs continue to rise, the classical technological driver that has underpinned Moore's Law for 50 years is failing and is anticipated to flatten by 2025. This article provides an updated view of what a post-exascale system will look like and the challenges ahead, based on our most recent understanding of technology roadmaps. It also discusses the tapering of historical improvements and how it affects the options available for continued scaling of successors to the first exascale machine. Lastly, this article covers the many different opportunities and strategies available to continue computing performance improvements in the absence of historical technology drivers. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.
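
    The doubling model the article refers to reduces to a one-line formula; the Python sketch below, using illustrative numbers not taken from the article, shows how a fixed two-year doubling period compounds, and how quickly the gains shrink if the period stretches as scaling flattens.

    def moores_law_projection(base, years, doubling_period=2.0):
        """Project relative capability after `years` under a fixed doubling period."""
        return base * 2 ** (years / doubling_period)

    print(moores_law_projection(1.0, 10))                        # 32x over a decade
    print(moores_law_projection(1.0, 10, doubling_period=20.0))  # ~1.4x if doubling slows tenfold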

    Technology Changes Everything: Inclusive Tech and Jobs for a Diverse Workforce: Pierce Memorial Foundation Report

    This document serves as the final report to the Pierce Foundation for funding to support the design and implementation of a 1.5-day forum entitled “Technology Changes Everything: A Forum on Inclusive Tech and Jobs for a Diverse Workforce”, held in NYC on October 26-27, 2017 at Baruch College. The conference was conceived to raise awareness across a number of distinct areas where technology is currently affecting employment outcomes for people with disabilities. The topics ranged from the straightforward, such as the critical need for attention to equitably integrating individuals with disabilities into the rapidly growing tech-sector workforce, to the much more nuanced and complex application of algorithmic screening and job-matching tools increasingly used in online job applications and selection processes. Other topics included equitable access to entrepreneurship opportunities, inclusive design in technology-based products and services, and the growing focus of the technology sector and tech-intensive industries on the affirmative recruitment and hiring of individuals with Autism.

    CryptoKnight: generating and modelling compiled cryptographic primitives

    Cryptovirological augmentations present an immediate, incomparable threat. Over the last decade, the substantial proliferation of crypto-ransomware has had widespread consequences for consumers and organisations alike. Established preventive measures perform well; however, the problem has not ceased. Reverse engineering potentially malicious software is a cumbersome task due to platform eccentricities and obfuscated transmutation mechanisms, hence requiring smarter, more efficient detection strategies. This manuscript presents a novel approach to the classification of cryptographic primitives in compiled binary executables using deep learning. The model blueprint, a Dynamic Convolutional Neural Network (DCNN), is fittingly configured to learn from variable-length control flow diagnostics output from a dynamic trace. To rival the size and variability of equivalent datasets, and to adequately train the model without risking adverse exposure, a methodology for the procedural generation of synthetic cryptographic binaries is defined, using core primitives from OpenSSL with multivariate obfuscation, to draw from a vastly scalable distribution. The library, CryptoKnight, rendered an algorithmic pool of AES, RC4, Blowfish, MD5 and RSA to synthesise combinable variants which were automatically fed into its core model. Converging at 96% accuracy, CryptoKnight successfully classified the sample pool with minimal loss and correctly identified the algorithm in a real-world crypto-ransomware application.
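
    The sketch below, in PyTorch, shows the general shape of such a classifier: variable-length trace features mapped to one of the five primitive labels. The feature encoding, layer sizes, and pooling choice (global max pooling rather than the dynamic pooling a DCNN uses) are assumptions for illustration, not CryptoKnight's actual blueprint.

    import torch
    import torch.nn as nn

    CLASSES = ["AES", "RC4", "Blowfish", "MD5", "RSA"]

    class TraceClassifier(nn.Module):
        def __init__(self, n_features=32, n_classes=len(CLASSES)):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(n_features, 64, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.Conv1d(64, 128, kernel_size=5, padding=2),
                nn.ReLU(),
            )
            self.fc = nn.Linear(128, n_classes)

        def forward(self, x):                # x: (batch, n_features, seq_len)
            h = self.conv(x)
            h = h.max(dim=2).values          # global max pool absorbs variable length
            return self.fc(h)

    # One synthetic trace of 500 steps with 32 features per step:
    model = TraceClassifier()
    logits = model(torch.randn(1, 32, 500))
    print(CLASSES[logits.argmax(dim=1).item()])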