Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
Analyzing the Unanalyzable: an Application to Android Apps
In general, software is unreliable. Its behavior can deviate from users’ expectations because of bugs, vulnerabilities, or even malicious code. Manually vetting software is a challenging, tedious, and highly costly task that does not scale. To alleviate excessive costs and analysts’ burdens, automated static analysis techniques have been proposed by both the research and practitioner communities, making static analysis a central topic in software engineering. In the meantime, mobile apps have grown considerably in importance. Today, most humans carry software in their pockets, with the Android operating system leading the market. Millions of apps have been offered to the public so far, targeting a wide range of activities such as games, health, banking, GPS, etc. Hence, Android apps collect and manipulate a considerable amount of sensitive information, which puts users’ security and privacy at risk. Consequently, it is paramount to ensure that apps distributed through public channels (e.g., Google Play) are free from malicious code. Over the last decade, the research and practitioner communities have therefore put much effort into devising new automated techniques to vet Android apps against malicious activities. Analyzing Android apps is, however, challenging. On the one hand, the Android framework provides constructs that can be used to evade dynamic analysis by triggering malicious code only under certain circumstances, e.g., if the device is not an emulator and is currently connected to power. Hence, dynamic analyses can easily be fooled by malicious developers who make some code fragments difficult to reach. On the other hand, static analyses are challenged by Android-specific constructs that limit the coverage of off-the-shelf static analyzers. The research community has already addressed some of these constructs, including inter-component communication and lifecycle methods.
However, other constructs, such as implicit calls (i.e., when the Android framework asynchronously triggers a method in the app code), make some app code fragments unreachable to static analyzers, even though these fragments are executed when the app runs. Altogether, many parts of apps’ code are unanalyzable: they are either not reachable by dynamic analyses or not covered by static analyzers. In this manuscript, we describe our contributions to the research effort from two angles: ① statically detecting malicious code that is difficult for dynamic analyzers to reach because it is triggered only under specific circumstances; and ② statically analyzing code not accessible to existing static analyzers, to improve the comprehensiveness of app analyses. More precisely, in Part I, we first present a replication study of a state-of-the-art static logic bomb detector to expose its limitations. We then introduce a novel hybrid approach for detecting suspicious hidden sensitive operations towards triaging logic bombs. We finally detail the construction of a dataset of Android apps automatically infected with logic bombs. In Part II, we present our work to improve the comprehensiveness of static analysis of Android apps. More specifically, we first show how we accounted for atypical inter-component communication in Android apps. Then, we present a novel approach to unify bytecode and native code in Android apps to account for the multi-language trend in app development. Finally, we present our work to resolve conditional implicit calls in Android apps to improve static and dynamic analyzers.
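The detection idea sketched above can be illustrated with a toy static check (this is not the thesis's tooling; the predicates, API names, and block representation below are invented for illustration): flag code blocks that are both guarded by an environment-probing condition and able to reach a sensitive API, the shape a logic bomb typically takes.

```python
# Toy intermediate representation: each guarded block records the predicate
# it tests and the sensitive APIs its body may call (all names are invented).
GUARDED_BLOCKS = [
    {"predicate": "Build.FINGERPRINT.contains('generic')",  # emulator probe
     "calls": ["SmsManager.sendTextMessage"]},
    {"predicate": "batteryLevel > 20",                      # benign guard
     "calls": ["Log.d"]},
]

# Heuristic trigger patterns an analyst might look for (assumed, simplified).
TRIGGER_HINTS = ("Build.FINGERPRINT", "isEmulator", "EXTRA_PLUGGED")
SENSITIVE_APIS = {"SmsManager.sendTextMessage", "Runtime.exec"}

def flag_suspicious(blocks):
    """Return blocks whose guard probes the environment AND whose body
    reaches a sensitive API -- a candidate hidden sensitive operation."""
    flagged = []
    for block in blocks:
        probes_env = any(h in block["predicate"] for h in TRIGGER_HINTS)
        sensitive = any(c in SENSITIVE_APIS for c in block["calls"])
        if probes_env and sensitive:
            flagged.append(block)
    return flagged

suspicious = flag_suspicious(GUARDED_BLOCKS)
print(len(suspicious))  # 1: only the emulator-guarded SMS block is flagged
```

A dynamic analysis running in an emulator would never execute the first block; a static pattern match like this sees it regardless of the trigger.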
Self-adapting security monitoring in Eucalyptus cloud environment
This paper discusses the importance of virtual machine (VM) scheduling strategies in cloud computing environments for handling the increasing number of tasks that accompany the adoption of virtualization and cloud computing technology. The paper evaluates legacy methods and specific VM scheduling algorithms for the Eucalyptus cloud environment and compares existing algorithms using QoS metrics. The paper also presents a self-adapting security monitoring system for cloud infrastructure that takes into account the specific monitoring requirements of each tenant. The system uses Master Adaptation Drivers to convert tenant requirements into configuration settings and an Adaptation Manager to coordinate the adaptation process. The framework ensures security, cost efficiency, and responsiveness to dynamic events in the cloud environment. The paper also identifies the need to improve the current security monitoring platform to support more types of monitoring devices and to cover the consequences of multi-tenant setups. Future work includes incorporating log collectors and aggregators and addressing the needs of a super-tenant in the security monitoring architecture. The equitable sharing of monitoring resources between tenants and the provider should be established with an adjustable threshold specified in the SLA. The experimental results show that Enhanced Round-Robin uses less energy than other methods, and that the Fusion Method outperforms other techniques by reducing the number of physical machines turned on, thereby increasing power efficiency.
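The energy result reported above rests on a general placement trade-off that a small sketch can make concrete (this is not the paper's algorithm; the capacities and the first-fit stand-in for the Fusion Method are assumptions): spreading VMs round-robin keeps many physical machines powered on, while consolidating them onto the fewest hosts that fit lets the rest power down.

```python
PM_CAPACITY = 4              # assumed: each physical machine hosts up to 4 VMs
VM_REQUESTS = list(range(6)) # six VM placement requests

def round_robin(vms, n_pms=4):
    """Spread VMs cyclically over n_pms hosts; count hosts powered on."""
    hosts = [[] for _ in range(n_pms)]
    for i, vm in enumerate(vms):
        hosts[i % n_pms].append(vm)
    return sum(1 for h in hosts if h)

def first_fit(vms, capacity=PM_CAPACITY):
    """Pack each VM onto the first host with spare capacity (consolidation)."""
    hosts = []
    for vm in vms:
        for h in hosts:
            if len(h) < capacity:
                h.append(vm)
                break
        else:
            hosts.append([vm])   # no host had room: power on a new one
    return len(hosts)

pms_rr = round_robin(VM_REQUESTS)  # 4 hosts stay powered on
pms_ff = first_fit(VM_REQUESTS)    # 2 hosts suffice
print(pms_rr, pms_ff)
```

Fewer powered-on machines translates roughly into lower energy use, which is the effect the Fusion Method exploits.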
Novel information and data exchange within power systems using enhanced blockchain technologies
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.

Current energy systems are primarily designed for centralized power generation and supplying bulk electricity to users with stable and predictable usage patterns. However, with the increasing penetration of renewable energy sources (RES), future energy systems will require greater flexibility and wider distribution of both demand and supply. Integrating RES on a large scale poses challenges to the hosting capacity of distribution systems. To address these challenges, the digitalization of energy systems through novel Information and Communication Technologies (ICT) infrastructure is essential. The shift from centralized to highly distributed systems necessitates increased coordination and communication efforts. This is because a distributed system is composed of multiple independent entities that need to communicate and collaborate effectively to accomplish a shared objective. Coordination and communication are necessary to ensure that the system is operating efficiently and effectively.
Traditional centralized cloud-based data exchange schemes depend on a single trusted third party, which can lead to a single point of failure and a lack of data privacy and access control. To overcome these issues, a novel approach is proposed for exchanging data within power systems using blockchain technology. This approach enables users to securely exchange data while maintaining ownership. The experiments conducted demonstrate that the proposed approach can handle more users and enables information and data exchange within power systems.
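The ownership and trust properties claimed here rest on the basic tamper-evidence of a hash-chained ledger, which a minimal sketch can show (this is a toy model, not the platform built in the thesis; the record fields and owner names are invented):

```python
import hashlib
import json

def make_block(record, prev_hash):
    """Bind a data-exchange record to the chain via the previous block's hash."""
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    """Recompute each hash and check the links; any edit breaks the chain."""
    for i, blk in enumerate(chain):
        body = json.dumps({"record": blk["record"], "prev": blk["prev"]},
                          sort_keys=True)
        if blk["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        if i > 0 and blk["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block({"owner": "DSO-A", "meter": 17, "kwh": 3.2}, "0" * 64)]
chain.append(make_block({"owner": "DSO-B", "meter": 9, "kwh": 1.1},
                        chain[-1]["hash"]))

print(verify(chain))            # True on the untampered chain
chain[0]["record"]["kwh"] = 99  # tamper with an earlier record
print(verify(chain))            # False: the stored hash no longer matches
```

Because no single party holds the only authoritative copy, every participant can run this verification independently, removing the trusted third party the paragraph criticizes.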
Secondly, this thesis proposes an Artificial Neural Network (ANN) based prediction model to optimize the performance of the blockchain-enabled data exchange approach. A use case for exchanging data within the power system is implemented on the proposed platform using various performance metrics. The results of the proposed approach are compared to two other schemes: the baseline scheme and an optimized scheme. The evaluation results indicate that the proposed approach can enhance network performance when compared to the baseline and optimized schemes.
In summary, this thesis proposes a novel approach to ICT infrastructure for successfully exchanging information and data among power system entities. The performance of the novel approach is evaluated on its ability to handle multiple users, as well as its scalability, reliability, and security.
CLARIN
The book provides a comprehensive overview of the Common Language Resources and Technology Infrastructure – CLARIN – for the humanities. It covers a broad range of CLARIN language resources and services, its underlying technological infrastructure, the achievements of national consortia, and the challenges that CLARIN will tackle in the future. The book is published ten years after CLARIN was established as a European Research Infrastructure Consortium.
Understanding Whitehead
Originally published in 1962. The central aim of this book is to discuss the development of Alfred North Whitehead's thought and to underscore how it is unique. The book collects nine essays by Victor Lowe originally published between 1941 and 1961. The essays have been revised for inclusion in this volume.
Architectural Support for Hypervisor-Level Intrusion Tolerance in MPSoCs
Increasingly, more aspects of our lives rely on the correctness and safety of computing systems, namely in the embedded and cyber-physical (CPS) domains, which directly affect the physical world. While systems have been pushed to their limits of functionality and efficiency, security threats and generic hardware quality have challenged their safety.
Leveraging the enormous modular power, diversity and flexibility of these systems, often deployed in multi-processor systems-on-chip (MPSoC), requires careful orchestration of complex and heterogeneous resources, a task left to low-level software, e.g., hypervisors. In current architectures, this software forms a single point of failure (SPoF) and a worthwhile target for attacks: once compromised, adversaries can gain access to all information and full control over the platform and the environment it controls, for instance by means of privilege escalation and resource allocation. Currently, solutions to protect low-level software often rely on a simpler, underlying trusted layer which is often a SPoF itself and/or exhibits downgraded performance.
Architectural hybridization allows for the introduction of trusted-trustworthy components which, combined with fault and intrusion tolerance (FIT) techniques leveraging replication, are capable of safely handling critical operations, thus eliminating SPoFs. Performing quorum-based consensus on all critical operations, in particular privilege management, ensures that no compromised low-level software can single-handedly manipulate privilege escalation or resource allocation to negatively affect other system resources, whether by propagating faults or by further extending an adversary’s control. However, the performance impact of traditional Byzantine fault tolerant state-machine replication (BFT-SMR) protocols is prohibitive in the context of MPSoCs, due to the high cost of cryptographic operations and the quantity of messages exchanged. Furthermore, fault isolation, one of the key prerequisites of FIT, is a complicated challenge to tackle, given that on such platforms the whole system resides within one chip.
So far, no solution completely and efficiently addresses the SPoF issue in critical low-level management software. It is our aim, then, to devise such a solution that, additionally, reaps the benefits of the tightly coupled nature of such manycore systems. In this thesis we present two architectures, iBFT and Midir, which use trusted-trustworthy mechanisms and consensus protocols to protect all software layers, especially the lowest ones, by performing critical operations only when a majority of correct replicas agree to their execution. Moreover, we discuss ways in which these architectures can be used at the application level, using the example of replicated applications sharing critical data structures. It then becomes possible to confine software-level faults and some hardware faults to the individual tiles of an MPSoC, converting tiles into fault containment domains, thus enabling fault isolation and, consequently, paving the way for high-performance FIT at the lowest level.
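The quorum rule underlying this kind of FIT design can be sketched in a few lines (a simplification, not the iBFT or Midir protocol: classic BFT sizing of n = 3f + 1 replicas with a 2f + 1 quorum is assumed): a critical operation such as a privilege change executes only if a Byzantine quorum votes for it, so f compromised replicas can never force it through alone.

```python
def quorum_approves(votes, f):
    """votes: replica_id -> True/False; require at least 2f + 1 approvals
    out of n = 3f + 1 replicas before a critical operation may execute."""
    n = len(votes)
    assert n == 3 * f + 1, "classic BFT sizing assumed"
    approvals = sum(1 for v in votes.values() if v)
    return approvals >= 2 * f + 1

# n = 4 replicas tolerate f = 1 faulty replica.
honest = {0: True, 1: True, 2: True, 3: False}    # one faulty dissenter
attack = {0: True, 1: False, 2: False, 3: False}  # lone compromised proposer

print(quorum_approves(honest, f=1))  # True: 3 approvals meet the 2f+1 = 3 quorum
print(quorum_approves(attack, f=1))  # False: one replica cannot escalate alone
```

This is the sense in which quorum-based consensus removes the single point of failure: compromising one hypervisor replica no longer suffices to manipulate privilege management.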
A sense of self for power side-channel signatures: instruction set disassembly and integrity monitoring of a microcontroller system
Cyber-attacks are on the rise, costing billions of dollars in damages, response, and investment annually. Critical United States National Security and Department of Defense weapons systems are no exception; however, the stakes go well beyond the financial. Dependence upon a global supply chain without sufficient insight or control poses a significant issue. Additionally, systems are often designed with a presumption of trust, despite their microelectronics and software foundations being inherently untrustworthy. Achieving cybersecurity requires coordinated and holistic action across disciplines, commensurate with the specific systems, mission, and threat.
This dissertation explores an existing gap in low-level cybersecurity while proposing a side-channel based security monitor to support attack detection and the establishment of trusted foundations for critical embedded systems. Background on side-channel origins, the more typical side-channel attacks, and microarchitectural exploits are described. A survey of related side-channel efforts is provided through side-channel organizing principles. The organizing principles enable comparison of dissimilar works across the side-channel spectrum. We find that the maturity of existing side-channel security monitors is insufficient, as key transition to practice considerations are often not accounted for or resolved.
We then document the development, maturation, and assessment of a power side-channel disassembler, the Time-series Side-channel Disassembler (TSD), and extend it for use as a security monitor, the TSD-Integrity Monitor (TSD-IM). We also introduce a prototype microcontroller power side-channel collection fixture, with benefits for experimentation and transition to practice. TSD-IM is finally applied to a notional Point of Sale (PoS) application for a proof-of-concept evaluation. We find that TSD and TSD-IM advance the state of the art for side-channel disassembly and security monitoring in the open literature.
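The core intuition behind side-channel disassembly and integrity monitoring can be illustrated with a toy nearest-neighbor matcher (this is a drastic simplification of TSD, not its actual classifier; the instruction templates and trace values below are invented): each instruction leaves a characteristic power signature, so an observed trace can be labeled by its closest template, and a recovered instruction sequence can be checked against the expected program.

```python
# Invented per-instruction power templates (4 samples each).
TEMPLATES = {
    "NOP": [0.1, 0.1, 0.1, 0.1],
    "ADD": [0.3, 0.5, 0.4, 0.2],
    "LDR": [0.6, 0.8, 0.7, 0.5],  # memory access draws more power
}

def dist(a, b):
    """Euclidean distance between two equal-length traces."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(trace):
    """Label a trace with the closest-matching instruction template."""
    return min(TEMPLATES, key=lambda op: dist(TEMPLATES[op], trace))

observed = [0.58, 0.79, 0.72, 0.48]   # a noisy LDR-like measurement
print(classify(observed))             # LDR

# Integrity monitoring in the spirit of TSD-IM, simplified: flag a run
# whose recovered instruction sequence deviates from the expected program.
expected = ["LDR", "ADD", "NOP"]
recovered = [classify(t) for t in ([0.6, 0.8, 0.7, 0.5],
                                   [0.3, 0.5, 0.4, 0.2],
                                   [0.1, 0.1, 0.1, 0.1])]
print(recovered == expected)          # True: no deviation detected
```

Because the monitor observes only the power rail, it needs no instrumentation inside the untrusted device, which is what makes the approach attractive for trusted foundations.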
In addition to our TSD and TSD-IM research on microcontroller signals, we explore beneficial side-channel measurement abstractions as well as the characterization of the underlying microelectronic circuits through Impulse Signal Analysis (ISA). While some positive results were obtained, we find that further research in these areas is necessary. Although the need for a non-invasive, on-demand microelectronics-integrity capability is supported, other methods may provide suitable near-term alternatives to ISA.