
    Collide+Power: Leaking Inaccessible Data with Software-based Power Side Channels

    Differential Power Analysis (DPA) measures single-bit differences between data values used in computer systems by statistical analysis of power traces. In this paper, we show that the mere co-location of data values, e.g., attacker and victim data in the same buffers and caches, leads to power leakage in modern CPUs that depends on a combination of both values, resulting in a novel attack, Collide+Power. We systematically analyze the power leakage of the CPU's memory hierarchy to derive precise leakage models enabling practical end-to-end attacks. These attacks can be conducted in software with any signal related to power consumption, e.g., power-consumption interfaces or throttling-induced timing variations. Leakage due to throttling requires 133.3 times more samples than direct power measurements. We develop a novel differential measurement technique that amplifies the exploitable leakage by a factor of 8.778 on average, compared to a straightforward DPA approach. We demonstrate that Collide+Power leaks single-bit differences from the CPU's memory hierarchy with fewer than 23,000 measurements. Collide+Power varies attacker-controlled data in our end-to-end DPA attacks. We present a Meltdown-style attack, leaking from attacker-chosen memory locations, and a faster MDS-style attack, which leaks 4.82 bit/h. Collide+Power is a generic attack applicable to any modern CPU, arbitrary memory locations, and victim applications and data. However, the Meltdown-style attack is not yet practical, as it is limited by the state of the art of prefetching victim data into the cache, leading to an unrealistic real-world attack runtime of more than a year per bit when throttling is the only signal. Given the different variants and potentially more practical prefetching methods, we consider Collide+Power a relevant threat that is challenging to mitigate.
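
    As a concrete illustration of the statistical core described above, the following Python sketch correlates simulated power samples against a Hamming-distance leakage model, the kind of combined attacker/victim-value leakage Collide+Power exploits. The secret byte, noise level, and sample count are illustrative assumptions, not the paper's code or measurements.

        import numpy as np

        rng = np.random.default_rng(0)
        POPCOUNT = np.array([bin(x).count("1") for x in range(256)])  # set bits per byte

        # Simulated experiment: a fixed victim byte is co-located with
        # attacker-chosen bytes, and each "power sample" leaks the Hamming
        # distance between the two values plus Gaussian noise.
        secret = 0xA7                                  # hypothetical victim byte
        attacker = rng.integers(0, 256, size=20_000)   # attacker-controlled data
        power = POPCOUNT[secret ^ attacker] + rng.normal(0.0, 2.0, attacker.size)

        # Guess-and-correlate DPA: the correct guess maximizes the Pearson
        # correlation between predicted and observed leakage.
        corr = [np.corrcoef(POPCOUNT[g ^ attacker], power)[0, 1] for g in range(256)]
        best = int(np.argmax(corr))
        print(f"recovered byte: {best:#04x} (true secret: {secret:#04x})")

    A real attack would replace the simulated trace with a power-related signal such as an on-chip energy-interface reading or throttling-induced timing, as the abstract notes.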

    A Study of Galactic and Extragalactic Evolution with the MUSE Integral-Field Spectrograph

    The Milky Way is by far the best-studied galaxy in the Universe, and it has long served as a benchmark and ideal laboratory for understanding galaxy evolution. Recent studies have compared the Milky Way with its analogues from IFS surveys and revealed common structures such as the boxy/peanut (B/P) bulge and the thin and thick disks. This presents a unique opportunity to link Galactic and extragalactic studies of galaxy evolution. However, regions of high stellar density, such as the Galactic bulge and the cores of globular clusters, remain under-explored, owing to the fiber-collision limitations of fiber-fed multi-object spectrographs and to extreme extinction, even though they are essential to Galactic evolution. Furthermore, direct comparisons of Galactic and extragalactic observations are marred by differences between studying individual stars and integrated stellar populations, and by differing methods of measuring galaxy properties. It is therefore difficult to tell whether differences between the Milky Way and other galaxies reflect systematic offsets or genuinely different evolutionary processes. This thesis addresses these difficulties using the MUSE integral-field spectrograph. We develop a novel method to measure the chemical abundances of individual stars from MUSE spectra. This method is then used to study the age distribution and chemical evolution of main-sequence turn-off and subgiant-branch (MSTO-SGB) stars in low-extinction regions of the Galactic center. Our results agree with previous studies but are obtained more efficiently. We further present a tool that generates mock MUSE observations of the Milky Way from Galactic chemical evolution (GCE) models and spectral templates, enabling unbiased comparisons with external galaxies (a toy version is sketched below). We use the mock data to investigate the reliability of PPXF, a widely used tool for measuring external-galaxy properties, and explore the impact of dust on these results. This research facilitates a deeper understanding of critical Galactic structures such as the bulge and bridges the gap between Galactic and extragalactic studies of galaxy evolution.
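
    As a rough illustration of the mock-observation idea, the numpy sketch below combines toy stellar templates using assumed star-formation-history weights, broadens the composite with a Gaussian line-of-sight velocity distribution, and adds noise at a target signal-to-noise ratio. The templates, weights, velocity dispersion, and S/N are illustrative stand-ins for the thesis's GCE models, spectral libraries, and MUSE instrument characteristics.

        import numpy as np

        rng = np.random.default_rng(1)
        c_kms = 299_792.458                      # speed of light, km/s

        # Log-lambda wavelength grid (roughly the MUSE range, in Angstrom),
        # so that LOSVD broadening is a convolution in velocity space.
        n_pix = 2000
        log_lam = np.linspace(np.log(4750.0), np.log(9350.0), n_pix)
        dv = (log_lam[1] - log_lam[0]) * c_kms   # km/s per pixel

        # Toy "templates": flat continua with random absorption lines,
        # standing in for a real stellar library.
        n_templates = 5
        templates = np.ones((n_templates, n_pix))
        for t in range(n_templates):
            for line in rng.choice(np.arange(5, n_pix - 5), size=10, replace=False):
                templates[t, line - 2:line + 3] -= 0.3 * rng.random()

        # Assumed star-formation-history weights (would come from a GCE model).
        weights = rng.dirichlet(np.ones(n_templates))
        composite = weights @ templates

        # Broaden with a Gaussian LOSVD of sigma = 120 km/s (assumed).
        sigma_pix = 120.0 / dv
        x = np.arange(-int(5 * sigma_pix), int(5 * sigma_pix) + 1)
        kernel = np.exp(-0.5 * (x / sigma_pix) ** 2)
        broadened = np.convolve(composite, kernel / kernel.sum(), mode="same")

        # Add noise at a target S/N to mimic an observed spectrum.
        mock = broadened + rng.normal(0.0, broadened / 30.0)
        print(f"mock spectrum: {mock.size} pixels, mean flux {mock.mean():.3f}")

    Fitting such mock spectra with a population-fitting code and comparing the recovered properties against the known inputs is what enables the unbiased Galactic/extragalactic comparison described above.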

    Improving interoperability in distributed multi-tier software stacks

    Distributed multi-tier software stacks organise and deploy software components as a hierarchy of interacting tiers. The components are typically heterogeneous, i.e. each component may be written in a different language and may interoperate using a variety of protocols. Tiered software is modular but faces a range of interoperability challenges, including the following. (1) Interoperating components in multiple languages and paradigms increases developer cognitive load, since developers must reason in several languages and paradigms simultaneously. (2) Components must interoperate correctly, e.g. adhere to the APIs or communication protocols between components. (3) Interoperation between different components can lead to diverse modes of failure, as each component can fail in unique ways. Many of these challenges result from contributing factors like tight coupling or polyglot programming. This thesis investigates techniques to improve heterogeneous interoperability in distributed multi-tier software stacks; common approaches include microservices and tierless languages.

    Microservices are perceived to offer better reliability than components in multi-tier software stacks through the loose coupling of services. The reliability of microservices is investigated by combining the established properties of dependence and state with reliability. This defines a new three-dimensional space: the Microservices Dependency State Reliability (MDSR) classification, with six classes (see the sketch after this abstract). The feasibility of statically identifying MDSR classes is demonstrated with a prototype analyser that identifies all six classes in Flask microservices web applications. The reliability implications of the different MDSR classes are evaluated by running three case-study applications (Hipster-Shop, JPyL & WordPress) against a fault injector. Key results are as follows. (1) All applications fail catastrophically if a critical microservice fails. (2) Applications survive the failure of individual minor microservices. (3) The failure of any chain of microservices in JPyL & Hipster is catastrophic. (4) Individual microservices do not necessarily have minor reliability implications.

    In a tierless language, the compiler generates the code for each component and ensures their correct interoperation. Tierless languages are mainly used to implement web stacks; their use in implementing IoT stacks is less common. This investigation compares interoperation in tiered and tierless IoT stacks through a systematic evaluation of four implementations of the prototype UoG smart-campus IoT system: two tierless and two Python-based tiered. Key results of the study are as follows. (1) Tierless languages have the potential to significantly reduce the development effort for IoT systems, requiring 70% less code than the tiered implementations. (2) Tierless languages have the potential to significantly improve the reliability of IoT systems. (3) The first comparison of a tierless codebase for resource-rich sensor nodes and one for resource-constrained sensor nodes shows that they have very similar functional structure and code sizes, within 7%.

    Tier elimination is a technique that removes a tier/component by integrating two tiers. Specifically, this thesis investigates the implications of eliminating the Apache web server in a 4-tier web stack, Jupyter Notebook, Apache, Python, Linux (JAPyL), and replacing it with PHP libraries in the frontend webpage to obtain a 3-tier stack (JPL). The study reveals the following. (1) The JPL 3-tier web stack requires the developer to use fewer programming languages and paradigms than JAPyL, i.e. two languages rather than four and two paradigms rather than three. (2) JPL requires 42% less code than JAPyL. (3) In JPL, some functionality can be automated due to the decreased abstraction levels at the upper layers of the stack. (4) However, the latency in JPL is two to three times greater than that of JAPyL. So while tier elimination reduces developer effort and semantic friction, the trade-offs are higher performance overhead, higher resource consumption, and increased code complexity.
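
    Below is a minimal, hypothetical two-service Flask sketch of the dependence-and-state idea behind MDSR: a stateless "orders" service depends on a stateful "inventory" service, so the failure of the critical service cascades to its callers, mirroring the fault-injection results above. Service names, ports, and endpoints are invented for illustration.

        from flask import Flask, jsonify
        import requests

        app = Flask(__name__)
        INVENTORY_URL = "http://localhost:5001/stock"   # hypothetical sibling service

        @app.route("/order/<item>")
        def place_order(item):
            try:
                stock = requests.get(f"{INVENTORY_URL}/{item}", timeout=1.0).json()
            except requests.RequestException:
                # Dependence without a fallback: the failure of the stateful
                # inventory service propagates to every caller of /order.
                return jsonify(error="inventory service unavailable"), 503
            if stock.get("count", 0) > 0:
                return jsonify(item=item, status="ordered")
            return jsonify(item=item, status="out of stock"), 409

        if __name__ == "__main__":
            app.run(port=5000)

    Whether such a dependence is catastrophic or survivable for the application as a whole is exactly what the MDSR classes make explicit.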

    Flexible Hardware-based Security-aware Mechanisms and Architectures

    For decades, software security has been the primary focus in securing our computing platforms. Hardware was always assumed trusted and inherently served as the foundation, and thus the root of trust, of our systems. This assumption has been further leveraged in developing hardware-based dedicated security extensions and architectures that protect software from attacks exploiting software vulnerabilities such as memory corruption. However, the recent outbreak of microarchitectural attacks has shaken these long-established trust assumptions in hardware entirely, threatening the security of all of our computing platforms and bringing hardware and microarchitectural security under scrutiny. These attacks have revealed the grave consequences of hardware/microarchitectural security flaws for the security of the entire platform, and how they can even subvert the guarantees promised by dedicated security architectures. They have also shed light on challenges particular to hardware/microarchitectural security: it is more critical (and more challenging) to extensively analyze hardware for security flaws prior to production, since hardware, unlike software, cannot be patched or updated once fabricated. Hardware cannot reliably serve as the root of trust anymore unless we develop and adopt new design paradigms in which security is proactively addressed and scrutinized across the full stack of our computing platforms, at all hardware design and implementation layers. Furthermore, novel flexible security-aware design mechanisms must be incorporated into processor microarchitecture and hardware-assisted security architectures to practically address the inherent conflict between performance and security, by allowing the trade-off to be configured to match the desired requirements.

    In this thesis, we investigate the prospects and implications at the intersection of hardware and security across the full stack of our computing platforms and Systems-on-Chip (SoCs). On one front, we investigate how to leverage hardware and its advantages over software to build more efficient and effective security extensions that serve security architectures, e.g., by providing execution attestation and enforcement, to protect software from attacks exploiting software vulnerabilities. We further propose that these extensions be microarchitecturally configurable at runtime to provide different types of security services, thus adapting flexibly to different deployment requirements. On another front, we investigate how to protect these hardware-assisted security architectures and extensions themselves from microarchitectural and software attacks that exploit design flaws originating in the hardware, e.g., insecure resource sharing in SoCs. In particular, we focus on cache-based side-channel attacks, proposing cache designs that fundamentally mitigate these attacks while preserving performance by making the performance/security trade-off configurable by design (see the sketch below). We also investigate how these designs can be incorporated into flexible and customizable security architectures, complementing them to further support a wide spectrum of emerging applications with different performance/security requirements. Lastly, we inspect our computing platforms beneath the design layer, scrutinizing how the actual implementation of these mechanisms is yet another potential attack surface. We explore how the security of hardware designs and implementations is currently analyzed prior to fabrication, shedding light on how state-of-the-art hardware security analysis techniques are fundamentally limited, and on the potential for improved and scalable approaches.
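
    As a toy illustration of one configurable mitigation in this spirit, the Python model below way-partitions a set-associative cache between security domains, so one domain can never evict another's lines; the ways-per-domain split is the performance/security knob. This sketches the general idea only, not any of the thesis's actual designs.

        from collections import OrderedDict

        class PartitionedCache:
            """Toy way-partitioned, set-associative cache with per-domain LRU."""
            def __init__(self, n_sets=64, line=64, ways=None):
                self.n_sets, self.line = n_sets, line
                self.ways = ways or {"victim": 4, "attacker": 4}  # the knob
                self.sets = {(s, d): OrderedDict()
                             for s in range(n_sets) for d in self.ways}

            def access(self, domain, addr):
                """Return True on a hit; on a miss, fill and evict only
                within the accessing domain's own partition."""
                tag = addr // self.line
                part = self.sets[(tag % self.n_sets, domain)]
                if tag in part:
                    part.move_to_end(tag)      # LRU update
                    return True
                if len(part) >= self.ways[domain]:
                    part.popitem(last=False)   # evict the domain's own LRU line
                part[tag] = True
                return False

        cache = PartitionedCache()
        cache.access("victim", 0x1000)           # victim installs a line
        for i in range(16):                      # attacker sweeps the same set
            cache.access("attacker", 0x1000 + i * 64 * 64)
        print(cache.access("victim", 0x1000))    # True: the victim line survived

    Shrinking a domain's partition raises its miss rate, which is precisely the performance cost that such designs expose as a configuration choice rather than fixing at design time.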

    Structured parallelism discovery with hybrid static-dynamic analysis and evaluation technique

    Parallel computer architectures have dominated the computing landscape for the past two decades, a trend that is only expected to continue and intensify, with increasing specialization and heterogeneity. This creates huge pressure across the software stack to produce programming languages, libraries, frameworks, and tools that efficiently exploit the capabilities of parallel computers, not only for new software but also to revitalize existing sequential code. Automatic parallelization, despite decades of research, has had limited success in transforming sequential software to take advantage of efficient parallel execution. This thesis investigates three approaches that use commutativity analysis as the enabler for parallelization, with the potential to overcome the limitations of traditional techniques.

    We introduce the concept of liveness-based commutativity for sequential loops. We first examine a practical analysis that applies liveness-based commutativity in a symbolic execution framework. Symbolic execution represents input values as groups of constraints, deriving the output as a function of the input and enabling the identification of further program properties. We employ this capability to discern commutativity properties between loop iterations. We study the application of this approach to loops taken from real-world programs in the OLDEN and NAS Parallel Benchmark (NPB) suites, and identify its limitations and associated overheads.

    Informed by these findings, we develop Dynamic Commutativity Analysis (DCA), a new technique that leverages profiling information from program executions with specific input sets. Using this profiling information, we track liveness and detect loop commutativity by examining the code's live-out values (a sketch of this test follows the abstract). We evaluate DCA against almost 1,400 loops of the NPB suite, finding 86% of them parallelizable. Comparing our results against dependence-based methods, we match the detection efficacy of two dynamic approaches and outperform three static ones. Additionally, DCA automatically detects parallelism in loops that iterate over Pointer-Linked Data Structures (PLDSs), taken from a wide range of benchmarks used in the literature, where all other techniques we considered failed. Parallelizing the discovered loops, our methodology achieves an average speedup of 3.6× across NPB (and up to 55×), and up to 36.9× for the PLDS-based loops, on a 72-core host. We also demonstrate that our methodology, despite relying on specific input values for profiling each program, correctly identifies parallelism that is valid for all potential input sets.

    Lastly, we develop a methodology that uses liveness-based commutativity, as implemented in DCA, to detect latent loop parallelism in the shape of patterns. Our approach applies a series of transformations that enable multiple applications of DCA over the generated multi-loop code section, and matches the loop commutativity outcomes against the expected criteria for each pattern. Applying this methodology to sets of sequential loops, we identify well-known parallel patterns (i.e., maps, reductions, and scans). This extends the scope of parallelism detection to loops, such as those performing scan operations, that cannot be determined parallelizable by evaluating liveness-based commutativity conditions on their original form.
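
    The sketch below gives a minimal Python rendering of the liveness-based commutativity test at the heart of DCA: execute a loop's iterations in the original and in permuted orders for a given input, and compare only the live-out values. The helper and the toy loops are illustrative, not the thesis's implementation.

        import copy
        import random

        def commutes(loop_body, init_state, n_iters, live_out, trials=10):
            """Dynamically test whether iteration order affects live-out values."""
            def run(order):
                state = copy.deepcopy(init_state)
                for i in order:
                    loop_body(state, i)
                return {k: state[k] for k in live_out}

            reference = run(range(n_iters))
            for _ in range(trials):
                order = list(range(n_iters))
                random.shuffle(order)
                if run(order) != reference:
                    return False          # an order changed a live-out value
            return True

        data = list(range(100))

        # A reduction commutes on its live-out value...
        def sum_body(state, i):
            state["acc"] += state["data"][i]

        print(commutes(sum_body, {"acc": 0, "data": data}, 100, ["acc"]))   # True

        # ...while a scan (prefix sum) is order-sensitive in its original form.
        def scan_body(state, i):
            state["out"][i] = (state["out"][i - 1] if i else 0) + state["data"][i]

        print(commutes(scan_body, {"out": [0] * 100, "data": data}, 100, ["out"]))  # False

    The failing scan loop is exactly the case the pattern-detection methodology above targets: after suitable transformations, DCA's outcomes over the generated multi-loop code reveal the scan pattern even though the original loop does not commute.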

    Cache Attacks and Defenses

    In the digital age, as our daily lives depend heavily on interconnected computing devices, information security has become a crucial concern. The continuous exchange of data between devices over the Internet leaves our information vulnerable to potential security breaches. Yet, even with protective measures in place, computing equipment inadvertently leaks information through side channels, which emerge as byproducts of computational activities. One particular source of such side channels is the cache, a vital component of modern processors that enhances computational speed by storing frequently accessed data from random access memory (RAM). Due to their limited capacity, caches often must be shared among concurrently running applications, resulting in vulnerabilities. Cache side-channel attacks, which exploit such vulnerabilities, have received significant attention due to their ability to stealthily compromise information confidentiality and the challenge of detecting and countering them. Consequently, numerous defense strategies have been proposed to mitigate these attacks. This thesis explores these defense strategies, assesses their effectiveness, and identifies vulnerabilities that could undermine them.

    The first contribution of this thesis is a software framework to assess the security of secure cache designs. We show that while most secure caches are protected from eviction-set-based attacks, they are vulnerable to occupancy-based attacks, which work just as well and should therefore be taken into account when designing and evaluating secure caches (the occupancy channel is sketched below). Our second contribution presents a method that utilizes speculative execution to enable high-resolution attacks on low-resolution timers, a common cache attack countermeasure adopted by web browsers. We demonstrate that our technique not only allows high-resolution attacks to be performed with low-resolution timers, but is also Turing-complete and capable of performing robust calculations on cache states. Through this research, we uncover a new attack vector on low-resolution timers; by exposing this vulnerability, we hope to prompt the measures necessary to address the issue and enhance the security of future systems. Our third contribution is a survey, paired with an experimental assessment, of cache side-channel attack detection techniques that use hardware performance counters. We show that, despite numerous claims regarding their efficacy, most detection techniques fail to evaluate their performance properly, leaving them vulnerable to more advanced attacks. We identify and outline these shortcomings, and furnish experimental evidence to corroborate our findings. Furthermore, we demonstrate a new attack that is capable of compromising these detection methods. Our aim is to bring attention to these shortcomings and to provide insights that can aid the development of more robust cache side-channel attack detection techniques.

    This thesis contributes to a deeper comprehension of cache side-channel attacks and their potential effects on information security. Furthermore, it offers valuable insights into the efficacy of existing mitigation approaches and detection methods, while identifying areas for future research and development to better safeguard our computing devices and data from these insidious attacks.

    Thesis (MPhil) -- University of Adelaide, School of Computer and Mathematical Sciences, 202
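
    The following Python sketch shows the structure of the occupancy-based channel examined in the first contribution (illustrative only; a real attack would use native code, careful buffer sizing, and precise timing). The attacker never builds eviction sets: it fills the last-level cache with its own buffer and measures how long re-walking the buffer takes, since victim memory activity evicts attacker lines and lengthens the walk. The cache-size constant and timing method are assumptions.

        import time

        LINE = 64                              # cache-line size, bytes
        LLC_BYTES = 8 * 1024 * 1024            # assumed last-level cache size
        buf = bytearray(LLC_BYTES)

        def fill_and_probe():
            """Walk the buffer at cache-line stride; return the walk time (ns)."""
            t0 = time.perf_counter_ns()
            acc = 0
            for off in range(0, LLC_BYTES, LINE):
                acc += buf[off]
            t1 = time.perf_counter_ns()
            return t1 - t0

        baseline = fill_and_probe()            # cache now mostly holds buf
        # ... victim runs here; its footprint evicts some of buf's lines ...
        probe = fill_and_probe()
        print(f"occupancy signal: {probe - baseline} ns (higher => more victim activity)")

    Because the channel measures aggregate cache occupancy rather than the state of specific sets, defenses that only frustrate eviction-set construction (e.g., index randomization) do not remove it, which is why the framework flags secure caches that pass eviction-set tests but fail this one.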

    Circuits and Systems Advances in Near Threshold Computing

    Modern society is witnessing a sea change in ubiquitous computing, in which people have embraced computing systems as an indispensable part of day-to-day existence. Computation, storage, and communication abilities of smartphones, for example, have undergone monumental changes over the past decade. Moreover, the global emphasis on creating and sustaining green environments is leading to a rapid and ongoing proliferation of edge computing systems and applications. As a broad spectrum of healthcare, home, and transport applications shifts to the edge of the network, near-threshold computing (NTC) is emerging as one of the most promising low-power computing platforms. An NTC device sets its supply voltage close to its threshold voltage, dramatically reducing its energy consumption. Despite showing substantial promise in terms of energy efficiency, NTC is yet to see widespread commercial adoption, because circuits and systems operating at near-threshold voltages suffer from several problems, including increased sensitivity to process variation, reliability problems, performance degradation, and security vulnerabilities, to name a few. To realize its potential, we need designs, techniques, and solutions that overcome these challenges. The readers of this book will be able to familiarize themselves with recent advances in electronic systems, focusing on near-threshold computing.
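
    The appeal of NTC follows directly from the quadratic dependence of dynamic switching energy on supply voltage, E_dyn ≈ α·C·V_dd². A back-of-the-envelope Python computation (all values illustrative, not measurements):

        # Dynamic-energy scaling for near-threshold operation, using the
        # standard E_dyn ~ alpha * C * Vdd^2 relation. Numbers are assumed.
        C_EFF = 1e-9            # effective switched capacitance (F), assumed
        ALPHA = 0.2             # switching activity factor, assumed

        def dynamic_energy(vdd):
            return ALPHA * C_EFF * vdd ** 2      # Joules per cycle

        nominal, ntc = 1.1, 0.5                  # nominal vs. near-threshold Vdd (V)
        ratio = dynamic_energy(nominal) / dynamic_energy(ntc)
        print(f"dynamic energy savings: {ratio:.1f}x per operation")   # ~4.8x

    The same voltage reduction also lowers the maximum clock frequency and amplifies sensitivity to process variation, which is the trade-off the chapters of this book address.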

    Open Source Law, Policy and Practice

    This book examines the policy, legal, and commercial aspects of the Open Source phenomenon. Here, ‘Open Source’ is adopted as convenient shorthand for a collection of diverse users and communities whose differences can be as great as their similarities. The common thread is their reliance on, and use of, law and legal mechanisms to govern the source code they write, use, and distribute. The central fact of open source is that maintaining control over source code relies on the existence and efficacy of intellectual property (‘IP’) laws, particularly copyright law. Copyright law is the primary statutory tool that achieves the end of openness, although it is implemented through private-law arrangements at varying points within the software supply chain. This dependent relationship is itself a cause of concern for some who are philosophically in favour of ‘open’, with some predicting (or hoping) that the free software movement will bring about the end of copyright as a means of protecting software.

    ECOS 2012

    The 8-volume set contains the Proceedings of the 25th ECOS International Conference (ECOS 2012), held in Perugia, Italy, June 26-29, 2012. ECOS is an acronym for Efficiency, Cost, Optimization and Simulation (of energy conversion systems and processes), summarizing the topics covered: Thermodynamics, Heat and Mass Transfer, Exergy and Second Law Analysis, Process Integration and Heat Exchanger Networks, Fluid Dynamics and Power Plant Components, Fuel Cells, Simulation of Energy Conversion Systems, Renewable Energies, Thermo-Economic Analysis and Optimisation, Combustion, Chemical Reactors, Carbon Capture and Sequestration, Building/Urban/Complex Energy Systems, Water Desalination and Use of Water Resources, Energy Systems - Environmental and Sustainability Issues, System Operation/Control/Diagnosis and Prognosis, and Industrial Ecology.