2,777 research outputs found

    Adaptive Microarchitectural Optimizations to Improve Performance and Security of Multi-Core Architectures

    With current technological barriers, microarchitectural optimizations are increasingly important to ensure performance scalability of computing systems. The shift to multi-core architectures increases the demands on the memory system and amplifies the role of microarchitectural optimizations in performance improvement. In a multi-core system, microarchitectural resources, such as the cache, are usually shared to maximize utilization, but sharing can also lead to contention and lower performance. This can be mitigated through partitioning of shared caches. However, microarchitectural optimizations, long assumed to be fundamentally secure, can be used in side-channel attacks to exploit secrets, such as cryptographic keys. Timing-based side-channels exploit predictable timing variations due to the interaction with microarchitectural optimizations during program execution. Going forward, there is a strong need to be able to leverage microarchitectural optimizations for performance without compromising security. This thesis contributes three adaptive microarchitectural resource management optimizations to improve security and/or performance of multi-core architectures, and a systematization of knowledge of timing-based side-channel attacks.

    We observe that to achieve high-performance cache partitioning in a multi-core system, three requirements need to be met: i) fine granularity of partitions, ii) locality-aware placement, and iii) frequent changes. These requirements lead to high overheads for current centralized partitioning solutions, especially as the number of cores in the system increases. To address this problem, we present an adaptive and scalable cache partitioning solution (DELTA) using a distributed and asynchronous allocation algorithm. The allocations occur through core-to-core challenges, where applications with larger performance benefit gain cache capacity. The solution is implementable in hardware, due to its low computational complexity, and can scale to large core counts.

    According to our analysis, better performance can be achieved by coordinating multiple optimizations for different resources, e.g., off-chip bandwidth and cache, but this is challenging due to the increased number of possible allocations which need to be evaluated. Based on these observations, we present a solution (CBP) for coordinated management of three optimizations: cache partitioning, bandwidth partitioning and prefetching. Efficient allocations, considering the inter-resource interactions and trade-offs, are achieved using local resource managers to limit the solution space.

    The continuously growing number of side-channel attacks leveraging microarchitectural optimizations prompts us to review attacks and defenses to understand the vulnerabilities of different microarchitectural optimizations. We identify the four root causes of timing-based side-channel attacks: determinism, sharing, access violation and information flow. Our key insight is that eliminating any of the exploited root causes, in any of the attack steps, is enough to provide protection. Based on our framework, we present a systematization of the attacks and defenses on a wide range of microarchitectural optimizations, which highlights their key similarities.

    Shared caches are an attractive attack surface for side-channel attacks, while defenses need to be efficient since the cache is crucial for performance. To address this issue, we present an adaptive and scalable cache partitioning solution (SCALE) for protection against cache side-channel attacks. The solution leverages randomness, and provides quantifiable and information-theoretic security guarantees using differential privacy. The solution closes the performance gap to a state-of-the-art non-secure allocation policy for a mix of secure and non-secure applications.
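
    The abstract does not detail DELTA's challenge protocol; the following Python sketch is only an illustration of the stated idea, i.e., pairwise, asynchronous core-to-core challenges in which the application with the larger estimated benefit gains a cache way. The class, the benefit estimate, and all parameter values below are hypothetical stand-ins, not the thesis's actual mechanism.

        import random

        class CorePartition:
            """Hypothetical per-core state for a challenge-based cache partitioner."""
            def __init__(self, core_id, ways):
                self.core_id = core_id
                self.ways = ways          # cache ways currently held by this core

            def marginal_benefit(self):
                # Placeholder: a real design would read hardware monitors, e.g. the
                # estimated miss-rate reduction from receiving one additional way.
                return random.random() / self.ways

        def challenge(a, b):
            """One asynchronous core-to-core challenge: whichever core's application
            benefits more from an extra way takes one way from the other."""
            ba, bb = a.marginal_benefit(), b.marginal_benefit()
            if ba > bb and b.ways > 1:
                a.ways, b.ways = a.ways + 1, b.ways - 1
            elif bb > ba and a.ways > 1:
                b.ways, a.ways = b.ways + 1, a.ways - 1

        # Challenges run independently over time; total capacity (32 ways) is conserved.
        cores = [CorePartition(i, ways=4) for i in range(8)]
        for _ in range(100):
            challenge(*random.sample(cores, 2))
        print([c.ways for c in cores])

    Because each exchange only involves two cores and conserves total capacity, such a scheme needs no central arbiter, which is consistent with the scalability claim above.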

    Quantifying randomness from Bell nonlocality

    The twentieth century was marked by two scientific revolutions. On the one hand, quantum mechanics questioned our understanding of nature and physics. On the other hand came the realisation that information could be treated as a mathematical quantity. Together they brought forward the age of information. A conceptual leap took place in the 1980s, which consisted in treating information in a quantum way as well. The idea that the intuitive notion of information could be governed by the counter-intuitive laws of quantum mechanics proved extremely fruitful, both from fundamental and applied points of view. The notion of randomness plays a central role in that respect. Indeed, the laws of quantum physics are probabilistic: this contrasts with thousands of years of physical theories that aimed to derive deterministic laws of nature. This, in turn, provides us with sources of random numbers, a crucial resource for information protocols. The fact that quantum theory only describes probabilistic behaviours was for some time regarded as a form of incompleteness. But nonlocality, in the sense of Bell, showed that this was not the case: the laws of quantum physics are inherently random, i.e., the randomness they imply cannot be traced back to a lack of knowledge. This observation has practical consequences: the outputs of a nonlocal physical process are necessarily unpredictable. Moreover, the random character of these outputs does not depend on the physical system, but only on its nonlocal character. For that reason, nonlocality-based randomness is certified in a device-independent manner. In this thesis, we quantify nonlocality-based randomness in various frameworks. In the first scenario, we quantify randomness without relying on the quantum formalism. We consider a nonlocal process and assume that it has a specific causal structure that is only due to how it evolves with time. We provide trade-offs between nonlocality and randomness for the various causal structures that we consider. Nonlocality-based randomness is usually defined in a theoretical framework. In the second scenario, we take a practical approach and ask how much randomness can be certified in a practical situation, where only partial information can be gained from an experiment. We describe a method to optimise how much randomness can be certified in such a situation. Trade-offs between nonlocality and randomness are usually studied in the bipartite case, as two agents are the minimal requirement to define nonlocality. In the third scenario, we quantify how much randomness can be certified for a tripartite process. Though nonlocality-based randomness is device-independent, the process from which randomness is certified is actually realised with a physical state. In the fourth scenario, we ask what physical requirements should be imposed on the physical state for maximal randomness to be certified, and more specifically, how entangled the underlying state should be. We show that maximal randomness can be certified from any level of entanglement.
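
    The abstract states its results qualitatively. As one concrete example of the kind of nonlocality-randomness trade-off studied here, a well-known device-independent bound from the literature (Pironio et al., 2010) relates the CHSH value S of a bipartite process to the min-entropy of its outputs; it is quoted purely as an illustration and is not claimed to be the bound derived in this thesis:

        H_{\min} \;\ge\; 1 - \log_2\!\left(1 + \sqrt{2 - \tfrac{S^2}{4}}\right), \qquad 2 < S \le 2\sqrt{2}.

    At the classical bound S = 2 the right-hand side vanishes (no certified randomness), while at the maximal quantum violation S = 2\sqrt{2} it equals 1, i.e., one fully random bit per run, regardless of the devices used.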

    WebSocket vs WebRTC in the stream overlays of the Streamr Network

    The Streamr Network is a decentralized publish-subscribe system. This thesis experimentally compares WebSocket and WebRTC as transport protocols in the system’s d-regular random graph type unstructured stream overlays. The thesis explores common designs for publish-subscribe and decentralized P2P systems. Underlying network protocols, including NAT traversal, are explored to understand how the WebSocket and WebRTC protocols function. The requirements set for the Streamr Network, and how its design and implementations fulfill them, are discussed. The design and implementations are validated with the use of simulations, emulations and AWS-deployed real-world experiments. The performance metrics measured from the real-world experiments are compared to related work. As the implementations using the two protocols are separate, incompatible versions, the differences between them were taken into account during analysis of the experiments. Although the WebSocket version’s overlay construction is known to be inefficient and vulnerable to churn, it is found to be unintentionally topology aware. This caused the WebSocket stream overlays to perform better in terms of latency. The WebRTC stream overlays were found to be more predictable and more optimized for small payloads, as estimates for message propagation delays had a MEPA of 1.24% compared to WebSocket’s 3.98%. Moreover, the WebRTC version enables P2P connections between hosts behind NATs. As the WebRTC version’s overlay construction is more accurate, reliable, scalable, and churn tolerant, it can be used to create intentionally topology-aware stream overlays that overtake the results of the WebSocket implementation.
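
    The abstract refers to d-regular random graph stream overlays and to estimating message propagation delay. The Python sketch below (my illustration using networkx, not Streamr code) builds a small random d-regular topology and derives hop-count-based propagation estimates of the kind such a latency comparison rests on; degree, size and per-hop latency are assumed values.

        import networkx as nx

        D, N = 4, 100                      # node degree and overlay size (illustrative values)
        overlay = nx.random_regular_graph(D, N, seed=42)

        # Under flooding, the hop distance from the publisher bounds how many overlay
        # hops a message needs before every subscriber has received it.
        publisher = 0
        hops = nx.single_source_shortest_path_length(overlay, publisher)
        print("max hops:", max(hops.values()))
        print("mean hops:", sum(hops.values()) / N)

        # With an assumed average per-hop latency, per-node propagation delay is roughly
        # hops * per_hop_latency; the thesis measures this end to end on AWS instead.
        per_hop_latency_ms = 50            # hypothetical value
        delays = {node: h * per_hop_latency_ms for node, h in hops.items()}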

    Designing Security Requirements – A Flexible, Balanced, and Threshold-Based Approach

    Defining security requirements is the important first step in designing, implementing and evaluating a secure system. In this paper, we propose a formal approach for designing security requirements, which is flexible enough for a user to express his/her security requirements at different levels of detail and for the system developers to take different options to design and implement the system to satisfy the user’s requirements. The proposed approach also allows the user to balance the required system security properties against unfavorable features (e.g., performance degradation due to tight control and strong security). Given the importance of socio-technical factors in information security, the proposed approach also incorporates economic and organizational security management factors in specifying the user’s security requirements. We demonstrate the application of our approach with the help of a concrete pervasive information system.
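
    The paper's actual formalism is not reproduced in this abstract. As a purely hypothetical illustration of a balanced, threshold-based requirement, one might weight individual security properties, subtract a cost term for unfavorable features such as performance loss, and accept a design only if the score clears a user-chosen threshold; every name and number below is an assumption.

        # Hypothetical illustration only; not the paper's formalism.
        weights = {"confidentiality": 0.5, "integrity": 0.3, "availability": 0.2}

        def meets_requirement(achieved, perf_penalty, threshold=0.7, penalty_weight=0.25):
            """achieved: property -> level in [0, 1]; perf_penalty: cost of tight control in [0, 1].
            Accept the design only if weighted benefit minus weighted cost clears the threshold."""
            benefit = sum(weights[p] * achieved[p] for p in weights)
            return benefit - penalty_weight * perf_penalty >= threshold

        design = {"confidentiality": 0.9, "integrity": 0.8, "availability": 0.7}
        print(meets_requirement(design, perf_penalty=0.2))   # True for these values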

    Modeling, Predicting and Capturing Human Mobility

    Realistic models of human mobility are critical for modern-day applications, specifically in recommendation systems, resource planning and process optimization domains. Given the rapid proliferation of mobile devices equipped with Internet connectivity and GPS functionality, aggregating large volumes of individual geolocation data is now feasible. The thesis focuses on methodologies to facilitate data-driven mobility modeling by drawing parallels between the inherent nature of mobility trajectories, statistical physics and information theory. On the applied side, the thesis contributions lie in leveraging the formulated mobility models to construct prediction workflows by adopting a privacy-by-design perspective. This enables end users to derive utility from location-based services while preserving their location privacy. Finally, the thesis presents several approaches to generate large-scale synthetic mobility datasets by applying machine learning approaches to facilitate experimental reproducibility.
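
    The information-theoretic framing mentioned above suggests, but does not spell out, measures such as the entropy of a user's visited-location distribution. The sketch below computes that standard quantity from the mobility-prediction literature; it is not necessarily the exact measure used in the thesis, and the example trajectory is made up.

        import math
        from collections import Counter

        def location_entropy(trajectory):
            """Shannon entropy (bits) of the empirical distribution of visited locations.
            This is the order-ignoring ("uncorrelated") entropy; lower values indicate
            a more predictable user."""
            counts = Counter(trajectory)
            n = len(trajectory)
            return -sum((c / n) * math.log2(c / n) for c in counts.values())

        print(location_entropy(["home", "work", "home", "gym", "home", "work"]))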

    AN EVALUATION OF RANDOMIZED ROUTING STRATEGIES FOR DECEPTION IN MOBILE NETWORKED CONTROL SYSTEMS

    Networked unmanned autonomous systems will increasingly be employed to support ground force operations. Approaches to collaborative control can find near-optimal position recommendations that optimize over system parameters such as sensing and communication to increase mission effectiveness. However, over time these recommendations can create predictable paths that may provide leading indications of the force’s operational intent. We assume that the adversary’s goal is to identify a ground force’s operational intent. By using randomized routing strategies to generate deception plans for unmanned systems against the adversary, this red methodology has the potential to change many aspects of military operational planning, including operational and strategic level planning and wargaming. This topic builds on research by L. Wigington in 2021, which developed an adversarial assessment of unmanned mobile networked control systems. Building on that prior research, this thesis applies, and potentially extends, those methodologies to analyzing adversarial behaviors and manipulating them in networked control systems (NCS) using randomized routing strategies.
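
    The thesis's specific routing strategies are not described in this abstract. A hypothetical sketch of the general idea is to sample the next waypoint from a softmax over mission-utility scores instead of always taking the optimum, so paths become harder for an observer to predict while remaining near-optimal; the function, waypoints and utilities below are illustrative assumptions.

        import math
        import random

        def randomized_next_waypoint(candidates, utility, temperature=1.0):
            """Sample the next waypoint from a softmax over mission-utility scores.
            Temperatures near zero approach the deterministic optimum; larger values
            trade mission utility for unpredictability (deception)."""
            scores = [utility(w) / temperature for w in candidates]
            m = max(scores)                                   # for numerical stability
            weights = [math.exp(s - m) for s in scores]
            return random.choices(candidates, weights=weights, k=1)[0]

        # Example: three candidate positions with notional utility values.
        waypoints = {"A": 0.9, "B": 0.8, "C": 0.5}
        print(randomized_next_waypoint(list(waypoints), waypoints.get, temperature=0.2))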

    Analyzing Avoidance: Judicial Strategy in Comparative Perspective

    Courts sometimes avoid deciding contentious issues. One prominent justification for this practice is that, by employing avoidance strategically, a court can postpone reaching decisions that might threaten its institutional viability. Avoidance creates delay, which can allow for productive dialogue with and among the political branches. That dialogue, in turn, may result in the democratic resolution of—or the evolution of popular societal consensus around—a contested question, relieving the court of its duty. Many scholars and judges assume that, by creating and deferring to this dialogue, a court can safeguard its institutional legitimacy and security. Accepting this assumption arguendo, this Article seeks to evaluate avoidance as it relates to dialogue. It identifies two key factors in the avoidance decision that might affect dialogue with the political branches: first, the timing of avoidance (i.e., when in the life cycle of a case does a high court choose to avoid); and, second, a court’s candor about the decision (i.e., to what degree does a court openly acknowledge its choice to avoid). The Article draws on a series of avoidance strategies from apex courts around the world to tease out the relationships among timing, candor, and dialogue. As the first study to analyze avoidance from a comparative perspective, the Article generates a new framework for assessing avoidance by highlighting the impact of timing on the quality of dialogue, the possible unintended consequences of candor, and the critical trade-offs between avoidance and power