
    Dependability where the mobile world meets the enterprise world

    As we move toward increasingly larger scales of computing, the complexity of systems and networks has increased manifold, leading to massive failures of cloud providers (Amazon CloudFront, November 2014) and geographically localized outages of cellular services (T-Mobile, June 2014). In this dissertation, we investigate the dependability aspects of two of the most prevalent computing platforms today, namely smartphones and cloud computing. These two seemingly disparate platforms are part of a cohesive story: they interact to provide end-to-end services that are increasingly delivered over mobile platforms, examples being iCloud, Google Drive, and their smartphone counterparts, iPhone and Android.

    In one of the early works on characterizing failures in dominant mobile OSes, we analyzed the bug repositories of Android and Symbian and found similarities in their failure modes [ISSRE2010]. We also presented a classification of root causes and quantified the impact of the ease of customizing smartphones on system reliability. Our evaluation of Inter-Component Communication in Android [DSN2012] shows an alarming number of exception-handling errors, where a phone may be crashed by passing it malformed component-invocation messages, even from unprivileged applications. In this work, we also suggest language extensions that can mitigate these problems.

    Mobile applications today are increasingly being used to interact with enterprise-class web services commonly hosted in virtualized environments. Virtualization suffers from the problem of imperfect performance isolation, where contention for low-level hardware resources can impact application performance. Through a set of rigorous experiments in a private cloud testbed and in EC2, we show that interference-induced performance degradation is a reality. Our experiments have also shown that the optimal configuration settings for web servers change during such phases of interference. Based on this observation, we design and implement the IC2 engine, which can mitigate the effects of interference by reconfiguring web server parameters [MW2014]. We further improve IC2 by incorporating it into a two-level configuration engine, named ICE, for managing web server clusters [ICAC2015]. Our evaluations show that, compared to an interference-agnostic configuration, IC2 can improve the response time of web servers by up to 40%, while ICE can improve response time by up to 94% during phases of interference.
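
    To make the reconfiguration idea concrete, the following is a minimal sketch of an interference-aware control loop in the spirit of IC2: monitor response time and switch between two web-server parameter profiles when an interference phase is detected. The parameter names (MaxClients, KeepAliveTimeout), the 2x-slowdown heuristic, and the simulated trace are illustrative assumptions, not details taken from the dissertation.

        # Hypothetical sketch of interference-aware web-server reconfiguration,
        # inspired by the IC2 idea summarized above (not the actual system).
        # Parameter names, thresholds, and the trace are illustrative only.

        from typing import Callable, Dict, Iterable

        NORMAL = {"MaxClients": 256, "KeepAliveTimeout": 5}     # interference-free profile
        DEGRADED = {"MaxClients": 64, "KeepAliveTimeout": 1}    # conservative profile

        def choose_config(response_ms: float, baseline_ms: float) -> Dict[str, int]:
            # Treat a sustained 2x slowdown over the interference-free baseline
            # as an interference phase and switch to the conservative profile.
            return DEGRADED if response_ms > 2.0 * baseline_ms else NORMAL

        def control_loop(samples: Iterable[float], baseline_ms: float,
                         apply: Callable[[Dict[str, int]], None]) -> None:
            current = None
            for rt in samples:                     # one sample per control interval
                target = choose_config(rt, baseline_ms)
                if target is not current:          # reconfigure only on a phase change
                    apply(target)
                    current = target

        if __name__ == "__main__":
            # Simulated response-time trace (ms): normal, interference, recovery.
            trace = [40, 45, 50, 120, 130, 110, 48, 42]
            control_loop(trace, baseline_ms=45.0,
                         apply=lambda cfg: print("reconfigure:", cfg))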

    Detecting feature influences to quality attributes in large and partially measured spaces using smart sampling and dynamic learning

    Emergent application domains (e.g., Edge Computing/Cloud/B5G systems) are too complex to be built manually. They are characterised by high variability and are modelled by large Variability Models (VMs), leading to large configuration spaces. Due to the high number of variants present in such systems, it is challenging to find the best-ranked product regarding particular Quality Attributes (QAs) in a short time. Moreover, measuring QAs is sometimes not trivial, requiring a lot of time and resources, as is the case for the energy footprint of software systems, the focus of this paper. Hence, we need a mechanism to analyse how features and their interactions influence energy footprint without measuring all configurations. While practical, sampling and predictive techniques base their accuracy on uniform spaces or some initial domain knowledge, which are not always available. Indeed, analysing the energy footprint of products in large configuration spaces raises specific requirements that we explore in this work. This paper presents SAVRUS (Smart Analyser of Variability Requirements in Unknown Spaces), an approach for sampling and dynamic statistical learning over large and partially QA-measured spaces that does not rely on initial domain knowledge. SAVRUS reports the degree to which features and pairwise interactions influence a particular QA, like energy efficiency. We validate and evaluate SAVRUS with a selection of comparable systems, which define large search spaces containing scattered measurements. Funding for open access charge: Universidad de Málaga / CBUA. This work is supported by the European Union's H2020 research and innovation programme under grant agreement DAEMON H2020-101017109, by the projects IRIS PID2021-122812OB-I00 (co-financed by FEDER funds), Rhea P18-FR-1081 (MCI/AEI/FEDER, UE), and LEIA UMA18-FEDERIA-157, and by the PRE2019-087496 grant from the Ministerio de Ciencia e Innovación, Spain.
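
    The statistical core of the problem, estimating how much individual features and pairwise interactions contribute to a measured QA from only a partial sample of configurations, can be sketched with a plain linear model. This is a generic illustration under our own assumptions (binary features, least-squares fit, synthetic energy values); it is not the SAVRUS algorithm itself.

        # Generic sketch: estimate how single features and pairwise interactions
        # influence a measured quality attribute (e.g., energy) from a partial
        # sample of configurations. This illustrates the underlying idea of
        # influence estimation only; it is not the SAVRUS algorithm.

        import itertools
        import numpy as np

        def influence_estimates(configs: np.ndarray, qa: np.ndarray, names: list[str]) -> dict[str, float]:
            """configs: (n_samples, n_features) 0/1 matrix of measured configurations.
            qa: (n_samples,) measured quality-attribute values (e.g., Joules).
            Returns {term: coefficient} for single features and pairwise interactions."""
            n, k = configs.shape
            pairs = list(itertools.combinations(range(k), 2))
            # Design matrix: intercept | single features | pairwise products.
            cols = [np.ones(n)] + [configs[:, i] for i in range(k)] \
                 + [configs[:, i] * configs[:, j] for i, j in pairs]
            X = np.column_stack(cols)
            coef, *_ = np.linalg.lstsq(X, qa, rcond=None)      # least-squares fit
            terms = ["base"] + names + [f"{names[i]}*{names[j]}" for i, j in pairs]
            return dict(zip(terms, coef))

        if __name__ == "__main__":
            # Toy example: 3 binary features, 7 measured configurations (synthetic values).
            X = np.array([[0,0,0], [1,0,0], [0,1,0], [0,0,1], [1,1,0], [1,0,1], [1,1,1]])
            y = np.array([10.0, 14.0, 12.0, 11.0, 19.0, 15.5, 21.0])
            for term, w in influence_estimates(X, y, ["cache", "ssl", "compress"]).items():
                print(f"{term:>15}: {w:+.2f}")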

    Hardware-Assisted Dependable Systems

    Unpredictable hardware faults and software bugs lead to application crashes, incorrect computations, unavailability of internet services, data losses, malfunctioning components, and, consequently, financial losses or even loss of life. In particular, faults in microprocessors (CPUs) and memory corruption bugs are among the major unresolved issues of today. CPU faults may result in benign crashes and, more problematically, in silent data corruptions that can lead to catastrophic consequences, silently propagating from component to component and finally shutting down the whole system. Similarly, memory corruption bugs (memory-safety vulnerabilities) may result in a benign application crash but may also be exploited by a malicious hacker to gain control over the system or leak confidential data. Both classes of errors are notoriously hard to detect and tolerate. The usual mitigation strategy is to apply ad-hoc local patches: checksums to protect specific computations against hardware faults and bug fixes to protect programs against known vulnerabilities. This strategy is unsatisfactory since it is prone to errors, requires significant manual effort, and protects only against anticipated faults. At the other extreme, Byzantine Fault Tolerance solutions defend against all kinds of hardware and software errors, but are prohibitively expensive in terms of resources and performance overhead. In this thesis, we examine and propose five techniques to protect against hardware CPU faults and software memory-corruption bugs. All these techniques are hardware-assisted: they use recent advancements in CPU designs and modern CPU extensions. Three of these techniques target hardware CPU faults and rely on specific CPU features: ∆-encoding efficiently utilizes instruction-level parallelism of modern CPUs, Elzar re-purposes Intel AVX extensions, and HAFT builds on Intel TSX instructions. The remaining two target software bugs: SGXBounds detects vulnerabilities inside Intel SGX enclaves, and "MPX Explained" analyzes the recent Intel MPX extension to protect against buffer overflow bugs. Our techniques achieve three goals: transparency, practicality, and efficiency. All our systems are implemented as compiler passes which transparently harden unmodified applications against hardware faults and software bugs. They are practical since they rely on commodity CPUs and require no specialized hardware or operating system support. Finally, they are efficient because they use hardware assistance in the form of CPU extensions to lower performance overhead.
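
    As a flavor of the encoding-based direction, AN-codes (the arithmetic codes underlying ∆-encoding-style schemes) multiply every value by a constant A; additions preserve divisibility by A, so a value corrupted by a fault fails the residue check. The sketch below illustrates only this principle in plain Python; the systems described above are compiler passes operating on machine instructions, and the constant chosen here is purely illustrative.

        # Simplified illustration of AN-encoding, the arithmetic code underlying
        # ∆-encoding-style fault detection: values are multiplied by a constant A,
        # addition preserves divisibility by A, and a corrupted value fails the
        # residue check. Real systems implement this as a compiler pass over
        # machine instructions; the constant below is purely illustrative.

        A = 58659

        def encode(x: int) -> int:
            return x * A

        def decode(xc: int) -> int:
            if xc % A != 0:                  # residue check failed: value corrupted
                raise RuntimeError("fault detected (invalid code word)")
            return xc // A

        def add_encoded(ac: int, bc: int) -> int:
            # Addition works directly on encoded operands: A*a + A*b == A*(a + b).
            return ac + bc

        if __name__ == "__main__":
            total = add_encoded(encode(21), encode(21))
            print("sum:", decode(total))                 # 42

            # Simulate a transient fault flipping one bit of the encoded result.
            faulty = total ^ (1 << 7)
            try:
                decode(faulty)
            except RuntimeError as err:
                print("detected:", err)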

    Analysis and Transformation of Configurable Systems

    Static analysis tools and transformation engines for source code belong to the standard equipment of a software developer. Their use simplifies a developer's everyday work of maintaining and evolving software systems significantly and, hence, accounts for much of a developer's programming efficiency and productivity. This is also beneficial from a financial point of view, as programming errors are detected and avoided early in the development process; thus, the use of static analysis tools reduces the overall software-development costs considerably. In practice, software systems are often developed as configurable systems to account for different requirements of application scenarios and use cases. To implement configurable systems, developers often use compile-time implementation techniques, such as preprocessors with #ifdef directives. Configuration options control the inclusion and exclusion of #ifdef-annotated source code, and their selection/deselection serves as input for generating tailor-made system variants on demand. Existing configurable systems, such as the Linux kernel, often provide thousands of configuration options, forming a huge configuration space with billions of system variants. Unfortunately, existing tool support cannot handle the myriads of system variants that can typically be derived from a configurable system. Analysis and transformation tools are not prepared for variability in source code and, hence, may process it incorrectly, resulting in incomplete and often broken tool support. We challenge the way configurable systems are analyzed and transformed by introducing variability-aware static analysis tools and a variability-aware transformation engine for configurable-systems development. The main idea of such tool support is to exploit commonalities between system variants, reducing the effort of analyzing and transforming a configurable system. In particular, we develop novel approaches for analyzing the myriads of system variants and compare them to state-of-the-art analysis approaches (namely sampling). The comparison shows that variability-aware analysis is complete (with respect to covering the whole configuration space), efficient (it outperforms some of the sampling heuristics), and scales even to large software systems. We demonstrate that variability-aware analysis is practical even for non-trivial case studies, such as the Linux kernel. On top of variability-aware analysis, we develop a transformation engine for C which respects the variability induced by the preprocessor. The engine provides three common refactorings (rename identifier, extract function, and inline function) and overcomes the shortcomings (completeness, use of heuristics, and scalability issues) of existing engines, while still being semantics-preserving with respect to all variants and fast, providing an instantaneous user experience. To validate semantics preservation, we extend a standard testing approach for refactoring engines with variability and show in real-world case studies the effectiveness and scalability of our engine. In the end, our analysis and transformation techniques show that configurable systems can be analyzed and transformed efficiently (even for large-scale systems), providing the same guarantees for configurable systems as for standard systems in terms of detecting and avoiding programming errors.
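
    The central idea of variability-aware analysis, annotating each code fragment with the presence condition under which it is compiled and reasoning about all variants symbolically instead of enumerating them, can be sketched briefly. The example below is a hypothetical, heavily simplified def-use check with presence conditions restricted to conjunctions of options; the actual tools analyze real C code and reason about arbitrary Boolean presence conditions.

        # Hypothetical, heavily simplified illustration of variability-aware
        # analysis: every code fragment carries the presence condition of its
        # #ifdef block, and a use of a variable is only safe if its presence
        # condition implies the presence condition of a definition. Presence
        # conditions are restricted to conjunctions of options here; real tools
        # handle arbitrary Boolean conditions (e.g., via SAT solving).

        from dataclasses import dataclass

        Cond = frozenset  # conjunction of required configuration options

        @dataclass
        class Fragment:
            cond: Cond           # presence condition of the fragment
            defines: set[str]    # variables defined in the fragment
            uses: set[str]       # variables used in the fragment

        def implies(use_cond: Cond, def_cond: Cond) -> bool:
            # For conjunctions, "use implies def" means every option required
            # by the definition is also required at the use site.
            return def_cond <= use_cond

        def undefined_uses(fragments: list[Fragment]) -> list[tuple[str, list[str]]]:
            """Report uses that are not covered by a definition in every variant."""
            errors = []
            for frag in fragments:
                for var in frag.uses:
                    covered = any(var in d.defines and implies(frag.cond, d.cond)
                                  for d in fragments)
                    if not covered:
                        errors.append((var, sorted(frag.cond)))
            return errors

        if __name__ == "__main__":
            code = [
                # 'net_stats' is defined only under CONFIG_NET ...
                Fragment(Cond({"CONFIG_NET"}), defines={"net_stats"}, uses=set()),
                # ... but used unconditionally, so some variants break:
                Fragment(Cond(), defines=set(), uses={"net_stats"}),
            ]
            for var, cond in undefined_uses(code):
                print(f"'{var}' may be undefined when used under condition {cond}")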

    LLM for SoC Security: A Paradigm Shift

    As the ubiquity and complexity of system-on-chip (SoC) designs increase across electronic devices, the task of incorporating security into an SoC design flow poses significant challenges. Existing security solutions are inadequate to provide effective verification of modern SoC designs due to their limitations in scalability, comprehensiveness, and adaptability. On the other hand, Large Language Models (LLMs) are celebrated for their remarkable success in natural language understanding, advanced reasoning, and program synthesis tasks. Recognizing an opportunity, our research delves into leveraging the emergent capabilities of Generative Pre-trained Transformers (GPTs) to address the existing gaps in SoC security, aiming for a more efficient, scalable, and adaptable methodology. By integrating LLMs into the SoC security verification paradigm, we open a new frontier of possibilities and challenges to ensure the security of increasingly complex SoCs. This paper offers an in-depth analysis of existing works, showcases practical case studies, demonstrates comprehensive experiments, and provides useful guidelines. We also present the achievements, prospects, and challenges of employing LLMs in different SoC security verification tasks.
    Comment: 42 pages
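
    For a concrete flavor of how an LLM could be dropped into such a verification flow, the sketch below asks a chat model to review a small Verilog module for a planted weakness and to propose an assertion. This is a hypothetical illustration assuming the OpenAI Python client; the prompt, the model name, and the RTL snippet are our own placeholders rather than artifacts from the paper.

        # Hypothetical illustration of LLM-assisted security review of an RTL
        # snippet. Assumes the OpenAI Python client and an OPENAI_API_KEY in the
        # environment; the model name, prompt, and Verilog are placeholders and
        # are not taken from the paper.

        from openai import OpenAI

        RTL_SNIPPET = """
        module lock_ctrl(input clk, input rst, input [7:0] key, output reg unlocked);
          // Debug backdoor left in the design (planted weakness for this demo):
          always @(posedge clk)
            if (rst) unlocked <= 0;
            else if (key == 8'hA5 || key == 8'hFF) unlocked <= 1;
        endmodule
        """

        PROMPT = (
            "You are a hardware security reviewer. Identify potential security "
            "weaknesses (e.g., debug backdoors, missing reset of secrets) in the "
            "following Verilog module and propose one SystemVerilog assertion "
            "that would flag the issue during simulation:\n" + RTL_SNIPPET
        )

        def review_rtl() -> str:
            client = OpenAI()  # reads OPENAI_API_KEY from the environment
            resp = client.chat.completions.create(
                model="gpt-4o",                  # placeholder model choice
                messages=[{"role": "user", "content": PROMPT}],
                temperature=0.2,                 # keep the review fairly deterministic
            )
            return resp.choices[0].message.content

        if __name__ == "__main__":
            print(review_rtl())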