
    Contributions for improving debugging of kernel-level services in a monolithic operating system

    Despite the existence of an overwhelming amount of research on the quality of system software, operating systems are still plagued with reliability issues mainly caused by defects in kernel-level services such as device drivers and file systems. Studies have indeed shown that each release of the Linux kernel contains between 600 and 700 faults, and that the propensity of device drivers to contain errors is up to seven times higher than for any other part of the kernel. These numbers suggest that kernel-level service code is not sufficiently tested and that many faults remain unnoticed or are hard to fix by non-expert programmers, who account for the majority of service developers. This thesis proposes a new approach to the debugging and testing of kernel-level services, focused on the interaction between the services and the core kernel. The approach tackles the issue of safety holes in the implementation of kernel API functions. For Linux, we have built the automated approach Diagnosys, which relies on static analysis of kernel code to identify, categorize and expose the different safety holes of API functions that can turn into runtime faults when the functions are used in service code by developers with limited knowledge of the intricacies of kernel code. To illustrate our approach, we have implemented Diagnosys for Linux 2.6.32 and shown its benefits in supporting developers in their testing and debugging tasks.
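    To make the notion of a safety hole concrete, the sketch below shows, in plain C, the kind of unchecked pointer dereference inside an API function that such a static analysis would flag. All names (api_enable_irq, find_ctx, device_ctx) are invented for illustration and do not come from the thesis.

```c
#include <stdio.h>

/* Hypothetical device context; names are illustrative only. */
struct device_ctx {
    int irq;
};

/* "API" function exhibiting an entry safety hole: it dereferences
 * ctx without a NULL check, so a careless caller crashes. */
static int api_enable_irq(struct device_ctx *ctx)
{
    return ctx->irq;   /* unchecked dereference: the safety hole */
}

/* Lookup that can fail, as many kernel lookup functions do. */
static struct device_ctx *find_ctx(int id)
{
    static struct device_ctx ctx = { 42 };
    return (id == 0) ? &ctx : NULL;
}

int main(void)
{
    /* Safe use: the lookup succeeds. */
    printf("irq = %d\n", api_enable_irq(find_ctx(0)));

    /* Unsafe use: find_ctx(1) returns NULL, and the API function
     * would dereference it -- in kernel code, an oops. */
    /* api_enable_irq(find_ctx(1)); */
    return 0;
}
```

    In kernel code the equivalent mistake turns a recoverable lookup failure in service code into a crash, which is why exposing such holes at the API boundary helps non-expert developers.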

    The 10th Jubilee Conference of PhD Students in Computer Science


    Effective Detection of Sleep-in-Atomic-Context Bugs in the Linux Kernel

    Atomic context is an execution state of the Linux kernel, in which kernel code monopolizes a CPU core. In this state, the Linux kernel may only perform operations that cannot sleep, as otherwise a system hang or crash may occur. We refer to this kind of concurrency bug as a sleep-in-atomic-context (SAC) bug. In practice, SAC bugs are hard to find, as they do not cause problems in all executions. In this paper, we propose a practical static approach named DSAC to effectively detect SAC bugs in the Linux kernel. DSAC uses three key techniques: (1) a summary-based analysis to identify the code that may be executed in atomic context, (2) a connection-based alias analysis to identify the set of functions referenced by a function pointer, and (3) a path-check method to filter out repeated reports and false bugs. We evaluate DSAC on Linux 4.17 and find 1159 SAC bugs. We manually check all the bugs and find that 1068 are real. We have randomly selected 300 of the real bugs and sent them to kernel developers; 220 of these bugs have been confirmed, and 51 of our patches fixing 115 bugs have been applied.
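    As a concrete illustration of the bug class, the fragment below shows the canonical SAC pattern in kernel-style C: a spinlock puts the calling CPU in atomic context, yet kmalloc() with GFP_KERNEL may sleep inside the critical section. The APIs shown (spin_lock, kmalloc, GFP_KERNEL, GFP_ATOMIC) are real Linux kernel interfaces, but the fragment is a minimal sketch rather than a complete, buildable module, and it is not an example taken from the paper.

```c
#include <linux/spinlock.h>
#include <linux/slab.h>

static DEFINE_SPINLOCK(dev_lock);

/* Buggy: spin_lock() enters atomic context, but kmalloc() with
 * GFP_KERNEL may sleep while waiting for memory -- a SAC bug. */
static void *buggy_alloc(size_t len)
{
    void *buf;

    spin_lock(&dev_lock);
    buf = kmalloc(len, GFP_KERNEL);   /* may sleep: SAC bug */
    spin_unlock(&dev_lock);
    return buf;
}

/* Fixed: GFP_ATOMIC never sleeps and is safe under a spinlock. */
static void *fixed_alloc(size_t len)
{
    void *buf;

    spin_lock(&dev_lock);
    buf = kmalloc(len, GFP_ATOMIC);
    spin_unlock(&dev_lock);
    return buf;
}
```

    The bug only manifests when the allocator actually has to sleep, which is why such defects rarely show up in testing and motivate static detection.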

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach, consisting of four distinct steps, as well as specific action items to be performed for every step. The discussion also covers language and tool support and the challenges arising from the transformation.

    From experiment to design – fault characterization and detection in parallel computer systems using computational accelerators

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long-latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor with a set of new instructions that support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation is all the more important because heterogeneous processors have become an essential component of state-of-the-art supercomputers: GPUs were used in three of the five fastest supercomputers operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment) and injected faults into various types of program state, including dynamically allocated memory (to be spatially comprehensive). In GPUs, we used fault injection studies to demonstrate the importance of detecting silent data corruption (SDC) errors, which are mainly due to the lack of fine-grained protection and the massive use of fault-insensitive data. This dissertation also presents transparent fault tolerance frameworks and techniques that are directly applicable to hybrid computers built using only commercial off-the-shelf hardware components. It shows that, by developing an understanding of the failure characteristics and error propagation paths of target programs, we were able to create fault tolerance frameworks and techniques that can quickly detect and recover from hardware faults with low performance and hardware overheads.
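    To give a flavour of what a source-level error detector looks like, the sketch below shows a range-check detector of the kind a source-to-source translator might insert after a computation to catch silent data corruption. The invariant, bounds, and names are invented for illustration and are not taken from the dissertation.

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Detector: fail-stop on silent data corruption (SDC) when a value
 * leaves the range established for it. Hypothetical example. */
static void check_range(double v, double lo, double hi, const char *what)
{
    if (isnan(v) || v < lo || v > hi) {
        fprintf(stderr, "SDC detector: %s out of range (%g)\n", what, v);
        abort();   /* stop instead of propagating corruption */
    }
}

int main(void)
{
    double acc = 0.0;

    for (int i = 1; i <= 100; i++) {
        acc += 1.0 / ((double)i * i);
        /* Inserted detector: partial sums of 1/i^2 are increasing
         * and bounded by pi^2/6, so a bit flip in acc is likely to
         * violate this invariant. */
        check_range(acc, 0.0, 1.6449340668482264, "acc");
    }
    printf("sum = %.12f\n", acc);
    return 0;
}
```

    The design trade-off such detectors navigate is coverage versus overhead: a tight invariant catches more corruptions but costs a check per iteration, which is why strategic placement matters.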

    Towards the design of efficient error detection mechanisms

    The pervasive nature of modern computer systems has led to an increase in our reliance on such systems to provide correct and timely services. Moreover, as the functionality of computer systems is increasingly defined in software, it is imperative that software be dependable. It has previously been shown that a fault-intolerant software system can be made fault-tolerant through the design and deployment of software mechanisms implementing abstract artefacts known as error detection mechanisms (EDMs) and error recovery mechanisms (ERMs); hence, the design of these components is central to the design of dependable software systems. The EDM design problem, which relates to the construction of a boolean predicate over a set of program variables, is inherently difficult, with current approaches relying on system specifications and the experience of software engineers. As this process necessarily entails the identification and incorporation of program variables by an error detection predicate, this thesis addresses the EDM design problem from a novel variable-centric perspective, with the research presented supporting the thesis that, where it exists under the assumed system model, an efficient EDM consists of a set of critical variables. In particular, this research proposes (i) a metric suite that can be used to generate a relative ranking of the program variables in a software system with respect to their criticality, (ii) a systematic approach for the generation of highly efficient error detection predicates for EDMs, and (iii) an approach for dependability enhancement based on the protection of critical variables using software wrappers that implement error detection and correction predicates known to be efficient. This research substantiates the thesis that an efficient EDM contains a set of critical variables on the basis that (i) the proposed metric suite is able, through application of an appropriate threshold, to identify critical variables, (ii) efficient EDMs can be constructed based only on the critical variables identified by the metric suite, and (iii) the criticality of the identified variables can be shown to extend across a software module, such that an efficient EDM designed for that module should seek to determine the correctness of those variables.
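    As a minimal illustration of the idea, the sketch below implements an EDM as a boolean predicate over a small set of critical variables, checked at a single program point. The module state, variable names, and bounds are hypothetical and are not drawn from the thesis.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical module state; only the "critical" variables (as a
 * metric suite might rank them) appear in the EDM predicate. */
struct flight_state {
    int mode;                /* 0..3 are the only valid modes */
    double altitude_m;       /* must stay non-negative */
    double climb_rate_mps;   /* physically bounded */
};

/* EDM: a boolean predicate that is true iff the critical
 * variables are mutually consistent and within bounds. */
static bool edm_ok(const struct flight_state *s)
{
    return s->mode >= 0 && s->mode <= 3
        && s->altitude_m >= 0.0
        && s->climb_rate_mps >= -120.0 && s->climb_rate_mps <= 120.0;
}

int main(void)
{
    struct flight_state s = { .mode = 2, .altitude_m = 812.0,
                              .climb_rate_mps = 4.5 };

    if (!edm_ok(&s)) {
        fprintf(stderr, "EDM: error detected, invoking recovery\n");
        return 1;            /* hand off to an ERM in a real system */
    }
    printf("state accepted\n");
    return 0;
}
```

    Restricting the predicate to the highest-ranked variables is what keeps the EDM efficient: it bounds both the evaluation cost and the number of variables the wrapper must monitor.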

    New Trends in the Use of Artificial Intelligence for the Industry 4.0

    Industry 4.0 is based on the cyber-physical transformation of processes, systems and methods applied in the manufacturing sector, and on their autonomous and decentralized operation. Industry 4.0 reflects that the industrial world is at the beginning of the so-called Fourth Industrial Revolution, characterized by a massive interconnection of assets and the integration of human operators with the manufacturing environment. In this regard, data analytics and, specifically, artificial intelligence are the vehicular technologies towards the next generation of smart factories. The chapters in this book cover a diversity of current and new developments in the use of artificial intelligence in the industrial sector, seen from the point of view of the fourth industrial revolution: cyber-physical applications, artificial intelligence technologies and tools, the Industrial Internet of Things, and data analytics. The book contains high-quality chapters presenting original research results and literature reviews of exceptional merit. It thus aims to contribute to the literature on this topic and to acquaint readers with current and new trends in the use of artificial intelligence for Industry 4.0.

    Towards a systematic security evaluation of the automotive Bluetooth interface

    In-cabin connectivity and its enabling technologies have increased dramatically in recent years. Security was not considered an essential property, a mind-set that has shifted significantly due to the appearance of demonstrated vulnerabilities in these connected vehicles. Connectivity allows the possibility that an external attacker may compromise the security, and therefore the safety, of the vehicle; many exploits have already been demonstrated in the literature. One of the most pervasive connective technologies is Bluetooth, a short-range wireless communication technology. Security issues with this technology are well documented, albeit in other domains. A threat intelligence study was carried out to substantiate this motivation; it finds that, while the general trend is towards increasing (relative) security in automotive Bluetooth implementations, there is still a significant technological lag compared to more traditional computing systems. The main contribution of this thesis is a framework for the systematic security evaluation of the automotive Bluetooth interface from a black-box perspective (as technical specifications were loose or absent). Tests were performed both through the vehicle’s native connection and through Bluetooth-enabled aftermarket devices attached to the vehicle. The framework is supported through the use of attack trees and the principles outlined in the Penetration Testing Execution Standard. Furthermore, a proof-of-concept tool was developed to implement the framework in a semi-automated manner and to carry out testing on real-world vehicles. The tool also allows for severity classification of the results acquired, as outlined in the SAE J3061 Cybersecurity Guidebook for Cyber-Physical Vehicle Systems; results of the severity classification are validated through domain expert review. Finally, the thesis explores how formal methods could be integrated into the framework and tool to improve confidence and rigour, and to demonstrate how future design iterations could be improved. In conclusion, the findings of the threat intelligence study show a need for systematic security testing. The systematic evaluation and the developed tool successfully found weaknesses both in the automotive Bluetooth interface and in the vehicle itself through Bluetooth-enabled aftermarket devices. Furthermore, the results of applying this framework provide a focus for counter-measure development and could be used as evidence in a security assurance case. The evaluation framework also allows formal methods to be introduced for added rigour and confidence; demonstrations of how this might be performed were presented with case studies. Future recommendations include using this framework with more test vehicles and expanding the existing attack trees that form the heart of the evaluation. Further work on the tool chain would also be desirable; this would improve the accuracy of any testing or modelling required and would take automation of the entire process further.
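    To illustrate the attack-tree structure underpinning such an evaluation, the sketch below models attack goals as AND/OR nodes whose leaves record the outcome of individual tests. The node labels and the fixed four-child limit are invented for brevity and do not reflect the actual trees or tool from the thesis.

```c
#include <stdbool.h>
#include <stdio.h>

enum gate { LEAF, AND, OR };

/* One node of an attack tree: either a leaf test result, or an
 * AND/OR combination of sub-goals. */
struct attack_node {
    const char *goal;
    enum gate gate;
    bool achieved;                 /* leaf result from a test run */
    struct attack_node *child[4];  /* up to 4 children for brevity */
};

/* A goal is reached if its leaf test succeeded, or if its AND/OR
 * combination of sub-goals is satisfied. */
static bool reached(const struct attack_node *n)
{
    if (n->gate == LEAF)
        return n->achieved;

    bool all = true, any = false;
    for (int i = 0; i < 4 && n->child[i]; i++) {
        bool r = reached(n->child[i]);
        all = all && r;
        any = any || r;
    }
    return (n->gate == AND) ? all : any;
}

int main(void)
{
    struct attack_node pair  = { "force re-pairing", LEAF, true,  {0} };
    struct attack_node sniff = { "sniff link key",   LEAF, false, {0} };
    struct attack_node root  = { "hijack Bluetooth session", AND, false,
                                 { &pair, &sniff, 0, 0 } };

    printf("root goal reached: %s\n", reached(&root) ? "yes" : "no");
    return 0;
}
```

    Evaluating the tree after each test run shows which branches of the attack surface remain open, which is what makes the structure useful for prioritizing counter-measure development.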