
    Dynamic Analysis Techniques for Effective and Efficient Debugging

    Debugging is a tedious and time-consuming process for software developers. Therefore, providing effective and efficient debugging tools is essential for improving programmer productivity. Existing debugging tools suffer from various drawbacks: general-purpose debuggers provide little guidance for programmers in locating the source of a bug, while specialized debuggers require knowledge of the type of bug encountered. This dissertation makes several advances in debugging, leading to an effective, efficient, and extensible framework for interactive debugging of single-threaded programs and deterministic debugging of multithreaded programs. This dissertation presents the Qzdb debugger for single-threaded programs, which raises the abstraction level of debugging by introducing high-level and powerful state alteration and state inspection capabilities. Case studies on 5 real reported bugs in 5 popular real programs demonstrate its effectiveness. To support integration of specialized debugging algorithms into Qzdb, a new approach for constructing debuggers is developed that employs declarative specification of bug conditions and their root causes, and automatic generation of debugger code. Experiments show that about 3,300 lines of C code are generated automatically from only 8 lines of specification for 6 memory bugs. Thanks to the effective generated bug locators, for the 8 real-world bugs to which we have applied our approach, users have to examine just 1 to 16 instructions. To reduce the runtime overhead of dynamic analysis used during debugging, relevant input analysis is developed and employed to carry out input simplification and execution simplification, which reduce the length of the analyzed execution by reducing the input size and limiting the analysis to a subset of the execution. Experiments show that the relevant-input-analysis-based input simplification algorithm is both efficient and effective: it requires only 11% to 21% of the test runs needed by the standard delta debugging algorithm and generates even smaller inputs. Finally, to demonstrate that the above approach can also be used for debugging multithreaded programs, this dissertation presents DrDebug, a deterministic and cyclic debugging framework. DrDebug allows efficient debugging by tailoring the scope of replay to a buggy execution region and an execution slice of a buggy region. Case studies of real reported concurrency bugs show that the buggy execution region size is less than 1 million instructions and that the lengths of the buggy execution region and execution slice are less than 15% and 7% of the total execution, respectively.
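    For context on the baseline mentioned above, the sketch below shows the core loop of a standard delta-debugging (ddmin-style) input simplifier: split the failing input into chunks, try removing one chunk at a time, and keep any smaller input that still fails; every call to the failure predicate is one test run, which is exactly the cost the dissertation's relevant input analysis reduces. This is a generic C++ illustration with assumed names (ddmin, the toy fails predicate), not code from the dissertation.

        #include <algorithm>
        #include <functional>
        #include <iostream>
        #include <string>

        // Minimal ddmin-style sketch: repeatedly try dropping chunks of the input
        // and keep any reduction for which the failure still reproduces.
        std::string ddmin(std::string input, const std::function<bool(const std::string&)>& fails) {
            size_t n = 2;  // number of chunks the input is currently split into
            while (input.size() >= 2) {
                size_t chunk = (input.size() + n - 1) / n;
                bool reduced = false;
                for (size_t start = 0; start < input.size(); start += chunk) {
                    // Candidate = input with one chunk removed ("complement" test).
                    std::string candidate = input.substr(0, start) +
                                            input.substr(std::min(input.size(), start + chunk));
                    if (!candidate.empty() && fails(candidate)) {  // each call = one test run
                        input = candidate;                         // keep the smaller failing input
                        n = std::max<size_t>(2, n - 1);
                        reduced = true;
                        break;
                    }
                }
                if (!reduced) {
                    if (n >= input.size()) break;                  // already at finest granularity
                    n = std::min(input.size(), n * 2);             // refine granularity and retry
                }
            }
            return input;
        }

        int main() {
            // Hypothetical failure predicate: the "bug" triggers whenever the input
            // contains the substring "<>" anywhere.
            auto fails = [](const std::string& s) { return s.find("<>") != std::string::npos; };
            std::cout << ddmin("int main(){ return <> + 42; }", fails) << std::endl;  // prints "<>"
        }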

    Design of an integrated airframe/propulsion control system architecture

    The design of an integrated airframe/propulsion control system architecture is described. The design is based on a prevalidation methodology that considers both reliability and performance. A detailed account is given of the testing associated with a subset of the architecture, and the report concludes with general observations on applying the methodology to the architecture.

    Firmware Development and Integration for ALICE TPC and PHOS Front-end Electronics: A Trigger Based Readout and Control System operating in a Radiation Environment

    The readout electronics in PHOS and TPC - two of the major detectors of the ALICE experiment at the LHC - consist of a set of Front End Cards (FECs) that digitize, process and buffer the data from the detector sensors. The FECs are connected to a Readout Control Unit (RCU) via two sets of custom-made PCB backplanes. For PHOS, 28 FECs are connected to one RCU, while for TPC the number varies from 18 to 25 FECs depending on location. The RCU is in charge of the data readout, including reception and distribution of triggers and moving the data from the FECs to the Data Acquisition System; in addition, it performs low-level control tasks. The RCU consists of an RCU Motherboard that hosts a Detector Control System (DCS) board and a Source Interface Unit. The DCS board is an embedded computer running Linux that controls the readout electronics. All the mentioned devices are implemented in commercial-grade SRAM-based Field Programmable Gate Arrays (FPGAs). Even though these devices are not very radiation tolerant, they were chosen for their cost and flexibility and, most importantly, the possibility of easily upgrading the electronics in the future. Since physical shielding of the electronics is not possible in ALICE due to the architecture of the detector, radiation-related errors need to be handled with other techniques, such as firmware mitigation. The main objective of this thesis has been to develop firmware modules for the FPGAs residing in different parts of the readout electronics. Because of the flexibility of the designs, some of them have, with minor adaptations, been applied in other devices surrounding the readout electronics. Additionally, effort has been put into testing and integration of the system. In detail, the work presented in this thesis can be summarized as follows:
    - Firmware design for radiation environments. All firmware modules designed here are to be used in a radiation environment, so special precautions need to be taken. Additionally, a state-of-the-art solution has been designed for protecting the main FPGA on the RCU Motherboard against radiation-induced functional failures.
    - Implementation of Trigger Handling for the TPC/PHOS Readout Electronics. The triggers are received from the global trigger system via an optical link and are handled by an Application Specific Integrated Circuit (ASIC) on the DCS board. Because the DCS board may have occasional downtime due to radiation-related errors, a special interface module was designed for the main FPGA on the RCU Motherboard. This module decodes and verifies the information received from the trigger system. As it is a generic design, it has also been implemented as part of the BusyBox, an important device in the trigger path of the TPC and PHOS sub-detectors.
    - Testing and Verification of all firmware modules. All firmware modules have been extensively verified with computer simulation before being tested in real hardware.
    - Maintenance of the DCS board for TPC/PHOS and of the different FEE firmware modules in general.
    - System Integration and System Level Tests. A major contribution has been made in integrating and testing all the modules and sub-systems, both locally on the RCU and the BusyBox and in making all the devices work together on a larger scale.
    As the presented electronics are located in a radiation environment and are physically unavailable after commissioning, effort has been put into making designs that are reliable, scalable and possible to upgrade. This has been ensured by following a systematic design approach in which testability, version management and documentation are key elements. Parts of the work described in this thesis have been published and presented in international peer-reviewed publications and conferences.
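    The abstract does not spell out the mitigation techniques themselves, so as a purely illustrative aside: one common firmware-level defence against radiation-induced upsets in SRAM-based FPGAs is triple modular redundancy, where critical state is kept in three copies and a bitwise majority vote masks a single corrupted copy. The C++ sketch below models that voting step in software; it is a generic example and not necessarily the mechanism used in this thesis.

        #include <cstdint>
        #include <iostream>

        // Bitwise 2-of-3 majority vote: each output bit takes the value held by at
        // least two of the three redundant copies, masking a single upset copy.
        uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c) {
            return (a & b) | (a & c) | (b & c);
        }

        int main() {
            uint32_t golden = 0xCAFE0001;
            uint32_t copy_a = golden;
            uint32_t copy_b = golden ^ (1u << 7);  // simulate a single-event upset flipping bit 7
            uint32_t copy_c = golden;

            uint32_t voted = tmr_vote(copy_a, copy_b, copy_c);
            std::cout << std::hex << "voted = 0x" << voted
                      << (voted == golden ? " (upset masked)" : " (vote failed)") << std::endl;
            return 0;
        }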

    Static Analysis in Practice

    Static analysis tools search software looking for defects that may cause an application to deviate from its intended behavior. These include defects that compute incorrect values, cause runtime exceptions or crashes, expose applications to security vulnerabilities, or lead to performance degradation. In an ideal world, the analysis would precisely identify all possible defects. In reality, it is not always possible to infer the intent of a software component or code fragment, and static analysis tools sometimes output spurious warnings or miss important bugs. As a result, tool makers and researchers focus on developing heuristics and techniques to improve speed and accuracy. But, in practice, speed and accuracy are not sufficient to maximize the value received by software makers using static analysis: software engineering teams need to make static analysis an effective part of their regular process. In this dissertation, I examine the ways static analysis is used in practice by commercial and open source users. I observe that effectiveness is hampered not only by false warnings but also by true defects that do not affect software behavior in practice. Indeed, mature production systems are often littered with true defects that do not prevent them from functioning mostly correctly. To understand why this occurs, observe that developers inadvertently create both important and unimportant defects when they write software, but most quality assurance activities are directed at finding the important ones. By the time the system is mature, there may still be a few consequential defects that can be found by static analysis, but they are drowned out by the many true but low-impact defects that were never fixed. An exception to this rule is certain classes of subtle security, performance, or concurrency defects that are hard to detect without static analysis. Software teams can use static analysis to find defects very early in the process, when they are cheapest to fix, and in so doing increase the effectiveness of later quality assurance activities. But this effort comes with costs that must be managed to ensure static analysis is worthwhile. The cost effectiveness of static analysis also depends on the nature of the defect being sought, the nature of the application, the infrastructure supporting the tools, and the policies governing their use. Through this research, I interact with real users through surveys, interviews, lab studies, and community-wide reviews to discover their perspectives and experiences, and to understand the costs and challenges incurred when adopting static analysis tools. I also analyze the defects found in real systems and make observations about which ones are fixed, why some seemingly serious defects persist, and what considerations static analysis tools and software teams should make to increase effectiveness. Ultimately, my interaction with real users confirms that static analysis is well received and useful in practice, but the right environment is needed to maximize its return on investment.
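    To make the distinction between important and low-impact defects concrete, the hypothetical C++ fragment below contains the kind of true but low-impact defect discussed above: a static analyzer rightly reports a resource leak on an early-return path, yet a mature system may run for years without that path ever mattering in production. The function and scenario are invented for illustration, not drawn from the systems studied in the dissertation.

        #include <cstdio>
        #include <string>

        // A "true but low-impact" defect: `fp` leaks when `line` is empty.
        // The warning is correct, but if empty lines are rare and the process is
        // short-lived, the defect never visibly affects behavior in practice.
        bool append_audit_record(const std::string& path, const std::string& line) {
            std::FILE* fp = std::fopen(path.c_str(), "a");
            if (fp == nullptr) return false;

            if (line.empty()) return false;   // BUG: returns without std::fclose(fp)

            std::fputs(line.c_str(), fp);
            std::fputc('\n', fp);
            std::fclose(fp);
            return true;
        }

        int main() {
            append_audit_record("/tmp/audit.log", "login ok");  // normal path: no leak
            append_audit_record("/tmp/audit.log", "");           // early return: fp leaks
        }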

    Reader's Guide: A Foray into Violence, Trauma and Masculinity in In Our Time

    Modernism has been called “a reaction to the carnage and disillusionment of the First World War and a search for a new mode of art that would rescue civilization from its state of crisis after the war” (Lewis, 109). Hemingway attempts this rescue by re-thinking aspects of the novel that were taken for granted in earlier periods, just as the conventions of modern life were taken for granted pre-WWI. Furthermore, his work tries to rectify the dissonance between a pre- and post-war self through the exploration of social conventions relating to violence, trauma and masculinity.

    Enabling Usable and Performant Trusted Execution

    A plethora of major security incidents, in which personal identifiers belonging to hundreds of millions of users were stolen, demonstrates the importance of improving the security of cloud systems. To increase security in the cloud environment, where resource sharing is the norm, we need to rethink existing approaches from the ground up. This thesis analyzes the feasibility and security of trusted execution technologies as the cornerstone of secure software systems, to better protect users' data and privacy. Trusted Execution Environments (TEEs), such as Intel SGX, have the potential to minimize the Trusted Computing Base (TCB), but they also introduce many adoption challenges. Among these challenges are TEEs' significant impact on application performance and the non-trivial effort required to migrate legacy systems to run on these secure execution technologies. Other challenges include managing trustworthy state across a distributed system and ensuring that individual machines are resilient to micro-architectural attacks. In this thesis, I first characterize the performance bottlenecks imposed by SGX and suggest optimization strategies. I then address two main adoption challenges for existing applications: managing permissions across a distributed system and scaling SGX's mechanism for proving authenticity and integrity. I then analyze the resilience of trusted execution technologies to speculative-execution micro-architectural attacks, which put cloud infrastructure at risk. This analysis revealed a devastating security flaw in Intel's processors, known as Foreshadow/L1TF. Finally, I propose a new architectural design for out-of-order processors which defeats all known speculative execution attacks.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/155139/1/oweisse_1.pd

    Network-Wide Monitoring And Debugging

    Modern networks can encompass over 100,000 servers. Managing such an extensive network with a diverse set of network policies has become more complicated with the introduction of programmable hardware and distributed network functions. Furthermore, service level agreements (SLAs) require operators to maintain high performance and availability with low latencies. Therefore, it is crucial for operators to resolve any issues in networks quickly. The problems can occur at any layer of the stack: the network (load imbalance), the data plane (incorrect packet processing), the control plane (bugs in configuration), and the coordination among them. Unfortunately, existing debugging tools are not sufficient to monitor, analyze, or debug modern networks; either they lack visibility into the network, require manual analysis, or cannot check for some properties. These limitations arise from an outdated view of networks, i.e., that we can look at a single component in isolation. In this thesis, we describe a new approach that measures, understands, and debugs the network across devices and time. We also target modern stateful packet processing devices, namely programmable data planes and distributed network functions, as these are becoming an increasingly common part of the network. Our key insight is to leverage both in-network packet processing (to collect precise measurements) and out-of-network processing (to coordinate measurements and scale analytics). The resulting systems we design based on this approach can support testing and monitoring at data center scale, and can handle stateful data in the network. We automate the collection and analysis of measurement data to save operator time and take a step towards self-driving networks.
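    As a loose, hypothetical illustration of the in-network versus out-of-network split described above, the C++ sketch below keeps precise per-flow byte counters at each simulated switch and lets a separate collector pull and merge them for network-wide analysis. The names (Switch, Collector, the flow keys) are stand-ins and do not correspond to the systems built in this thesis.

        #include <cstdint>
        #include <iostream>
        #include <string>
        #include <unordered_map>

        // "In-network" part: each switch maintains precise per-flow byte counters,
        // standing in for counters kept in the data plane.
        struct Switch {
            std::unordered_map<std::string, uint64_t> flow_bytes;
            void on_packet(const std::string& flow_key, uint32_t bytes) { flow_bytes[flow_key] += bytes; }
        };

        // "Out-of-network" part: a collector pulls counters from many switches and
        // merges them, so the heavy analysis scales outside the data plane.
        class Collector {
        public:
            void pull(const Switch& sw) {
                for (const auto& kv : sw.flow_bytes) totals_[kv.first] += kv.second;
            }
            void report() const {
                for (const auto& kv : totals_)
                    std::cout << kv.first << " -> " << kv.second << " bytes observed across switches\n";
            }
        private:
            std::unordered_map<std::string, uint64_t> totals_;
        };

        int main() {
            Switch s1, s2;
            s1.on_packet("10.0.0.1->10.0.0.2:443", 1500);
            s2.on_packet("10.0.0.1->10.0.0.2:443", 900);  // same flow seen at another hop
            Collector c;
            c.pull(s1);
            c.pull(s2);
            c.report();
        }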

    Spacecraft Dormancy Autonomy Analysis for a Crewed Martian Mission

    Current concepts of operations for human exploration of Mars center on the staged deployment of spacecraft, logistics, and crew. Though most studies focus on the needs for human occupation of the spacecraft and habitats, these resources will spend most of their lifetime unoccupied. As such, it is important to identify the operational state of the unoccupied spacecraft or habitat, as well as to design the systems to enable the appropriate level of autonomy. Key goals for this study include providing a realistic assessment of what "dormancy" entails for human spacecraft, exploring gaps in the state of the art for autonomy in human spacecraft design, providing recommendations for investments in autonomous systems technology development, and developing architectural requirements for spacecraft that must be autonomous during dormant operations. The mission chosen as a reference is a crewed mission to Mars. In particular, this study focuses on the time that the spacecraft that carried humans to Mars spends dormant in Martian orbit while the crew carries out a surface mission. Communications constraints are assumed to be severe, with limited bandwidth and limited ability to send commands and receive telemetry. The assumptions made as part of this mission have close parallels with mission scenarios envisioned for dormant cis-lunar habitats that are stepping-stones to Mars missions. As such, the data in this report is expected to be broadly applicable to all dormant deep space human spacecraft.

    Evaluation of Dynamic Binary Instrumentation Approaches: Dynamic Binary Translation vs. Dynamic Probe Injection

    From web browsing to bank transactions, to data analysis and robot automation, just about any task necessitates or benefits from the use of software. Ensuring that a piece of software is effective requires profiling the program’s behavior to evaluate its performance, debugging the program to fix incorrect behaviors, and examining the program to detect security flaws. These tasks are made possible by instrumentation: the method of inserting code into a program to collect data about its behavior. Dynamic binary instrumentation (DBI) enables programmers to understand and reason about program behavior by inserting code into a binary during run time to collect relevant data; it is more flexible than static or source-code instrumentation, but incurs run-time overhead. This thesis attempts to extend the preexisting characterization of the tradeoffs between dynamic binary translation (DBT) and dynamic probe injection (DPI), two popular DBI approaches, using Pin and LiteInst as sample frameworks. It also describes extensions to the LiteInst framework that enable it to instrument function exits correctly. The evaluation involved using the two frameworks to instrument a set of SPEC CPU 2006 benchmarks for counting function entries alone, counting both function entries and exits, or dynamically generating a call graph. On these instrumentation tasks, Pin performed close to native execution time while LiteInst performed significantly slower. Exceptions to this observation, and analysis of the probe types used by LiteInst, suggest that LiteInst incurs significant overhead when executing a large number of probes.
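    For readers unfamiliar with the DBT side of the comparison, the sketch below is a minimal Pin tool in the spirit of the Pin kit's routine-counting samples, covering a task similar to the first one above: counting function (routine) entries. It must be compiled against the Pin kit and launched with the pin driver (for example, pin -t rtncount.so -- ./benchmark, with paths depending on the local installation); exact build details vary by Pin version, and this is an illustration rather than the instrumentation used in the thesis.

        #include <iostream>
        #include <map>
        #include <string>
        #include "pin.H"

        // Per-routine entry counters, keyed by routine name.
        static std::map<std::string, UINT64> rtnCounts;

        // Analysis routine: runs before every entry into an instrumented routine.
        VOID CountRtn(UINT64 *counter) { (*counter)++; }

        // Instrumentation routine: Pin calls this once per routine as images are loaded.
        VOID Routine(RTN rtn, VOID *v) {
            UINT64 *counter = &rtnCounts[RTN_Name(rtn)];  // map nodes are stable, so the pointer stays valid
            RTN_Open(rtn);
            RTN_InsertCall(rtn, IPOINT_BEFORE, (AFUNPTR)CountRtn, IARG_PTR, counter, IARG_END);
            RTN_Close(rtn);
        }

        // Called when the instrumented application exits: dump the counts.
        VOID Fini(INT32 code, VOID *v) {
            for (std::map<std::string, UINT64>::const_iterator it = rtnCounts.begin(); it != rtnCounts.end(); ++it)
                std::cerr << it->first << " " << it->second << std::endl;
        }

        int main(int argc, char *argv[]) {
            PIN_InitSymbols();                      // required so RTN_Name() can resolve symbol names
            if (PIN_Init(argc, argv)) return 1;
            RTN_AddInstrumentFunction(Routine, 0);
            PIN_AddFiniFunction(Fini, 0);
            PIN_StartProgram();                     // never returns
            return 0;
        }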