
    Doctor of Philosophy

    Get PDF
    Current scaling trends in transistor technology, in pursuit of larger component counts and improved power efficiency, are making hardware increasingly less reliable. Extreme transistor miniaturization makes it easier to flip a bit stored in memory elements built from these transistors. Soft errors, transient bit-flips caused by alpha particles or cosmic rays striking those elements, have therefore become one of the major impediments to system resilience as we move towards exascale computing. A soft error that escapes the hardware layer may silently corrupt the runtime data of a program, producing silent data corruption in its output, and because soft errors are transient, their origins are notoriously hard to trace. Techniques to enhance system resilience therefore hinge on efficient error detectors with high detection rates, low false-positive rates, and low computational overhead, as well as on a flexible infrastructure for simulating realistic soft error models so that newly developed detectors can be evaluated effectively. In this work, we present a set of techniques for efficiently detecting soft errors affecting control flow, data, and structured address computations in an application, and we evaluate their efficacy on a collection of benchmarks through fault-injection-driven studies. As an important prerequisite, we also introduce two new LLVM-based fault injectors, KULFI and VULFI, geared towards scalar and vector architectures, respectively. Through this work, we aim to contribute to the system resilience community by making our research tools (error detectors and fault injectors) publicly available.
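
    The following is a minimal, hypothetical Java sketch of the kind of transient bit-flip such fault injectors emulate; it is illustrative only, not the KULFI/VULFI implementation (which operates at the LLVM IR level), and the class and method names are invented for this example.

        import java.util.Random;

        // Hypothetical sketch: emulate a single-event upset by flipping one
        // randomly chosen bit of a 32-bit value, as a fault injector might do.
        public class BitFlipSketch {
            private static final Random RNG = new Random();

            // Return the value with exactly one bit inverted.
            static int injectSingleBitFlip(int value) {
                int bit = RNG.nextInt(32);          // pick a bit position 0..31
                return value ^ (1 << bit);          // XOR flips that bit
            }

            public static void main(String[] args) {
                int original = 42;
                int corrupted = injectSingleBitFlip(original);
                System.out.printf("original=%d corrupted=%d%n", original, corrupted);
            }
        }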

    Master of Science

    Get PDF
    To minimize resource consumption and maximize performance, computer architecture research has been investigating approaches that may compute inaccurate solutions. Such hardware inaccuracies may induce a wide variety of program behaviors which are not obs…

    Progress Report: 1991-1994

    Get PDF

    Energy Complexity for Sorting Algorithms in Java

    Full text link
    This study extends the concept of time complexity to energy, i.e., energy complexity, by showing a strong correlation between time complexity and energy consumption for sorting algorithms (Bubble Sort, Counting Sort, Merge Sort, and Quick Sort) written in Java and run on single kernels. We investigate the correlation between wall time and time complexity, as well as the correlation between energy consumption and wall time. The primary finding is that time complexity can be used as a guideline to estimate the energy consumption of O(n^2), O(n log n), and O(n + k) sorting algorithms. The secondary finding is that the inputs producing the theoretical worst cases for Merge Sort and Bubble Sort produced neither the worst-case wall time nor the worst-case energy consumption.
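
    As a hedged illustration of the measurement side of such a study, the Java sketch below times a hand-written Bubble Sort against the library sort on the same input; the energy readings themselves would come from platform-specific counters (e.g., RAPL) and are not shown, and all names here are illustrative rather than the study's code.

        import java.util.Arrays;
        import java.util.Random;

        // Illustrative sketch: time a sort so that wall time can later be
        // correlated with an energy reading taken over the same interval.
        public class SortTimingSketch {
            static void bubbleSort(int[] a) {
                for (int i = 0; i < a.length - 1; i++)
                    for (int j = 0; j < a.length - 1 - i; j++)
                        if (a[j] > a[j + 1]) {
                            int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                        }
            }

            public static void main(String[] args) {
                int[] data = new Random(1).ints(20_000).toArray();
                int[] copy = Arrays.copyOf(data, data.length);

                long start = System.nanoTime();
                bubbleSort(copy);                       // O(n^2) candidate
                long bubbleNs = System.nanoTime() - start;

                start = System.nanoTime();
                Arrays.sort(data);                      // O(n log n) baseline
                long librarySortNs = System.nanoTime() - start;

                System.out.printf("bubble: %.1f ms, library sort: %.1f ms%n",
                        bubbleNs / 1e6, librarySortNs / 1e6);
            }
        }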

    Assessing the Use of Machine Learning to Find the Worst-Case Execution Time of Avionics Software

    Get PDF
    Many parts in aircraft today rely on software that interacts with its physical environment. Typically, this interaction involves taking sensor readings, sending actuation commands, reading commands from humans, and presenting information to humans. These interactions require that the software deliver results at the right time, as argued in the guidance document DO-178C and in previous FAA reports. Correct timing, in turn, depends on the execution time of individual programs. Hence, finding an upper bound on the execution time of a program, called Worst-Case Execution Time (WCET) analysis, is an important step in avionics certification. Unfortunately, WCET analysis is difficult for several reasons: a program can typically execute a large number of different paths; during the execution of one path, the program uses resources in complex ways that are difficult to analyze; and the speed of execution depends on temperature, which in turn depends on earlier execution. This report presents research on WCET analysis using Machine Learning (ML) and Artificial Intelligence (AI), aiming to make WCET analysis less dependent on detailed knowledge of the analyzed program and the hardware used. (Contract FA8702-15-D-0002.)
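
    As a hedged sketch of the measurement-based side of this problem (the report itself investigates ML/AI approaches, which are not reproduced here), the Java snippet below records a high-water mark of observed execution times over many runs; a bound obtained this way is only an estimate from below, not a safe WCET, and the workload and names are invented for illustration.

        // Illustrative only: measurement-based timing gives a lower bound on the
        // true WCET; static or ML-based analysis is needed to argue an upper bound.
        public class WcetMeasurementSketch {
            // The program fragment under analysis (a stand-in workload).
            static long workload(int n) {
                long acc = 0;
                for (int i = 0; i < n; i++) acc += (long) i * i;
                return acc;
            }

            public static void main(String[] args) {
                long highWaterNs = 0;
                for (int run = 0; run < 1_000; run++) {
                    long start = System.nanoTime();
                    workload(50_000 + run);             // vary the input path
                    long elapsed = System.nanoTime() - start;
                    highWaterNs = Math.max(highWaterNs, elapsed);
                }
                System.out.printf("observed maximum: %.1f us (not a proven WCET)%n",
                        highWaterNs / 1e3);
            }
        }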

    ETEA: A euclidean minimum spanning tree-Based evolutionary algorithm for multiobjective optimization

    Get PDF
    © the Massachusetts Institute of Technology. The Euclidean minimum spanning tree (EMST), widely used in a variety of domains, is a minimum spanning tree of a set of points in space, where the edge weight between each pair of points is their Euclidean distance. Since the generation of an EMST is entirely determined by the Euclidean distances between solutions (points), the properties of EMSTs are closely related to the distribution and position information of solutions. This paper explores the properties of EMSTs and proposes an EMST-based Evolutionary Algorithm (ETEA) to solve multiobjective optimization problems (MOPs). Unlike most EMO algorithms, which focus on the Pareto dominance relation, the proposed algorithm mainly considers distance-based measures to evaluate and compare individuals during the evolutionary search. Specifically, ETEA introduces four strategies: 1) an EMST-based crowding distance (ETCD) to estimate the density of individuals in the population; 2) a distance comparison approach incorporating ETCD to assign fitness values to individuals; 3) a fitness adjustment technique to avoid partial overcrowding in environmental selection; and 4) three diversity indicators with regard to EMSTs (the minimum edge, degree, and ETCD) to determine the survival of individuals in archive truncation. In a series of extensive experiments on 32 test instances with different characteristics, ETEA is found to be competitive against five state-of-the-art algorithms and its predecessor in providing a good balance among convergence, uniformity, and spread. Supported by the Engineering and Physical Sciences Research Council (EPSRC) of the United Kingdom under Grant EP/K001310/1 and the National Natural Science Foundation of China under Grant 61070088.
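
    To make the EMST construction concrete, here is a generic Prim-style sketch in Java that builds a Euclidean minimum spanning tree over 2-D points; it is an illustrative sketch under simplifying assumptions, not the ETEA implementation, in which the points would be solutions compared in objective space.

        import java.util.Arrays;

        // Sketch: Prim's algorithm on a complete graph whose edge weights are
        // Euclidean distances between points (2-D points here for simplicity).
        public class EmstSketch {
            static double dist(double[] a, double[] b) {
                double dx = a[0] - b[0], dy = a[1] - b[1];
                return Math.sqrt(dx * dx + dy * dy);
            }

            // Returns the total EMST weight (tree edges themselves are not kept).
            static double emstWeight(double[][] pts) {
                int n = pts.length;
                double[] best = new double[n];      // cheapest edge into the tree
                boolean[] inTree = new boolean[n];
                Arrays.fill(best, Double.POSITIVE_INFINITY);
                best[0] = 0;
                double total = 0;
                for (int k = 0; k < n; k++) {
                    int u = -1;
                    for (int v = 0; v < n; v++)
                        if (!inTree[v] && (u == -1 || best[v] < best[u])) u = v;
                    inTree[u] = true;
                    total += best[u];
                    for (int v = 0; v < n; v++)
                        if (!inTree[v]) best[v] = Math.min(best[v], dist(pts[u], pts[v]));
                }
                return total;
            }

            public static void main(String[] args) {
                double[][] pts = {{0, 0}, {1, 0}, {0, 1}, {2, 2}};
                System.out.println("EMST weight = " + emstWeight(pts));
            }
        }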

    Advances and applications in high-dimensional heuristic optimization

    Get PDF
    “Applicable to most real-world decision scenarios, multiobjective optimization is an area of multicriteria decision-making that seeks to simultaneously optimize two or more conflicting objectives. In contrast to single-objective scenarios, nontrivial multiobjective optimization problems are characterized by a set of Pareto optimal solutions in which no solution unanimously optimizes all objectives. Evolutionary algorithms have emerged as a standard approach to determine a set of these Pareto optimal solutions, from which a decision-maker can select a vetted alternative. While easy to implement and of demonstrated efficacy, these evolutionary approaches have been criticized for their runtime complexity when dealing with many alternatives or a high number of objectives, effectively limiting the range of scenarios to which they may be applied. This research introduces mechanisms that improve the runtime complexity of many multiobjective evolutionary algorithms, achieving state-of-the-art performance compared with many prominent methods from the literature. Further, the investigations presented here demonstrate the capability of multiobjective evolutionary algorithms in a complex, large-scale optimization scenario, showcasing the approach’s ability to intelligently generate well-performing solutions to a meaningful optimization problem. These investigations advance the concept of multiobjective evolutionary algorithms by addressing a key limitation and demonstrating their efficacy in a challenging real-world scenario. Through enhanced computational efficiency and specialized application, the utility of this powerful heuristic strategy is made more robust and evident”--Abstract, page iv
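
    A minimal Java sketch of the Pareto dominance relation underlying such algorithms, assuming every objective is to be minimized; the class and method names are illustrative only.

        // Sketch: Pareto dominance for minimization. Solution a dominates
        // solution b if a is no worse in every objective and strictly better in one.
        public class ParetoSketch {
            static boolean dominates(double[] a, double[] b) {
                boolean strictlyBetterSomewhere = false;
                for (int i = 0; i < a.length; i++) {
                    if (a[i] > b[i]) return false;          // worse in objective i
                    if (a[i] < b[i]) strictlyBetterSomewhere = true;
                }
                return strictlyBetterSomewhere;
            }

            public static void main(String[] args) {
                double[] x = {1.0, 2.0};
                double[] y = {1.5, 2.0};
                double[] z = {0.5, 3.0};
                System.out.println(dominates(x, y));  // true
                System.out.println(dominates(x, z));  // false: neither dominates
            }
        }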

    How Good Is Multi-Pivot Quicksort?

    Get PDF
    Multi-pivot quicksort refers to variants of classical quicksort in which the partitioning step uses k pivots to split the input into k + 1 segments. For many years, multi-pivot quicksort was regarded as impractical, but in 2009 a 2-pivot approach by Yaroslavskiy, Bentley, and Bloch was chosen as the standard sorting algorithm in Sun's Java 7. In 2014 at ALENEX, Kushagra et al. introduced an even faster algorithm that uses three pivots. This paper studies what possible advantages multi-pivot quicksort might offer in general. The contributions are as follows. Natural comparison-optimal algorithms for multi-pivot quicksort are devised and analyzed; the analysis shows that the benefit of using multiple pivots with respect to the average comparison count is marginal, and that these strategies are inferior to simpler strategies such as the well-known median-of-k approach. A substantial part of the partitioning cost is caused by rearranging elements; a rigorous analysis of an algorithm for rearranging elements in the partitioning step is carried out, observing mainly how often array cells are accessed during partitioning. The algorithm behaves best when 3 to 5 pivots are used, and experiments show that this translates into good cache behavior and comes closest to predicting the observed running times of multi-pivot quicksort algorithms. Finally, the paper studies how choosing pivots from a sample affects sorting cost. The study is theoretical in the sense that although the findings motivate design recommendations for multi-pivot quicksort algorithms that lead to running-time improvements over known algorithms in an experimental setting, these improvements are small.
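
    For concreteness, the Java sketch below shows a basic 2-pivot partition of the kind the Yaroslavskiy-style algorithm refines: two pivots split each subarray into three segments. It is a simplified illustration, not the tuned library code or the algorithms analyzed in the paper.

        import java.util.Arrays;

        // Simplified 2-pivot quicksort: two pivots p <= q split the subarray into
        // three segments: elements < p, elements between p and q, and elements > q.
        public class DualPivotSketch {
            static void sort(int[] a, int lo, int hi) {
                if (lo >= hi) return;
                if (a[lo] > a[hi]) { int t = a[lo]; a[lo] = a[hi]; a[hi] = t; }
                int p = a[lo], q = a[hi];
                int lt = lo + 1, gt = hi - 1, i = lo + 1;
                while (i <= gt) {
                    if (a[i] < p) {                       // left segment
                        int t = a[i]; a[i] = a[lt]; a[lt] = t; lt++; i++;
                    } else if (a[i] > q) {                // right segment
                        int t = a[i]; a[i] = a[gt]; a[gt] = t; gt--;
                    } else {
                        i++;                              // middle segment
                    }
                }
                lt--; gt++;
                int t = a[lo]; a[lo] = a[lt]; a[lt] = t;  // place pivot p
                t = a[hi]; a[hi] = a[gt]; a[gt] = t;      // place pivot q
                sort(a, lo, lt - 1);
                sort(a, lt + 1, gt - 1);
                sort(a, gt + 1, hi);
            }

            public static void main(String[] args) {
                int[] a = {5, 3, 8, 1, 9, 2, 7, 4, 6};
                sort(a, 0, a.length - 1);
                System.out.println(Arrays.toString(a));
            }
        }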

    VERDICTS: Visual Exploratory Requirements Discovery and Injection for Comprehension and Testing of Software

    Get PDF
    We introduce a methodology and research tools for visual exploratory software analysis. VERDICTS combines exploratory testing, tracing, visualization, and dynamic discovery and injection of requirements specifications into a live quick-feedback cycle, without recompilation or restart of the system under test. This supports discovery and verification of dynamic software behavior, software comprehension, testing, and locating the origin of defects. At its core, VERDICTS allows dynamic evolution and testing of hypotheses about requirements and behavior by using contracts as automated component verifiers. We introduce Semantic Mutation Testing as an approach to evaluate the concordance of automated verifiers, and of the functional specifications they represent, with an existing implementation. Mutation testing has promise, but also has many known issues. In our tests, both black-box and white-box variants of our Semantic Mutation Testing approach performed better than traditional mutation testing as a measure of the quality of automated verifiers.
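
    As a minimal illustration of what “contracts as automated component verifiers” can look like at the code level, the Java sketch below wraps a precondition and a postcondition around a component method; it is illustrative only and is not the VERDICTS tooling, which attaches such checks to a running system without recompilation.

        // Illustrative contract check: a precondition and a postcondition wrapped
        // around a component method, acting as an automated verifier of expected
        // behavior whenever the method is exercised by a test or a live trace.
        public class ContractSketch {
            static int safeDivide(int dividend, int divisor) {
                if (divisor == 0)                              // precondition
                    throw new IllegalArgumentException("divisor must be non-zero");
                int quotient = dividend / divisor;
                // Postcondition for truncating integer division.
                if (Math.abs((long) quotient * divisor) > Math.abs((long) dividend))
                    throw new IllegalStateException("postcondition violated");
                return quotient;
            }

            public static void main(String[] args) {
                System.out.println(safeDivide(10, 3));  // 3
                System.out.println(safeDivide(10, 0));  // triggers the precondition
            }
        }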