    Bringing communication networks on a chip: test and verification implications

    Virtual Machine-Assisted Collaborative Junk Object Detection

    A memory leak is an unrecoverable software bug that causes performance degradation and reliability issues in software applications. Although memory management systems exist in modern object-oriented languages to reclaim unused memory, memory leaks can still occur and continue to exhaust memory resources. In Java, where garbage collection releases unused objects, memory leaks manifest themselves in the form of unused object retention. Since the Java programming language allocates objects on the heap, the lifetime of an object deviates from the stack discipline, which makes detecting Java memory leaks challenging. In this thesis, we propose a collaborative approach to detecting Java memory leaks by verifying Object Lifetime Specifications at runtime. We designed a runtime verifier that leverages Java Virtual Machine technologies to monitor and extract annotated information from the user application, and uses that information, checked against Java Virtual Machine events, to detect unintentional object retention in the Java application under test. We implemented our runtime verifier on the Maxine Virtual Machine, an open-source, meta-circular virtual machine developed by Oracle Labs, and conducted experiments on the DaCapo benchmarks to evaluate its accuracy and performance efficiency. The results show that the runtime verification tool successfully identifies junk objects for the different semantic cases proposed in this thesis, at a certain runtime overhead. From the research and experimental results, we further draw implications for reducing the performance overhead of current designs and implementations for detecting unused object retention, which in the long term constitutes a memory leak and a performance bug.
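
    The annotate-then-verify idea can be approximated even outside a modified VM. The sketch below is purely illustrative and is not the thesis's Maxine-based tool: the LifetimeChecker class and its expectDeadAt/checkpoint methods are hypothetical names, and weak references stand in for VM-level events, so an object that survives a checkpoint is only a retention suspect.

        import java.lang.ref.WeakReference;
        import java.util.ArrayList;
        import java.util.List;

        public class LifetimeChecker {
            // One registered expectation: 'ref' should be cleared by the
            // time execution reaches the program point named 'label'.
            private static final class Spec {
                final WeakReference<Object> ref;
                final String label;
                Spec(Object obj, String label) {
                    this.ref = new WeakReference<>(obj);
                    this.label = label;
                }
            }

            private final List<Spec> specs = new ArrayList<>();

            // Annotation analogue: declare that 'obj' is expected to be
            // unreachable once execution reaches the point named 'label'.
            public void expectDeadAt(Object obj, String label) {
                specs.add(new Spec(obj, label));
            }

            // Verification analogue: at the named program point, any
            // registered object that is still strongly reachable is
            // reported as a retention suspect (a junk-object candidate).
            public void checkpoint(String label) {
                System.gc(); // requests, but does not guarantee, a collection
                for (Spec s : specs) {
                    if (s.label.equals(label) && s.ref.get() != null) {
                        System.err.println("possible retention past " + label);
                    }
                }
            }
        }

    Because System.gc() is only a hint, a surviving reference here is a suspect rather than proof, which mirrors the thesis's point that Java leaks appear as unintended reachability rather than lost memory.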

    Foundations for Automatic, Adaptable Compilation

    Computational science demands extreme performance because the running time of an application often determines the size of the experiment that a scientist can reasonably compute. Unfortunately, traditional compiler technology is ill-equipped to harness the full potential of today's computing platforms, forcing scientists to spend time manually tuning their application's performance. Although improving compiler technology should alleviate this problem, two challenges obstruct this goal: hardware platforms are rapidly changing, and application software is difficult to statically model and predict. To address these problems, this thesis presents two techniques that aim to improve a compiler's adaptability: automatic resource characterization and selective, dynamic optimization. Resource characterization empirically measures a system's performance-critical characteristics, which can be provided to a parameterized compiler that specializes programs accordingly. Measuring these characteristics is important because a system's physical characteristics do not always match its observed characteristics. Consequently, resource characterization provides an empirical performance model of a system's actual behavior, which is better suited for guiding compiler optimizations than a purely theoretical model. This thesis presents techniques for determining a system's data-cache and TLB capacity, line size, and associativity, as well as instruction-cache capacity. Even with a perfect architectural model, compilers will still often generate suboptimal code because of the difficulty of statically analyzing and predicting a program's behavior. This thesis presents two techniques that enable selective, dynamic optimization for cases in which static compilation fails to deliver adequate performance. First, intermediate-representation (IR) annotation generates a fully optimized native binary tagged with a higher-level compiler representation of itself. The native binary benefits from static optimization and code generation, but the IR annotation allows targeted and aggressive dynamic optimization. Second, adaptive code selection allows a program to empirically tune its performance throughout execution by automatically identifying and favoring the best-performing variant of a routine. This technique can be used for dynamically choosing between different static compilation strategies, or it can be combined with IR annotation to perform dynamic, feedback-directed optimization.
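
    Adaptive code selection can be sketched without any compiler support: time each variant of a routine as the program runs and route most calls to the variant with the best observed mean. The AdaptiveSelector class below is an illustrative assumption, not the thesis's implementation; its fixed explore-every-16th-call policy merely stands in for whatever selection policy the thesis uses.

        import java.util.function.Supplier;

        public class AdaptiveSelector<T> {
            private final Supplier<T>[] variants;  // candidate implementations
            private final long[] totalNanos;       // accumulated time per variant
            private final int[] calls;             // sample count per variant
            private long invocations = 0;

            @SafeVarargs
            public AdaptiveSelector(Supplier<T>... variants) {
                this.variants = variants;
                this.totalNanos = new long[variants.length];
                this.calls = new int[variants.length];
            }

            public T invoke() {
                long n = invocations++;
                // Every 16th call explores the variants round-robin; all other
                // calls exploit the variant with the lowest observed mean time.
                int pick = (n % 16 == 0) ? (int) ((n / 16) % variants.length) : best();
                long t0 = System.nanoTime();
                T result = variants[pick].get();
                totalNanos[pick] += System.nanoTime() - t0;
                calls[pick]++;
                return result;
            }

            private int best() {
                int b = 0;
                double bestMean = Double.MAX_VALUE;
                for (int i = 0; i < variants.length; i++) {
                    if (calls[i] == 0) continue;   // unsampled: left to exploration
                    double mean = (double) totalNanos[i] / calls[i];
                    if (mean < bestMean) { bestMean = mean; b = i; }
                }
                return b;
            }
        }

    A call site would construct the selector once with its candidate variants, e.g. new AdaptiveSelector<>(variantA, variantB), and route every invocation through invoke(); the same skeleton serves for choosing between differently compiled versions of a routine.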

    Architectures for dependable modern microprocessors

    Technology scaling, extreme chip integration, and the compelling requirement to diminish the time-to-market window have rendered microprocessors more prone to design bugs and hardware faults. Microprocessor validation techniques are grouped into the following categories, based on where they intervene in a microprocessor's lifecycle: (a) silicon debug: the first hardware prototypes are exhaustively validated; (b) manufacturing testing: the final quality control during mass production; and (c) in-field verification: runtime error-detection techniques that guarantee correct operation. The contributions of this thesis are the following: (1) Silicon debug: we propose the employment of deconfigurable microprocessor architectures, along with a technique to generate self-checking random test programs, to avoid the simulation step and triage redundant debug sessions. (2) Manufacturing testing: we propose a self-test optimization strategy for multithreaded, multicore microprocessors to speed up test-program execution time and enhance the fault coverage of hard errors. (3) In-field verification: we measure the effect of permanent faults on performance components, and then propose a set of low-cost mechanisms for the detection, diagnosis, and performance recovery of the front-end speculative structures, such as the branch predictors. Overall, this thesis introduces several novel methodologies to address the validation challenges posed throughout the lifecycle of a chip.
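
    Software-based self-test runs ordinary programs on the processor itself and compares their results against precomputed golden signatures; for multithreaded, multicore parts, running one copy per hardware thread shortens total test time and also exercises shared resources. The sketch below illustrates only this scheduling idea with a toy ALU kernel; it is not the thesis's test programs, and a real deployment would precompute GOLDEN offline on a known-good machine rather than on the device under test.

        import java.util.ArrayList;
        import java.util.List;

        public class ParallelSelfTest {
            // Reference signature; computed here for illustration only.
            static final long GOLDEN = selfTestKernel();

            // Deterministic kernel mixing ALU operations; a hard fault on
            // the executing core should perturb the final signature.
            static long selfTestKernel() {
                long sig = 0x9E3779B97F4A7C15L;
                for (int i = 1; i <= 100_000; i++) {
                    sig = Long.rotateLeft(sig ^ (sig * i), 13) + i;
                }
                return sig;
            }

            public static void main(String[] args) throws InterruptedException {
                int cores = Runtime.getRuntime().availableProcessors();
                List<Thread> threads = new ArrayList<>();
                for (int c = 0; c < cores; c++) {
                    final int id = c;
                    Thread t = new Thread(() -> {
                        boolean pass = selfTestKernel() == GOLDEN;
                        System.out.println("thread " + id + ": " + (pass ? "PASS" : "FAIL"));
                    });
                    threads.add(t);
                    t.start();
                }
                for (Thread t : threads) t.join(); // test ends when all copies finish
            }
        }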

    Scan Based Side Channel Attack on Data Encryption Standard

    Scan-based test is a double-edged sword. On one hand, it is a powerful test technique. On the other hand, it is an equally powerful attack tool. In this paper, we show that scan chains can be used as a side channel to recover secret keys from a hardware implementation of the Data Encryption Standard (DES). By loading pairs of known plaintexts with a one-bit difference in normal mode and then scanning out the internal state in test mode, we first determine the position of all scan elements in the scan chain. Then, based on a systematic analysis of the structure of the non-linear substitution boxes, and using three additional plaintexts, we discover the DES secret key. Finally, some assumptions underlying the attack are discussed.
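
    The first phase of the attack, determining where the round-register bits sit in the scan chain, amounts to differential probing: scan out the state for two plaintexts that differ in one bit and XOR the dumps. The sketch below assumes a hypothetical ScanChip software model of the device under test; only the diffing logic reflects the paper's method, and the 64-bit state width is an illustrative choice.

        import java.util.BitSet;

        public class ScanChainMapper {
            static final int STATE_BITS = 64;

            // Hypothetical model of the device: captures a plaintext into
            // the internal register in normal mode, then shifts the register
            // out in the (unknown) scan-chain order in test mode.
            interface ScanChip {
                BitSet captureAndScanOut(BitSet plaintext);
            }

            // For each input bit i, apply two plaintexts differing only in
            // bit i and XOR the scan dumps: the set bits mark the chain
            // positions that depend on input bit i.
            static void mapChain(ScanChip chip) {
                BitSet base = new BitSet(STATE_BITS);      // all-zero plaintext
                BitSet refDump = chip.captureAndScanOut(base);
                for (int i = 0; i < STATE_BITS; i++) {
                    BitSet flipped = (BitSet) base.clone();
                    flipped.set(i);
                    BitSet dump = chip.captureAndScanOut(flipped);
                    dump.xor(refDump);                     // differing scan positions
                    System.out.println("input bit " + i + " -> scan positions " + dump);
                }
            }
        }

    Once this map is known, the later key-recovery phase can read the round register directly from scan dumps, which is what makes the three additional plaintexts sufficient.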