645 research outputs found

    Machine Learning for Microprocessor Performance Bug Localization

    Microprocessor validation is a complex task that consumes substantial engineering time during the design process. Bugs that degrade overall system performance without affecting functional correctness are particularly difficult to debug, given the lack of a golden reference for bug-free performance. This work introduces two automated, machine-learning-based performance bug localization methodologies that aim to aid the debugging process. Our results show that, for the evaluated microprocessor core performance bugs whose average IPC impact is greater than 1%, our best-performing technique localizes the exact microarchitectural unit of the bug ~77% of the time, and achieves a top-3 unit accuracy (out of 11 possible locations) of over 90% for bugs with the same average IPC impact. In our simulation setup, the proposed system requires only a few seconds to perform a bug-location inference, which reduces debugging time. Comment: 12 pages, 6 figures
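
    The listing gives only the abstract, so as a rough illustration of how such a localizer can be framed (a minimal sketch under my own assumptions, not the paper's models): a multi-class classifier maps performance-counter features from buggy simulations to one of 11 candidate microarchitectural units, and top-1/top-3 unit accuracy is measured. The features and labels below are synthetic placeholders.

```python
# Hedged sketch: performance-bug localization as multi-class classification over
# simulation performance counters. Data, features, and unit labels are synthetic
# placeholders, not the paper's setup; with random data the accuracy is near chance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_RUNS, N_COUNTERS, N_UNITS = 2000, 64, 11      # buggy runs, counter features, candidate units

X = rng.normal(size=(N_RUNS, N_COUNTERS))       # per-run counter deltas vs. a reference run
y = rng.integers(0, N_UNITS, size=N_RUNS)       # unit in which the bug was injected

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)                 # (runs, units) class probabilities
classes = clf.classes_
top1 = (classes[proba.argmax(axis=1)] == y_te).mean()
top3 = np.mean([label in classes[np.argsort(p)[-3:]] for p, label in zip(proba, y_te)])
print(f"top-1 unit accuracy: {top1:.2f}, top-3: {top3:.2f}")
```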

    A research review of quality assessment for software

    Measures were recommended to assess the quality of software submitted to the AdaNet program. The quality factors that are important to software reuse are explored, and methods of evaluating those factors are discussed. The quality factors important to software reuse are: correctness, reliability, verifiability, understandability, modifiability, and certifiability. Certifiability is included because the documentation of many factors about a software component, such as its efficiency, portability, and development history, constitutes a class of factors that are important to some users, not important at all to others, and impossible for AdaNet to distinguish between a priori. The quality factors may be assessed in different ways. There are a few quantitative measures which have been shown to indicate software quality. However, it is believed that there exist many factors that indicate quality but have not been empirically validated due to their subjective nature. These subjective factors are characterized by the way in which they support the software engineering principles of abstraction, information hiding, modularity, localization, confirmability, uniformity, and completeness.
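
    The report predates modern tooling, but as an illustration of the kind of quantitative measure it alludes to, here is a rough sketch (my own, not taken from the report) that computes two crude, language-agnostic proxies for understandability and modifiability: comment density and average lines per program unit. Treating these as quality indicators is an assumption.

```python
# Rough sketch of simple quantitative quality proxies (comment density, average
# lines per program unit). Illustrative only; the report does not prescribe them.
import re

def quality_proxies(source: str) -> dict:
    lines = source.splitlines()
    code_lines = [l for l in lines if l.strip()]
    comment_lines = [l for l in code_lines if l.strip().startswith(("--", "//", "#"))]
    # Very crude unit detection: Ada-style "procedure"/"function" headers.
    unit_headers = [l for l in code_lines
                    if re.match(r"\s*(procedure|function)\b", l, re.IGNORECASE)]
    n_units = max(len(unit_headers), 1)
    return {
        "comment_density": len(comment_lines) / max(len(code_lines), 1),
        "avg_lines_per_unit": len(code_lines) / n_units,
    }

print(quality_proxies("-- add two numbers\nfunction Add(A, B : Integer) return Integer;\n"))
```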

    Fabrication, structure and properties of epoxy/metal nanocomposites

    Gd2O3 nanoparticles surface-modified with IPDI were compounded with epoxy. IPDI provided an anchor into the porous Gd2O3 surface and a bridge into the matrix, thus creating strong bonds between matrix and Gd2O3. 1.7 vol.-% Gd2O3 increased the Young’s modulus of epoxy by 16–19%; the surface-modified Gd2O3 nanoparticles improved the critical strain energy release rate by 64.3% as compared to 26.4% produced by the unmodified nanoparticles. The X-ray shielding efficiency of neat epoxy was enhanced by 300–360%, independent of the interface modification. Interface debonding consumes energy and leads to crack pinning and matrix shear banding; most fracture energy is consumed by matrix shear banding as shown by the large number of ridges on the fracture surface

    Witness-based validation of verification results with applications to software-model checking

    In the scientific world, formal verification is an established engineering technique to ensure the correctness of hardware and software systems. Because formal verification is an arduous and error-prone endeavor, automated solutions are desirable, and researchers continue to develop new algorithms and optimize existing ones to push the boundaries of what can be verified automatically. These efforts do not go unnoticed by industry. Hardware-circuit designs, flight-control systems, and operating-system drivers are just a few examples of systems where formal verification is already part of the quality-assurance repertoire. Nevertheless, the primary fields of application for formal verification remain those where errors carry a high risk of significant damage, either financial or physical, because the costs of formal verification are considered too high for most other projects, despite the fact that the research community has made vast advancements in the effectiveness and efficiency of formal verification techniques over the last decades. We present and address two potential reasons for this discrepancy that we identified in the field of automated formal software verification. (1) Even for experts in the field, it is often difficult to decide which of the multitude of available techniques is the most suitable solution to recommend for a given verification problem. Moreover, even if a suitable solution is found for a given system, there is no guarantee that the solution remains sustainable as the system evolves. Consequently, the cost of finding and maintaining a suitable approach for applying formal software verification to real-world systems is high. (2) Even assuming that a suitable and maintainable solution for applying formal software verification to a given system is found and verification results can be obtained, developers of the system still require further guidance towards making practical use of these results, which often differ significantly from the results of classical quality-assurance techniques they are familiar with, such as testing. To mitigate the first issue, using the open-source software-verification framework CPAchecker, we investigate several popular formal software-verification techniques, such as predicate abstraction, Impact, bounded model checking, k-induction, and PDR, and perform an extensive and rigorous experimental study to identify their strengths and weaknesses regarding their comparative effectiveness and efficiency when applied to a large and established benchmark set, providing a basis for choosing the best technique for a given problem. To mitigate the second issue, we propose a concrete standard format for the representation and communication of verification results that raises the bar from plain "yes" or "no" answers to verification witnesses, which are valuable artifacts of the verification process that contain detailed information discovered during the analysis. We then use these verification witnesses for several applications: to increase the trust in verification results, we first develop several independent validators based on violation witnesses, i.e., verification witnesses that represent bugs detected by a verifier. We then extend our validators to also verify the verification results obtained from a successful verification, which are represented by correctness witnesses.
Lastly, we also develop an interactive web service to store and retrieve these verification witnesses, to provide online validation to quickly de-prioritize likely wrong results, and to graphically visualize the witnesses, as an example of how verification can be integrated into a development process. Since the introduction of our proposed standard format for verification witnesses, it has been adopted by over thirty different software verifiers, and our witness-based result-validation tools have become a core component in the scoring process of the International Competition on Software Verification.
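
    The witness format itself is not reproduced in this listing; as a rough sketch under my own assumptions (a GraphML-based witness whose edges carry data keys such as "assumption" and "startline"; the key names are illustrative), a validator's front end might load a violation witness roughly like this before replaying the reported error path.

```python
# Hedged sketch: read a GraphML-based violation witness and list the per-edge
# annotations (e.g. assumptions and source lines) that a validator would replay.
# The data-key names used here are assumptions for illustration.
import sys
import xml.etree.ElementTree as ET

NS = {"g": "http://graphml.graphdrawing.org/xmlns"}

def read_witness(path: str):
    root = ET.parse(path).getroot()
    steps = []
    for edge in root.iter(f"{{{NS['g']}}}edge"):
        data = {d.get("key"): (d.text or "").strip()
                for d in edge.findall("g:data", NS)}
        steps.append({
            "from": edge.get("source"),
            "to": edge.get("target"),
            "startline": data.get("startline"),
            "assumption": data.get("assumption"),
        })
    return steps

if __name__ == "__main__":
    for step in read_witness(sys.argv[1]):
        print(step)
```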

    Leveraging Explanations in Interactive Machine Learning: An Overview

    Explanations have gained an increasing level of interest in the AI and Machine Learning (ML) communities as a way to improve model transparency and to allow users to form a mental model of a trained ML model. However, explanations can go beyond this one-way communication and serve as a mechanism to elicit user control, because once users understand, they can then provide feedback. The goal of this paper is to present an overview of research where explanations are combined with interactive capabilities as a means to learn new models from scratch and to edit and debug existing ones. To this end, we draw a conceptual map of the state of the art, grouping relevant approaches based on their intended purpose and on how they structure the interaction, highlighting similarities and differences between them. We also discuss open research issues and outline possible directions forward, with the hope of spurring further research on this blooming research topic.
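
    As a toy illustration of the explanation-plus-feedback loop that the overview surveys (a generic sketch, not any specific method from the paper): a linear model's coefficients act as a global explanation, the user flags a feature the model should not rely on, and the model is retrained without it.

```python
# Toy sketch of explanation-driven interactive ML: show a global explanation
# (coefficients), accept user feedback ("don't use feature k"), and retrain.
# Generic illustration only; data and feedback are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # label depends on features 0 and 1
X[:, 3] = y + rng.normal(scale=0.1, size=500)    # feature 3 is a spurious shortcut

model = LogisticRegression().fit(X, y)
print("explanation (coefficients):", model.coef_.round(2))

blocked = 3                                       # user: "feature 3 is not a valid reason"
X_fixed = np.delete(X, blocked, axis=1)
model = LogisticRegression().fit(X_fixed, y)
print("after feedback:", model.coef_.round(2))
```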

    Automated Debugging Methodology for FPGA-based Systems

    Electronic devices make up a vital part of our lives, from mobile phones, laptops, and computers to home-automation systems, to name a few. Modern designs comprise billions of transistors. With this evolution, however, ensuring that the devices fulfill the designer’s expectations under variable conditions has become a great challenge that demands considerable design time and effort. Whenever an error is encountered, the process is restarted. Hence, it is desirable to minimize the number of spins required to achieve an error-free product, as each spin results in a loss of time and effort. Software-based simulation is the main technique for verifying a design before fabrication. However, a few design errors (bugs) are likely to escape the simulation process and subsequently appear during the post-silicon phase. Finding such bugs is time-consuming due to the inherent invisibility of the hardware. Instead of software simulation of the design in the pre-silicon phase, post-silicon techniques permit designers to verify the functionality of physical implementations of the design. The main benefit of this methodology is that the implemented design in the post-silicon phase runs many orders of magnitude faster than its pre-silicon counterpart, which allows designers to validate their design more exhaustively. This thesis presents five main contributions that enable a fast and automated debugging solution for reconfigurable hardware. Throughout the research, we used an obstacle-avoidance system for robotic vehicles as a use case to illustrate how to apply the proposed debugging solution in practical environments. The first contribution presents a debugging system capable of providing a lossless trace of debugging data, which permits a cycle-accurate replay. This methodology ensures that both permanent and intermittent errors in the implemented design are captured. The contribution also describes a solution to enhance hardware observability: it is proposed to utilize processor-configurable concentration networks, to employ debug-data compression to transmit the data more efficiently, and to partially reconfigure the debugging system at run-time to save the time required for design re-compilation and to preserve timing closure. The second contribution presents a solution for communication-centric designs; solutions for designs with multiple clock domains are also discussed. The third contribution presents a priority-based signal-selection methodology to identify the signals that are most helpful during the debugging process. A connectivity-generation tool is also presented, which can map the identified signals to the debugging system. The fourth contribution presents an automated error-detection solution that helps capture permanent as well as intermittent errors without continuous monitoring of the debugging data. The proposed solution works even for designs without a golden reference. The fifth contribution proposes the use of artificial intelligence for post-silicon debugging. We present a novel idea of using a recurrent neural network for debugging when a golden reference is available to train the network, and we extend the idea to designs where no golden reference is present.
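
    The thesis itself is not included in this listing, so the following is only a minimal sketch of the general idea behind the fifth contribution, under my own assumptions: a small recurrent network is trained on golden-reference signal traces to predict the next cycle's values, and large prediction errors on a captured trace flag candidate buggy cycles. Signal count, window size, and threshold are illustrative.

```python
# Hedged sketch (not the thesis implementation): a GRU trained on golden-reference
# traces predicts the next cycle's monitored signal values; cycles where the captured
# trace deviates strongly from the prediction are flagged for debugging.
import torch
import torch.nn as nn

SIGNALS, WINDOW = 8, 32        # monitored debug signals and cycles of history (illustrative)

class TracePredictor(nn.Module):
    def __init__(self, signals=SIGNALS, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(signals, hidden, batch_first=True)
        self.head = nn.Linear(hidden, signals)

    def forward(self, x):                  # x: (batch, WINDOW, SIGNALS)
        out, _ = self.rnn(x)
        return self.head(out[:, -1, :])    # predicted next-cycle signal values

def flag_anomalies(model: nn.Module, trace: torch.Tensor, threshold: float = 0.1):
    """Return cycle indices where the captured trace deviates from the prediction."""
    model.eval()
    flagged = []
    with torch.no_grad():
        for t in range(WINDOW, trace.shape[0]):
            window = trace[t - WINDOW:t].unsqueeze(0)          # (1, WINDOW, SIGNALS)
            err = (model(window) - trace[t]).abs().max().item()
            if err > threshold:
                flagged.append(t)
    return flagged

# Training on golden traces would use a standard regression loop (e.g. nn.MSELoss);
# the threshold is then calibrated on held-out golden traces before use on DUT captures.
```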

    Software Testing with Large Language Model: Survey, Landscape, and Vision

    Pre-trained large language models (LLMs) have recently emerged as a breakthrough technology in natural language processing and artificial intelligence, with the ability to handle large-scale datasets and exhibit remarkable performance across a wide range of tasks. Meanwhile, software testing is a crucial undertaking that serves as a cornerstone for ensuring the quality and reliability of software products. As the scope and complexity of software systems continue to grow, the need for more effective software testing techniques becomes increasingly urgent, making it an area ripe for innovative approaches such as the use of LLMs. This paper provides a comprehensive review of the utilization of LLMs in software testing. It analyzes 52 relevant studies that have used LLMs for software testing, from both the software testing and LLM perspectives. The paper presents a detailed discussion of the software testing tasks for which LLMs are commonly used, among which test case preparation and program repair are the most representative. It also analyzes the commonly used LLMs, the types of prompt engineering employed, and the techniques that accompany these LLMs. It further summarizes the key challenges and potential opportunities in this direction. This work can serve as a roadmap for future research in this area, highlighting potential avenues for exploration and identifying gaps in our current understanding of the use of LLMs in software testing. Comment: 20 pages, 11 figures
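
    The survey reports on how others use LLMs rather than prescribing a method; purely to illustrate the most common task it identifies, test-case preparation, here is a minimal sketch. The complete() function is a placeholder for whatever LLM endpoint is available, not a real API, and the prompt wording is my own assumption.

```python
# Illustrative sketch of LLM-based test-case generation. `complete` is a placeholder
# for an actual LLM call (local model or hosted API); nothing here is prescribed by
# the survey itself.
def complete(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM of your choice and return its completion."""
    raise NotImplementedError("wire this up to an actual LLM endpoint")

def generate_unit_tests(function_source: str, framework: str = "pytest") -> str:
    prompt = (
        "You are a software testing assistant.\n"
        f"Write {framework} unit tests for the Python function below.\n"
        "Cover normal inputs, boundary values, and at least one invalid input.\n"
        "Return only the test code.\n\n"
        + function_source
    )
    return complete(prompt)

if __name__ == "__main__":
    src = "def clamp(x, lo, hi):\n    return max(lo, min(x, hi))\n"
    print(generate_unit_tests(src))   # raises until `complete` is implemented
```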

    Large Language Models for Software Engineering: A Systematic Literature Review

    Large Language Models (LLMs) have significantly impacted numerous domains, notably including Software Engineering (SE). Nevertheless, a well-rounded understanding of the application, effects, and possible limitations of LLMs within SE is still in its early stages. To bridge this gap, our systematic literature review takes a deep dive into the intersection of LLMs and SE, with a particular focus on understanding how LLMs can be exploited in SE to optimize processes and outcomes. Through a comprehensive review approach, we collect and analyze a total of 229 research papers from 2017 to 2023 to answer four key research questions (RQs). In RQ1, we categorize and provide a comparative analysis of different LLMs that have been employed in SE tasks, laying out their distinctive features and uses. For RQ2, we detail the methods involved in data collection, preprocessing, and application in this realm, shedding light on the critical role of robust, well-curated datasets for successful LLM implementation. RQ3 allows us to examine the specific SE tasks where LLMs have shown remarkable success, illuminating their practical contributions to the field. Finally, RQ4 investigates the strategies employed to optimize and evaluate the performance of LLMs in SE, as well as the common techniques related to prompt optimization. Armed with insights drawn from addressing the aforementioned RQs, we sketch a picture of the current state-of-the-art, pinpointing trends, identifying gaps in existing research, and flagging promising areas for future study
    • 

    corecore