Automatic Repair of Buggy If Conditions and Missing Preconditions with SMT
We present Nopol, an approach for automatically repairing buggy if conditions
and missing preconditions. As input, it takes a program and a test suite which
contains passing test cases modeling the expected behavior of the program and
at least one failing test case embodying the bug to be repaired. It consists of
collecting data from multiple instrumented test suite executions, transforming
this data into a Satisfiability Modulo Theory (SMT) problem, and translating
the SMT result -- if there exists one -- into a source code patch. Nopol
repairs object oriented code and allows the patches to contain nullness checks
as well as specific method calls. Comment: CSTVA'2014, India (2014).
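The pipeline the abstract describes (instrumented runs, an SMT problem, a patch) can be illustrated with a much simpler stand-in: instead of encoding the constraints for an SMT solver, the sketch below enumerates a small space of comparison predicates over variable snapshots and keeps those consistent with every recorded expected truth value. This is a hedged illustration of the synthesis step only; the variable names and data are invented and this is not Nopol's actual implementation.

```python
# Simplified stand-in for Nopol's condition-synthesis step: rather
# than solving an SMT problem, enumerate candidate comparison
# predicates over observed variables and keep those that agree with
# the truth value the buggy `if` should have taken in every
# instrumented test run. All names and data are illustrative.
from itertools import combinations

def synthesize_condition(snapshots):
    """snapshots: list of (env, expected) where env maps variable
    names to ints and expected is the truth value the condition
    should evaluate to for that run's test to pass."""
    variables = sorted(snapshots[0][0])
    ops = {"<": lambda a, b: a < b,
           "<=": lambda a, b: a <= b,
           "==": lambda a, b: a == b,
           "!=": lambda a, b: a != b}
    constants = {0, 1}
    for env, _ in snapshots:
        constants.update(env.values())
    candidates = []
    for name, op in ops.items():
        # Candidate conditions of the form `v op w` ...
        for v, w in combinations(variables, 2):
            if all(op(env[v], env[w]) == exp for env, exp in snapshots):
                candidates.append(f"{v} {name} {w}")
        # ... and of the form `v op constant`.
        for v in variables:
            for c in sorted(constants):
                if all(op(env[v], c) == exp for env, exp in snapshots):
                    candidates.append(f"{v} {name} {c}")
    return candidates

# Three runs whose expected condition values are consistent with,
# among other candidates, the repaired condition `x <= y`.
runs = [({"x": 1, "y": 2}, True),
        ({"x": 3, "y": 3}, True),
        ({"x": 5, "y": 4}, False)]
print(synthesize_condition(runs))
```

A real SMT encoding would additionally rank candidates and scale to richer theories (arithmetic over multiple variables, nullness checks), but the input-output shape is the same: runtime observations in, a source-level condition out.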
Automatic Software Repair: a Bibliography
This article presents a survey on automatic software repair. Automatic
software repair consists of automatically finding a solution to software bugs
without human intervention. This article considers all kinds of repairs. First,
it discusses behavioral repair where test suites, contracts, models, and
crashing inputs are taken as oracle. Second, it discusses state repair, also
known as runtime repair or runtime recovery, with techniques such as checkpoint
and restart, reconfiguration, and invariant restoration. The uniqueness of this
article is that it spans the research communities that contribute to this body
of knowledge: software engineering, dependability, operating systems,
programming languages, and security. It provides a novel and structured
overview of the diversity of bug oracles and repair operators used in the
literature.
Formalization and Runtime Verification of Invariants for Robotic Systems
Master's thesis, Informatics Engineering (Interaction and Knowledge), 2022, Universidade de Lisboa, Faculdade de Ciências.
Robotic systems are critical in today's society, be it in manufacturing, medicine, or agriculture. A potential failure in a robot may have extraordinary costs, not only financial, but can also cost lives.
Current practices in robot testing are varied and involve methods like simulation, log checking, or field testing. However, current practices often require human monitoring to determine the correctness of a given behavior. Automating this analysis can not only relieve the burden on a highly skilled engineer but also allow for massively parallel test executions that can detect behavioral faults in robots. Such faults could otherwise go undetected due to human error or a lack of time.
I have developed a Domain Specific Language to specify the properties of robotic systems in the Robot Operating System (ROS). Developer-written specifications in this language compile to a monitor ROS module that detects violations of those properties at runtime. I have used this language to express temporal and positional properties of robots, using Linear Temporal Logic as the basis for the language's definition. I have also automated the monitoring of some behavioral violations of robots in relation to their state or events during a simulation, by relating the internal information of the system to the corresponding information in the simulator.
To evaluate the developed work, I went through a list of documented ROS bugs and identified some that occur at runtime. Using these bugs as a basis, I specified in the developed language the robot properties that should be capable of detecting each error, in order to test both the expressiveness of the language and the monitoring while running the system.
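As a rough illustration of what such a compiled monitor can look like, the sketch below checks a bounded-response property ("after `request`, a `response` must occur within k messages") over a finite message trace, the way a monitor node would evaluate a topic stream. The property, event names, and class are hypothetical stand-ins, not the thesis' actual DSL or generated module.

```python
# Hedged sketch of a runtime monitor for the LTL-style bounded
# response property G(request -> F<=k response), evaluated
# incrementally over a stream of events. Illustrative only.
class BoundedResponseMonitor:
    def __init__(self, k):
        self.k = k
        self.pending = []     # steps remaining for each open request
        self.violations = 0

    def step(self, event):
        # Age every open obligation by one message.
        self.pending = [t - 1 for t in self.pending]
        if event == "response":
            self.pending.clear()          # a response discharges all
        # Any obligation that reached zero without a response fails.
        self.violations += sum(1 for t in self.pending if t <= 0)
        self.pending = [t for t in self.pending if t > 0]
        if event == "request":
            self.pending.append(self.k)   # open a new obligation

# The first request is answered in time; the second never is.
trace = ["request", "idle", "response", "request", "idle", "idle", "idle"]
m = BoundedResponseMonitor(k=2)
for e in trace:
    m.step(e)
print(m.violations)  # → 1
```

A real monitor would subscribe to ROS topics and compare system state against simulator state rather than consume a precomputed trace, but the incremental evaluation of obligations is the same idea.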
Learning representations for effective and explainable software bug detection and fixing
Software has an integral role in modern life; hence software bugs, which undermine software quality and reliability, have substantial societal and economic implications. The advent of machine learning and deep learning in software engineering has led to major advances in bug detection and fixing approaches, yet they fall short of desired precision and recall. This shortfall arises from the absence of a 'bridge,' known as learning code representations, that can transform information from source code into a suitable representation for effective processing via machine and deep learning.
This dissertation builds such a bridge. Specifically, it presents solutions for effectively learning code representations using four distinct methods (context-based, testing results-based, tree-based, and graph-based), thus improving bug detection and fixing approaches, as well as providing developers insight into the foundational reasoning. The experimental results demonstrate that using learning code representations can significantly enhance explainable bug detection and fixing, showcasing the practicability and meaningfulness of the approaches formulated in this dissertation toward improving software quality and reliability.
On Oracles for Automated Diagnosis and Repair of Software Bugs
This HDR focuses on my work on automatic diagnosis and repair done over the past years. Among my past publications, it highlights three contributions on this topic, respectively published in ACM Transactions on Software Engineering and Methodology (TOSEM), IEEE Transactions on Software Engineering (TSE), and Elsevier Information & Software Technology (IST). My goal is to show that those three contributions share something deep: they are founded on a unifying concept, that of the oracle. The first contribution is about statistical oracles. In the context of object-oriented software, we have defined a notion of context and normality that is specific to a fault class: missing method calls. Those inferred regularities act as an oracle, and their violations are considered bugs. The second contribution is about test-case-based oracles for automatic repair. We describe an automatic repair system that fixes failing test cases by generating a patch. It is founded on the idea of refining the knowledge given by the violation of the failing test case's oracle into finer-grain information, which we call a "micro-oracle". By considering micro-oracles, we can obtain at the same time a precise fault localization diagnostic and a well-formed input-output specification to be used for program synthesis in order to repair a bug. The third contribution discusses a novel generic oracle in the context of exception handling. A generic oracle states properties that hold for many domains. Our technique verifies compliance with this new oracle using test suite execution and exception injection. This document concludes with a research agenda about the future of engineering ultra-dependable and antifragile software systems.
Explainable Automated Debugging via Large Language Model-driven Scientific Debugging
Automated debugging techniques have the potential to reduce developer effort
in debugging, and have matured enough to be adopted by industry. However, one
critical issue with existing techniques is that, while developers want
rationales for the provided automatic debugging results, existing techniques
are ill-suited to provide them, as their deduction process differs
significantly from that of human developers. Inspired by the way developers
interact with code when debugging, we propose Automated Scientific Debugging
(AutoSD), a technique that given buggy code and a bug-revealing test, prompts
large language models to automatically generate hypotheses, uses debuggers to
actively interact with buggy code, and thus automatically reach conclusions
prior to patch generation. By aligning the reasoning of automated debugging
more closely with that of human developers, we aim to produce intelligible
explanations of how a specific patch has been generated, with the hope that the
explanation will lead to more efficient and accurate developer decisions. Our
empirical analysis on three program repair benchmarks shows that AutoSD
performs competitively with other program repair baselines, and that it can
indicate when it is confident in its results. Furthermore, we perform a human
study with 20 participants, including six professional developers, to evaluate
the utility of explanations from AutoSD. Participants with access to
explanations could judge patch correctness in roughly the same time as those
without, but their accuracy improved for five out of six real-world bugs studied. 70% of participants answered that they wanted explanations when using repair tools, while 55% answered that they were satisfied with the Scientific Debugging presentation.
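The hypothesize-experiment-conclude loop described above can be sketched without any LLM by stubbing the hypothesis generator with a canned list, and reducing the "debugger interaction" to direct calls against a classic buggy mid-of-three function. Everything below is an illustrative assumption, not AutoSD's actual prompts, models, or tooling.

```python
# Hedged sketch of the Scientific Debugging loop: each hypothesis
# pairs a natural-language guess with an executable experiment; an
# experiment that observes incorrect behavior confirms its hypothesis.
def buggy_middle(a, b, c):
    # Intended to return the middle of three numbers; contains a bug.
    if b < c:
        if a < b:
            return b
        elif a < c:
            return b      # bug: should return a
    else:
        if a > b:
            return b
        elif a > c:
            return a
    return c

hypotheses = [
    ("the comparison uses < where <= is needed",
     lambda: buggy_middle(2, 2, 3) == 2),
    ("when a is the middle value, b is returned instead",
     lambda: buggy_middle(2, 1, 3) == 2),
]

def scientific_debugging(hypotheses):
    log = []
    for claim, experiment in hypotheses:
        observed_ok = experiment()          # "run the debugger"
        log.append((claim, observed_ok))
        if not observed_ok:
            return claim, log               # hypothesis confirmed
    return None, log

conclusion, log = scientific_debugging(hypotheses)
print(conclusion)  # prints the confirmed second hypothesis
```

The log of refuted and confirmed hypotheses is what makes the process explainable: it records not just the conclusion reached before patch generation, but the experiments that ruled the alternatives out.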