
    Software Engineers' Information Seeking Behavior in Change Impact Analysis - An Interview Study

    Software engineers working on large projects must navigate complex information landscapes. Change Impact Analysis (CIA) is a task that relies on engineers' successful information seeking in databases storing, e.g., source code, requirements, design descriptions, and test case specifications. Several previous approaches to supporting information seeking are task-specific; thus, understanding engineers' seeking behavior in specific tasks is fundamental. We present an industrial case study on how engineers seek information in CIA, with a particular focus on traceability and on development artifacts other than source code. We show that engineers differ in their information seeking behavior, and that some do not consider traceability particularly useful when conducting CIA. Furthermore, we observe a tendency for engineers to prefer less rigid types of support over formal approaches, i.e., engineers value support that allows flexibility in how to practically conduct CIA. Finally, given this diversity in information seeking behavior, we argue that future CIA support should embrace individual preferences for identifying change impact by supporting several seeking alternatives, including searching, browsing, and tracing.
    Comment: Accepted for publication in the proceedings of the 25th International Conference on Program Comprehension.
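
    The paper is an interview study and does not prescribe tooling, but one of the seeking alternatives it names, tracing, can be illustrated with a minimal Python sketch: a toy traceability graph over hypothetical artifact IDs, traversed transitively to list potentially impacted artifacts. The IDs and link structure are made up for illustration.

```python
from collections import deque

# Hypothetical traceability links between development artifacts.
# In practice these links would live in the requirements, design,
# and test databases mentioned in the study.
trace_links = {
    "REQ-1": ["DES-4", "TC-12"],
    "DES-4": ["SRC-foo.c", "TC-7"],
    "SRC-foo.c": [],
    "TC-12": [],
    "TC-7": [],
}

def impacted_artifacts(changed, links):
    """Follow traceability links transitively from a changed artifact."""
    seen, queue = set(), deque([changed])
    while queue:
        artifact = queue.popleft()
        for target in links.get(artifact, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

print(sorted(impacted_artifacts("REQ-1", trace_links)))
# ['DES-4', 'SRC-foo.c', 'TC-12', 'TC-7']
```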

    Current Practices for Product Usability Testing in Web and Mobile Applications

    Software usability testing is a key methodology for ensuring that applications are intuitive and easy to use for the target audience. Usability testing has direct benefits for companies, as usability improvements are often fundamental to the success of a product. A standard usability test study includes the following five steps: obtain suitable participants, design test scripts, conduct usability sessions, interpret test outcomes, and produce recommendations. Due to the increasing demand for more usable applications, effective techniques for developing usable products, as well as technologies to improve usability testing, have been widely utilized. However, as companies develop more cross-platform web and mobile apps, traditional single-platform usability testing falls short of ensuring a uniform user experience. In this report, a new strategy is proposed to promote a consistent user experience across all application versions and platforms. This method integrates the testing of different application versions, e.g., the website, mobile app, and mobile website. Participants are recruited using better-defined criteria based on their preferred devices. The usability session is conducted iteratively on several different devices, and the test results of the individual application versions are compared on a per-device basis to improve the test outcomes. This strategy is expected to extend current practices for usability testing by incorporating cross-platform consistency of software versions on most devices.
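
    The report describes a process rather than code, but the per-device comparison step can be sketched in a few lines of Python. The devices, application versions, and success rates below are hypothetical; the snippet only illustrates flagging versions whose results lag behind the best version on the same device.

```python
# Hypothetical per-device results from the same usability tasks run on the
# website, mobile app, and mobile website (task success rates, 0..1).
results = {
    "iPhone":  {"website": 0.90, "mobile_app": 0.95, "mobile_web": 0.70},
    "Android": {"website": 0.88, "mobile_app": 0.91, "mobile_web": 0.73},
    "Laptop":  {"website": 0.94},
}

# Flag application versions whose success rate trails the best version on the
# same device by more than a chosen consistency threshold.
THRESHOLD = 0.10
for device, by_version in results.items():
    best = max(by_version.values())
    for version, rate in by_version.items():
        if best - rate > THRESHOLD:
            print(f"{device}: {version} lags by {best - rate:.2f}")
```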

    Automated Fixing of Programs with Contracts

    This paper describes AutoFix, an automatic debugging technique that can fix faults in general-purpose software. To provide high-quality fix suggestions and to enable automation of the whole debugging process, AutoFix relies on the presence of simple specification elements in the form of contracts (such as pre- and postconditions). Using contracts enhances the precision of dynamic analysis techniques for fault detection and localization, and for validating fixes. The only required user input to the AutoFix supporting tool is a faulty program annotated with contracts; the tool produces a collection of validated fixes for the fault, ranked according to an estimate of their suitability. In an extensive experimental evaluation, we applied AutoFix to over 200 faults in four code bases of different maturity and quality (of implementation and of contracts). AutoFix successfully fixed 42% of the faults, producing, in the majority of cases, corrections of quality comparable to those competent programmers would write; the computational resources used were modest, with an average time per fix below 20 minutes on commodity hardware. These figures compare favorably to the state of the art in automated program fixing and demonstrate that the AutoFix approach can successfully be applied to reduce the debugging burden in real-world scenarios.
    Comment: Minor changes after proofreading.
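
    AutoFix targets Eiffel programs and their native contracts; the Python sketch below, with made-up routine and helper names, only illustrates the underlying idea of using pre- and postconditions as an oracle when validating a candidate fix, not the AutoFix algorithm itself.

```python
# Contracts expressed as runtime assertions (a Python stand-in for
# Eiffel pre- and postconditions; the routine is hypothetical).
def move_item(stack, item):
    # Precondition: the stack has spare capacity.
    assert len(stack) < 10, "precondition violated: stack full"
    stack.append(item)
    # Postcondition: the item is now on top of the stack.
    assert stack[-1] == item, "postcondition violated"
    return stack

def validate_fix(candidate, test_inputs):
    """Keep a candidate fix only if no contract is violated on any test."""
    for stack, item in test_inputs:
        try:
            candidate(list(stack), item)
        except AssertionError:
            return False
    return True

print(validate_fix(move_item, [([], "a"), ([1, 2], "b")]))  # True
```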

    Automatically Fixing Syntax Errors Using the Levenshtein Distance

    Abstract: To ensure high-quality software, much emphasis is placed on software testing. While a number of techniques and tools already exist to identify and locate syntax errors, it is still the duty of programmers to manually fix each of these uncovered syntax errors. In this paper we propose an approach to automate the task of fixing syntax errors by using existing compilers and the Levenshtein distance between the identified bug and the possible fixes. The Levenshtein distance is a measure of the similarity between two strings. A prototype, called ASBF, has been built, and a number of tests have been carried out which show that the technique works well in most cases. ASBF is able to automatically fix syntax errors in any erroneous source file and can also process several erroneous files in a source folder. The tests also show that the technique can be applied to multiple programming languages. Currently ASBF can automatically fix software bugs in the Java and Python programming languages. The tool also has auto-learning capabilities: it can automatically learn from corrections made manually by a user and can thereafter couple this learning process with the Levenshtein distance to improve its bug correction capabilities.
    Keywords: automatically fixing syntax errors, bug fixing, auto-learn, Levenshtein distance, Java, Python.
    (Article history: Received 16 September 2016 and accepted 9 December 2016.)
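
    As a rough illustration of the core ingredient, the Python snippet below implements the standard dynamic-programming Levenshtein distance and uses it to rank hypothetical candidate fixes for a mistyped token; it is not the ASBF tool itself.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Rank hypothetical candidate fixes for a mistyped Java keyword by distance.
candidates = ["while", "if", "switch", "for"]
erroneous = "whlie"
print(sorted(candidates, key=lambda fix: levenshtein(erroneous, fix)))
# 'while' ranks first as the closest candidate fix
```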

    Empirically-Grounded Construction of Bug Prediction and Detection Tools

    There is an increasing demand for high-quality software, as software bugs have an economic impact not only on software projects but also on national economies in general. Software quality is achieved via the main quality assurance activities of testing and code reviewing. However, these activities are expensive, so they need to be carried out efficiently. Auxiliary software quality tools such as bug detection and bug prediction tools help developers focus their testing and reviewing activities on the parts of the software that are more likely to contain bugs. However, these tools are far from being adopted as mainstream development tools. Previous research points to their inability to adapt to the peculiarities of projects and their high rate of false positives as the main obstacles to their adoption. We propose empirically-grounded analysis to improve the adaptability and efficiency of bug detection and prediction tools. For a bug detector to be efficient, it needs to detect bugs that are conspicuous, frequent, and specific to a software project. We empirically show that null-related bugs fulfill these criteria and are worth building detectors for. We analyze the null dereferencing problem and find that its root cause lies in methods that return null. We propose an empirical solution to this problem that relies on the wisdom of the crowd: for each API method, we extract a nullability measure that expresses how often the return value of that method is checked against null in the ecosystem of the API. We use nullability to annotate API methods with nullness annotations and to warn developers about missing and excessive null checks. For a bug predictor to be efficient, it needs to be optimized both as a machine learning model and as a software quality tool. We empirically show how feature selection and hyperparameter optimization improve prediction accuracy. We then optimize bug prediction to locate the maximum number of bugs in the minimum amount of code by finding the most cost-effective combination of bug prediction configurations, i.e., independent variables, machine learning model, and response variable. We show that using both source code and change metrics as independent variables, applying feature selection on them, and then using an optimized Random Forest to predict the number of bugs results in the most cost-effective bug predictor. Throughout this thesis, we show how empirically-grounded analysis helps us achieve efficient bug prediction and detection tools and adapt them to the characteristics of each software project.
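
    As a rough, self-contained sketch of the configuration reported as most cost-effective (feature selection over code and change metrics, plus a Random Forest predicting bug counts), the snippet below uses scikit-learn on synthetic data; the metric semantics and parameter choices are illustrative and are not taken from the thesis.

```python
# Minimal sketch: feature selection + Random Forest predicting bug counts,
# then ranking files so review effort goes where bugs are most likely.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_files = 500
# Columns stand in for metrics such as lines of code, complexity, churn, ...
X = rng.normal(size=(n_files, 8))
y = np.maximum(0, 2 * X[:, 0] + X[:, 2] + rng.normal(size=n_files)).round()

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(
    SelectKBest(f_regression, k=4),          # keep the most predictive metrics
    RandomForestRegressor(n_estimators=200, random_state=0),
)
model.fit(X_train, y_train)

# Files predicted to contain the most bugs come first.
ranking = np.argsort(-model.predict(X_test))
print("files to review first:", ranking[:5])
```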