
    Improving software quality with programming patterns

    Software systems and services are increasingly important, involving and improving the work and lives of billions of people. However, software development is still human-intensive and error-prone. Established studies report that software failures cost the global economy $312 billion annually and that software vendors often spend 50-75% of the total development cost on finding and fixing bugs, i.e. subtle programming errors that cause software failures. People rarely develop software from scratch; instead, they frequently reuse existing software artifacts. In this dissertation, we focus on programming patterns, i.e. frequently occurring code resulting from reuse, and explore their potential for improving software quality. Specifically, we develop techniques for recovering programming patterns and using them to find, fix, and prevent bugs more effectively. This dissertation has two main contributions. One is the Graph-based Object Usage Model (GROUM), a graph-based representation of source code. A GROUM abstracts a fragment of code as a graph representing its object usages. In a GROUM, nodes correspond to function calls and control structures, while edges capture control and data relationships between them. Based on GROUM, we developed a graph mining technique that can recover programming patterns of API usage and use them for detecting bugs. GROUM is also used to find similar bugs and recommend similar bug fixes. The other main contribution of this dissertation is SLAMC, a Statistical Semantic LAnguage Model for Source Code. SLAMC represents code as sequences of code elements with different roles, e.g. data types, variables, or functions, and annotates those elements with sememes, a text-based annotation of their semantic information. SLAMC models the regularities over the sememe sequences using code-based factors such as local code context, global concerns, and pair-wise associations, and thus implicitly captures programming idioms and patterns as sequences with high probabilities. Based on SLAMC, we developed a technique for recommending the most likely next code sequences, which could improve programming productivity and might reduce the odds of programming errors. Empirical evaluation shows that our approaches can detect meaningful programming patterns and anomalies that might cause bugs or maintenance issues, and thus could improve software quality. In addition, our models have been successfully applied to several other problems, from library adaptation and code migration to bug fix generation. They also have several other potential applications, which we will explore in future work.
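    As a rough illustration of the representation the abstract describes, the following Python sketch models a GROUM-like graph in which nodes are function/method calls or control structures and edges carry control or data (shared-object) dependencies. The class and method names, and the simplified node/edge kinds, are illustrative assumptions, not the dissertation's actual API.

    # A minimal sketch of a GROUM-like object-usage graph (illustrative, not the
    # dissertation's implementation).
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Node:
        label: str   # e.g. "FileReader.new", "FileReader.read", or "IF", "WHILE"
        kind: str    # "call" for function/method calls, "control" for control structures

    @dataclass(frozen=True)
    class Edge:
        src: Node
        dst: Node
        kind: str    # "control" (ordering) or "data" (shared-object) dependency

    @dataclass
    class Groum:
        nodes: set = field(default_factory=set)
        edges: set = field(default_factory=set)

        def add(self, src: Node, dst: Node, kind: str) -> None:
            # Register both endpoints and the dependency between them.
            self.nodes.update({src, dst})
            self.edges.add(Edge(src, dst, kind))

    # Usage: the common "open, read, close" usage of a file-reader object.
    g = Groum()
    new_call, read_call, close_call = (Node(n, "call") for n in
        ("FileReader.new", "FileReader.read", "FileReader.close"))
    g.add(new_call, read_call, "data")     # the same object flows from new to read
    g.add(read_call, close_call, "data")   # and from read to close

    A graph mined this way can then be matched against other code fragments: fragments whose usage graphs omit part of a frequent pattern (for example, a missing close after read) are candidate anomalies, which is the bug-detection idea the abstract outlines.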

    Leachate treatment by conventional coagulation, electrocoagulation and two-stage coagulation (conventional coagulation and electrocoagulation)

    Leachate is widely explored and investigated because it is highly polluted and difficult to treat. Leachate treatment commonly involves advanced, complicated, and costly processes. Conventional coagulation is widely used in wastewater treatment, but sludge production is its biggest constraint. Electrocoagulation is an alternative to the conventional method because it serves the same purpose while producing less sludge and requiring simpler equipment. Thus, combining conventional coagulation and electrocoagulation can improve the efficiency of the coagulation process in leachate treatment. This article focuses on the efficiency of the single and combined treatments as well as the improvement achieved by the combined treatment. Based on the review, the reduction in current density and coagulant dose was perceptible: reductions of as much as 50% in current density, treatment duration, and coagulant dose can be obtained with the combined treatment. The combined treatment therefore reduces cost while also shortening the treatment duration. Hence, the combined treatment offers an alternative technique for removing pollutants from landfill leachate.

    AutoBayes: A System for Generating Data Analysis Programs from Statistical Models

    Data analysis is an important scientific task that is required whenever information needs to be extracted from raw data. Statistical approaches to data analysis, which use methods from probability theory and numerical analysis, are well-founded but difficult to implement: the development of a statistical data analysis program for any given application is time-consuming and requires substantial knowledge and experience in several areas. In this paper, we describe AutoBayes, a program synthesis system for the generation of data analysis programs from statistical models. A statistical model specifies the properties for each problem variable (i.e., observation or parameter) and its dependencies in the form of a probability distribution. It is a fully declarative problem description, similar in spirit to a set of differential equations. From such a model, AutoBayes generates optimized and fully commented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Code is produced by a schema-guided deductive synthesis process. A schema consists of a code template and applicability constraints which are checked against the model during synthesis using theorem-proving technology. AutoBayes augments schema-guided synthesis with symbolic-algebraic computation and can thus derive closed-form solutions for many problems. It is well suited for tasks like estimating the best-fitting model parameters for given data. Here, we describe AutoBayes's system architecture, in particular the schema-guided synthesis kernel. Its capabilities are illustrated by a number of advanced textbook examples and benchmarks.
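    To make the idea of a closed-form solution concrete, the short Python sketch below shows the kind of maximum-likelihood estimate (mean and variance of a Gaussian over i.i.d. observations) that a schema-guided synthesizer could derive symbolically and emit as generated code. This is a hand-written illustration under those assumptions, not AutoBayes output, which would be optimized C/C++.

    # A minimal sketch (not AutoBayes output): closed-form ML estimates for a
    # Gaussian model, mu = mean(x) and sigma^2 = mean((x - mu)^2).
    from typing import Sequence, Tuple

    def fit_gaussian(x: Sequence[float]) -> Tuple[float, float]:
        """Return the maximum-likelihood mean and variance of the observations."""
        n = len(x)
        mu = sum(x) / n
        sigma2 = sum((xi - mu) ** 2 for xi in x) / n
        return mu, sigma2

    # Usage: estimate best-fitting parameters for observed data.
    mu, sigma2 = fit_gaussian([2.0, 2.5, 3.1, 2.8])

    For models without such closed forms, the abstract notes that synthesis falls back on schemas encoding other estimation strategies, with applicability constraints checked against the model by theorem proving.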