
    Investigating Automatic Static Analysis Results to Identify Quality Problems: an Inductive Study

    Background: Automatic static analysis (ASA) tools examine source code to discover "issues", i.e., code patterns that are symptoms of bad programming practices and that can lead to defective behavior. Studies in the literature have shown that these tools find defects earlier than other verification activities, but they produce a substantial number of false-positive warnings. For this reason, an alternative approach is to use the set of ASA issues to identify defect-prone files and components rather than focusing on the individual issues. Aim: We conducted an exploratory study to investigate whether ASA issues can be used as early indicators of faulty files and components and, for the first time, whether they point to a decay of specific software quality attributes, such as maintainability or functionality. Our aim is to understand the critical parameters and feasibility of such an approach to feed into future research on more specific quality and defect prediction models. Method: We analyzed an industrial C# web application using the Resharper ASA tool and explored whether significant correlations exist in such a data set. Results: We found promising results when predicting defect-prone files. Specific Resharper issue categories are better indicators of faulty files than common software metrics or the collection of issues across all categories, and these categories correlate with different software quality attributes. Conclusions: Our advice for future research is to perform the analysis at the file rather than the component level and to evaluate the generalizability of categories. We also recommend using larger datasets, as we learned that data sparseness can lead to challenges in the proposed analysis process.
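
    As a rough illustration of the kind of analysis the study describes, the sketch below correlates per-file ASA issue counts (by category) with defect counts using Spearman rank correlation. The CSV name, column layout, and significance threshold are assumptions for illustration, not the authors' actual pipeline.

```python
# Hedged sketch: correlate per-file ASA issue counts with defect counts.
# File name and column names ("issues_<category>", "defects") are assumed.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("issues_per_file.csv")  # hypothetical per-file data
categories = [c for c in df.columns if c.startswith("issues_")]

for cat in categories:
    rho, p = spearmanr(df[cat], df["defects"])
    flag = "*" if p < 0.05 else " "  # naive threshold, illustration only
    print(f"{flag} {cat:<30} rho={rho:+.2f}  p={p:.3f}")
```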

    Development of Computer Science Disciplines - A Social Network Analysis Approach

    In contrast to many other scientific disciplines, computer science treats conference publications as first-class research outputs. Conferences have the advantage of fast publication and of bringing researchers together to present and discuss papers with peers. Previous work on knowledge mapping focused on maps of all sciences or of a particular domain, based on the ISI Journal Citation Report (JCR). Although this data covers most of the important journals, it lacks computer science conference and workshop proceedings, resulting in an imprecise and incomplete analysis of computer science knowledge. This paper presents an analysis of the computer science knowledge network constructed from all types of publications, aiming to provide a complete view of computer science research. Based on the combination of two important digital libraries (DBLP and CiteSeerX), we study the knowledge network created at the journal/conference level using citation linkage, to identify the development of sub-disciplines. We investigate the collaborative and citation behavior of journals/conferences by analyzing the properties of their co-authorship and citation subgraphs. The paper draws several important conclusions. First, conferences constitute social structures that shape computer science knowledge. Second, computer science is becoming more interdisciplinary. Third, experts are the key success factor for the sustainability of journals/conferences.
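
    A minimal sketch of one step such a study might take: aggregating paper-level citations into a venue-level citation graph with networkx and reading off simple structural properties. The toy paper-to-venue map and edge list are assumptions; the actual study works from combined DBLP and CiteSeerX records.

```python
# Hedged sketch: build a venue-level citation graph from paper citations.
import networkx as nx

paper_venue = {"p1": "ICSE", "p2": "TSE", "p3": "ICSE"}  # toy paper -> venue
citations = [("p1", "p2"), ("p3", "p2"), ("p1", "p3")]   # citing -> cited

G = nx.DiGraph()
for citing, cited in citations:
    u, v = paper_venue[citing], paper_venue[cited]
    if u == v:
        continue  # drop within-venue self-citations for this sketch
    if G.has_edge(u, v):
        G[u][v]["weight"] += 1
    else:
        G.add_edge(u, v, weight=1)

print(nx.density(G))  # how tightly venues cite one another
print(sorted(G.in_degree(weight="weight"), key=lambda kv: -kv[1]))  # most-cited venues
```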

    Atmospheric effects and spurious signals in GPS analyses

    Improvements in the analyses of Global Positioning System (GPS) observations yield resolvable millimeter to submillimeter differences in coordinate estimates, thus providing sufficient resolution to distinguish subtle differences in analysis methodologies. Here we investigate the effects on site coordinates of using different approaches to modeling atmospheric loading deformation (ATML) and handling of tropospheric delays. The rigorous approach of using the time-varying Vienna Mapping Function 1 yields solutions with lower noise at a range of frequencies compared with solutions generated using empirical mapping functions. This is particularly evident when ATML is accounted for. Some improvement also arises from using improved a priori zenith hydrostatic delays (ZHD), with the combined effect being site-specific. Importantly, inadequacies in both mapping functions and a priori ZHDs introduce not only time-correlated noise but also significant periodic terms at solar annual and semiannual periods. We find no significant difference between solutions where nontidal ATML is applied at the observation level rather than as a daily averaged value, but failing to model diurnal and semidiurnal tidal ATML at the observation level can introduce anomalous propagated signals with periods that closely match the GPS draconitic annual (∼351.4 days) and semiannual (∼175.7 days) periods. Exacerbated by not fixing ambiguities, these signals are evident in both stacked and single-site power spectra, with each tide contributing roughly equally to the dominant semiannual peak. The amplitude of the propagated signal reaches a maximum of 0.8 mm, with a clear latitudinal dependence that is not correlated directly with locations of maximum tidal amplitude.

    Australian Research Council's Discovery Project
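
    An illustrative sketch, not the authors' processing chain: a Lomb-Scargle periodogram (which tolerates the gaps typical of real GPS series) applied to a synthetic daily height series with a planted signal near the draconitic semiannual period. All values below are made up for demonstration.

```python
# Hedged sketch: look for power near ~175.7 days in a synthetic series.
import numpy as np
from scipy.signal import lombscargle

t = np.arange(0.0, 3000.0)                 # epochs in days (synthetic)
rng = np.random.default_rng(0)
h = 0.8 * np.sin(2 * np.pi * t / 175.7) + rng.normal(0.0, 1.0, t.size)  # mm

periods = np.linspace(100.0, 400.0, 2000)  # periods (days) to scan
omega = 2.0 * np.pi / periods              # angular frequencies (rad/day)
power = lombscargle(t, h - h.mean(), omega, normalize=True)

print(f"strongest period: {periods[np.argmax(power)]:.1f} days")  # ~175.7
```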

    Evaluation Criteria for Object-oriented Metrics

    In this paper, an evaluation model for object-oriented (OO) metrics is proposed. We evaluated the existing evaluation criteria for OO metrics and, based on the observations, propose a model that covers most of the features needed for the evaluation of OO metrics. The model is validated by applying it to existing OO metrics. In contrast to the other existing criteria, the proposed model is simple to implement and includes the practical and important aspects of evaluation; hence it is suitable for evaluating and validating any OO complexity metric.
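
    A hypothetical sketch of how a checklist-style evaluation model of this kind could be applied: each candidate OO metric is scored against a fixed set of criteria. The criterion names and the binary scoring are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch: score OO metrics against an assumed criteria checklist.
CRITERIA = ["theoretical_validity", "empirical_validation",
            "ease_of_computation", "language_independence"]

def evaluate(metric_name, satisfied):
    """satisfied: set of criteria the metric meets (illustrative judgments)."""
    score = sum(1 for c in CRITERIA if c in satisfied) / len(CRITERIA)
    return metric_name, round(score, 2)

# Example with made-up judgments for the WMC (Weighted Methods per Class) metric:
print(evaluate("WMC", {"theoretical_validity", "ease_of_computation"}))
```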