
    A Survey of Software Metric Use in Research Software Development

    Background: Breakthroughs in research increasingly depend on complex software libraries, tools, and applications aimed at supporting specific science, engineering, business, or humanities disciplines. The complexity and criticality of this software motivate the need for ensuring quality and reliability. Software metrics are a key tool for assessing, measuring, and understanding software quality and reliability. Aims: The goal of this work is to better understand how research software developers use traditional software engineering concepts, like metrics, to support and evaluate both the software and the software development process. One key aspect of this goal is to identify how the set of metrics relevant to research software corresponds to the metrics commonly used in traditional software engineering. Method: We surveyed research software developers to gather information about their knowledge and use of code metrics and software process metrics. We also analyzed the influence of demographics (project size, development role, and development stage) on these metrics. Results: The survey results, from 129 respondents, indicate that respondents have a general knowledge of metrics. However, their knowledge of specific software engineering metrics is lacking, and their use is even more limited. The most used metrics relate to performance and testing. Even though code complexity often poses a significant challenge to research software development, respondents did not indicate much use of code metrics. Conclusions: Research software developers appear to be interested in and see some value in software metrics but may be encountering roadblocks when trying to use them. Further study is needed to determine the extent to which these metrics could provide value in continuous process improvement.

    An Introduction to Slice-Based Cohesion and Coupling Metrics

    This report provides an overview of slice-based software metrics. It brings together information about the development of the metrics from Weiser’s original idea that program slices may be used in the measurement of program complexity, with alternative slice-based measures proposed by other researchers. In particular, it details two aspects of slice-based metric calculation not covered elsewhere in the literature: output variables and worked examples of the calculations. First, output variables are explained, their use explored, and standard reference terms and usage proposed. Calculating slice-based metrics requires a clear understanding of ‘output variables’ because they form the basis for extracting the program slices on which the calculations depend. This report includes a survey of the variation in the definition of output variables used by different research groups and suggests standard terms of reference for these variables. Our study identifies four elements which are combined in the definition of output variables. These are the function return value, modified global variables, modified reference parameters and variables printed or otherwise output by the module. Second, slice-based metric calculations are explained with the aid of worked examples, to assist newcomers to the field. Step-by-step calculations of slice-based cohesion and coupling metrics based on the vertices output by the static analysis tool CodeSurfer (R) are presented and compared with line-based calculations.
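The slice-based cohesion calculations the report walks through can be illustrated with a small worked example. The sketch below computes three Weiser-style cohesion metrics (coverage, tightness, and overlap), under the simplifying assumption that each output variable's slice is represented as a set of statement line numbers; the function names and the toy module are illustrative, not drawn from the report or from CodeSurfer.

```python
# Weiser-style slice-based cohesion metrics over sets of statement lines.
# Assumption: each slice is the set of line numbers in one output
# variable's backward slice; the module is the set of all its lines.

def coverage(slices, module_lines):
    """Mean proportion of the module captured by each slice."""
    return sum(len(s) / len(module_lines) for s in slices) / len(slices)

def tightness(slices, module_lines):
    """Proportion of the module common to every slice."""
    common = set.intersection(*slices)
    return len(common) / len(module_lines)

def overlap(slices):
    """Mean proportion of each slice lying in the intersection of all slices."""
    common = set.intersection(*slices)
    return sum(len(common) / len(s) for s in slices) / len(slices)

# Toy module of 10 statements with two output variables.
module = set(range(1, 11))
slice_a = {1, 2, 3, 4, 5, 6}   # slice for the function return value
slice_b = {1, 2, 3, 7, 8}      # slice for a modified global variable

print(coverage([slice_a, slice_b], module))   # 0.55
print(tightness([slice_a, slice_b], module))  # 0.3
print(overlap([slice_a, slice_b]))            # 0.55
```

High values on all three metrics indicate that the module's output variables are computed from largely the same statements, i.e. the module does one cohesive job.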

    Toward Data-Driven Discovery of Software Vulnerabilities

    Over the years, Software Engineering, as a discipline, has recognized the potential for engineers to make mistakes and has incorporated processes to prevent such mistakes from becoming exploitable vulnerabilities. These processes span the spectrum from using unit/integration/fuzz testing, static/dynamic/hybrid analysis, and (automatic) patching to discover instances of vulnerabilities to leveraging data mining and machine learning to collect metrics that characterize attributes indicative of vulnerabilities. Among these processes, metrics have the potential to uncover systemic problems in the product, process, or people that could lead to vulnerabilities being introduced, rather than identifying specific instances of vulnerabilities. The insights from metrics can be used to support developers and managers in making decisions to improve the product, process, and/or people with the goal of engineering secure software. Despite empirical evidence of metrics' association with historical software vulnerabilities, their adoption in the software development industry has been limited. The level of granularity at which the metrics are defined, the high false positive rate from models that use the metrics as explanatory variables, and, more importantly, the difficulty in deriving actionable intelligence from the metrics are often cited as factors that inhibit metrics' adoption in practice. Our research vision is to assist software engineers in building secure software by providing a technique that generates scientific, interpretable, and actionable feedback on security as the software evolves. In this dissertation, we present our approach toward achieving this vision through (1) systematization of vulnerability discovery metrics literature, (2) unsupervised generation of metrics-informed security feedback, and (3) continuous developer-in-the-loop improvement of the feedback.
We systematically reviewed the literature to enumerate metrics that have been proposed and/or evaluated to be indicative of vulnerabilities in software and to identify the validation criteria used to assess the decision-informing ability of these metrics. In addition to enumerating the metrics, we implemented a subset of these metrics as containerized microservices. We collected the metric values from six large open-source projects and assessed the metrics' generalizability across projects, application domains, and programming languages. We then used an unsupervised approach from the literature to compute threshold values for each metric and assessed the thresholds' ability to classify risk from historical vulnerabilities. We used the metrics' values, thresholds, and interpretation to provide developers with natural language feedback on security as they contributed changes and used a survey to assess their perception of the feedback. We initiated an open dialogue to gain insight into their expectations from such feedback. In response to developer comments, we assessed the effectiveness of an existing vulnerability discovery approach—static analysis—and that of vulnerability discovery metrics in identifying risk from vulnerability contributing commits.
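The unsupervised threshold step described above can be sketched in a few lines. The example below derives a per-metric threshold from observed values using a simple quantile and uses it to classify risk; the churn values, the 75th-percentile choice, and the function names are illustrative assumptions, not the dissertation's actual data or exact method.

```python
# Unsupervised, quantile-based threshold derivation for a vulnerability
# discovery metric. Assumption: values above the threshold indicate
# elevated risk; the 0.75 quantile is an arbitrary illustrative choice.

def quantile_threshold(values, q=0.75):
    """Return the value at quantile q, with linear interpolation."""
    ordered = sorted(values)
    pos = (len(ordered) - 1) * q
    lo, hi = int(pos), min(int(pos) + 1, len(ordered) - 1)
    return ordered[lo] + (ordered[hi] - ordered[lo]) * (pos - lo)

def classify_risk(value, threshold):
    """Flag a metric value as 'high' risk when it exceeds the threshold."""
    return "high" if value > threshold else "low"

# Churn (lines changed) per commit, mined from project history.
churn = [5, 8, 12, 20, 35, 60, 150, 400]
t = quantile_threshold(churn)

print(t)                       # 82.5
print(classify_risk(500, t))   # high
print(classify_risk(10, t))    # low
```

A threshold derived this way needs no labeled vulnerability data, which is what makes the approach unsupervised; its usefulness is then assessed against historical vulnerabilities, as the dissertation describes.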

    A new framework and learning tool to enhance the usability of software

    Edited version embargoed until 01.03.2018. Full version: access restricted permanently due to 3rd party copyright restrictions. Restriction set on 01.03.2017 by SC, Graduate School. Due to technological developments, apps (mobile applications) and web-based applications are now used daily by millions of people worldwide. Accordingly, such applications need to be usable by all groups of users, regardless of individual attributes. Thus, software usability is a fundamental metric that needs to be evaluated in order to assess software efficiency, effectiveness, learnability and user satisfaction. Consequently, a new approach is required that both educates novice software developers in software evaluation methods and promotes the use of usability evaluation methods to create usable products. This research devised a development framework and learning tool in order to enhance overall awareness and assessment practice. Furthermore, the research also focuses on Usability Evaluation Methods (UEMs) with the objective of providing novice developers with support when making decisions pertaining to the use of learning resources. The proposed development framework and its associated learning resources are titled dEv (Design Evaluation), devised to address the three key challenges identified in the literature review and reinforced by the studies. These three challenges are: (i) the involvement of users in the initial phases of the development process, (ii) the mindset and perspectives of novice developers with regard to various issues as a result of their lack of UEMs or the provision of too many, and (iii) the general lack of knowledge and awareness concerning the importance and value of UEMs. The learning tool was created in line with investigation studies, feedback and novice developers' requirements in the initial stages of the development process.
An iterative experimental approach was adopted, incorporating interviews and survey-based questionnaires. It was geared towards analysing the framework, the learning tool and their various effects. Two subsequent studies were carried out in order to test the approach adopted and provide insight into its results. The studies also reported on their ability to affect novice developers using assessment methods and to overcome a number of the difficulties associated with UEM application. This suggested approach is valuable when considering two different contributions: first, the integration of software evaluation and software development in the dEv framework, which encourages professionals to evaluate across all phases of development; second, it is able to enhance developer awareness and insight with regard to evaluation techniques and their application.

    GEOMATICS AS A SURVEY TOOL TO DOCUMENT AND ENHANCE THE CULTURAL AND LANDSCAPED HERITAGE OF THE MONUMENTAL COMPLEXES IN THE MOUNTAINS OF ABRUZZO

    Abstract. The themes of the conference provide an opportunity to exchange views on topics of study in which multidisciplinary contributions of geomatics and restoration contribute to the cognitive process aimed at the conservation of cultural heritage. In this regard, the contribution presents research aimed at the documentation and enhancement of a unique architectural and landscape patrimony preserved in the Abruzzo mountains: the numerous spiritual retreats established by Pietro da Morrone, Pope Celestino V, set among impassable rocky walls, where the architecture blends with its natural environment, camouflaged within it. The analysis refers specifically to aspects of the survey conducted over the years with the aid of integrated methodologies capable of allowing the acquisition, management and comparison of the data. In particular, it covers recent digital acquisitions involving the survey of San Bartolomeo in Legio, on the slopes of the Majella near Roccamorice, carried out with the comparative use of Agisoft PhotoScan and Pix4D software, with shots taken by drones of different sizes, able to mount professional photographic cameras and associate the GPS coordinates of the shooting point with each picture. A comparison follows between a survey carried out with a 3D laser scanner (Faro LS1105) and the described acquisitions, obtained from the ground and from drone with PhotoScan, in order to compare the two scans and the metric differences obtained with the two methods.

    Software Measurement Activities in Small and Medium Enterprises: an Empirical Assessment

    An empirical study evaluating the implementation of measurement/metric programs in software companies in one area of Turkey is presented. The research questions are discussed and validated with the help of senior software managers (more than 15 years’ experience) and then used for interviewing a variety of medium and small scale software companies in Ankara. Observations show that there is a common reluctance and lack of interest in utilizing measurements/metrics despite the fact that they are well known in the industry. A side product of this research is that internationally recognized standards such as ISO and CMMI are pursued only if they are a part of project/job requirements; without these requirements, introducing those standards to the companies remains a long-term target for increasing quality.

    An Approach for the Empirical Validation of Software Complexity Measures

    Software metrics are widely accepted tools to control and assure software quality. A large number of software metrics with a variety of content can be found in the literature; however, most of them are not adopted in industry because they are seen as irrelevant to needs or are unsupported, and the major reason behind this is improper empirical validation. This paper tries to identify possible root causes of the improper empirical validation of software metrics. A practical model for the empirical validation of software metrics is proposed to address these root causes. The model is validated by applying it to recently proposed and well-known metrics.

    The Reality of Measuring Human Service Programs: Results of a Survey

    In the summer of 2013, Idealware created and distributed a survey to learn how human service organizations from its own mailing list are actually using technology to measure and evaluate the outcomes of their programs. The survey looked at a general overview of outcomes measurement and program evaluation topics, from how frequently organizations look at data and how much time they spend doing so to what types of metrics they were tracking. To further understand the realities of measuring program effectiveness, Idealware conducted site visits and interviews at three human service organizations in Portland, Maine. The results clearly show that the respondents are struggling to measure their programs.

    Predicting and Evaluating Software Model Growth in the Automotive Industry

    The size of a software artifact influences the software quality and impacts the development process. In industry, when software size exceeds certain thresholds, memory errors accumulate and development tools might not be able to cope anymore, resulting in lengthy program start-up times, failing builds, or memory problems at unpredictable times. Thus, foreseeing critical growth in software modules meets a high demand in industrial practice. Predicting the time when the size grows to the level where maintenance is needed prevents unexpected efforts and helps to spot problematic artifacts before they become critical. Although the number of prediction approaches in the literature is vast, it is unclear how well they fit with prerequisites and expectations from practice. In this paper, we perform an industrial case study at an automotive manufacturer to explore the applicability and usability of prediction approaches in practice. In a first step, we collect the most relevant prediction approaches from the literature, including both statistical and machine learning approaches. Furthermore, we elicit expectations towards predictions from practitioners using a survey and stakeholder workshops. At the same time, we measure the software size of 48 software artifacts by mining four years of revision history, resulting in 4,547 data points. In the last step, we assess the applicability of state-of-the-art prediction approaches using the collected data by systematically analyzing how well they fulfill the practitioners' expectations. Our main contribution is a comparison of commonly used prediction approaches in a real-world industrial setting while considering stakeholder expectations. We show that the approaches provide significantly different results regarding prediction accuracy and that the statistical approaches fit our data best.
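The kind of statistical prediction the study favors can be sketched very simply: fit a trend to a module's size history and estimate when the trend crosses a maintenance threshold. The sketch below uses ordinary least squares on weekly size measurements; the data points, the threshold, and the function names are illustrative assumptions, not the study's actual measurements or model.

```python
# Linear-trend prediction of software size growth. Assumption: size grows
# roughly linearly over revisions, so an OLS fit gives a usable estimate
# of when a maintenance threshold will be crossed.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def time_until(threshold, slope, intercept):
    """Time at which the fitted line reaches the threshold (None if not growing)."""
    return (threshold - intercept) / slope if slope > 0 else None

# Model size (e.g., number of blocks) measured at six weekly revisions.
weeks = [0, 1, 2, 3, 4, 5]
sizes = [100, 110, 118, 131, 140, 151]

slope, intercept = fit_line(weeks, sizes)
print(round(time_until(300, slope, intercept), 1))  # week at which size hits 300
```

More elaborate approaches (polynomial trends, machine-learning regressors) follow the same shape: fit on the mined revision history, then extrapolate to the threshold; the study's finding is that for its data the simple statistical fits performed best.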