Software Measurement Activities in Small and Medium Enterprises: an Empirical Assessment
An empirical study evaluating the proper implementation of measurement/metric programs in software companies in one region of Turkey is presented. The research questions are discussed and validated with the help of senior software managers (more than 15 years' experience) and then used for interviewing a variety of small and medium-scale software companies in Ankara. Observations show a common reluctance and lack of interest in utilizing measurements/metrics, despite the fact that they are well known in the industry. A side product of this research is the finding that internationally recognized standards such as ISO and CMMI are pursued when they are part of project/job requirements; without such requirements, introducing those standards to the companies remains a long-term target for increasing quality.
Annotated bibliography of Software Engineering Laboratory literature
An annotated bibliography of technical papers, documents, and memorandums produced by or related to the Software Engineering Laboratory is given. More than 100 publications are summarized. These publications cover many areas of software engineering and range from research reports to software documentation. All materials have been grouped into eight general subject areas for easy reference: The Software Engineering Laboratory; The Software Engineering Laboratory: Software Development Documents; Software Tools; Software Models; Software Measurement; Technology Evaluations; Ada Technology; and Data Collection. Subject and author indexes further classify these documents by specific topic and individual author.
A research review of quality assessment for software
Measures were recommended to assess the quality of software submitted to the AdaNet program. The quality factors that are important to software reuse are explored and methods of evaluating those factors are discussed. Quality factors important to software reuse are: correctness, reliability, verifiability, understandability, modifiability, and certifiability. Certifiability is included because the documented characteristics of a software component, such as its efficiency, portability, and development history, constitute a class of factors that are important to some users, not important at all to others, and impossible for AdaNet to distinguish between a priori. The quality factors may be assessed in different ways. There are a few quantitative measures which have been shown to indicate software quality. However, it is believed that there exist many factors that indicate quality but have not been empirically validated due to their subjective nature. These subjective factors are characterized by the way in which they support the software engineering principles of abstraction, information hiding, modularity, localization, confirmability, uniformity, and completeness.
Lessons Learned and Next Steps in Energy Efficiency Measurement and Attribution: Energy Savings, Net to Gross, Non-Energy Benefits, and Persistence of Energy Efficiency Behavior
This white paper examines four topics addressing the evaluation, measurement, and attribution of direct and indirect effects to energy efficiency and behavioral programs: estimates of program savings (gross); derivation of net savings through free ridership / net-to-gross analyses; indirect non-energy benefits / impacts (e.g., comfort, convenience, emissions, jobs); and persistence of savings.
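As a rough, hypothetical illustration of the net-to-gross idea mentioned in this abstract (not a calculation from the white paper; all figures below are assumed), net savings are commonly derived by discounting gross program savings for free ridership and, where applicable, crediting spillover:

# Hypothetical net-to-gross (NTG) adjustment; all figures are assumed
# for illustration and are not taken from the white paper.
gross_savings_kwh = 1_000_000   # gross savings reported by the program
free_ridership = 0.20           # share of savings that would have occurred anyway
spillover = 0.05                # additional savings induced outside the program

ntg_ratio = 1.0 - free_ridership + spillover       # one common NTG formulation
net_savings_kwh = gross_savings_kwh * ntg_ratio

print(f"NTG ratio:   {ntg_ratio:.2f}")             # 0.85
print(f"Net savings: {net_savings_kwh:,.0f} kWh")  # 850,000 kWh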
Annotated bibliography of software engineering laboratory literature
An annotated bibliography of technical papers, documents, and memorandums produced by or related to the Software Engineering Laboratory is given. More than 100 publications are summarized. These publications cover many areas of software engineering and range from research reports to software documentation. This document has been updated and reorganized substantially since the original version (SEL-82-006, November 1982). All materials have been grouped into eight general subject areas for easy reference: the Software Engineering Laboratory; the Software Engineering Laboratory software development documents; software tools; software models; software measurement; technology evaluations; Ada technology; and data collection. Subject and author indexes further classify these documents by specific topic and individual author.
A systematic review of software development cost estimation studies
This paper aims to provide a basis for the improvement of software estimation research through a systematic review of previous work. The review identifies 304 software cost estimation papers in 76 journals and classifies the papers according to research topic, estimation approach, research approach, study context and data set. A web-based library of these cost estimation papers is provided to ease the identification of relevant estimation research results. The review results, combined with other knowledge, provide support for recommendations for future software cost estimation research, including: 1) increase the breadth of the search for relevant studies; 2) search manually for relevant papers within a carefully selected set of journals when completeness is essential; 3) conduct more studies on estimation methods commonly used by the software industry; and 4) increase the awareness of how properties of the data sets impact the results when evaluating estimation methods.
Trustworthy Experimentation Under Telemetry Loss
Failure to accurately measure the outcomes of an experiment can lead to bias and incorrect conclusions. Online controlled experiments (aka A/B tests) are increasingly being used to make decisions to improve websites as well as mobile and desktop applications. We argue that loss of telemetry data (during upload or post-processing) can skew the results of experiments, leading to loss of statistical power and inaccurate or erroneous conclusions. By systematically investigating the causes of telemetry loss, we argue that it is not practical to entirely eliminate it. Consequently, experimentation systems need to be robust to its effects. Furthermore, we note that it is nontrivial to measure the absolute level of telemetry loss in an experimentation system. In this paper, we take a top-down approach towards solving this problem. We motivate the impact of loss qualitatively using experiments in real applications deployed at scale, and formalize the problem by presenting a theoretical breakdown of the bias introduced by loss. Based on this foundation, we present a general framework for quantitatively evaluating the impact of telemetry loss, and present two solutions to measure the absolute levels of loss. This framework is used by well-known applications at Microsoft, with millions of users and billions of sessions. These general principles can be adopted by any application to improve the overall trustworthiness of experimentation and data-driven decision making. Comment: Proceedings of the 27th ACM International Conference on Information and Knowledge Management, October 2018.
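The bias mechanism this abstract describes can be illustrated with a small simulation (a minimal sketch, not the paper's framework; the metric distribution, sample sizes, and loss rates below are assumptions). If telemetry loss is differential between variants, for example because the treatment changes payload sizes or upload paths, the observed difference in means no longer estimates the true treatment effect:

import numpy as np

rng = np.random.default_rng(0)

# Assumed per-user engagement metric with a small true treatment effect.
n = 100_000
control = rng.normal(loc=10.0, scale=3.0, size=n)
treatment = rng.normal(loc=10.2, scale=3.0, size=n)   # true effect = +0.2

def observed(values, base_loss, extra_loss_for_heavy):
    # Drop each user's telemetry with some probability; heavy users
    # (above-median metric) are assumed to be more likely to lose data.
    heavy = values > np.median(values)
    loss_prob = np.where(heavy, base_loss + extra_loss_for_heavy, base_loss)
    kept = rng.random(values.size) > loss_prob
    return values[kept]

# Equal loss in control, extra loss for heavy users in treatment (differential loss).
obs_control = observed(control, base_loss=0.05, extra_loss_for_heavy=0.00)
obs_treatment = observed(treatment, base_loss=0.05, extra_loss_for_heavy=0.05)

true_delta = treatment.mean() - control.mean()
observed_delta = obs_treatment.mean() - obs_control.mean()
print(f"true effect:     {true_delta:+.3f}")
print(f"observed effect: {observed_delta:+.3f}   # biased downward by differential loss")

Besides shifting the point estimate, the dropped sessions also shrink the effective sample size, which is the loss of statistical power the abstract refers to.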