Measuring Software Process: A Systematic Mapping Study
Context: Measurement is essential to reach predictable performance and high capability processes. It provides
support for better understanding, evaluation, management, and control of the development process
and project, as well as the resulting product. It also enables organizations to improve and predict their
processes' performance, which places them in a better position to make appropriate decisions. Objective:
This study aims to understand the measurement of the software development process, to identify studies,
create a classification scheme based on the identified studies, and then to map such studies into the scheme
to answer the research questions. Method: Systematic mapping is the selected research methodology for this
study. Results: A total of 462 studies are included and classified into four topics with respect to their focus
and into three groups based on the publishing date. Five abstractions and 64 attributes were identified,
and 25 methods/models and 17 contexts were distinguished. Conclusion: Capability and performance were the
most measured process attributes, while effort and performance were the most measured project attributes.
Goal Question Metric and Capability Maturity Model Integration were the main methods and models used
in the studies, whereas agile/lean development and small/medium-size enterprise were the most frequently
identified research contexts.
Funding: Ministerio de Economía y Competitividad grants TIN2013-46928-C3-3-R, TIN2016-76956-C3-2-R, and TIN2015-71938-RED
Evolution of statistical analysis in empirical software engineering research: Current state and steps forward
Software engineering research is evolving and papers are increasingly based
on empirical data from a multitude of sources, using statistical tests to
determine if and to what degree empirical evidence supports their hypotheses.
To investigate the practices and trends of statistical analysis in empirical
software engineering (ESE), this paper presents a review of a large pool of
papers from top-ranked software engineering journals. First, we manually
reviewed 161 papers; in the second phase of our method, we conducted a more
extensive semi-automatic classification of 5,196 papers spanning the years
2001--2015. Results from both review steps were used to: i) identify and
analyze the predominant practices in ESE (e.g., using t-test or ANOVA), as well
as relevant trends in usage of specific statistical methods (e.g.,
nonparametric tests and effect size measures) and, ii) develop a conceptual
model for a statistical analysis workflow with suggestions on how to apply
different statistical methods as well as guidelines to avoid pitfalls. Lastly,
we confirm existing claims that current ESE practices lack a standard to report
practical significance of results. We illustrate how practical significance can
be discussed in terms of both the statistical analysis and in the
practitioner's context.
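The review's call to report practical significance alongside test results can be illustrated with a short sketch. The function and data below are illustrative, not from the paper: Cliff's delta is one common nonparametric effect-size measure paired with rank-based tests (such as Mann-Whitney U) in empirical software engineering studies.

```python
# Illustrative sketch: Cliff's delta, a nonparametric effect-size measure
# that complements a rank-based hypothesis test. Sample data are invented.

def cliffs_delta(a, b):
    """Return Cliff's delta in [-1, 1]: +1 if every value in `a`
    exceeds every value in `b`, 0 if the groups fully overlap."""
    gt = sum(x > y for x in a for y in b)   # pairs where a wins
    lt = sum(x < y for x in a for y in b)   # pairs where b wins
    return (gt - lt) / (len(a) * len(b))

baseline = [12, 15, 14, 10, 13, 16, 11]    # e.g., task times (hours)
treatment = [9, 8, 11, 7, 10, 9, 8]

print(f"Cliff's delta = {cliffs_delta(baseline, treatment):.2f}")
```

A large delta here would signal a practically meaningful difference even when a p-value alone says little about magnitude, which is the reporting gap the review highlights.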
Technical Debt Prioritization: State of the Art. A Systematic Literature Review
Background. Software companies need to manage and refactor Technical Debt
issues. Therefore, it is necessary to understand if and when refactoring
Technical Debt should be prioritized with respect to developing features or
fixing bugs. Objective. The goal of this study is to investigate the existing
body of knowledge in software engineering to understand what Technical Debt
prioritization approaches have been proposed in research and industry. Method.
We conducted a Systematic Literature Review among 384 unique papers published
until 2018, following a consolidated methodology applied in Software
Engineering. We included 38 primary studies. Results. Different approaches have
been proposed for Technical Debt prioritization, all having different goals and
optimizing on different criteria. The proposed measures capture only a small
part of the plethora of factors used to prioritize Technical Debt qualitatively
in practice. We report an impact map of such factors. However, there is a lack
of empirically validated tools. Conclusion. We observed that Technical
Debt prioritization research is preliminary and there is no consensus on which
factors are important and how to measure them. Consequently, we cannot
consider current research conclusive, and in this paper we outline different
directions for necessary future investigations.
Research Findings on Empirical Evaluation of Requirements Specifications Approaches
Numerous software requirements specification (SRS) approaches have been proposed in software engineering. However, there has been little empirical evaluation of the use of these approaches in specific contexts. This paper describes the results of a mapping study, a key instrument of the evidence-based paradigm, in an effort to understand what aspects of SRS are evaluated, in which context, and by using which research method. On the basis of 46 identified and categorized primary studies, we found that understandability is the most commonly evaluated aspect of SRS, experiments are the most commonly used research method, and the academic environment is where most empirical evaluation takes place.
A plea for minimally biased naturalistic philosophy
Naturalistic philosophers rely on literature search and review in a number of ways and for different purposes. Yet this article shows how processes of literature search and review are likely to be affected by widespread and systematic biases. A solution to this problem is offered here. Whilst the tradition of systematic reviews of literature from scientific disciplines has been neglected in philosophy, systematic reviews are important tools that minimize bias in literature search and review and allow for greater reproducibility and transparency. If naturalistic philosophers wish to reduce bias in their research, they should supplement their traditional tools for literature search and review by including systematic methodologies.
Characterizing Service Level Objectives for Cloud Services: Motivation of Short-Term Cache Allocation Performance Modeling
Service level objectives (SLOs) stipulate performance goals for cloud applications, microservices, and infrastructure. SLOs are widely used, in part, because system managers can tailor goals to their products, companies, and workloads. Systems research intended to support strong SLOs should target realistic performance goals used by system managers in the field. Evaluations conducted with uncommon SLO goals may not translate to real systems. Some textbooks discuss the structure of SLOs but (1) they only sketch SLO goals and (2) they use outdated examples. We mined real SLOs published on the web, extracted their goals and characterized them. Many web documents discuss SLOs loosely but few provide details and reflect real settings. Systematic literature review (SLR) prunes results and reduces bias by (1) modeling expected SLO structure and (2) detecting and removing outliers. We collected 75 SLOs where response time, query percentile, and reporting period were specified. We used these SLOs to confirm and refute common perceptions. For example, we found few SLOs with response-time guarantees below 10 ms for 90% or more queries. This reality bolsters perceptions that single-digit SLOs face fundamental research challenges. This work was funded by NSF Grants 1749501 and 1350941.
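The SLO structure the study mined (response-time goal, query percentile, reporting period) can be sketched as a small record type. This is a minimal sketch under assumed field names, not the paper's model; the latency figures are invented.

```python
# Illustrative sketch (not from the paper): an SLO record with the three
# fields the study extracted, plus a check against observed latencies.
from dataclasses import dataclass

@dataclass
class SLO:
    response_time_ms: float   # latency goal
    percentile: float         # fraction of queries that must meet the goal
    period_days: int          # reporting period

    def met_by(self, latencies_ms):
        """True if at least `percentile` of observed latencies meet the goal."""
        within = sum(t <= self.response_time_ms for t in latencies_ms)
        return within / len(latencies_ms) >= self.percentile

slo = SLO(response_time_ms=100.0, percentile=0.90, period_days=30)
observed = [40, 55, 80, 95, 120, 60, 70, 85, 90, 99]
print(slo.met_by(observed))  # 9 of 10 queries within 100 ms -> True
```

Under this framing, the rarity of SLOs like `SLO(response_time_ms=10.0, percentile=0.90, ...)` in the mined data is what supports the paper's point about single-digit-millisecond goals.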
THERACOM: a systematic review of the evidence base for interventions to improve Therapeutic Communications between black and minority ethnic populations and staff in specialist mental health services.
PMCID: PMC3599664. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. BACKGROUND: Black and Minority Ethnic (BME) groups in receipt of specialist mental health care have reported higher rates of detention under the Mental Health Act, less use of psychological therapies, and more dissatisfaction. Although many explanations have been put forward to explain this, a failure of therapeutic communications may explain poorer satisfaction, disengagement from services and ethnic variations in access to less coercive care. Interventions that improve therapeutic communications may offer new approaches to tackle ethnic inequalities in experiences and outcomes. METHODS: The THERACOM project is an HTA-funded evidence synthesis review of interventions to improve therapeutic communications between black and minority ethnic patients in contact with specialist mental health services and staff providing those services. This article sets out the protocol methods for a necessarily broad review topic, including appropriate search strategies, dilemmas for classifying different types of therapeutic communications and expectations of the types of interventions to improve them. The review methods will accommodate unexpected types of study and interventions. The findings will be reported in 2013, including a synthesis of the quantitative and grey literature. DISCUSSION: A particular methodological challenge is to identify and rate the quality of many different study types, for example, randomised controlled trials, observational quantitative studies, qualitative studies and case studies, which comprise the full range of hierarchies of evidence. We discuss the preliminary methodological challenges and some solutions. (PROSPERO registration number: CRD42011001661)
Patient-reported outcome measures for chronic obstructive pulmonary disease: the exclusion of people with low literacy skills and learning disabilities
<p>Background: Patient-reported outcome measures (PROMs)
are intended to reflect outcomes relevant to patients. They are
increasingly used for healthcare quality improvement. To
produce valid measures, patients should be involved in the
development process but it is unclear whether this usually
includes people with low literacy skills or learning disabilities.
This potential exclusion raises concerns about whether these
groups will be able to use these measures and participate in
quality improvement practices.</p>
<p>Methods: Taking PROMs for chronic obstructive pulmonary disease (COPD) as an exemplar condition, our review
determined the inclusion of people with low literacy skills and
learning disabilities in research developing, validating, and
using 12 PROMs for COPD patients. The studies included in
our review were based on those identified in two existing
systematic reviews and our update of this search.</p>
<p>Results: People with low literacy skills and/or learning
disabilities were excluded from the development of
PROMs in two ways: explicitly through the participant
eligibility criteria and, more commonly, implicitly through
recruitment or administration methods that would require
high-level reading and cognitive abilities. None of the
studies mentioned efforts to include people with low literacy skills or learning disabilities.</p>
<p>Conclusion: Our findings suggest that people with low
literacy skills or learning disabilities are left out of the
development of PROMs. Given that implicit exclusion was
most common, researchers and those who administer
PROMs may not even be aware of this problem. Without
effort to improve inclusion, unequal quality improvement
practices may become embedded in the health system.</p>