Technical Debt Prioritization: State of the Art. A Systematic Literature Review
Background. Software companies need to manage and refactor Technical Debt issues. Therefore, it is necessary to understand if and when refactoring Technical Debt should be prioritized with respect to developing features or fixing bugs. Objective. The goal of this study is to investigate the existing body of knowledge in software engineering to understand what Technical Debt prioritization approaches have been proposed in research and industry. Method. We conducted a Systematic Literature Review of 384 unique papers published until 2018, following a consolidated methodology applied in Software Engineering. We included 38 primary studies. Results. Different approaches have been proposed for Technical Debt prioritization, all having different goals and optimizing on different criteria. The proposed measures capture only a small part of the plethora of factors used to prioritize Technical Debt qualitatively in practice. We report an impact map of such factors. However, there is a lack of empirically validated tools. Conclusion. We observed that Technical Debt prioritization research is preliminary and that there is no consensus on what the important factors are or how to measure them. Consequently, we cannot consider current research conclusive, and in this paper we outline different directions for necessary future investigations
Policy Analysis for Natural Hazards: Some Cautionary Lessons From Environmental Policy Analysis
How should agencies and legislatures evaluate possible policies to mitigate the impacts of earthquakes, floods, hurricanes and other natural hazards? In particular, should governmental bodies adopt the sorts of policy-analytic and risk assessment techniques that are widely used in the area of environmental hazards (chemical toxins and radiation)? Environmental hazards policy analysis regularly employs proxy tests, in particular tests of technological feasibility, rather than focusing on a policy's impact on well-being. When human welfare does enter the analysis, particular aspects of well-being, such as health and safety, are often given priority over others. Individual risk tests and other features of environmental policy analysis sometimes make policy choice fairly insensitive to the size of the exposed population. Seemingly arbitrary numerical cutoffs, such as the one-in-one million incremental risk level, help structure policy evaluation. Risk assessment techniques are often deterministic rather than probabilistic, and in estimating point values often rely on conservative rather than central-tendency estimates. The Article argues that these sorts of features of environmental policy analysis may be justifiable, but only on institutional grounds-if they sufficiently reduce decision costs or bureaucratic error or shirking-and should not be reflexively adopted by natural hazards policymakers. Absent persuasive institutional justification, natural hazards policy analysis should be welfare-focused, multidimensional, and sensitive to population size, and natural hazards risk assessment techniques should provide information suitable for policy-analytic techniques of this sort
Technical Debt Prioritization: State of the Art. A Systematic Literature Review
Background. Software companies need to manage and refactor Technical Debt issues. Therefore, it is necessary to understand if and when refactoring of Technical Debt should be prioritized with respect to developing features or fixing bugs. Objective. The goal of this study is to investigate the existing body of knowledge in software engineering to understand what Technical Debt prioritization approaches have been proposed in research and industry. Method. We conducted a Systematic Literature Review of 557 unique papers published until 2019, following a consolidated methodology applied in software engineering. We included 44 primary studies. Results. Different approaches have been proposed for Technical Debt prioritization, all having different goals and proposing optimization regarding different criteria. The proposed measures capture only a small part of the plethora of factors used to prioritize Technical Debt qualitatively in practice. We present an impact map of such factors. However, there is a lack of empirically validated tools. Conclusion. We observed that Technical Debt prioritization research is preliminary and there is no consensus on what the important factors are and how to measure them. Consequently, we cannot consider current research conclusive. In this paper, we therefore outline different directions for necessary future investigations
A Generic Synthesis Algorithm for Well-Defined Parametric Design
This paper aims to improve the way synthesis tools can be built by formalizing: 1) the design artefact, 2) related knowledge and 3) an algorithm to generate solutions. This paper focuses on well-defined parametric engineering design, ranging from machine elements to industrial products. A design artefact is formalized in terms of parameters and topology elements. The knowledge is classified in three types: resolving rules to determine parameter values, constraining rules to restrict parameter values and expansion rules to add elements to the topology. A synthesis algorithm, based on an opportunistic design strategy, is described and tested for three design cases
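The rule-driven synthesis loop described in this abstract can be sketched roughly as follows. This is an illustrative sketch only: the data structures (a `params` dict plus a `topology` list) and the fixed-point application order are assumptions, not the paper's actual formalization, and the shaft-sizing rules are invented toy examples.

```python
# Illustrative sketch: a design is a dict of parameter values plus a list of
# topology elements. Three rule types (resolving, constraining, expansion) are
# applied repeatedly until no rule changes the design (a fixed point).
# All structures and rules here are assumptions for illustration.

def synthesize(design, resolving, constraining, expansion):
    """Apply rules to a fixed point; return the design, or None if infeasible."""
    changed = True
    while changed:
        changed = False
        for rule in resolving:        # resolving rules determine parameter values
            changed |= rule(design)
        for rule in constraining:     # constraining rules restrict values; may reject
            if rule(design) is False:
                return None
        for rule in expansion:        # expansion rules add topology elements
            changed |= rule(design)
    return design

# Toy rules: size a shaft, check a stress limit, add a bearing.
def resolve_diameter(d):
    if "diameter" not in d["params"]:
        d["params"]["diameter"] = 2 * d["params"]["torque"] ** 0.5  # assumed sizing rule
        return True
    return False

def check_stress(d):
    return d["params"]["diameter"] <= 50  # reject oversized designs

def expand_bearings(d):
    if "bearing" not in d["topology"]:
        d["topology"].append("bearing")
        return True
    return False

design = {"params": {"torque": 100.0}, "topology": ["shaft"]}
result = synthesize(design, [resolve_diameter], [check_stress], [expand_bearings])
```

The loop is "opportunistic" in the sense that any rule whose precondition holds may fire on each pass; termination relies on rules that stop firing once their contribution is in place.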
How Time-Fault Ratio helps in Test Case Prioritization for Regression Testing
Regression testing analyzes whether maintenance of the software has adversely affected its normal functioning. Regression testing is generally performed under strict time constraints. Due to a limited time budget, it is not possible to test the software with all available test cases. Thus, reordering the test cases on the basis of their effectiveness is always needed. A test prioritization technique, which prioritizes test cases on the basis of their Time-Fault Ratio (TFR), is proposed in this paper. The technique tends to maximize fault detection, as faults are exposed in ascending order of their detection times. The proposed technique may be used at any stage of software development
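A prioritization of this kind can be sketched in a few lines. Since the abstract does not define the metric precisely, this sketch assumes TFR is the number of faults a test case detects per unit of execution time, with test cases ordered by descending TFR; the test-suite data is invented for illustration.

```python
# Hypothetical sketch of Time-Fault Ratio (TFR) prioritization.
# Assumption: TFR = faults detected / execution time, and test cases are
# scheduled in descending TFR order so cheap, fault-revealing tests run first.

def prioritize_by_tfr(test_cases):
    """Sort test cases by descending Time-Fault Ratio.

    test_cases: list of (name, faults_detected, exec_time_seconds) tuples,
    where exec_time_seconds must be > 0.
    """
    return sorted(test_cases,
                  key=lambda tc: tc[1] / tc[2],  # faults per second
                  reverse=True)

# Invented example suite: t2 finds fewer faults than t3 but is far cheaper.
suite = [("t1", 3, 6.0), ("t2", 2, 1.0), ("t3", 5, 10.0)]
ordered = prioritize_by_tfr(suite)
print([name for name, _, _ in ordered])  # ['t2', 't1', 't3']
```

Because `sorted` is stable, test cases with equal TFR (here `t1` and `t3`, both 0.5 faults/s) keep their original relative order.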