Technical Debt Prioritization: State of the Art. A Systematic Literature Review
Background. Software companies need to manage and refactor Technical Debt
issues. Therefore, it is necessary to understand if and when refactoring
Technical Debt should be prioritized with respect to developing features or
fixing bugs. Objective. The goal of this study is to investigate the existing
body of knowledge in software engineering to understand what Technical Debt
prioritization approaches have been proposed in research and industry. Method.
We conducted a Systematic Literature Review of 384 unique papers published
until 2018, following a consolidated methodology applied in Software
Engineering. We included 38 primary studies. Results. Different approaches have
been proposed for Technical Debt prioritization, all having different goals and
optimizing on different criteria. The proposed measures capture only a small
part of the plethora of factors used to prioritize Technical Debt qualitatively
in practice. We report an impact map of such factors. However, there is a lack
of empirically validated tools. Conclusion. We observed that Technical
Debt prioritization research is preliminary and that there is no consensus on
which factors are important or how to measure them. Consequently, we cannot
consider current research conclusive, and in this paper we outline
directions for necessary future investigations.
Search algorithms for regression test case prioritization
Regression testing is an expensive, but important, process. Unfortunately, there may be insufficient resources to allow for the re-execution of all test cases during regression testing. In this situation, test case prioritisation techniques aim to improve the effectiveness of regression testing by ordering the test cases so that the most beneficial are executed first. Previous work on regression test case prioritisation has focused on Greedy Algorithms. However, it is known that these algorithms may produce sub-optimal results, because they may construct results that denote only local minima within the search space. By contrast, meta-heuristic and evolutionary search algorithms aim to avoid such problems. This paper presents results from an empirical study of the application of several greedy, meta-heuristic and evolutionary search algorithms to six programs, ranging from 374 to 11,148 lines of code, for three choices of fitness metric. The paper addresses the problems of choice of fitness metric, characterisation of landscape modality and determination of the most suitable search technique to apply. The empirical results replicate previous results concerning Greedy Algorithms. They shed light on the nature of the regression testing search space, indicating that it is multi-modal. The results also show that Genetic Algorithms perform well, although Greedy approaches are surprisingly effective, given the multi-modal nature of the landscape.
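To make the Greedy Algorithms discussed in this abstract concrete, the following is a minimal sketch of the "additional greedy" coverage-based heuristic commonly studied in this line of work: repeatedly pick the test that covers the most not-yet-covered statements. The function name, data layout, and reset-on-saturation behaviour are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of additional-greedy test case prioritisation.
# coverage: dict mapping test name -> set of covered statement ids.
def additional_greedy(coverage):
    remaining = dict(coverage)
    covered = set()   # statements covered so far by the chosen prefix
    order = []
    while remaining:
        # Pick the test contributing the most new coverage.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            # No test adds new coverage: reset and fall back to
            # total coverage, as additional-greedy variants often do.
            covered = set()
            best = max(remaining, key=lambda t: len(remaining[t]))
        order.append(best)
        covered |= remaining.pop(best)
    return order

tests = {
    "t1": {1, 2, 3},
    "t2": {3, 4},
    "t3": {5},
}
print(additional_greedy(tests))  # t1 first: it adds the most coverage
```

Because each step only looks one choice ahead, such an ordering can be a local optimum, which is exactly the motivation the abstract gives for trying meta-heuristic and evolutionary search instead.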
Policy Analysis for Natural Hazards: Some Cautionary Lessons From Environmental Policy Analysis
How should agencies and legislatures evaluate possible policies to mitigate the impacts of earthquakes, floods, hurricanes and other natural hazards? In particular, should governmental bodies adopt the sorts of policy-analytic and risk assessment techniques that are widely used in the area of environmental hazards (chemical toxins and radiation)? Environmental hazards policy analysis regularly employs proxy tests, in particular tests of technological feasibility, rather than focusing on a policy's impact on well-being. When human welfare does enter the analysis, particular aspects of well-being, such as health and safety, are often given priority over others. Individual risk tests and other features of environmental policy analysis sometimes make policy choice fairly insensitive to the size of the exposed population. Seemingly arbitrary numerical cutoffs, such as the one-in-one million incremental risk level, help structure policy evaluation. Risk assessment techniques are often deterministic rather than probabilistic, and in estimating point values often rely on conservative rather than central-tendency estimates. The Article argues that these sorts of features of environmental policy analysis may be justifiable, but only on institutional grounds-if they sufficiently reduce decision costs or bureaucratic error or shirking-and should not be reflexively adopted by natural hazards policymakers. Absent persuasive institutional justification, natural hazards policy analysis should be welfare-focused, multidimensional, and sensitive to population size, and natural hazards risk assessment techniques should provide information suitable for policy-analytic techniques of this sort.
Addressing the Crisis in Fundamental Physics
I present the case for fundamental physics experiments in space playing an
important role in addressing the current "dark energy'' crisis. If cosmological
observations continue to favor a value of the dark energy equation of state
parameter w=-1, with no change over cosmic time, then we will have difficulty
understanding this new fundamental physics. We will then face a very real risk
of stagnation unless we detect some other experimental anomaly. The advantages
of space-based experiments could prove invaluable in the search for a more
complete understanding of dark energy. This talk was delivered at the start of
the Fundamental Physics Research in Space Workshop in May 2006.
Comment: 11 pages. Opening talk presented at the 2006 Workshop on Fundamental Physics in Space. Submitted to Int'l Journal of Modern Physics.
Reinforcement Learning for Automatic Test Case Prioritization and Selection in Continuous Integration
Testing in Continuous Integration (CI) involves test case prioritization,
selection, and execution at each cycle. Selecting the most promising test cases
to detect bugs is hard if there are uncertainties on the impact of committed
code changes or, if traceability links between code and tests are not
available. This paper introduces Retecs, a new method for automatically
learning test case selection and prioritization in CI with the goal to minimize
the round-trip time between code commits and developer feedback on failed test
cases. The Retecs method uses reinforcement learning to select and prioritize
test cases according to their duration, previous last execution and failure
history. In a constantly changing environment, where new test cases are created
and obsolete test cases are deleted, the Retecs method learns to prioritize
error-prone test cases higher under guidance of a reward function and by
observing previous CI cycles. By applying Retecs on data extracted from three
industrial case studies, we show for the first time that reinforcement learning
enables fruitful automatic adaptive test case selection and prioritization in
CI and regression testing.
Comment: Spieker, H., Gotlieb, A., Marijan, D., & Mossige, M. (2017). Reinforcement Learning for Automatic Test Case Prioritization and Selection in Continuous Integration. In Proceedings of the 26th International Symposium on Software Testing and Analysis (ISSTA'17) (pp. 12-22). AC
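As a rough illustration of the features the Retecs method learns from (duration and failure history), the following sketch ranks a suite so that recently failing tests run first, with shorter tests breaking ties. This is not the actual Retecs reinforcement-learning agent; the names, weights, and data layout are illustrative assumptions.

```python
# Illustrative ranking by failure recency and duration (not Retecs itself).
def priority(test):
    # history[0] is the most recent execution; 1 = failed, 0 = passed.
    # More recent failures contribute more to the score.
    recency_weighted = sum(
        failed / (i + 1)
        for i, failed in enumerate(test["history"])
    )
    # Sort ascending: higher failure score first, shorter duration on ties.
    return (-recency_weighted, test["duration"])

suite = [
    {"name": "t_api",  "duration": 12.0, "history": [1, 0, 0]},  # failed last run
    {"name": "t_ui",   "duration": 45.0, "history": [0, 0, 1]},  # failed long ago
    {"name": "t_unit", "duration": 1.5,  "history": [0, 0, 0]},  # never failed
]
ranked = sorted(suite, key=priority)
print([t["name"] for t in ranked])  # → ['t_api', 't_ui', 't_unit']
```

In the paper's setting, a reward function over observed CI cycles would adapt such a scoring policy over time instead of fixing the weights by hand, which is what lets the method cope with new and deleted test cases.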
Enhancing Coastal Resilience: Perspectives on Valuing RI Coastal Lands
This paper discusses coastal resilience as an organizing framework for future policymaking, coastal planning, and insurance decisions, and explores the different perspectives of the value of ecosystems held by various stakeholders in Rhode Island's coastal communities. A grounded theory approach was used in an effort to abstract general insights from the substantive but isolated areas of coastal management and economics. Special attention is given to the perspectives of municipal decision makers, the National Flood Insurance Program, natural economists, and real estate developers. We have (1) conducted a statistical analysis of environmental spending of RI towns, (2) identified key models for ecosystem services valuation, (3) researched the major threats to coastal ecosystems, and (4) explored how the coastal resilience theme might shape the future of the coast. Elements of the study rely on the formulation and testing of hypotheses. However, the analysis was primarily a demonstration of the inter-disciplinary emergent thinking that this paper proposes will provide solutions for coastal communities' most pressing issues. The framing question is how social, personal, and environmental goals align when coastal resilience is enhanced, and how stakeholders can utilize these new decision-making tools to achieve increased communication and a more accurate understanding of the perceived value of ecosystem services.