
    Experimenting with genetic algorithms to resolve the next release problem

    This thesis determines which genetic algorithm performs best at solving the Next Release Problem. It contains a reformulation of the problem, its implementation with the jMetal library, and the final experiments.
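    Below is a minimal, dependency-free sketch of the kind of genetic algorithm such a study benchmarks, applied to a single-objective Next Release Problem instance. The requirement costs, profits, budget, and operator choices are all hypothetical assumptions; the thesis's actual jMetal configuration may differ.

```python
# A minimal genetic-algorithm sketch for the single-objective Next Release
# Problem. All data (costs, profits, budget) are hypothetical.
import random

COSTS   = [10, 4, 7, 3, 8]   # hypothetical implementation cost per requirement
PROFITS = [12, 5, 9, 2, 11]  # hypothetical customer value per requirement
BUDGET  = 20

def fitness(bits):
    cost = sum(c for c, b in zip(COSTS, bits) if b)
    profit = sum(p for p, b in zip(PROFITS, bits) if b)
    return profit if cost <= BUDGET else 0  # infeasible solutions score zero

def evolve(pop_size=40, generations=100, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in COSTS] for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection, one-point crossover, bit-flip mutation.
        nxt = []
        while len(nxt) < pop_size:
            a, b = (max(random.sample(pop, 2), key=fitness) for _ in range(2))
            cut = random.randrange(1, len(COSTS))
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```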

    Evolutionary Approaches for Multi-Objective Next Release Problem

    In the software industry, a common problem companies face is deciding which requirements should be implemented in the next release of the software. This paper addresses the multi-objective next release problem using search-based methods, namely multi-objective evolutionary algorithms, through empirical studies. To this end, a requirement-dependency-based multi-objective next release model (MONRP/RD) is first formulated. The two objectives of interest are customer satisfaction and requirement cost. A popular multi-objective evolutionary algorithm (MOEA), NSGA-II, is applied to provide feasible solutions that balance the two objectives. The scalability of the formulated MONRP/RD and the influence of the requirement dependencies are also investigated through simulations. The paper further proposes an improved version of multi-objective invasive weed optimization and compares it with various state-of-the-art multi-objective approaches on both synthetic and real-world data sets to find the most suitable algorithm for the problem.
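    The following sketch illustrates the core of a dependency-aware bi-objective evaluation in the spirit of MONRP/RD: a candidate selection is repaired to satisfy "requires" dependencies and then scored on satisfaction and cost, with a Pareto-dominance test for comparing solutions. The data and the closure-based repair rule are illustrative assumptions, not the paper's exact formulation.

```python
# Bi-objective evaluation of a requirement selection under "requires"
# dependencies. Data and the repair rule are hypothetical.
COST = {1: 10, 2: 4, 3: 7, 4: 3}
SATISFACTION = {1: 12, 2: 5, 3: 9, 4: 2}
REQUIRES = {3: {1}, 4: {1, 2}}  # requirement -> prerequisites

def repair(selected):
    # Add missing prerequisites until the selection is dependency-closed.
    closed = set(selected)
    changed = True
    while changed:
        changed = False
        for r in list(closed):
            missing = REQUIRES.get(r, set()) - closed
            if missing:
                closed |= missing
                changed = True
    return closed

def objectives(selected):
    s = repair(selected)
    return (sum(SATISFACTION[r] for r in s), sum(COST[r] for r in s))

def dominates(a, b):
    # Maximize satisfaction (index 0), minimize cost (index 1).
    return a[0] >= b[0] and a[1] <= b[1] and a != b

print(objectives({3}))               # pulls in requirement 1 via the dependency
print(dominates((14, 10), (9, 17)))  # True: better on both objectives
```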

    Effect of Introduction of Fault and Imperfect Debugging on Release Time

    One of the most important decisions in managing the testing phase of the software development life cycle is determining when to stop testing and release the software to the market. Most testing processes are imperfect ones. In this paper, we first discuss an optimal release time problem for an imperfect fault-debugging model due to Kapur et al., considering the effects of perfect and imperfect debugging separately on the total expected software cost. Next, we propose a software reliability growth model (SRGM) incorporating the effects of imperfect fault debugging and error generation. The proposed model is validated on a data set cited in the literature, and a release time problem is formulated that minimizes the expected cost subject to a minimum reliability level to be achieved by the release time. A solution method is discussed for this class of problem. A numerical illustration is given for both types of release problem, and finally a sensitivity analysis is performed.
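    As a worked illustration of this class of release-time problem, the sketch below minimizes expected cost subject to a reliability constraint, using a basic exponential (Goel-Okumoto-type) SRGM rather than Kapur et al.'s imperfect-debugging model; all cost and fault parameters are hypothetical.

```python
# Optimal release time for an exponential NHPP software reliability model:
# minimize expected cost subject to R(x | T) >= 0.9. Parameters are illustrative.
import numpy as np

a, b = 100.0, 0.05            # expected total faults, fault detection rate
c1, c2, c3 = 10.0, 50.0, 2.0  # cost per fault pre-release, post-release, per unit test time

def m(t):
    return a * (1 - np.exp(-b * t))  # mean number of faults detected by time t

def expected_cost(t):
    return c1 * m(t) + c2 * (a - m(t)) + c3 * t

def reliability(x, t):
    # Probability of no failure in (t, t + x] for an NHPP model.
    return np.exp(-(m(t + x) - m(t)))

ts = np.linspace(1, 400, 4000)
feasible = ts[reliability(10.0, ts) >= 0.9]   # require R(10 | T) >= 0.9
T = feasible[np.argmin(expected_cost(feasible))]
print(f"release at T = {T:.1f}, cost = {expected_cost(T):.1f}, "
      f"R = {reliability(10.0, T):.3f}")
```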

    An Integer Linear Programming approach to the single and bi-objective Next Release Problem

    Context: The Next Release Problem involves determining the set of requirements to implement in the next release of a software project. When the problem was first formulated in 2001, Integer Linear Programming, an exact method, was found to be impractical because of large execution times. Since then, the problem has mainly been addressed with metaheuristic techniques. Objective: In this paper, we investigate whether the single-objective and bi-objective Next Release Problem can be solved exactly, and how to better approximate the results when exact resolution is costly. Methods: We revisit Integer Linear Programming for the single-objective version of the problem. In addition, we integrate it within the epsilon-constraint method to address the bi-objective problem. We also investigate how the Pareto front of the bi-objective problem can be approximated by an anytime, deterministic Integer Linear Programming-based algorithm when results are required within strict runtime constraints. Comparisons are carried out against NSGA-II, and experiments are performed on a combination of synthetic and real-world datasets. Findings: We show that a modern Integer Linear Programming solver is now a viable method for this problem. Large single-objective instances and small bi-objective instances can be solved exactly very quickly. On large bi-objective instances, execution times can be significant when calculating the complete Pareto front; however, good approximations can be found effectively. Conclusion: This study suggests that (1) approximation algorithms can be discarded in favor of the exact method for single-objective instances and small bi-objective instances, (2) the Integer Linear Programming-based approximation algorithm outperforms the NSGA-II genetic approach on large bi-objective instances, and (3) the run times of both methods are low enough for use in real-world situations.
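    A minimal sketch of this exact approach follows: the single-objective NRP is modeled as an ILP and solved directly, and a sweep over a cost bound (the epsilon-constraint method) enumerates bi-objective trade-off points. It uses the open-source PuLP modeler with its bundled CBC solver as a stand-in for whatever solver the paper used; the instance data are hypothetical.

```python
# Exact NRP via Integer Linear Programming, plus an epsilon-constraint sweep
# over the cost bound to trace the bi-objective (profit, cost) trade-off.
import pulp

COSTS = [10, 4, 7, 3, 8]     # hypothetical requirement costs
PROFITS = [12, 5, 9, 2, 11]  # hypothetical requirement values

def solve_nrp(cost_bound):
    prob = pulp.LpProblem("NRP", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(COSTS))]
    prob += pulp.lpSum(p * xi for p, xi in zip(PROFITS, x))           # maximize value
    prob += pulp.lpSum(c * xi for c, xi in zip(COSTS, x)) <= cost_bound
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    chosen = [i for i, xi in enumerate(x) if xi.value() == 1]
    return chosen, sum(PROFITS[i] for i in chosen), sum(COSTS[i] for i in chosen)

# Epsilon-constraint sweep: keep the cheapest cost observed for each profit
# level; with integer costs, these are the nondominated points.
front = {}
for eps in range(0, sum(COSTS) + 1):
    _, profit, cost = solve_nrp(eps)
    front[profit] = min(front.get(profit, cost), cost)
print(sorted(front.items()))
```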

    Revisiting the Economics of Privacy: Population Statistics and Confidentiality Protection as Public Goods

    This paper has been replaced by http://digitalcommons.ilr.cornell.edu/ldi/37. We consider the problem of the public release of statistical information about a population, explicitly accounting for the public-good properties of both data accuracy and privacy loss. We first consider the implications of adding the public-good component to recently published models of private data publication under differential privacy guarantees using a Vickrey-Clarke-Groves mechanism and a Lindahl mechanism. We show that data quality will be inefficiently under-supplied. Next, we develop a standard social planner's problem using the technology set implied by (ε, δ)-differential privacy with (α, β)-accuracy for the Private Multiplicative Weights query release mechanism to study the properties of optimal provision of data accuracy and privacy loss when both are public goods. Using the production possibilities frontier implied by this technology, explicitly parameterized interdependent preferences, and the social welfare function, we characterize the solution to the social planner's problem. Our results directly quantify the optimal choice of data accuracy and privacy loss as functions of the technology and preference parameters. Some of these properties can be quantified using population statistics on marginal preferences and on correlations between income, data-accuracy preferences, and privacy-loss preferences that are available from survey data. Our results show that government data custodians should publish more accurate statistics with weaker privacy guarantees than would occur with purely private data publishing. Our statistical results using the General Social Survey and the Cornell National Social Survey indicate that the welfare losses from under-providing data accuracy while over-providing privacy protection can be substantial.
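    As a stylized illustration of the accuracy/privacy-loss frontier the paper studies, the sketch below uses the basic Laplace mechanism (noise of scale 1/ε for a counting query) rather than the Private Multiplicative Weights mechanism analyzed in the paper; the released statistic and the ε grid are hypothetical.

```python
# Accuracy vs. privacy loss for a differentially private counting query:
# larger epsilon (weaker privacy) means less Laplace noise and higher accuracy.
import numpy as np

rng = np.random.default_rng(0)
true_count = 1200  # hypothetical population statistic

for eps in [0.1, 0.5, 1.0, 5.0]:
    noisy = true_count + rng.laplace(scale=1.0 / eps, size=10_000)
    rmse = np.sqrt(np.mean((noisy - true_count) ** 2))
    print(f"epsilon = {eps:4.1f}  ->  RMSE of released count ~= {rmse:6.2f}")
# The planner's problem chooses a point on this accuracy/privacy frontier
# that maximizes social welfare.
```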

    Exact Scalable Sensitivity Analysis for the Next Release Problem

    The nature of the requirements analysis problem, based as it is on uncertain and often inaccurate estimates of costs and effort, makes sensitivity analysis important. Sensitivity analysis allows the decision maker to identify those requirements and budgets that are particularly sensitive to misestimation. However, finding scalable sensitivity analysis techniques is not easy because the underlying optimization problem is NP-hard. This article introduces an approach to sensitivity analysis based on exact optimization, implemented as a tool, OATSAC, which allowed us to experimentally evaluate the scalability and applicability of Requirements Sensitivity Analysis (RSA). Our results show that OATSAC scales sufficiently well for practical applications. Using a real-world case study, we also show how sensitivity analysis can yield insights into difficult and otherwise obscure interactions between budgets, requirement costs, and estimate inaccuracies.
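    The sketch below shows one simple form such exact sensitivity analysis can take: perturb each requirement's cost estimate one at a time, re-solve the underlying ILP to optimality, and flag requirements whose selection changes. The data, the ±20% perturbation, and the one-at-a-time scheme are assumptions for illustration; OATSAC's actual procedure may differ.

```python
# One-at-a-time cost-sensitivity analysis for the NRP via exact re-solving.
# Requirements whose optimal selection flips under misestimation are flagged.
import pulp

COSTS = [10, 4, 7, 3, 8]     # hypothetical cost estimates
PROFITS = [12, 5, 9, 2, 11]  # hypothetical requirement values
BUDGET = 20

def optimal_set(costs):
    prob = pulp.LpProblem("NRP", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(costs))]
    prob += pulp.lpSum(p * xi for p, xi in zip(PROFITS, x))
    prob += pulp.lpSum(c * xi for c, xi in zip(costs, x)) <= BUDGET
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return frozenset(i for i, xi in enumerate(x) if xi.value() == 1)

baseline = optimal_set(COSTS)
for i in range(len(COSTS)):
    for factor in (0.8, 1.2):            # misestimation of +/- 20%
        perturbed = list(COSTS)
        perturbed[i] = COSTS[i] * factor
        changed = optimal_set(perturbed) ^ baseline
        if changed:
            print(f"requirement {i} cost x{factor}: "
                  f"selection changes for {sorted(changed)}")
```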

    An innovative framework for real-time monitoring of pollutant point sources in river networks

    Simultaneous identification of the location and release history of pollutant sources in river networks is an ill-posed and complicated problem, particularly in the case of multiple sources with time-varying release patterns. This study presents an innovative method for solving this problem using minimal observational data. First, a procedure is proposed in which the number of pollutant sources and the reaches suspected of containing them are determined. This is done by defining two types of monitoring stations with an adaptive arrangement, together with real-time data collection and reliable flow and transport mathematical models. In the next step, the sources' locations and their release histories are identified by solving the inverse source problem with a geostatistical approach. Different scenarios are discussed for different numbers, release histories, and locations of pollutant sources in the river network. The results indicate the capability of the proposed method to identify the characteristics of the sources in complicated cases. Hence, it can be used effectively for the comprehensive monitoring of river networks for different purposes.
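    The sketch below shows a simplified linear version of this inverse source problem: downstream sensor readings are modeled as the convolution of an unknown release history with a one-dimensional advection-dispersion response, and the history is recovered by Tikhonov-regularized least squares as a stand-in for the paper's geostatistical approach. Geometry, transport parameters, and noise levels are all hypothetical.

```python
# Release-history recovery from a downstream sensor via linear inversion of a
# 1-D advection-dispersion model. All parameters are illustrative.
import numpy as np

L, u, D = 500.0, 0.4, 5.0   # sensor distance (m), velocity (m/s), dispersion (m^2/s)
dt, n = 60.0, 120           # time step (s), number of steps

def response(t):
    # Unit-mass solution of the 1-D advection-dispersion equation at distance L.
    t = np.maximum(t, 1e-9)
    return np.exp(-(L - u * t) ** 2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

# Transfer matrix: column j is the sensor signal from a pulse released at step j.
t = np.arange(1, n + 1) * dt
H = np.array([[response(ti - tj) if ti > tj else 0.0 for tj in t] for ti in t]) * dt

true_release = np.zeros(n)
true_release[10:20] = 2.0   # hypothetical time-varying release
obs = H @ true_release + np.random.default_rng(1).normal(0, 1e-4, n)

lam = 1e-3                  # Tikhonov regularization weight
recovered = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ obs)
print(np.round(recovered[8:22], 2))   # should approximate the true pulse
```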