13 research outputs found

    An Integer Linear Programming approach to the single and bi-objective Next Release Problem

    Context: The Next Release Problem involves determining the set of requirements to implement in the next release of a software project. When the problem was first formulated in 2001, Integer Linear Programming, an exact method, was found to be impractical because of large execution times. Since then, the problem has mainly been addressed with metaheuristic techniques.
    Objective: In this paper, we investigate whether the single-objective and bi-objective Next Release Problem can be solved exactly, and how to best approximate the results when exact resolution is too costly.
    Methods: We revisit Integer Linear Programming for the single-objective version of the problem. In addition, we integrate it within the epsilon-constraint method to address the bi-objective problem. We also investigate how the Pareto front of the bi-objective problem can be approximated by an anytime deterministic Integer Linear Programming-based algorithm when results are required within strict runtime constraints. Comparisons are carried out against NSGA-II. Experiments are performed on a combination of synthetic and real-world datasets.
    Findings: We show that a modern Integer Linear Programming solver is now a viable method for this problem. Large single-objective instances and small bi-objective instances can be solved exactly very quickly. On large bi-objective instances, execution times can be significant when calculating the complete Pareto front; however, good approximations can be found effectively.
    Conclusion: This study suggests that (1) approximation algorithms can be discarded in favor of the exact method for single-objective instances and small bi-objective instances, (2) the Integer Linear Programming-based approximation algorithm outperforms the NSGA-II genetic approach on large bi-objective instances, and (3) the run times of both methods are low enough for use in real-world situations.
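
    To make the approach concrete, the following is a minimal sketch of how the epsilon-constraint method can wrap a single-objective ILP for the Next Release Problem. It is an illustration of the general technique, not the authors' implementation; the requirement values, costs, and precedence pairs are invented, and the model is built with the PuLP library and its bundled CBC solver.

```python
# Sketch: epsilon-constraint method for a toy bi-objective Next Release Problem.
# Assumptions: invented data; PuLP + CBC as the ILP solver.
import pulp

values = [10, 6, 8, 4, 7]          # stakeholder value of each requirement
costs  = [5, 3, 6, 2, 4]           # development cost of each requirement
precedes = [(0, 2), (1, 3)]        # (i, j): requirement j needs requirement i

def solve_for_budget(budget):
    """Maximize value subject to a cost bound (the epsilon constraint)."""
    prob = pulp.LpProblem("nrp_eps", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(values))]
    prob += pulp.lpSum(v * xi for v, xi in zip(values, x))           # objective: total value
    prob += pulp.lpSum(c * xi for c, xi in zip(costs, x)) <= budget  # epsilon constraint on cost
    for i, j in precedes:                                            # dependency constraints
        prob += x[j] <= x[i]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    chosen = [i for i, xi in enumerate(x) if xi.value() == 1]
    return chosen, sum(values[i] for i in chosen), sum(costs[i] for i in chosen)

# Sweep the cost bound to trace (an approximation of) the Pareto front.
front = {}
for budget in range(0, sum(costs) + 1):
    chosen, value, cost = solve_for_budget(budget)
    front[(value, cost)] = chosen
for (value, cost), chosen in sorted(front.items()):
    print(f"value={value:3d} cost={cost:3d} requirements={chosen}")
```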

    Searching the Efficient Frontier for the Coherent Covering Location Problem

    In this article, we seek to approximate the efficient frontier of the bi-objective Coherent Covering Location Problem (CCLP), a location problem with coherent coverage across two levels of hierarchy. We present the mathematical formulation of the model used. Supported and unsupported efficient solutions are obtained by solving the bi-objective combinatorial problem with the weights method using a Lagrangean heuristic. The results are then validated through Data Envelopment Analysis (DEA) with the Global Efficiency Measurement (GEM) index.
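
    As a point of reference, here is a minimal sketch of the weights method mentioned above: the two objectives are combined into a single weighted objective and the scalarized problem is solved for a sweep of weights, which by itself yields only supported efficient solutions. The candidate sites, coverage numbers, and brute-force enumeration are invented stand-ins; the paper's actual covering-location model and its Lagrangean heuristic are not reproduced here.

```python
# Sketch: weights method for a toy bi-objective facility selection problem.
# Assumptions: invented data; exhaustive enumeration stands in for the real model.
from itertools import combinations

# Hypothetical candidate sites with (level-1 coverage, level-2 coverage).
sites = {"A": (30, 5), "B": (20, 15), "C": (10, 25), "D": (25, 10)}
p = 2  # number of facilities to open

def objectives(selection):
    f1 = sum(sites[s][0] for s in selection)  # coverage at hierarchy level 1
    f2 = sum(sites[s][1] for s in selection)  # coverage at hierarchy level 2
    return f1, f2

supported = {}
for k in range(11):                           # sweep the weight lambda from 0 to 1
    lam = k / 10
    best = max(combinations(sites, p),
               key=lambda sel: lam * objectives(sel)[0] + (1 - lam) * objectives(sel)[1])
    supported[best] = objectives(best)

for sel, (f1, f2) in supported.items():
    print(f"open {sel}: level-1 coverage={f1}, level-2 coverage={f2}")
```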

    Optimal Emergency Response Shelter Placement based on Population Vulnerabilities

    Hurricane Florence was a category 4 storm that caused an estimated $24 billion in damages and the loss of 53 lives. During and immediately following Florence, there were 235 shelters operating in and around the North Carolina (NC) area. These were used as temporary housing for storm victims and by emergency responders to distribute relief supplies and provide medical services. Emergency officials consider several factors when deciding where to open shelters, including, for example, the proximity of victims and their levels of medical need. Access disparities, or factors creating barriers that limit entry to shelters, put certain populations and regions at higher risk of not receiving the necessary support during a response effort. Despite the recognized need for universal accessibility, the explicit consideration of populations at increased risk, such as the elderly and those with Access and Functional Needs (AFN), is missing from the disaster response shelter location literature. Methods to maximize the potential spatial accessibility of shelter locations, inclusive of vulnerability factors, are demonstrated for a case study based on Hurricane Florence. The recommended shelter locations are compared to the locations of actual shelters used during Florence, as well as to other shelter placement methods, on the basis of accessibility scores.

    Efficient anytime algorithms to solve the bi-objective Next Release Problem

    The Next Release Problem consists of selecting a subset of requirements to develop in the next release of a software product. The selection should maximize the satisfaction of the stakeholders while minimizing the development cost and fulfilling the constraints among the requirements. Recent works have solved the problem using exact methods based on Integer Linear Programming. In practice, there is no need to compute all the efficient solutions of the problem; a well-spread set in the objective space is more convenient for the decision maker. The exact methods used in the past to find the complete Pareto front explore the objective space in lexicographic order or use a weighted sum of the objectives to solve a single-objective problem, finding only supported solutions. In this work, we propose five new methods that maintain a well-spread set of solutions at any time during the search, so that the decision maker can stop the algorithm when a large enough set of solutions has been found. The methods are called anytime because of this feature. They find both supported and non-supported solutions, and can compute the whole Pareto front if given enough time.
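
    The sketch below shows one plausible way to obtain such anytime behaviour, not necessarily any of the five methods proposed in the paper: keep the nondominated points found so far, repeatedly pick the widest gap on the cost axis, and solve an epsilon-constraint ILP in the middle of that gap, so that stopping after any iteration leaves a well-spread set. The toy data and the use of PuLP with the CBC solver are assumptions for illustration.

```python
# Sketch: anytime gap-splitting refinement of a bi-objective NRP front.
# Assumptions: invented data; PuLP + CBC; gap splitting is an illustrative strategy.
import pulp

values = [10, 6, 8, 4, 7]
costs  = [5, 3, 6, 2, 4]

def max_value_within(budget):
    """Epsilon-constraint subproblem: maximize value for a given cost bound."""
    prob = pulp.LpProblem("anytime_nrp", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(values))]
    prob += pulp.lpSum(v * xi for v, xi in zip(values, x))
    prob += pulp.lpSum(c * xi for c, xi in zip(costs, x)) <= budget
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    value = sum(v for v, xi in zip(values, x) if xi.value() == 1)
    cost  = sum(c for c, xi in zip(costs, x) if xi.value() == 1)
    return value, cost

# Start from the two extreme points, then refine the widest gap first.
points = {max_value_within(0), max_value_within(sum(costs))}
for _ in range(4):                                   # every iteration can be the last: anytime
    ordered = sorted(points, key=lambda p: p[1])     # sort by cost
    gaps = [(b[1] - a[1], a, b) for a, b in zip(ordered, ordered[1:])]
    width, a, b = max(gaps)
    if width <= 1:
        break
    points.add(max_value_within((a[1] + b[1]) // 2))  # fill the middle of the widest gap
print(sorted(points, key=lambda p: p[1]))
```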

    Dependency-Aware Software Requirements Selection using Fuzzy Graphs and Integer Programming

    Software requirements selection aims to find an optimal subset of the requirements with the highest value while respecting the project constraints. However, the value of a requirement may depend on the presence or absence of other requirements in the optimal subset. Such value dependencies are imprecise and hard to capture. In this paper, we propose a method based on integer programming and fuzzy graphs to account for value dependencies and their imprecision in software requirements selection. The proposed method, referred to as Dependency-Aware Software Requirements Selection (DARS), comprises three components: (i) an automated technique for the identification of value dependencies from user preferences, (ii) a modeling technique based on fuzzy graphs that allows for capturing the imprecision of value dependencies, and (iii) an Integer Linear Programming (ILP) model that takes into account user preferences and the value dependencies identified from those preferences to reduce the risk of value loss in software projects. Our work is verified by studying a real-world software project. The results show that the proposed method reduces value loss in software projects and is scalable to large requirement sets.
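
    To illustrate the kind of ILP model involved, here is a heavily simplified, crisp sketch of requirements selection in which the value of selecting two requirements together differs from the sum of their standalone values. The fuzzy-graph machinery of DARS is not reproduced; the pairwise value adjustments are invented numbers, and the pairwise products are linearized with auxiliary binary variables, a standard ILP trick.

```python
# Sketch: requirements selection with pairwise value dependencies (crisp version).
# Assumptions: invented data; PuLP + CBC; not the DARS fuzzy model itself.
import pulp

base_value = [9, 7, 5, 6]            # standalone value of each requirement
cost       = [4, 3, 2, 5]
budget     = 9
# (i, j, delta): extra value (or loss, if negative) when both i and j are selected
value_deps = [(0, 1, 3), (2, 3, -4)]

prob = pulp.LpProblem("dars_sketch", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(base_value))]
y = {}
for i, j, _ in value_deps:
    y[i, j] = pulp.LpVariable(f"y{i}_{j}", cat="Binary")   # y = x_i AND x_j, linearized
    prob += y[i, j] <= x[i]
    prob += y[i, j] <= x[j]
    prob += y[i, j] >= x[i] + x[j] - 1

prob += (pulp.lpSum(v * xi for v, xi in zip(base_value, x))        # objective: base values
         + pulp.lpSum(d * y[i, j] for i, j, d in value_deps))      # plus dependency adjustments
prob += pulp.lpSum(c * xi for c, xi in zip(cost, x)) <= budget     # effort capacity
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("selected:", [i for i, xi in enumerate(x) if xi.value() == 1])
```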

    Asymmetric Release Planning: Compromising Satisfaction against Dissatisfaction

    Maximizing the satisfaction gained from offering features as part of the upcoming release(s) is different from minimizing the dissatisfaction caused by not offering features. This asymmetric behavior has never been utilized for product release planning. We study Asymmetric Release Planning (ARP) by accommodating asymmetric feature evaluation. We formulate and solve ARP as a bi-criteria optimization problem: in essence, the search for optimized trade-offs between maximum stakeholder satisfaction and minimum dissatisfaction. Different techniques, including a continuous variant of Kano analysis, are available to predict the impact on satisfaction and dissatisfaction of offering or not offering a feature in a product release. As a proof of concept, we validated the proposed solution approach, called Satisfaction-Dissatisfaction Optimizer (SDO), via a real-world case study project. From running three replications with varying effort capacities, we demonstrate that SDO generates optimized trade-off solutions that are (i) of a different value profile and structure, (ii) superior to random search and heuristics in terms of quality and completeness, and (iii) superior to solutions generated manually by managers of the case study company. A survey with 20 stakeholders evaluated the applicability and usefulness of the generated results.
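
    The asymmetry can be made concrete with a small sketch: offering a feature contributes its satisfaction score, while leaving it out contributes a generally different dissatisfaction score, and a weighted scalarization is swept to expose trade-off plans under an effort capacity. All numbers are invented, and the brute-force enumeration is only an illustration of the trade-off, not the paper's SDO tool or its continuous Kano analysis.

```python
# Sketch: asymmetric satisfaction/dissatisfaction trade-off under an effort capacity.
# Assumptions: invented data; exhaustive enumeration instead of the SDO optimizer.
from itertools import product

# (satisfaction if offered, dissatisfaction if not offered, effort)
features = [(9, 2, 4), (5, 7, 3), (6, 6, 5), (3, 8, 2)]
capacity = 8

def evaluate(plan):
    sat    = sum(s for (s, d, e), x in zip(features, plan) if x)
    dissat = sum(d for (s, d, e), x in zip(features, plan) if not x)
    effort = sum(e for (s, d, e), x in zip(features, plan) if x)
    return sat, dissat, effort

trade_offs = {}
for w in (0.0, 0.25, 0.5, 0.75, 1.0):                 # weight on satisfaction
    feasible = (p for p in product((0, 1), repeat=len(features))
                if evaluate(p)[2] <= capacity)
    best = max(feasible, key=lambda p: w * evaluate(p)[0] - (1 - w) * evaluate(p)[1])
    trade_offs[best] = evaluate(best)

for plan, (sat, dissat, effort) in trade_offs.items():
    print(f"offer {plan}: satisfaction={sat}, dissatisfaction={dissat}, effort={effort}")
```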

    Locating Emergency Shelters While Incorporating Spatial Factors

    In the immediate response phase of a natural disaster, local governments and nonprofit agencies often establish shelters for affected populations. Decisions regarding the locations at which to open shelters are made ad hoc based on the available building inventory, and may result in high travel impedance to reach shelters as well as congestion. This thesis presents a shelter location optimization model based on the two-step floating catchment area (2SFCA) method. The 2SFCA method creates a shelter accessibility score for each areal unit (e.g., census block group) which represents the ability of persons in the unit to access shelter capacity with low travel impedance, relative to persons in other units competing for the same shelter capacity. A distance decay function within the 2SFCA method models the propensity of a person to visit a shelter based on the distance to the shelter. The optimization model recommends locations at which to open shelters so as to optimize some function of the 2SFCA accessibility scores. Three single-objective models and one bi-objective model are considered. Across all areal units, the alternative models: (i) maximize the sum of accessibility scores; (ii) minimize the disparity in accessibility scores; (iii) maximize the minimum accessibility score; and (iv) maximize the sum of all scores while minimizing disparity. These models are demonstrated via a case study based on Hurricane Florence, which struck North Carolina in 2018. The optimization model outputs are compared with the actual shelter openings during Hurricane Florence in four North Carolina cities, and also with the outputs of classic p-Median and p-Center facility location models. Case study results demonstrate that, across the range of parameter values included in a sensitivity analysis, the bi-objective model achieves the best tradeoff between efficient and equitable shelter locations, while also achieving a higher minimum accessibility score than either of its two constituent single-objective models on its own.
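
    For readers unfamiliar with 2SFCA, the following is a minimal sketch of the two-step accessibility calculation the optimization models build on. The shelter capacities, block-group populations, and travel distances are invented, and a simple exponential distance-decay function stands in for whichever decay form the thesis uses.

```python
# Sketch: two-step floating catchment area (2SFCA) accessibility scores.
# Assumptions: invented data; exponential decay chosen only for illustration.
import math

capacity   = {"S1": 400, "S2": 250}                    # shelter capacities
population = {"BG1": 1200, "BG2": 800, "BG3": 1500}    # areal-unit (block group) populations
distance   = {("BG1", "S1"): 2.0, ("BG1", "S2"): 6.0,  # travel distances in miles
              ("BG2", "S1"): 4.0, ("BG2", "S2"): 3.0,
              ("BG3", "S1"): 7.0, ("BG3", "S2"): 5.0}

def decay(d, beta=0.3):
    """Propensity to travel distance d to a shelter (exponential distance decay)."""
    return math.exp(-beta * d)

# Step 1: supply-to-demand ratio of each shelter, with demand weighted by distance decay.
ratio = {j: capacity[j] / sum(population[i] * decay(distance[i, j]) for i in population)
         for j in capacity}

# Step 2: accessibility score of each areal unit, summing decay-weighted shelter ratios.
access = {i: sum(ratio[j] * decay(distance[i, j]) for j in capacity) for i in population}

for unit, score in access.items():
    print(f"{unit}: accessibility score = {score:.4f}")
```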