    Computing leximin-optimal solutions in constraint networks

    In many real-world multiobjective optimization problems, one needs to find solutions or alternatives that provide a fair compromise between different conflicting objective functions (criteria in a multicriteria context, or agent utilities in a multiagent context) while being efficient, i.e., informally, ensuring the greatest possible overall satisfaction. This is typically the case in problems involving human agents, where fairness and efficiency requirements must both be met. Preference handling and resource allocation problems are other examples of the need for balanced compromises between several conflicting objectives. One way to characterize good solutions in such problems is to use the leximin preorder to compare the vectors of objective values and to select the solutions that maximize this preorder. In this article, we describe five algorithms for finding leximin-optimal solutions using constraint programming. Three of these algorithms are original; the others are adapted to constraint programming settings from existing work. The algorithms are compared experimentally on three benchmark problems.
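The leximin preorder used above compares vectors of objective values through their sorted copies. A minimal sketch (illustrative only, not one of the paper's five constraint-programming algorithms):

```python
# Leximin comparison: sort each vector in non-decreasing order, then
# compare lexicographically; maximizing this order favours solutions
# whose worst-off objective (then second worst-off, etc.) is largest.

def leximin_key(vector):
    """The sorted (non-decreasing) copy used for comparison."""
    return sorted(vector)

def leximin_better(u, v):
    """True if u is strictly leximin-preferred to v."""
    return leximin_key(u) > leximin_key(v)

def leximin_optimal(solutions):
    """The solutions that are maximal for the leximin preorder."""
    best = max(solutions, key=leximin_key)
    return [s for s in solutions if leximin_key(s) == leximin_key(best)]
```

For example, [3, 1, 2] is leximin-preferred to [0, 5, 5]: the worst-off value 1 beats 0, even though the sum is smaller.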

    Leximin Approximation: From Single-Objective to Multi-Objective

    Leximin is a common approach to multi-objective optimization, frequently employed in fair division applications. In leximin optimization, one first aims to maximize the smallest objective value; subject to this, one maximizes the second-smallest objective; and so on. Often, even the single-objective problem of maximizing the smallest value cannot be solved accurately. What can we hope to accomplish for leximin optimization in this situation? Recently, Henzinger et al. (2022) defined a notion of approximate leximin optimality. Their definition, however, considers only an additive approximation. In this work, we first define a notion of approximate leximin optimality that allows both multiplicative and additive errors. We then show how to compute, in polynomial time, such an approximate leximin solution, using an oracle that finds an approximation to a single-objective problem. The approximation factors of the algorithms are closely related: an (α, ε)-approximation for the single-objective problem (where α ∈ (0,1] and ε ≥ 0 are the multiplicative and additive factors, respectively) translates into an (α²/(1 − α + α²), ε/(1 − α + α²))-approximation for the multi-objective leximin problem, regardless of the number of objectives. Finally, we apply our algorithm to obtain an approximate leximin solution for the problem of stochastic allocations of indivisible goods. For this problem, assuming submodular objective functions, the single-objective egalitarian welfare can be approximated, with only a multiplicative error, to an optimal factor of 1 − 1/e ≈ 0.632 w.h.p. We show how to extend the approximation to leximin, over all the objective functions, to a multiplicative factor of (e − 1)²/(e² − e + 1) ≈ 0.52 w.h.p., or 1/3 deterministically.
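The sequential scheme described above (first maximize the smallest value, then the second-smallest, and so on) can be made concrete over a finite solution set. This brute-force sketch is an assumption for exposition, not the paper's polynomial-time oracle-based algorithm:

```python
# Iterative leximin over an explicitly listed solution set: at round t,
# keep only the solutions whose t-th smallest objective value is maximal.

def leximin_select(solutions):
    candidates = list(solutions)
    n_objectives = len(candidates[0])
    for t in range(n_objectives):
        best_t = max(sorted(s)[t] for s in candidates)
        candidates = [s for s in candidates if sorted(s)[t] == best_t]
    return candidates
```

The paper's setting replaces the exact per-round maximization with an (α, ε)-approximate single-objective oracle, which is where the loss factors above arise.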

    Trusted content-based publish/subscribe trees

    Publish/subscribe systems hold strong assumptions about the expected behaviour of clients and routers, as it is assumed that they all abide by the matching and routing protocols. Assumptions of implicit trust between the components of the publish/subscribe infrastructure are acceptable where the underlying event distribution service is under the control of one or more co-operating administrative entities and contracts between clients and these authorities exist; however, there are application contexts where these presumptions do not hold. In such environments, such as ad hoc networks, there is the possibility of selfish and malicious behaviour that can lead to disruption of the routing and matching algorithms. The most commonly researched approach to security in publish/subscribe systems is role-based access control (RBAC). RBAC is suitable for ensuring confidentiality, but because it assumes strong identities associated with well-defined roles and lacks monitoring systems that would allow policies to adapt to the changing behaviour of clients, it is not appropriate for environments where identities cannot be assigned to roles in the absence of a trusted administrative entity, where long-lived identities of entities do not exist, and where the threat model consists of highly adaptable malicious and selfish entities. Motivated by recent work on the application of trust and reputation to peer-to-peer networks, where past behaviour is used to generate trust opinions that inform future transactions, we propose an approach in which the publish/subscribe infrastructure is constructed and re-configured with respect to the trust preferences of clients and routers. In this thesis, we show how publish/subscribe trees (PSTs) can be constructed with respect to the trust preferences of publishers and subscribers and the overhead costs of event dissemination.
Using social welfare theory, it is shown that individual trust preferences over clients and routers, informed by a variety of trust sources, can be aggregated to give a social preference over the set of feasible PSTs. By combining this with existing work on PST overheads, the Maximum Trust PST with Overhead Budget problem is defined and shown to be NP-complete. An exhaustive search algorithm is proposed that is shown to be suitable only for very small problem sizes. To improve scalability, a faster tabu search algorithm is presented, which is shown to scale to larger problem instances and to give good approximations of the optimal solutions. The research contributions of this work are: the use of social welfare theory to provide a mechanism for establishing the trustworthiness of PSTs; the finding that individual trust is not interpersonally comparable, as is assumed in much of the trust literature; the Maximum Trust PST with Overhead Budget problem; and algorithms to solve this problem.
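The tabu search in the thesis is specific to PSTs and trust aggregation; as a hedged illustration only, a generic tabu-search skeleton looks like the following, where `neighbours` and `score` are hypothetical stand-ins for the thesis's PST moves and trust objective:

```python
# Generic tabu search: always move to the best non-tabu neighbour, even a
# worsening one, and keep a short-term memory (the tabu list) of recently
# visited states to avoid cycling back into local optima.

def tabu_search(initial, neighbours, score, iterations=100, tenure=5):
    current = best = initial
    tabu = []  # recently visited states
    for _ in range(iterations):
        moves = [n for n in neighbours(current) if n not in tabu]
        if not moves:
            break
        current = max(moves, key=score)  # may score worse than `best`
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)
        if score(current) > score(best):
            best = current
    return best
```

The tabu tenure is the key design knob: a longer memory forces wider exploration at the cost of possibly skipping good nearby states.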

    Monotonicity and Competitive Equilibrium in Cake-cutting

    We study the monotonicity properties of solutions in the classic problem of fair cake-cutting: dividing a heterogeneous resource among agents with different preferences. Resource- and population-monotonicity relate to scenarios where the cake, or the number of participants who divide the cake, changes. It is required that the utility of all participants change in the same direction: either all of them are better off (if there is more to share or fewer to share among) or all are worse off (if there is less to share or more to share among). We formally introduce these concepts to the cake-cutting problem and examine whether they are satisfied by various common division rules. We prove that the Nash-optimal rule, which maximizes the product of utilities, is resource-monotonic and population-monotonic, in addition to being Pareto-optimal, envy-free, and satisfying a strong competitive-equilibrium condition. Moreover, we prove that it is the only rule among a natural family of welfare-maximizing rules that is both proportional and resource-monotonic.
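The Nash-optimal rule above maximizes the product of the agents' utilities. A toy discretized sketch (the paper treats a continuous cake; the finite-segment cake and two agents here are assumptions for exposition):

```python
from itertools import product

def nash_optimal(values):
    """values[i][j] = agent i's value for cake segment j (two agents).
    Returns the segment->agent assignment maximizing the product of utilities."""
    n_segments = len(values[0])
    best_product, best_alloc = -1.0, None
    for alloc in product(range(2), repeat=n_segments):
        utils = [sum(values[i][j] for j in range(n_segments) if alloc[j] == i)
                 for i in range(2)]
        if utils[0] * utils[1] > best_product:
            best_product, best_alloc = utils[0] * utils[1], alloc
    return best_alloc, best_product
```

In the example below, giving each agent the segment it values more yields utilities (3, 3) and the maximal product 9; any other assignment gives a product of at most 1.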

    Sorted-Pareto dominance: an extension to Pareto dominance and its application in soft constraints

    The Pareto dominance relation compares decisions with each other over multiple aspects, and any decision that is not dominated by another is called Pareto optimal, which is a desirable property in decision making. However, the Pareto dominance relation is not very discerning and often leads to a large number of non-dominated, or Pareto optimal, decisions. By strengthening the relation, we can narrow down this non-dominated set of decisions to a smaller set, e.g., for presenting a smaller number of more interesting decisions to a decision maker. In this paper, we look at a particular strengthening of Pareto dominance called Sorted-Pareto dominance, giving some properties that characterise the relation, and giving it a semantics in the context of decision making under uncertainty. We then examine the use of the relation in a soft constraints setting, and explore some algorithms for generating Sorted-Pareto optimal solutions to soft constraints problems.
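The strengthening can be stated compactly: sort each decision's cost vector and apply ordinary Pareto dominance to the sorted copies. A minimal sketch, assuming commensurate costs to be minimized (not the paper's soft-constraints algorithms):

```python
def pareto_dominates(u, v):
    """u at least as good everywhere, strictly better somewhere (minimizing)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def sorted_pareto_dominates(u, v):
    """Sorted-Pareto: Pareto dominance applied to the sorted cost vectors."""
    return pareto_dominates(sorted(u), sorted(v))
```

For instance, (1, 3) and (3, 2) are Pareto-incomparable, but (1, 3) Sorted-Pareto dominates (3, 2), since [1, 3] Pareto-dominates the sorted copy [2, 3]; this is how the relation discriminates more than plain Pareto dominance.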

    Proportional and maxmin fairness for the sensor location problem with chance constraints

    In this paper we present a study of the Equitable Sensor Location Problem, focusing on the stochastic version of the problem, where the surveying capacity of some sensors is measured as a probability of intrusion detection. The Equitable Sensor Location Problem, an extension of the Equitable Facility Location Problem, considers installing surveying facilities such as cameras/sensors in order to monitor and protect some important locations. Each location can be simultaneously protected by multiple facilities. This problem falls into the category of the Maximal Coverage Location Problem, and we focus on the equitable variant. The objective of the Equitable Sensor Location Problem is to provide equitable protection to all locations when the number of sensors that can be placed is limited. We study the resilient and ambiguous versions of this problem. The resilient sensor location problem considers the case where some sensors are assumed to fail partially or completely. The ambiguous version studies the case where the surveying probabilities are uncertain and represented by independent Bernoulli random variables, with the corresponding ambiguity set containing the Bernoulli probability distributions. For each problem we consider two popular fairness measures, lexicographically optimal and proportionally fair solutions, and provide an integer linear formulation together with the solution methodology. Numerical results for each studied problem are provided at the end of the paper.
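As a brute-force toy illustration of the equitable objective (not the paper's integer linear formulation), assume independent sensors, so a location's protection probability is 1 − ∏(1 − p), and pick the placement whose sorted protection vector is leximin-greatest:

```python
from itertools import combinations

def coverage(placement, detect):
    """detect[s][l] = detection probability of sensor site s for location l."""
    n_locations = len(next(iter(detect.values())))
    probs = []
    for l in range(n_locations):
        miss = 1.0
        for s in placement:
            miss *= 1.0 - detect[s][l]  # sensors assumed independent
        probs.append(1.0 - miss)
    return probs

def fairest_placement(detect, budget):
    """Leximin-fairest choice of `budget` sensor sites (exhaustive search)."""
    return max(combinations(list(detect), budget),
               key=lambda p: sorted(coverage(p, detect)))
```

In the example below, the two specialised sensors protect both locations at 0.9 each, which leximin-beats any placement leaving one location at 0.5.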

    Balancing schedules using maximum leximin

    We consider the problem of assigning, in a fair way, time limits for processes in manufacturing a product, subject to a deadline, where the duration of each activity can be uncertain. We focus on an approach based on choosing the maximum element according to a leximin ordering, and we prove the correctness of a simple iterative procedure for generating this maximally preferred element. Our experimental testing illustrates the efficiency of our approach.
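The paper's iterative procedure also copes with uncertain durations; for the deterministic case only, the water-filling idea behind a maximum-leximin assignment of time limits can be sketched as follows (the per-process caps and single shared deadline are illustrative assumptions):

```python
def leximin_time_limits(caps, deadline):
    """Split a total time budget `deadline` across processes with upper
    bounds `caps`, maximizing the leximin order of the allocations:
    raise all limits together, freezing a process when its cap binds."""
    alloc = [0.0] * len(caps)
    active = set(range(len(caps)))
    budget = float(deadline)
    while active and budget > 1e-12:
        share = budget / len(active)
        bound = {i for i in active if caps[i] <= alloc[i] + share}
        if not bound:
            for i in active:
                alloc[i] += share
            budget = 0.0
        else:  # give bound processes exactly their cap, then redistribute
            for i in bound:
                budget -= caps[i] - alloc[i]
                alloc[i] = float(caps[i])
            active -= bound
    return alloc
```

With caps (2, 5, 5) and a deadline of 9, the first process is frozen at its cap of 2 and the remaining budget is split evenly, giving 3.5 to each of the others.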

    Sorted-Pareto dominance and qualitative notions of optimality

    Pareto dominance is often used in decision making to compare decisions that have multiple preference values; however, it can produce an unmanageably large number of Pareto optimal decisions. When preference value scales can be made commensurate, the Sorted-Pareto relation produces a smaller, more manageable set of decisions that are still Pareto optimal. Sorted-Pareto relies only on qualitative or ordinal preference information, which can be easier to obtain than quantitative information. This leads to a partial order on the decisions, and in such partially ordered settings there can be many different natural notions of optimality. In this paper, we look at these natural notions of optimality, applied to the Sorted-Pareto case and the min-sum-of-weights case; the Sorted-Pareto ordering has a semantics in decision making under uncertainty, being consistent with any possible order-preserving function that maps an ordinal scale to a numerical one. We show that these optimality classes and the relationships between them provide a meaningful way to categorise optimal decisions for presentation to a decision maker.
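One of the natural notions of optimality in such a partially ordered setting is the set of maximal (undominated) decisions; a minimal sketch of that notion only (the paper's full taxonomy of optimality classes is not reproduced here):

```python
def maximal(decisions, dominates):
    """Decisions not strictly dominated by any other decision,
    for an arbitrary strict dominance relation `dominates`."""
    return [d for d in decisions
            if not any(dominates(e, d) for e in decisions if e is not d)]
```

Plugging in Pareto, Sorted-Pareto, or any other strict relation yields the corresponding undominated set.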

    Fair Resource Allocation in Macroscopic Evacuation Planning Using Mathematical Programming: Modeling and Optimization

    Evacuation is essential in the case of natural and man-made disasters such as hurricanes, nuclear disasters, fire accidents, terrorism, and epidemics. Random evacuation plans can increase risks and incur more losses. Hence, numerous simulation and mathematical programming models have been developed over the past few decades to help transportation planners make decisions that reduce costs and protect lives. However, the dynamic transportation process is inherently complex. Thus, modeling this process can be challenging and computationally demanding. The objective of this dissertation is to build a balanced model that reflects the realism of the dynamic transportation process while remaining computationally tractable enough to be used in practice by decision-makers. At the same time, the users of the transportation network require reasonable travel times within the network to reach their destinations. This dissertation introduces a novel framework in the fields of fairness in network optimization and evacuation to provide better insight into the evacuation process and assist with decision making. The user of the transportation network is a critical element in this research. Thus, fairness and efficiency are the two primary objectives addressed in this work, taking into account the limited capacity of the roads of the transportation network. Specifically, an approximation approach to the max-min fairness (MMF) problem is presented that provides lower computational time and high-quality output compared to the original algorithm. In addition, a new algorithm is developed to find the MMF resource allocation in problems with a nonconvex structure. MMF is the fairness policy used in this research since it considers both fairness and efficiency and gives priority to fairness.
In addition, a new dynamic evacuation modeling approach is introduced that is capable of reporting more information about the evacuees than conventional evacuation models, such as their travel time, evacuation time, and departure time. Thus, the contribution of this dissertation lies in the two areas of fairness and evacuation. The first part of the contribution is in the field of fairness. The objective in MMF is to allocate resources fairly among multiple demands given limited resources, while utilizing the resources for higher efficiency. Fairness and efficiency are conflicting objectives, so they are translated into a bi-objective mathematical programming model and solved using the ϵ-constraint method introduced by Chankong and Haimes (1983). Although the solution is an approximation to the MMF, the model produces quality solutions, when ϵ is properly selected, in less computational time than the progressive-filling algorithm (PFA). In addition, a new algorithm called the θ-progressive-filling algorithm is developed in this research; it finds the MMF resource allocation for general problems and works on problems with a nonconvex structure. The second part of the contribution is in evacuation modeling. Common dynamic evacuation models lack a piece of information essential for achieving fairness: the time each evacuee or group of evacuees spends in the network. Most evacuation models compute only the total time for all evacuees to move from the endangered zone to the safe destination. This lack of information about the users of the transportation network is the motivation for developing a new optimization model that reports more information about them. The model finds the travel time, evacuation time, departure time, and selected route for each group of evacuees.
Given that the travel time function is a non-linear convex function of the traffic volume, the function is linearized through a piecewise linear approximation. The developed model is a mixed-integer linear programming (MILP) model of high complexity; hence, it is not capable of solving large-scale problems. The complexity of the model was reduced by introducing a linear programming (LP) version of the full model; the complexity is significantly reduced while the output is maintained exactly. In addition, the new θ-progressive-filling algorithm was applied to the evacuation model to find a fair and efficient evacuation plan, and it is also used to identify the optimal routes in the transportation network. Moreover, the robustness of the evacuation model was tested against demand uncertainty to observe the model's behavior when demand is uncertain. Finally, the robustness of the model is tested when the traffic flow is uncontrolled; in this case, the model's only decision is to distribute the evacuees over routes, with no control over departure times.
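The θ-progressive-filling algorithm itself is not reproduced here; as a hedged sketch of the classic progressive-filling algorithm it generalizes, consider flows sharing capacitated links, with all rates raised equally until a link saturates:

```python
def max_min_fair_rates(flows, capacity):
    """flows[i]: set of link ids flow i traverses; capacity: link id -> capacity.
    Progressive filling: raise every active rate by the same delta until some
    link saturates, then freeze the flows crossing it and continue."""
    rate = [0.0] * len(flows)
    active = set(range(len(flows)))
    cap = dict(capacity)  # remaining capacity per link
    while active:
        # Largest equal increment before a link used by an active flow fills up.
        delta = min(cap[l] / sum(1 for i in active if l in flows[i])
                    for l in cap if any(l in flows[i] for i in active))
        for i in active:
            rate[i] += delta
        saturated = set()
        for l in cap:
            used = sum(1 for i in active if l in flows[i])
            if used:
                cap[l] -= delta * used
                if cap[l] <= 1e-12:
                    saturated.add(l)
        active = {i for i in active if not (flows[i] & saturated)}
    return rate
```

With flows {A}, {A, B}, {B} and capacities A = 1, B = 2, the first two flows are frozen at 0.5 when link A saturates, and the third continues to 1.5, the classic max-min fair outcome.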