
    Tight upper bounds for the expected loss of lexicographic heuristics in binary multiattribute choice

    Tight upper bounds for the expected loss of the DEBA (Deterministic-Elimination-By-Aspects) lexicographic selection heuristic are obtained for the case of an additive separable utility function with unknown non-negative, non-increasing attribute weights, for numbers of alternatives and attributes as large as 10, under two probabilistic models: one in which attributes are assumed to be independent Bernoulli random variables and another with positive inter-attribute correlation. The upper bounds substantially improve previous bounds and significantly extend the cases in which good performance of DEBA can be guaranteed under the assumed cognitive limitations.
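    Concretely, DEBA scans the binary attributes in order of non-increasing weight and eliminates every alternative lacking the current aspect (unless that would eliminate all survivors), until a single alternative remains. The following minimal Python sketch, with illustrative names and a Monte Carlo loop that is not part of the paper (the paper's bounds are analytical), shows the heuristic and the loss quantity being bounded:

        import numpy as np

        def deba(alternatives: np.ndarray) -> int:
            """Deterministic-Elimination-By-Aspects on a 0/1 matrix whose
            columns are ordered by non-increasing attribute weight.
            Returns the index of the chosen alternative (first survivor)."""
            remaining = np.arange(alternatives.shape[0])
            for j in range(alternatives.shape[1]):
                has_aspect = alternatives[remaining, j] == 1
                if has_aspect.any():          # eliminate only if someone survives
                    remaining = remaining[has_aspect]
                if remaining.size == 1:
                    break
            return int(remaining[0])

        # Monte Carlo estimate of the expected loss for m alternatives and
        # k independent Bernoulli(p) attributes, with random non-increasing weights.
        rng = np.random.default_rng(0)
        m, k, p = 10, 10, 0.5
        losses = []
        for _ in range(10_000):
            w = np.sort(rng.random(k))[::-1]
            A = rng.binomial(1, p, size=(m, k))
            u = A @ w
            losses.append(u.max() - u[deba(A)])
        print("estimated expected loss:", np.mean(losses))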

    Cumulative dominance and heuristic performance in binary multi-attribute choice

    Working paper 895, Department of Economics and Business, Universitat Pompeu Fabra. Several studies have reported high performance of simple decision heuristics in multi-attribute decision making. In this paper, we focus on situations where attributes are binary and analyze the performance of Deterministic-Elimination-By-Aspects (DEBA) and similar decision heuristics. We consider non-increasing weights and two probabilistic models for the attribute values: one where attribute values are independent Bernoulli random variables; the other where they are binary random variables with positive inter-attribute correlations. Using these models, we show that the good performance of DEBA is explained by the presence of cumulative as opposed to simple dominance. We therefore introduce the concepts of cumulative dominance compliance and fully cumulative dominance compliance and show that DEBA satisfies both properties. We derive a lower bound on the probability with which cumulative dominance compliant heuristics choose a best alternative and show that, even with many attributes, this probability is not small. We also derive an upper bound on the expected loss of fully cumulative dominance compliant heuristics and show that this loss is moderate even when the number of attributes is large. Both bounds are independent of the values of the weights.
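    The central property is easy to state in code: with attributes ordered by non-increasing weight, alternative x cumulatively dominates y when every prefix sum of x's attribute values is at least the corresponding prefix sum of y's. By Abel summation, this implies that x's utility is at least y's for every non-negative, non-increasing weight vector. A small illustrative check (not from the paper):

        import numpy as np

        def cumulatively_dominates(x: np.ndarray, y: np.ndarray) -> bool:
            """True if x cumulatively dominates y: each prefix sum of x
            (attributes ordered by non-increasing weight) is >= that of y."""
            return bool(np.all(np.cumsum(x) >= np.cumsum(y)))

        # (1,0,1) cumulatively dominates (0,1,1): prefix sums (1,1,2) >= (0,1,2),
        # even though neither alternative simply dominates the other.
        print(cumulatively_dominates(np.array([1, 0, 1]), np.array([0, 1, 1])))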

    Of keyboards and beyond - optimization in human-computer interaction

    In this thesis, we present optimization frameworks in the area of Human-Computer Interaction. First, we discuss keyboard layout problems with a special focus on a project we participated in, which aimed at designing the new French keyboard standard. The special nature of this national-scale project and its optimization ingredients are discussed in detail; we specifically highlight our algorithmic contribution to this project. Exploiting the special structure of this design problem, we propose an optimization framework that efficiently computes keyboard layouts and provides very good optimality guarantees in the form of tight lower bounds. The optimized layout, which we showed to be nearly optimal, was the basis of the new French keyboard standard recently published in the National Assembly in Paris. Moreover, we propose a relaxation for the quadratic assignment problem (a generalization of keyboard layouts) that is based on semidefinite programming. In a branch-and-bound framework, this relaxation achieves competitive results compared to commonly used linear programming relaxations for this problem. Finally, we introduce a modeling language for mixed integer programs that especially focuses on the challenges and features that appear in participatory optimization problems similar to the French keyboard design process.
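    The quadratic assignment view can be made concrete: a keyboard layout is a permutation assigning characters to keys, and its cost couples character-pair frequencies with key-pair movement costs. A minimal sketch with illustrative names (the actual project optimized a much richer multi-criteria objective):

        import itertools
        import numpy as np

        def qap_cost(perm, flow, dist):
            """QAP objective: sum_{i,j} flow[i, j] * dist[perm[i], perm[j]].
            For keyboards, flow holds bigram frequencies between characters and
            dist the movement cost between the keys they are assigned to."""
            perm = np.asarray(perm)
            return float((flow * dist[np.ix_(perm, perm)]).sum())

        def brute_force_qap(flow, dist):
            """Exact minimization by enumeration -- feasible only for tiny
            instances; realistic ones need relaxations (e.g. semidefinite
            programming) to obtain lower bounds."""
            n = flow.shape[0]
            return min(itertools.permutations(range(n)),
                       key=lambda p: qap_cost(p, flow, dist))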

    Optimization algorithms for decision tree induction

    Decision trees are among the most commonly used machine learning models for solving classification and regression tasks due to their major advantage of being easy to interpret. However, their predictions are often not as accurate as those of other models.
    The most widely used approach for learning decision trees is to build them in a top-down manner by introducing splits on a single variable that minimize a certain splitting criterion. One possibility for improving this strategy, to induce smaller and more accurate decision trees, is to allow different types of splits which, for example, consider multiple features simultaneously. However, finding such splits is usually much more complex, and effective optimization methods are needed to determine optimal solutions. An alternative to univariate splits for numerical features are oblique splits, which employ affine hyperplanes to divide the feature space. Unfortunately, the problem of determining such a split optimally is known to be NP-hard in general. Inspired by the underlying problem structure, two new heuristics are developed for finding near-optimal oblique splits. The first is a cross-entropy optimization method which iteratively samples points from the von Mises-Fisher distribution and updates its parameters based on the best-performing samples. The second is a simulated annealing algorithm that uses a pivoting strategy to explore the solution space. As general oblique splits employ all of the numerical features simultaneously, they are hard to interpret. As an alternative, this thesis proposes the use of bivariate oblique splits. These splits correspond to lines in the subspace spanned by two features. They can divide the feature space much more efficiently than univariate splits while remaining fairly interpretable due to the restriction to two features. A branch and bound method is presented to determine these bivariate oblique splits optimally. Furthermore, a branch and bound method to determine optimal cross-splits is presented. These splits can be viewed as combinations of two standard univariate splits on numeric attributes, and they are useful in situations where the data points cannot be separated well by single linear splits. The cross-splits can either be introduced directly to induce quaternary decision trees or, which is usually better, they can be used to provide a certain degree of foresight, in which case only the better of the two respective univariate splits is introduced. The developed lower bounds for impurity-based splitting criteria also motivate a simple but effective branch and bound algorithm for splits on nominal features. Due to the complexity of determining such splits optimally when the number of possible values for the feature is large, one previously had to use encoding schemes to transform the nominal features into numerical ones or rely on heuristics to find near-optimal nominal splits. The proposed branch and bound method may be a viable alternative for many practical applications. Lastly, a genetic algorithm is proposed as an alternative to the top-down induction strategy.
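    To make the split types concrete: a bivariate oblique split thresholds a linear combination of just two features, i.e. a line in their subspace. The sketch below scores one such candidate by weighted Gini impurity, the kind of objective the branch and bound procedures minimize over all candidates; the names and the choice of impurity are illustrative, not the thesis code:

        import numpy as np

        def gini(labels: np.ndarray) -> float:
            """Gini impurity of a label array (0.0 for an empty node)."""
            if labels.size == 0:
                return 0.0
            _, counts = np.unique(labels, return_counts=True)
            p = counts / counts.sum()
            return 1.0 - float(np.sum(p ** 2))

        def bivariate_split_impurity(X, y, i, j, w, b):
            """Weighted Gini impurity of the bivariate oblique split
            w[0]*X[:, i] + w[1]*X[:, j] <= b."""
            left = w[0] * X[:, i] + w[1] * X[:, j] <= b
            n = y.size
            return (left.sum() * gini(y[left])
                    + (~left).sum() * gini(y[~left])) / n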

    Metareasoning about propagators for constraint satisfaction

    Given the breadth of constraint satisfaction problems (CSPs) and the wide variety of CSP solvers, it is often very difficult to determine a priori which solving method is best suited to a problem. This work explores the use of machine learning to predict which solving method will be most effective for a given problem. We use four different problem sets to identify the CSP attributes that can be used to decide which solving method should be applied. After choosing an appropriate set of attributes, we determine how well J48 decision trees can predict which solving method to apply. Furthermore, we take a cost-sensitive approach that emphasizes problem instances where there is a great difference in runtime between algorithms. We also attempt to use information gained on one class of problems to inform decisions about a second class of problems. Finally, we show that the additional cost of deciding which method to apply is outweighed by the time savings compared to applying the same solving method to all problem instances.
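    A rough sketch of this pipeline in Python (placeholder data throughout; scikit-learn's CART trees stand in for Weka's J48, and sample weights implement the cost-sensitive emphasis on instances with large runtime gaps):

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        features = rng.random((200, 6))   # placeholder CSP instance attributes
        runtimes = rng.random((200, 3))   # placeholder runtimes of 3 solving methods

        best_solver = runtimes.argmin(axis=1)
        # Cost-sensitive weighting: instances where the solver choice matters
        # most (largest runtime gap) dominate the training signal.
        gap = runtimes.max(axis=1) - runtimes.min(axis=1)

        clf = DecisionTreeClassifier(max_depth=4, random_state=0)
        clf.fit(features, best_solver, sample_weight=gap)
        print(clf.predict(features[:5]))  # predicted solving method per instance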

    A two-level local search heuristic for pickup and delivery problems in express freight trucking

    We consider a multiattribute vehicle routing problem inspired by a freight transportation company operating a fleet of heterogeneous trucks. The company offers an express service for requests including multiple pickup and multiple delivery positions spread over a regional area, with associated soft or hard time windows often falling in the same working day. Routes are planned on a daily basis and reoptimized on the fly to fit new requests, taking into account constraints and preferences on capacities, hours of service, and route termination points. The objective is to maximize the difference between the revenue from satisfied orders and the operational costs. The problem mixes attributes from both intercity less-than-truckload and express courier operations, and we propose a two-level local search heuristic: the first level assigns orders to vehicles through a variable neighborhood stochastic tabu search; the second level optimizes the route service sequences. The algorithm, enhanced by neighborhood filtering and parallel exploration, is embedded in a decision support tool currently in use in a small trucking company. Results have been compared to bounds obtained from a mathematical programming model solved by column generation. Experience in the field and tests on literature instances attest to the quality of the results and the efficiency of the proposed approach.
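    The two-level idea can be sketched compactly: an upper level searches over order-to-vehicle assignments with a tabu list, calling an evaluator that, in the real system, runs the second-level route-sequence optimizer. The skeleton below is illustrative only and far simpler than the paper's variable neighborhood stochastic tabu search with filtering and parallel exploration:

        import random

        def tabu_search(orders, vehicles, evaluate, n_iters=500, tenure=20):
            """First-level skeleton: assign orders to vehicles, maximizing
            evaluate(assignment) = revenue minus routing cost (the second
            level would compute the routing cost of each vehicle's tour)."""
            assignment = {o: random.choice(vehicles) for o in orders}
            best, best_profit = dict(assignment), evaluate(assignment)
            tabu = {}  # (order, vehicle) -> iteration until which the move is tabu
            for it in range(n_iters):
                # Neighborhood: move a single order to a different vehicle.
                moves = [(o, v) for o in orders for v in vehicles
                         if v != assignment[o] and tabu.get((o, v), -1) < it]
                if not moves:
                    continue
                o, v = max(moves, key=lambda mv: evaluate({**assignment, mv[0]: mv[1]}))
                tabu[(o, assignment[o])] = it + tenure  # forbid moving straight back
                assignment[o] = v
                profit = evaluate(assignment)
                if profit > best_profit:
                    best, best_profit = dict(assignment), profit
            return best, best_profit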

    Ethical Machine Learning: Fairness, Privacy, And The Right To Be Forgotten

    Large-scale algorithmic decision making has increasingly run afoul of various social norms, laws, and regulations. A prominent concern is when a learned model exhibits discrimination against some demographic group, perhaps based on race or gender. Concerns over such algorithmic discrimination have led to a recent flurry of research on fairness in machine learning, which includes new tools for designing fair models and studies the tradeoffs between predictive accuracy and fairness. We address algorithmic challenges in this domain. Preserving the privacy of data when performing analysis on it is not only a basic right of users; it is also required by laws and regulations. How should one preserve privacy? After about two decades of fruitful research in this domain, differential privacy (DP) is considered by many the gold standard notion of data privacy. We focus on how differential privacy can be useful beyond preserving data privacy. In particular, we study the connection between differential privacy and adaptive data analysis. Users voluntarily provide huge amounts of personal data to businesses such as Facebook, Google, and Amazon in exchange for useful services. But a basic principle of data autonomy asserts that users should be able to revoke access to their data if they no longer find the exchange of data for services worthwhile. The right of users to request the erasure of personal data appears in regulations such as the Right to be Forgotten of the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). We provide algorithmic solutions to the problem of removing the influence of data points from machine learning models.
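    As one concrete (and deliberately simple) way to remove a data point's influence exactly, in the spirit of sharded training rather than as this dissertation's specific method, one can train one model per data shard and retrain only the affected shard when a deletion request arrives. A toy sketch with illustrative names:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        class ShardedUnlearner:
            """Exact unlearning by sharding: one model per shard, majority-vote
            predictions; deleting a point retrains only its shard."""

            def __init__(self, X, y, n_shards=4):
                self.shards = [(X[i::n_shards].copy(), y[i::n_shards].copy())
                               for i in range(n_shards)]
                self.models = [LogisticRegression().fit(Xs, ys)
                               for Xs, ys in self.shards]

            def forget(self, shard_idx, row_idx):
                Xs, ys = self.shards[shard_idx]
                Xs, ys = np.delete(Xs, row_idx, axis=0), np.delete(ys, row_idx)
                self.shards[shard_idx] = (Xs, ys)
                self.models[shard_idx] = LogisticRegression().fit(Xs, ys)

            def predict(self, X):
                votes = np.stack([m.predict(X) for m in self.models])
                return (votes.mean(axis=0) > 0.5).astype(int)  # binary labels assumed

        # Toy usage on synthetic data.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)
        model = ShardedUnlearner(X, y)
        model.forget(shard_idx=2, row_idx=7)  # erase one point's influence exactly
        print(model.predict(X[:5]))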