
    On the chairman assignment problem

    Given m states, which form a union, a chairman must be selected every year in such a way that at any time the accumulated number of chairmen from each state is proportional to its weight. In this paper an algorithm for chairman assignment is given which, depending on the weights, guarantees a small discrepancy.
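The problem described above can be illustrated with a simple greedy heuristic (a sketch only, not necessarily the paper's algorithm): each year, assign the chair to the state whose ideal cumulative share most exceeds its actual count so far.

```python
def assign_chairmen(weights, years):
    """Greedy proportional assignment: each year, pick the state whose
    quota (weight * elapsed years / total weight) most exceeds its
    accumulated chairman count. Ties go to the lowest-indexed state."""
    total = sum(weights)
    m = len(weights)
    counts = [0] * m
    schedule = []
    for t in range(1, years + 1):
        # deficit of each state relative to its ideal share after t years
        deficits = [weights[i] * t / total - counts[i] for i in range(m)]
        pick = max(range(m), key=lambda i: deficits[i])
        counts[pick] += 1
        schedule.append(pick)
    return schedule, counts

# Two states with weights 3:1 over 8 years -> 6 and 2 chairmen.
schedule, counts = assign_chairmen([3, 1], 8)
```

The greedy rule keeps each state's running count close to its proportional quota at every prefix of the schedule, which is exactly the small-discrepancy property the abstract refers to.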

    Optimization opportunities in human in the loop computational paradigm

    An emerging trend is to leverage human capabilities in the computational loop at different capacities, ranging from tapping knowledge resident in a richly heterogeneous pool of the general population to soliciting expert opinions. These practices are, in general, termed human-in-the-loop (HITL) computations. A HITL process requires holistic treatment and optimization from multiple standpoints, considering all stakeholders: (a) applications, (b) platforms, and (c) humans. In application-centric optimization, the factors of interest are usually latency (how long it takes for a set of tasks to finish), cost (the monetary or computational expenses incurred in the process), and quality of the completed tasks. Platform-centric optimization studies throughput or revenue maximization, while human-centric optimization deals with the characteristics of the human workers, referred to as human factors, such as their skill improvement and learning. Finally, fairness and ethical considerations are also of utmost importance in these processes.
    This dissertation aims to design solutions for each of the aforementioned stakeholders. Its first contribution is the study of recommending deployment strategies for applications consistent with task requesters' deployment parameters. From the workers' standpoint, the dissertation investigates online group formation where members seek to increase their learning potential via collaboration. Finally, it studies how to consolidate preferences from different workers/applications in a fair manner, such that the final order is both consistent with individual preferences and complies with a group fairness criterion.
    The technical contributions of this dissertation are to rigorously study these problems from a theoretical standpoint, present principled algorithms with theoretical guarantees, and conduct extensive experimental analysis using large-scale real-world datasets to demonstrate their effectiveness and scalability.
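As a toy illustration of consolidating preferences from multiple workers into a single order, Borda counting assigns each item points by rank position and sorts by total score. (This is only a baseline aggregation sketch; the dissertation's fairness-aware consolidation methods are more involved.)

```python
def borda_aggregate(rankings):
    """Aggregate several workers' rankings via Borda scores:
    an item at position p in a ranking of n items earns n - 1 - p points.
    Ties are broken lexicographically for determinism."""
    n = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for pos, item in enumerate(ranking):
            scores[item] = scores.get(item, 0) + (n - 1 - pos)
    return sorted(scores, key=lambda item: (-scores[item], item))

# Three workers rank items a, b, c; 'a' wins two first places.
consensus = borda_aggregate([['a', 'b', 'c'], ['b', 'a', 'c'], ['a', 'c', 'b']])
```

A fairness-constrained variant would additionally require the output order to satisfy a group representation criterion, which is the kind of constraint the dissertation studies.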

    Models and algorithms for promoting diverse and fair query results

    Ensuring fairness and diversity in search results are two key concerns in compelling search and recommendation applications. This work explicitly studies these two aspects given multiple users' preferences as inputs, in an effort to create a single ranking or top-k result set that satisfies different fairness and diversity criteria. From a group fairness standpoint, it adapts demographic-parity-like group fairness criteria and proposes new models that are suitable for ranking or producing a top-k set of results. This dissertation also studies equitable exposure of individual search results in long-tail data, a concept related to individual fairness. First, the dissertation focuses on aggregating ranks while achieving proportionate fairness (ensuring proportionate representation of every group) for multiple protected groups. Then, it explores how to minimally modify the original users' preferences under plurality voting, aiming to produce a top-k result set that satisfies complex fairness constraints. A concept referred to as manipulation by modifications is introduced, which involves making minimal changes to the original user preferences to ensure query satisfaction; this problem is formalized as the margin finding problem. A follow-up work studies this problem using a popular ranked-choice voting mechanism, namely Instant Run-off Voting (IRV), as the preference aggregation method. From the standpoint of individual fairness, this dissertation studies an exposure concern that top-k set-based algorithms exhibit when the underlying data has long-tail properties, and designs techniques to make those results equitable. For result diversification, the work studies efficiency opportunities in existing diversification algorithms, and designs a generic access primitive called DivGetBatch() to enable them. The contributions of this dissertation lie in (a) formalizing principal problems and studying them analytically, (b) designing scalable algorithms with theoretical guarantees, and (c) an extensive experimental study evaluating the efficacy and scalability of the designed solutions against state-of-the-art solutions using large-scale datasets.
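The Instant Run-off Voting mechanism mentioned above can be sketched as follows: repeatedly eliminate the candidate with the fewest first-place votes until some candidate holds a majority. (This is plain IRV with deterministic tie-breaking; the dissertation's fairness-constrained treatment builds on such an aggregation method.)

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant Run-off Voting over ranked ballots (most-preferred first).
    Each round tallies first-place votes; if no candidate has a strict
    majority, the weakest candidate (ties broken alphabetically) is
    removed from every ballot and the process repeats."""
    ballots = [list(b) for b in ballots]
    while True:
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        winner, votes = tally.most_common(1)[0]
        if votes * 2 > total:
            return winner
        loser = min(tally, key=lambda c: (tally[c], c))
        ballots = [[c for c in b if c != loser] for b in ballots]

# 9 voters: A leads on first preferences, but B wins after C is eliminated.
ballots = (
    [['A', 'B', 'C']] * 4 + [['B', 'C', 'A']] * 3 + [['C', 'B', 'A']] * 2
)
winner = instant_runoff(ballots)
```

Note that IRV can elect a candidate other than the plurality leader, which is why modifying preferences under it (as in the margin finding problem) behaves quite differently than under plurality voting.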