9 research outputs found

    On the capacity of information processing systems

    Full text link
    We propose and analyze a family of information processing systems, where a finite set of experts or servers are employed to extract information about a stream of incoming jobs. Each job is associated with a hidden label drawn from some prior distribution. An inspection by an expert produces a noisy outcome that depends both on the job's hidden label and the type of the expert, and occupies the expert for a finite time duration. A decision maker's task is to dynamically assign inspections so that the resulting outcomes can be used to accurately recover the labels of all jobs, while keeping the system stable. Among our chief motivations are applications in crowd-sourcing, diagnostics, and experimental design, where one wishes to efficiently learn the nature of a large number of items using a finite pool of computational resources or human agents. We focus on the capacity of such an information processing system. Given a level of accuracy guarantee, we ask how many experts are needed in order to stabilize the system, and through what inspection architecture. Our main result provides an adaptive inspection policy that is asymptotically optimal in the following sense: the ratio between the required number of experts under our policy and the theoretical optimum converges to one as the probability of error in label recovery tends to zero.
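    The label-recovery step in this abstract can be illustrated with a minimal sketch: a Bayesian posterior update over a job's hidden label from a sequence of noisy inspection outcomes. All names, the binary-label setup, and the confusion matrix below are illustrative assumptions, not the paper's actual model parameters.

    ```python
    import numpy as np

    def posterior_update(prior, outcomes, confusion):
        """Posterior over a job's hidden label after noisy inspections,
        assuming outcomes are conditionally independent given the label.

        prior     : (K,) prior distribution over K hidden labels
        outcomes  : list of observed outcome indices
        confusion : (K, M) matrix, confusion[k, m] = P(outcome m | label k)
        """
        log_post = np.log(prior)
        for m in outcomes:
            log_post += np.log(confusion[:, m])
        log_post -= log_post.max()          # subtract max for numerical stability
        post = np.exp(log_post)
        return post / post.sum()

    # Hypothetical example: binary label, one expert type that reports
    # the true label with probability 0.8.
    prior = np.array([0.5, 0.5])
    confusion = np.array([[0.8, 0.2],
                          [0.2, 0.8]])
    post = posterior_update(prior, [0, 0, 1], confusion)  # two votes for 0, one for 1
    ```

    Under these assumed numbers the posterior concentrates on label 0, and the decision maker would keep assigning inspections until the posterior error probability drops below the target accuracy level.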

    Adaptive Matching for Expert Systems with Uncertain Task Types

    Full text link
    A matching in a two-sided market often incurs an externality: a matched resource may become unavailable to the other side of the market, at least for a while. This is especially an issue in online platforms involving human experts, as the expert resources are often scarce. The efficient utilization of experts in these platforms is made challenging by the fact that the information available about the parties involved is usually limited. To address this challenge, we develop a model of a task-expert matching system where a task is matched to an expert using not only the prior information about the task but also the feedback obtained from past matches. In our model the tasks arrive online while the experts are fixed and constrained by a finite service capacity. For this model, we characterize the maximum task resolution throughput a platform can achieve. We show that the natural greedy approach, where each expert is assigned the task most suitable to her skill, is suboptimal, as it does not internalize the above externality. We develop a throughput-optimal backpressure algorithm which does so by accounting for the 'congestion' among different task types. Finally, we validate our model and confirm our theoretical findings with data-driven simulations via logs of Math.StackExchange, a StackOverflow forum dedicated to mathematics.

    Comment: Part of this work was presented at the 2017 Allerton Conference; 18 pages.
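    The contrast between the greedy rule and the backpressure rule can be sketched in a few lines. This is a simplified illustration (task-type names, queue lengths, and rates are hypothetical, and the paper's actual algorithm additionally handles uncertainty about each task's type): the expert weighs each task type by queue length times her resolution rate, rather than by rate alone.

    ```python
    def backpressure_assign(queues, rates):
        """For one free expert, pick the task type with the largest
        backpressure weight: queue length x this expert's resolution rate.

        queues : dict mapping task type -> number of waiting tasks
        rates  : dict mapping task type -> this expert's resolution rate
        """
        best = max(queues, key=lambda t: queues[t] * rates[t])
        return best if queues[best] > 0 else None  # idle if nothing is waiting

    # Hypothetical expert who is better at topology than algebra.
    queues = {"algebra": 5, "topology": 1}
    rates  = {"algebra": 0.3, "topology": 0.9}

    # Greedy would pick "topology" (highest rate); backpressure picks
    # "algebra" because its congested queue dominates: 5*0.3 > 1*0.9.
    choice = backpressure_assign(queues, rates)
    ```

    The design point mirrors the abstract: by internalizing congestion, the backpressure rule sometimes routes an expert away from her best-matched task type, which is exactly what makes it throughput-optimal while greedy is not.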
