OPEB: Open Physical Environment Benchmark for Artificial Intelligence
Artificial Intelligence methods for solving continuous-control tasks have made
significant progress in recent years. However, these algorithms still have important
limitations and need significant improvement before they can be used in industry and
real-world applications, so the area remains under active
research. To involve a large number of research groups, standard
benchmarks are needed to evaluate and compare proposed algorithms. In this
paper, we propose a physical environment benchmark framework to facilitate
collaborative research in this area by enabling different research groups to
integrate their designed benchmarks in a unified cloud-based repository and
also share actual implementations of those benchmarks via the cloud. We demonstrate
the proposed framework on a real implementation of the classical
mountain-car example and present the results obtained using a Reinforcement
Learning algorithm.
Comment: Accepted in 3rd IEEE International Forum on Research and Technologies
for Society and Industry 201
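The mountain-car task used in the demonstration has simple, well-known closed-form dynamics (the classical formulation from Sutton and Barto). A minimal pure-Python sketch of those dynamics with a simple energy-pumping controller follows; it is an illustration of the benchmark task only, not the paper's actual physical setup or RL algorithm:

```python
import math

def step(pos, vel, action):
    """One step of classic mountain-car dynamics; action is -1, 0, or +1."""
    vel += 0.001 * action - 0.0025 * math.cos(3 * pos)
    vel = max(-0.07, min(0.07, vel))       # velocity bound
    pos += vel
    pos = max(-1.2, min(0.6, pos))         # position bound
    if pos == -1.2:                        # inelastic collision with left wall
        vel = 0.0
    return pos, vel

def energy_pump_policy(vel):
    # Push in the direction of current motion to build up energy.
    return 1 if vel >= 0 else -1

pos, vel = -0.5, 0.0                       # standard start state in the valley
for t in range(1000):
    if pos >= 0.5:                         # goal flag on the right hill
        break
    pos, vel = step(pos, vel, energy_pump_policy(vel))
```

The underpowered car cannot climb the right hill directly; the hand-coded policy above rocks back and forth to accumulate energy, which is exactly the behavior an RL agent must discover on this benchmark.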
Efficient Candidate Screening Under Multiple Tests and Implications for Fairness
When recruiting job candidates, employers rarely observe their underlying
skill level directly. Instead, they must administer a series of interviews
and/or collate other noisy signals in order to estimate the worker's skill.
The traditional economics literature studies screening models in which the employer
observes worker skill through a single noisy signal. In this paper, we extend this
theoretical analysis to a multi-test setting, considering both Bernoulli and
Gaussian models. We analyze the optimal employer policy both when the employer
sets a fixed number of tests per candidate and when the employer can set a
dynamic policy, assigning further tests adaptively based on results from the
previous tests. To start, we characterize the optimal policy when employees
constitute a single group, demonstrating some interesting trade-offs.
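A dynamic policy of the kind described above can be sketched as a sequential stopping rule under the Bernoulli model: keep testing until the posterior belief crosses a hire or reject threshold. The thresholds, pass rates, and test cap below are hypothetical illustration values, not taken from the paper:

```python
import random

def adaptive_screen(pass_prob, p_high=0.8, p_low=0.4, prior=0.5,
                    hire_at=0.95, reject_at=0.05, max_tests=20, rng=None):
    """Administer Bernoulli tests one at a time, updating the posterior
    that the candidate is high-skill, and stop once it crosses a
    hire/reject threshold. `pass_prob` is the candidate's true pass rate."""
    rng = rng or random.Random(0)          # seeded for reproducibility
    post = prior
    for n in range(1, max_tests + 1):
        passed = rng.random() < pass_prob
        lh = p_high if passed else 1 - p_high   # likelihood under high skill
        ll = p_low if passed else 1 - p_low     # likelihood under low skill
        post = post * lh / (post * lh + (1 - post) * ll)
        if post >= hire_at:
            return "hire", n
        if post <= reject_at:
            return "reject", n
    return "undecided", max_tests

decision, n_tests_used = adaptive_screen(pass_prob=0.8)
```

The trade-off the abstract hints at is visible here: tighter thresholds give more reliable decisions but require more tests per candidate, while loose thresholds decide quickly but misclassify more often.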
Subsequently, we address the multi-group setting, demonstrating that when the
noise levels vary across groups, a fundamental impossibility emerges whereby we
cannot administer the same number of tests, subject candidates to the same
decision rule, and yet realize the same outcomes in both groups
- …
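In the Bernoulli model the abstract describes, the employer's belief after a batch of tests follows from Bayes' rule. The sketch below (all pass-rate parameters are hypothetical) shows why differing noise levels across groups make a shared decision rule problematic: the same score is less informative when tests are noisier.

```python
from math import comb

def posterior_high_skill(passes, n_tests, p_high, p_low, prior_high=0.5):
    """Posterior probability that a candidate is high-skill after `passes`
    successes in `n_tests` conditionally independent Bernoulli tests.
    p_high / p_low are the per-test pass rates of the two skill types."""
    like_high = comb(n_tests, passes) * p_high**passes * (1 - p_high)**(n_tests - passes)
    like_low  = comb(n_tests, passes) * p_low**passes  * (1 - p_low)**(n_tests - passes)
    return prior_high * like_high / (prior_high * like_high + (1 - prior_high) * like_low)

# The same score (4 passes out of 5) yields a weaker posterior when the
# tests are noisier (p_high closer to p_low), so a fixed decision rule
# applied across groups with different noise cannot equalize outcomes.
sharp_tests = posterior_high_skill(4, 5, p_high=0.9, p_low=0.4)
noisy_tests = posterior_high_skill(4, 5, p_high=0.7, p_low=0.5)
```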