Optimality in Goal-Dependent Analysis of Sharing
We address the problems of correctness, optimality, and precision in the
static analysis of logic programs, using the theory of abstract
interpretation. We propose a framework with a denotational, goal-dependent semantics equipped with
two unification operators for forward unification (calling a procedure) and
backward unification (returning from a procedure). The latter is implemented
through a matching operation. Our proposal clarifies and unifies many different
frameworks and ideas on static analysis of logic programming in a single,
formal setting. On the abstract side, we focus on the domain Sharing by Jacobs
and Langen and provide the best correct approximation of all the primitive
semantic operators, namely, projection, renaming, forward and backward
unification. We show that the abstract unification operators are strictly more
precise than those in the literature defined over the same abstract domain. In
some cases, our operators are more precise than those developed for more
complex domains involving linearity and freeness.
To appear in Theory and Practice of Logic Programming (TPLP).
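For context on the operations involved: in the Sharing domain, an abstract substitution is a set of "sharing groups", each a set of program variables that may be bound to terms with a common variable. The sketch below (Python, with illustrative names rel, star, amgu) implements the classic Jacobs-Langen-style abstract unification for a binding x = t; it is not the strictly more precise optimal operators the paper contributes, only a baseline for comparison.

```python
from itertools import combinations

def rel(vs, sharing):
    """Sharing groups containing at least one variable from vs."""
    return {g for g in sharing if g & vs}

def star(groups):
    """Closure of a set of sharing groups under union."""
    closed = set(groups)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(closed), 2):
            if a | b not in closed:
                closed.add(a | b)
                changed = True
    return closed

def amgu(x, t_vars, sharing):
    """Classic abstract unification for the binding x = t over Sharing.

    `sharing` is a set of frozensets of variable names; `t_vars` are the
    variables occurring in the term t."""
    rx, rt = rel({x}, sharing), rel(t_vars, sharing)
    untouched = sharing - rx - rt
    merged = {a | b for a in star(rx) for b in star(rt)}
    return untouched | merged

# e.g. the binding x = f(y) under the abstraction {{x}, {y}, {x, z}}:
S = {frozenset({'x'}), frozenset({'y'}), frozenset({'x', 'z'})}
print(amgu('x', {'y'}, S))   # {{x, y}, {x, y, z}}
```

The star closure is a typical source of over-approximation in this classic operator, which is roughly where one would expect more precise operators, like those the paper proves optimal, to gain precision.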
Meta Inverse Reinforcement Learning via Maximum Reward Sharing for Human Motion Analysis
This work addresses the inverse reinforcement learning (IRL) problem in which
only a small number of demonstrations are available from a demonstrator for
each high-dimensional task, too few to estimate an accurate reward function.
Observing that each demonstrator has an inherent reward for each state and
that task-specific behaviors mainly depend on a small number of key states, we
propose a meta IRL algorithm. The algorithm first models the reward function
for each task as a distribution conditioned on a baseline reward function that
is shared by all tasks and depends only on the demonstrator; it then finds the
most likely reward function in the distribution that explains the
task-specific behaviors.
We test the method in a simulated environment on path planning tasks with
limited demonstrations, and show that the accuracy of the learned reward
function is significantly improved. We also apply the method to analyze the
motion of a patient undergoing rehabilitation.
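As a rough illustration of the shared-baseline idea only (the paper's exact model is not reproduced here), one can picture each task's reward as the shared baseline plus a sparse deviation concentrated on a few key states, estimated MAP-style with a MaxEnt-IRL-flavored gradient and a Laplace (L1) prior. All names and hyperparameters below are hypothetical, and expected_counts_fn stands in for a routine that computes expected feature counts under the soft-optimal policy for a given reward.

```python
import numpy as np

def map_task_reward(base_reward, demo_counts, expected_counts_fn,
                    lam=1.0, lr=0.1, iters=200):
    """MAP estimate of a per-task reward as base_reward + sparse deviation.

    For a linear reward, the gradient of the demonstration log-likelihood is
    (empirical - expected) feature counts; the proximal soft-threshold step
    enforces the L1 (Laplace) prior, keeping most states at the baseline.
    """
    dev = np.zeros_like(base_reward, dtype=float)
    for _ in range(iters):
        grad = demo_counts - expected_counts_fn(base_reward + dev)
        dev += lr * grad
        # soft-threshold: proximal step for the L1 prior on the deviation
        dev = np.sign(dev) * np.maximum(np.abs(dev) - lr * lam, 0.0)
    return base_reward + dev

# Toy usage: 5 states, a dummy model where expected visitation is a softmax
# of the reward (a stand-in for soft value iteration on the task's MDP).
softmax = lambda r: np.exp(r - r.max()) / np.exp(r - r.max()).sum()
theta = map_task_reward(np.zeros(5), np.array([0.6, 0.1, 0.1, 0.1, 0.1]),
                        softmax, lam=0.05)
print(theta)
```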
Towards Optimality in Parallel Scheduling
To keep pace with Moore's law, chip designers have focused on increasing the
number of cores per chip rather than single-core performance. As a result,
modern jobs are often designed to run on any number of cores. However, to effectively
leverage these multi-core chips, one must address the question of how many
cores to assign to each job. Given that jobs receive sublinear speedups from
additional cores, there is an obvious tradeoff: allocating more cores to an
individual job reduces that job's runtime but decreases the efficiency
of the overall system. We ask how the system should schedule jobs across cores
so as to minimize the mean response time over a stream of incoming jobs.
To answer this question, we develop an analytical model of jobs running on a
multi-core machine. We prove that EQUI, a policy which continuously divides
cores evenly across jobs, is optimal when all jobs follow a single speedup
curve and have exponentially distributed sizes. EQUI requires jobs to change
their level of parallelization while they run. Since this is not possible for
all workloads, we consider a class of "fixed-width" policies, which choose a
single level of parallelization, k, to use for all jobs. We prove that,
surprisingly, using the optimal fixed level of parallelization, k*, achieves
EQUI's performance without requiring jobs to change their level of
parallelization. We also show how to analytically derive the optimal k*
as a function of the system load, the speedup curve, and the job size
distribution.
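For intuition only (the paper's results are analytical; the speedup curve, parameter values, and function names below are assumptions), a toy event-driven simulation can compare EQUI against fixed-width policies under a power-law speedup s(k) = k^p with Poisson arrivals and exponential job sizes:

```python
import random

def speedup(k, p=0.5):
    """Assumed sublinear speedup curve s(k) = k**p."""
    return k ** p

def simulate(width, n_cores=16, lam=2.0, n_jobs=50_000, p=0.5, seed=1):
    """Mean response time under Poisson(lam) arrivals and Exp(1) job sizes.

    width=None -> EQUI: all jobs in the system share the cores evenly
                  (fractional core allocations allowed in this toy model).
    width=k    -> fixed-width: n_cores // k slots of k cores, FCFS queue.
    """
    if width is not None:
        assert 1 <= width <= n_cores
    rng = random.Random(seed)
    t, done, arrived, total_resp = 0.0, 0, 0, 0.0
    next_arr = rng.expovariate(lam)
    jobs = []  # [remaining_size, arrival_time], kept in FCFS order

    while done < n_jobs:
        if width is None:
            in_service = jobs
            rate = speedup(n_cores / len(jobs), p) if jobs else 0.0
        else:
            in_service = jobs[:n_cores // width]
            rate = speedup(width, p)
        # advance to the next event: a completion or an arrival
        t_fin = min((j[0] / rate for j in in_service), default=float('inf'))
        dt = t_fin if arrived >= n_jobs else min(t_fin, next_arr - t)
        for j in in_service:
            j[0] -= rate * dt
        t += dt
        for j in [j for j in in_service if j[0] <= 1e-12]:
            jobs.remove(j)
            total_resp += t - j[1]
            done += 1
        if arrived < n_jobs and t >= next_arr - 1e-12:
            jobs.append([rng.expovariate(1.0), t])
            arrived += 1
            next_arr = t + rng.expovariate(lam)
    return total_resp / n_jobs

print("EQUI       :", simulate(None))
print("fixed k = 4:", simulate(4))
```

In this toy model, sweeping simulate(k) over k = 1..n_cores numerically locates a fixed width whose mean response time is close to EQUI's, mirroring the fixed-width result described above.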
In the case where jobs may follow different speedup curves, finding a good
scheduling policy is even more challenging. We find that policies such as
EQUI, which performed well in the case of a single speedup curve, now perform
poorly. We propose a very simple policy, GREEDY*, which performs near-optimally
when compared to the numerically derived optimal policy.