7 research outputs found
On User Behaviour Adaptation Under Interface Change
Different interfaces allow a user to achieve the same end goal through different action sequences, e.g., command lines vs. drop-down menus. Interface efficiency can be described in terms of a cost incurred by the user in typical tasks, e.g., time taken. Realistic users arrive at evaluations of efficiency, and hence make choices about which interface to use, over time, based on trial-and-error experience. Their choices are also shaped by prior experience, which determines how much learning time is required. These factors have a substantial effect on the adoption of new interfaces. In this paper, we aim to understand how users adapt under interface change, how much time it takes them to learn to interact optimally with an interface, and how this learning could be expedited through intermediate interfaces. We present results from a series of …
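The trial-and-error evaluation process this abstract describes can be sketched as an epsilon-greedy choice between interfaces, where a user tracks the average cost observed for each. The interface names, cost values, and noise model below are illustrative assumptions, not the paper's experimental setup.

```python
import random

def simulate_user(costs, episodes=500, epsilon=0.1, seed=0):
    """Hypothetical user picking the cheaper of several interfaces
    by trial and error (epsilon-greedy over observed average cost)."""
    rng = random.Random(seed)
    estimates = {name: 0.0 for name in costs}   # running mean cost per interface
    counts = {name: 0 for name in costs}
    choices = []
    for _ in range(episodes):
        if rng.random() < epsilon:               # occasional exploration
            choice = rng.choice(list(costs))
        else:                                    # exploit lowest estimated cost
            choice = min(estimates, key=estimates.get)
        cost = costs[choice] + rng.gauss(0, 0.5)  # noisy observed cost
        counts[choice] += 1
        estimates[choice] += (cost - estimates[choice]) / counts[choice]
        choices.append(choice)
    return estimates, choices

# Assumed setup: the "cli" interface is cheaper once learned,
# so the simulated user should converge to it over time.
est, hist = simulate_user({"cli": 2.0, "menu": 5.0})
```

The convergence time of such a loop is one crude proxy for the learning time the abstract refers to.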
Adapting Interaction Environments to Diverse Users through Online Action Set Selection
Interactive interfaces are a common feature of many systems, ranging from field robotics to video games. In most applications, these interfaces must be used by a heterogeneous set of users, whose effectiveness with the same interface varies substantially depending on how it is configured. We address the issue of personalizing such an interface, adapting parameters to present the user with an environment that is optimal with respect to their individual traits, enabling that particular user to achieve their personal optimum. We introduce a new class of problem in interface personalization in which the task of the adaptive interface is to choose the subset of actions of the full interface to present to the user. In formalizing this problem, we model the user as a Markov decision process (MDP), wherein the transition dynamics within a task depend on the type (e.g., skill or dexterity) of the user, and the type parametrizes the MDP. The action set of the MDP is divided into disjoint sets of actions, with different action sets optimal for different types (transition dynamics). The task of the adaptive interface is then to choose the right action set. Given this formalization, we present experiments with simulated and human users in a video game domain to show that (a) action-set selection is an interesting class of problems, (b) adaptively choosing the right action set improves performance over sticking to a fixed action set, and (c) immediately applicable approaches such as bandits can be improved upon.
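As a rough illustration of the "immediately applicable" baseline the abstract mentions, choosing among disjoint action sets online can be framed as a bandit problem. The sketch below uses a UCB1-style rule; the set names and the `user_reward` function are purely hypothetical, not the paper's setup.

```python
import math
import random

def select_action_set(action_sets, user_reward, rounds=300, seed=1):
    """Bandit over disjoint action sets: repeatedly offer one set,
    observe a noisy episode reward, and keep statistics per set."""
    rng = random.Random(seed)
    totals = {s: 0.0 for s in action_sets}
    counts = {s: 0 for s in action_sets}
    for t in range(rounds):
        untried = [s for s in action_sets if counts[s] == 0]
        if untried:
            choice = untried[0]          # try every set once first
        else:
            # UCB1: empirical mean reward plus an exploration bonus.
            choice = max(action_sets, key=lambda s:
                         totals[s] / counts[s]
                         + math.sqrt(2 * math.log(t + 1) / counts[s]))
        r = user_reward(choice) + rng.gauss(0, 0.1)   # noisy episode reward
        totals[choice] += r
        counts[choice] += 1
    return max(counts, key=counts.get)    # most-played set wins

# Assumed user type: a novice who scores best with the simplified set.
best = select_action_set(
    ["full", "simplified"],
    lambda s: 1.0 if s == "simplified" else 0.4)
```

A type-aware method of the kind the paper proposes would exploit the MDP structure rather than treat each set as an opaque arm, which is why plain bandits leave room for improvement.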
Engineering Adaptive Model-Driven User Interfaces
Very large-scale software applications can encompass hundreds of complex user interfaces (UIs). Such applications are commonly sold as feature-bloated off-the-shelf products to people with varying needs in the required features and layout preferences. Although many UI adaptation approaches have been proposed, several gaps and limitations, including extensibility and integration into legacy systems, still need to be addressed in state-of-the-art adaptive UI development systems. This paper presents Role-Based UI Simplification (RBUIS) as a mechanism for increasing usability through adaptive behaviour, providing end-users with a minimal feature set and an optimal layout based on the context-of-use. RBUIS uses an interpreted runtime model-driven approach based on the Cedar Architecture and is supported by the integrated development environment (IDE) Cedar Studio. RBUIS was evaluated by integrating it into OFBiz, an open-source ERP system. The integration method was assessed by establishing and applying technical metrics. Afterwards, a usability study was carried out to evaluate whether UIs simplified with RBUIS improve on their initial counterparts. This study leveraged questionnaires, task completion times, output quality, and eye-tracking. The results showed that UIs simplified with RBUIS significantly improve end-user efficiency, effectiveness, and perceived usability.
Bayesian Policy Reuse
A long-lived autonomous agent should be able to respond online to novel
instances of tasks from a familiar domain. Acting online requires 'fast'
responses, in terms of rapid convergence, especially when the task instance has
a short duration, such as in applications involving interactions with humans.
These requirements can be problematic for many established methods for learning
to act. In domains where the agent knows that the task instance is drawn from a
family of related tasks, albeit without access to the label of any given
instance, it can choose to act through a process of policy reuse from a
library, rather than policy learning from scratch. In policy reuse, the agent
has prior knowledge of the class of tasks in the form of a library of policies
that were learnt from sample task instances during an offline training phase.
We formalise the problem of policy reuse, and present an algorithm for
efficiently responding to a novel task instance by reusing a policy from the
library of existing policies, where the choice is based on observed 'signals'
which correlate to policy performance. We achieve this by posing the problem as
a Bayesian choice problem with a corresponding notion of an optimal response,
but the computation of that response is in many cases intractable. Therefore,
to reduce the computation cost of the posterior, we follow a Bayesian
optimisation approach and define a set of policy selection functions, which
balance exploration in the policy library against exploitation of previously
tried policies, together with a model of expected performance of the policy
library on their corresponding task instances. We validate our method in
several simulated domains of interactive, short-duration episodic tasks,
showing rapid convergence in unknown task variations.
Comment: 32 pages, submitted to the Machine Learning Journal
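Under simplifying assumptions (a discrete set of task types, one library policy per type, and Gaussian performance signals), the policy-reuse loop described above could be sketched as follows. All models and numbers are illustrative, and the greedy selection rule below is only one simple instance of the paper's policy selection functions.

```python
import math
import random

def gaussian_pdf(x, mu, sigma=0.5):
    """Density of a Gaussian observation model P(signal | policy, type)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bayesian_policy_reuse(perf_model, true_type, episodes=20, seed=2):
    """perf_model[t][p]: expected performance signal when library
    policy p runs on a task of (hidden) type t."""
    rng = random.Random(seed)
    types = list(perf_model)                       # type labels double as policy labels
    belief = {t: 1.0 / len(types) for t in types}  # uniform prior over task types
    for _ in range(episodes):
        # Greedy selection: highest expected performance under the posterior.
        policy = max(types, key=lambda p: sum(
            belief[t] * perf_model[t][p] for t in types))
        # Observe a noisy performance signal from the true (unlabelled) task.
        signal = perf_model[true_type][policy] + rng.gauss(0, 0.5)
        # Bayesian update of the belief over task types.
        for t in types:
            belief[t] *= gaussian_pdf(signal, perf_model[t][policy])
        z = sum(belief.values())
        belief = {t: b / z for t, b in belief.items()}
    return belief

# Two assumed task types; each policy is best on its own type.
model = {"A": {"A": 1.0, "B": 0.2},
         "B": {"A": 0.2, "B": 1.0}}
posterior = bayesian_policy_reuse(model, true_type="B")
```

The belief concentrating on the true type over a few episodes is the "rapid convergence" behaviour the abstract highlights; the paper's selection functions additionally trade off exploration of the library against this kind of exploitation.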