13 research outputs found
Using the Journalistic Metaphor to Design User Interfaces That Explain Sensor Data
Facilitating general access to data from sensor networks (including traffic, hydrology and other domains) increases their utility. In this paper we argue that the journalistic metaphor can be effectively used to automatically generate multimedia presentations that help non-expert users analyze and understand sensor data. The journalistic layout and style are familiar to most users. Furthermore, the journalistic approach of ordering information from most general to most specific helps users obtain a high-level understanding while giving them the freedom to choose the depth of analysis to which they want to go. We describe the general characteristics and architectural requirements for an interactive intelligent user interface for exploring sensor data that uses the journalistic metaphor. We also describe our experience in developing this interface in real-world domains (e.g., hydrology).
Combining Text and Graphics for Interactive Exploration of Behavior Datasets
Modern sensor technologies and simulators applied to large and complex dynamic systems (such as road traffic networks, sets of river channels, etc.) produce large amounts of behavior data that are difficult for users to interpret and analyze. Software tools that generate presentations combining text and graphics can help users understand these data. In this paper we describe the results of our research on automatic multimedia presentation generation (including text, graphics, maps, images, etc.) for interactive exploration of behavior datasets. We designed a novel user interface that combines automatically generated text and graphical resources. We describe the general knowledge-based design of our presentation generation tool. We also present applications that we developed to validate the method, and a comparison with related work.
Individual and Domain Adaptation in Sentence Planning for Dialogue
One of the biggest challenges in the development and deployment of spoken dialogue systems is the design of the spoken language generation module. This challenge arises from the need for the generator to adapt to many features of the dialogue domain, user population, and dialogue context. A promising approach is trainable generation, which uses general-purpose linguistic knowledge that is automatically adapted to the features of interest, such as the application domain, individual user, or user group. In this paper we present and evaluate a trainable sentence planner for providing restaurant information in the MATCH dialogue system. We show that trainable sentence planning can produce complex information presentations whose quality is comparable to the output of a template-based generator tuned to this domain. We also show that our method easily supports adapting the sentence planner to individuals, and that the individualized sentence planners generally perform better than models trained and tested on a population of individuals. Previous work has documented and utilized individual preferences for content selection, but to our knowledge, these results provide the first demonstration of individual preferences for sentence planning operations, affecting the content order, discourse structure and sentence structure of system responses. Finally, we evaluate the contribution of different feature sets, and show that, in our application, n-gram features often do as well as features based on higher-level linguistic representations.
Definition and development of a measurement instrument for compellingness in human computer interaction
Overly compelling displays may cause users to under- or overestimate the validity of the data presented, leading to faulty decision making, distractions and missed information. However, no measure currently exists to determine the level of compellingness of an interface. The goal of this research was to develop an empirically determined measurement instrument for the compellingness of an interface. A literature review and a semantics survey were used to develop a pool of items that relate or contribute to compellingness, and two expert reviews of the list resulted in 28 potential questions. These 28 questions were fielded in a study with a map-based task. Exploratory Factor Analysis and Cronbach's Alpha were used on the results to eliminate questions, identify factor groupings, and quantify how strongly each question loaded on the factor groupings. That analysis resulted in a final compellingness survey with 22 questions across six sub-factors and a final Cronbach's Alpha value of 0.92. Additionally, the survey is organized into three factors of compellingness: human, computer, and interaction, resulting in a two-level survey. An empirically based measure of compellingness can be used in evaluations of human factors issues in domains such as aviation, weather, and game design. Understanding the underlying aspects of compellingness in an interface will enable researchers to understand the interaction between compellingness and other human factors issues such as trust, attention allocation, information quality, performance, error, and workload.
Learning to adapt in dialogue systems : data-driven models for personality recognition and generation.
Dialogue systems are artefacts that converse with human users in order to achieve some task. Each step of the dialogue requires understanding the user's input, deciding on what to reply, and generating an output utterance. Although there are many ways to express any given content, most dialogue systems do not take linguistic variation into account in either the understanding or the generation phase, i.e. the user's linguistic style is typically ignored, and the style conveyed by the system is chosen once for all interactions at development time. We believe that modelling linguistic variation can greatly improve the interaction in dialogue systems, such as intelligent tutoring systems, video games, or information retrieval systems, which all require specific linguistic styles. Previous work has shown that linguistic style affects many aspects of users' perceptions, even when the dialogue is task-oriented. Moreover, users attribute a consistent personality to machines, even when exposed to a limited set of cues; thus dialogue systems manifest personality whether it is designed into the system or not. Over the past few years, psychologists have identified the main dimensions of individual differences in human behaviour: the Big Five personality traits. We hypothesise that the Big Five provide a useful computational framework for modelling important aspects of linguistic variation. This thesis first explores the possibility of recognising the user's personality using data-driven models trained on essays and conversational data. We then test whether it is possible to generate language varying consistently along each personality dimension in the information presentation domain. We present PERSONAGE: a language generator modelling findings from psychological studies to project various personality traits. We use PERSONAGE to compare various generation paradigms: (1) rule-based generation, (2) overgenerate and select, and (3) generation using parameter estimation models, a novel approach that learns to produce recognisable variation along meaningful stylistic dimensions without the computational cost incurred by overgeneration techniques. We also present the first human evaluation of a data-driven generation method that projects multiple stylistic dimensions simultaneously and on a continuous scale.