Mapping Tasks to Interactions for Graph Exploration and Graph Editing on Interactive Surfaces
Graph exploration and editing are still mostly considered independently, and existing systems are not designed for today's interactive surfaces such as smartphones, tablets, or tabletops. When developing a system for these modern devices that supports both graph exploration and graph editing, it is necessary to 1) identify which basic tasks need to be supported, 2) determine which interactions can be used, and 3) decide how to map these tasks to interactions. This technical report provides a list of basic interaction tasks for graph exploration and editing as the result of an extensive system review. Moreover, different interaction modalities of interactive surfaces are reviewed with respect to their interaction vocabulary, and further degrees of freedom that can be used to make interactions distinguishable are discussed. Beyond the scope of graph exploration and editing, we provide a generally applicable approach for finding and evaluating a mapping from tasks to interactions. Thus, this work acts as a guideline for developing a system for graph exploration and editing that is specifically designed for interactive surfaces.
Comment: 21 pages, minor corrections (typos etc.)
Establishing the design knowledge for emerging interaction platforms
While awaiting a variety of innovative interactive products and services to appear in the market in the near future, such as interactive tabletops, interactive TVs, public multi-touch walls, and other embedded appliances, this paper calls for preparation for the arrival of such interactive platforms based on their interactivity. We advocate studying, understanding, and establishing the foundation for the interaction characteristics, affordances, and design implications of these platforms, which we know will soon emerge and penetrate our everyday lives. We review some of the archetypal interaction platform categories of the future and highlight the current status of the design knowledge-base accumulated to date and its current rate of growth for each of these. We use example designs to illustrate design issues and considerations, based on the authors' 12-year experience in pioneering novel applications in various forms and styles.
Multi-Moji: Combining Thermal, Vibrotactile and Visual Stimuli to Expand the Affective Range of Feedback
This paper explores the combination of multiple concurrent modalities for conveying emotional information in HCI: temperature, vibration and abstract visual displays. Each modality has been studied individually, but can only convey a limited range of emotions within two-dimensional valence-arousal space. This paper is the first to systematically combine multiple modalities to expand the available affective range. Three studies were conducted: Study 1 measured the emotionality of vibrotactile feedback by itself; Study 2 measured the perceived emotional content of three bimodal combinations: vibrotactile + thermal, vibrotactile + visual, and visual + thermal. Study 3 then combined all three modalities. Results show that combining modalities increases the available range of emotional states, particularly in the problematic top-right and bottom-left quadrants of the dimensional model. We also provide a novel lookup resource for designers to identify stimuli to convey a range of emotions.
Improving the Scalability of DPWS-Based Networked Infrastructures
The Devices Profile for Web Services (DPWS) specification enables seamless discovery, configuration, and interoperability of networked devices in various settings, ranging from home automation and multimedia to manufacturing equipment and data centers. Unfortunately, the sheer simplicity of the event notification mechanism that makes DPWS fit for resource-constrained devices also makes it hard to scale to large infrastructures with more stringent dependability requirements, ironically, where self-configuration would be most useful. In this report, we address this challenge with a proposal to integrate gossip-based dissemination in DPWS, thus maintaining compatibility with the original assumptions of the specification and avoiding a centralized configuration server or custom black-box middleware components. In detail, we show how our approach provides an evolutionary and non-intrusive solution to the scalability limitations of DPWS and experimentally evaluate it with an implementation based on the Web Services for Devices (WS4D) Java Multi Edition DPWS Stack (JMEDS).
Comment: 28 pages, Technical Report
Factors influencing visual attention switch in multi-display user interfaces: a survey
Multi-display User Interfaces (MDUIs) enable people to take advantage of the different characteristics of different display categories. For example, combining mobile and large displays within the same system enables users to interact with user interface elements locally while simultaneously having a large display space to show data. Although there is a large potential gain in performance and comfort, there is at least one main drawback that can override the benefits of MDUIs: the visual and physical separation between displays requires that users perform visual attention switches between displays. In this paper, we present a survey and analysis of existing data and classifications to identify factors that can affect visual attention switches in MDUIs. Our analysis and taxonomy bring attention to the often ignored implications of visual attention switches and collect existing evidence to facilitate the research and implementation of effective MDUIs.
LeviSense: a platform for the multisensory integration in levitating food and insights into its effect on flavour perception
Eating is one of the most multisensory experiences in everyday life. All five of our senses (i.e., taste, smell, vision, hearing, and touch) are involved, even if we are not aware of it. However, while multisensory integration has been well studied in psychology, there is no single platform for systematically testing the effects of different stimuli. This lack of a platform leaves unresolved challenges for the design of taste-based immersive experiences. Here, we present LeviSense: the first system designed for multisensory integration in gustatory experiences based on levitated food. Our system enables the systematic exploration of different sensory effects on eating experiences. It also opens up new opportunities for other professionals (e.g., molecular gastronomy chefs) looking for innovative taste-delivery platforms. We describe the design process behind LeviSense and conduct two experiments to test a subset of the crossmodal combinations (i.e., taste and vision, taste and smell). Our results show how different lighting and smell conditions affect perceived taste intensity, pleasantness, and satisfaction. We discuss how LeviSense creates new technical, creative, and expressive possibilities in a series of emerging design spaces within Human-Food Interaction.