Pseudo-haptics survey: Human-computer interaction in extended reality & teleoperation
Pseudo-haptic techniques are becoming increasingly popular in human-computer interaction. They replicate haptic sensations by leveraging primarily visual feedback rather than mechanical actuators, bridging the gap between the real and virtual worlds by exploiting the brain's ability to integrate visual and haptic information. Pseudo-haptic techniques are cost-effective, portable, and flexible: they eliminate the need to attach haptic devices directly to the body, devices which can be heavy and large and require substantial power and maintenance. Recent research has focused on applying these techniques to extended reality and mid-air interactions. To better understand the potential of pseudo-haptic techniques, the authors developed a novel taxonomy encompassing tactile feedback, kinesthetic feedback, and combined categories in multimodal approaches, ground not covered by previous surveys. This survey highlights multimodal strategies and potential avenues for future studies, particularly regarding the integration of these techniques into extended reality and collaborative virtual environments.
AUGMENTED TOUCH INTERACTIONS WITH FINGER CONTACT SHAPE AND ORIENTATION
Touchscreen interactions are far less expressive than the range of touch that human hands are capable of, even considering technologies such as multi-touch and force-sensitive surfaces. Recently, some touchscreens have added the capability to sense the actual contact area of a finger on the touch surface, which provides additional degrees of freedom: the size and shape of the touch, and the finger's orientation. These additional sensory capabilities hold promise for increasing the expressiveness of touch interactions, but little is known about whether users can successfully use the new degrees of freedom. To provide this baseline information, we carried out a study with a finger-contact-sensing touchscreen and asked participants to produce a range of touches and gestures with different shapes and orientations, with both one and two fingers. We found that people are able to reliably produce two touch shapes and three orientations across a wide range of touches and gestures, a result that was confirmed in another study that used the augmented touches for a screen-lock application.
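The abstract does not say how a finger's orientation is derived from the sensed contact area. One common way to estimate it (a sketch of a standard image-moments technique, not necessarily the study's own method) is to take the principal axis of the contact patch's pixel covariance:

```python
import math

def contact_orientation(pixels):
    """Estimate the orientation (radians, in [-pi/2, pi/2]) of a touch
    contact patch from its pixel coordinates, as the principal axis of
    the patch's covariance matrix."""
    n = len(pixels)
    mx = sum(x for x, _ in pixels) / n
    my = sum(y for _, y in pixels) / n
    # Second central moments of the contact area.
    cxx = sum((x - mx) ** 2 for x, _ in pixels) / n
    cyy = sum((y - my) ** 2 for _, y in pixels) / n
    cxy = sum((x - mx) * (y - my) for x, y in pixels) / n
    # Angle of the major axis of the covariance ellipse
    # [[cxx, cxy], [cxy, cyy]].
    return 0.5 * math.atan2(2 * cxy, cxx - cyy)

# A synthetic elongated patch along the 45-degree diagonal:
patch = [(i, i) for i in range(10)] + [(i, i + 1) for i in range(9)]
angle = math.degrees(contact_orientation(patch))
```

The same moments also yield the ellipse's axis lengths, which is one way a sensor could report the "size and shape" degrees of freedom mentioned above.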
Interactive maps for visually impaired people : design, usability and spatial cognition
Knowing the geography of an urban environment is crucial for visually impaired people. Tactile relief maps are generally used, but they have significant limitations (limited amount of information, reliance on a braille legend, etc.). Recent technological progress allows the development of innovative solutions that overcome these limitations. In this thesis, we present the design of an accessible interactive map through a participatory design process. The map combines a multi-touch screen with a raised-line tactile map overlay and speech output, and provides auditory information when the user taps on map elements. We demonstrated in an experiment that our prototype was more effective and more satisfying for visually impaired users than a simple raised-line map. We also explored and tested different types of advanced non-visual interaction for exploring the map. This thesis demonstrates the importance of interactive tactile maps for visually impaired people and their spatial cognition.
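The map's core loop (tap position on the overlay, look up the map element, speak its name) can be sketched as follows. The element names, coordinates, and hit radius are purely illustrative, not taken from the thesis:

```python
# Hypothetical map elements: name and (x, y) position on the overlay, in mm.
# A real system would load these from the map authoring process.
MAP_ELEMENTS = {
    "bus stop": (30.0, 45.0),
    "town hall": (80.0, 60.0),
    "park entrance": (55.0, 20.0),
}

def element_at(x, y, hit_radius=10.0):
    """Return the name of the closest map element within the hit radius
    of a tap position, or None if the tap hit empty space."""
    best, best_d2 = None, hit_radius ** 2
    for name, (ex, ey) in MAP_ELEMENTS.items():
        d2 = (x - ex) ** 2 + (y - ey) ** 2
        if d2 <= best_d2:
            best, best_d2 = name, d2
    return best

def on_tap(x, y, speak=print):
    # speak() stands in for a text-to-speech call.
    name = element_at(x, y)
    speak(name if name else "no element here")

hit = element_at(32.0, 47.0)   # near the bus stop
miss = element_at(5.0, 5.0)    # empty area
```

A tolerance radius like this matters in practice: visually impaired users locate raised-line features by touch, so taps rarely land on the exact centroid of an element.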
Measuring user experience for virtual reality
In recent years, Virtual Reality (VR) and 3D User Interfaces (3DUI) have seen a drastic increase in popularity, especially in terms of consumer-ready hardware and software. These technologies have the potential to create new experiences that combine the advantages of reality and virtuality. While the technology for input as well as output devices is market-ready, only a few solutions for everyday VR - online shopping, games, or movies - exist, and empirical knowledge about performance and user preferences is lacking. All this makes the development and design of human-centered user interfaces for VR a great challenge. This thesis investigates the evaluation and design of interactive VR experiences. We introduce the Virtual Reality User Experience (VRUX) model based on VR-specific external factors and evaluation metrics such as task performance and user preference. Based on our novel UX evaluation approach, we contribute by exploring the following directions: shopping in virtual environments, as well as text entry and menu control in the context of everyday VR. Along with this, we summarize our findings by design spaces and guidelines for choosing optimal interfaces and controls in VR.
Pictorial Primates: A Search for Iconic Abilities in Great Apes
Pictures and other iconic media are used extensively in psychological experiments on nonhuman primate perception, categorisation, etc. They are also used in everyday interaction with primates, and as pure entertainment. But in what ways do primates understand iconic artefacts? What implications do these different ways have for the conclusions we can draw from those studies on perception and categorisation? What can pictures tell us about primate cognition, and what can primates tell us about pictures? The bulk of the thesis is a critical review of the primatological literature concerned with iconic artefacts. Drawing on work in developmental psychology, cross-cultural research, and semiotics, distinctions between different kinds of pictorial competence are made. The alternatives to viewing pictures as depictions are to view them as the real world is viewed, in which case only realistic pictures evoke recognition, or to view them as a set of disjoint properties, in which case recognition of categorisable motifs fails. It is argued that approaching a picture as a depiction entails a set of expectations of the picture, which affect attention to e.g. part-whole relationships, "filling in," and integration into context. This in turn allows recognition also of non-realistic similarity. The question, then, is whether such expectations can be formed in brains other than an exclusively human one. The different forms of pictorial competence are discussed in relation to research on similarity judgements, abstraction, and categorisation, and are applied to iconic media other than the picture, such as scale models, mirrors, toy replicas, and video. Two lines of original empirical investigation are presented: a study of photographic recognition in picture-naïve gorillas, and recognition of line drawings in picture-experienced and language-competent bonobos. Only the latter study yielded evidence for recognition. The failures in the former study are discussed in terms of experimental shortcomings, and suggestions for future improvements are made.
Bringing the Physical to the Digital
This dissertation describes an exploration of digital tabletop interaction styles, with the ultimate goal of informing the design of a new model for tabletop interaction. In the context of this thesis the term digital tabletop refers to an emerging class of devices that afford many novel ways of interacting with the digital, allowing users to directly touch information presented on large, horizontal displays. Being a relatively young field, many developments are in flux; hardware and software change at a fast pace and many interesting alternative approaches are available at the same time. In our research we are especially interested in systems that are capable of sensing multiple contacts (e.g., fingers) and richer information such as the outline of whole hands or other physical objects. New sensor hardware enables new ways to interact with the digital. When embarking on the research for this thesis, the question of which interaction styles could be appropriate for this new class of devices was an open one, with many equally promising answers.
Many everyday activities rely on our hands' ability to skillfully control and manipulate physical objects. We seek to open up different possibilities to exploit our manual dexterity and provide users with richer interaction possibilities. This could be achieved through the use of physical objects as input mediators or through virtual interfaces that behave in a more realistic fashion.
In order to gain a better understanding of the underlying design space, we chose an approach organized into two phases. First, two different prototypes, each representing a specific interaction style (namely gesture-based interaction and tangible interaction), were implemented. The flexibility of use afforded by the interface and the level of physicality afforded by the interface elements are introduced as criteria for evaluation. Each approach's suitability to support the highly dynamic and often unstructured interactions typical for digital tabletops is analyzed based on these criteria.
In a second stage the lessons from these initial explorations are applied to inform the design of a novel model for digital tabletop interaction. This model is based on the combination of rich multi-touch sensing and a three-dimensional environment enriched by a gaming physics simulation. The proposed approach enables users to interact with the virtual through richer quantities such as collision and friction, enabling a variety of fine-grained interactions using multiple fingers, whole hands, and physical objects.
Our model makes digital tabletop interaction even more "natural". However, because the interaction (the sensed input and the displayed output) is still bound to the surface, there is a fundamental limitation in manipulating objects using the third dimension. To address this issue, we present a technique that allows users to conceptually pick objects off the surface and control their position in 3D. Our goal has been to define a technique that completes our model for on-surface interaction and allows for "as-direct-as-possible" interactions. We also present two hardware prototypes capable of sensing the users' interactions beyond the table's surface. Finally, we present visual feedback mechanisms to give users the sense that they are actually lifting the objects off the surface.
This thesis contributes on various levels. We present several novel prototypes that we built and evaluated. We use these prototypes to systematically explore the design space of digital tabletop interaction. The flexibility of use afforded by the interaction style is introduced as a criterion alongside the user interface elements' physicality. Each approach's suitability to support the highly dynamic and often unstructured interactions typical for digital tabletops is analyzed. We present a new model for tabletop interaction that increases the fidelity of interaction possible in such settings. Finally, we extend this model to enable as-direct-as-possible interactions with 3D data, interacting from above the table's surface.
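The model described above couples multi-touch input to a physics simulation so that objects respond to quantities such as friction rather than to abstract drag events. A minimal, self-contained sketch of that idea (a toy integrator with made-up constants, not the thesis's actual engine) treats a finger contact as pulling a virtual object's velocity toward the finger's velocity through a friction-like force:

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    x: float            # position along one axis, in metres
    vx: float           # velocity, in m/s
    mass: float = 1.0

def step(obj, finger_vx, in_contact, dt=1 / 60, mu=8.0):
    """Advance the object one simulation step. While a finger is in
    contact, a friction-like force proportional to the relative sliding
    velocity drags the object along; otherwise it coasts with drag."""
    if in_contact:
        force = mu * (finger_vx - obj.vx) * obj.mass
        obj.vx += force / obj.mass * dt
    else:
        obj.vx *= 1 - mu * dt / 4   # surface drag once released
    obj.x += obj.vx * dt
    return obj

obj = VirtualObject(x=0.0, vx=0.0)
for _ in range(120):                 # drag for two seconds at 60 Hz
    step(obj, finger_vx=0.5, in_contact=True)
```

After a couple of seconds of contact the object's velocity converges toward the finger's, which is exactly the "drag through friction" behaviour the model relies on instead of snapping objects to the touch point.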
Practical, appropriate, empirically-validated guidelines for designing educational games
There has recently been a great deal of interest in the potential of computer games to function as innovative educational tools. However, there is very little evidence of games fulfilling that potential. Indeed, the process of merging the disparate goals of education and games design appears problematic, and there are currently no practical guidelines for how to do so in a coherent manner. In this paper, we describe the successful, empirically validated teaching methods developed by behavioural psychologists and point out how they are uniquely suited to take advantage of the benefits that games offer to education. We conclude by proposing some practical steps for designing educational games, based on the techniques of Applied Behaviour Analysis. It is intended that this paper can both focus educational games designers on the features of games that are genuinely useful for education, and also introduce a successful form of teaching that this audience may not yet be familiar with.
Screen Space Reconfigured
Screen Space Reconfigured is the first edited volume that critically and theoretically examines the many novel renderings of space brought to us by 21st century screens. Exploring key cases such as post-perspectival space, 3D, vertical framing, haptics, and layering, this volume takes stock of emerging forms of screen space and spatialities as they move from the margins to the centre of contemporary media practice. Recent years have seen a marked scholarly interest in spatial dimensions and conceptions of moving image culture, with some theorists claiming that a 'spatial turn' has taken place in media studies and screen practices alike. Yet this is the first book-length study dedicated to on-screen spatiality as such. Spanning mainstream cinema, experimental film, video art, mobile screens, and stadium entertainment, the volume includes contributions from such acclaimed authors as Giuliana Bruno and Tom Gunning as well as a younger generation of scholars.
Somatic ABC's: A Theoretical Framework for Designing, Developing and Evaluating the Building Blocks of Touch-Based Information Delivery
Situations of sensory overload are steadily becoming more frequent as the ubiquity of technology approaches reality, particularly with the advent of socio-communicative smartphone applications and pervasive, high-speed wireless networks. Although the ease of accessing information has improved our communication effectiveness and efficiency, our visual and auditory modalities, the modalities that today's computerized devices and displays largely engage, have become overloaded, creating possibilities for distraction, delay and high cognitive load, which in turn can lead to a loss of situational awareness, increasing the chances of life-threatening situations such as texting while driving. Surprisingly, alternative modalities for information delivery have seen little exploration. Touch, in particular, is a promising candidate given that the skin is our largest sensory organ, with impressive spatial and temporal acuity. Although some approaches have been proposed for touch-based information delivery, they are not without limitations, including high learning curves, limited applicability and/or limited expression. This is largely due to the lack of a versatile, comprehensive design theory, specifically a theory that addresses the design of touch-based building blocks for expandable, efficient, rich and robust touch languages that are easy to learn and use. Moreover, beyond design, there is a lack of implementation and evaluation theories for such languages. To overcome these limitations, a unified theoretical framework inspired by natural, spoken language is proposed, called Somatic ABC's, for Articulating (designing), Building (developing) and Confirming (evaluating) touch-based languages. To evaluate the usefulness of Somatic ABC's, its design, implementation and evaluation theories were applied to create communication languages for two very different application areas: audio-described movies and motor learning. These applications were chosen as they presented opportunities for complementing communication by offloading information, typically conveyed visually and/or aurally, to the skin. For both studies, it was found that Somatic ABC's aided the design, development and evaluation of rich somatic languages with distinct and natural communication units.
Expanding tangible tabletop interfaces beyond the display
The rising popularity of interactive tabletops and surfaces is spawning research and innovation in a wide variety of areas, including hardware and software technologies, interaction design and novel interaction techniques, all of which seek to promote richer, more powerful and more natural interaction modalities. Among these modalities, combined interaction on and above the surface, both with gestures and with tangible objects, is a very promising area. This dissertation is about expanding tangible tabletop surfaces beyond the display by exploring and developing a system from three different perspectives: hardware, software, and interaction design. The dissertation studies and summarizes the distinctive affordances of conventional 2D tabletop devices, with a vast literature review and some additional use cases developed by the author to support these findings, and subsequently explores the novel, not yet unveiled potential affordances of 3D-augmented tabletops. It overviews the existing hardware solutions for conceiving such a device, and applies the needed hardware modifications to an existing prototype developed and provided to us by Microsoft Research Cambridge. To support the intended interactions, a vision system for 3D interaction is developed that extends conventional 2D tabletop tracking to the tracking of hand gestures and 6DoF markers above the surface alongside conventional on-surface finger and tangible interaction. The dissertation finishes by conceiving a complete software framework for developing applications that can benefit from these novel 3D interaction techniques, and implements and tests several software prototypes as proofs of concept using this framework. With these findings, it concludes by presenting continuous tangible interaction gestures and proposing a novel classification for 3D tangible and tabletop gestures - …
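The on/above-surface tracking described above hinges on splitting tracked 3D points into on-surface contacts and above-surface hover input. A toy sketch of that split, with an assumed height threshold rather than the thesis's actual calibration, might look like this:

```python
def classify_points(points, touch_band_mm=3.0):
    """Split tracked 3D points (x, y, height above the surface in mm)
    into on-surface touches and above-surface hover points. The 3 mm
    band is an illustrative threshold, not a value from the thesis."""
    touches, hovers = [], []
    for x, y, h in points:
        if h <= touch_band_mm:
            touches.append((x, y))       # treat as a 2D touch contact
        else:
            hovers.append((x, y, h))     # keep full 3D position
    return touches, hovers

# Two fingers on the surface and one hand hovering 12 cm above it:
tracked = [(10, 20, 1.0), (40, 25, 2.5), (60, 80, 120.0)]
touches, hovers = classify_points(tracked)
```

Routing the two groups to separate recognizers is one simple way such a system can keep conventional 2D touch handling intact while layering above-surface gestures on top.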