The Walking Talking Stick: Understanding Automated Note-Taking in Walking Meetings
While walking meetings offer a healthy alternative to sit-down meetings, they
also pose practical challenges. Taking notes is difficult while walking, which
limits the potential of walking meetings. To address this, we designed the
Walking Talking Stick -- a tangible device with integrated voice recording,
transcription, and a physical highlighting button to facilitate note-taking
during walking meetings. We investigated our system in a three-condition
between-subjects user study with thirty pairs of participants (N = 60) who
conducted 15-minute outdoor walking meetings. Participants either used clip-on
microphones, the prototype without the button, or the prototype with the
highlighting button. We found that the tangible device increased task focus,
and the physical highlighting button facilitated turn-taking and resulted in
more useful notes. Our work demonstrates how interactive artifacts can
incentivize users to hold meetings in motion and enhance conversation dynamics.
We contribute insights for future systems which support conducting work tasks
in mobile environments.
Comment: In CHI 202
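The abstract describes a physical highlighting button that marks moments in a recorded, transcribed conversation. The paper does not publish its implementation; the following is only an illustrative sketch of how button presses could be mapped back to transcript segments (all names, the 10-second window, and the data layout are assumptions, not from the study):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Segment:
    start: float  # seconds from meeting start
    end: float
    text: str

@dataclass
class MeetingNotes:
    segments: List[Segment] = field(default_factory=list)
    highlights: List[float] = field(default_factory=list)  # button-press times

    def press_highlight(self, t: float) -> None:
        """Record a highlight-button press at time t."""
        self.highlights.append(t)

    def highlighted_notes(self, window: float = 10.0) -> List[str]:
        # Keep segments that began shortly before a press, since a press
        # would typically follow the remark it is meant to mark.
        return [s.text for s in self.segments
                if any(t - window <= s.start <= t for t in self.highlights)]
```

Under this sketch, a press at 20 s would pull out a segment that started at 12 s while ignoring earlier and later talk.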
Designing for self-transcendent experiences in virtual reality
This thesis contributes to Psychology and Human-Computer Interaction (HCI) research with a focus on the design of immersive experiences that support self-transcendence. Self-transcendence is defined as a decrease in the sense of self and an increase in unity with the world. It can change what individuals know and value and their perspective on the world and life, fostering personal growth. Consequently, self-transcendence is gaining attention in Psychology, Philosophy, and Neuroscience. However, we are still far from understanding the complex phenomenological and neurocognitive aspects of self-transcendence, as well as its implications for individual growth and psychological well-being. In reviewing the methods for studying self-transcendence, we found that differing conceptual models determine different ways of understanding and studying it. Understanding self-transcendence is made especially challenging by its ineffable qualities and the extraordinary conditions in which it takes place. For that reason, researchers have begun to look at technological solutions both for eliciting self-transcendence, to better study it under controlled and replicable conditions, and for giving people greater access to the experience. We reviewed immersive, interactive technologies that aim to support positive experiences such as self-transcendence and extracted a set of design considerations that were prevalent across experiences. We then explored two different focuses of self-transcendence: awe and lucid dreaming. First, we took an existing VR experience designed specifically to support the self-transcendent experience of awe and examined how the mindset and physical setting surrounding that VR experience might better support the experience and accommodation of awe. Second, we delved into lucid dreaming to better understand the aspects that could inform the design of an immersive experience that supports self-transcendence.
We put those design ideas into practice by developing a neurofeedback system that aims to support lucid dreaming practices in an immersive experience. Through these review papers and design explorations, we contribute to the understanding of how one might design and evaluate immersive technological experiences that support varieties of self-transcendence. We hope to inspire more work in this area, which holds promise for better understanding human nature and living our best lives
Practical, appropriate, empirically-validated guidelines for designing educational games
There has recently been a great deal of interest in the
potential of computer games to function as innovative
educational tools. However, there is very little evidence of
games fulfilling that potential. Indeed, the process of
merging the disparate goals of education and games design
appears problematic, and there are currently no practical
guidelines for how to do so in a coherent manner. In this
paper, we describe the successful, empirically validated
teaching methods developed by behavioural psychologists
and point out how they are uniquely suited to take
advantage of the benefits that games offer to education. We
conclude by proposing some practical steps for designing
educational games, based on the techniques of Applied
Behaviour Analysis. It is intended that this paper can both
focus educational games designers on the features of games
that are genuinely useful for education, and also introduce a
successful form of teaching that this audience may not yet
be familiar with
Steps to an Ecology of Networked Knowledge and Innovation: Enabling new forms of collaboration among sciences, engineering, arts, and design
SEAD network White Papers Report. The final White Papers (posted at http://seadnetwork.wordpress.com/white-paper-abstracts/final-white-papers/) represent a spectrum of interests in advocating for transdisciplinarity among arts, sciences, and technologies. All authors submitted plans of action and identified stakeholders they perceived as instrumental in carrying out such plans. The individual efforts led to an international scope. One important characteristic of this collection is that the papers do not represent a collective aim toward an explicit initiative; rather, they offer a broad array of views on barriers faced and prospective solutions. In summary, the collected White Papers and associated Meta-analyses began as an effort to take the pulse of the SEAD community as broadly as possible. The ideas they generated provide a fruitful basis for gauging trends and challenges in facilitating the growth of the network and implementing future SEAD initiatives. National Science Foundation Grant No. 1142510. Additional funding was provided by the ATEC program at the University of Texas at Dallas and the Institute for Applied Creativity at Texas A&M University
Harding Magazine Spring 1999 (vol. 7, no. 2)
Publication distributed to alumni and friends of the university
Improving command selection in smart environments by exploiting spatial constancy
With a steadily increasing number of digital devices, our environments are becoming smarter: we can now use our tablets to control our TV, access our recipe database while cooking, and remotely turn lights on and off. Currently, this Human-Environment Interaction (HEI) is limited to in-place interfaces, where people have to walk up to a mounted set of switches and buttons, and navigation-based interaction, where people have to navigate on-screen menus, for example on a smartphone, tablet, or TV screen. Unfortunately, there are numerous scenarios in which neither of these two interaction paradigms provides fast and convenient access to digital artifacts and system commands. People, for example, might not want to touch an interaction device because their hands are dirty from cooking: they want device-free interaction. Or people might not want to look at a screen because it would interrupt their current task: they want system-feedback-free interaction. Currently, there is no interaction paradigm for smart environments that supports these kinds of interactions.
In my dissertation, I introduce Room-based Interaction to solve this problem of HEI. With room-based interaction, people associate digital artifacts and system commands with real-world objects in the environment and point toward these real-world proxy objects to select the associated digital artifact. The design of room-based interaction is informed by a theoretical analysis of navigation- and pointing-based selection techniques, in which I investigated the cognitive systems involved in executing a selection. An evaluation of room-based interaction in three user studies and a comparison with existing HEI techniques revealed that room-based interaction solves many shortcomings of existing HEI techniques: the use of real-world proxy objects makes it easy for people to learn the interaction technique and to perform accurate pointing gestures, and it allows for system-feedback-free interaction; the use of the environment as a flat input space makes selections fast; and the use of mid-air full-arm pointing gestures allows for device-free interaction and increases awareness of others' interactions with the environment.
Overall, I present an alternative selection paradigm for smart environments that is superior to existing techniques in many common HEI scenarios. This new paradigm can make HEI more user-friendly, broaden the use cases of smart environments, and increase their acceptance among everyday users
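The dissertation's core mechanism is selecting a digital artifact by pointing at its real-world proxy object. The thesis text above does not give an implementation; as an illustrative sketch only (the function names, the 15° tolerance cone, and the nearest-angle rule are hypothetical assumptions, not the author's published method), selection by pointing can be modeled as picking the proxy whose direction from the user is angularly closest to the pointing ray:

```python
import math
from typing import Dict, Optional, Tuple

Vec3 = Tuple[float, float, float]

def _angle(u: Vec3, v: Vec3) -> float:
    """Angle in radians between two 3D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def select_proxy(origin: Vec3, direction: Vec3,
                 proxies: Dict[str, Vec3],
                 max_angle_deg: float = 15.0) -> Optional[str]:
    """Return the proxy object whose bearing from the user is angularly
    closest to the pointing ray, if any falls within the tolerance cone."""
    best, best_angle = None, math.radians(max_angle_deg)
    for name, pos in proxies.items():
        to_obj = tuple(p - o for p, o in zip(pos, origin))
        a = _angle(direction, to_obj)
        if a < best_angle:
            best, best_angle = name, a
    return best
```

A nearest-angle rule with a tolerance cone is one simple way to tolerate the imprecision of mid-air full-arm pointing while still rejecting gestures aimed at nothing.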
Leveraging eXtended Reality & Human-Computer Interaction for User Experience in 360° Video
EXtended Reality systems have resurged as a medium for work and entertainment. While 360° video has been characterized as less immersive than computer-generated VR, its realism, ease of use, and affordability mean it is in widespread commercial use. Based on the prevalence and potential of the 360° video format, this research is focused on improving and augmenting the user experience of watching 360° video. By leveraging knowledge from eXtended Reality (XR) systems and Human-Computer Interaction (HCI), this research addresses two issues affecting user experience in 360° video: Attention Guidance and Visually Induced Motion Sickness (VIMS).
This research work relies on the construction of multiple artifacts to answer the defined research questions: (1) IVRUX, a tool for analysis of immersive VR narrative experiences; (2) Cue Control, a tool for creation of spatial audio soundtracks for 360° video that also enables the collection and analysis of metrics captured from the user experience; and (3) a VIMS mitigation pipeline, a linear sequence of modules (including optical flow and visual SLAM, among others) that control parameters for visual modifications such as a restricted Field of View (FoV). These artifacts are accompanied by evaluation studies targeting the defined research questions. Through Cue Control, this research shows that non-diegetic music can be spatialized to act as orientation for users, whereas a partial spatialization of music was deemed ineffective when used for orientation. Additionally, our results demonstrate that diegetic sounds are used for notification rather than orientation. Through the VIMS mitigation pipeline, this research shows that a dynamic restricted FoV is statistically significant in mitigating VIMS, while maintaining desired levels of Presence. Both Cue Control and the VIMS mitigation pipeline emerged from a Research through Design (RtD) approach, in which the IVRUX artifact is the product of design knowledge and gave direction to the research. The research presented in this thesis is of interest to practitioners and researchers working on 360° video and helps delineate
future directions in making 360° video a rich design space for interaction and narrative.
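The abstract describes a pipeline in which measures such as optical flow drive a dynamically restricted FoV to mitigate VIMS. The thesis's actual mapping is not given here; as an illustrative sketch only (the linear mapping and every parameter value below are hypothetical assumptions, not the author's published design), one simple policy narrows the FoV as optical-flow magnitude grows:

```python
def restricted_fov(flow_magnitude: float,
                   full_fov: float = 100.0,
                   min_fov: float = 60.0,
                   flow_max: float = 30.0) -> float:
    """Linearly narrow the field of view (degrees) as optical-flow
    magnitude, a proxy for visually induced motion, increases.
    The result is clamped to the range [min_fov, full_fov]."""
    ratio = max(0.0, min(1.0, flow_magnitude / flow_max))
    return full_fov - ratio * (full_fov - min_fov)
```

Clamping the mapping keeps the restriction bounded: calm scenes retain the full view, while even extreme motion never shrinks the FoV below a floor chosen to preserve Presence.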
Design rules and guidelines for a generic condition-based maintenance software's Graphical User Interface
The task of selecting and developing a method of Human-Computer Interaction (HCI) for a Condition-Based Maintenance (CBM) system is investigated in this thesis. Efficiently and accurately communicating machinery health information extracted from Condition Monitoring (CM) equipment, to support plant and machinery maintenance decisions, is the crux of the problem being researched.
Challenges facing this research include: the multitude of different CM techniques,
developed for measuring different component and machinery condition parameters; the
multitude of different methods of HCI; and the multitude of different ways of
communicating machinery health conditions to CBM practitioners. Each challenge will be considered whilst pursuing the objective of identifying a generic set of design and development principles, applicable to the design and development of a CBM system's Human-Machine Interface (HMI). [Continues.]