An empirical study of the “prototype walkthrough”: a studio-based activity for HCI education
For over a century, studio-based instruction has served as an effective pedagogical model in architecture and fine arts education. Because of its design orientation, human-computer interaction (HCI) education is an excellent venue for studio-based instruction. In an HCI course, we have been exploring a studio-based learning activity called the prototype walkthrough, in which a student project team simulates its evolving user interface prototype while a student audience member acts as a test user. The audience is encouraged to ask questions and provide feedback. We have observed that prototype walkthroughs create excellent conditions for learning about user interface design. In order to better understand the educational value of the activity, we performed a content analysis of a video corpus of 16 prototype walkthroughs held in two HCI courses. We found that the prototype walkthrough discussions were dominated by relevant design issues. Moreover, mirroring the justification behavior of the expert instructor, students justified over 80 percent of their design statements and critiques, with nearly one-quarter of those justifications having a theoretical or empirical basis. Our findings suggest that prototype walkthroughs provide valuable opportunities for students to actively learn HCI design by participating in authentic practice, and provide insight into how such opportunities can best be promoted.
Learning 3D Navigation Protocols on Touch Interfaces with Cooperative Multi-Agent Reinforcement Learning
Using touch devices to navigate in virtual 3D environments such as computer
assisted design (CAD) models or geographical information systems (GIS) is
inherently difficult for humans, as the 3D operations have to be performed by
the user on a 2D touch surface. This ill-posed problem is classically solved
with a fixed and handcrafted interaction protocol, which must be learned by the
user. We propose to automatically learn a new interaction protocol that maps
a 2D user input to 3D actions in virtual environments using reinforcement
learning (RL). A fundamental problem of RL methods is the vast number of
interactions often required, which are difficult to come by when humans are
involved. To overcome this limitation, we make use of two collaborative agents.
The first agent models the human by learning to perform the 2D finger
trajectories. The second agent acts as the interaction protocol, interpreting
and translating to 3D operations the 2D finger trajectories from the first
agent. We restrict the learned 2D trajectories to be similar to a training set
of collected human gestures by first performing state representation learning,
prior to reinforcement learning. This state representation learning is
addressed by projecting the gestures into a latent space learned by a
variational auto-encoder (VAE).
Comment: 17 pages, 8 figures. Accepted at The European Conference on Machine
Learning and Principles and Practice of Knowledge Discovery in Databases 2019
(ECML PKDD 2019).
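The two-agent idea above can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: all names (`user_agent`, `protocol_agent`) and the four-action vocabulary are invented for the example, and neither agent is actually trained here.

```python
import numpy as np

# Illustrative sketch of the cooperative setup: agent 1 plays the user and
# emits a noisy 2D finger trajectory; agent 2 plays the interaction protocol
# and interprets that trajectory as a 3D camera operation. In the paper both
# roles are learned with RL; here they are fixed functions for clarity.

rng = np.random.default_rng(0)

def user_agent(goal):
    """Agent 1: produce a noisy 2D swipe toward a goal displacement."""
    t = np.linspace(0.0, 1.0, 20)[:, None]       # 20 timesteps
    traj = t * np.asarray(goal, dtype=float)     # straight-line swipe
    return traj + rng.normal(scale=0.01, size=traj.shape)

def protocol_agent(traj):
    """Agent 2: map the net 2D displacement to one of four 3D operations."""
    dx, dy = traj[-1] - traj[0]
    if abs(dx) >= abs(dy):
        return "rotate_right" if dx > 0 else "rotate_left"
    return "zoom_in" if dy > 0 else "zoom_out"

print(protocol_agent(user_agent(goal=(1.0, 0.1))))
```

In the paper, the space of trajectories agent 1 may emit is additionally constrained to resemble real human gestures via a VAE latent space learned beforehand, which this sketch omits.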
A unified operator splitting approach for multi-scale fluid-particle coupling in the lattice Boltzmann method
A unified framework to derive discrete time-marching schemes for coupling of
immersed solid and elastic objects to the lattice Boltzmann method is
presented. Based on operator splitting for the discrete Boltzmann equation,
second-order time-accurate schemes for the immersed boundary method, viscous
force coupling and external boundary force are derived. Furthermore, a modified
formulation of the external boundary force is introduced that leads to a more
accurate no-slip boundary condition. The derivation also reveals that the
coupling methods can be cast into a unified form, and that the immersed
boundary method can be interpreted as the limit of force coupling for vanishing
particle mass. In practice, the ratio between fluid and particle mass
determines the strength of the force transfer in the coupling. The integration
schemes formally improve the accuracy of first-order algorithms that are
commonly employed when coupling immersed objects to a lattice Boltzmann fluid.
It is anticipated that they will also lead to superior long-time stability in
simulations of complex fluids with multiple scales.
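The vanishing-mass limit mentioned above can be demonstrated with a toy model. This is not the paper's scheme, only an assumed simplification: a single particle coupled to a uniform fluid through a viscous force F = gamma * (u_fluid - v), integrated exactly over each step.

```python
import numpy as np

# Toy demonstration: dv/dt = (gamma/m) * (u_fluid - v) relaxes the particle
# velocity toward the fluid velocity. Integrating one step exactly gives an
# exponential update; as the particle mass m -> 0 the relaxation becomes
# instantaneous and the particle follows the fluid, i.e. the no-slip
# (immersed-boundary) limit of force coupling.

def relax(v0, u_fluid, gamma, m, dt, steps):
    v = v0
    for _ in range(steps):
        v = u_fluid + (v - u_fluid) * np.exp(-gamma * dt / m)
    return v

u_f = 1.0
heavy = relax(0.0, u_f, gamma=1.0, m=10.0, dt=0.1, steps=5)   # lags the fluid
light = relax(0.0, u_f, gamma=1.0, m=1e-6, dt=0.1, steps=5)   # ~no-slip
print(heavy, light)
```

The mass ratio thus sets the coupling strength, matching the abstract's observation that the immersed boundary method is the limit of force coupling for vanishing particle mass.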
Developing the human-computer interface for Space Station Freedom
For the past two years, the Human-Computer Interaction Laboratory (HCIL) at the Johnson Space Center has been involved in prototyping and prototype reviews in support of the definition phase of the Space Station Freedom program. On the Space Station, crew members will be interacting with multi-monitor workstations where interaction with several displays at one time will be common. The HCIL has conducted several experiments to begin to address design issues for this complex system. Experiments have dealt with design of ON/OFF indicators, the movement of the cursor across multiple monitors, and the importance of various windowing capabilities for users performing multiple tasks simultaneously.
Impact of time-variant turbulence behavior on prediction for adaptive optics systems
For high contrast imaging systems, the time delay is one of the major
limiting factors for the performance of the extreme adaptive optics (AO)
sub-system and, in turn, the final contrast. The time delay is due to the
finite time needed to measure the incoming disturbance and then apply the
correction. By predicting the behavior of the atmospheric disturbance over the
time delay we can in principle achieve a better AO performance. Atmospheric
turbulence parameters which determine the wavefront phase fluctuations have
time-varying behavior. We present a stochastic model for wind speed and model
time-variant atmospheric turbulence effects using varying wind speed. We test a
low-order, data-driven predictor, the linear minimum mean square error
predictor, for a near-infrared AO system under varying conditions. Our results
show varying wind can have a significant impact on the performance of wavefront
prediction, preventing it from reaching optimal performance. The impact depends
on the strength of the wind fluctuations with the greatest loss in expected
performance being for high wind speeds.
Comment: 10 pages, 8 figures; Accepted to JOSA A March 201
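The predictor named above has a compact form that can be sketched numerically. This is a generic illustration of a linear minimum mean square error (LMMSE) predictor fitted to a synthetic AR(1) signal, not the paper's AO-specific implementation; the process parameters are assumptions of the example.

```python
import numpy as np

# LMMSE one-step prediction: the weights w solve the normal equations
# (X^T X) w = X^T y, where the rows of X are the last k samples and y is
# the next sample. For a first-order autoregressive process
# x[t] = a*x[t-1] + noise, the optimal predictor is simply a*x[t], so the
# fitted weights should come out close to [a, 0, 0].

rng = np.random.default_rng(1)
a = 0.95
x = np.zeros(10000)
for t in range(1, x.size):
    x[t] = a * x[t - 1] + rng.normal(scale=0.1)

k = 3                                          # predict from the last 3 samples
X = np.column_stack([x[k - 1 - i:-1 - i] for i in range(k)])
y = x[k:]
w = np.linalg.solve(X.T @ X, X.T @ y)          # sample LMMSE weights
print(np.round(w, 2))                          # close to [a, 0, 0]
```

The abstract's point is that when the turbulence statistics drift (e.g. varying wind speed), weights fitted to past data stop being optimal for the current conditions, degrading the prediction.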
Reviewing and extending the five-user assumption: A grounded procedure for interaction evaluation
© ACM, 2013. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Computer-Human Interaction (TOCHI), Vol. 20, Iss. 5 (November 2013), http://doi.acm.org/10.1145/2506210
The debate concerning how many participants represent a sufficient number for interaction testing is
well-established and long-running, with prominent contributions arguing that five users provide a good
benchmark when seeking to discover interaction problems. We argue that adoption of five users in this
context is often done with little understanding of the basis for, or implications of, the decision. We present
an analysis of relevant research to clarify the meaning of the five-user assumption and to examine the
way in which the original research that suggested it has been applied. This includes its blind adoption and
application in some studies, and complaints about its inadequacies in others. We argue that the five-user
assumption is often misunderstood, not only in the field of Human-Computer Interaction, but also in fields
such as medical device design, or in business and information applications. The analysis that we present
allows us to define a systematic approach for monitoring the sample discovery likelihood, in formative and
summative evaluations, and for gathering information in order to make critical decisions during the
interaction testing, while respecting the aim of the evaluation and allotted budget. This approach – which
we call the ‘Grounded Procedure’ – is introduced and its value argued.
The MATCH programme (EPSRC Grants: EP/F063822/1, EP/G012393/1).
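The quantitative basis of the five-user assumption is a simple binomial argument, which can be made concrete. The value p = 0.31 is the often-cited average problem-discovery rate from the early literature, used here purely for illustration.

```python
# If each participant independently discovers a given problem with
# probability p, then n participants discover it with probability
# 1 - (1 - p)^n. With the often-cited p = 0.31, five users find roughly
# 84% of problems; for subtler problems (smaller p) the same five users
# find far fewer, which is why the discovery likelihood should be
# monitored per study rather than assumed.

def discovery_likelihood(p, n):
    """Probability that at least one of n users hits a problem of visibility p."""
    return 1.0 - (1.0 - p) ** n

print(round(discovery_likelihood(0.31, 5), 2))   # ~0.84
print(round(discovery_likelihood(0.10, 5), 2))   # much lower for subtle problems
```

This sensitivity to p is exactly the gap the Grounded Procedure targets: instead of fixing n = 5 in advance, it tracks the sample discovery likelihood as the evaluation proceeds.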
A Critical Analysis of Synthesizer User Interfaces for Timbre
In this paper, we review and analyse categories of user interface used in hardware and software electronic music synthesizers. Problems with the user specification and modification of timbre are discussed. Three principal types of user interface for controlling timbre are distinguished. A problem common to all three categories is identified: the core language of each category has no well-defined mapping onto the task languages of subjective timbre categories as used by musicians.