Recommended from our members
Facilitating insight into a simulation model using visualization and dynamic model previews
This paper shows how model simplification, by replacing iterative steps with unitary predictive equations, can enable dynamic interaction with a complex simulation process. Model previews extend the techniques of dynamic querying and query previews into the context of ad hoc simulation model exploration. A case study is presented within the domain of counter-current chromatography. The relatively novel method of insight evaluation was applied, given the exploratory nature of the task. The evaluation data show that the trade-off in accuracy is far outweighed by the benefits of dynamic interaction. The number of insights gained using the enhanced interactive version of the computer model was more than six times the number gained using the basic version. There was also a trend for dynamic interaction to facilitate insights of greater domain importance.
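The core idea of replacing iterative simulation steps with a unitary predictive equation can be illustrated with a toy decay process (this is a generic sketch, not the chromatography model from the paper): the closed-form version answers in one evaluation, which is what makes slider-driven model previews feel instantaneous.

```python
def simulate_iterative(x0, k, steps):
    """Step-by-step simulation of x[t+1] = x[t] * (1 - k):
    too slow to rerun on every slider tick in a large model."""
    x = x0
    for _ in range(steps):
        x = x * (1.0 - k)
    return x

def preview_unitary(x0, k, steps):
    """Unitary predictive equation x[t] = x0 * (1 - k)**t:
    one evaluation, cheap enough for dynamic querying."""
    return x0 * (1.0 - k) ** steps

# Here the recurrence has an exact closed form, so nothing is lost;
# in a realistic model the unitary equation is a fitted approximation,
# which is the accuracy trade-off the evaluation measured.
```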
Using visual analytics to develop situation awareness in astrophysics
We present a novel collaborative visual analytics application for cognitively overloaded users in the astrophysics domain. The system was developed for scientists who need to analyze heterogeneous, complex data under time pressure, and make predictions and time-critical decisions rapidly and correctly under a constant influx of changing data. The Sunfall Data Taking system utilizes several novel visualization and analysis techniques to enable a team of geographically distributed domain specialists to effectively and remotely maneuver a custom-built instrument under challenging operational conditions. Sunfall Data Taking has been in production use for two years by a major international astrophysics collaboration (the largest data volume supernova search currently in operation), and has substantially improved the operational efficiency of its users. We describe the system design process by an interdisciplinary team, the system architecture, and the results of an informal usability evaluation of the production system by domain experts, in the context of Endsley's three levels of situation awareness.
Collaborative video searching on a tabletop
Almost all system and application design for multimedia systems assumes a single user working in isolation to perform some task, yet much of the work for which we use computers is carried out collaboratively with colleagues. Groupware systems do support user collaboration, but typically this is supported through software and users still physically work independently. Tabletop systems, such as the DiamondTouch from MERL, are interface devices which support direct user collaboration on a tabletop. When a tabletop is used as the interface for a multimedia system, such as a video search system, this kind of direct collaboration raises many questions for system design. In this paper we present a tabletop system for supporting a pair of users in a video search task, and we evaluate the system not only in terms of search performance but also in terms of user–user interaction, and how different user personalities within each pair of searchers impact search performance and user interaction. Incorporating the user into the system evaluation as we have done here reveals several interesting results and has important ramifications for the design of a multimedia search system.
Learning 3D Navigation Protocols on Touch Interfaces with Cooperative Multi-Agent Reinforcement Learning
Using touch devices to navigate in virtual 3D environments such as computer-assisted design (CAD) models or geographical information systems (GIS) is inherently difficult for humans, as the 3D operations have to be performed by the user on a 2D touch surface. This ill-posed problem is classically solved with a fixed, handcrafted interaction protocol, which the user must learn. We propose to automatically learn a new interaction protocol that maps 2D user input to 3D actions in virtual environments using reinforcement learning (RL). A fundamental problem of RL methods is the vast number of interactions often required, which are difficult to come by when humans are involved. To overcome this limitation, we make use of two collaborative agents. The first agent models the human by learning to perform the 2D finger trajectories. The second agent acts as the interaction protocol, interpreting the 2D finger trajectories from the first agent and translating them into 3D operations. We restrict the learned 2D trajectories to be similar to a training set of collected human gestures by performing state representation learning prior to reinforcement learning. This state representation learning is addressed by projecting the gestures into a latent space learned by a variational autoencoder (VAE).

Comment: 17 pages, 8 figures. Accepted at The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases 2019 (ECMLPKDD 2019).
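The two-agent division of labour described above can be sketched with stub policies (a hypothetical illustration of the roles, not the paper's learned RL agents or its VAE constraint): one agent emits a 2D finger trajectory, the other interprets it as a 3D operation.

```python
class UserAgent:
    """Stands in for the first agent: produces a 2D finger trajectory.
    In the paper this policy is learned and kept human-like via a VAE
    latent space; here it simply replays a fixed swipe."""
    def gesture(self):
        # a straight horizontal swipe, as (x, y) touch samples in [0, 1]
        return [(t / 10.0, 0.5) for t in range(11)]

class ProtocolAgent:
    """Stands in for the second agent: interprets a 2D trajectory and
    translates it into a 3D camera operation."""
    def interpret(self, trajectory):
        (x0, y0), (x1, y1) = trajectory[0], trajectory[-1]
        dx, dy = x1 - x0, y1 - y0
        # hypothetical fixed mapping: horizontal swipes yaw the camera,
        # vertical swipes pitch it (the learned protocol replaces this)
        if abs(dx) >= abs(dy):
            return ("rotate_yaw", dx)
        return ("rotate_pitch", dy)

user, protocol = UserAgent(), ProtocolAgent()
action = protocol.interpret(user.gesture())
```

Training both agents jointly is what lets the protocol co-adapt with realistic (VAE-constrained) gestures instead of requiring costly human interaction data during RL.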
NAVI: Novel authentication with visual information
Text-based passwords, despite their well-known drawbacks, remain the dominant user authentication scheme in use. Graphical password systems, based on visual information such as the recognition of photographs and/or pictures, have emerged as a promising alternative to the aggregate reliance on text passwords. Nevertheless, despite the advantages they offer, they have not been widely used in practice, since many open issues need to be resolved. In this paper we propose a novel graphical password scheme, NAVI, where the user's credentials are a username and a password formulated by drawing a route on a predefined map. We analyze the strength of the password generated by this scheme and present a prototype implementation in order to illustrate the feasibility of our proposal. Finally, we discuss NAVI's security features and compare it with existing graphical password schemes as well as text-based passwords in terms of key security features, such as password keyspace, dictionary attacks and guessing attacks. The proposed scheme appears to have the same or better performance in the majority of the security features examined.
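A back-of-the-envelope keyspace estimate for route-based passwords can be sketched as follows. The counting model and all parameter values here are illustrative assumptions, not figures from NAVI: treat the map as a graph, pick a start point, then extend the route one adjacent point at a time.

```python
import math

def route_keyspace_bits(points, avg_neighbors, route_len):
    """Rough keyspace of a route password, in bits: choose one of
    `points` start points, then one of ~`avg_neighbors` adjacent points
    at each of the remaining (route_len - 1) steps. Purely illustrative."""
    count = points * avg_neighbors ** (route_len - 1)
    return math.log2(count)

# e.g. a map with 1000 intersections, ~4 neighbors each, 10-point routes
bits = route_keyspace_bits(1000, 4, 10)  # ~28 bits under these assumptions
```

Under this toy model, longer routes or denser maps grow the keyspace exponentially, which is the kind of analysis a strength comparison against text passwords would rest on.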
Evaluating the Usability of Automatically Generated Captions for People who are Deaf or Hard of Hearing
The accuracy of Automated Speech Recognition (ASR) technology has improved, but it is still imperfect in many settings. Researchers who evaluate ASR performance often focus on improving the Word Error Rate (WER) metric, but WER has been found to have little correlation with human-subject performance on many applications. We propose a new captioning-focused evaluation metric that better predicts the impact of ASR recognition errors on the usability of automatically generated captions for people who are Deaf or Hard of Hearing (DHH). Through a user study with 30 DHH users, we compared our new metric with the traditional WER metric on a caption usability evaluation task. In a side-by-side comparison of pairs of ASR text output (with identical WER), the texts preferred by our new metric were also preferred by DHH participants. Further, our metric had significantly higher correlation with DHH participants' subjective scores on the usability of a caption than the correlation between the WER metric and participant subjective scores. This new metric could be used to select ASR systems for captioning applications, and it may be a better metric for ASR researchers to consider when optimizing ASR systems.

Comment: 10 pages, 8 figures, published in ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '17).
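To make the contrast concrete, here is standard WER next to a word-importance-weighted error score. The weighted variant is an illustrative stand-in for the idea of a captioning-focused metric, not the paper's actual metric; its importance table and alignment-free counting are assumptions.

```python
def word_error_rate(ref, hyp):
    """Standard WER: word-level edit distance divided by reference length.
    Treats every word error as equally costly."""
    r, h = ref.split(), hyp.split()
    # dynamic-programming edit distance over words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(r)][len(h)] / len(r)

def weighted_error(ref, hyp, importance):
    """Illustrative caption-oriented variant: losing a high-importance
    word costs more than losing filler. `importance` maps words to
    weights (default 1.0); this scoring is a hypothetical sketch."""
    r, h = ref.split(), set(hyp.split())
    total = sum(importance.get(w, 1.0) for w in r)
    lost = sum(importance.get(w, 1.0) for w in r if w not in h)
    return lost / total
```

Two captions with the same WER can thus receive very different weighted scores when one of them drops a content-bearing word, which is the gap between WER and usability that the study measures.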