Modeling Human Visual Search Performance on Realistic Webpages Using Analytical and Deep Learning Methods
Modeling visual search not only offers an opportunity to predict the
usability of an interface before actually testing it on real users, but also
advances scientific understanding about human behavior. In this work, we first
conduct a set of analyses on a large-scale dataset of visual search tasks on
realistic webpages. We then present a deep neural network that learns to
predict the scannability of webpage content, i.e., how easy it is for a user to
find a specific target. Our model leverages both heuristic-based features such
as target size and unstructured features such as raw image pixels. This
approach allows us to model complex interactions that might be involved in a
realistic visual search task, which cannot be easily achieved by traditional
analytical models. We analyze the model behavior to offer our insights into how
the salience map learned by the model aligns with human intuition and how the
learned semantic representation of each target type relates to its visual
search performance.
Comment: the 2020 CHI Conference on Human Factors in Computing Systems
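As a rough illustration of the feature fusion described above, the sketch below combines hand-crafted heuristic features (such as target size) with a pixel-derived embedding under a linear scoring head. Everything here (feature choices, dimensions, and weights) is a hypothetical stand-in, not the paper's actual model, which learns the fusion jointly with a deep encoder.

```python
# Minimal sketch (NOT the authors' implementation): fusing heuristic
# features with a pixel embedding to score webpage scannability.

def heuristic_features(target):
    # Hand-crafted cues: target area and on-page position.
    return [target["width"] * target["height"], target["x"], target["y"]]

def pixel_embedding(pixels):
    # Stand-in for a learned CNN encoder: mean intensity per RGB channel.
    n = len(pixels)
    return [sum(px[c] for px in pixels) / n for c in range(3)]

def scannability_score(target, pixels, weights):
    # Fuse both feature groups with a linear head; a real model would
    # learn these weights jointly with the encoder.
    feats = heuristic_features(target) + pixel_embedding(pixels)
    return sum(w * f for w, f in zip(weights, feats))

target = {"width": 40, "height": 20, "x": 100, "y": 50}
pixels = [(0.2, 0.4, 0.6), (0.8, 0.6, 0.4)]  # toy 2-pixel "screenshot"
weights = [0.001, 0.0, 0.0, 0.5, 0.5, 0.5]   # illustrative only
print(round(scannability_score(target, pixels, weights), 3))
```

The point of the fusion is that structured cues (size, position) and unstructured pixel evidence enter the same scoring function, which is what lets the deep model capture interactions analytical models miss.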
Intelligent Exploration for User Interface Modules of Mobile App with Collective Learning
A mobile app interface usually consists of a set of user interface modules.
How to properly design these user interface modules is vital to achieving user
satisfaction for a mobile app. However, there are few methods to determine
design variables for user interface modules except for relying on the judgment
of designers. Usually, a laborious post-processing step is necessary to verify
the key change of each design variable. Therefore, only a very limited number
of design solutions can be tested. It is time-consuming and almost impossible
to identify the best design solutions when there are many modules.
To this end, we introduce FEELER, a framework to quickly and intelligently explore
design solutions of user interface modules with a collective machine learning
approach. FEELER can help designers quantitatively measure the preference score
of different design solutions, helping them conveniently and quickly adjust
user interface modules. We conducted extensive
experimental evaluations on two real-life datasets to demonstrate its
applicability in real-life cases of user interface module design in the Baidu
App, which is one of the most popular mobile apps in China.
Comment: 10 pages, accepted as a full paper in KDD 2020
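The core idea of quantitatively scoring candidate design solutions can be sketched as follows. The scoring rule and design variables below are invented for illustration and are not FEELER's learned model, which is trained on collected user feedback.

```python
# Illustrative sketch only: ranking candidate UI-module design solutions
# by a preference score, in the spirit of FEELER. The toy rule below
# prefers a moderate font size and ample spacing.

def preference_score(design):
    # Stand-in for a model learned from crowdsourced user preferences.
    font_penalty = abs(design["font_size"] - 16)
    return design["spacing"] - font_penalty

candidates = [
    {"font_size": 12, "spacing": 8},
    {"font_size": 16, "spacing": 6},
    {"font_size": 20, "spacing": 9},
]
best = max(candidates, key=preference_score)
print(best)
```

Because the score is a cheap function call rather than a user test, many more candidate solutions can be explored than the handful a manual process allows, which is the bottleneck the abstract describes.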
Reflow: Automatically Improving Touch Interactions in Mobile Applications through Pixel-based Refinements
Touch is the primary way that users interact with smartphones. However,
building mobile user interfaces where touch interactions work well for all
users is a difficult problem, because users have different abilities and
preferences. We propose a system, Reflow, which automatically applies small,
personalized UI adaptations, called refinements, to mobile app screens to
improve touch efficiency. Reflow uses a pixel-based strategy to work with
existing applications, and improves touch efficiency while minimally disrupting
the design intent of the original application. Our system optimizes a UI by (i)
extracting its layout from its screenshot, (ii) refining its layout, and (iii)
re-rendering the UI to reflect these modifications. We conducted a user study
with 10 participants and a heuristic evaluation with 6 experts and found that
applications optimized by Reflow led to, on average, 9% faster selection time
with minimal layout disruption. The results demonstrate that Reflow's
refinements are useful UI adaptations that improve touch interactions.
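The three-stage pipeline (extract, refine, re-render) might look roughly like the sketch below. The element detection, refinement rule, and thresholds are assumptions made for illustration, not Reflow's actual pixel-based implementation.

```python
# Hedged sketch of a Reflow-style pipeline. In the real system, stage (i)
# detects elements from raw screenshot pixels; here bounding boxes are
# assumed to be given.

def extract_layout(screenshot_elements):
    # Stage (i): recover a layout of elements from the screen.
    return [dict(e) for e in screenshot_elements]

def refine_layout(layout, usage, min_height=48):
    # Stage (ii): a toy refinement that enlarges frequently used,
    # undersized touch targets (threshold and usage weighting assumed).
    for el in layout:
        if usage.get(el["id"], 0) > 0.5 and el["h"] < min_height:
            el["h"] = min_height
    return layout

def render(layout):
    # Stage (iii): re-render the UI; here just a textual summary.
    return [f'{el["id"]}:{el["w"]}x{el["h"]}' for el in layout]

screen = [{"id": "send", "w": 80, "h": 32}, {"id": "menu", "w": 40, "h": 40}]
usage = {"send": 0.9, "menu": 0.1}
print(render(refine_layout(extract_layout(screen), usage)))
```

Keeping refinements small and local, as in the toy rule above, is what lets such a system improve touch efficiency while minimally disturbing the original design intent.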
Predicting Human Performance in Vertical Menu Selection Using Deep Learning
Predicting human performance in interaction tasks allows designers or developers to understand the expected performance of a target interface without actually testing it with real users. In this work, we present a deep neural net to model and predict human performance in performing a sequence of UI tasks. In particular, we focus on a dominant class of tasks, i.e., target selection from a vertical list or menu. We experimented with our deep neural net using a public dataset collected from a desktop laboratory environment and a dataset collected from hundreds of touchscreen smartphone users via crowdsourcing. Our model significantly outperformed previous methods on these datasets. Importantly, our method, as a deep model, can easily incorporate additional UI attributes such as visual appearance and content semantics without changing model architectures. By understanding how a deep learning model learns from human behaviors, our approach can be seen as a vehicle to discover new patterns in human behavior to advance analytical modeling.
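A toy sketch of sequential selection-time prediction over a vertical menu follows. The simple accumulator and its constants are illustrative assumptions standing in for the paper's deep sequence model; they only convey the idea that per-item features (here, label length) feed a sequential prediction.

```python
# Illustrative sketch, NOT the paper's architecture: a recurrent-style
# accumulator predicting time to select an item in a vertical menu.
# base, per_item, and the length weighting are assumed constants.

def predict_selection_time(menu, target_index, base=0.3, per_item=0.1):
    # Scan items top to bottom, accumulating time until the target;
    # longer labels are assumed to take slightly longer to verify.
    t = base
    for i, label in enumerate(menu):
        t += per_item + 0.01 * len(label)
        if i == target_index:
            return round(t, 3)

menu = ["Open", "Save", "Export", "Print"]
print(predict_selection_time(menu, 2))  # time to reach "Export"
```

A deep model replaces the hand-set constants with learned parameters, which is why, as the abstract notes, extra per-item attributes such as visual appearance or content semantics can be added without changing the architecture.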