9 research outputs found
Supporting cross-device web search with social navigation-based mobile touch interactions
The wide adoption of smartphones eliminates the time and location barriers for people's daily information access, but the small mobile screen also limits users' information exploration activities. Thus, cross-device web search, where people initiate an information need on one device but complete it on another, is frequently observed in modern search engines, especially for exploratory information needs. This paper aims to support cross-device web search for exploratory tasks, on top of the commonly used context-sensitive retrieval framework. To better model users' search context, our method utilizes not only the search history (query history and click-through) but also mobile touch interactions (MTI) on mobile devices. More specifically, we combine MTI's ability to locate relevant subdocument content [10] with the idea of social navigation, which aggregates MTIs from other users who visit the same page. To demonstrate the effectiveness of the proposed approach, we designed a user study to collect cross-device web search logs on three different types of tasks from 24 participants, and then compared our approach with two baselines: a traditional full-text relevance feedback approach and a self-MTI-based subdocument relevance feedback approach. Our results show that the social navigation-based MTIs outperformed both baselines. A further analysis shows that the performance improvements are related to several factors, including the quality and quantity of click-through documents, task types, and users' search conditions.
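The social-navigation idea above can be illustrated with a minimal sketch: pool touch-interaction dwell times from many users and turn them into per-paragraph relevance weights for a page. The event schema, field names, and normalisation here are illustrative assumptions for exposition, not the paper's actual method.

```python
from collections import defaultdict

def aggregate_mti(events):
    """Aggregate touch-interaction dwell times across users into
    normalized per-(page, paragraph) relevance weights.

    events: iterable of (user_id, page_id, paragraph_index, dwell_seconds).
    Returns {(page_id, paragraph_index): weight}, where weights on each
    page sum to 1.0. This is a hypothetical aggregation scheme.
    """
    totals = defaultdict(float)       # dwell per (page, paragraph)
    page_totals = defaultdict(float)  # dwell per page, for normalisation
    for _user, page, para, dwell in events:
        totals[(page, para)] += dwell
        page_totals[page] += dwell
    return {
        (page, para): t / page_totals[page]
        for (page, para), t in totals.items()
    }

# Two users dwell mostly on paragraph 0 of "doc1".
events = [
    ("u1", "doc1", 0, 2.0),
    ("u2", "doc1", 0, 6.0),
    ("u1", "doc1", 1, 2.0),
]
weights = aggregate_mti(events)
# paragraph 0 -> 0.8, paragraph 1 -> 0.2
```

In a relevance-feedback setting, such weights could bias term extraction toward heavily touched paragraphs rather than treating the whole clicked document uniformly.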
A Review and Analysis of Eye-Gaze Estimation Systems, Algorithms and Performance Evaluation Methods in Consumer Platforms
In this paper, a review is presented of research on eye gaze estimation
techniques and applications, which have progressed in diverse ways over the past
two decades. Several generic eye gaze use-cases are identified: desktop, TV,
head-mounted, automotive and handheld devices. Analysis of the literature leads
to the identification of several platform specific factors that influence gaze
tracking accuracy. A key outcome from this review is the realization of a need
to develop standardized methodologies for performance evaluation of gaze
tracking systems and achieve consistency in their specification and comparative
evaluation. To address this need, the concept of a methodological framework for
practical evaluation of different gaze tracking systems is proposed.
Comment: 25 pages, 13 figures. Accepted for publication in IEEE Access in July
201
Understanding and improving mobile reading via scalable and low cost sensing
In recent years, due to the increasing ubiquity of the Internet and mobile devices, mobile reading on smartwatches and smartphones is experiencing rapid growth. Despite this great potential, mobile reading brings new challenges. Compared to traditional reading, it faces more frequent distractions and lacks portable, efficient techniques to understand and improve it in depth.
Fortunately, the development of the hardware and software of mobile devices provides an opportunity to track users' behaviour and physiological signals accurately in a low-cost and portable manner. In this thesis, I explored the use of low-cost mobile sensors to solve the measurement challenges of reading.
I used low-cost mobile sensing techniques on mobile devices to understand and improve the depth and quality of reading. In this thesis, I first present SmartRSVP, a reading interface on smartwatches that leverages eye-gaze tracking and heart-rate sensing to facilitate reading under distractions. I then present Lepton, an intelligent reading system on smartphones that tracks periodic eye-gaze patterns and senses screen-touch behaviour to monitor readers' cognition and emotion during reading. Lastly, I present StrategicReading, which uses implicitly captured eye-gaze patterns, scrolling motions, and log histories to monitor users' reading strategies and performance during multiple-source online reading.
Writing for mobile media: The influences of text, digital design and psychological characteristics on the cognitive load of the mobile user
Text elements on the mobile smartphone interface make a significant contribution to the user's interaction experience. In combination with other visual design features, these words curate the path of the mobile user on a journey through the information to satisfy a specific task. This study analyses the elements that influence the interpretation process and optimum presentation of information on mobile media. I argue that effective digital writing contributes to reducing the cognitive load experienced by the mobile user. The central discussion focuses on the writing of text for this medium, which I suggest forges an entirely unique narrative. The optimum writing approach is based on the multi-dimensional characteristics of hypertext, which allow the writer to facilitate the journey without the user losing control of the interpretation process. This study examines the relationship between the writer, the reader, and the text, with a unique perspective on the mobile media writer, who is tasked with achieving balance between the functionality and humanity of digital interaction. To explore influences on the development of the relevant writing techniques, I present insights into the distinctive characteristics of the mobile smartphone device, with specific focus on the screen and keyboard. I also discuss the unique characteristics of the mobile user and show how the visual design of the interface is integral to the writing of text for this medium. Furthermore, this study explores the role, skills, and processes of the current and future digital writer, against the backdrop of incessant technological advancement and revolutionary changes in human-computer behaviour.
Understanding search behaviour on mobile devices
Web search on hand-held devices has become enormously common and
popular. Although a number of studies have revealed how users
interact with search engine result pages (SERPs) on desktop
monitors, there are still only a few studies related to user
interaction in mobile web search, and search results are shown in
a similar way whether on a mobile phone or a desktop. Therefore,
it is still difficult to know what happens between users and
SERPs while searching on small screens, and this means that the
current presentation of SERPs on mobile devices may not be the
best.
According to the findings from previous studies, including our
earlier work, we can confirm that search behaviour on
touch-enabled mobile devices is different from behaviour with
desktop screens, and so we need to consider a different SERP
presentation design for mobile devices. In this thesis, we
explore several user interactions during search with the aim of
improving search experience on smartphones.
First, one remarkable trend in mobile devices is the
enlargement of screen sizes over the last few years. This leads
us to look for differences in search behaviour on different sized
small screens, and if there are any, to suggest better
presentation of search results for each screen size. In the first
study, we investigated search performance, behaviour, and user
satisfaction on three small screens (3.6 inches for early
smartphones, 4.7 inches for recent smartphones, and 5.5 inches
for phablets). We found no significant differences with respect
to the efficiency of carrying out tasks. However, participants
exhibited different search behaviours on the small, medium, and
large sizes of small screens, respectively: a higher chance of
scrolling with the worst user satisfaction on the smallest
screen; fast information extraction with some hesitation before
selecting a link on the medium screen; and fewer eye movements on
top links on the largest screen. These results suggest that the
presentation of web search results for each screen size needs to
take into account differences in search behaviour.
Second, although people are familiar with turning pages
horizontally while reading books, vertical scrolling is the
standard option that people have available while searching on
mobile devices. So following a suggestion from the first study,
in the second study we explored the effect of horizontal and
vertical viewport control types (pagination versus scrolling)
with various positions of a correct answer in mobile web search.
Our findings suggest that although users are more familiar with
scrolling, participants spent less time finding the correct
answer with pagination, especially when the relevant result is
located beyond the page fold. In addition, participants using
scrolling exhibited less interest in lower-ranked results even if
the documents were relevant. The overall result indicates that it
is worthwhile providing different viewport controls for better
search experiences in mobile web search.
Third, snippets occupy the biggest space in each search result.
Results from a previous study suggested that snippet length
affects search performance on a desktop monitor. Due to the
smaller screen, the effect seems to be much larger on
smartphones. As one possible idea for a SERP presentation design
from the first study, we investigated appropriate snippet lengths
on mobile devices in the third study. We compared search
behaviour with three different snippet lengths, that is, one
line, two to three lines, and six or more lines of snippets on
mobile SERPs. We found that with long snippets, participants
needed more search time for a particular task type, and the
extra time yielded no better search accuracy. Our
findings suggest that this search performance is related to
viewport movements and user attention.
We expect that our proposed approaches provide ways to understand
mobile web search behaviour, and that the findings can be applied
to a wide range of research areas, such as human-computer
interaction, information retrieval, and even social science, for
better presentation design of SERPs on mobile devices.
Gaze estimation and interaction in real-world environments
Human eye gaze has been widely used in human-computer interaction, as it is a promising modality for natural, fast, pervasive, and non-verbal interaction between humans and computers. As the foundation of gaze-related interactions, gaze estimation has been a hot research topic in recent decades. In this thesis, we focus on developing appearance-based gaze estimation methods and corresponding attentive user interfaces with a single webcam for challenging real-world environments. First, we collect a large-scale gaze estimation dataset, MPIIGaze, the first of its kind collected outside of controlled laboratory conditions. Second, we propose an appearance-based method that, in stark contrast to a long-standing tradition in gaze estimation, takes only the full face image as input. Third, we study data normalisation for the first time in a principled way, and propose a modification that yields significant performance improvements. Fourth, we contribute an unsupervised detector for human-human and human-object eye contact. Finally, we study personal gaze estimation with multiple personal devices, such as mobile phones, tablets, and laptops.
The acquisition of sentence alternations: how children understand and use the English dative alternation
Many English verbs expressing transfer can be used in two different constructions, one with
no preposition (Rick gave Kate a coffee) and one with the preposition to (Rick gave a coffee to
Kate). Whenever speakers use such a verb, they have to choose between these two constructions.
This choice is determined in part by some features of the two objects: all other things being
equal, speakers are more likely to use whichever construction places a shorter object before a
longer one (Rick gave a coffee to the tall and well-dressed woman standing next to the desk at
the southern side of the room), an animate object before an inanimate one (Rick gave Kate a
coffee), a plural object before a singular one (Rick gave Kate and Roy an espresso machine), and
so on. This system of feature-based choices is established very well for adult language using
language corpora and experiments, but there are fewer corpora and experimental studies of child
language. Because of this dearth of data, it is unknown how children acquire this choice-making
system: do they start making choices determined by only one of these features and add the others
piecemeal, or do they learn the system wholesale and only tweak which features win out over
others?
The three experiments in this thesis are a first step in answering this question. They are designed
to map out the effects of length, animacy, and grammatical number on these choices over the
course of typical first language acquisition. Because animacy is less stable a concept than length
and number, the first experiment measures childrenâs and adultsâ conceptions of animacy more
indirectly. The second experiment presents the same participants with sentences using give
where one of the two objects has been replaced by noise, and measures which of a constrained
set of options they gaze at and which they choose to fill the noise gap. This provides measures
of their expectations and preferences for the length, animacy, and number of the objects in these
gaps. The third experiment has participants reproduce give sentences with different combinations
of animacy, number, and construction. Participants reproduce sentences that conform to their
choice-making system more easily.
The results of these three experiments show that children as young as four years already prefer
the animate-before-inanimate order. The shorter-before-longer preference is not found in any
age group when the difference in lengths is just one syllable. This evidence adds to a growing
body of literature converging on the finding that choices in ordering phenomena are affected
by the same features wherever these phenomena occur, throughout language acquisition as
well as across languages. Data from the second experiment also substantiates the common
assumption that touchscreen input and eye gaze are both closely linked to attention. This will
allow researchers in the cognitive sciences to use touchscreens as an alternative to eyetracking
more confidently.