611 research outputs found

    Gaze Path Stimulation in Retrospective Think-Aloud

    For a long time, eye tracking has been regarded as a promising method for usability testing. During the last couple of years, eye tracking has finally started to live up to these expectations, at least in terms of its use in usability laboratories. We know that the user’s gaze path can reveal usability issues that would otherwise go unnoticed, but a common understanding of how best to make use of eye movement data has not been reached. Many usability practitioners seem to have intuitively started to use gaze path replays to stimulate recall for a retrospective walkthrough of the usability test. We review the research on think-aloud protocols in usability testing and on the use of eye tracking in the context of usability evaluation. We also report our own experiment, in which we compared the standard, concurrent think-aloud method with the gaze path stimulated retrospective think-aloud method. Our results suggest that the gaze path stimulated retrospective think-aloud method produces more verbal data, and that the data are more informative and of better quality, as the drawbacks of concurrent think-aloud are avoided.

    The role of attention, attitude, culture, and social expectancies in the human-animal bond : a biopsychosocial approach.

    The human-animal bond may positively impact human health. However, employing the human-animal bond in human health and behavioral treatment strategies faces several unresolved issues. Challenges facing human-animal bond research include accepting a theoretical model that encourages systematic organization of human-animal bond research, and investigating the human-animal bond's underlying mechanisms. Using eye-tracker technology and various social measures, the goal of the current research was to investigate the role of attention, attitude, culture, and social expectancies in the human-animal bond. Participants' eye movements were monitored as they examined photographs depicting various levels of human-animal interaction. Participants also rated their impressions of the human in each photograph on several characteristics. Results showed that participants attended differently to varying levels of human-animal interaction and made more positive judgments about humans interacting with an animal versus the mere presence of an animal. Biological, psychological, and social factors may be important to how humans relate to and benefit from social interactions with animals.

    How Google Triggers the Behavior of Its Users

    With this contribution we explore whether Google’s new style guides for its search engine result pages allow for more liberal competition on electronic information markets. To gather empirical evidence on this research question, a two-stage experimental eye-tracking study was conducted. In the first stage, the attention and selection behavior of 20 participants on ‘universal search’ engine result pages was recorded and published (Möller & Schierl, 2012). In the second stage, 35 participants took part in a follow-up study in 2013 and were confronted with different pages of search results taken from Google’s proposal to the European Commission. The results reveal that the visual markers implemented by Google weigh heavily in favour of Google’s own services and will therefore have a negative effect on liberal competition with other providers of online information.

    Framing or Gaming? Constructing a Study to Explore the Impact of Option Presentation on Consumers

    The manner in which choice is framed influences individuals’ decision-making. This research examines the impact of different decision constructs on decision-making by focusing on the more problematic decision constructs: the un-selected and pre-selected opt-out. The study employs eye-tracking with cued retrospective think-aloud (RTA) to combine quantitative and qualitative data. Eye-tracking will determine how long a user focuses on a decision construct before taking action. Cued RTA, in which the user is shown a playback of their interaction, will be used to explore their attitudes towards a decision construct and to identify problematic designs. This pilot begins the second of a three-phase study, which ultimately aims to develop a research model containing the theoretical constructs, along with hypothesized causal associations between them, to reveal the impact that measures such as decision construct type, default value type, and question framing have on the perceived value of the website and loyalty intentions.

    Towards an Effective Organization-Wide Bulk Email System

    Bulk email is widely used in organizations to communicate messages to employees. It is an important tool in making employees aware of policies, events, leadership updates, etc. However, in large organizations, the problem of overwhelming communication is widespread. Ineffective organizational bulk emails waste employees' time and organizations' money, and cause a lack of awareness of, or compliance with, organizations' missions and priorities. This thesis focuses on improving organizational bulk email systems by 1) conducting qualitative research to understand different stakeholders; 2) conducting field studies to evaluate personalization's effects on getting employees to read bulk messages; 3) designing tools to support communicators in evaluating bulk emails. We performed these studies at the University of Minnesota, interviewing 25 employees (both senders and recipients), and including 317 participants in total. We found that the university's current bulk email system is ineffective, as only 22% of the information communicated was retained by employees. To encourage employees to read high-level information, we implemented a multi-stakeholder personalization framework that mixed important-to-organization messages with employee-preferred messages and improved the studied bulk email's recognition rate by 20%. On the sender side, we iteratively designed a prototype of a bulk email evaluation platform. In a field evaluation, we found that bulk emails' message-level performance helped communicators in designing bulk emails. We collected eye-tracking data and developed a neural network technique to estimate how long each message is read using recipients' interactions with browsers only, which improved the estimation accuracy to 73%. In summary, this work sheds light on how to design organizational bulk email systems that communicate effectively and respect different stakeholders' values. (PhD thesis)

    Problem solving activities in post-editing and translation from scratch: A multi-method study

    Get PDF
    Companies and organisations are increasingly using machine translation to improve efficiency and cost-effectiveness, and then edit the machine translated output to create a fluent text that adheres to given text conventions. This procedure is known as post-editing. Translation and post-editing can often be categorised as problem-solving activities. When the translation of a source text unit is not immediately obvious to the translator, or in other words, if there is a hurdle between the source item and the target item, the translation process can be considered problematic. Conversely, if there is no hurdle between the source and target texts, the translation process can be considered a task-solving activity and not a problem-solving activity. This study investigates whether machine translated output influences problem-solving effort in internet research, syntax, and other problem indicators and whether the effort can be linked to expertise. A total of 24 translators (twelve professionals and twelve semi-professionals) produced translations from scratch from English into German, and (monolingually) post-edited machine translation output for this study. The study is part of the CRITT TPR-DB database. The translation and (monolingual) post-editing sessions were recorded with an eye-tracker and a keylogging program. The participants were all given the same six texts (two texts per task). Different approaches were used to identify problematic translation units. First, internet research behaviour was considered as research is a distinct indicator of problematic translation units. Then, the focus was placed on syntactical structures in the MT output that do not adhere to the rules of the target language, as I assumed that they would cause problems in the (monolingual) post-editing tasks that would not occur in the translation from scratch task. 
Finally, problem indicators were identified via different parameters such as Munit, which indicates how often the participants created and modified a translation unit, or the inefficiency (InEff) value of translation units, i.e. the number of produced and deleted tokens divided by the final length of the translation. The study also highlights how these parameters can be used to identify problems in the translation process data using keylogging data alone.
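The InEff value described above is a simple ratio, and a minimal sketch makes the definition concrete. This is an illustration only: the function and argument names below are invented for the example and are not CRITT TPR-DB identifiers.

```python
def inefficiency(produced_tokens: int, deleted_tokens: int,
                 final_length: int) -> float:
    """InEff = (produced + deleted tokens) / final length of the translation."""
    if final_length <= 0:
        raise ValueError("final translation must be non-empty")
    return (produced_tokens + deleted_tokens) / final_length

# A unit typed once with nothing deleted scores 1.0;
# heavy revision (many produced and deleted tokens) drives the value above 1.0.
print(inefficiency(produced_tokens=12, deleted_tokens=4, final_length=8))  # 2.0
```

Under this reading, higher InEff values flag translation units that required disproportionate production effort relative to their final length, which is why the study treats them as problem indicators.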
