24 research outputs found

    Efficiency of Automated Detectors of Learner Engagement and Affect Compared with Traditional Observation Methods

    This report investigates the costs of developing automated detectors of student affect and engagement and applying them at scale to the log files of students using educational software. We compare these costs and the accuracy of the computer-based observations with those of more traditional observation methods for detecting student engagement and affect. We discuss the potential for automated detectors to contribute to the development of adaptive and responsive educational software.

    Studies on Multi-Device Usage Practices and Interaction Methods

    People today commonly have multiple information devices, including smartphones, tablets, computers, and home media centers. Because people own many devices, situations and workflows in which several devices are combined to accomplish a task have become common. Groups of co-located persons may also join their information devices together for collaborative activities and experiences. While these developments towards computing with multiple devices offer many opportunities, they also create a need for interfaces and applications that support using multiple devices together. The overall goal of this doctoral thesis is to create new scientific knowledge to inform the design of future interfaces, applications, and technologies that better support multi-device use. The thesis belongs to the field of Human-Computer Interaction (HCI) research. It contains five empirical studies with a total of 110 participants, whose results have been reported in five original publications. The thesis generally follows the design science research methodology. More specifically, this thesis addresses three research questions related to multi-device use. The first question investigates how people actually use multiple information devices together in their daily lives. The results provide a rich picture of everyday multi-device use, including the most common devices and their characteristic practices of use, a categorization of patterns of multi-device use, and an analysis of the process of determining which devices to use. The second question examines the factors that influence the user experience of multi-device interaction methods. The results suggest a set of experiential factors that should be considered when designing methods for multi-device interaction. The set of factors is based on comparative studies of alternative methods for two common tasks in multi-device interaction: device binding and cross-display object movement. The third question explores a more futuristic topic of multi-device interaction methods for wearable devices, focusing on the two most popular categories of wearable devices today: smartwatches and smartglasses. The results present a categorization of actions that people would naturally do to initiate interactions between their wearable devices, based on elicitation studies with groups of participants. The results of this thesis advance the scientific knowledge of multi-device use in the domain of human-computer interaction research, and can be applied in the design of novel interfaces, applications, and technologies that involve the use of multiple information devices.

    MIDAS: Multi-device Integrated Dynamic Activity Spaces

    Mobile phones, tablet computers, laptops, desktops, and large screen displays are increasingly available to individuals for information access, often simultaneously. Dominant content access protocols, such as HTTP/1.1, do not take advantage of this device multiplicity and support information access from single devices only. Changing devices means restarting an information session. Using devices in conjunction with each other poses several challenges, which include the presentation of content on devices with diverse form factors and the propagation of content changes across these devices. In this dissertation, I report on the design and implementation of MIDAS, an architecture and prototype system for multi-device presentations. I propose a framework, called 12C, for characterizing multi-device systems and evaluate MIDAS within this framework. MIDAS is designed as middleware that can work with multiple client-server architectures, such as the Web and context-aware Trellis, a non-Web hypertext system. It presents information content simultaneously on devices with diverse characteristics without requiring sensor-enhanced environments. The system adapts content elements for optimal presentation on the target device while also striving to retain fidelity with the original form from a human perceptual perspective. MIDAS reconfigures its presentation in response to user actions, availability of devices, and environmental context, such as a user's location or the time of day. I conducted a pilot study that explored human perception of similarity when image attributes such as size and color depth are modified in the process of presenting images on different devices. The results indicated that users tend to prefer scaling of images to color-depth reduction, but grayscaling of images is preferable to either modification. Not all images scale equally gracefully; those dominated by natural elements or man-made structures scale exceptionally well. Images that depict recognizable human faces or textual elements should be scaled only to an extent that these features retain their integrity. Attributes of the 12C framework describe aspects of multi-device systems that include infrastructure, presentation, interaction, interface, and security. Based on these criteria, MIDAS is a flexible infrastructure that lends itself to several content distribution and interaction strategies by separating client- and server-side configuration.
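    The pilot study's preference ordering (grayscaling over colour-depth reduction, scaling preferred to colour-depth reduction) suggests a simple adaptation policy. The sketch below is a hypothetical illustration of such a policy, not code from MIDAS; the function name, parameters, and logic are invented for the example.

```python
# Hypothetical sketch of a MIDAS-style adaptation policy based on the pilot
# study's preference ordering; names and logic are illustrative, not taken
# from the actual system.

def adapt_image(width, height, colors, dev_width, dev_height, dev_colors):
    """Choose adaptations for showing an image on a target device,
    preferring perceptually gentler transforms first."""
    steps = []
    if colors > dev_colors:
        # The pilot study found grayscaling preferable to colour-depth
        # reduction, so degrade colour by grayscaling.
        steps.append("grayscale")
    if width > dev_width or height > dev_height:
        # Uniform scaling preserves aspect ratio; the study suggests faces
        # and text should only be scaled while they remain legible.
        factor = min(dev_width / width, dev_height / height)
        steps.append(("scale", factor))
    return steps

# A 1024x768 24-bit image on a 320x240 display with a 256-colour palette:
plan = adapt_image(1024, 768, 2**24, 320, 240, 256)
```

    In this sketch the policy only decides which transforms to apply; actually resampling pixels would be delegated to an imaging library on the middleware side.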

    The Use of Multiple Slate Devices to Support Active Reading Activities

    Reading activities in the classroom and workplace occur predominantly on paper. Since existing electronic devices do not support these reading activities as well as paper, users have difficulty taking full advantage of the affordances of electronic documents. This dissertation makes three main contributions toward supporting active reading electronically. The first contribution is a comprehensive set of active reading requirements, drawn from three decades of research into reading processes. These requirements explain why existing devices are inadequate for supporting active reading activities. The second contribution is a multi-slate reading system that more completely supports the active reading requirements above. Researchers believe the suitability of paper for active reading is largely due to the fact it distributes content across different sheets of paper, which are capable of displaying information as well as capturing input. The multi-slate approach draws inspiration from the independent reading and writing surfaces that paper provides, to blend the beneficial features of e-book readers, tablets, PCs, and tabletop computers. The development of the multi-slate system began with the Dual-Display E-book, which used two screens to provide richer navigation capabilities than a single-screen device. Following the success of the Dual-Display E-book, the United Slates, a general-purpose reading system consisting of an extensible number of slates, was created. The United Slates consisted of custom slate hardware, specialized interactions that enabled the slates to be used cooperatively, and a cloud-based infrastructure that robustly integrated the slates with users' existing computing devices and workflow. The third contribution is a series of evaluations that characterized reading with multiple slates. A laboratory study with 12 participants compared the relative merits of paper and electronic reading surfaces. 
Month-long in-situ deployments of the United Slates with graduate students in the humanities found the multi-slate configuration to be highly effective for reading. The United Slates system delivered desirable paper-like qualities that included enhanced reading engagement, ease of navigation, and peace of mind, while also providing superior electronic functionality. The positive feedback suggests that the multi-slate configuration is a desirable method for supporting active reading activities.

    Acute effects of blue light on alertness

    Postponed access: the file will be accessible after 2023-11-17. The discovery of a third class of retinal photoreceptors (ipRGCs), sensitive to short-wavelength light (from 460 nm to 480 nm), has influenced research on circadian rhythms over the last two decades, with increasing research focusing on the physiological and psychological effects of blue light and different colour temperatures during both daytime and nighttime. Meanwhile, research on daytime light exposure has given varying results. Piloting an experimental protocol, the acute effects of both monochromatic blue light (λmax 479 nm) and dim light (<5 lux) were assessed in 12 healthy young subjects during the morning hours. Chronotype and sleep prior to the test days were also assessed. Participants were exposed to 1 hour of dim light (<5 lux) prior to testing in both light conditions. Alertness was measured using a psychomotor vigilance test (PVT) at three different times during the procedure. There was no significant effect of either blue or dim light on alertness; however, when total sleep time the night before the dim-light condition was accounted for, there was an effect on alertness, with slower response times in the dim-light condition. Master's thesis in psychology.

    Understanding and Measuring Privacy and Security Assertions of Mobile and VR Applications

    The emergence of the COVID-19 pandemic has catalysed a profound transformation in the way consumers use and engage with mobile applications. There has been a noticeable surge in people relying on applications for purposes such as entertainment, remote work, and daily activities. These services collect large amounts of users' personal information and use it in many areas, such as medical and financial systems, but they also pose an unprecedented threat to users' privacy and security. Many international jurisdictions have enacted privacy laws and regulations to restrict the behaviour of apps and define the obligations of app developers. Although various privacy assertions are required in app stores, such as the permission list and the privacy policies, it is usually difficult for regular users to understand the potential threats an app may pose, let alone identify undesired or malicious application behaviours. In this thesis, I have developed a comprehensive framework to assess the current privacy practices of mobile applications. The framework first establishes a knowledge base (including datasets) to model privacy and security assertions. It then builds a sound evaluation system to analyse the privacy practices of mobile applications. Large-scale privacy evaluations were conducted on different real-world datasets, including privacy policies, contact tracing apps, and children's apps, with the aim of revealing the risks associated with mobile application privacy. Lastly, a novel approach to applying differential privacy to streamed spatial data in VR applications is proposed. This thesis provides a comprehensive guideline for the mobile software industry and legislators to build a stronger and safer privacy ecosystem. Thesis (Ph.D.) -- University of Adelaide, School of Computer and Mathematical Sciences, 202
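    As a rough illustration of the final contribution, the sketch below adds independent Laplace noise to each update in a stream of 2-D positions. The epsilon, sensitivity, and all names are assumptions for illustration only; the thesis's actual mechanism for streamed spatial data in VR, including how the privacy budget is managed across the stream, is more involved than this.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample zero-mean Laplace(scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def privatize_stream(positions, epsilon, sensitivity=1.0, seed=0):
    """Add independent Laplace noise to each coordinate of each update.
    Note: naive per-update noise consumes epsilon per release; a real
    streaming mechanism must budget epsilon across the whole stream."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    for x, y in positions:
        yield (x + laplace_noise(scale, rng), y + laplace_noise(scale, rng))

# A short, invented movement track:
track = [(0.0, 0.0), (1.0, 1.5), (2.0, 3.0)]
noisy = list(privatize_stream(track, epsilon=0.5))
```

    Smaller epsilon values give stronger privacy but noisier positions, which is the trade-off any such VR telemetry mechanism has to balance.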

    Using graphical representation of user interfaces as visual references

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 129-133). My thesis investigates using a graphical representation of user interfaces - screenshots - as a direct visual reference to support various kinds of applications. We have built several systems to demonstrate and validate this idea in domains like searching documentation, GUI automation and testing, and cross-device information migration. In particular, Sikuli Search enables users to search documentation using screenshots of GUI elements instead of keywords. Sikuli Script enables users to programmatically control GUIs without support from the underlying applications. Sikuli Test lets GUI developers and testers create test scripts without coding. Deep Shot introduces a framework and interaction techniques to migrate work states across heterogeneous devices in one action: taking a picture. We also discuss challenges inherent in screenshot-based interactions and propose potential solutions and directions for future research. By Tsung-Hsiang Chang. Ph.D.
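    The core idea behind screenshot-driven search and automation is locating a small template image within a larger screen capture. Sikuli's actual matcher performs fuzzy template matching (via OpenCV); the following is only a minimal exact-match sketch of the idea, with invented names and toy pixel grids.

```python
def find_template(screen, patch):
    """Return (row, col) of the first exact occurrence of patch in screen,
    or None if absent. Both arguments are 2-D lists of pixel values."""
    H, W = len(screen), len(screen[0])
    h, w = len(patch), len(patch[0])
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            # Compare the patch row by row against the window at (r, c).
            if all(screen[r + i][c:c + w] == patch[i] for i in range(h)):
                return (r, c)
    return None

screen = [[0, 0, 0, 0],
          [0, 1, 2, 0],
          [0, 3, 4, 0]]
patch = [[1, 2],
         [3, 4]]
# The 2x2 patch's top-left corner sits at row 1, column 1 of the screen.
```

    Real screenshot matching must tolerate rendering differences (anti-aliasing, themes, scaling), which is why similarity-based matching is used in practice rather than exact pixel equality.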

    Augmented analyses: supporting the study of ubiquitous computing systems

    Ubiquitous computing is becoming an increasingly prevalent part of our everyday lives. The reliance of society upon such devices as mobile phones, coupled with the increasing complexity of those devices, is an example of how our everyday human-human interaction is affected by this phenomenon. Social scientists studying human-human interaction must now take into account the effects of these technologies not just on the interaction itself, but also on the approach required to study it. User evaluation is a challenging topic in ubiquitous computing. It is generally considered to be difficult, certainly more so than in previous computational settings. Heterogeneity in design, distributed and mobile users, invisible sensing systems and so on, all add up to render traditional methods of observation and evaluation insufficient to construct a complete view of interactional activity. These challenges necessitate the development of new observational technologies. This thesis explores some of those challenges and demonstrates that system logs, with suitable methods of synchronising, filtering and visualising them for use in conjunction with more traditional observational approaches such as video, can be used to overcome many of these issues. Through a review of both the literature of the field and the state of the art of computer aided qualitative data analysis software (CAQDAS), a series of guidelines are constructed showing what would be required of a software toolkit to meet the challenges of studying ubiquitous computing systems. The thesis then outlines the design and implementation of two such software packages, Replayer and Digital Replay System, which approach the problem from different angles, the former focussed on visualising and exploring the data in system logs and the latter focussing on supporting the methods used by social scientists to perform qualitative analyses.
The thesis shows through case studies how this technique can be applied to add significant value to the qualitative analysis of ubiquitous computing systems: how the coordination of system logs and other media can help us find information in the data that would otherwise be inaccessible; how it enables studies in locations and settings that would otherwise be impossible, or at least very difficult; and how creating accessible qualitative data analysis tools allows people who could not previously have studied particular settings or technologies to do so. This software aims to demonstrate the direction in which other CAQDAS packages may have to move in order to support the study of the characteristics of human-computer and human-human interaction in a world increasingly reliant upon ubiquitous computing technology.
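    The log/video coordination described above reduces, at its core, to a timestamp alignment step: mapping each logged event onto the video's timeline and filtering to the recorded interval. The sketch below is an invented, minimal illustration of that step, not code from Replayer or Digital Replay System; event names and times are made up.

```python
def align_events(events, video_start, video_duration):
    """Map (timestamp, label) log events onto seconds into a video
    recording, keeping only events that fall within the recording."""
    aligned = []
    for ts, label in events:
        t = ts - video_start  # shift the log clock onto the video timeline
        if 0.0 <= t <= video_duration:
            aligned.append((t, label))
    return sorted(aligned)

# Invented log entries (epoch-like seconds) and a 120-second recording
# that starts at t=1000.0 on the same clock:
log = [(1042.0, "ui:tap"), (1000.5, "sensor:rfid"), (1999.0, "sensor:gps")]
clips = align_events(log, video_start=1000.0, video_duration=120.0)
```

    In practice the two clocks also drift and must first be synchronised, e.g. against a shared marker event visible in both the log and the video.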

    Spatial Interaction for Immersive Mixed-Reality Visualizations

    Growing amounts of data, both in personal and professional settings, have caused an increased interest in data visualization and visual analytics. Especially for inherently three-dimensional data, immersive technologies such as virtual and augmented reality and advanced, natural interaction techniques have been shown to facilitate data analysis. Furthermore, in such use cases, the physical environment often plays an important role, both by directly influencing the data and by serving as context for the analysis. Therefore, there has been a trend to bring data visualization into new, immersive environments and to make use of the physical surroundings, leading to a surge in mixed-reality visualization research. One of the resulting challenges, however, is the design of user interaction for these often complex systems. In my thesis, I address this challenge by investigating interaction for immersive mixed-reality visualizations regarding three core research questions: 1) What are promising types of immersive mixed-reality visualizations, and how can advanced interaction concepts be applied to them? 2) How does spatial interaction benefit these visualizations and how should such interactions be designed? 3) How can spatial interaction in these immersive environments be analyzed and evaluated? To address the first question, I examine how various visualizations such as 3D node-link diagrams and volume visualizations can be adapted for immersive mixed-reality settings and how they stand to benefit from advanced interaction concepts. For the second question, I study how spatial interaction in particular can help to explore data in mixed reality. There, I look into spatial device interaction in comparison to touch input, the use of additional mobile devices as input controllers, and the potential of transparent interaction panels. 
Finally, to address the third question, I present my research on how user interaction in immersive mixed-reality environments can be analyzed directly in the original, real-world locations, and how this can provide new insights. Overall, with my research, I contribute interaction and visualization concepts, software prototypes, and findings from several user studies on how spatial interaction techniques can support the exploration of immersive mixed-reality visualizations.