7 research outputs found


    Architecting Analytics Across Multiple E-learning Systems to Enhance Learning Design

    No full text
    With the wide expansion of distributed learning environments, the way we learn has become more diverse than ever. This poses an opportunity to incorporate different data sources of learning traces that can offer broader insights into learner behavior and the intricacies of the learning process. We argue that combining analytics across different e-learning systems can help measure the effectiveness of learning designs and maximize learning opportunities in distributed settings. As a step toward this goal, in this study we considered how to broaden the context of a single learning environment into a learning ecosystem that integrates three separate e-learning systems. We present a cross-platform architecture that captures, integrates, and stores learning-related data from this learning ecosystem. To demonstrate the feasibility and benefits of the cross-platform architecture, we used regression and classification techniques to generate interpretable models whose analytics can help instructors understand learning behavior and make sense of the effect of the instructional method on learner performance. The results show that combining data across the three e-learning systems improves classification accuracy by a factor of 5 compared to data from a single learning system. This article highlights the value of cross-platform learning analytics and presents a springboard for the creation of new cross-system data-driven research practices.
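    The core integration step this abstract describes, combining learning traces from several e-learning systems into one feature table per learner, can be sketched roughly as an outer join on learner ID. The system names and feature names below are illustrative assumptions, not details from the paper:

    ```python
    # Hypothetical sketch: integrate per-system learning traces into one
    # feature table keyed by learner ID. System and feature names are
    # invented for illustration; they are not taken from the paper.

    def integrate(*systems):
        """Outer-join per-system feature dicts on learner ID."""
        merged = {}
        for records in systems:
            for learner_id, features in records.items():
                merged.setdefault(learner_id, {}).update(features)
        return merged

    # Three hypothetical e-learning systems, each observing different learners.
    lms = {"s1": {"logins": 12, "forum_posts": 3}}
    quiz_tool = {"s1": {"quiz_avg": 0.81}, "s2": {"quiz_avg": 0.64}}
    video_platform = {"s2": {"minutes_watched": 95}}

    table = integrate(lms, quiz_tool, video_platform)
    # Each learner now carries the union of features observed across systems;
    # missing features would be imputed before fitting a regression or
    # classification model on the merged table.
    print(table["s1"])  # → {'logins': 12, 'forum_posts': 3, 'quiz_avg': 0.81}
    ```

    The merged table, rather than any single system's log, is what would feed the interpretable models the study mentions.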

    The Future of Emotion in Human-Computer Interaction

    Get PDF
    Emotion has been studied in HCI for two decades, with specific traditions interested in sensing, expressing, transmitting, modelling, experiencing, visualizing, understanding, constructing, regulating, manipulating, or adapting to emotion in human-human and human-computer interactions. This CHI 2022 workshop on the Future of Emotion in Human-Computer Interaction brings together interested researchers to take stock of research on emotion in HCI to date and to explore possible futures. Through group discussion and collaborative speculation, we will address questions such as: What are the relationships between digital technology and human emotion? What roles does emotion play in HCI research? How should HCI researchers conceptualize emotion? When should HCI researchers use interdisciplinary theories of emotion, and when should they create new theory? Can specific emotions be designed for, and where is this knowledge likely to be applied? What are the implications of emotion research for design, ethics, and wellbeing? What is the future of emotion in human-computer interaction?

    Making Sense of Emotion-Sensing: Workshop on Quantifying Human Emotions

    Get PDF
    The global pandemic, and the uncertainty about if and when life will return to normality, have motivated a series of studies on human mental health. This research has produced evidence of increasing rates of anxiety, depression, and overall impaired mental well-being. But the global COVID-19 pandemic has also created new opportunities for research into quantifying human emotions: remotely, contactlessly, and in everyday life. The ubiquitous computing community has long been at the forefront of developing, testing, and building user-facing systems that aim at quantifying human emotion. However, rather than aiming at ever more accurate sensing algorithms, it is time to critically evaluate whether it is actually possible, and in what ways it could be beneficial, for technologies to detect user emotions. In this workshop, we bring together experts from the fields of Ubiquitous Computing, Human-Computer Interaction, and Psychology to merge their expertise, a step long overdue, and ask the fundamental questions: how do we make sense of emotion-sensing, and can and should we quantify human emotions?

    Context-informed scheduling and analysis: improving accuracy of mobile self-reports

    No full text
    Mobile self-reports are a popular technique to collect participant-labelled data in the wild. While the literature has focused on increasing participant compliance with self-report questionnaires, relatively little work has assessed response accuracy. In this paper, we investigate how participant context can affect response accuracy and help identify strategies to improve the accuracy of mobile self-report data. In a 3-week study, we collect over 2,500 questionnaires containing both verifiable and non-verifiable questions. We find that response accuracy is higher for questionnaires that arrive when the phone is not in ongoing or very recent use. Furthermore, our results show that long completion times are an indicator of lower accuracy. Using contextual mechanisms readily available on smartphones, we are able to explain up to 13% of the variance in participant accuracy. We offer actionable recommendations to assist researchers in future deployments of mobile self-report studies.
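    The two contextual indicators this abstract highlights, recent phone use at delivery time and completion time, could be operationalized as a simple screening rule for flagging responses likely to be less accurate. This is not the paper's model; the thresholds and function names are invented for illustration:

    ```python
    # Illustrative sketch (not the study's actual model): screen self-report
    # responses using the two contextual signals the abstract names.
    # Thresholds are assumptions chosen for demonstration only.

    def likely_accurate(seconds_since_last_use, completion_seconds,
                        idle_threshold=60, completion_threshold=120):
        # Accuracy was higher when the phone was NOT in ongoing or very
        # recent use at questionnaire arrival, and long completion times
        # indicated lower accuracy.
        arrived_idle = seconds_since_last_use >= idle_threshold
        completed_quickly = completion_seconds <= completion_threshold
        return arrived_idle and completed_quickly

    print(likely_accurate(300, 45))   # idle phone, quick completion → True
    print(likely_accurate(5, 400))    # mid-use arrival, slow completion → False
    ```

    A researcher could use such a rule to down-weight or follow up on flagged responses rather than discard them outright.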

    Challenges of quantified-self: encouraging self-reported data logging during recurrent smartphone usage

    Get PDF
    We argue that improved data entry can motivate Quantified-Self (QS) users to better engage with QS applications. To improve data entry, we investigate the notion of transforming active smartphone usage into data-logging contributions through alert dialogs. We evaluate this assertion in a 4-week deployment with 48 participants. We collect 17,906 data entries, of which 68.3% are reported using the alert dialogs. We demonstrate that QS applications can benefit from alert dialogs: they increase data precision and frequency, and reduce the probability of forgetfulness in data logging. We investigate the impact of usage session type (e.g., sessions with different goals or durations) and the assigned reminder delay on the frequency of data contributions. We conclude with insights gathered from our investigation and their implications for future designs.
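    The mechanism this abstract describes, piggybacking a data-logging prompt on active smartphone use subject to an assigned reminder delay, can be sketched as a small scheduler. The class name, delay value, and trigger logic below are assumptions for illustration, not the study's implementation:

    ```python
    # Hypothetical sketch of an alert-dialog trigger: prompt for a QS data
    # entry only during an active usage session, and no more often than the
    # assigned reminder delay. All names and values are illustrative.
    import time

    class AlertScheduler:
        def __init__(self, reminder_delay_s):
            self.reminder_delay_s = reminder_delay_s
            self.last_prompt = float("-inf")  # never prompted yet

        def should_prompt(self, session_active, now=None):
            """Show the alert dialog only during active use and only after
            the reminder delay has elapsed since the last prompt."""
            now = time.monotonic() if now is None else now
            if session_active and now - self.last_prompt >= self.reminder_delay_s:
                self.last_prompt = now
                return True
            return False

    sched = AlertScheduler(reminder_delay_s=600)  # assumed 10-minute delay
    print(sched.should_prompt(True, now=0))    # first active session → True
    print(sched.should_prompt(True, now=300))  # too soon → False
    print(sched.should_prompt(True, now=700))  # delay elapsed → True
    ```

    Varying `reminder_delay_s` per participant is one way a deployment could study the effect of the assigned reminder delay on contribution frequency.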