1,752 research outputs found

    Using natural user interfaces to support synchronous distributed collaborative work

    Synchronous Distributed Collaborative Work (SDCW) occurs when group members work together at the same time from different places to achieve a common goal. Effective SDCW requires good communication, continuous coordination and shared information among group members. SDCW is possible because of groupware, a class of computer software systems that supports group work. Shared-workspace groupware systems provide a common workspace that aims to replicate aspects of a physical workspace shared among group members in a co-located environment. These systems have failed to provide the same degree of coordination and awareness among distributed group members that exists in co-located groups, owing to the unintuitive interaction techniques they incorporate. Natural User Interfaces (NUIs) focus on reusing natural human abilities such as touch, speech, gestures and proximity awareness to allow intuitive human-computer interaction. These interaction techniques could address the existing issues of groupware systems by breaking down the barrier between people and technology created by the interaction techniques currently used. The aim of this research was to investigate how NUI interaction techniques could be used to effectively support SDCW. An architecture for such a shared-workspace groupware system was proposed, and a prototype called GroupAware was designed and developed based on this architecture. GroupAware allows multiple users in distributed locations to simultaneously view and annotate text documents and create graphic designs in a shared workspace. Documents are represented as visual objects that can be manipulated through touch gestures. Group coordination and awareness are maintained through document updates via immediate workspace synchronization, user action tracking via user labels, and user availability identification via basic proxemic interaction. Members can communicate effectively via audio and video conferencing. A user study was conducted to evaluate GroupAware and determine whether NUI interaction techniques effectively supported SDCW. Ten groups of three members each participated in the study. High levels of performance, user satisfaction and collaboration demonstrated that GroupAware was an effective groupware system that was easy to learn and use, and that it effectively supported group work in terms of communication, coordination and information sharing. Participants gave highly positive comments about the system that further supported these results. The successful implementation of GroupAware and the positive results obtained from the user evaluation provide evidence that NUI interaction techniques can effectively support SDCW.
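A minimal sketch of how the shared-workspace synchronization and awareness mechanisms described above could be structured is shown below. The class names, event fields and broadcast callbacks are illustrative assumptions, not the actual GroupAware implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class DocumentObject:
    """A document represented as a manipulable visual object in the workspace."""
    doc_id: str
    x: float = 0.0
    y: float = 0.0
    scale: float = 1.0
    annotations: List[str] = field(default_factory=list)

class SharedWorkspace:
    """Keeps distributed clients consistent by applying each update locally
    and immediately rebroadcasting it (illustrative, not the GroupAware code)."""

    def __init__(self) -> None:
        self.documents: Dict[str, DocumentObject] = {}
        self.listeners: List[Callable[[dict], None]] = []  # e.g. per-client send functions

    def subscribe(self, listener: Callable[[dict], None]) -> None:
        self.listeners.append(listener)

    def apply(self, event: dict) -> None:
        """Apply a touch-gesture or annotation event, then notify all clients.
        The event carries a user label so peers can track who did what."""
        doc = self.documents.setdefault(event["doc_id"], DocumentObject(event["doc_id"]))
        if event["type"] == "move":
            doc.x, doc.y = event["x"], event["y"]
        elif event["type"] == "scale":
            doc.scale = event["factor"]
        elif event["type"] == "annotate":
            doc.annotations.append(f'{event["user"]}: {event["text"]}')
        for notify in self.listeners:
            notify(event)  # immediate workspace synchronization

# Example: two "clients" share a workspace and see each other's actions.
ws = SharedWorkspace()
ws.subscribe(lambda e: print("update for user A:", e))
ws.subscribe(lambda e: print("update for user B:", e))
ws.apply({"type": "annotate", "doc_id": "report.txt", "user": "alice", "text": "check section 2"})
```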

    Quantifying Collaboration Quality in Face-to-Face Classroom Settings Using MMLA

    The estimation of collaboration quality using manual observation and coding is a tedious and difficult task. Researchers have proposed automating this process by estimating collaboration into a few categories (e.g., high vs. low collaboration). However, such categorical estimation lacks depth and actionability, which can be critical for practitioners. We present a case study that evaluates the feasibility of quantifying collaboration quality and its multiple sub-dimensions (e.g., collaboration flow) in an authentic classroom setting. We collected multimodal data (audio and logs) from two groups collaborating face-to-face in a collaborative writing task. The paper describes our exploration of different machine learning models and compares their performance with that of human coders in the task of estimating collaboration quality along a continuum. Our results show that it is feasible to quantitatively estimate collaboration quality and its sub-dimensions, even from simple features of audio and log data, using machine learning. These findings open possibilities for in-depth automated quantification of collaboration quality, and for the use of more advanced features and algorithms to bring performance closer to that of human coders. This work was funded by the European Union via the European Regional Development Fund in the context of CEITER and Next-Lab (Horizon 2020 Research and Innovation Programme, grant agreements no. 669074 and 731685), by the Junta de Castilla y León (Project VA257P18), and by the Ministerio de Ciencia, Innovación y Universidades (Project TIN2017-85179-C3-2-R).
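The sketch below illustrates, under heavy assumptions, the kind of pipeline the abstract describes: regressing a continuous collaboration-quality score from simple audio and log features and comparing the predictions with human-coded scores. The feature names, the synthetic data and the random-forest regressor are illustrative choices, not the models or features used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-segment features extracted from audio and collaborative-writing logs.
n_segments = 200
speaking_time_balance = rng.uniform(0, 1, n_segments)   # 1 = perfectly balanced turn-taking
overlap_ratio = rng.uniform(0, 0.5, n_segments)         # fraction of overlapping speech
edits_per_minute = rng.poisson(5, n_segments)            # document edit events from the logs
X = np.column_stack([speaking_time_balance, overlap_ratio, edits_per_minute])

# Synthetic "human-coded" collaboration quality on a continuous scale (stand-in for real ratings).
y = 2.0 * speaking_time_balance - 1.5 * overlap_ratio + 0.05 * edits_per_minute
y += rng.normal(0, 0.2, n_segments)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Agreement with the (synthetic) human coders, reported as a correlation.
pred = model.predict(X_test)
print("correlation with human-coded quality:", round(float(np.corrcoef(pred, y_test)[0, 1]), 2))
```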

    Emoji as a Proxy of Emotional Communication

    Nowadays, emoji play a fundamental role in human computer-mediated communication, allowing text messages to convey body language, objects, symbols, or ideas using Unicode-standardized pictographs and logographs. Emoji allow people to express emotions and their personalities more “authentically” by increasing the semantic content of visual messages. The relationship between language, emoji, and emotions is now being studied by several disciplines such as linguistics, psychology, natural language processing (NLP), and machine learning (ML). The last two in particular are employed for the automatic detection of emotions and personality traits, for building emoji sentiment lexicons, and for endowing artificial agents with the ability to express emotions through emoji. In this chapter, we introduce the concept of emoji, review the main challenges in using emoji as a proxy for language and emotions and the ML and NLP techniques used for classifying and detecting emotions from emoji, and present new trends for exploiting discovered emotional patterns in robotic emotional communication.
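A minimal sketch of one technique the chapter surveys, scoring the emotional tone of a message from the emoji it contains via a sentiment lexicon, is given below. The tiny lexicon and scoring rule are illustrative stand-ins; real systems use large, empirically derived emoji sentiment lexicons and trained classifiers.

```python
# Illustrative emoji sentiment lexicon: scores in [-1, 1] (assumed values, not an official resource).
EMOJI_SENTIMENT = {
    "😀": 0.8, "😍": 0.9, "🙂": 0.4,
    "😐": 0.0,
    "😢": -0.7, "😡": -0.9,
}

def emoji_sentiment(text: str) -> float:
    """Average the lexicon scores of every known emoji found in the message.
    Returns 0.0 when the message contains no known emoji."""
    scores = [EMOJI_SENTIMENT[ch] for ch in text if ch in EMOJI_SENTIMENT]
    return sum(scores) / len(scores) if scores else 0.0

print(emoji_sentiment("Great news 😀😍"))    # positive
print(emoji_sentiment("This is fine 😐😢"))  # mildly negative
```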

    Video interaction using pen-based technology

    Dissertation submitted for the degree of Doctor in Informatics. Video can be considered one of the most complete and complex media, and manipulating it is still a difficult and tedious task. This research applies pen-based technology to video manipulation with the goal of improving this interaction. Despite human familiarity with pen-based devices, how they can be used in video interaction to make it more natural while fostering the user's creativity remains an open question. Two types of interaction with video were considered in this work: video annotation and video editing. Each interaction type allows the study of one mode of using pen-based technology: indirectly, through digital ink, or directly, through pen gestures or pressure. This research contributes two approaches to pen-based video interaction: pen-based video annotations and video as ink. The first uses pen-based annotations combined with motion tracking algorithms in order to augment video content with sketches or handwritten notes. It studies how pen-based technology can be used to annotate moving objects and how to maintain the association between a pen-based annotation and the annotated moving object. The second concept replaces digital ink with video content, studying how pen gestures and pressure can be used in video editing and what kinds of changes are needed in the interface to provide a more familiar and creative interaction in this usage context. This work was partially funded by the UTAustin-Portugal Digital Media Program (Ph.D. grant SFRH/BD/42662/2007 - FCT/MCTES); by the HP Technology for Teaching Grant Initiative 2006; by the project "TKB - A Transmedia Knowledge Base for contemporary dance" (PTDC/EAT/AVP/098220/2008, funded by FCT/MCTES); and by CITI/DI/FCT/UNL (PEst-OE/EEI/UI0527/2011).
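The sketch below illustrates the association between a pen annotation and a tracked moving object: ink strokes drawn relative to the object's bounding box at one frame are re-positioned using the box reported by an external (assumed) object tracker at a later frame. The data layout and the translate-and-scale rule are illustrative assumptions, not the thesis's actual algorithm.

```python
from typing import List, Tuple

Point = Tuple[float, float]
Box = Tuple[float, float, float, float]  # (x, y, width, height) of the tracked object

def anchor_strokes(strokes: List[List[Point]], box_ref: Box, box_now: Box) -> List[List[Point]]:
    """Re-map ink strokes drawn over the object at the reference frame onto the
    object's current bounding box, so the annotation follows the moving object."""
    rx, ry, rw, rh = box_ref
    nx, ny, nw, nh = box_now
    sx, sy = nw / rw, nh / rh  # scale change of the tracked object
    remapped = []
    for stroke in strokes:
        remapped.append([(nx + (px - rx) * sx, ny + (py - ry) * sy) for px, py in stroke])
    return remapped

# A scribble drawn around an object at frame 0...
scribble = [[(110.0, 60.0), (120.0, 55.0), (130.0, 60.0), (120.0, 70.0)]]
box_frame0 = (100.0, 50.0, 40.0, 30.0)
# ...and the same object as reported by a tracker 30 frames later.
box_frame30 = (160.0, 80.0, 40.0, 30.0)
print(anchor_strokes(scribble, box_frame0, box_frame30))
```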

    PBLL, Learning and Writing Skills: Fostering Motivation, Creativity and Appreciation for Cultures in the 4th ESO English Classroom

    In this final dissertation I present a learning unit framed within an active methodology, in which writing skills are developed through a PBLL approach that encapsulates the principles of CLT and SLA. Likewise, it analyses how the subject of this unit and its structure can foster creativity, motivation and a feeling of appreciation for other cultures in 4th ESO students at IES Pedro de Luna. To do so, the dissertation starts with an introduction, followed by the main purpose, objectives and justification, and the theoretical and curricular framework. The whole proposal is framed within the European, national and regional curricula in order to generate a solid and reliable base that embraces new perspectives on learning and enhances key competences so as to provide holistic learning. The theoretical framework particularly supports the implementation of PBLL so as to stimulate learners through music towards meaningful English learning and to improve school performance. It also justifies the importance of teaching writing through song composition in order to trigger imagination and self-expression. Other aligned learner-centred methodologies are analysed, since they encapsulate cultural expressions, elaborate on different ways of approaching or resolving problems, and explore self-reflection and cooperative work, thereby helping learners exercise freedom of expression. These premises are further developed in the fourth part, devoted to the methodological design, and in the fifth section, which examines the whole unit plan. The final section offers the conclusions, which summarize the proposal and point out the proposed innovation and possible improvements.

    Enhancing interaction in mixed reality

    With continuous technological innovation, we observe mixed reality emerging from research labs into the mainstream. The arrival of capable mixed reality devices transforms how we are entertained, consume information, and interact with computing systems, with the most recent being able to present synthesized stimuli to any of the human senses and substantially blur the boundaries between the real and virtual worlds. In order to build expressive and practical mixed reality experiences, designers, developers, and stakeholders need to understand and meet its upcoming challenges. This research contributes a novel taxonomy for categorizing mixed reality experiences and guidelines for designing them. We present the results of seven studies examining the challenges and opportunities of mixed reality experiences, the impact of modalities and interaction techniques on the user experience, and how to enhance the experiences. We begin with a study determining user attitudes towards mixed reality in domestic and educational environments, followed by six research probes that each investigate an aspect of reality or virtuality. In the first, a levitating steerable projector enables us to investigate how the real world can be enhanced without instrumenting the user. We show that the presentation of in-situ instructions for navigational tasks leads to a significantly higher ability to observe and recall real-world landmarks. With the second probe, we enhance the perception of reality by superimposing information usually not visible to the human eye. In amplifying human vision, we enable users to perceive thermal radiation visually. Further, we examine the effect of substituting physical components with non-functional tangible proxies or entirely virtual representations. With the third research probe, we explore how to enhance virtuality to enable a user to input text on a physical keyboard while being immersed in the virtual world. Our prototype tracked the user's hands and keyboard to enable generic text input, and our analysis of text entry performance showed the importance and effect of different hand representations. We then investigate how to touch virtuality by simulating generic haptic feedback for virtual reality and show how tactile feedback through quadcopters can significantly increase the sense of presence. Our final research probe investigates the usability and input space of smartphones within mixed reality environments, pairing the user's smartphone as an input device with a secondary physical screen. Based on the findings from these individual research probes, we developed a novel taxonomy for categorizing mixed reality experiences and guidelines for designing such experiences. The taxonomy is based on the human sensory system and human capabilities of articulation. We showcase its versatility and set our research probes into perspective by organizing them inside the taxonomic space. The design guidelines are divided into user-centered and technology-centered guidelines. It is our hope that these will contribute to the bright future of mixed reality systems while emphasizing the new underlying interaction paradigm.
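As an illustration of the kind of taxonomy described above, the sketch below models a mixed reality experience by the human senses it addresses and the articulation capabilities it uses for input, and places two of the research probes inside that space. The dimension values and the classification of the probes are assumptions made for illustration; the thesis's actual taxonomy is richer.

```python
from dataclasses import dataclass
from typing import FrozenSet

# Assumed output (sensory) and input (articulation) dimensions, for illustration only.
SENSES = {"vision", "hearing", "touch", "smell", "taste"}
ARTICULATION = {"hands", "head", "gaze", "voice", "body"}

@dataclass(frozen=True)
class MRExperience:
    name: str
    senses: FrozenSet[str]        # which human senses receive synthesized stimuli
    articulation: FrozenSet[str]  # which capabilities the user articulates input with

    def __post_init__(self):
        assert self.senses <= SENSES and self.articulation <= ARTICULATION

# Two research probes from the abstract, classified by assumption.
quadcopter_haptics = MRExperience("quadcopter tactile feedback",
                                  senses=frozenset({"vision", "touch"}),
                                  articulation=frozenset({"hands", "body"}))
vr_keyboard = MRExperience("physical keyboard text entry in VR",
                           senses=frozenset({"vision"}),
                           articulation=frozenset({"hands"}))

# Simple query over the taxonomic space: which probes involve the sense of touch?
probes = [quadcopter_haptics, vr_keyboard]
print([p.name for p in probes if "touch" in p.senses])
```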

    BeSocratic: An Intelligent Tutoring System for the Recognition, Evaluation, and Analysis of Free-form Student Input

    This dissertation describes a novel intelligent tutoring system, BeSocratic, which aims to help fill the gap between simple multiple-choice systems and free-response systems. BeSocratic targets questions that are free-form in nature yet defined to the point that allows automatic evaluation and analysis. The system includes a set of modules which provide instructors with tools to assess student performance. Beyond text boxes and multiple-choice questions, BeSocratic contains several modules that recognize, evaluate, provide feedback on, and analyze student-drawn structures, including Euclidean graphs, chemistry molecules, computer science graphs, and simple drawings. Our system uses a visual, rule-based authoring system which enables the creation of activities for use within science, technology, engineering, and mathematics classrooms. BeSocratic records each action that students make within the system. Using a set of post-analysis tools, teachers can examine both individual and group performance. We accomplish this using hidden Markov model-based clustering techniques and visualizations. These visualizations can help teachers quickly identify common strategies and errors for large groups of students. Furthermore, analysis results can be used directly to improve activities through advanced detection of student errors and refined feedback. BeSocratic activities have been created and tested at several universities. We report specific results from several activities and discuss how BeSocratic's analysis tools are being used with data from other systems. We specifically detail two chemistry activities and one computer science activity: (1) an activity focused on improving mechanism use, (2) an activity which assesses student understanding of Gibbs energy, and (3) an activity which teaches students the fundamentals of splay trees. In addition to analyzing data collected from students within BeSocratic, we share our visualizations and results from analyzing data gathered with another educational system, PhET.
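As a rough illustration of clustering student action sequences like those BeSocratic records, the sketch below summarizes each student's log as a first-order transition matrix over a small action vocabulary and clusters those matrices with k-means. This is a simplified stand-in for the hidden Markov model-based clustering the dissertation describes, and the action names and logs are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical action vocabulary for drawn-structure activities.
ACTIONS = ["draw_node", "draw_edge", "erase", "submit"]
IDX = {a: i for i, a in enumerate(ACTIONS)}

def transition_features(sequence):
    """Row-normalized first-order transition counts, flattened into a feature vector."""
    counts = np.zeros((len(ACTIONS), len(ACTIONS)))
    for a, b in zip(sequence, sequence[1:]):
        counts[IDX[a], IDX[b]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    probs = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
    return probs.ravel()

# Invented logs: two students who draw then submit, one who erases repeatedly.
logs = [
    ["draw_node", "draw_edge", "draw_edge", "submit"],
    ["draw_node", "draw_node", "draw_edge", "submit"],
    ["draw_node", "erase", "draw_node", "erase", "submit"],
]
X = np.array([transition_features(seq) for seq in logs])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster per student:", labels)
```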