
    Shall I describe it or shall I move closer? Verbal references and locomotion in VR collaborative search tasks

    Research on pointing-based communication within immersive collaborative virtual environments (ICVEs) remains a compelling area of study. Previous studies explored techniques to improve accuracy and reduce errors when hand-pointing from a distance. In this study, we explore how users adapt their behaviour to cope with a lack of pointing accuracy. In an ICVE where users can move (i.e., locomotion), pointing inaccuracy in the absence of laser pointers can be avoided by getting closer to the object of interest. Alternatively, collaborators can enrich their utterances with details that compensate for the lack of pointing precision. Inspired by previous CSCW work on remote desktop collaboration, we measure visual coordination, the implicitness of deictic utterances, and the amount of locomotion. We design an experiment that compares the effects of the presence/absence of laser pointers across hard/easy-to-describe referents. Results show that when users face pointing inaccuracy, they prefer to move closer to the referent rather than enrich the verbal reference.
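    The "amount of locomotion" measure can be approximated from head-tracking logs by summing the distances between consecutive position samples. A minimal sketch (the function name and sample format are illustrative assumptions, not taken from the paper):

    ```python
    import math

    def total_locomotion(positions):
        """Total path length of a user's movement, computed as the sum of
        Euclidean distances between consecutive (x, y, z) head positions
        sampled from the VR tracking log."""
        return sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))

    # A user at 1.7 m eye height moving 1 m along x, then 2 m along z:
    trace = [(0.0, 1.7, 0.0), (1.0, 1.7, 0.0), (1.0, 1.7, 2.0)]
    print(total_locomotion(trace))  # → 3.0
    ```

    Comparing this quantity across the laser-pointer and no-pointer conditions is one way to quantify the "move closer versus describe more" trade-off the study reports.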

    Tietojenkäsittelytieteen päivät 2010 (Finnish Computer Science Days 2010)


    Tracking Eye Movements over Source Code

    Studies on software developers’ behavior guide the development of tools that facilitate source code reading and reviewing. Eye trackers have allowed researchers to study this behavior in more detail: to pinpoint where the developer is looking, or even to detect which source code element the developer is viewing. However, systems that map gaze to characteristics as specific as source code elements are often expensive, either because of the cost of compatible eye trackers or because of the cost of the required software. This project aims to use existing technology to create a lower-cost system that provides information on the source code elements that the developer views.
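    The core of such a mapping, for a monospace editor, is converting a gaze point in screen pixels into a line and column of the displayed source. A minimal sketch under assumed font metrics (the function and parameter names are illustrative; a real system must also handle calibration error and smooth the noisy gaze signal):

    ```python
    def gaze_to_code_position(gaze_x, gaze_y,
                              origin_x, origin_y,
                              char_width, line_height,
                              first_visible_line=1):
        """Map a gaze point in screen pixels to a 1-based (line, column)
        in a monospace editor viewport. origin_* is the top-left pixel of
        the code area; first_visible_line accounts for scrolling."""
        col = int((gaze_x - origin_x) // char_width) + 1
        line = int((gaze_y - origin_y) // line_height) + first_visible_line
        return line, col

    # Gaze 35 px right and 40 px below the code area's origin,
    # with 7 px wide glyphs and 16 px line height:
    print(gaze_to_code_position(135, 60, 100, 20, 7, 16))  # → (3, 6)
    ```

    From the (line, column) pair, the viewed source code element (identifier, keyword, literal) can then be looked up in the file's token stream.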

    Improving User Involvement Through Live Collaborative Creation

    Creating an artifact, such as writing a book, developing software, or performing a piece of music, is often limited to those with domain-specific experience or training. As a consequence, effectively involving non-expert end users in such creative processes is challenging. This work explores how computational systems can facilitate collaboration, communication, and participation in the context of involving users in the process of creating artifacts, while mitigating the challenges inherent to such processes. In particular, the interactive systems presented in this work support live collaborative creation, in which artifact users collaboratively participate in the artifact creation process with creators in real time. In the systems that I have created, I explored liveness, the extent to which the process of creating artifacts and the state of the artifacts are immediately and continuously perceptible, for applications such as programming, writing, music performance, and UI design. Liveness helps preserve natural expressivity, supports real-time communication, and facilitates participation in the creative process. Live collaboration is beneficial for users and creators alike: making the process of creation visible encourages users to engage in the process and better understand the final artifact. Additionally, creators can receive immediate feedback in a continuous, closed loop with users. Through these interactive systems, non-expert participants help create such artifacts as GUI prototypes, software, and musical performances.
    This dissertation explores three topics: (1) the challenges inherent to collaborative creation in live settings, and computational tools that address them; (2) methods for reducing the barriers to entry for live collaboration; and (3) approaches to preserving liveness in the creative process, affording creators more expressivity in making artifacts and affording users access to information traditionally only available in real-time processes. In this work, I showed that enabling collaborative, expressive, and live interactions in computational systems allows the broader population to take part in various creative practices.
    PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145810/1/snaglee_1.pd

    Gaze Analysis methods for Learning Analytics

    Eye-tracking has been shown to be predictive of expertise, task-based success, task difficulty, and the strategies involved in problem solving, in both individual and collaborative settings. In learning analytics, eye-tracking can be a powerful tool, not only to differentiate between levels of expertise and task outcome, but also to give constructive feedback to users. In this dissertation, we show how eye-tracking can be used to understand the cognitive processes underlying dyadic interaction in two contexts: pair program comprehension and learning with a Massive Open Online Course (MOOC). The first context is a typical collaborative work scenario, while the second is a special case of dyadic interaction, namely the teacher-student pair. We also demonstrate, using one example experiment, how findings about the relation between the learning outcome in MOOCs and the students' gaze patterns can be leveraged to design a feedback tool that improves the students' learning outcome and their attention levels while learning through a MOOC video. We also show that gaze can be used as a cue to resolve the teacher's verbal references in a MOOC video, and in this way improve the learning experiences of MOOC students. This thesis comprises five studies. The first study is contextualised within a collaborative setting where the collaborating partners tried to understand a given program. In this study, we examine the relationship among the gaze patterns of the partners, their dialogues, and the level of understanding that the pair attained at the end of the task. The next four studies are contextualised within the MOOC environment. The first MOOC study explores the relationship between the students' performance and their attention level. The second MOOC study, which is a dual eye-tracking study, examines the relation between individual and collaborative gaze patterns and their relation with the learning outcome.
    This study also explores the idea of activating students' knowledge prior to receiving any learning material, and the effect of different ways of activating the students' knowledge on their gaze patterns and their learning outcomes. The third MOOC study, during which we designed a feedback tool based on the results of the first two MOOC studies, demonstrates that the variables we proposed for measuring the students' attention can be leveraged to provide feedback about their gaze patterns. We also show that using this feedback tool improves the students' learning outcome and their attention levels. The fourth and final MOOC study shows that augmenting a MOOC video with the teacher's gaze information helps improve the learning experiences of the students: when the teacher's gaze is displayed, the perceived difficulty of the content decreases significantly compared to the moments when there is no gaze augmentation. In a nutshell, through this dissertation we show that gaze can be used to understand, support, and improve dyadic interaction, in order to increase the chances of achieving a higher level of task-based success.
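    A simple proxy for the kind of attention variable described above is the fraction of a student's gaze samples that land inside the region of the video the teacher is currently referring to. A minimal sketch (the names and the rectangle-based area of interest are illustrative assumptions; the measures used in the thesis are richer):

    ```python
    def in_aoi(sample, aoi):
        """True if a gaze sample (x, y) lies inside a rectangular
        area of interest given as (x0, y0, x1, y1)."""
        x, y = sample
        x0, y0, x1, y1 = aoi
        return x0 <= x <= x1 and y0 <= y <= y1

    def attention_score(gaze_samples, referenced_aoi):
        """Fraction of gaze samples falling inside the area of interest
        the teacher is currently referencing: a 'with-me-ness'-style
        proxy for the student's attention."""
        if not gaze_samples:
            return 0.0
        hits = sum(in_aoi(s, referenced_aoi) for s in gaze_samples)
        return hits / len(gaze_samples)

    samples = [(120, 80), (130, 90), (400, 300), (125, 85)]
    aoi = (100, 60, 200, 120)  # region of the slide being referenced
    print(attention_score(samples, aoi))  # → 0.75
    ```

    Tracking this score over a sliding window is one way a feedback tool could decide when to alert a student that their attention has drifted from the referenced content.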

    Rethinking Productivity in Software Engineering

    Get the most out of this foundational reference and improve the productivity of your software teams. This open access book collects the wisdom of the 2017 Dagstuhl seminar on productivity in software engineering, a meeting of community leaders who came together with the goal of rethinking traditional definitions and measures of productivity. The result of their work, Rethinking Productivity in Software Engineering, includes chapters covering definitions and core concepts related to productivity, guidelines for measuring productivity in specific contexts, best practices and pitfalls, and theories and open questions on productivity. You'll benefit from the many short chapters, each offering a focused discussion on one aspect of productivity in software engineering. Readers in many fields and industries will benefit from this collected work. Developers wanting to improve their personal productivity will learn effective strategies for overcoming common issues that interfere with progress. Organizations thinking about building internal programs for measuring the productivity of programmers and teams will learn best practices from industry and researchers. And researchers can leverage the conceptual frameworks and rich body of literature in the book to effectively pursue new research directions.
    What You'll Learn:
    - Review the definitions and dimensions of software productivity
    - See how time management is having the opposite of the intended effect
    - Develop valuable dashboards
    - Understand the impact of sensors on productivity
    - Avoid software development waste
    - Work with human-centered methods to measure productivity
    - Look at the intersection of neuroscience and productivity
    - Manage interruptions and context-switching
    Who This Book Is For: industry developers and those responsible for seminar-style courses that include a segment on software developer productivity. Chapters are written for a generalist audience, without excessive use of technical terminology. The book collects the wisdom of software engineering thought leaders in a form digestible for any developer, shares hard-won best practices and pitfalls to avoid, and offers an up-to-date look at current practices in software engineering productivity.

    Designing to Support Workspace Awareness in Remote Collaboration using 2D Interactive Surfaces

    Increasing distribution of the global workforce is leading to collaborative work among remote coworkers. The emergence of such remote collaboration is essentially supported by technological advancements in screen-based devices, ranging from tablets and laptops to large displays. However, these devices, especially personal and mobile computers, still suffer from limitations caused by their form factors that hinder workspace awareness through non-verbal communication such as bodily gestures or gaze. This thesis thus aims to design novel interfaces and interaction techniques to improve remote coworkers' workspace awareness through such non-verbal cues using 2D interactive surfaces.
    The thesis starts off by exploring how visual cues support workspace awareness in facilitated brainstorming among hybrid teams of co-located and remote coworkers. Based on insights from this exploration, the thesis introduces three interfaces for mobile devices that help users maintain and convey workspace awareness with their coworkers. The first interface is a virtual environment that allows a remote person to effectively maintain awareness of his/her co-located collaborators' activities while interacting with the shared workspace. To help a person better express hand gestures in remote collaboration using a mobile device, the second interface presents a lightweight add-on for capturing hand images on and above the device's screen and overlaying them on collaborators' devices to improve their workspace awareness. The third interface strategically leverages the entire screen space of a conventional laptop to better convey a remote person's gaze to his/her co-located collaborators.
    Building on top of these three interfaces, the thesis envisions an interface that supports a person using a mobile device to effectively collaborate with remote coworkers working with a large display. Together, these interfaces demonstrate the possibilities of innovating on commodity devices to offer richer non-verbal communication and better support workspace awareness in remote collaboration.
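    A basic building block for conveying a remote person's gaze on a local screen is remapping the gaze position, expressed in normalized coordinates on the shared workspace, to a pixel on the local display where a gaze cursor can be drawn. A minimal sketch (the function name and the clamping policy are illustrative assumptions, not the thesis's design):

    ```python
    def map_remote_gaze(u, v, local_width, local_height):
        """Map a remote collaborator's gaze, given as normalized (u, v)
        coordinates on the shared workspace, to a pixel position on the
        local screen. Clamping pins off-workspace gaze to the nearest
        screen edge so the cue never disappears."""
        u = min(max(u, 0.0), 1.0)
        v = min(max(v, 0.0), 1.0)
        return round(u * (local_width - 1)), round(v * (local_height - 1))

    # Remote gaze at the workspace centre, shown on a 1440x900 laptop:
    print(map_remote_gaze(0.5, 0.5, 1440, 900))  # → (720, 450)
    ```

    Because the local and remote displays differ in size and aspect ratio, normalizing against the shared workspace rather than either physical screen keeps the cue consistent for all collaborators.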