
    CodeGazer: Making Code Navigation Easy and Natural with Gaze Input

    Navigating source code, an activity common in software development, is time consuming and in need of improvement. We present CodeGazer, a prototype for source code navigation that uses eye gaze for common navigation functions. These include actions such as “Go to Definition” and “Find All Usages” of an identifier, navigating to files and methods, moving back and forth between visited points in code, and scrolling. We present user study results showing that many users liked and even preferred the gaze-based navigation, in particular the “Go to Definition” function. Gaze-based navigation also holds up well in completion time when compared to traditional methods. We discuss how eye gaze can be integrated into traditional mouse-and-keyboard applications in order to make “look up” tasks more natural.

    PerfVis: Pervasive Visualization in Immersive Augmented Reality for Performance Awareness

    Developers are usually unaware of the impact of code changes on the performance of software systems. Although developers can analyze the performance of a system by executing, for instance, a performance test to compare two consecutive versions of the system, changing from a programming task to a testing task would disrupt the development flow. In this paper, we propose the use of a city visualization that dynamically provides developers with a pervasive view of the continuous performance of a system. We use an immersive augmented reality device (Microsoft HoloLens) to display our visualization and extend the integrated development environment on a computer screen to use the physical space. We report on technical details of the design and implementation of our visualization tool, and discuss early feedback that we collected on its usability. Our investigation explores a new visual metaphor to support the exploration and analysis of possibly very large and multidimensional performance data. Our initial results indicate that the city metaphor can be adequate for analyzing dynamic performance data on a large and non-trivial software system.
    Comment: ICPE'19 vision paper, 4 pages, 2 figures

    Understanding Eye Gaze Patterns in Code Comprehension

    Program comprehension is a sub-field of software engineering that seeks to understand how developers understand programs. Comprehension acts as a starting point for many software engineering tasks such as bug fixing, refactoring, and feature creation. The dissertation presents a series of empirical studies to understand how developers comprehend software in realistic settings. The unique aspect of this work is the use of eye tracking equipment to gather fine-grained, detailed information on what developers look at in software artifacts while they perform realistic tasks in an environment familiar to them, namely a context including both an Integrated Development Environment (Eclipse or Visual Studio) and a web browser (Google Chrome). The iTrace eye tracking infrastructure is used for certain eye tracking studies on large code files, as it is able to handle page scrolling and context switching. The first study is a classroom-based study of how students actively trained in the classroom understand grouped units of C++ code. Results indicate students made many transitions between lines that were close together, and were attracted most to if statements and, to a lesser extent, assignment code. The second study seeks to understand how developers use Stack Overflow page elements to build summaries of open source project code. Results indicate participants focused more heavily on the question and answer text and the embedded code than on the title, question tags, or votes. The third study presents a larger code summarization study using different information contexts: Stack Overflow, bug repositories, and source code. Results show participants tended to visit up to two codebase files in either the combined or isolated codebase session, but visited more bug report pages, and spent longer on new Stack Overflow pages, when given either of these two treatments in isolation.
    In the combined session, time spent on the one or two codebase files they viewed dominated the session time. Information learned from tracking developers' gaze in these studies can form foundations for developer behavior models, which we hope can later inform recommendations for actions one might take to achieve workflow goals in these settings. Advisor: Bonita Sharif

    Representational Learning Approach for Predicting Developer Expertise Using Eye Movements

    The thesis analyzes an existing eye-tracking dataset collected while software developers were solving bug fixing tasks in an open-source system. The analysis is performed using a representational learning approach, namely a Multi-layer Perceptron (MLP). The novel aspect of the analysis is the introduction of a new feature engineering method based on the eye-tracking data, which is then used to predict developer expertise on the data. The dataset used in this thesis is inherently more complex because it is collected in a very dynamic environment, i.e., the Eclipse IDE, using an eye-tracking plugin, iTrace. Previous work in this area only used short code snippets that do not represent how developers usually program in a realistic setting. A comparative analysis between representational learning and non-representational learning (Support Vector Machine, Naive Bayes, Decision Tree, and Random Forest) is also presented. The results are obtained from an extensive set of experiments (with an 80/20 training and testing split), which show that representational learning (MLP) works well on our dataset, reporting on average 30% higher accuracy across all tasks. Furthermore, a state-of-the-art method for feature engineering is proposed to extract features from the eye-tracking data. The average accuracy on all the tasks is 93.4%, with a recall of 78.8% and an F1 score of 81.6%. We discuss the implications of these results for the future of automated prediction of developer expertise. Adviser: Bonita Sharif
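    The pipeline the abstract describes (engineered eye-tracking features, an 80/20 split, and an MLP classifier for expertise) can be sketched as follows. This is a minimal illustration with synthetic data: the feature names, label rule, and network size are assumptions, not details from the thesis.

    ```python
    # Hedged sketch: predicting developer expertise from engineered
    # eye-tracking features with an MLP, per the abstract's setup.
    # The data and features here are synthetic placeholders.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    # Each row: hypothetical features such as mean fixation duration,
    # fixation count, and mean saccade length for one task session.
    X = rng.normal(size=(200, 3))
    # Synthetic expertise labels (0 = novice, 1 = expert) for illustration.
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

    # 80/20 training and testing split, as reported in the thesis.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                        random_state=0).fit(X_train, y_train)
    accuracy = clf.score(X_test, y_test)
    ```

    A non-representational baseline (e.g. `sklearn.svm.SVC` or `RandomForestClassifier`) can be dropped into the same split for the comparison the thesis performs.
    
    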

    Smart Digital Signage With Eye Tracking System

    Digital signage is a more effective advertising solution than the traditional sign board, since it is able to show multimedia content and the advertising information can be easily updated. Current digital signage systems have limited user-interactive capability. Besides that, current systems also lack a way to collect viewers' behaviour for analytic purposes. With the advancement of information technology, smart signage systems allow some interaction between the viewer and the signage. In this project, a smart digital signage system capable of interaction between a user's mobile device and the signage system is proposed. The user's application on the mobile device provides a convenient way of navigating and storing the digital advertisements shown on the signage system instead of relying on a paper brochure. Besides interactive capability, the proposed system is also able to detect faces and eyes to count users' viewing duration for each advertisement shown on the display. The face and eye detection was implemented using the Haar cascade classifier available in the OpenCV library. A sanity-checking algorithm is proposed to filter out wrongly detected eyes and improve the overall detection rate. Test results showed that the implemented face and eye detection achieves an 82.5% true positive rate and a 17.5% false negative rate. Tests on the complete system showed that the proposed smart signage system works as expected. It is able to collect users' viewing duration for each advertisement with an average error of less than 12%.
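    The detection stage described above can be sketched with OpenCV's bundled Haar cascades. The abstract does not specify its sanity-checking algorithm, so the filter below (keep at most two eyes, in the upper half of a detected face region) is an assumed, illustrative rule, not the project's actual one.

    ```python
    # Hedged sketch: face and eye detection with OpenCV Haar cascades,
    # plus a simple sanity check over the detected eyes. The filtering
    # rule here is an assumption for illustration.
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    def detect_faces_and_eyes(gray):
        """Return a list of (face_rect, eye_rects) pairs for a grayscale frame,
        discarding eye detections that fail a simple plausibility check."""
        results = []
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5)
        for (x, y, w, h) in faces:
            roi = gray[y:y + h, x:x + w]  # search for eyes only inside the face
            eyes = eye_cascade.detectMultiScale(roi)
            # Sanity check (assumed rule): a frontal face has at most two
            # eyes, and their centers lie in the upper half of the face.
            eyes = [(ex, ey, ew, eh) for (ex, ey, ew, eh) in eyes
                    if ey + eh / 2 < h / 2][:2]
            results.append(((x, y, w, h), eyes))
        return results
    ```

    In a signage deployment, per-frame results like these would feed a timer that accumulates viewing duration while a face remains detected.
    
    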

    SIMNET: simulation-based exercises for computer network curriculum through gamification and augmented reality

    In recent years, gamification and augmented reality techniques have been applied to many subjects and environments. In particular, their implementation can strengthen teaching and learning processes in schools and universities, giving rise to new forms of knowledge based on interaction with objects that combine play, experimentation, and collaborative work. Using the technologies mentioned above, we intend to develop an application that serves as a didactic tool supporting the area of Computer Networks. This application aims to provide simulated, controlled environments for creating computer networks, taking into account the necessary physical devices and the different physical and logical topologies. The main goal is to enrich students' learning experiences and contribute to teacher-student interaction through the collaborative learning provided by the tool, minimizing the need for expensive equipment in learning environments.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech