
    Leveraging Targeted Regions of Interest by Analyzing Code Comprehension With AI-Enabled Eye-Tracking

    Code comprehension research studies techniques for extracting information that gives insight into how code is understood. For educators teaching programming courses, this is an important but often difficult task, especially given the challenges of large class sizes, limited time, and grading resources. By analyzing where a student looks during a code comprehension task, instructors can gain insight into what information the student deems important and assess whether they are looking in the right areas of the code. The proportion of time spent viewing a part of the code is also a useful indicator of the student's decision-making process. The goal of this research is to analyze differences in how students' eyes traverse code during comprehension activities and to offer a systematic way of distinguishing students with a solid understanding of the exercise from those who require further assistance. The study uses coding exercises seeded with errors and measures fixation counts and average fixation durations of the students' eyes within targeted regions of interest (TROIs) using an AI-enabled eye-tracking system (NiCATS). The results showed that students' grades (as a proxy for understanding of the code's context and for decision-making skill) were positively correlated with a higher ratio of fixations falling within the TROI.
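
A minimal sketch of the core TROI metrics described above: the share of fixations landing inside a targeted region, and the mean fixation duration within it. The record format, field names, and rectangular region representation are illustrative assumptions, not details of the NiCATS system.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float           # screen x-coordinate (pixels)
    y: float           # screen y-coordinate (pixels)
    duration_ms: float

@dataclass
class Region:
    left: float
    top: float
    right: float
    bottom: float

    def contains(self, f: Fixation) -> bool:
        return self.left <= f.x <= self.right and self.top <= f.y <= self.bottom

def troi_fixation_ratio(fixations: list[Fixation], troi: Region) -> float:
    """Fraction of all fixations that land inside the targeted region of interest."""
    if not fixations:
        return 0.0
    return sum(troi.contains(f) for f in fixations) / len(fixations)

def mean_troi_fixation_duration(fixations: list[Fixation], troi: Region) -> float:
    """Average fixation duration (ms) inside the region; 0 if no fixations hit it."""
    durations = [f.duration_ms for f in fixations if troi.contains(f)]
    return sum(durations) / len(durations) if durations else 0.0
```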

    Semi-Automation in Video Editing

    How can we use artificial intelligence (AI) and machine learning (ML) to make video editing as easy as editing text? In this thesis, the problem of using AI to support video editing is explored from a human-AI interaction perspective, with an emphasis on using AI to support users. Video is a dual-track medium with audio and visual tracks. Editing video requires synchronizing these two tracks with operations precise to the millisecond. Making it as easy as editing text may not be currently possible. How, then, should we support users with AI, and what are the current challenges in doing so? Five key questions drove the research in this thesis. What is the state of the art in using AI to support video editing? What are the needs and expectations of video professionals regarding AI? What are the impacts on the efficiency and accuracy of subtitles when AI is used to support subtitling? What are the changes in user experience brought on by AI-assisted subtitling? How can multiple AI methods be used to support cropping and panning tasks? Throughout, we employed a user-experience-focused and task-based approach to address semi-automation in video editing.
    The first paper of this thesis provided a synthesis and critical review of existing work on AI-based tools for video editing, and offered some answers on how and for what AI can be used to support users, drawing on a survey of 14 video professionals. The second paper presented a prototype of AI-assisted subtitling built on production-grade video editing software; it is the first comparative evaluation of both the performance and the user experience of AI-assisted subtitling, conducted with 24 novice users. The third work described an idiom-based tool for converting widescreen videos made for television to narrower aspect ratios for mobile social media platforms. It explores a new method for performing cropping and panning using five AI models, and presents an evaluation with 5 users and a review with a professional video editor.
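
As a rough illustration of the cropping-and-panning idea in the third work, the sketch below smooths a per-frame salient x-coordinate (e.g., from a face or object detector, abstracted away here) and derives a narrow full-height crop window from a widescreen frame. The function names and parameters are hypothetical, not the thesis tool's actual interface.

```python
def crop_centers(salient_x: list[float], alpha: float = 0.9) -> list[float]:
    """Exponentially smooth per-frame salient x-positions to avoid jittery panning."""
    centers = []
    prev = salient_x[0]
    for x in salient_x:
        prev = alpha * prev + (1 - alpha) * x  # higher alpha = steadier virtual camera
        centers.append(prev)
    return centers

def crop_window(center_x: float, src_w: float, src_h: float,
                target_aspect: float = 9 / 16) -> tuple[float, float, float, float]:
    """Return (left, top, right, bottom) of a full-height crop at the target
    aspect ratio, clamped so the window stays inside the source frame."""
    crop_w = src_h * target_aspect
    left = min(max(center_x - crop_w / 2, 0.0), src_w - crop_w)
    return left, 0.0, left + crop_w, src_h

# e.g., a 1920x1080 frame cropped toward a subject at x = 1500
print(crop_window(1500, 1920, 1080))  # -> (1196.25, 0.0, 1803.75, 1080)
```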

    Toward Establishing a Catalog of Security Architecture Weaknesses

    The architecture design of a software system plays a crucial role in addressing security requirements early in the development lifecycle, by forming design solutions that prevent or mitigate attacks on a system. Consequently, flaws in the software architecture can affect various security concerns in the system, thereby introducing severe breaches that could be exploited by attackers. In this context, this thesis presents the new concept of the Common Architectural Weakness Enumeration (CAWE), a catalog that identifies and categorizes common types of vulnerabilities rooted in software architecture design and provides mitigation techniques for each of them. Through this catalog, we aim to promote awareness of architectural flaws and stimulate security design thinking among developers, architects, and software engineers. This work also investigates the reported vulnerabilities of four real and complex software systems to verify the existence and implications of architectural weaknesses. From this investigation, we noted that a variety of breaches are indeed rooted in software design (at least 35% in the investigated systems), providing evidence that architectural weaknesses frequently occur in complex systems and result in vulnerabilities of medium to high severity. A catalog of such weaknesses can therefore be useful for adopting proactive approaches that avoid design vulnerabilities.
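
To make the idea of a catalog entry concrete, here is a minimal sketch of the kind of record such a catalog might hold, loosely modeled on CWE-style entries (identifier, name, affected security tactic, mitigations). The field names and the example entry are illustrative assumptions, not the actual CAWE schema.

```python
from dataclasses import dataclass, field

@dataclass
class CaweEntry:
    cawe_id: str                  # hypothetical identifier, e.g. "CAWE-001"
    name: str
    security_tactic: str          # architectural tactic the flaw undermines
    description: str
    mitigations: list[str] = field(default_factory=list)

entry = CaweEntry(
    cawe_id="CAWE-001",
    name="Missing authentication at a trust boundary",
    security_tactic="Authenticate actors",
    description="A component accepts requests across a trust boundary "
                "without verifying the caller's identity.",
    mitigations=["Require authentication at every externally reachable entry point"],
)
```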

    Automatically Fixing Syntax Errors Using the Levenshtein Distance

    Abstract: To ensure high-quality software, much emphasis is laid on software testing. While a number of techniques and tools already exist to identify and locate syntax errors, it is still the duty of programmers to manually fix each of these uncovered syntax errors. In this paper we propose an approach for automating the task of fixing syntax errors by using existing compilers and the Levenshtein distance between the identified bug and the possible fixes. The Levenshtein distance is a measure of the similarity between two strings. A prototype, called ASBF, has been built, and a number of tests carried out show that the technique works well in most cases. ASBF is able to automatically fix syntax errors in any erroneous source file and can also process several erroneous files in a source folder. The tests further show that the technique can be applied to multiple programming languages; currently ASBF can automatically fix such bugs in the Java and Python programming languages. The tool also has auto-learning capabilities: it can automatically learn from corrections made manually by a user and thereafter couple this learning with the Levenshtein distance to improve its bug-correction capabilities.
    Keywords: automatically fixing syntax errors, bug fixing, auto-learning, Levenshtein distance, Java, Python. (Article history: received 16 September 2016, accepted 9 December 2016.)
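
A minimal sketch of the core selection step: computing the Levenshtein distance with the classic dynamic program and choosing, among candidate fixes, the one closest to the erroneous line. ASBF's actual candidate generation, compiler integration, and auto-learning are not reproduced here.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def best_fix(erroneous_line: str, candidates: list[str]) -> str:
    """Return the candidate fix most similar to the erroneous line."""
    return min(candidates, key=lambda c: levenshtein(erroneous_line, c))

# e.g., a missing semicolon in a Java statement
print(best_fix("int x = 5", ["int x = 5;", "int x == 5;", "x = 5"]))  # -> "int x = 5;"
```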

    Algorithms for the automated correction of vertical drift in eye-tracking data

    A common problem in eye-tracking research is vertical drift: the progressive displacement of fixation registrations on the vertical axis that results from a gradual loss of eye-tracker calibration over time. This is particularly problematic in experiments that involve the reading of multiline passages, where it is critical that fixations on one line are not erroneously recorded on an adjacent line. Correction is often performed manually by the researcher, but this process is tedious, time-consuming, and prone to error and inconsistency. Various methods have previously been proposed for the automated, post-hoc correction of vertical drift in reading data, but these methods vary greatly, not just in terms of the algorithmic principles on which they are based, but also in terms of their availability, documentation, implementation languages, and so forth. Furthermore, these methods have largely been developed in isolation with little attempt to systematically evaluate them, meaning that drift correction techniques are moving forward blindly. We document ten major algorithms, including two that are novel to this paper, and evaluate them using both simulated and natural eye-tracking data. Our results suggest that a method based on dynamic time warping offers great promise, but we also find that some algorithms are better suited than others to particular types of drift phenomena and reading behavior, allowing us to offer evidence-based advice on algorithm selection.
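
For concreteness, the sketch below implements the simplest family of corrections discussed in this literature, often called "attach": snap each fixation to the nearest line of text. It is a baseline illustration, not the dynamic-time-warping method the abstract favors.

```python
def attach(fixations: list[tuple[float, float]],
           line_ys: list[float]) -> list[tuple[float, float]]:
    """Snap each (x, y) fixation to the y-position of the closest text line."""
    return [(x, min(line_ys, key=lambda ly: abs(ly - y))) for x, y in fixations]

# three lines of text at y = 100, 140, 180; a drifted fixation at y = 151
print(attach([(240.0, 151.0)], [100.0, 140.0, 180.0]))  # -> [(240.0, 140.0)]
```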

    Advances in methods for determining fecundity: application of the new methods to some marine fishes

    Estimation of individual egg production (realized fecundity) is a key step both in understanding the stock-recruitment relationship and in carrying out fisheries-independent assessment of spawning stock biomass using egg production methods. Many fish are highly fecund and their ovaries may weigh over a kilogram; the work can therefore be time-consuming and require large quantities of toxic fixative. Recently it has been shown for Atlantic cod (Gadus morhua) that image analysis can automate fecundity determination using a power equation that links follicles per gram of ovary to the mean vitellogenic follicle diameter (the autodiametric method). In this article we demonstrate the precision of the autodiametric method applied to a range of species with different spawning strategies during maturation and spawning. A new method using a solid-displacement pipette to remove quantitative fecundity samples (25, 50, 100, and 200 mg) is evaluated, as are the underlying assumptions for effectively fixing and subsampling the ovary. Finally, we demonstrate the interpretation of dispersed formaldehyde-fixed ovarian samples (whole mounts) to assess the presence of atretic and postovulatory follicles, replacing labor-intensive histology. These results can be used to estimate down-regulation of fecundity (production of atretic follicles) during maturation.
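
A minimal sketch of the autodiametric calculation, assuming the power equation takes the form density = a * diameter^b (follicles per gram of ovary) with realized fecundity equal to density times ovary weight. The coefficients below are placeholders, not the published values for Atlantic cod.

```python
def fecundity_estimate(ovary_weight_g: float,
                       mean_follicle_diameter_um: float,
                       a: float = 1.0e12,   # hypothetical coefficient
                       b: float = -3.0      # hypothetical exponent
                       ) -> float:
    """Autodiametric form: fecundity = ovary weight * a * diameter**b."""
    density = a * mean_follicle_diameter_um ** b   # follicles per gram of ovary
    return ovary_weight_g * density

# e.g., a 1 kg ovary with 500-micron mean vitellogenic follicle diameter
print(fecundity_estimate(1000.0, 500.0))  # -> 8,000,000 follicles (with placeholder a, b)
```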

    Understanding Eye Gaze Patterns in Code Comprehension

    Program comprehension is a sub-field of software engineering that seeks to understand how developers understand programs. Comprehension acts as a starting point for many software engineering tasks such as bug fixing, refactoring, and feature creation. This dissertation presents a series of empirical studies of how developers comprehend software in realistic settings. The unique aspect of this work is the use of eye-tracking equipment to gather fine-grained, detailed information about what developers look at in software artifacts while they perform realistic tasks in a familiar environment, namely a context including both an Integrated Development Environment (Eclipse or Visual Studio) and a web browser (Google Chrome). The iTrace eye-tracking infrastructure is used for eye-tracking studies on large code files, as it is able to handle page scrolling and context switching. The first study is a classroom-based study of how students actively trained in the classroom understand grouped units of C++ code. Results indicate students made many transitions between lines that were close together and were attracted most to if statements and, to a lesser extent, to assignment code. The second study seeks to understand how developers use Stack Overflow page elements to build summaries of open source project code. Results indicate participants focused more heavily on the question and answer text and the embedded code than on the title, question tags, or votes. The third study presents a larger code summarization study using different information contexts: Stack Overflow, bug repositories, and source code. Results show participants tended to visit at most two codebase files whether the codebase was presented together with the other sources or in isolation, but visited more bug report pages, and spent longer on newly visited Stack Overflow pages, when those two sources were given in isolation. In the combined session, time spent on the one or two codebase files viewed dominated the session time. Information learned from tracking developers' gaze in these studies can form the foundation for developer behavior models, which we hope can later inform recommendations for actions one might take to achieve workflow goals in these settings. Advisor: Bonita Sharif
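
A minimal sketch of the line-transition analysis from the first study: given the sequence of source lines fixated over time, count transitions between distinct consecutive lines. The input format is an assumption for illustration; iTrace's actual output schema is richer.

```python
from collections import Counter

def line_transitions(fixation_lines: list[int]) -> Counter:
    """Count transitions between consecutive, distinct fixated source lines."""
    transitions = Counter()
    for a, b in zip(fixation_lines, fixation_lines[1:]):
        if a != b:
            transitions[(a, b)] += 1
    return transitions

# gaze hopping among nearby lines of an if statement
print(line_transitions([3, 3, 4, 3, 5, 5, 4]))
# -> Counter({(3, 4): 1, (4, 3): 1, (3, 5): 1, (5, 4): 1})
```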

    3D mesh processing using GAMer 2 to enable reaction-diffusion simulations in realistic cellular geometries

    Recent advances in electron microscopy have enabled the imaging of single cells in 3D at nanometer-scale resolution. An uncharted frontier for in silico biology is the ability to simulate cellular processes using these observed geometries. Enabling such simulations requires watertight meshing of electron micrograph images into 3D volume meshes, which can then form the basis of computer simulations of such processes using numerical techniques such as the finite element method. In this paper, we describe the use of our recently rewritten mesh processing software, GAMer 2, to bridge the gap between poorly conditioned meshes generated from segmented micrographs and boundary-marked tetrahedral meshes that are compatible with simulation. We demonstrate a workflow applying GAMer 2 to a series of electron micrographs of neuronal dendrite morphology explored at three different length scales, and show that the resulting meshes are suitable for finite element simulations. This work is an important step towards making physical simulations of biological processes in realistic geometries routine. Innovations in algorithms to reconstruct and simulate cellular-length-scale phenomena based on emerging structural data will enable realistic physical models and advance discovery at the interface of geometry and cellular processes. We posit that a new frontier at the intersection of computational technologies and single-cell biology is now open.
    Comment: 39 pages, 14 figures. High-resolution figures and supplemental movies available upon request.
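
One precondition the workflow above depends on is a watertight surface mesh. The generic check below (not GAMer 2's API) uses the standard criterion that a closed, manifold triangle mesh has every undirected edge shared by exactly two triangles.

```python
from collections import Counter

def is_watertight(triangles: list[tuple[int, int, int]]) -> bool:
    """True if every undirected edge appears in exactly two triangles."""
    edges = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edges[(min(u, v), max(u, v))] += 1
    return all(count == 2 for count in edges.values())

# a tetrahedron's four faces form a closed surface
tet = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_watertight(tet))  # -> True
```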

    Space biology initiative program definition review. Trade study 1: Automation costs versus crew utilization

    A significant emphasis on automation within the Space Biology Initiative hardware appears justified in order to conserve crew labor and crew training effort. Two generic forms of automation were identified: automation of data and information handling and decision making, and automation of material handling, transfer, and processing. The use of automatic data acquisition, expert systems, robots, and machine vision will increase the volume of experiments and the quality of results. The automation described may also influence efforts to miniaturize and modularize the large array of SBI hardware identified to date. The cost and benefit model developed appears to be a useful guideline for SBI equipment specifiers and designers; additional refinements would enhance its validity. Two NASA automation pilot programs, 'The Principal Investigator in a Box' and 'Rack Mounted Robots', were investigated and found to be quite appropriate for adaptation to the SBI program, and other in-house NASA efforts provide technology that may also be appropriate. Important data are believed to exist in advanced medical labs throughout the U.S., Japan, and Europe. Information and data processing in medical analysis equipment is highly automated, and trends point to continued progress in this area. However, automation of material handling and processing has progressed only in a limited manner, because medical labs are not subject to the power and space constraints that Space Station medical equipment faces. Therefore, NASA's major emphasis in automation will require a lead effort in the automation of material handling to achieve optimal crew utilization.
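
As a toy illustration of the trade being studied, the sketch below computes the crew hours an automation investment must save before it breaks even against a crew-hour cost. All figures are hypothetical placeholders, not numbers from the trade study.

```python
def automation_break_even_hours(automation_cost_usd: float,
                                crew_hour_cost_usd: float) -> float:
    """Crew hours that must be saved before automation is cost-neutral."""
    return automation_cost_usd / crew_hour_cost_usd

# e.g., a $2M automation effort against a (hypothetical) $50k cost per crew hour
print(automation_break_even_hours(2_000_000, 50_000))  # -> 40.0 hours
```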