15,030 research outputs found
Do explicit review strategies improve code review performance? Towards understanding the role of cognitive load
Code review is an important process in software engineering – yet, a very expensive one. Therefore, understanding code review and how to improve reviewers' performance is paramount. In the study presented in this work, we test whether providing developers with explicit reviewing strategies improves their review effectiveness and efficiency. Moreover, we verify whether review guidance lowers developers' cognitive load. We employ an experimental design in which professional developers perform three code review tasks. Participants are assigned to one of three treatments: ad hoc reviewing, checklist, and guided checklist. The guided checklist was developed to provide an explicit reviewing strategy to developers. While the checklist is a simple form of signaling (a method to reduce cognitive load), the guided checklist incorporates further methods to lower the cognitive demands of the task, such as segmenting and weeding. The majority of the participants are novice reviewers with little or no code review experience. Our results indicate that the guided checklist is a more effective aid for a simple review, while the checklist supports reviewers' efficiency and effectiveness in a complex task. However, we did not identify a strong relationship between the guidance provided and code review performance. The checklist has the potential to lower developers' cognitive load, but higher cognitive load led to better performance, possibly due to the generally low effectiveness and efficiency of the study participants. Data and materials: https://doi.org/10.5281/zenodo.5653341. Registered report: https://doi.org/10.17605/OSF.IO/5FPTJ. © 2022, The Author(s)
Reframing the L2 learning experience as narrative reconstructions of classroom learning
In this study we investigate the situated and dynamic nature of the L2 learning experience through a newly-purposed instrument called the Language Learning Story Interview, adapted from McAdams' life story interview (2007). Using critical case sampling, data were collected from an equal number of learners of various L2s (e.g., Arabic, English, Mandarin, Spanish) and analyzed using qualitative comparative analysis (Rihoux & Ragin, 2009). Through our data analysis, we demonstrate how language learners construct overarching narratives of the L2 learning experience and what the characteristic features and components that make up these narratives are. Our results provide evidence for prototypical nuclear scenes (McAdams et al., 2004) as well as core specifications and parameters of learners' narrative accounts of the L2 learning experience. We discuss how these shape motivation and language learning behavior.
SNAP: Stateful Network-Wide Abstractions for Packet Processing
Early programming languages for software-defined networking (SDN) were built
on top of the simple match-action paradigm offered by OpenFlow 1.0. However,
emerging hardware and software switches offer much more sophisticated support
for persistent state in the data plane, without involving a central controller.
Nevertheless, managing stateful, distributed systems efficiently and correctly
is known to be one of the most challenging programming problems. To simplify
this new SDN problem, we introduce SNAP.
SNAP offers a simpler "centralized" stateful programming model, by allowing
programmers to develop programs on top of one big switch rather than many.
These programs may contain reads and writes to global, persistent arrays, and
as a result, programmers can implement a broad range of applications, from
stateful firewalls to fine-grained traffic monitoring. The SNAP compiler
relieves programmers of having to worry about how to distribute, place, and
optimize access to these stateful arrays by doing it all for them. More
specifically, the compiler discovers read/write dependencies between arrays and
translates one-big-switch programs into an efficient internal representation
based on a novel variant of binary decision diagrams. This internal
representation is used to construct a mixed-integer linear program, which
jointly optimizes the placement of state and the routing of traffic across the
underlying physical topology. We have implemented a prototype compiler and
applied it to about 20 SNAP programs over various topologies to demonstrate our
techniques' scalability.
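The one-big-switch model the abstract describes can be illustrated with a short sketch. This is a hypothetical Python simulation, not SNAP's actual syntax or compiler: it models the stateful-firewall example by letting packet-processing code read and write a single global, persistent array, exactly the kind of state SNAP's compiler would then place and distribute across the physical topology.

```python
# Hypothetical sketch of a one-big-switch stateful program (assumed
# API; SNAP's real language differs). A global persistent array `seen`
# records outbound flows; inbound traffic is admitted only if the
# internal host initiated contact first -- a classic stateful firewall.

def make_firewall(internal_hosts):
    seen = {}  # global persistent array: (internal, external) -> True

    def process(pkt):
        src, dst = pkt["src"], pkt["dst"]
        if src in internal_hosts:
            seen[(src, dst)] = True   # write to the stateful array
            return "forward"
        if seen.get((dst, src)):      # read: was the reverse flow seen?
            return "forward"
        return "drop"

    return process

fw = make_firewall({"h1", "h2"})
print(fw({"src": "h1", "dst": "e1"}))  # outbound: forward
print(fw({"src": "e1", "dst": "h1"}))  # return traffic: forward
print(fw({"src": "e2", "dst": "h2"}))  # unsolicited inbound: drop
```

In SNAP itself the programmer writes only the read/write logic over the abstract array; deciding which physical switch holds `seen`, and routing traffic through it, is the job of the compiler's mixed-integer linear program.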
Interaction Histories and Short-Term Memory: Enactive Development of Turn-Taking Behaviours in a Childlike Humanoid Robot
In this article, an enactive architecture is described that allows a humanoid robot to learn to compose simple actions into turn-taking behaviours while playing interaction games with a human partner. The robot's action choices are reinforced by social feedback from the human in the form of visual attention and measures of behavioural synchronisation. We demonstrate that the system can acquire and switch between behaviours learned through interaction based on social feedback from the human partner. The role of reinforcement based on a short-term memory of the interaction was experimentally investigated. Results indicate that feedback based only on the immediate experience was insufficient to learn longer, more complex turn-taking behaviours. Therefore, some history of the interaction must be considered in the acquisition of turn-taking, which can be efficiently handled through the use of short-term memory.
Neural Dynamics of Learning and Performance of Fixed Sequences: Latency Pattern Reorganizations and the N-STREAMS Model
Fixed sequences performed from memory play a key role in human cultural behavior, especially in music and in rapid communication through speaking, handwriting, and typing. Upon first performance, fixed sequences are often produced slowly, but extensive practice leads to performance that is both fluid and as rapid as allowed by constraints inherent in the task or the performer. The experimental study of fixed sequence learning and production has generated a large database with some challenging findings, including practice-related reorganizations of temporal properties of performance. In this paper, we analyze this literature and identify a coherent set of robust experimental effects. Among these are the sequence length effect on latency (a dependence of reaction time on sequence length) and the practice-dependent loss of this length effect. We then introduce a neural network architecture capable of explaining these effects. Called the N-STREAMS model, this multi-module architecture embodies the hypothesis that the brain uses several substrates for serial order representation and learning. The theory describes three such substrates and how learning autonomously modifies their interaction over the course of practice. A key feature of the architecture is the co-operation of a 'competitive queuing' performance mechanism with both fundamentally parallel ('priority-tagged') and fundamentally sequential ('chain-like') representations of serial order. A neurobiological interpretation of the architecture suggests how different parts of the brain divide the labor for serial learning and performance. Rhodes (1999) presents a complete mathematical model as an implementation of the architecture and reports successful simulations of the major experimental effects.
It also highlights how the network mechanisms incorporated in the architecture compare and contrast with earlier substrates proposed for competitive queuing, priority tagging and response chaining. Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-92-J-1309, N00014-93-1-1364, N00014-95-1-0409); National Institutes of Health (R01 DC02852).
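The 'competitive queuing' mechanism named in the abstract has a simple computational core that can be sketched in a few lines. This is an illustrative toy, not the N-STREAMS implementation: a parallel plan layer holds graded activations (the 'priority tags') for every item at once; at each step the most active item wins the competition, is performed, and is then self-inhibited so the next-strongest item can win.

```python
# Toy competitive-queuing sketch (illustrative only, not N-STREAMS):
# all items are active in parallel; output order emerges from the
# activation gradient via repeated choose-the-max plus self-inhibition.

def competitive_queuing(priorities):
    plan = dict(priorities)               # item -> activation (priority tag)
    sequence = []
    while plan:
        winner = max(plan, key=plan.get)  # competition: strongest item wins
        sequence.append(winner)           # perform the winning item
        del plan[winner]                  # self-inhibit so the next can win
    return sequence

# A parallel activation gradient yields strictly ordered performance:
print(competitive_queuing({"A": 0.9, "B": 0.7, "C": 0.5}))  # ['A', 'B', 'C']
```

The point of the sketch is that serial order is stored nowhere as a chain: it is implicit in the parallel gradient, which is what distinguishes the 'priority-tagged' substrate from the 'chain-like' one the abstract contrasts it with.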