    Spatial Encoding Strategy Theory: The Relationship between Spatial Skill and STEM Achievement

    Learners’ spatial skill is a reliable and significant predictor of achievement in STEM, including computing education. Spatial skill is also malleable, meaning it can be improved through training. Most cognitive skill training improves performance on only a narrow set of similar tasks, but researchers have found ample evidence that spatial training can broadly improve STEM achievement. We do not yet know the cognitive mechanisms that make spatial skill training broadly transferable when other cognitive training is not, but understanding these mechanisms is important for developing training and instruction that consistently benefits learners, especially those starting with low spatial skill. This paper proposes the spatial encoding strategy (SpES) theory to explain the cognitive mechanisms connecting spatial skill and STEM achievement. To motivate SpES theory, the paper reviews research from STEM education, the learning sciences, and psychology. SpES theory provides compelling post hoc explanations for the findings from this literature and aligns with neuroscience models of the functions of brain structures. The paper concludes with a plan for testing the theory’s validity and using it to inform future research and instruction. The paper focuses on implications for computing education, but the transferability of spatial skill to STEM performance makes the proposed theory relevant to many education communities.

    Framing Movements for Gesture Interface Design

    Gesture interfaces are an attractive avenue for human-computer interaction, given the range of expression that people are able to engage when gesturing. Consequently, there is a long-running stream of research into gesture as a means of interaction in the field of human-computer interaction. However, most of this research has focussed on the technical challenges of detecting and responding to people’s movements, or on exploring the interaction possibilities opened up by technical developments. There has been relatively little research on how to actually design gesture interfaces, or on the kinds of understandings of gesture that might be most useful to gesture interface designers. Running parallel to research in gesture interfaces, there is a body of research into human gesture, which would seem a useful source from which to draw knowledge that could inform gesture interface design. However, there is a gap between the ways that ‘gesture’ is conceived of in gesture interface research and in gesture research. In this dissertation, I explore this gap and reflect on the appropriateness of existing research into human gesturing for the needs of gesture interface design. Through a participatory design process, I designed, prototyped and evaluated a gesture interface for the work of the dental examination. Against this grounding experience, I undertook an analysis of the work of the dental examination, with particular focus on the roles that gestures play in that work, in order to compare and discuss existing gesture research. I take the work of the gesture researcher McNeill as a point of focus, because he is widely cited within the gesture interface research literature. I show that although McNeill’s research into human gesture can be applied to some important aspects of the gestures of dentistry, there remains a range of gestures that McNeill’s work does not deal with directly, yet which play an important role in the work and could usefully be responded to with gesture interface technologies. I discuss some other strands of gesture research, which are less widely cited within gesture interface research, but which offer a broader conception of gesture that would be useful for gesture interface design. Ultimately, I argue that the gap in conceptions of gesture between gesture interface research and gesture research is an outcome of the different interests that each community brings to bear on the research. What gesture interface research requires is attention to the problems of designing gesture interfaces for authentic contexts of use, and assessment of existing theory in light of this.

    EMBODIMENT IN COMPUTER SCIENCE LEARNING: HOW SPACE, METAPHOR, GESTURE, AND SKETCHING SUPPORT STUDENT LEARNING

    Recently, correlational studies have found that psychometrically assessed spatial skills may be influential in learning computer science (CS). Correlation does not necessarily mean causation; these correlations could arise for several reasons unrelated to spatial skills. Nonetheless, the results are intriguing when considering how students learn to program and what supports their learning. However, these results are difficult to explain. There is no obvious match between the logic of computer programming and the logic of thinking spatially. CS is not imagistic or visual in the same way as other STEM disciplines, since students cannot see bits or loops. Spatial abilities and STEM performance are highly correlated, but that correlation is unsurprising because most STEM disciplines are highly visual. In this thesis, I used qualitative methods to document how space influences and appears in CS learning. My work is naturalistic and inductive, as little is known about how space influences and appears in CS learning. I draw on constructivist, situative, and distributed learning theories to frame my investigation of space in CS learning. I investigated CS learning through two avenues: first, as a sense-making, problem-solving activity, and second, as a meaning-making and social process between teachers and students. I set out to understand what was actually happening in these classrooms, how students were actually learning, and what supported that learning. While looking for space, I discovered the surprising role that embodiment and metaphor played as students made sense of computation and teachers expressed computational ideas. The implication is that people make meaning from their body-based, lived experiences and not just through their minds, even in a discipline such as computing, which is virtual in nature. For example, teachers use spatial language such as the following when describing a code trace: "then, it goes up here before going back down to the if-statement." The code is not actually going anywhere, but metaphor and embodiment are used to explain the abstract concept. This dissertation makes three main contributions to computing education research. First, I conducted some of the first studies on embodiment and space in CS learning. Second, I present a conceptual framework for the kinds of embodiment in CS learning. Lastly, I present evidence on the importance of metaphor for learning CS.
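
    The abstract's quoted code-trace example can be made concrete with a small, hypothetical snippet; the function, variable names, and values below are illustrative assumptions, not material from the dissertation. It shows the kind of loop-and-if structure a teacher might be tracing when saying control "goes up here before going back down to the if-statement": the spatial language describes control flow, not anything literally moving.

        # Hypothetical snippet of the kind a teacher might trace aloud.
        # After each iteration, control "goes back up" to the loop header,
        # then "comes back down" to the if-statement; the spatial metaphor
        # describes control flow, nothing in the code actually moves.
        def count_even(numbers):
            count = 0
            for n in numbers:        # trace returns "up here" each iteration
                if n % 2 == 0:       # ...before "going back down to the if-statement"
                    count += 1
            return count

        print(count_even([3, 4, 7, 10]))  # prints 2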

    Multi-modal usability evaluation.

    Research into the usability of multi-modal systems has tended to be device-led, with a resulting lack of theory about multi-modal interaction and how it might differ from more conventional interaction. This is compounded by confusion across the various disciplines within the HCI community over the precise definition of modality, how modalities can be effectively classified, and their usability properties. There is a consequent lack of appropriate methodologies and notations to model such interactions and assess the usability implications of these interfaces. The role of expertise and craft skill in using HCI techniques is also poorly understood. This thesis proposes a new definition of modality, and goes on to identify issues of importance to multi-modal usability, culminating in the development of a new methodology to support the identification of such usability issues. It additionally explores the role of expertise and craft skill in using usability modelling techniques to assess usability issues. By analysing the problems inherent in current definitions and approaches, as well as issues relevant to cognitive science, a clear understanding of both the requirements for a suitable definition of modality and the salient usability issues is obtained. A novel definition of modality, based on the three elements of sense, information form, and temporal nature, is proposed. Further, an associated taxonomy is produced, which classifies modalities within the sensory dimension as visual, acoustic, or haptic; within the information form dimension as lexical, symbolic, or concrete; and within the temporal nature dimension as discrete, continuous, or dynamic. This results in a twenty-seven-cell taxonomy, with each cell representing one taxon, indicating one particular type of modality. This is a faceted classification system, in which a modality is named after the intersection of the categories, building the category names into a compound modality name. The issues surrounding modality are examined and refined into the concepts of modality types, properties and clashes. Modalities are identified as belonging to either the system or the user, and as being expressive or receptive in type. Various properties are described based on issues of granularity and redundancy, and five different types of clash are described. Problems relating to the modelling of multi-modal interaction are examined by means of a motivating case study based on a portion of an interface for a robotic arm. The effectiveness of five modelling techniques (STN, CW, CPM-GOMS, PUM and Z) in representing multi-modal issues is assessed. From this, and using the collated definition, taxonomy and theory, a new methodology, Evaluating Multi-modal Usability (EMU), is developed. This is applied to a previous case study of the robotic arm to assess its application and coverage. Both the definition and EMU are used by students in a case study to test their effectiveness and to examine the leverage such an approach may give. The results show that modalities can be successfully identified within an interactive context and that usability issues can be described. Empirical video data of the robotic arm in use is used to confirm the issues identified by the previous analyses and to identify new issues. A rational re-analysis of the six approaches (STN, CW, CPM-GOMS, PUM, Z and EMU) is conducted in order to distinguish between issues identified through craft skill, based on general HCI expertise and familiarity with the problem, and issues identified through the core method of each approach. This is done to gain a realistic understanding of the validity of the claims made by each method, to identify how else issues might be identified, and to draw out the consequent implications. Craft skill is found to have a wider role than anticipated, and the importance of expertise in using such approaches is emphasised. From the case study and the re-analyses, the implications for EMU are examined and suggestions are made for future refinement. The main contributions of this thesis are the new definition, taxonomy and theory, which significantly advance the theoretical understanding of multi-modal usability and help to resolve existing confusion in this area. The new methodology, EMU, is a useful technique for examining interfaces for multi-modal usability issues, although some refinement is required. The importance of craft skill in the identification of usability issues has been explicitly explored, with implications for future work on usability modelling and the training of practitioners in such techniques.
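
    As a minimal sketch, the twenty-seven-cell taxonomy described in the abstract above can be enumerated programmatically. The dimension and category names are taken from the abstract; the hyphenated compound naming scheme and the Python representation are illustrative assumptions rather than the thesis's own notation.

        from itertools import product

        # The three dimensions of the modality taxonomy described in the abstract.
        SENSE = ["visual", "acoustic", "haptic"]
        INFORMATION_FORM = ["lexical", "symbolic", "concrete"]
        TEMPORAL_NATURE = ["discrete", "continuous", "dynamic"]

        def enumerate_taxa():
            """Yield one compound modality name per cell of the taxonomy."""
            for sense, form, temporal in product(SENSE, INFORMATION_FORM, TEMPORAL_NATURE):
                yield f"{sense}-{form}-{temporal}"

        taxa = list(enumerate_taxa())
        assert len(taxa) == 27            # 3 x 3 x 3 cells, one taxon each
        print(taxa[0])                    # "visual-lexical-discrete"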

    A Bibliometric Study on Learning Analytics

    Learning analytics tools and techniques are continually developed and published in scholarly discourse. This study examines the intellectual structure of the Learning Analytics domain by collecting and analyzing empirical articles on Learning Analytics for the period 2004-2018. First, bibliometric and citation analyses of 2,730 documents from Scopus identified the top authors, key research affiliations, leading publication sources (journals and conferences), and research themes of the learning analytics domain. Second, Domain Analysis (DA) techniques were used to investigate the intellectual structures of learning analytics research, publication, organization, and communication (Hjørland & Bourdieu 2014). The VOSviewer software is used to analyze relationships by publication, both historical and institutional; author and institutional relationships; and the dissemination of Learning Analytics knowledge. The results of this study showed that Learning Analytics has captured the attention of the global community. The United States, Spain, and the United Kingdom are among the leading countries contributing to the dissemination of learning analytics knowledge. The leading publication sources are the ACM International Conference Proceeding Series and Lecture Notes in Computer Science. The intellectual structures of the learning analytics domain are presented in this study; the LA research taxonomy can be re-used by teachers, administrators, and other stakeholders to support teaching and learning environments in higher education institutions.
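
    As a rough illustration of the kind of relationship counting that precedes a VOSviewer map, the sketch below tallies co-authorship links from a Scopus-style CSV export. The file name, the "Authors" column, and the semicolon separator are assumptions made for illustration; the study itself used VOSviewer on the 2,730 Scopus records.

        import csv
        from collections import Counter
        from itertools import combinations

        # Minimal sketch: count co-authorship pairs from a Scopus-style CSV export.
        # Column name and separator are assumed; adjust to the actual export format.
        def coauthorship_pairs(path):
            pairs = Counter()
            with open(path, newline="", encoding="utf-8") as f:
                for row in csv.DictReader(f):
                    authors = [a.strip() for a in row.get("Authors", "").split(";") if a.strip()]
                    for pair in combinations(sorted(set(authors)), 2):
                        pairs[pair] += 1
            return pairs

        # pairs = coauthorship_pairs("scopus_learning_analytics_2004_2018.csv")  # hypothetical file
        # print(pairs.most_common(10))  # strongest co-authorship links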