8 research outputs found

    Digital Realities and Academic Research

    There's a change occurring in the delivery of scientific content. The development and application of virtual reality and augmented reality is changing research in nearly every field, from the life sciences to engineering. As a result, scholarly content is also changing direction, from print-centric to fully immersive digital. Historically, scientific content has been simple text and figures. To create higher-quality, more intuitive and engaging content, scholarly communication has witnessed a shift to video and, most recently, researchers have begun to include data to create next-generation content types that supplement and enrich their works. Scholarly communication will continue this trend, requiring the delivery of content that is more innovative and interactive. However, in a world where the PDF has dominated the industry for years, new skills and technologies will be needed to ensure reader use and engagement remain stable as the information services industry shifts to accommodate new forms of content and articles enhanced by virtual and augmented reality. Implementing and delivering augmented or virtual reality supplemental materials, and supporting them with the necessary tools for engagement, is no easy task. For as much as interest, discussion and innovation are occurring, questions will need to be answered, issues addressed, and best practices established, as with all disruptive entrants, so that publisher, author and end-user can benefit from the results of deeper content engagement. For publishers who work directly with scholars and researchers, this pivot means they must re-examine the needs of their customers, understand what they need delivered, where they expect to find that information, and how they want to interact with it. This will require publishers to update their current infrastructures, submission practices and guidelines, as well as develop or license software to keep pace and meet the needs of their authors and readers. This session will help to define the challenges and strengths related to digital realities, data, and the role researchers play in shaping mixed content types in a more data-driven, digital environment. Discussion includes: What are some of the pros and cons associated with data and digital reality research? How are these different content types being used as supplemental material, and will they shift to be seen as a more integral part of the scholarly record? In the future, what role will libraries play in this shift, providing users what they want in a format conducive to their work and research?

    Modelling collaborative problem-solving competence with transparent learning analytics: is video data enough?

    In this study, we describe the results of our research to model collaborative problem-solving (CPS) competence based on analytics generated from video data. We collected ~500 minutes of video data from 15 groups of 3 students working to solve design problems collaboratively. Initially, with the help of OpenPose, we automatically generated frequency metrics, such as the number of faces in the screen, and distance metrics, such as the distance between bodies. Based on these metrics, we built decision trees to predict students' listening, watching, making, and speaking behaviours, as well as the students' CPS competence. Our results provide useful decision rules mined from analytics of video data which can be used to inform teacher dashboards. Although the accuracy and recall values of the models are inferior to previous machine learning work that utilizes multimodal data, the transparent nature of the decision trees provides opportunities for explainable analytics for teachers and learners. This can give teachers and learners more agency and therefore ease adoption. We conclude the paper with a discussion of the value and limitations of our approach.
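
    As a rough illustration of the kind of pipeline this abstract describes, the sketch below derives a simple distance feature from OpenPose-style keypoints and fits a shallow, inspectable decision tree with scikit-learn. The feature names, toy data and labels are invented for the example; they are not the authors' actual metrics or mined rules.

        # Sketch: per-window features from OpenPose-style keypoints feed an
        # interpretable decision tree. All values below are illustrative.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        def body_distance(kp_a, kp_b):
            # Euclidean distance between two students' neck keypoints (x, y).
            return float(np.linalg.norm(np.asarray(kp_a) - np.asarray(kp_b)))

        # Toy features per video window: [faces_in_screen, mean_body_distance]
        X = np.array([[3, 0.8], [2, 1.6], [3, 0.5], [1, 2.1], [3, 0.9], [2, 1.8]])
        y = np.array([1, 0, 1, 0, 1, 0])  # 1 = higher CPS competence (hypothetical)

        tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
        # The printed rules are the kind of transparent output a teacher
        # dashboard could surface directly.
        print(export_text(tree, feature_names=["faces_in_screen", "mean_body_distance"]))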

    Near real-time comprehension classification with artificial neural networks: decoding e-Learner non-verbal behaviour

    Comprehension is an important cognitive state for learning. Human tutors recognise comprehension and non-comprehension states by interpreting learner non-verbal behaviour (NVB). Experienced tutors adapt pedagogy, materials and instruction to provide additional learning scaffolding in the context of perceived learner comprehension. Near real-time assessment of e-learner comprehension of on-screen information could provide a powerful tool both for adaptation within intelligent e-learning platforms and for appraisal of tutorial content for learning analytics. However, the literature suggests that no existing method for automatic classification of learner comprehension by analysis of NVB can provide a practical solution in an e-learning, on-screen context. This paper presents the design, development and evaluation of COMPASS, a novel near real-time comprehension classification system for detecting learner comprehension of on-screen information during e-learning activities. COMPASS uses a novel descriptive analysis of learner behaviour, image processing techniques and artificial neural networks to model and classify authentic, comprehension-indicative non-verbal behaviour. This paper presents a study in which 44 undergraduate students answered on-screen multiple choice questions relating to computer programming. Using a front-facing USB web camera, the behaviour of each learner was recorded during reading and appraisal of on-screen information. The resultant dataset of non-verbal behaviour and question-answer scores was used to train an artificial neural network (ANN) to classify comprehension and non-comprehension states in near real-time. The trained comprehension classifier achieved a normalised classification accuracy of 75.8%.
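
    The abstract does not specify COMPASS's network architecture, so the sketch below only illustrates the general approach: per-question nonverbal feature vectors feeding a small feed-forward network trained to separate comprehension from non-comprehension. The six features and synthetic labels are stand-ins, not COMPASS's actual inputs.

        # Sketch: a small feed-forward classifier over per-question NVB feature
        # vectors (e.g., gaze and head-movement statistics). Data is synthetic.
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 6))                   # 6 NVB features per attempt
        y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # toy comprehension label

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        clf.fit(X_tr, y_tr)
        print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")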

    Ready Student One: Exploring the predictors of student learning in virtual reality

    Immersive virtual reality (VR) has enormous potential for education, but classroom resources are limited. Thus, it is important to identify whether and when VR provides sufficient advantages over other modes of learning to justify its deployment. In a between-subjects experiment, we compared three methods of teaching Moon phases (a hands-on activity, VR, and a desktop simulation) and measured student improvement on existing learning and attitudinal measures. While a substantial majority of students preferred the VR experience, we found no significant differences in learning between conditions. However, we found differences between conditions based on gender, which was highly correlated with experience with video games. These differences may indicate certain groups have an advantage in the VR setting. Comment: 28 pages, 7 figures, 4 tables. Published in PLOS ONE, March 25, 2020.

    An observation of a negative effect of social cohesion on creativity in musical improvisation

    Although various social factors can significantly impact creative performance, it is still unclear how social cohesion (i.e., how close we feel to others) influences creativity. We therefore conducted two studies exploring the association between social cohesion and creativity within the domain of musical improvisation, a prime example of creative performance, which usually plays out in social contexts. The first study (n = 58 musical novices) showed that music-induced synchrony facilitates social cohesion. In our second study (n = 18 musical novices), we found that in two out of three experimental conditions, increased social cohesion is associated with less creative musical outcomes, as rated by nine expert musicians. In our subsequent analysis we related measures of social cohesion and creativity. This approach highlights how, within a musical setting, creativity unfolds in the context of social contingencies such as social cohesion and related factors.

    Exploring the effects of sexual prejudice on dyadic interactions through an automated analysis of nonverbal behaviours

    Nonverbal behaviours (NVB) are a fundamental part of the communication process: especially indicative of individuals' inner states such as attitudes and motivations, NVBs can deeply shape the perceived quality of an interaction. Despite their practical importance and theoretical value, NVBs in intergroup interactions (i.e., intergroup nonverbal behaviours; INVB) are an understudied topic. So far, they have been investigated mainly within interethnic contexts (i.e., between White and Black people) and by employing invasive or time-consuming procedures, mainly involving subjective evaluations of video-recorded interactions by external coders. The present work aimed to extend previous literature by exploring NVB and its relationship with prejudice within gay/straight dyadic interactions, a relevant but still partially unexplored intergroup context within this field of research. Unlike ethnicity, sexual orientation is less identifiable: it cannot be ascertained from visible markers such as skin colour, but requires self-disclosure. Further, and most importantly, we assessed patterns of NVBs through an RGB-depth camera (the Microsoft Kinect V.2 Sensor) that allowed us to obtain exact quantitative measures of body movements in a fully automatic and continuous way. In doing so, we conducted three experimental studies in which heterosexual participants (total N = 284) were first administered measures of implicit bias and explicit prejudice towards gay men (Studies 1 & 3) or lesbians (Study 2), and then asked to interact with a gay (vs. straight; Studies 1 & 3) or lesbian (vs. straight; Study 2) confederate, whose sexual orientation was manipulated (Studies 1 & 2) or disclosed (Study 3). A fake Facebook profile, shown to the participant before the interaction, revealed the confederate's sexual orientation. In all the studies, we considered the pattern of results on two main NVBs, one concerning proxemics (i.e., the interpersonal volume between interactants) and the other concerning kinesics (i.e., the amount of upper body motion). We selected these NVBs because previous research revealed that they are particularly meaningful for understanding the psychological immediacy between interactants (interpersonal volume) and their comfort or discomfort (amount of upper body motion) during a dyadic interaction. Overall, our work revealed a relevant (and unexpected) pattern of findings concerning interpersonal distance. Unlike previous literature, Study 1 revealed that high (vs. low) implicitly biased participants, instead of keeping a larger distance, tended to stay closer to the confederate presented as gay (vs. straight), especially when discussing a topic concerning the intergroup relation (i.e., the situation of the gay community in Italy) rather than a neutral one. This result was importantly extended in Study 3: high (vs. low) implicitly biased participants who stood closer to the gay (vs. straight) confederate revealed greater cognitive depletion (i.e., lower performance on a Stroop colour-naming task) after the conversation. This latter result suggests that, at least within gay/straight men interactions, interpersonal distance is an NVB that (highly implicitly biased) people can control to manage their self-presentation, with consequent greater impairment of their cognitive resources.
This main finding was not replicated in Study 2, in which we focused on dyadic interactions between heterosexual participants and lesbian women, confirming that heterosexual people's attitudes (and their consequent INVBs) towards this minority group are distinct from those towards gay men and that, presumably, people's gender plays a more predominant role than their implicit or explicit attitudes. Further, across our studies, we found inconsistent or non-significant results concerning the participants' upper body motion as an outcome variable. A possible explanation for these inconsistent results could be the relatively coarse algorithmic index that we used for this INVB. Theoretical and methodological implications of this work are discussed in the General Discussion section, together with its limitations and indications for future research.
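
    For concreteness, here is a minimal sketch of how the two indices described above (interpersonal distance and amount of upper body motion) might be computed from Kinect-style 3D joint streams. The joint choice, array shapes and synthetic data are assumptions for illustration, not the authors' implementation.

        # Sketch: proxemic and kinesic indices from Kinect-style joint streams.
        import numpy as np

        def interpersonal_distance(spine_a, spine_b):
            # Mean Euclidean distance (metres) between two interactants' SpineMid
            # joints over time; spine_a, spine_b have shape (frames, 3).
            return float(np.linalg.norm(spine_a - spine_b, axis=1).mean())

        def upper_body_motion(joints):
            # Total frame-to-frame displacement summed over upper-body joints;
            # joints has shape (frames, n_joints, 3).
            return float(np.linalg.norm(np.diff(joints, axis=0), axis=2).sum())

        frames = 300  # ~10 s at the Kinect v2's 30 fps
        rng = np.random.default_rng(1)
        a = rng.normal([0.0, 0.0, 2.0], 0.01, size=(frames, 3))
        b = rng.normal([0.9, 0.0, 2.0], 0.01, size=(frames, 3))
        joints = rng.normal(0.0, 0.005, size=(frames, 10, 3))  # 10 upper-body joints
        print(f"distance: {interpersonal_distance(a, b):.2f} m")
        print(f"motion:   {upper_body_motion(joints):.2f}")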

    A system for the visual detection and analysis of obsessive compulsive disorder

    Computer vision is a burgeoning field that lends itself to a diverse range of challenging problems. Recent advances in computing power and algorithmic sophistication have prompted a renaissance in the literature of this field, as previously computationally expensive applications have come to the fore. As a result, researchers have begun applying computer vision techniques, most prominently to the analysis of human actions, in an increasingly advanced manner. Chief among the potential applications of such human action analyses are human surveillance, crowd analysis, gait analysis and health informatics. Even more recently, researchers have begun to realise the potential of computer vision techniques, occasionally in conjunction with other computational approaches, to enhance the quality of life for people living with mental illness. Much of this research has focused on enhancing the existing, traditionally psychiatric, treatment plans for such individuals. Conventionally, these treatment plans have involved a mental health professional taking a face-to-face approach and relying significantly on subjective feedback from the individual regarding their current condition and progress. However, recent computational methods have focused on augmenting such approaches with objective, e.g. visual, monitoring of and feedback on an individual's condition over time. Of these approaches, most have focused on depression, bipolar disorder, dementia, or some form of anxiety. However, none of the approaches described in the literature has been aimed directly at addressing the issues inherent to patients with Obsessive Compulsive Disorder. Motivated by this, this thesis comprises the design and implementation of a system that is capable of detecting and analysing the compulsive behaviours exhibited by individuals with Obsessive Compulsive Disorder, with the aim of assisting mental health professionals in their treatment of such patients. We achieved this via a three-pronged approach, which is represented by the three core chapters of this thesis. Firstly, we created a system for the detection of general repetitive (compulsive) behaviours indicative of Obsessive Compulsive Disorder. This was achieved via a combination of optical flow detection and thresholding, an image matching algorithm, and a set of repetition parameters. Via this approach, we achieved good results across a set of three tested videos. Secondly, we proposed a system capable of classifying behaviour as either compulsive or non-compulsive based on differences in repetition intensity patterns across a set of behavioural examples. We achieved this via a form of motion history image, which we call a 'Temporal Motion Heat Map' (TMHM). We produced one such heat map per behavioural example and then reduced its dimensionality using histogram-based pixel intensity frequencies, before feeding the result into a neural network. This approach achieved a high classification accuracy on the set of 40 tested behavioural examples, demonstrating its ability to accurately differentiate between compulsive and non-compulsive behaviours compared with a set of existing approaches. Finally, we built a system that is capable of categorising different types of behaviour, both compulsive and non-compulsive, and then assessing them for relative approximate anxiety levels over time.
We achieved this using a combination of Speeded-Up Robust Features (SURF) descriptors for behaviour classification and statistical measures for determining the relative anxiety of a given compulsion. This system also achieves good accuracy when compared with other approaches.
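
    As a hedged approximation of the 'Temporal Motion Heat Map' idea, the OpenCV sketch below accumulates inter-frame motion into a single map and reduces it to a normalised intensity histogram that could feed a classifier. It uses simple frame differencing rather than the thesis's optical flow, thresholds and repetition parameters, which are not reproduced here; the video path is a placeholder.

        # Sketch: accumulate inter-frame motion into a heat map (in the spirit
        # of a Temporal Motion Heat Map), then reduce it to a histogram feature.
        import cv2
        import numpy as np

        def temporal_motion_heat_map(video_path, bins=32):
            cap = cv2.VideoCapture(video_path)
            ok, prev = cap.read()
            if not ok:
                raise ValueError(f"could not read video: {video_path}")
            prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
            heat = np.zeros_like(prev, dtype=np.float32)
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                heat += cv2.absdiff(gray, prev).astype(np.float32)  # accumulate motion
                prev = gray
            cap.release()
            hist, _ = np.histogram(heat, bins=bins)   # histogram-based reduction
            return hist / max(hist.sum(), 1)          # normalised feature vector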

    Detecting human comprehension from nonverbal behaviour using artificial neural networks

    Every day, communication between humans is abundant with an array of nonverbal behaviours. Nonverbal behaviours are signals emitted without using words, such as facial expressions, eye gaze and body movement. Nonverbal behaviours have been used to identify a person's emotional state in previous research. Being continuously available and largely unconscious, nonverbal behaviour provides a potentially rich source of knowledge once decoded. Humans are weak decoders of nonverbal behaviour: they are error-prone, susceptible to fatigue and poor at simultaneously monitoring numerous nonverbal behaviours. Human comprehension is primarily assessed from written and spoken language. Existing comprehension assessment tools are inhibited by inconsistencies and are often time-consuming, with delayed feedback. Therefore, there is a niche for attempting to detect human comprehension from nonverbal behaviour using artificially intelligent computational models such as Artificial Neural Networks (ANNs), which are inspired by the structure and behaviour of biological neural networks such as those found within the human brain. This thesis presents a novel adaptable system known as FATHOM, which has been developed to detect human comprehension and non-comprehension by monitoring multiple nonverbal behaviours using ANNs. FATHOM's Comprehension Classifier ANN was trained and validated on human comprehension detection using the error-backpropagation learning algorithm and cross-validation, in a series of experiments with nonverbal datasets extracted from two independent comprehension studies in which each participant was digitally video recorded: (1) during a mock informed consent field study and (2) in a learning environment. The Comprehension Classifier ANN repeatedly achieved averaged testing classification accuracies (CA) above 84% in the first phase of the mock informed consent field study. In the learning environment study, the optimised Comprehension Classifier ANN achieved a 91.385% averaged testing CA. Overall, the findings revealed that human comprehension and non-comprehension patterns can be automatically detected from multiple nonverbal behaviours using ANNs.
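
    The evaluation regime described here (a backpropagation-trained network assessed with cross-validation and an averaged testing CA) can be sketched as follows. The eight-channel feature set and labels are synthetic stand-ins for FATHOM's nonverbal inputs, not its actual data or architecture.

        # Sketch: k-fold cross-validation of a backpropagation-trained network,
        # reporting the averaged test classification accuracy (CA).
        import numpy as np
        from sklearn.model_selection import cross_val_score, StratifiedKFold
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(42)
        X = rng.normal(size=(150, 8))            # 8 NVB channels (assumed)
        y = (X[:, 1] - X[:, 5] > 0).astype(int)  # toy comprehension label

        clf = MLPClassifier(hidden_layer_sizes=(12,), max_iter=2000, random_state=0)
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        scores = cross_val_score(clf, X, y, cv=cv)
        print(f"averaged testing CA: {scores.mean():.3f}")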