
    GRAIMATTER Green Paper: Recommendations for disclosure control of trained Machine Learning (ML) models from Trusted Research Environments (TREs)

    TREs are widely and increasingly used to support statistical analysis of sensitive data across a range of sectors (e.g., health, police, tax and education), as they enable secure and transparent research whilst protecting data confidentiality. There is an increasing desire from academia and industry to train AI models in TREs. The field of AI is developing quickly, with applications including spotting human errors, streamlining processes, task automation and decision support. These complex AI models require more information to describe and reproduce, increasing the possibility that sensitive personal data can be inferred from such descriptions. TREs do not have mature processes and controls against these risks. This is a complex topic, and it is unreasonable to expect all TREs to be aware of all risks, or to expect that TRE researchers have addressed these risks through AI-specific training.
    GRAIMATTER has developed a draft set of usable recommendations for TREs to guard against the additional risks that arise when disclosing trained AI models from TREs. The development of these recommendations was funded by the GRAIMATTER UKRI DARE UK sprint research project. This version of our recommendations was published at the end of the project in September 2022. During the course of the project, we identified many areas for future investigation to expand and test these recommendations in practice; we therefore expect this document to evolve over time. The GRAIMATTER DARE UK sprint project has also developed a minimum viable product (MVP): a suite of attack simulations that can be applied by TREs, available at https://github.com/AI-SDC/AI-SDC.
    If you would like to provide feedback or would like to learn more, please contact Smarti Reel ([email protected]) and Emily Jefferson ([email protected]). The summary of our recommendations for a general public audience can be found at DOI: 10.5281/zenodo.708951
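The disclosure risk described above can be made concrete with a toy membership-inference test. This is a hedged sketch on synthetic data, not the AI-SDC tooling itself: an overfitted model is noticeably more confident on its own training records, and an attacker who can query a released model can exploit that gap to infer whether a given record was in the (sensitive) training set.

```python
# Hedged sketch, not the AI-SDC API: a minimal membership-inference probe.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Deliberately overfit (deep trees, leaves of size 1) so the leak is visible.
model = RandomForestClassifier(n_estimators=100, min_samples_leaf=1,
                               random_state=0).fit(X_train, y_train)

def max_confidence(model, X):
    """Highest predicted class probability per record."""
    return model.predict_proba(X).max(axis=1)

# Naive attack: guess "training member" whenever confidence exceeds a threshold.
threshold = 0.8
train_hits = (max_confidence(model, X_train) > threshold).mean()
test_hits = (max_confidence(model, X_test) > threshold).mean()
print(f"flagged as members: train={train_hits:.2f}, test={test_hits:.2f}")
```

Disclosure controls of the kind the recommendations discuss (smaller or regularised models, restricted query access, output perturbation) aim to shrink the gap between the two flagged fractions.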

    Investigating the Efficacy of Algorithmic Student Modelling in Predicting Students at Risk of Failing in the Early Stages of Tertiary Education: Case study of experience based on first year students at an Institute of Technology in Ireland.

    The application of data analytics to educational settings is an emerging and growing research area. Much of the published work to date is based on ever-increasing volumes of log data that are systematically gathered in virtual learning environments as part of module delivery. This thesis took a unique approach to modelling academic performance: it is the first study to model indicators of students at risk of failing in the first year of tertiary education based on data gathered prior to commencement of first year, facilitating early engagement with at-risk students. The study was conducted over three years, from 2010 through 2012, and was based on a sample student population (n=1,207) aged between 18 and 60 from a range of academic disciplines. Data were extracted from both student enrolment records maintained by college administration and an online, self-reporting learner-profiling tool developed specifically for this study. The profiling tool was administered during induction sessions for students enrolling into the first year of study. Twenty-four factors relating to prior academic performance, personality, motivation, self-regulation, learning approaches, learner modality, age and gender were considered. Eight classification algorithms were evaluated. Cross-validation model accuracies based on all participants were compared with models trained on the 2010 and 2011 student cohorts and tested on the 2012 student cohort. The best cross-validation accuracies were achieved by a Support Vector Machine (82%) and a Neural Network (75%). The k-Nearest Neighbour model, which has received little attention in educational data mining studies, achieved the highest accuracy when applied to the 2012 student cohort (72%), similar to its cross-validation accuracy (72%). Model accuracies for other algorithms applied to the 2012 student cohort also compared favourably; for example, Ensembles (71%), Support Vector Machine (70%) and a Decision Tree (70%).
Models of subgroups by age and by academic discipline achieved higher accuracy than models of all participants; however, a larger sample size is needed to confirm these results. Progressive sampling showed that a sample size > 900 was required to achieve convergence of model accuracy. Results showed that the factors most predictive of academic performance in the first year of tertiary education included age, prior academic performance and self-efficacy. Kinaesthetic modality was also indicative of students at risk of failing, a factor that has not previously been cited as a significant predictor of academic performance. Models reported in this study show that learner profiling completed prior to commencement of first year of study yielded informative and generalisable results that identified students at risk of failing. Additionally, model accuracies were comparable to models reported elsewhere that included data collected from student activity in semester one, confirming the validity of early student profiling.
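The two evaluation protocols described above, cross-validation over all participants versus training on the earlier cohorts and testing on the final one, can be sketched as follows. The data, cohort labels and k-NN settings here are synthetic stand-ins, not the study's real 24 pre-enrolment factors.

```python
# Hedged sketch of the two evaluation protocols on synthetic stand-in data
# (the real study used 24 pre-enrolment factors, n=1,207).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1200, n_features=24, n_informative=10,
                           n_redundant=4, random_state=0)
cohort = rng.choice([2010, 2011, 2012], size=len(y))  # hypothetical labels

knn = KNeighborsClassifier(n_neighbors=5)

# Protocol 1: k-fold cross-validation over all participants.
cv_acc = cross_val_score(knn, X, y, cv=10).mean()

# Protocol 2: train on the earlier cohorts, test on the held-out final cohort.
train_mask = cohort < 2012
knn.fit(X[train_mask], y[train_mask])
holdout_acc = knn.score(X[~train_mask], y[~train_mask])
print(f"cross-validation accuracy: {cv_acc:.2f}, holdout cohort: {holdout_acc:.2f}")
```

Protocol 2 is the stricter test of generalisability, since the held-out cohort was never seen during training, which is why the thesis reports both figures.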

    EDM 2011: 4th international conference on educational data mining : Eindhoven, July 6-8, 2011 : proceedings


    Current Issues in Emerging eLearning, Volume 7, Issue 1: APLU Special Issue on Implementing Adaptive Learning At Scale

    The second of two Special Issues of the CIEE journal produced and guest-edited by the Personalized Learning Consortium (PLC) of the Association of Public and Land-grant Universities (APLU), featuring important research resulting from university initiatives to launch, implement and scale up the use of adaptive courseware and the strategies of adaptive learning.

    Predicting Paid Certification in Massive Open Online Courses

    Massive open online courses (MOOCs) have been proliferating because of their free or low-cost offering of content for learners, attracting the attention of many stakeholders across the entire educational landscape. Since 2012, coined as “the Year of the MOOCs”, several platforms have gathered millions of learners in just a decade. Nevertheless, the certification rate of both free and paid courses has been low: only about 4.5–13% and 1–3%, respectively, of the total number of enrolled learners obtain a certificate at the end of their courses. Still, most research concentrates on completion, ignoring the certification problem, and especially its financial aspects. Thus, the research described in the present thesis aimed to investigate paid certification in MOOCs, for the first time, in a comprehensive way, as early as the first week of the course, by exploring its various levels. First, the latent correlation between learner activities and their paid certification decisions was examined by (1) statistically comparing the activities of non-paying learners with those of course purchasers and (2) predicting paid certification using different machine learning (ML) techniques. Our temporal (weekly) analysis showed statistical significance at various levels when comparing the activities of non-paying learners with those of certificate purchasers across the five courses analysed. Furthermore, we used learners' activities (number of step accesses, attempts, correct and wrong answers, and time spent on learning steps) to build our paid certification predictor, which achieved promising balanced accuracies (BAs), ranging from 0.77 to 0.95. Having employed simple predictions based on a few clickstream variables, we then analysed in more depth what other information can be extracted from MOOC interaction (namely discussion forums) for paid certification prediction.
However, to better explore the learners' discussion forums, we built, as an original contribution, MOOCSent, a cross-platform review-based sentiment classifier, using over 1.2 million MOOC sentiment-labelled reviews. MOOCSent addresses various limitations of current sentiment classifiers, including (1) reliance on a single source of data (previous literature on sentiment classification in MOOCs was based on single platforms only, and hence less generalisable, with a relatively low number of instances compared to our obtained dataset); (2) limited model outputs, where most current models are based on 2-polar classifiers (positive or negative only); (3) disregarding important sentiment indicators, such as emojis and emoticons, during text embedding; and (4) reporting average performance metrics only, preventing the evaluation of model performance at the level of class (sentiment). Finally, and with the help of MOOCSent, we used the learners' discussion forums to predict paid certification after annotating learners' comments and replies with sentiment using MOOCSent. This multi-input model contains raw data (learner textual inputs), sentiment classification generated by MOOCSent, computed features (number of likes received for each textual input), and several features extracted from the texts (character counts, word counts, and part-of-speech (POS) tags for each textual instance). This experiment adopted various deep predictive approaches, specifically those that allow a multi-input architecture, to investigate early (i.e., weekly) whether data obtained from MOOC learners' interaction in discussion forums can predict learners' purchase decisions (certification).
Considering the staggeringly low rate of paid certification in MOOCs, the present thesis contributes to the knowledge and field of MOOC learner analytics by predicting paid certification, for the first time, at such a comprehensive (with data from over 200 thousand learners from 5 courses in different disciplines), actionable (analysing learners' decisions from the first week of the course) and longitudinal (with 23 runs from 2013 to 2017) scale. The present thesis contributes by (1) investigating various conventional and deep ML approaches for predicting paid certification in MOOCs using learner clickstreams (Chapter 5) and course discussion forums (Chapter 7); (2) building the largest MOOC sentiment classifier (MOOCSent), based on learners' reviews of courses from the leading MOOC platforms, namely Coursera, FutureLearn and Udemy, which handles emojis and emoticons using dedicated lexicons containing over three thousand corresponding explanatory words/phrases; and (3) proposing and developing, for the first time, a multi-input model for predicting certification based on data from discussion forums, which synchronously processes the textual (comments and replies) and numerical (number of likes posted and received, sentiments) data from the forums, adapting a suitable classifier for each type of data, as explained in detail in Chapter 7.
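The clickstream-based prediction step described above can be sketched minimally as follows. The data and effect sizes are invented for illustration; the point is the shape of the pipeline: weekly activity counts feed a classifier scored with balanced accuracy, the metric reported in the thesis, with class weighting to reflect how rare purchasers are.

```python
# Hedged sketch on synthetic data, not the thesis pipeline. The five feature
# columns mirror the activity types listed above (step accesses, attempts,
# correct answers, wrong answers, minutes on learning steps).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
activity = rng.poisson(lam=8, size=(n, 5))

# Synthetic rule: more first-week activity -> more likely to purchase, with a
# rare positive class echoing the low certification rates cited above.
logits = 0.15 * activity.sum(axis=1) - 8.0
purchased = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(
    activity, purchased, stratify=purchased, random_state=0)

# class_weight="balanced" matters when only a few percent of learners pay.
clf = LogisticRegression(class_weight="balanced").fit(X_train, y_train)
ba = balanced_accuracy_score(y_test, clf.predict(X_test))
print(f"balanced accuracy: {ba:.2f}")
```

Balanced accuracy averages recall over the two classes, so a model that simply predicts "no purchase" for everyone scores 0.5 rather than the misleadingly high plain accuracy it would otherwise get.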

    Proceedings of MathSport International 2017 Conference

    Proceedings of MathSport International 2017 Conference, held in the Botanical Garden of the University of Padua, June 26-28, 2017. MathSport International organizes biennial conferences dedicated to all topics where mathematics and sport meet. Topics include: performance measures, optimization of sports performance, statistics and probability models, mathematical and physical models in sports, competitive strategies, statistics and probability match outcome models, optimal tournament design and scheduling, decision support systems, analysis of rules and adjudication, econometrics in sport, analysis of sporting technologies, financial valuation in sport, e-sports (gaming), betting and sports.

    Socio-Cognitive and Affective Computing

    Social cognition focuses on how people process, store, and apply information about other people and social situations. It focuses on the role that cognitive processes play in social interactions. On the other hand, the term cognitive computing is generally used to refer to new hardware and/or software that mimics the functioning of the human brain and helps to improve human decision-making. In this sense, it is a type of computing with the goal of discovering more accurate models of how the human brain/mind senses, reasons, and responds to stimuli. Socio-Cognitive Computing should be understood as a set of theoretical interdisciplinary frameworks, methodologies, methods and hardware/software tools to model how the human brain mediates social interactions. In addition, Affective Computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects, a fundamental aspect of socio-cognitive neuroscience. It is an interdisciplinary field spanning computer science, electrical engineering, psychology, and cognitive science. Physiological Computing is a category of technology in which electrophysiological data recorded directly from human activity are used to interface with a computing device. This technology becomes even more relevant when computing can be integrated pervasively in everyday life environments. Thus, Socio-Cognitive and Affective Computing systems should be able to adapt their behavior according to the Physiological Computing paradigm. This book integrates proposals from researchers who use signals from the brain and/or body to infer people's intentions and psychological state in smart computing systems. The design of this kind of systems combines knowledge and methods of ubiquitous and pervasive computing, as well as physiological data measurement and processing, with those of socio-cognitive and affective computing.
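The Physiological Computing loop described above (record a physiological signal, extract a feature, drive an interface action) can be illustrated with a toy example. The signal model, feature and threshold below are all invented for the demonstration and are far simpler than what real systems use.

```python
# Hedged toy illustration of a physiological-computing loop: a synthetic
# electrophysiological trace is reduced to a band-power estimate, which
# triggers a (toy) interface event when it crosses a threshold.
import numpy as np

fs = 256                      # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)   # four seconds of signal

# Toy "EEG": a 10 Hz alpha rhythm whose amplitude doubles after t = 2 s
# (e.g. the user closes their eyes), plus measurement noise.
amplitude = np.where(t < 2, 1.0, 2.0)
noise = 0.3 * np.random.default_rng(0).normal(size=t.size)
signal = amplitude * np.sin(2 * np.pi * 10 * t) + noise

# Moving RMS over 1-second windows: a crude stand-in for the spectral
# band-power features real systems extract.
window = fs
rms = np.sqrt(np.convolve(signal**2, np.ones(window) / window, mode="valid"))

# Interface rule: fire an event once band power exceeds the threshold.
threshold = 1.0
event_fired = bool((rms > threshold).any())
print("interface event fired:", event_fired)
```

In an actual system this loop would run continuously, and the adaptation the paragraph describes amounts to changing application behaviour in response to such feature crossings rather than to explicit user commands.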