13 research outputs found

    Artificial Intelligence Technology on Teaching-Learning: Exploring Bangladeshi Teachers’ Perceptions

    The increasing attention to artificial intelligence (AI) technologies in daily life, and the need to treat them as a priority topic for students in the twenty-first century, clearly points toward AI integration in higher education. For that integration to succeed, university teachers must be properly prepared to use AI in their teaching. In this study, the researcher surveyed Bangladeshi university teachers to investigate their attitudes toward AI as a teaching tool. The survey results showed that teachers have minimal understanding of AI and of how it can assist in the classroom; nevertheless, they regard it as an educational opportunity. The findings indicated that teachers require assistance to become effective and competent in their teaching practices, and they suggested that AI has the potential to contribute as an assistant.

    Equity-Focused Decision-Making Lacks Guidance!

    Learning analytics (LA) is an academic field with promising usage scenarios for many educational domains. At the same time, learning analytics comes with threats, such as the amplification of historically grown inequalities. A range of general guidelines for more equity-focused learning analytics has been proposed but fails to provide sufficiently clear guidance for practitioners. With this paper, we attempt to address this theory–practice gap through domain-specific (physics education) refinement of the general guidelines. We propose a process as a starting point for this domain-specific refinement that can be applied to other domains as well. Our point of departure is a domain-specific analysis of historically grown inequalities, carried out to identify the most relevant diversity categories and evaluation criteria. Through two focal points for normative decision-making, namely equity and bias, we analyze two edge cases and highlight where domain-specific refinement of general guidance is necessary. Our synthesis reveals the necessity to work towards domain-specific standards and regulations for bias analyses and to develop countermeasures against (intersectional) discrimination. Ultimately, this should lead to a stronger equity-focused practice in future.
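    The paper’s guidance is normative rather than computational, but the kind of bias analysis it calls for standards on might begin with a simple subgroup audit of a predictive model. The following is a minimal sketch under stated assumptions: a hypothetical results table with columns y_true, y_pred, and a diversity category group. It illustrates the general idea and is not taken from the paper.

```python
# Minimal subgroup bias audit for a binary predictive model, assuming a
# hypothetical results table with columns y_true, y_pred, and a diversity
# category `group`; this illustrates the kind of bias analysis discussed
# above and is not taken from the paper.
import pandas as pd

def subgroup_error_rates(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Per-group false-positive and false-negative rates."""
    def rates(g: pd.DataFrame) -> pd.Series:
        fp = ((g["y_pred"] == 1) & (g["y_true"] == 0)).sum()
        fn = ((g["y_pred"] == 0) & (g["y_true"] == 1)).sum()
        neg = (g["y_true"] == 0).sum()
        pos = (g["y_true"] == 1).sum()
        return pd.Series({"fpr": fp / neg if neg else float("nan"),
                          "fnr": fn / pos if pos else float("nan"),
                          "n": len(g)})
    return df.groupby(group_col).apply(rates)

# Toy data: disparities between groups "a" and "b" become visible as
# gaps in the per-group error rates.
audit = pd.DataFrame({
    "y_true": [1, 0, 1, 0, 1, 0, 1, 0],
    "y_pred": [1, 0, 0, 1, 1, 0, 0, 0],
    "group":  ["a", "a", "a", "a", "b", "b", "b", "b"],
})
print(subgroup_error_rates(audit, "group"))
```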

    Learning Analytics and societal challenges: Capturing value for education and learning

    A complex challenge for society is to offer equal learning opportunities at various life stages and to support students, teachers, and institutions in their various tasks and roles related to learning and teaching. Learning analytics (LA) provides an opportunity to address these societal challenges. As the LA field matures, tool development is aimed at aiding informed human decision-making and combating inequalities, for example by detecting students at risk of dropping out or by supporting self-regulated learning. The inception of LA was catalysed by the increasing amount of available data and by what could be done with these data to improve learner support and teaching. Simultaneously, increases in computational power, machine learning methods, and the tools at hand offer new affordances for analysing and visualising data, both retrospectively and for predictive purposes. Employing LA as a solution also brings potential problems, such as unequal treatment, privacy concerns, and unethical practices. Through selected example cases, this chapter presents and addresses these potential benefits and risks.
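    To make the first use case above concrete, the sketch below flags students at risk of dropping out from weekly engagement features. The feature names, synthetic data, and risk threshold are all hypothetical assumptions for illustration; any probabilistic classifier could stand in for the logistic regression.

```python
# Minimal sketch of the first LA use case named above: flagging students
# at risk of dropping out. Feature names, data, and the 0.7 threshold are
# hypothetical; any probabilistic classifier could replace the logistic
# regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical weekly engagement counts: logins, submissions, forum posts.
X = rng.poisson(lam=[5, 2, 1], size=(n, 3)).astype(float)
# Synthetic dropout labels loosely tied to low overall engagement.
y = (X.sum(axis=1) + rng.normal(0, 2, n) < 6).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# Flag students whose predicted dropout probability exceeds the threshold,
# so support can be offered early rather than after the fact.
risk = model.predict_proba(X_te)[:, 1]
flagged = (risk > 0.7).sum()
print(f"{flagged} of {len(risk)} students flagged for follow-up")
```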

    EFAR-MMLA: An evaluation framework to assess and report generalizability of machine learning models in MMLA

    Multimodal Learning Analytics (MMLA) researchers are progressively employing machine learning (ML) techniques to develop predictive models intended to improve learning and teaching practices. These predictive models are often evaluated for their generalizability using methods from the ML domain that do not take MMLA’s educational nature into account. Furthermore, there is a lack of systematization in model evaluation in MMLA, which is also reflected in the heterogeneous reporting of evaluation results. To overcome these issues, this paper proposes an evaluation framework to assess and report the generalizability of ML models in MMLA (EFAR-MMLA). To illustrate the usefulness of EFAR-MMLA, we present a case study with two datasets, each with audio and log data collected from a classroom during a collaborative learning session. In this case study, regression models are developed for collaboration quality and its sub-dimensions, and their generalizability is evaluated and reported. The framework helped us to systematically detect and report that the models performed well when evaluated using hold-out or cross-validation but degraded quickly when evaluated across different student groups and learning contexts. The framework helps to open up a “wicked problem” in MMLA research that remains fuzzy (i.e., the generalizability of ML models), which is critical both to accumulating knowledge in the research community and to demonstrating the practical relevance of these techniques.

    Funding: Fondo Europeo de Desarrollo Regional - Agencia Nacional de Investigación (grants TIN2017-85179-C3-2-R and TIN2014-53199-C3-2-R); Fondo Europeo de Desarrollo Regional - Junta de Castilla y León (grant VA257P18); Comisión Europea (grant 588438-EPP-1-2017-1-EL-EPPKA2-KA
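    EFAR-MMLA itself is a reporting framework rather than code, but the degradation it helped detect can be reproduced in miniature: the same model can look far more generalizable under random cross-validation than when whole student groups are held out. The sketch below illustrates that phenomenon on synthetic data in which one feature leaks group identity; it is an illustration of the evaluation gap, not part of the framework.

```python
# Illustration of the generalizability gap reported above, not the
# EFAR-MMLA framework itself. Synthetic data: each group (e.g., a
# classroom) shifts the target by its own offset, and one feature
# leaks group identity -- a common source of optimistic estimates.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(42)
n_groups, per_group = 8, 40
n = n_groups * per_group
groups = np.repeat(np.arange(n_groups), per_group)

offsets = rng.normal(0, 3, n_groups)           # group-specific context effect
X = np.column_stack([rng.normal(size=(n, 3)),  # genuine predictors
                     groups])                  # feature leaking group identity
y = X[:, 0] + offsets[groups] + rng.normal(0, 0.5, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)

# Random folds mix every group into training, so the leaked identity helps.
random_cv = cross_val_score(model, X, y,
                            cv=KFold(5, shuffle=True, random_state=0))
# Group-wise folds hold out whole groups, as real deployment would.
group_cv = cross_val_score(model, X, y, cv=GroupKFold(5), groups=groups)

print(f"random 5-fold CV R^2:  {random_cv.mean():.2f}")  # optimistic
print(f"group-held-out CV R^2: {group_cv.mean():.2f}")   # degrades
```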

    Predicting Paid Certification in Massive Open Online Courses

    Massive open online courses (MOOCs) have proliferated because they offer content to learners for free or at low cost, attracting the attention of many stakeholders across the entire educational landscape. Since 2012, coined “the Year of the MOOCs”, several platforms have gathered millions of learners in just a decade. Nevertheless, the certification rate of both free and paid courses has been low: only about 4.5–13% and 1–3%, respectively, of the total number of enrolled learners obtain a certificate at the end of their courses. Still, most research concentrates on completion, ignoring the certification problem and especially its financial aspects. The research described in the present thesis therefore aimed to investigate paid certification in MOOCs, for the first time, in a comprehensive way and as early as the first week of the course, by exploring its various levels.

    First, the latent correlation between learner activities and paid certification decisions was examined by (1) statistically comparing the activities of non-paying learners with those of course purchasers and (2) predicting paid certification using different machine learning (ML) techniques. Our temporal (weekly) analysis showed statistical significance at various levels when comparing the activities of non-paying learners with those of certificate purchasers across the five courses analysed. Furthermore, we used learner activities (number of step accesses, attempts, correct and wrong answers, and time spent on learning steps) to build our paid certification predictor, which achieved promising balanced accuracies (BAs) ranging from 0.77 to 0.95.
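    A minimal sketch of the kind of weekly clickstream predictor described above, evaluated with balanced accuracy: the features mirror the listed activity counts, but the data, coefficients, and choice of classifier are synthetic stand-ins rather than the thesis’s pipeline.

```python
# Sketch of a first-week clickstream predictor of paid certification,
# with synthetic data; the feature set mirrors the activity counts
# listed above, but coefficients and model choice are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.poisson(20, n),        # step accesses
    rng.poisson(10, n),        # quiz attempts
    rng.poisson(7, n),         # correct answers
    rng.poisson(3, n),         # wrong answers
    rng.gamma(2.0, 600.0, n),  # time spent on learning steps (seconds)
])
# Purchases are rare, which is why balanced accuracy (the mean of the
# per-class recalls) is a more honest metric here than plain accuracy.
logits = 0.05 * X[:, 0] + 0.002 * X[:, 4] - 6.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
# class_weight="balanced" compensates for the rarity of purchasers.
clf = make_pipeline(StandardScaler(),
                    LogisticRegression(class_weight="balanced"))
clf.fit(X_tr, y_tr)
print(f"balanced accuracy: {balanced_accuracy_score(y_te, clf.predict(X_te)):.2f}")
```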
    Having employed simple predictions based on a few clickstream variables, we then analysed in more depth what other information could be extracted from MOOC interactions, namely discussion forums, for paid certification prediction. To better explore the learners’ discussion forums, we built, as an original contribution, MOOCSent, a cross-platform review-based sentiment classifier trained on over 1.2 million sentiment-labelled MOOC reviews. MOOCSent addresses various limitations of current sentiment classifiers, including (1) reliance on a single source of data (previous literature on sentiment classification in MOOCs was based on single platforms only, and hence less generalisable, with relatively few instances compared to our dataset); (2) limited model outputs, with most current models being two-polar classifiers (positive or negative only); (3) disregard of important sentiment indicators, such as emojis and emoticons, during text embedding; and (4) reporting of average performance metrics only, preventing evaluation of model performance at the level of each class (sentiment).

    Finally, with the help of MOOCSent, we used the learners’ discussion forums to predict paid certification, after annotating learners’ comments and replies with sentiment using MOOCSent. This multi-input model combines raw data (learner textual inputs), sentiment classifications generated by MOOCSent, computed features (the number of likes received by each textual input), and several features extracted from the texts (character counts, word counts, and part-of-speech (POS) tags for each textual instance). The experiment adopted various deep predictive approaches, specifically ones that allow a multi-input architecture, to investigate early (i.e., weekly) whether data obtained from MOOC learners’ interactions in discussion forums can predict learners’ purchase decisions (certification).

    Considering the staggeringly low rate of paid certification in MOOCs, the present thesis contributes to the knowledge and field of MOOC learner analytics by predicting paid certification, for the first time, at a scale that is comprehensive (with data from over 200 thousand learners across courses from five different disciplines), actionable (analysing learner decisions from the first week of the course), and longitudinal (covering 23 course runs from 2013 to 2017). Specifically, the thesis contributes by (1) investigating various conventional and deep ML approaches for predicting paid certification in MOOCs using learner clickstreams (Chapter 5) and course discussion forums (Chapter 7); (2) building the largest MOOC sentiment classifier (MOOCSent), based on learners’ reviews of courses from the leading MOOC platforms, namely Coursera, FutureLearn and Udemy, which handles emojis and emoticons using dedicated lexicons containing over three thousand corresponding explanatory words/phrases; and (3) proposing and developing, for the first time, a multi-input model for predicting certification from discussion-forum data, which synchronously processes the textual (comments and replies) and numerical (numbers of likes posted and received, sentiments) data from the forums, adapting a suitable classifier to each type of data, as explained in detail in Chapter 7.
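    To make the multi-input idea concrete, the sketch below routes textual forum posts and numeric signals (likes and a MOOCSent-style sentiment score) through separate encoders into a single predictor. The thesis uses a deep multi-input architecture; a scikit-learn ColumnTransformer stands in here, and all column names and data are hypothetical.

```python
# Sketch of the multi-input idea: textual forum posts and numeric signals
# (likes, a MOOCSent-style sentiment score) feeding one predictor. The
# thesis uses a deep multi-input architecture; a ColumnTransformer stands
# in here, and all column names and data are hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

forum = pd.DataFrame({
    "text": ["great course, worth every penny", "too hard, giving up",
             "loved the quizzes", "boring lectures",
             "will buy the certificate", "not for me"],
    "likes": [5, 0, 3, 1, 4, 0],
    "sentiment": [0.9, -0.7, 0.6, -0.4, 0.8, -0.5],  # e.g., MOOCSent output
    "purchased": [1, 0, 1, 0, 1, 0],
})

# Route each input type to a suitable encoder, then classify jointly.
features = ColumnTransformer([
    ("text", TfidfVectorizer(), "text"),
    ("numeric", StandardScaler(), ["likes", "sentiment"]),
])
model = Pipeline([("features", features),
                  ("clf", LogisticRegression())])
model.fit(forum.drop(columns="purchased"), forum["purchased"])
print(model.predict(pd.DataFrame({
    "text": ["really enjoying this, might get the certificate"],
    "likes": [2], "sentiment": [0.7],
})))
```

    A deep counterpart would replace the TF-IDF branch with an embedding-based text encoder and concatenate its output with the numeric features before the final layers, which mirrors the synchronous processing of the two input types described above.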