
    Enhancing Learning Object Analysis through Fuzzy C-Means Clustering and Web Mining Methods

    The development of learning objects (LO) and e-pedagogical practices has significantly influenced and changed the performance of e-learning systems. This development promotes genuine sharing of resources and creates new opportunities for learners to explore them easily. A system for categorizing these objects therefore becomes mandatory. In this vein, classification theories combined with web mining techniques can highlight the performance of these LOs and make them very useful for learners. This study consists of two main phases. First, we extract metadata from learning objects using web mining techniques, notably feature selection, which is applied to find the best set of features for building useful models. The key role of feature selection in learning object classification is to identify pertinent features and eliminate redundant ones from a high-dimensional dataset. Second, we group learning objects according to a particular form of similarity using Multi-Label Classification (MLC) based on the Fuzzy C-Means (FCM) algorithm. As a clustering algorithm, Fuzzy C-Means assigns objects to clusters using the Euclidean distance as the similarity measure. Finally, to assess the effectiveness of classifying LOs with FCM, a series of experimental studies was conducted on a real-world dataset. The findings indicate that the proposed approach outperforms the traditional approach and leads to viable results. DOI: 10.28991/ESJ-2023-07-03-010
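    For readers unfamiliar with the clustering step, the sketch below is a minimal NumPy implementation of Fuzzy C-Means with Euclidean distance, the combination the abstract describes. The function name, parameter defaults, and toy data are illustrative assumptions, not the authors' actual code or dataset.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal Fuzzy C-Means: returns cluster centers and the membership matrix U."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial membership matrix; each row sums to 1
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        # Centers are membership-weighted means of the samples
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Euclidean distance from every sample to every center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)  # avoid division by zero
        # Standard FCM membership update from inverse-distance ratios
        inv = d ** (-2.0 / (m - 1))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.linalg.norm(U_new - U) < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Toy usage: 50 learning objects described by 4 extracted features
X = np.random.default_rng(1).random((50, 4))
centers, U = fuzzy_c_means(X, n_clusters=3)
hard_labels = U.argmax(axis=1)  # most likely cluster per learning object
```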

    Text Summarization Across High and Low-Resource Settings

    Natural language processing aims to build automated systems that can both understand and generate natural language textual data. As the amount of textual data available online has increased exponentially, so has the need for intelligent systems to comprehend it and present it to the world. As a result, automatic text summarization, the process by which a text's salient content is automatically distilled into a concise form, has become a necessary tool. Automatic text summarization approaches and applications vary based on the input summarized, which may constitute single or multiple documents of different genres. Furthermore, the desired output may consist of sentences or sub-sentential units chosen directly from the input in extractive summarization, or a fusion and paraphrase of the input documents in abstractive summarization. Despite differences in these use cases, specific themes, such as the role of large-scale data for training these models, the application of summarization models in real-world scenarios, and the need to adequately evaluate and compare summaries, are common across these settings. This dissertation presents novel data and modeling techniques for deep neural network-based summarization models trained across high-resource (thousands of supervised training examples) and low-resource (zero to hundreds of supervised training examples) data settings, along with a comprehensive evaluation of model and metric progress in the field. We examine both Recurrent Neural Network (RNN)-based and Transformer-based models to extract and generate summaries from the input. To facilitate the training of large-scale networks, we introduce datasets applicable to multi-document summarization (MDS) for pedagogical applications and for news summarization. While high-resource settings allow models to advance state-of-the-art performance, the failure of such models to adapt to settings outside those in which they were initially trained requires smarter use of labeled data and motivates work in low-resource summarization. To this end, we propose unsupervised learning techniques for extractive summarization in question answering, abstractive summarization on distantly-supervised data for summarizing community question answering forums, and abstractive zero- and few-shot summarization across several domains. To measure the progress made along these axes, we revisit the evaluation of current summarization models. In particular, this dissertation addresses the following research objectives: 1) High-resource summarization. We introduce datasets for multi-document summarization, focusing on pedagogical applications for NLP, news summarization, and Wikipedia topic summarization. Large-scale datasets allow models to achieve state-of-the-art performance on these tasks compared to prior modeling techniques, and we introduce a novel model to reduce redundancy. However, we also examine how models trained on these large-scale datasets fare when applied to new settings, showing the need for more generalizable models. 2) Low-resource summarization. While high-resource summarization improves model performance, data-efficient models are necessary for practical applications. We propose a pipeline for creating synthetic training data for training extractive question-answering models, a form of query-based extractive summarization with short-phrase summaries. In other work, we propose an automatic pipeline for training a multi-document summarizer for answer summarization on community question-answering forums without labeled data. Finally, we push the boundaries of abstractive summarization model performance when little or no training data is available across several domains. 3) Automatic summarization evaluation. To understand the extent of progress made across recent modeling techniques and to better understand current evaluation protocols, we compare summarization output quality across 12 metrics and 23 deep neural network models, propose better-motivated summarization evaluation guidelines, and point to open problems in summarization evaluation.
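    As a rough illustration of the extractive setting discussed above, the sketch below scores sentences by mean TF-IDF weight and keeps the top-k in their original order. The heuristic, function name, and toy sentences are assumptions for illustration only; they are not the dissertation's neural models.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def extractive_summary(sentences, k=2):
    """Illustrative extractive baseline: keep the k sentences with the highest mean TF-IDF weight."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    scores = np.asarray(tfidf.mean(axis=1)).ravel()
    top = sorted(np.argsort(scores)[-k:])  # restore original sentence order
    return [sentences[i] for i in top]

# Toy usage
doc = [
    "Automatic summarization distills a text's salient content into a concise form.",
    "Extractive methods select sentences directly from the input.",
    "Abstractive methods paraphrase and fuse content into new sentences.",
    "Evaluation remains an open problem across both settings.",
]
print(extractive_summary(doc, k=2))
```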

    Use Scenarios & Practical Examples of AI Use in Education

    This report presents a set of use scenarios based on existing resources that teachers can use as inspiration to create their own, with the aim of introducing artificial intelligence (AI) at different pre-university levels and with different goals. The Artificial Intelligence in Education (AIEd) field is very active, with new resources and tools arising continuously. Those included in this document have already been tested with students and selected by experts in the field, but they should be taken simply as practical examples to guide and inspire teachers' creativity. Comment: Developed within the AI in Education working group of the European Digital Education Hub

    Smart Learning Environment: Paradigm Shift for Online Learning

    Online learning has always been influenced by advanced technology. The role of online learning is expected to extend beyond delivering content to massive numbers of learners anywhere and anytime to also promoting successful learning for those learners. Consequently, this emerging role has introduced the concept of the smart learning environment. More specifically, a smart learning environment is developed to promote personalized learning, which focuses on the individual learner and provides appropriate feedback individually. Currently, advances in modern technologies and intelligent data analytics have brought the idea of the smart learning environment into realization. Machine learning techniques are generally applied to analyze real-time, dynamic learner behavior and provide the appropriate response to the right learner. In this chapter, the evolution of the online learning environment is first introduced from different technological points of view. Next, the concepts of personalized learning and the smart learning environment are explained. Then, the essential components of a smart learning environment are presented, including learner classification and intervention feedback: learner classification seeks to understand different learners, while intervention feedback provides an appropriate individual response. Additionally, some machine learning techniques widely used in smart learning environments to perform smart classification and response are briefly explained.
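    As a hedged sketch of the learner-classification component described above, the snippet below fits a small decision tree on hypothetical behavioural features and flags learners who may need an intervention. The feature set, labels, and choice of classifier are illustrative assumptions rather than the chapter's specific techniques.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical behavioural features per learner:
# [logins per week, average quiz score, forum posts, video minutes watched]
X = np.array([
    [1, 45, 0, 10],
    [5, 78, 3, 120],
    [2, 55, 1, 30],
    [7, 92, 6, 200],
    [3, 60, 2, 60],
    [6, 85, 5, 150],
])
# Hypothetical labels: 0 = likely needs intervention feedback, 1 = on track
y = np.array([0, 1, 0, 1, 0, 1])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Learners predicted as 0 would be routed to an individual intervention
print(clf.predict(X_test))
```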

    Improving Online Education Using Big Data Technologies

    In a world undergoing full digital transformation, where new information and communication technologies are constantly evolving, the current challenge for Computing Environments for Human Learning (CEHL) is to find the right way to integrate and harness the power of these technologies. In fact, these environments face many challenges, especially the increased demand for learning, the huge growth in the number of learners, the heterogeneity of available resources, as well as problems related to the complexity of intensive processing and real-time analysis of the data produced by e-learning systems, which goes beyond the limits of traditional infrastructures and relational database management systems. This chapter presents a number of solutions dedicated to CEHL around two big paradigms, namely cloud computing and Big Data. The first part of this work presents an approach for integrating both the emerging technologies of the Big Data ecosystem and the on-demand services of the cloud in the e-learning field. It aims to enrich and enhance the quality of e-learning platforms by relying on cloud services accessible via the internet. It introduces distributed storage and parallel computing of Big Data in order to provide robust solutions to the requirements of intensive processing, predictive analysis, and massive storage of learning data. To do this, a methodology describing the integration process is presented and applied. In addition, this chapter addresses the deployment of a distributed e-learning architecture combining several recent Big Data tools and based on a strategy of data decentralization and parallelization of processing across a cluster of nodes. Finally, this chapter develops a Big Data solution for online learning platforms based on the LMS Moodle. A course recommendation system has been designed and implemented, relying on machine learning techniques, to help learners select the most relevant learning resources according to their interests through the analysis of learning traces. This system is realized using the learning data collected from the ESTenLigne platform and the Spark framework deployed on a Hadoop infrastructure.
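    As an illustrative sketch of the course recommendation component, the snippet below fits a collaborative-filtering model with Spark MLlib's ALS on a toy interaction log. The column names, toy ratings, and the ALS choice are assumptions for illustration; they do not necessarily reflect the system built on the ESTenLigne data.

```python
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("course-recommender").getOrCreate()

# Hypothetical interaction log: (learner_id, course_id, rating derived from learning traces)
interactions = spark.createDataFrame(
    [(0, 10, 4.0), (0, 11, 2.0), (1, 10, 5.0), (1, 12, 3.0), (2, 11, 4.0)],
    ["learner_id", "course_id", "rating"],
)

# Collaborative filtering via alternating least squares
als = ALS(userCol="learner_id", itemCol="course_id", ratingCol="rating",
          rank=8, maxIter=5, regParam=0.1, coldStartStrategy="drop")
model = als.fit(interactions)

# Top-3 course recommendations per learner
model.recommendForAllUsers(3).show(truncate=False)

spark.stop()
```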

    Delivering manufacturing technology and workshop appreciation to engineering undergraduates using the flipped classroom approach

    Delivering manufacturing technology and practical workshop-based work on undergraduate engineering courses in a way that engages learners is challenging. This paper presents an experimental method of workshop delivery using the flipped learning approach, a pedagogical model in which the typical lecture and homework elements of a course are reversed. Video lectures are viewed by students prior to class, so that in-class time can be devoted to exercises, projects, or, as in this case, discussions. Learners were asked to watch three audiovisual clips in preparation for class. The objective was to determine whether the flipped classroom approach can enhance the learning experience, through better engagement with the students, compared to conventional classroom-based learning. The level of student participation and the level of success were established by means of feedback questionnaires from more than 100 participants and by peer observation. The results are encouraging and demonstrate that this approach is favoured by the students.