An open learning environment for the diagnosis, assistance and evaluation of students based on artificial intelligence
The personalized diagnosis, assistance and evaluation of students in open learning environments can be a challenging task, especially when these processes must take place in real time under classroom conditions. This paper describes the design of an open learning environment under development, designed to monitor the comprehension of students, assess their prior knowledge, build individual learner profiles, provide personalized assistance and, finally, evaluate their performance by using artificial intelligence. A trial with 20 participating students has been performed and displayed promising results.
Evaluation of an intelligent open learning system for engineering education
In computer-assisted education, the continuous monitoring and assessment of the learner is crucial for the delivery of personalized education to be effective. In this paper, we present a pilot application of the Student Diagnosis, Assistance, Evaluation System based on Artificial Intelligence (StuDiAsE), an open learning system for unattended student diagnosis, assistance and evaluation based on artificial intelligence. The system demonstrated in this paper has been designed with engineering students in mind and is capable of monitoring their comprehension, assessing their prior knowledge, building individual learner profiles, providing personalized assistance and, finally, evaluating a learner's performance both quantitatively and qualitatively by means of artificial intelligence techniques. The architecture and user interface of the system are exhibited, the results and feedback received from a pilot application of the system within a theoretical engineering course are demonstrated, and the outcomes are discussed.
An Advanced eLearning Environment Developed for Engineering Learners
Monitoring and evaluating engineering learners through computer-based laboratory exercises is a difficult task, especially under classroom conditions. A complete diagnosis requires the capability to assess both the competence of the learner in using the scientific software and the understanding of the theoretical principles. This monitoring and evaluation needs to be continuous, unobtrusive and personalized in order to be effective. This study presents the results of the pilot application of an eLearning environment developed specifically with engineering learners in mind. The Student Diagnosis, Assistance and Evaluation System based on Artificial Intelligence (StuDiAsE) is an open learning environment that can perform unattended diagnostic, evaluation and feedback tasks based on both quantitative and qualitative parameters. The base architecture of the system, the user interface and its effect on the performance of postgraduate engineering learners are presented.
Comparison of Pre-Trained CNNs for Audio Classification Using Transfer Learning
The paper investigates retraining options and the performance of pre-trained Convolutional Neural Networks (CNNs) for sound classification. CNNs were initially designed for image classification and recognition and were later extended towards sound classification. Transfer learning is a promising paradigm that retrains already trained networks on different datasets. We selected three 'Image'- and two 'Sound'-trained CNNs, namely, GoogLeNet, SqueezeNet, ShuffleNet, VGGish, and YAMNet, and applied transfer learning. We explored the influence of key retraining parameters, including the optimizer, the mini-batch size, the learning rate, and the number of epochs, on the classification accuracy and on the processing time needed, both for sound preprocessing (the preparation of the scalograms and spectrograms) and for CNN training. The UrbanSound8K, ESC-10, and Air Compressor open sound datasets were employed. Using a two-fold criterion based on classification accuracy and time needed, we selected the 'champion' transfer-learning parameter combinations, discussed the consistency of the classification results, and explored possible benefits from fusing the classification estimations. The Sound CNNs achieved better classification accuracy, reaching an average of 96.4% for UrbanSound8K, 91.25% for ESC-10, and 100% for the Air Compressor dataset.
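The retraining strategy described in this abstract (keeping a pre-trained network's feature extractor fixed and training only a new classification head, with the learning rate and number of epochs as key parameters) can be sketched in miniature. The sketch below is purely illustrative and makes no use of the actual CNNs or datasets named above: the "frozen extractor", the toy two-class data, and all parameter values are assumptions standing in for a pre-trained network and a new sound dataset.

```python
import math
import random

random.seed(0)

def frozen_extractor(x):
    """Stand-in for a frozen pre-trained network: a fixed nonlinear map
    whose parameters are never updated during retraining."""
    return [math.tanh(x[0] + 0.5 * x[1]), math.tanh(x[0] - x[1])]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy two-class "new dataset" (e.g. two sound classes), purely illustrative.
data = [([1.0, 1.0], 1), ([0.9, 1.2], 1), ([-1.0, -0.8], 0), ([-1.2, -1.0], 0)]

# Trainable head: a single logistic unit on top of the frozen features.
w, b = [0.0, 0.0], 0.0
lr, epochs = 0.5, 50  # learning rate and epochs: key retraining parameters

def mean_loss():
    """Average cross-entropy of the head over the toy dataset."""
    total = 0.0
    for x, y in data:
        f = frozen_extractor(x)
        p = sigmoid(w[0] * f[0] + w[1] * f[1] + b)
        total -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return total / len(data)

loss_before = mean_loss()
for _ in range(epochs):
    for x, y in data:
        f = frozen_extractor(x)          # features stay fixed
        p = sigmoid(w[0] * f[0] + w[1] * f[1] + b)
        err = p - y                      # gradient of cross-entropy w.r.t. logit
        w[0] -= lr * err * f[0]          # only the head's weights move
        w[1] -= lr * err * f[1]
        b -= lr * err
loss_after = mean_loss()
```

After retraining, `loss_after` is lower than `loss_before` even though the extractor never changed; in the full-scale setting the paper studies, the analogous choice is which layers of GoogLeNet, SqueezeNet, ShuffleNet, VGGish, or YAMNet to freeze and which retraining parameters to sweep.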
Survey on Sound and Video Analysis Methods for Monitoring Face-to-Face Module Delivery
The objective of this work is to identify unobtrusive methodologies that allow the monitoring and understanding of the educational environment during face-to-face activities, through capturing and processing of sound and video signals. It is a survey of applications and techniques that exploit these two signals (sound and video) retrieved in classrooms, offices and other spaces. We categorize such applications based upon the high-level characteristics extracted from the analysis of the low-level features of the sound and video signals. Through the overview of these technologies, we attempt to achieve a degree of understanding of human behavior in a smart classroom, on the part of both the students and the teacher. Additionally, we illustrate open research points for further investigation.