
    An interval type-2 fuzzy logic based system for improved instruction within intelligent e-learning platforms

    E-learning is becoming increasingly popular. However, on such platforms (where students and tutors are geographically separated), it is necessary to estimate the degree of students' engagement with the course contents. Such feedback is highly important for assessing teaching quality and adjusting teaching delivery in large-scale online learning platforms. When the number of attendees is large, obtaining overall engagement feedback is essential, but it is also challenging because of the high levels of uncertainty associated with the environments and students. To handle such uncertainties, we present a type-2 fuzzy logic based system that uses visual RGB-D features, including head pose direction and facial expressions, captured from a low-cost but robust 3D camera (Kinect v2) to estimate the engagement degree of students in both remote and on-site education. This system enriches another self-learning type-2 fuzzy logic system which provides instructors with suggestions to vary their teaching methods to suit the level of the course students and improve course instruction and delivery. The proposed dynamic e-learning environment involves on-site students, distance students, and a teacher who delivers the lecture to all attending on-site and remote students. The rules are learned from the students' behaviour and the system is continuously updated, giving the teacher the ability to adapt the instructional approach to varied learner engagement levels. The efficiency of the proposed system has been evaluated through various real-world experiments in the University of Essex iClassroom on a sample of thirty students and six teachers. These experiments demonstrate the ability of the proposed interval type-2 fuzzy logic based system to handle the faced uncertainties and produce superior average learner engagement compared to type-1 fuzzy systems and non-adaptive systems.
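As a rough illustration of the inference style the abstract describes, the sketch below scores engagement from two visual features using interval type-2 memberships. The feature scales, membership parameters, and the two rules are illustrative assumptions, not the authors' actual rule base.

```python
# Minimal interval type-2 fuzzy engagement sketch (illustrative only).
# Feature scales, membership parameters, and rules are assumptions,
# not the paper's actual system.

def it2_trimf(x, a, b, c, blur=0.1):
    """Triangular membership (a, b, c) widened into an interval
    [lower, upper] by a fixed footprint-of-uncertainty `blur`."""
    t = max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))
    return max(0.0, t - blur), min(1.0, t + blur)

def estimate_engagement(head_dev_deg, expr_positivity):
    """head_dev_deg: head pose deviation from frontal, in degrees.
    expr_positivity: facial expression positivity in [0, 1].
    Two toy rules:
      R1: head frontal AND expression positive -> engaged (output 1.0)
      R2: head averted                         -> disengaged (output 0.0)"""
    frontal = it2_trimf(head_dev_deg, -30.0, 0.0, 30.0)
    averted = it2_trimf(head_dev_deg, 20.0, 60.0, 100.0)
    positive = it2_trimf(expr_positivity, 0.3, 1.0, 1.7)

    # Rule firing intervals via min t-norm on the interval bounds.
    r1 = (min(frontal[0], positive[0]), min(frontal[1], positive[1]))
    r2 = averted

    # Simplified type reduction: defuzzify at the lower and upper
    # bounds separately, then average the two crisp outputs.
    def wavg(w1, w2):
        den = w1 + w2
        return (w1 * 1.0 + w2 * 0.0) / den if den else 0.5

    return 0.5 * (wavg(r1[0], r2[0]) + wavg(r1[1], r2[1]))
```

A frontal student with a positive expression (e.g. `estimate_engagement(5.0, 0.9)`) scores near 1, while an averted, neutral one scores near 0; the real system would use proper Karnik-Mendel type reduction and many more rules learned from data.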

    Type-2 Fuzzy Logic based Systems for Adaptive Learning and Teaching within Intelligent E-Learning Environments

    Recent years have witnessed an increased interest in e-learning platforms that incorporate adaptive learning and teaching systems, enabling the creation of learning environments suited to individual student needs. The efficiency of these adaptive educational systems depends on how accurately information about students' characteristics and needs is gathered and examined, and on how that information is processed to form an adaptive learning context. The vast majority of existing adaptive educational systems do not learn from users' behaviours to create white-box models that can handle the high level of uncertainty and can be easily read and analysed by a lay user. The data generated from interactions, such as teacher-learner or learner-system interactions within asynchronous environments, provide great opportunities to realise more adaptive and intelligent e-learning platforms, rather than prescribed pedagogy that reflects the ideas of a few designers and experts. Another limitation of current adaptive educational systems is that most ignore gauging students' engagement levels and mapping them to delivery that matches the students' knowledge and preferred learning styles. It is necessary to estimate the degree of students' engagement with the course contents: such feedback is highly important for assessing teaching quality and adjusting teaching delivery in small- and large-scale online learning platforms. Furthermore, most current adaptive educational systems operate within asynchronous e-learning contexts as self-paced products, in which learners study in their own time and at their own speed, while ignoring synchronous e-learning settings of teacher-led delivery of the learning material over a communication tool in real time.
This thesis presents novel theoretical and practical architectures based on computationally lightweight type-2 fuzzy logic systems (T2FLSs) for lifelong learning and adaptation of learners' and teachers' behaviours in small- and large-scale asynchronous and synchronous e-learning platforms. In small-scale asynchronous and synchronous e-learning platforms, the presented architecture augments an engagement estimation system using a noncontact, low-cost, multiuser 3D sensor (Kinect v2). This sensor captures reliable features, including head pose direction and hybrid facial-expression features, enabling convenient and robust estimation of engagement in small-scale online and on-site learning in an unconstrained, natural environment in which users act freely and move without restrictions. We will present unique real-world experiments in large- and small-scale e-learning platforms carried out by 1,916 users from King Abdul-Aziz and Essex universities in Saudi Arabia and the UK over the course of teaching Excel and PowerPoint, in which the type-2 system learns and adapts to student and teacher behaviour. The type-2 fuzzy system is subjected to extended and varied knowledge, engagement, needs, and high levels of uncertainty in e-learning environments, outperforming the type-1 fuzzy system and the non-adaptive version of the system in terms of improved learning, completion rates, and user engagement.
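At its simplest, the lifelong adaptation described above amounts to re-learning fuzzy rules from observed behaviour. The sketch below is a crude Wang-Mendel-style illustration under assumed inputs (engagement, prior knowledge, both in [0, 1]) and an assumed consequent (a delivery-style label); it is not the thesis' actual adaptation algorithm.

```python
# Crude Wang-Mendel-style rule learning sketch (illustrative only).
# The features, labels, and "latest observation wins" update are
# assumptions, not the thesis' actual adaptation scheme.

def grid_label(x, n_sets=3):
    """Map a [0, 1] feature to the nearest of n_sets fuzzy-set centres
    (0 = low, 1 = medium, 2 = high for the default three sets)."""
    centers = [i / (n_sets - 1) for i in range(n_sets)]
    return min(range(n_sets), key=lambda i: abs(x - centers[i]))

def update_rules(rules, engagement, knowledge, delivery_style):
    """Insert or overwrite one rule:
    IF engagement is L1 AND knowledge is L2 THEN use delivery_style.
    Called after every observed session, so the rule base keeps
    tracking current learner behaviour."""
    rules[(grid_label(engagement), grid_label(knowledge))] = delivery_style
    return rules

rules = {}
update_rules(rules, 0.2, 0.1, "more examples, slower pace")
update_rules(rules, 0.9, 0.8, "advanced exercises")
```

A real system would fire these rules fuzzily rather than by nearest centre, and would weight conflicting observations instead of overwriting them.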

    A Big Bang Big Crunch Type-2 Fuzzy Logic System for Machine Vision-Based Event Detection and Summarization in Real-world Ambient Assisted Living

    Recent years have witnessed the prevalence and abundance of vision sensors in applications such as security surveillance, healthcare, and Ambient Assisted Living (AAL), among others. The aim is to realize intelligent environments capable of detecting users' actions and gestures so that needed services can be provided automatically and instantly, maximizing user comfort and safety while minimizing energy use. However, it is very challenging to automatically detect important events and human behaviour from vision sensors and summarize them in real time, due to the massive data sizes involved in video analysis and the high levels of uncertainty associated with real-world unstructured environments occupied by various users. Machine vision based systems can detect and summarize important information that no other sensor can capture; for example, how much water a user drank and whether or not they had something to eat. However, conventional non-fuzzy methods are not robust enough to recognize the various complex types of behaviour in AAL applications. Fuzzy logic systems (FLSs) are an established means of robustly handling uncertainties in complicated real-world problems. In this thesis, I will present a general recognition and classification framework based on fuzzy logic systems which allows for behaviour recognition and event summarisation using 2D/3D video sensors in AAL applications. I started by investigating a 2D CCTV camera based system, for which I proposed and developed novel interval type-2 fuzzy logic system (IT2FLS) based methods for silhouette extraction and 2D behaviour recognition that outperform traditional approaches on the publicly available Weizmann human action dataset.
I will also present a novel system based on 3D RGB-D vision sensors and Interval Type-2 Fuzzy Logic based Systems (IT2FLSs) generated by the Big Bang Big Crunch (BB-BC) algorithm for real-time automatic detection and summarization of important events and human behaviour. I will present several real-world experiments which were conducted on AAL-related behaviours with various users. It will be shown that the proposed BB-BC IT2FLSs outperform their type-1 FLS (T1FLS) counterparts as well as other conventional non-fuzzy methods, and that the performance improvement rises as the number of subjects increases. It will also be shown that, by utilizing the recognized output activity together with relevant event descriptions (such as video data, timestamp, location, and user identification), detailed events are efficiently summarized and stored in our back-end SQL event database, which provides services including event searching, activity retrieval, and high-definition video playback to the front-end user interfaces.
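The Big Bang Big Crunch step referred to above is a population-based optimiser: each "big bang" scatters candidate solutions around a centre with a shrinking radius, and each "big crunch" collapses them to a fitness-weighted centre of mass. A generic sketch, minimising an arbitrary cost (e.g. the error of a fuzzy rule base) under assumed box constraints, might look like:

```python
import random

def bb_bc_minimize(cost, dim, lo, hi, n_pop=30, n_iter=60):
    """Big Bang Big Crunch optimisation sketch (illustrative only).
    cost: function of a list of `dim` floats to minimise, e.g. the
    classification error of a fuzzy system parameterised by the list."""
    center = [random.uniform(lo, hi) for _ in range(dim)]
    for it in range(1, n_iter + 1):
        # Big Bang: scatter candidates around the centre; the spread
        # shrinks as 1/iteration so the search gradually focuses.
        pop = [[min(hi, max(lo, c + random.gauss(0.0, 1.0) * (hi - lo) / it))
                for c in center] for _ in range(n_pop)]
        # Big Crunch: contract to the fitness-weighted centre of mass
        # (lower cost => larger weight).
        weights = [1.0 / (1e-9 + cost(p)) for p in pop]
        total = sum(weights)
        center = [sum(w * p[d] for w, p in zip(weights, pop)) / total
                  for d in range(dim)]
    return center
```

In the thesis' setting, the vector being optimised would encode IT2FLS membership-function parameters; here it is just a generic box-constrained search.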