4 research outputs found

    Guidelines for digital storytelling for Arab children

    Children are increasingly exposed to various technologies in teaching and learning, and many types of learning materials have been designed for them, including interactive digital storytelling. In Malaysia, local children have shown clear acceptance of story-based learning materials. However, the situation is somewhat different for Arab children. Because the number of Arab children migrating to Malaysia is increasing, mostly accompanying parents who are pursuing higher studies, these children must also familiarise themselves with the local scenario. Accordingly, this study was initiated to identify their acceptance of story-based learning materials, specifically interactive digital storytelling. The study began proactively by approaching Arab children for feedback on whether they desired interactive digital storytelling; a series of interviews found a strong desire and tendency. The following objectives were then stated: (1) to determine the components of interactive digital storytelling for Arab children, (2) to design and develop a prototype of the interactive digital storytelling, and (3) to observe how Arab children experience the interactive digital storytelling. A user-centered design (UCD) approach was followed to ensure that the objectives were achieved. The components of the interactive digital storytelling were determined by directly involving Arab children and their teachers from three preschools in Changlun and Sintok, and the same applied to determining the contents and interface design, through to prototype development. Once the prototype was ready, user testing was carried out to explore how Arab children experience it. All processes involved various techniques, including observation, interviews, and note-taking. Specifically, the user testing gathered both qualitative and empirical data: qualitative data through observation, and empirical data using the Computer System Usability Questionnaire (CSUQ) tool. After the data were processed, the findings show that Arab children are highly satisfied with the prototype. Scientifically, the developed prototype mirrors the guidelines obtained through the UCD seminars. Hence, positive acceptance of the prototype reflects positive acceptance of the guidelines, which are the main contribution of this study. Besides the guidelines, the developed prototype is also a valuable contribution to the Arab children and their teachers, who will use it as part of their teaching and learning materials.
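
    The satisfaction result above rests on the CSUQ. As a minimal sketch of how such responses are typically scored (the study does not publish its scoring procedure), the following assumes the standard 19-item CSUQ with a 7-point scale and the usual subscale groupings; the sample ratings are hypothetical.

```python
from statistics import mean

def csuq_scores(responses):
    """Score one participant's 19 CSUQ ratings (each 1-7).

    Subscale groupings follow the standard CSUQ: system usefulness
    (items 1-8), information quality (items 9-15), interface quality
    (items 16-18), and the overall score over all 19 items.
    """
    assert len(responses) == 19, "CSUQ has 19 items"
    return {
        "SysUse": mean(responses[0:8]),
        "InfoQual": mean(responses[8:15]),
        "IntQual": mean(responses[15:18]),
        "Overall": mean(responses),
    }

# Hypothetical ratings from one participant, for illustration only.
print(csuq_scores([6, 7, 6, 6, 7, 6, 5, 6, 6, 6, 7, 6, 6, 5, 6, 7, 6, 6, 7]))
```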

    Conceptual model for usable multi-modal mobile assistance during Umrah

    Performing Umrah is highly demanding and takes place in very crowded environments. In response, many efforts have been initiated to overcome the difficulties faced by pilgrims. However, those efforts focus on acquiring an initial perspective and background knowledge before going to Mecca, and findings of a preliminary study show that they do not support multi-modality for user interaction. Nowadays, the computational capabilities of mobile phones enable them to serve people in various aspects of daily life; consequently, mobile phone penetration has increased dramatically in the last decade. Hence, this study aims to propose a comprehensive conceptual model for usable multimodal mobile assistance during Umrah, called Multimodal Mobile Assistance during Umrah (MMA-U). Four (4) supporting objectives were formulated, and the Design Science Research Methodology was adopted. For the usability of MMA-U, a Systematic Literature Review (SLR) indicates ten (10) attributes: usefulness, error rate, simplicity, reliability, ease of use, safety, flexibility, accessibility, attitude, and acceptability. Meanwhile, content and comparative analysis resulted in five (5) components that construct the conceptual model of MMA-U: structural, content composition, design principles, development approach, and technology, together with the design and usability theories. The MMA-U was then reviewed and well accepted by 15 experts. Later, it was incorporated into a prototype called Personal Digital Mutawwif (PDM), developed for the purpose of a user test in the field. The findings indicate that PDM facilitates the execution of Umrah and successfully meets pilgrims' needs and expectations. The pilgrims were satisfied, felt that they needed PDM, and would recommend it to their friends, which means that the use of PDM is safe and suitable while performing Umrah. In conclusion, the theoretical contribution, the conceptual model of MMA-U, provides guidelines for developing multimodal mobile content applications for use during Umrah.
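
    To make the SLR result above concrete, here is a minimal sketch of how the ten usability attributes might be aggregated into a single score when evaluating a prototype such as PDM. The attribute names come from the abstract; the 1-5 rating scale, the unweighted mean, and the function name are illustrative assumptions, not the thesis's instrument.

```python
USABILITY_ATTRIBUTES = [
    "usefulness", "error rate", "simplicity", "reliability", "ease of use",
    "safety", "flexibility", "accessibility", "attitude", "acceptability",
]

def usability_score(ratings):
    """Unweighted mean over the ten SLR attributes, each rated 1-5."""
    missing = [a for a in USABILITY_ATTRIBUTES if a not in ratings]
    if missing:
        raise ValueError(f"missing ratings for: {missing}")
    return sum(ratings[a] for a in USABILITY_ATTRIBUTES) / len(USABILITY_ATTRIBUTES)

# Hypothetical expert ratings, for illustration only.
print(usability_score({a: 4 for a in USABILITY_ATTRIBUTES}))
```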

    Unsupervised methods in multilingual and multimodal semantic modeling

    In the first part of this project, independent component analysis (ICA) was applied to extract word clusters from two Farsi corpora. Both word-document and word-context matrices were considered for extracting such clusters. Applying ICA to the word-document matrices of the two corpora led to the detection of syntagmatic word clusters, while the word-context matrix yielded both syntagmatic and paradigmatic word clusters. We also discuss some potential benefits of this automatically extracted thesaurus. In such a thesaurus, a word is defined by other words, without being connected to outer physical objects. To fill this gap, philosophers have proposed symbol grounding as a mechanism that might connect words to their physical referents: if words are properly connected to their referents, their meaning might be realised. Once this objective is achieved, a promising new horizon would open in the realm of artificial intelligence. In the second part of the project, we offer a simple but novel method for grounding words based on features from the visual modality. Firstly, indexical grounding is implemented; in this naïve symbol grounding method, a word is characterised using video indexes as its context. Secondly, the indexical word vectors are normalised according to features calculated from motion videos; this multimodal fusion is referred to as pattern grounding. In addition, the indexical word vectors are normalised using randomly generated data instead of the original motion features; this third case is called randomized grounding. These three cases of symbol grounding are compared in terms of translation performance. Besides that, word clusters are extracted by comparing vector distances and from the dendrograms generated by an agglomerative hierarchical clustering method. We observed that pattern grounding excelled over indexical grounding in the translation of motion-annotated words, while randomized grounding deteriorated the translation significantly. Moreover, pattern grounding culminated in clusters in which each word fit semantically with the other members, whereas with indexical grounding some closely related words were dispersed into arbitrary clusters.
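
    As a minimal sketch of the first part's pipeline, the following applies scikit-learn's FastICA to a word-document matrix and reads a word cluster off each independent component. A toy English corpus stands in for the Farsi corpora, and the component count and cluster-extraction rule (largest absolute loadings) are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.text import CountVectorizer

# Toy corpus standing in for the Farsi corpora used in the project.
docs = [
    "cats chase mice", "dogs chase cats", "mice eat cheese",
    "stocks rise fast", "markets fall today", "stocks and markets rise",
]

# Word-document matrix: rows are words, columns are documents.
vec = CountVectorizer()
X = vec.fit_transform(docs).toarray().T.astype(float)

# Treat each word's document profile as a mixed signal and unmix with ICA.
ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X)                    # shape: (n_words, n_components)

# Read a cluster off each component: the words with the largest loadings.
words = np.array(vec.get_feature_names_out())
for k in range(S.shape[1]):
    top = words[np.argsort(-np.abs(S[:, k]))[:5]]
    print(f"component {k}: {', '.join(top)}")
```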

    Audio-coupled video content understanding of unconstrained video sequences

    Unconstrained video understanding is a difficult task. The main aim of this thesis is to recognise the nature of the objects, activities, and environment in a given video clip using both audio and video information. Traditionally, audio and video information have not been applied together to solve such a complex task, and for the first time we propose, develop, implement, and test a new framework of multi-modal (audio and video) data analysis for context understanding and labelling of unconstrained videos. The framework relies on feature selection techniques and introduces a novel algorithm (PCFS) that is faster than the well-established SFFS algorithm. We use the framework to study the benefits of combining audio and video information in a number of different problems. We begin by developing two independent content recognition modules. The first is based on image sequence analysis alone and uses a range of colour, shape, texture, and statistical features from image regions, with a trained classifier, to recognise the identity of the objects, activities, and environment present. The second module uses audio information only and recognises activities and environment. Both approaches are preceded by detailed pre-processing to ensure that correct video segments containing both audio and video content are present, and that the developed system is robust to changes in camera movement, illumination, random object behaviour, etc. For both audio and video analysis, we use a hierarchical approach of multi-stage classification, so that difficult classification tasks can be decomposed into simpler and smaller ones. When combining both modalities, we compare fusion techniques at different levels of integration and propose a novel algorithm that combines the advantages of both feature-level and decision-level fusion. The analysis is evaluated on a large amount of test data comprising unconstrained videos collected for this work. Finally, we propose a decision correction algorithm which shows that further steps towards combining multi-modal classification information with semantic knowledge generate the best possible results.
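
    To illustrate the two integration levels compared in this work, here is a minimal sketch, not the thesis code: feature-level fusion concatenates the per-modality descriptors before a single classifier, while decision-level fusion averages the class-probability outputs of independently trained per-modality classifiers. The features, labels, and classifier choice are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X_audio = rng.normal(size=(200, 20))    # placeholder audio features
X_video = rng.normal(size=(200, 40))    # placeholder video features
y = rng.integers(0, 3, size=200)        # placeholder class labels

idx_train, idx_test = train_test_split(np.arange(200), random_state=0)

# Feature-level fusion: one classifier over the concatenated descriptors.
X_all = np.hstack([X_audio, X_video])
early = RandomForestClassifier(random_state=0).fit(X_all[idx_train], y[idx_train])
y_early = early.predict(X_all[idx_test])

# Decision-level fusion: average the class-probability outputs of two
# independently trained per-modality classifiers, then take the argmax.
clf_a = RandomForestClassifier(random_state=0).fit(X_audio[idx_train], y[idx_train])
clf_v = RandomForestClassifier(random_state=0).fit(X_video[idx_train], y[idx_train])
proba = (clf_a.predict_proba(X_audio[idx_test])
         + clf_v.predict_proba(X_video[idx_test])) / 2
y_late = proba.argmax(axis=1)
```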
