
    Guidelines for digital storytelling for Arab children

    Children are increasingly exposed to various technologies in teaching and learning, and many kinds of learning materials have been designed, including interactive digital storytelling. In Malaysia, local children are already familiar with story-based learning materials. The situation is slightly different for Arab children: as the number of Arab children migrating to Malaysia grows, following parents who are pursuing higher studies, they must also familiarize themselves with the local scenario. Accordingly, this study was initiated to identify their acceptance of story-based learning materials, specifically interactive digital storytelling. The study began proactively by approaching Arab children for feedback on whether they have any desire for interactive digital storytelling; a series of interviews found a strong desire and tendency. The following objectives were then stated: (1) to determine the components of interactive digital storytelling for Arab children, (2) to design and develop a prototype of the interactive digital storytelling, and (3) to observe how Arab children experience the interactive digital storytelling. A user-centered design (UCD) approach was followed to ensure that the objectives were achieved. The components of the interactive digital storytelling were determined by directly involving Arab children and their teachers from three preschools in Changlun and Sintok, and the same applied to determining the contents and interface design through to prototype development. With the prototype ready, user testing was carried out to explore how Arab children experience it. All processes involved various techniques, including observation, interviews, and note-taking. Specifically, the user testing involved qualitative and empirical data. Qualitative data were gathered through observation, while the empirical data were gathered using the Computer System Usability Questionnaire (CSUQ). Having processed the data, the findings show that Arab children are highly satisfied with the prototype. Scientifically, the developed prototype mirrors the guidelines obtained through the UCD seminars; hence, positive acceptance of the prototype reflects positive acceptance of the guidelines, the main contribution of this study. Besides the guidelines, the developed prototype is also a valuable contribution to the Arab children and their teachers, who will use it as part of their teaching and learning materials
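    The CSUQ mentioned above is a standard usability instrument whose items are averaged into subscale scores. A minimal scoring sketch follows; it assumes the common 19-item version with its usual item groupings, and the response values are invented for illustration.

```python
from statistics import mean

# Hypothetical responses from one participant: 19 CSUQ items on a 7-point scale
# (values invented for illustration).
responses = [2, 1, 2, 3, 1, 2, 2, 1, 3, 2, 2, 1, 2, 3, 2, 1, 2, 2, 1]

def csuq_scores(items):
    """Subscale means for the 19-item CSUQ, using its usual item groupings."""
    return {
        "SYSUSE":    mean(items[0:8]),    # items 1-8: system usefulness
        "INFOQUAL":  mean(items[8:15]),   # items 9-15: information quality
        "INTERQUAL": mean(items[15:18]),  # items 16-18: interface quality
        "OVERALL":   mean(items),         # all 19 items
    }

print(csuq_scores(responses))
```

    A lower or higher score reads as better depending on how the scale anchors are coded; studies with children often reverse-code so that higher means more satisfied.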

    Conceptual model for usable multi-modal mobile assistance during Umrah

    Performing Umrah is very demanding and takes place in very crowded environments. In response, many efforts have been initiated to overcome the difficulties faced by pilgrims. However, those efforts focus on acquiring an initial perspective and background knowledge before going to Mecca, and findings of a preliminary study show that they do not support multi-modality for user interaction. Nowadays, the computational capabilities of mobile phones enable them to serve people in various aspects of daily life; consequently, mobile phone penetration has increased dramatically in the last decade. Hence, this study aims to propose a comprehensive conceptual model for usable multimodal mobile assistance during Umrah, called Multimodal Mobile Assistance during Umrah (MMA-U). Four (4) supporting objectives were formulated, and the Design Science Research Methodology was adopted. For the usability of MMA-U, a Systematic Literature Review (SLR) indicates ten (10) attributes: usefulness, errors rate, simplicity, reliability, ease of use, safety, flexibility, accessibility, attitude, and acceptability. Meanwhile, content and comparative analysis result in five (5) components that construct the conceptual model of MMA-U: structural, content composition, design principles, development approach, and technology, together with the design and usability theories. The MMA-U model was then reviewed and well accepted by 15 experts, and later incorporated into a prototype called Personal Digital Mutawwif (PDM), developed for the purpose of a user test in the field. The findings indicate that PDM facilitates the execution of Umrah and successfully meets pilgrims' needs and expectations. The pilgrims were satisfied and felt that they need to have PDM; in fact, they would recommend PDM to their friends, which means that the use of PDM is safe and suitable while performing Umrah. In conclusion, the theoretical contribution, the conceptual model of MMA-U, provides guidelines for developing multimodal-content mobile applications for use during Umrah
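    The ten usability attributes from the SLR above lend themselves to a simple evaluation checklist. The sketch below pairs each attribute with a rating and flags weak spots; the rating scale, threshold, and values are invented for illustration, while the attribute names come from the abstract.

```python
# The ten usability attributes the SLR identified, rated here on an
# invented 1-5 scale for a hypothetical prototype.
USABILITY_ATTRIBUTES = [
    "usefulness", "errors rate", "simplicity", "reliability", "ease of use",
    "safety", "flexibility", "accessibility", "attitude", "acceptability",
]

def usability_profile(ratings):
    """Pair each attribute with its rating and flag those below a threshold."""
    assert len(ratings) == len(USABILITY_ATTRIBUTES)
    profile = dict(zip(USABILITY_ATTRIBUTES, ratings))
    weak = [a for a, r in profile.items() if r < 3]  # attributes needing work
    return profile, weak

profile, weak = usability_profile([5, 4, 5, 4, 5, 5, 2, 4, 5, 4])
print(weak)  # only the flexibility rating fell below the threshold here
```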

    A Voice and Pointing Gesture Interaction System for Supporting Human Spontaneous Decisions in Autonomous Cars

    Autonomous cars are expected to improve road safety, traffic, and mobility. It is projected that in the next 20-30 years fully autonomous vehicles will be on the market. Advances in the research and development of this technology will allow the disengagement of humans from the driving task, which will become the responsibility of the vehicle's intelligence. In this scenario, new vehicle interior designs are proposed, enabling more flexible human-vehicle interactions inside them. In addition, as some important stakeholders propose, control elements such as the steering wheel and the accelerator and brake pedals may no longer be needed. However, this user-control disengagement is one of the main issues related to user acceptance of this technology: users do not seem comfortable with the idea of giving all decision power to the vehicle. In addition, there can be location-awareness situations where the user makes a spontaneous decision and requires some type of vehicle control, such as stopping at a particular point of interest or taking a detour from the pre-calculated autonomous route of the car. Vehicle manufacturers maintain the steering wheel as a control element, allowing the driver to take over the vehicle if needed or wanted, which constrains the previously mentioned human-vehicle interaction flexibility. Thus, there is an unsolved dilemma between providing users enough control over the autonomous vehicle and route so they can make spontaneous decisions, and interaction flexibility inside the car. This dissertation proposes a voice and pointing-gesture human-vehicle interaction system to solve this dilemma. Voice and pointing gestures have been identified as natural interaction techniques to guide and command mobile robots, potentially providing the needed user control over the car; moreover, they can be executed anywhere inside the vehicle, enabling interaction flexibility. The objective of this dissertation is to provide a strategy to support this system. To this end, a method based on pointing-ray intersections is developed to compute the point of interest (POI) that the user is pointing to. Simulation results show that this POI computation method outperforms the traditional ray-casting-based method by 76.5% in cluttered environments and 36.25% in combined cluttered and non-cluttered scenarios. The whole system is developed and demonstrated using a robotics simulator framework. The simulations show how voice and pointing commands performed by the user update the predefined autonomous path, based on the recognized command semantics. In addition, a dialog feedback strategy is proposed to resolve conflicting situations such as ambiguity in the POI identification. This additional step is able to resolve all the previously mentioned POI computation inaccuracies, and it allows the user to confirm, correct, or reject the performed commands in case the system misunderstands them
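    The general idea behind a rays-intersection POI estimate can be sketched for the two-ray case: since pointing rays rarely intersect exactly in 3D, the POI is taken as the midpoint of their common perpendicular. This is a sketch of that geometric idea, not the dissertation's exact method, and the example rays are invented.

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _unit(a):
    n = math.sqrt(_dot(a, a))
    return [x / n for x in a]

def poi_from_rays(o1, d1, o2, d2):
    """Closest point between two pointing rays, taken as the midpoint of
    their common perpendicular. Returns None for (near-)parallel rays."""
    d1, d2 = _unit(d1), _unit(d2)
    w0 = [a - b for a, b in zip(o1, o2)]
    b = _dot(d1, d2)
    denom = 1.0 - b * b
    if denom < 1e-9:                # rays (almost) parallel: no unique POI
        return None
    d, e = _dot(d1, w0), _dot(d2, w0)
    s = (b * e - d) / denom         # parameter along ray 1
    t = (e - b * d) / denom         # parameter along ray 2
    p1 = [o + s * u for o, u in zip(o1, d1)]
    p2 = [o + t * u for o, u in zip(o2, d2)]
    return [(a + b) / 2.0 for a, b in zip(p1, p2)]

# Two invented rays (e.g. head and finger) converging at the point (1, 1, 0):
p = poi_from_rays([0, 0, 0], [1, 1, 0], [2, 0, 0], [-1, 1, 0])
print(p)  # ≈ [1.0, 1.0, 0.0]
```

    Ray casting, by contrast, intersects a single ray with scene geometry, which degrades in cluttered scenes where many objects lie along the ray.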

    Integrated Framework Design for Intelligent Human Machine Interaction

    Human-computer interaction, sometimes referred to as man-machine interaction, is a concept that emerged together with computers, or more generally machines. The methods by which humans interact with computers have come a long way, and new designs and technologies appear every day. However, computer systems and complex machines are often only technically successful; most of the time users find them confusing to use, so such systems are never used efficiently. Therefore, building sophisticated machines and robots is not the only thing to address; more effort should be put into making these machines simple for all kinds of users and generic enough to accommodate different types of environments. Hence the emergence of intelligent human-computer interaction modules. In this work, we aim to implement a generic framework (referred to as the CIMF framework) that allows the user to control the synchronized and coordinated cooperative work that a set of robots can perform. Three robots are involved so far: two manipulators and one mobile robot. The framework should be generic enough to be hardware independent and to allow the easy integration of new entities and modules. We also aim to implement the building blocks of the intelligent manufacturing cell that communicates with the framework via intelligent and advanced human-computer interaction techniques. Three techniques are addressed: interface-, audio-, and visual-based interaction
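    The hardware-independence goal above is typically achieved by putting every robot behind a common interface so that coordination code never touches device-specific details. A minimal sketch of that pattern follows; the class and method names are illustrative, not the CIMF framework's actual API.

```python
from abc import ABC, abstractmethod

class RobotEntity(ABC):
    """Common interface: the framework only ever sees this abstraction."""
    @abstractmethod
    def execute(self, command: str) -> str: ...

class Manipulator(RobotEntity):
    def execute(self, command):
        return f"manipulator: {command}"

class MobileRobot(RobotEntity):
    def execute(self, command):
        return f"mobile robot: {command}"

class Framework:
    def __init__(self):
        self.entities = {}

    def register(self, name, entity: RobotEntity):
        self.entities[name] = entity          # new hardware plugs in here

    def broadcast(self, command):
        """Coordinated work: send one command to all registered robots."""
        return [e.execute(command) for e in self.entities.values()]

cell = Framework()
cell.register("arm1", Manipulator())
cell.register("arm2", Manipulator())
cell.register("rover", MobileRobot())
print(cell.broadcast("move to station A"))
```

    Interaction modules (interface-, audio-, or visual-based) would sit on top of such a framework, translating user input into the commands it broadcasts.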

    An intelligent user interface model for contact centre operations

    Contact Centres (CCs) are at the forefront of interaction between an organisation and its customers. Currently, 17 percent of all inbound calls are not resolved on the first call by the first agent attending to them. This is due to the inability of contact centre agents (CCAs) to diagnose customer queries and find adequate solutions effectively and efficiently. The aim of this research is to develop an intelligent user interface (IUI) model to support and improve CC operations. A literature review of existing IUI architectures, model-based design, and existing CC software, together with a field study of CCs, resulted in the design of an IUI model for CCs. The proposed IUI model is described in terms of its architecture, component-level design, and interface design. An IUI prototype has been developed as a proof of concept of the proposed model and was evaluated to determine to what extent it supports problem identification and query resolution. User testing, incorporating eye tracking and a post-test questionnaire, was used to determine the usability and usefulness of the prototype. The results of this evaluation show that users were highly satisfied with the task support and query-resolution assistance provided by the IUI prototype. This research resulted in the design of an IUI model for the domain of CCs, which can be used to assist the development of CC applications incorporating IUIs. Use of the proposed IUI model is expected to support and enhance the effectiveness and efficiency of CC operations. Further research, in the form of a longitudinal study, is needed to determine the impact of IUIs in the CC domain
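    One kind of query-resolution support such an IUI might offer is ranking knowledge-base entries against the customer's query so the agent sees candidate solutions immediately. The sketch below uses naive word overlap; the knowledge-base entries and scoring are invented for illustration and are not the model's actual components.

```python
# Invented knowledge base: issue description -> suggested resolution.
KNOWLEDGE_BASE = {
    "reset password": "Guide the caller through the self-service password reset.",
    "billing dispute": "Open a billing case and escalate to the billing team.",
    "no network signal": "Check for outages in the caller's area, then reprovision the SIM.",
}

def suggest_solutions(query, top_n=2):
    """Rank knowledge-base entries by word overlap with the query."""
    words = set(query.lower().split())
    scored = [(len(words & set(k.split())), k, v) for k, v in KNOWLEDGE_BASE.items()]
    scored = [s for s in scored if s[0] > 0]   # drop entries with no overlap
    scored.sort(reverse=True)                  # highest overlap first
    return [(k, v) for _, k, v in scored[:top_n]]

print(suggest_solutions("customer cannot reset her password"))
```

    A production system would replace the overlap score with proper retrieval, but the agent-facing idea is the same: surface likely resolutions during the call.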

    A Person-Centric Design Framework for At-Home Motor Learning in Serious Games

    In motor learning, real-time multimodal feedback is a critical element in guided training. Serious games have been introduced as a platform for at-home motor training due to their highly interactive and multimodal nature. This dissertation explores the design of a multimodal environment for at-home training in which an autonomous system observes and guides the user in place of a live trainer, providing real-time assessment, feedback, and difficulty adaptation as the subject masters a motor skill. After an in-depth review of the latest solutions in this field, this dissertation proposes a person-centric approach to the design of this environment, in contrast to the standard techniques implemented in related work, to address many of their limitations. The unique advantages and restrictions of this approach are presented in the form of a case study in which a system entitled the "Autonomous Training Assistant," consisting of both hardware and software for guided at-home motor learning, is designed and adapted for a specific individual and trainer. In this work, the design of an autonomous motor learning environment is approached from three areas: motor assessment, multimodal feedback, and serious game design. For motor assessment, a three-dimensional assessment framework is proposed, comprising two spatial (posture, progression) and one temporal (pacing) domains of real-time motor assessment. For multimodal feedback, a rod-shaped device called the "Intelligent Stick" is combined with an audio-visual interface to provide feedback to the subject in three domains (audio, visual, haptic). Feedback domains are mapped to modalities, and feedback is provided whenever the user's performance deviates from the ideal performance level by an adaptive threshold. Approaches for multimodal integration and feedback fading are discussed. Finally, a novel approach to stealth adaptation in serious game design is presented. This approach allows serious games to incorporate motor tasks in a more natural way, facilitating self-assessment by the subject. Three different stealth adaptation approaches are presented and evaluated using the flow-state ratio metric. The dissertation concludes with directions for future work on the integration of stealth adaptation techniques across the field of exergames
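    The deviation-triggered feedback loop described above can be sketched in a few lines: each assessment domain maps to a modality, a cue fires only when the deviation exceeds the threshold, and the threshold relaxes as performance improves (feedback fading). The domain-to-modality mapping, numbers, and fading rule here are invented for illustration.

```python
# Assumed mapping of assessment domains to feedback modalities (illustrative).
DOMAIN_TO_MODALITY = {"posture": "haptic", "progression": "visual", "pacing": "audio"}

class FeedbackController:
    def __init__(self, threshold=0.2, fade=1.05):
        self.threshold = threshold
        self.fade = fade  # threshold relaxes as the user improves (feedback fading)

    def step(self, deviations):
        """deviations: dict domain -> |actual - ideal|, normalized to [0, 1].
        Returns the cues to emit this frame, keyed by modality."""
        cues = {DOMAIN_TO_MODALITY[d]: dev
                for d, dev in deviations.items() if dev > self.threshold}
        if not cues:                      # good performance: fade the feedback
            self.threshold = min(1.0, self.threshold * self.fade)
        return cues

fc = FeedbackController()
print(fc.step({"posture": 0.5, "progression": 0.1, "pacing": 0.3}))
# haptic and audio cues fire; the visual channel stays silent
```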

    Unsupervised methods in multilingual and multimodal semantic modeling

    Get PDF
    In the first part of this project, independent component analysis (ICA) was applied to extract word clusters from two Farsi corpora. Both word-document and word-context matrices were considered for extracting such clusters. Applying ICA to the word-document matrices extracted from these two corpora led to the detection of syntagmatic word clusters, while using the word-context matrix resulted in the extraction of both syntagmatic and paradigmatic word clusters. Furthermore, we discuss some potential benefits of this automatically extracted thesaurus. In such a thesaurus, a word is defined by other words without being connected to outer physical objects. To fill this gap, symbol grounding has been proposed by philosophers as a mechanism that might connect words to their physical referents. From their point of view, if words are properly connected to their referents, their meaning might be realized; once this objective is achieved, a promising new horizon would open in the realm of artificial intelligence. In the second part of the project, we offer a simple but novel method for grounding words based on features coming from the visual modality. First, indexical grounding is implemented: in this naïve symbol-grounding method, a word is characterized using video indexes as its context. Second, such indexical word vectors are normalized according to the features calculated for motion videos; this multimodal fusion is referred to as pattern grounding. In addition, the indexical word vectors are normalized using randomly generated data instead of the original motion features; this third case is called randomized grounding. These three cases of symbol grounding are compared in terms of translation performance. Besides that, word clusters were extracted by comparing vector distances and from the dendrograms generated using an agglomerative hierarchical clustering method. We observed that pattern grounding excelled over indexical grounding in the translation of motion-annotated words, while randomized grounding deteriorated the translation significantly. Moreover, pattern grounding culminated in the formation of clusters in which each word fits semantically with the other members, while with indexical grounding some closely related words dispersed into arbitrary clusters
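    The word-context matrix that the first part of the project starts from is a co-occurrence table: each word is described by counts of the words appearing within a window around it. A minimal construction sketch follows; the Farsi corpora are replaced by invented English sentences, and ICA would then be applied to (a weighted version of) such a matrix to extract component-based word clusters.

```python
from collections import defaultdict

# Toy corpus standing in for the project's Farsi corpora (invented sentences).
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
]

def word_context_matrix(sentences, window=1):
    """Count co-occurrences within +/-window words; rows are target words,
    columns (dict keys) are context words."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in sentences:
        tokens = s.split()
        for i, w in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[w][tokens[j]] += 1
    return {w: dict(ctx) for w, ctx in counts.items()}

m = word_context_matrix(corpus)
print(m["chased"])  # both animals appear as contexts of "chased"
```

    On the word-document variant, the same word is instead described by the documents it occurs in, which is why that representation tends to surface syntagmatic (topical) rather than paradigmatic clusters.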