
    Fourteenth Biennial Status Report: March 2017 - February 2019


    Multi-modalities in classroom learning environments

    This paper will present initial findings from the second phase of a Horizon 2020 funded project, Managing Affective-learning Through Intelligent Atoms and Smart Interactions (MaTHiSiS). The project focusses on the use of different multi-modalities in classrooms across Europe. The MaTHiSiS learning vision is to develop an integrated learning platform, with re-usable learning components, which will respond to the needs of future education in primary, secondary and special education schools, vocational environments and learning beyond the classroom. The system comprises learning graphs which attach individual learning goals to the platform. Each learning graph is developed from a set of smart learning atoms designed to support learners in achieving progression. Cutting-edge technologies are being used to identify the affect state of learners and ultimately improve learner engagement. Much research identifies how learners engage with learning platforms (cf. [1], [2], [3]). Not only do e-learning platforms have the capability to engage learners, they provide a vehicle for authentic classroom and informal learning [4], enabling ubiquitous and seamless learning [5] within a non-linear environment. When interaction is more enjoyable, learners become more confident and motivated to learn and less anxious, especially those with learning disabilities or at risk of social exclusion [6], [13]. [7] identified the importance of understanding the affect state of learners, who may experience emotions such as 'confusion, frustration, irritation, anger, rage, or even despair' resulting in disengagement from learning. The MaTHiSiS system will use a range of platform agents, such as NAO robots and Kinect sensors, to measure multi-modalities that support detection of the affect state: facial expression analysis and gaze estimation [8], mobile device-based emotion recognition [9], skeleton motion using depth sensors, and speech recognition. Data have been collected using multimodal learning analytics developed for the project, including annotated multimodal recordings of learners interacting with the system, facial expression data and the position of the learner. In addition, interviews with teachers and learners, from mainstream education as well as learners with profound and multiple learning difficulties and autism, have been carried out to measure learner engagement and achievement. Findings from mainstream and special schools based in the United Kingdom will be presented and challenges shared.
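    The learning-graph idea described above can be pictured as a small data model: a learning goal is decomposed into smart learning atoms, and the detected affect state influences which atom is presented next. The sketch below is purely illustrative; the class and field names (LearningGraph, LearningAtom, AffectState) and the adaptation rule are assumptions for exposition, not the MaTHiSiS platform's actual data model or API.

```python
# Illustrative sketch only: hypothetical classes, not the MaTHiSiS implementation.
from dataclasses import dataclass, field
from enum import Enum


class AffectState(Enum):
    ENGAGED = "engaged"
    BORED = "bored"
    FRUSTRATED = "frustrated"


@dataclass
class LearningAtom:
    """A small, reusable unit of learning tied to one competence."""
    name: str
    mastery: float = 0.0  # 0.0 (not started) .. 1.0 (achieved)


@dataclass
class LearningGraph:
    """A learning goal decomposed into smart learning atoms."""
    goal: str
    atoms: list[LearningAtom] = field(default_factory=list)

    def next_atom(self, affect: AffectState) -> LearningAtom:
        # Toy adaptation rule: when the learner appears frustrated,
        # revisit the weakest atom; otherwise push the one closest to mastery.
        unfinished = [a for a in self.atoms if a.mastery < 1.0]
        pick = min if affect is AffectState.FRUSTRATED else max
        return pick(unfinished, key=lambda a: a.mastery)


graph = LearningGraph(
    goal="Add two-digit numbers",
    atoms=[LearningAtom("place value", 0.6), LearningAtom("carrying", 0.2)],
)
print(graph.next_atom(AffectState.FRUSTRATED).name)  # -> "carrying"
```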

    The role of learning theory in multimodal learning analytics

    This study presents the outcomes of a semi-systematic literature review on the role of learning theory in multimodal learning analytics (MMLA) research. Based on previous systematic literature reviews in MMLA and an additional new search, 35 MMLA works were identified that use theory. The results show that MMLA studies do not always discuss their findings within an established theoretical framework. Most of the theory-driven MMLA studies are positioned in the cognitive and affective domains, and the three most frequently used theories are embodied cognition, cognitive load theory and the control–value theory of achievement emotions. Often, the theories are used only to inform the study design, but there is a relationship between the most frequently used theories and the data modalities used to operationalize those theories. Although such studies are rare, the findings indicate that MMLA affordances can, indeed, lead to theoretical contributions to the learning sciences. In this work, we discuss methods of accelerating theory-driven MMLA research and how this acceleration can extend or even create new theoretical knowledge.

    Human behavior understanding for worker-centered intelligent manufacturing

    In a worker-centered intelligent manufacturing system, sensing and understanding the worker's behavior are the primary tasks, which are essential for automatic performance evaluation and optimization, intelligent training and assistance, and human-robot collaboration. In this study, a worker-centered training and assistant system featuring self-awareness and active guidance is proposed for intelligent manufacturing. To understand hand behavior, a method is proposed for complex hand gesture recognition using Convolutional Neural Networks (CNN) with multi-view augmentation and inference fusion, from depth images captured by a Microsoft Kinect. To sense and understand the worker more comprehensively, a multi-modal approach is proposed for worker activity recognition using Inertial Measurement Unit (IMU) signals obtained from a Myo armband and videos from a visual camera. To automatically learn the importance of different sensors, a novel attention-based approach is proposed for human activity recognition using multiple IMU sensors worn at different body locations. To deploy the developed algorithms on the factory floor, a real-time assembly operation recognition system is proposed with fog computing and transfer learning. The proposed worker-centered training and assistant system has been validated and has demonstrated its feasibility and great potential for application in the manufacturing industry for frontline workers. The developed approaches have been evaluated: 1) the multi-view approach outperforms the state of the art on two public benchmark datasets, 2) the multi-modal approach achieves an accuracy of 97% on a worker activity dataset covering 6 activities and achieves the best performance on a public dataset, 3) the attention-based method outperforms state-of-the-art methods on five publicly available datasets, and 4) the developed transfer learning model achieves a real-time recognition accuracy of 95% on a dataset of 10 worker operations.
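    The attention-based fusion over multiple body-worn IMUs mentioned above can be sketched as a learned importance weight per sensor, applied before classification. The snippet below is a minimal PyTorch-style sketch under assumed names and shapes (SensorAttention, feat_dim, num_sensors); it illustrates the general technique of attention-weighted sensor fusion, not the authors' actual implementation.

```python
# Minimal sketch of attention-based fusion over multiple IMU sensors.
# Hypothetical module and shape names; not the paper's implementation.
import torch
import torch.nn as nn


class SensorAttention(nn.Module):
    """Learn a weight per IMU sensor and fuse their feature vectors."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)        # importance score per sensor
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, sensor_feats: torch.Tensor) -> torch.Tensor:
        # sensor_feats: (batch, num_sensors, feat_dim), one row per worn IMU
        weights = torch.softmax(self.score(sensor_feats), dim=1)  # (B, S, 1)
        fused = (weights * sensor_feats).sum(dim=1)               # (B, feat_dim)
        return self.classifier(fused)                             # activity logits


# Example: 4 IMUs worn at different body locations, 64-d features, 6 activities.
model = SensorAttention(feat_dim=64, num_classes=6)
logits = model(torch.randn(8, 4, 64))  # batch of 8 sliding windows
print(logits.shape)  # torch.Size([8, 6])
```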

    2015 Annual Report Transportation Research Center for Livable Communities

    Table of Contents: Messages from the Director and Representatives; TRCLC Mission and Objectives; Center Personnel; Research Investigators; Consortia; Our Research; List of Research Projects; Highlighted Projects; Technology Transfer and Outreach Activities; Student Awards; Upcoming Event

    A survey of technologies supporting design of a multimodal interactive robot for military communication

    Purpose – This paper presents a survey of research into interactive robotic systems with the aim of identifying state-of-the-art capabilities as well as the remaining gaps in this emerging field. Communication is multimodal: multimodality is the representation of many modes, chosen from rhetorical aspects for their communicative potential. The author seeks to define the available automation capabilities in multimodal communication that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform intended to advance the speed and quality of military operational and tactical decision making. Design/methodology/approach – The review begins by presenting key developments in the robotic interaction field, with the objective of identifying the essential technological developments that set the conditions for robotic platforms to function autonomously. After surveying the key aspects of Human Robot Interaction (HRI), Unmanned Autonomous Systems (UAS), visualization, Virtual Environments (VE) and prediction, the paper describes the gaps in the application areas that will require extension and integration to enable prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set the conditions for future success. Findings – Using insights from a balanced cross-section of government, academic and commercial sources that contribute to HRI, a multimodal IRS for military communication is introduced. A multimodal IRS (MIRS) in military communication has yet to be deployed. Research limitations/implications – A multimodal robotic interface for the MIRS is an interdisciplinary endeavour; it is not realistic for one person to master all the expert and related knowledge and skills needed to design and develop such a multimodal interactive robotic interface. In this brief preliminary survey, the author discusses extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area, and multimodal human/military robot communication is the ultimate goal of this research. Practical implications – A multimodal autonomous robot for military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is in a position to exploit opportunities for human–machine teaming (HMT) exposure, while naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. A flexible communications capability that readily adapts to virtual training will greatly enhance planning and mission rehearsals. Social implications – A multimodal communication system based on interaction, perception, cognition and visualization is still missing. Options to communicate, express and convey information in an HMT setting, with multiple options, suggestions and recommendations, will enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission. Originality/value – The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art: what exists and what is missing, what can be done, and the possibilities for extension that would support the military in maintaining effective communication using multimodalities. There is some separate ongoing progress, such as in machine-enabled speech, image recognition, tracking, visualization for situational awareness, and virtual environments, but at this time there is no integrated approach to multimodal human-robot interaction that provides flexible and agile communication. The report briefly introduces the research proposal for a multimodal interactive robot in military communication.

    INSPIRE Newsletter Summer 2017
