
    Vid2U: facilitating students’ learning through video annotation for tablet device

    Electronic learning (e-learning) applications are widely used in higher-education institutions to deliver and share learning content among students. However, most e-learning applications are developed for offline or online desktop use. Mobile learning (m-learning) was introduced to let learners access learning opportunities via mobile devices such as handheld computers, MP3 players, notebooks, mobile phones, and tablets. Interactive video refers to a technique that allows users to interact with the media instead of watching a static video, and one of the many ways of adding interactive elements to video is through video annotation. Although video annotation is quite common in learning, more effort is still needed on designing annotation facilities for video management in mobile learning environments. Moreover, the application of video annotation on mobile devices is still in its infancy, especially for recent devices such as the iPad, Microsoft Surface, Samsung Galaxy Tab, and others. The objective of the proposed mobile learning application, Vid2U, is to provide the flexibility of accessing and managing video materials anywhere and at any time, making learning even more widely available. Vid2U offers many functions, such as allowing students to view lecture videos and other course-related video materials when absent from class. Students can also add, edit, and delete their lecture notes based on the lecture videos, and search video materials related to all the courses they are taking. Experimental results obtained from surveys show that Vid2U provides a better learning environment for university learners, leading to a deeper level of learning engagement.
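
    The note-management functions described in this abstract (adding, editing, deleting, and searching timestamped notes attached to lecture videos) map onto a simple data model. Below is a minimal sketch in Python; the names (VideoNote, NoteStore) and fields are assumptions made for illustration, not taken from Vid2U itself.

```python
# Illustrative sketch only: one way to model timestamped lecture notes
# attached to lecture videos, with add/edit/delete/search operations.
from dataclasses import dataclass
from typing import Dict, List
import itertools

@dataclass
class VideoNote:
    note_id: int
    video_id: str          # lecture video the note is anchored to
    timestamp_sec: float   # position in the video the note refers to
    text: str

class NoteStore:
    """In-memory store of notes, keyed by note id."""
    def __init__(self) -> None:
        self._notes: Dict[int, VideoNote] = {}
        self._ids = itertools.count(1)

    def add(self, video_id: str, timestamp_sec: float, text: str) -> VideoNote:
        note = VideoNote(next(self._ids), video_id, timestamp_sec, text)
        self._notes[note.note_id] = note
        return note

    def edit(self, note_id: int, new_text: str) -> None:
        self._notes[note_id].text = new_text

    def delete(self, note_id: int) -> None:
        del self._notes[note_id]

    def search(self, keyword: str) -> List[VideoNote]:
        kw = keyword.lower()
        return [n for n in self._notes.values() if kw in n.text.lower()]

store = NoteStore()
n = store.add("CS101-week3.mp4", 754.0, "Revise the heap-sort example here")
store.edit(n.note_id, "Revise the heap-sort complexity proof here")
print(store.search("heap"))
```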

    Semantic annotation in ubiquitous healthcare skills-based learning environments

    This paper describes initial work on developing a semantic annotation system for the augmentation of skills-based learning in healthcare. Scenario-driven skills-based learning takes place in an augmented hospital ward simulation involving a patient simulator known as SimMan. The semantic annotation software enables real-time annotation of these simulations for student debriefing, student self-study, and better analysis of mentors' learning approaches. A description of the developed system is provided along with initial findings and future directions for the work.

    Response Collector: A Video Learning System for Flipped Classrooms

    The flipped classroom has become well known as an effective educational method that swaps the roles of classroom study and homework. In this paper, we propose a video learning system for flipped classrooms, called Response Collector, which enables students to record their responses to preparation videos. Our system provides response visualization so that teachers and students can see what has been understood and what remains in question. We performed a practical user study of our system in a flipped classroom setting. The results show that students preferred the proposed input method over naive methods. Moreover, sharing responses among students helped resolve individual students' questions, and students were satisfied with using our system. Comment: The 2018 International Conference on Advanced Informatics: Concepts, Theory and Application (ICAICTA 2018).
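
    The core idea of collecting timestamped student responses to a preparation video and visualizing them per segment can be sketched very compactly. The sketch below is an assumption-laden illustration, not the paper's system: the "understood"/"question" labels, the one-minute binning, and all names are invented for the example.

```python
# Illustrative sketch: aggregate student responses to a preparation video
# so a teacher can see, per video segment, how many responses signal
# understanding versus open questions.
from collections import Counter
from typing import Dict, List, Tuple

Response = Tuple[float, str, str]  # (timestamp_sec, label, free_text)

def summarize(responses: List[Response], bin_sec: float = 60.0) -> Dict[int, Counter]:
    """Count response labels per bin_sec-long segment of the video."""
    summary: Dict[int, Counter] = {}
    for timestamp, label, _text in responses:
        segment = int(timestamp // bin_sec)
        summary.setdefault(segment, Counter())[label] += 1
    return summary

responses = [
    (35.0, "understood", "Clear explanation of recursion"),
    (95.0, "question", "Why is the base case needed?"),
    (110.0, "question", "Difference between recursion and iteration?"),
]
for segment, counts in sorted(summarize(responses).items()):
    print(f"minute {segment}: {dict(counts)}")
```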

    Automatic semantic video annotation in wide domain videos based on similarity and commonsense knowledgebases

    In this paper, we introduce a novel framework for automatic semantic video annotation. Because the framework detects possible events occurring in video clips, it forms the annotation base of a video search engine. To achieve this, the system has to be able to operate on uncontrolled, wide-domain videos, so all layers are based on generic features. The framework aims to bridge the "semantic gap", the difference between low-level visual features and human perception, by finding videos with similar visual events, analyzing their free-text annotations to find a common theme, and then deciding on the best description for the new video using commonsense knowledgebases. Experiments were performed on wide-domain video clips from the TRECVID 2005 BBC rushes standard database. The results show promising integration between the two layers in finding expressive annotations for the input video, evaluated on the basis of retrieval performance.
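
    The retrieval-then-describe idea in this abstract (find visually similar annotated clips, then derive a description from what their free-text annotations share) can be illustrated with a small sketch. This is not the paper's pipeline: the feature vectors, cosine similarity, and word-frequency pooling below are stand-ins for its generic features, similarity layer, and commonsense reasoning.

```python
# Illustrative sketch: rank annotated corpus clips by feature similarity to a
# query clip, then suggest the terms their free-text annotations share most.
from collections import Counter
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def suggest_annotation(query_feat, corpus_feats, corpus_texts, k=3, top_terms=5):
    """Return the most common words among the k most similar clips' annotations."""
    sims = [cosine(query_feat, f) for f in corpus_feats]
    nearest = np.argsort(sims)[::-1][:k]
    words = Counter()
    for i in nearest:
        words.update(corpus_texts[i].lower().split())
    return [w for w, _ in words.most_common(top_terms)]

rng = np.random.default_rng(0)
corpus_feats = [rng.normal(size=128) for _ in range(4)]
corpus_texts = ["crowd walking city street", "car driving street night",
                "people walking in a street market", "boat on a river"]
query_feat = corpus_feats[0] + 0.1 * rng.normal(size=128)
print(suggest_annotation(query_feat, corpus_feats, corpus_texts))
```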

    MobiFace: A Novel Dataset for Mobile Face Tracking in the Wild

    Face tracking is the crucial first step in applications that analyse target faces over time in mobile settings. However, this problem has received little attention, mainly due to the scarcity of dedicated face tracking benchmarks. In this work, we introduce MobiFace, the first dataset for single face tracking in mobile situations. It consists of 80 unedited live-streaming mobile videos captured by 70 different smartphone users in fully unconstrained environments, with over 95K bounding boxes manually labelled. The videos are carefully selected to cover typical smartphone usage and are annotated with 14 attributes, including 6 newly proposed attributes and 8 commonly seen in object tracking. We evaluate 36 state-of-the-art trackers, including facial landmark trackers, generic object trackers, and trackers that we have fine-tuned or improved. The results suggest that mobile face tracking cannot be solved with existing approaches. In addition, we show that fine-tuning on the MobiFace training data significantly boosts the performance of deep-learning-based trackers, suggesting that MobiFace captures the unique characteristics of mobile face tracking. Our goal is to offer the community a diverse dataset to enable the design and evaluation of mobile face trackers. The dataset, annotations and the evaluation server will be available at https://mobiface.github.io/. Comment: To appear at the 14th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2019).
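
    Benchmarks like this typically score trackers by how well their predicted bounding boxes overlap the manually labelled ones. The sketch below shows the standard intersection-over-union (IoU) success measure used widely in tracking evaluation; it is an assumed illustration, not MobiFace's official evaluation code, and the box format and threshold are choices made for the example.

```python
# Illustrative sketch: score a face tracker against ground-truth boxes with
# the intersection-over-union (IoU) overlap common in tracking benchmarks.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x, y, width, height)

def iou(a: Box, b: Box) -> float:
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def success_rate(pred: List[Box], gt: List[Box], threshold: float = 0.5) -> float:
    """Fraction of frames where the predicted box overlaps ground truth enough."""
    overlaps = [iou(p, g) for p, g in zip(pred, gt)]
    return sum(o >= threshold for o in overlaps) / len(overlaps)

gt = [(100, 80, 50, 50), (105, 82, 50, 50)]
pred = [(98, 78, 52, 52), (140, 120, 50, 50)]
print(success_rate(pred, gt))  # 0.5 -> the tracker kept the face in 1 of 2 frames
```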