Virtual Reference for Video Collections: System Infrastructure, User Interface and Pilot User Study
A new video-based Virtual Reference (VR) tool called VideoHelp was designed and developed to support video navigation escorting, a function that enables librarians to co-navigate a digital video with patrons in a web-based environment. A client/server infrastructure was adopted for the VideoHelp system, and timestamps were used to synchronize the video between librarians and patrons. A pilot usability study of the VideoHelp prototype in video seeking was conducted; the preliminary results showed that the system is easy to learn and use, and that real-time assistance from virtual librarians in video navigation is desirable on a conditional basis.
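The abstract describes timestamp-based synchronization over a client/server infrastructure but gives no implementation detail. A minimal sketch of how such escorting could work, assuming the librarian's playback timestamp is periodically broadcast and each patron seeks when drift exceeds a tolerance (all names and the tolerance value are illustrative, not from the paper):

```python
# Hypothetical sketch of timestamp-based co-navigation: the escorting
# librarian's player position is relayed by a server, and each patron's
# player seeks whenever it drifts past a tolerance. Illustrative only.

DRIFT_TOLERANCE = 1.0  # seconds of drift allowed before a forced seek


class SyncServer:
    """Relays the librarian's playback timestamp to connected patrons."""

    def __init__(self):
        self.patrons = []

    def register(self, patron):
        self.patrons.append(patron)

    def broadcast(self, librarian_time):
        for patron in self.patrons:
            patron.on_sync(librarian_time)


class PatronPlayer:
    """A patron-side player that follows the librarian's position."""

    def __init__(self):
        self.position = 0.0
        self.seeks = 0

    def on_sync(self, librarian_time):
        # Seek only when drift is noticeable, to avoid constant jumps.
        if abs(self.position - librarian_time) > DRIFT_TOLERANCE:
            self.position = librarian_time
            self.seeks += 1


server = SyncServer()
p = PatronPlayer()
server.register(p)
p.position = 12.0       # patron has drifted behind
server.broadcast(15.5)  # librarian is at 15.5 s
print(p.position, p.seeks)  # -> 15.5 1
```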
Adaptive Synchronization of Semantically Compressed Instructional Videos for Collaborative Distance Learning
The increasing popularity of online courses has highlighted the need for collaborative learning tools for student groups. In addition, the introduction of lecture videos into the online curriculum has drawn attention to the disparity in the network resources available to students. We present an e-Learning architecture and adaptation model called AI2TV (Adaptive Interactive Internet Team Video), which allows groups of students to collaboratively view a video in synchrony. AI2TV upholds the invariant that each student will view semantically equivalent content at all times. A semantic compression model is developed to provide instructional videos at different levels of detail to accommodate dynamic network conditions and users' system requirements. We take advantage of the semantic compression algorithm's ability to provide different layers of semantically equivalent video by adapting the client to play at the layer that provides the richest possible viewing experience. Video player actions, like play, pause and stop, can be initiated by any group member, and the results of those actions are synchronized with all the other students. These features allow students to review a lecture video in tandem, facilitating the learning process. Experimental trials show that AI2TV successfully synchronizes instructional videos for distributed students while concurrently optimizing the video quality, even under conditions of fluctuating bandwidth, by adaptively adjusting the quality level for each student while still maintaining the invariant.
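The abstract describes clients adapting to the richest semantically equivalent layer their bandwidth allows. A sketch of that selection step, assuming layers are ordered from richest to leanest; the layer names and bitrates are invented for illustration, not taken from the AI2TV paper:

```python
# Illustrative layer selection (not the AI2TV implementation): pick the
# richest semantically equivalent layer the measured bandwidth sustains.
# Layer names and bitrates are invented numbers.

LAYERS = [  # (name, required kbit/s), ordered richest first
    ("full-motion", 1200),
    ("key-frames-high", 400),
    ("key-frames-low", 150),
    ("slides-only", 40),
]


def choose_layer(measured_kbps):
    """Return the richest layer the measured bandwidth supports."""
    for name, required in LAYERS:
        if measured_kbps >= required:
            return name
    return LAYERS[-1][0]  # fall back to the leanest layer


print(choose_layer(500))  # -> key-frames-high
print(choose_layer(20))   # -> slides-only
```

Because every layer shows semantically equivalent content, switching layers preserves the paper's invariant while trading off visual richness against bandwidth.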
An Immersive Telepresence System using RGB-D Sensors and Head Mounted Display
We present a tele-immersive system that enables people to interact with each other in a virtual world using body gestures in addition to verbal communication. Beyond the obvious applications, including general online conversations and gaming, we hypothesize that our proposed system would be particularly beneficial to education by offering rich visual content and interactivity. One distinct feature is the integration of egocentric pose recognition, which allows participants to use their gestures to demonstrate and manipulate virtual objects simultaneously. This functionality enables the instructor to effectively and efficiently explain and illustrate complex concepts or sophisticated problems in an intuitive manner. The highly interactive and flexible environment can capture and sustain more student attention than the traditional classroom setting and thus delivers a compelling experience to the students. Our main focus here is to investigate possible solutions for the system design and implementation and to devise strategies for fast, efficient computation suitable for visual data processing and network transmission. We describe the technique and experiments in detail and provide quantitative performance results, demonstrating that our system runs comfortably and reliably for different application scenarios. Our preliminary results are promising and demonstrate the potential for more compelling directions in cyberlearning.
Comment: IEEE International Symposium on Multimedia 201
Optimizing Quality for Collaborative Video Viewing
The increasing popularity of distance learning and online courses has highlighted the lack of collaborative tools for student groups. In addition, the introduction of lecture videos into the online curriculum has drawn attention to the disparity in the network resources used by the students. We present an architecture and adaptation model called AI2TV (Adaptive Internet Interactive Team Video), a system that allows geographically dispersed participants, some or all of whom may be disadvantaged in network resources, to collaboratively view a video in synchrony. AI2TV upholds the invariant that each participant will view semantically equivalent content at all times. Video player actions, like play, pause and stop, can be initiated by any of the participants, and the results of those actions are seen by all the members. These features allow group members to review a lecture video in tandem to facilitate the learning process. We employ an autonomic (feedback loop) controller that monitors clients' video status and adjusts the quality of the video according to the resources of each client. We show in experimental trials that our system can successfully synchronize video for distributed clients while, at the same time, optimizing the video quality given actual (fluctuating) bandwidth by adaptively adjusting the quality level for each participant.
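The autonomic (feedback-loop) controller described above monitors client status and adjusts quality per client. One controller step could look like the sketch below, assuming the monitored signal is each client's lag behind the group position; the thresholds and number of quality levels are assumptions, not values from the paper:

```python
# Minimal feedback-loop sketch in the spirit of the autonomic controller:
# step a client's quality level down when it falls behind the group, and
# back up when it keeps up easily. Thresholds are illustrative.

MAX_LEVEL = 3    # 0 = lowest quality, 3 = highest
LAG_DOWN = 2.0   # seconds behind the group before dropping a level
LAG_UP = 0.5     # lag below which we try raising quality again


def adjust_level(level, lag_seconds):
    """One controller iteration: return the client's new quality level."""
    if lag_seconds > LAG_DOWN and level > 0:
        return level - 1  # falling behind: lighten the client's load
    if lag_seconds < LAG_UP and level < MAX_LEVEL:
        return level + 1  # keeping up easily: improve quality
    return level          # within bounds: hold steady


level = 3
for lag in [3.1, 2.5, 0.2, 0.1]:  # simulated lag samples over time
    level = adjust_level(level, lag)
print(level)  # -> 3
```

Because the loop reacts to measured lag rather than predicted bandwidth, it degrades and recovers quality automatically as network conditions fluctuate.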
Computer support for collaborative learning environments
This paper deals with computer support for collaborative learning environments. Our analysis is based on a moderate constructivist view of learning, which emphasizes the need to support learners instructionally in their collaborative knowledge construction. We first illustrate the extent to which the computer can provide tools for supporting collaborative knowledge construction. Secondly, we focus on instruction itself and show the kinds of advanced instructional methods that computer tools may provide for learners, in particular scripts and content-related structure specifications. Furthermore, we discuss learners' prerequisites, such as prior knowledge, and how they must be considered when constructing learning environments.
Mixed reality participants in smart meeting rooms and smart home environments
Human–computer interaction requires modeling of the user. A user profile typically contains preferences, interests, characteristics, and interaction behavior. However, in its multimodal interaction with a smart environment the user displays characteristics that show how the user, not necessarily consciously, verbally and nonverbally provides the smart environment with useful input and feedback. Especially in ambient intelligence environments we encounter situations where the environment supports interaction between the environment, smart objects (e.g., mobile robots, smart furniture) and human participants in the environment. It is therefore useful for the profile to contain a physical representation of the user obtained by multimodal capturing techniques. We discuss the modeling and simulation of interacting participants in a virtual meeting room, we discuss how remote meeting participants can take part in meeting activities, and we offer some observations on translating research results to smart home environments.
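The abstract argues for extending the conventional user profile (preferences, interests, interaction behavior) with a physical representation obtained by multimodal capture. A hypothetical data-structure sketch of such an extended profile; every field name here is illustrative, not taken from the paper:

```python
# Hypothetical extended user profile: conventional profile fields plus a
# physical representation captured by multimodal sensing. All field names
# are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class PhysicalModel:
    """Pose/appearance data obtained from multimodal capture."""
    joint_positions: dict = field(default_factory=dict)  # e.g. "head" -> (x, y, z)
    gaze_target: str = ""  # what the user is currently looking at


@dataclass
class UserProfile:
    preferences: dict = field(default_factory=dict)
    interests: list = field(default_factory=list)
    interaction_history: list = field(default_factory=list)
    physical: PhysicalModel = field(default_factory=PhysicalModel)


profile = UserProfile(interests=["meetings"])
profile.physical.joint_positions["head"] = (0.1, 1.7, 0.3)
print(profile.physical.joint_positions["head"])  # -> (0.1, 1.7, 0.3)
```

Keeping the captured physical state alongside the classic profile fields lets the environment reason jointly about what the user prefers and what the user is physically doing.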