Virtual Field Trip via Digital Storytelling
Digital storytelling is the practice of combining digital content such as 3-dimensional images, text, sound, and video to create a short story. It sits at the intersection of the old art of storytelling and access to powerful technologies. This project is a step towards exploring the development and effectiveness of digital storytelling, and will hopefully ignite motivation and encourage others to tap into their own interests and skills, develop their own digital stories, and expand ICT usage in this country. School children look forward to traditional field trips, but such trips are costly. A virtual field trip (VFT) aims to reduce, if not eliminate, the constraints that traditional field trips face, such as money, time, energy, resources, distance and inaccessible areas. The VFT covers only selected areas of the KL Bird Park: partly to fit the project time frame and partly because, although the park is not large, some areas are unsuitable for taking panoramic pictures. The development of the VFT is adapted from the QTVR Creation Steps by Kitchens (2006). The procedure consists of defining the problem statements and goals, reviewing the literature, creating image content by taking photographs at the site, stitching the photographs into QTVR nodes, designing and constructing the prototype, inserting interactivity such as hotspots, delivering the output and, finally, evaluation. The final output of the project is the KL Bird Park Virtual Field Trip, which consists of photo-based panoramic images, one for each scene at the site, linked to one another, with hotspots placed on the panoramic images that reveal information about the birds with a single click. An informal evaluation of the final output showed an overwhelmingly positive response and acceptance: all of the respondents would like to see more VFTs of this type in the future.
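The hotspot interaction described above can be sketched in code. The following is a minimal, hypothetical hit-test for hotspots placed on a cylindrical panorama node; the bird names, coordinates and panorama width are invented for illustration and are not taken from the actual KL Bird Park VFT.

```python
import math

# Hypothetical hotspots on a single QTVR-style panorama node, in panorama
# pixel coordinates. Names, positions and radii are illustrative only.
HOTSPOTS = [
    {"name": "Rhinoceros Hornbill", "x": 420, "y": 310, "radius": 40},
    {"name": "Flamingo", "x": 1280, "y": 520, "radius": 60},
]
PANORAMA_WIDTH = 2048  # a 360-degree node wraps around horizontally


def hit_test(click_x, click_y, hotspots=HOTSPOTS, width=PANORAMA_WIDTH):
    """Return the name of the hotspot under a click, or None.

    Horizontal distance wraps around because the panorama is cylindrical:
    a click near the right edge can hit a hotspot near the left edge.
    """
    for spot in hotspots:
        dx = abs(click_x - spot["x"])
        dx = min(dx, width - dx)  # wrap-around horizontal distance
        dy = click_y - spot["y"]
        if math.hypot(dx, dy) <= spot["radius"]:
            return spot["name"]
    return None
```

A click inside a hotspot's radius returns that bird's entry, which the viewer can then display as an information card.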
Analysis of Visualisation and Interaction Tools
This document provides an in-depth analysis of the visualisation and interaction tools employed in the context of Virtual Museums. The analysis is required to identify and design the tools and the different components that will be part of the Common Implementation Framework (CIF). The CIF will be the basis of the web-based services and tools that support the development of Virtual Museums, with particular attention to online Virtual Museums. The main goal is to provide stakeholders and developers with a useful platform to support and help them in the development of their projects, regardless of the nature of the project itself. The design of the CIF is based on an analysis of the typical workflow of the V-MUST partners and their perceived limitations of current technologies, and also on the results of the V-MUST technical questionnaire (presented in Deliverable 4.1). Based on these two sources of information, we have selected some important tools (mainly visualisation tools) and services, and we elaborate initial guidelines and ideas for the design and development of the CIF, which shall provide a technological foundation for the V-MUST Platform, together with the V-MUST repository or repositories and the additional services defined in WP4. Two state-of-the-art reports, one on user interface design and one on visualisation technologies, are also provided in this document.
The Video Browser Showdown: a live evaluation of interactive video search tools
The Video Browser Showdown evaluates the performance of exploratory video search tools on a common data set, in a common environment, and in the presence of an audience. The main goal of this competition is to enable researchers in the field of interactive video search to directly compare their tools at work. In this paper, we present results from the second Video Browser Showdown (VBS2013) and describe and evaluate the tools of all participating teams in detail. The evaluation results give insight into how exploratory video search tools are used and how they perform in direct comparison. Moreover, we compare the achieved performance to results from another user study in which 16 participants employed a standard video player to complete the same tasks as performed in VBS2013. This comparison shows that the sophisticated tools enable better performance in general, but that for some tasks common video players provide similar performance and can even outperform the expert tools. Our results highlight the need for further improvement of professional tools for interactive search in videos.
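As an illustration of how such a live competition can quantify performance, the sketch below implements a hypothetical known-item-search scoring rule: full points for an instant correct submission, decaying linearly over the time limit, with a flat penalty per wrong submission. This is not the official VBS formula, only a stand-in showing the shape of such a metric.

```python
def task_score(elapsed_s, time_limit_s, wrong_submissions,
               max_score=100, penalty=10):
    """Hypothetical known-item-search score: linear decay over the time
    limit plus a flat penalty per wrong submission (not the VBS formula)."""
    if elapsed_s > time_limit_s:
        return 0  # the task expired before a correct submission
    base = max_score * (1 - elapsed_s / time_limit_s)
    return max(0, round(base - penalty * wrong_submissions))
```

Under this rule, finding the target at half time with two wrong attempts yields 30 of 100 points, which rewards both speed and precision.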
Videos in Context for Telecommunication and Spatial Browsing
The research presented in this thesis explores the use of videos embedded in panoramic imagery to transmit spatial and temporal information describing remote environments and their dynamics. Virtual environments (VEs) through which users can explore remote locations are rapidly emerging as a popular medium for presence and remote collaboration. However, capturing a visual representation of locations to be used in VEs is usually a tedious process that requires either manual modelling of environments or the use of specific hardware. Capturing environment dynamics is not straightforward either, and is usually performed with dedicated tracking hardware. Similarly, browsing large unstructured video collections with available tools is difficult, as the abundance of spatial and temporal information makes them hard to comprehend. On a spectrum between 3D VEs and 2D images, panoramas lie in between: they offer the accessibility of 2D images while preserving the surrounding spatial representation of 3D virtual environments. For this reason, panoramas are an attractive basis for videoconferencing and browsing tools, as they can relate several videos temporally and spatially. This research explores methods to acquire, fuse, render and stream data coming from heterogeneous cameras with the help of panoramic imagery. Three distinct but interrelated questions are addressed. First, the thesis considers how spatially localised video can be used to increase the spatial information transmitted during video-mediated communication, and whether this improves the quality of communication. Second, the research asks whether videos in panoramic context can be used to convey the spatial and temporal information of a remote place and the dynamics within it, and whether this improves users' performance in tasks that require spatio-temporal thinking. Finally, the thesis considers whether the display type has an impact on reasoning about events within videos in panoramic context.
These research questions were investigated over three experiments covering scenarios common to computer-supported cooperative work and video browsing. To support the investigation, two distinct video+context systems were developed. The first, telecommunication experiment compared our videos-in-context interface with fully panoramic video and conventional webcam video conferencing in an object-placement scenario. The second experiment investigated the impact of videos in panoramic context on the quality of spatio-temporal thinking during localisation tasks. To support the experiment, a novel interface to video collections in panoramic context was developed and compared with common video-browsing tools. The final experimental study investigated the impact of display type on reasoning about events, exploring three adaptations of our video-collection interface to three display types. The overall conclusion is that videos in panoramic context offer a valid solution for the spatio-temporal exploration of remote locations. Our approach presents a richer visual representation in terms of space and time than standard tools, showing that providing panoramic context to video collections makes spatio-temporal tasks easier. To this end, videos in context are a suitable alternative to more difficult, and often more expensive, solutions. These findings are beneficial to many applications, including teleconferencing, virtual tourism and remote assistance.
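The core geometric step behind embedding video in panoramic imagery is mapping a viewing direction to a position on the panorama. The sketch below shows the standard equirectangular (longitude/latitude) mapping; it is a generic textbook projection, not code from the thesis.

```python
import math

def direction_to_equirect(x, y, z, width, height):
    """Map a 3D view direction to pixel coordinates on an equirectangular
    panorama. +z is 'forward', +y is 'up'; the direction need not be
    normalised. Standard longitude/latitude mapping, not thesis code."""
    lon = math.atan2(x, z)                                 # [-pi, pi]
    lat = math.asin(y / math.sqrt(x * x + y * y + z * z))  # [-pi/2, pi/2]
    u = (lon / (2 * math.pi) + 0.5) * width   # 0 .. width, wraps at edges
    v = (0.5 - lat / math.pi) * height        # 0 at the top of the panorama
    return u, v
```

With this mapping, a video frame captured by a calibrated camera can be anchored at the panorama pixel corresponding to the camera's viewing direction, which is how videos and their panoramic context can be registered to each other.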
Quais as potencialidades da criação de um videoclip em Realidade Virtual? (What are the potentialities of creating a music video in Virtual Reality?)
The focus of this dissertation is the potential of creating a music video in Virtual Reality (VR). The theoretical framework analyses what music videos are, what VR is and, more specifically, what music videos in VR are, a topic that has received little academic research and for which this dissertation provides a basis for future work. A music video for VR was made for this dissertation in order to better understand the potentialities of the medium. The music video was produced following the conventional phases of audiovisual production. User tests were then conducted on this music video in order to evaluate its characteristics. The tests were run with 3 independent samples, one per viewing format: desktop 360-2D, mobile 360-2D, and VR on a head-mounted display (HMD). This was done to better understand the potentialities of VR by comparing it with the other media. Each participant answered a pre-test questionnaire and a post-test questionnaire. The analysis of the results showed that although VR offers the added benefit of immersion, it also poses its own challenges and requires a completely new approach to music video making. (Master's in Multimedia Communication)
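With three independent samples, one typical way to test whether the viewing format affected questionnaire ratings is a Kruskal-Wallis test. The sketch below computes the H statistic for tie-free data; it is a generic illustration (the dissertation does not state which test was used), and real analyses should prefer scipy.stats.kruskal.

```python
def kruskal_h(*groups):
    """Kruskal-Wallis H statistic for k independent samples, assuming no
    tied values (no tie correction). Illustrative only; in practice use
    scipy.stats.kruskal, which also returns a p-value."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # ranks 1..N, no ties
    n = len(pooled)
    rank_sum_term = sum(
        sum(rank[v] for v in g) ** 2 / len(g) for g in groups
    )
    return 12 / (n * (n + 1)) * rank_sum_term - 3 * (n + 1)
```

A large H indicates that at least one viewing condition's ratings differ systematically from the others; H near zero indicates the three samples are ranked similarly.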
Image Retrieval within Augmented Reality
This thesis investigates the potential of augmented reality for improving the image retrieval process. Design and usability challenges were identified for both fields of research and used to formulate design goals for the development of concepts. A taxonomy for image retrieval within augmented reality was elaborated based on the research and used to structure related work and general ideas for interaction. Based on the taxonomy, application scenarios were formulated as further requirements for the concepts. Using the general interaction ideas and the requirements, two comprehensive concepts for image retrieval within augmented reality were elaborated. One of the concepts was implemented on a Microsoft HoloLens and evaluated in a user study. The study shows that the concept was received positively overall, and it provides insight into the different spatial behaviours and search strategies users exhibit when performing image retrieval in augmented reality.
1 Introduction
1.1 Motivation and Problem Statement
1.1.1 Augmented Reality and Head-Mounted Displays
1.1.2 Image Retrieval
1.1.3 Image Retrieval within Augmented Reality
1.2 Thesis Structure
2 Foundations of Image Retrieval and Augmented Reality
2.1 Foundations of Image Retrieval
2.1.1 Definition of Image Retrieval
2.1.2 Classification of Image Retrieval Systems
2.1.3 Design and Usability in Image Retrieval
2.2 Foundations of Augmented Reality
2.2.1 Definition of Augmented Reality
2.2.2 Augmented Reality Design and Usability
2.3 Taxonomy for Image Retrieval within Augmented Reality
2.3.1 Session Parameters
2.3.2 Interaction Process
2.3.3 Summary of the Taxonomy
3 Concepts for Image Retrieval within Augmented Reality
3.1 Related Work
3.1.1 Natural Query Specification
3.1.2 Situated Result Visualization
3.1.3 3D Result Interaction
3.1.4 Summary of Related Work
3.2 Basic Interaction Concepts for Image Retrieval in Augmented Reality
3.2.1 Natural Query Specification
3.2.2 Situated Result Visualization
3.2.3 3D Result Interaction
3.3 Requirements for Comprehensive Concepts
3.3.1 Design Goals
3.3.2 Application Scenarios
3.4 Comprehensive Concepts
3.4.1 Tangible Query Workbench
3.4.2 Situated Photograph Queries
3.4.3 Conformance of Concept Requirements
4 Prototypic Implementation of Situated Photograph Queries
4.1 Implementation Design
4.1.1 Implementation Process
4.1.2 Structure of the Implementation
4.2 Developer and User Manual
4.2.1 Setup of the Prototype
4.2.2 Usage of the Prototype
4.3 Discussion of the Prototype
5 Evaluation of Prototype and Concept by User Study
5.1 Design of the User Study
5.1.1 Usability Testing
5.1.2 Questionnaire
5.2 Results
5.2.1 Logging of User Behavior
5.2.2 Rating through Likert Scales
5.2.3 Free Text Answers and Remarks during the Study
5.2.4 Observations during the Study
5.2.5 Discussion of Results
6 Conclusion
6.1 Summary of the Present Work
6.2 Outlook on Further Work
Practical, appropriate, empirically-validated guidelines for designing educational games
There has recently been a great deal of interest in the potential of computer games to function as innovative educational tools. However, there is very little evidence of games fulfilling that potential. Indeed, the process of merging the disparate goals of education and games design appears problematic, and there are currently no practical guidelines for how to do so in a coherent manner. In this paper, we describe the successful, empirically validated teaching methods developed by behavioural psychologists and point out how they are uniquely suited to take advantage of the benefits that games offer to education. We conclude by proposing some practical steps for designing educational games, based on the techniques of Applied Behaviour Analysis. It is intended that this paper can both focus educational games designers on the features of games that are genuinely useful for education, and also introduce a successful form of teaching with which this audience may not yet be familiar.
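One concrete mechanic in the spirit of Applied Behaviour Analysis is mastery-based progression: the learner advances only when recent performance meets a criterion. The sketch below is a hypothetical illustration of such a rule; the window size and criterion are invented parameters, not taken from the paper.

```python
def should_advance(recent_results, window=10, criterion=0.8):
    """Mastery-based progression: advance the learner to the next level
    only once the success rate over the last `window` attempts reaches
    the mastery criterion. Parameters are illustrative, not from the paper."""
    if len(recent_results) < window:
        return False  # not enough evidence of mastery yet
    recent = recent_results[-window:]
    return sum(recent) / window >= criterion
```

A game loop would record each attempt as 1 (success) or 0 (failure) and call this check before unlocking the next level, so difficulty tracks demonstrated competence rather than elapsed time.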
Human-Computer Interaction
In this book the reader will find a collection of 31 papers presenting different facets of Human-Computer Interaction: the results of research projects and experiments, as well as new approaches to designing user interfaces. The book is organised according to the following main topics, in sequential order: new interaction paradigms, multimodality, usability studies of several interaction mechanisms, human factors, universal design, and development methodologies and tools.
컴퓨터를 활용한 여러 사람의 동작 연출 (Choreographing the Motion of Multiple Actors Using Computers)
Doctoral dissertation, Department of Electrical and Computer Engineering, College of Engineering, Seoul National University, August 2017. Supervised by Jehee Lee.
Choreographing motion is the process of converting written stories or messages into the real movements of actors. In performances or film, directors spend considerable time and effort on it because it is the primary factor on which audiences concentrate. If multiple actors are present in the scene, choreography becomes more challenging. The fundamental difficulty is that the coordination between actors must be adjusted precisely: spatio-temporal coordination is the first requirement that must be satisfied, and causality and mood are other important forms of coordination. Directors use assistant tools such as storyboards or roughly crafted 3D animations, which can visualize the flow of movements, to organize ideas or to explain them to actors. However, these tools are difficult to use, since artistry and considerable training are required; they cannot offer suggestions or feedback; and the amount of manual labor increases exponentially as the number of actors increases.
In this thesis, we propose computational approaches to choreographing the motion of multiple actors. The ultimate goal is to enable novice users to generate motions for multiple actors easily, without substantial effort. We first show an approach to generating motions for shadow theatre, where actors must carefully collaborate to achieve a common goal; the results are comparable to those created by professional actors. Next, we present an interactive animation system for pre-visualization, in which users exploit an intuitive graphical interface for scene description. Given a description, the system can generate motions for the characters in the scene that match it. Finally, we propose two controller designs (combining regression with trajectory optimization, and evolutionary deep reinforcement learning) for physically simulated actors, which guarantee the physical validity of the resulting motions.
Chapter 1 Introduction
Chapter 2 Background
2.1 Motion Generation Technique
2.1.1 Motion Editing and Synthesis for Single-Character
2.1.2 Motion Editing and Synthesis for Multi-Character
2.1.3 Motion Planning
2.1.4 Motion Control by Reinforcement Learning
2.1.5 Pose/Motion Estimation from Incomplete Information
2.1.6 Diversity on Resultant Motions
2.2 Authoring System
2.2.1 System using High-level Input
2.2.2 User-interactive System
2.3 Shadow Theatre
2.3.1 Shadow Generation
2.3.2 Shadow for Artistic Purpose
2.3.3 Viewing Shadow Theatre as Collages/Mosaics of People
2.4 Physics-based Controller Design
2.4.1 Controllers for Various Characters
2.4.2 Trajectory Optimization
2.4.3 Sampling-based Optimization
2.4.4 Model-Based Controller Design
2.4.5 Direct Policy Learning
2.4.6 Deep Reinforcement Learning for Control
Chapter 3 Motion Generation for Shadow Theatre
3.1 Overview
3.2 Shadow Theatre Problem
3.2.1 Problem Definition
3.2.2 Approaches of Professional Actors
3.3 Discovery of Principal Poses
3.3.1 Optimization Formulation
3.3.2 Optimization Algorithm
3.4 Animating Principal Poses
3.4.1 Initial Configuration
3.4.2 Optimization for Motion Generation
3.5 Experimental Results
3.5.1 Implementation Details
3.5.2 Animation
3.5.3 3D Fabrication
3.6 Discussion
Chapter 4 Interactive Animation System for Pre-visualization
4.1 Overview
4.2 Graphical Scene Description
4.3 Candidate Scene Generation
4.3.1 Connecting Paths
4.3.2 Motion Cascade
4.3.3 Motion Selection For Each Cycle
4.3.4 Cycle Ordering
4.3.5 Generalized Paths and Cycles
4.3.6 Motion Editing
4.4 Scene Ranking
4.4.1 Ranking Criteria
4.4.2 Scene Ranking Measures
4.5 Scene Refinement
4.6 Experimental Results
4.7 Discussion
Chapter 5 Physics-based Design and Control
5.1 Overview
5.2 Combining Regression with Trajectory Optimization
5.2.1 Simulation and Motor Skills
5.2.2 Control Adaptation
5.2.3 Control Parameterization
5.2.4 Efficient Construction
5.2.5 Experimental Results
5.2.6 Discussion
5.3 Example-Guided Control by Deep Reinforcement Learning
5.3.1 System Overview
5.3.2 Initial Policy Construction
5.3.3 Evolutionary Deep Q-Learning
5.3.4 Experimental Results
5.3.5 Discussion
Chapter 6 Conclusion
6.1 Contribution
6.2 Future Work
요약 (Abstract in Korean)
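The controllers in Chapter 5 build on the Q-learning family. As a self-contained illustration of the underlying idea (a toy example, unrelated to the thesis implementation), the sketch below runs tabular Q-learning on a small chain where reward is given only at the final state.

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.5, gamma=0.9,
                     eps=0.2, seed=0):
    """Tabular Q-learning on a toy chain: start at state 0, actions are
    'left' (0) and 'right' (1), and reward 1 is given on reaching the
    last state. A minimal illustration of the Q-learning family on which
    deep variants build; not related to the thesis code."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            if rng.random() < eps:                  # explore
                a = rng.randrange(2)
            else:                                   # exploit (ties go right)
                a = 1 if q[s][1] >= q[s][0] else 0
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # standard temporal-difference update toward r + gamma * max Q(s')
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the greedy policy moves right from every non-terminal state, and the value of the action entering the goal converges to the reward of 1; deep Q-learning replaces the table with a neural network so the same update can scale to high-dimensional states such as simulated character poses.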