18 research outputs found

    Gaze Guidance, Task-Based Eye Movement Prediction, and Real-World Task Inference using Eye Tracking

    The ability to predict and guide viewer attention has important applications in computer graphics, image understanding, object detection, visual search, and training. Human eye movements provide insight into the cognitive processes involved in task performance, and there has been extensive research on what factors guide viewer attention in a scene. It has been shown, for example, that image saliency, scene context, and the task at hand play significant roles in guiding attention. This dissertation presents and discusses research on visual attention, with specific focus on the use of subtle visual cues to guide viewer gaze and on the development of algorithms to predict the distribution of gaze about a scene. Specific contributions of this work include: a framework for gaze guidance to enable problem solving and spatial learning, a novel algorithm for task-based eye movement prediction, and a system for real-world task inference using eye tracking. A gaze guidance approach is presented that combines eye tracking with subtle image-space modulations to guide viewer gaze about a scene. Several experiments were conducted using this approach to examine its impact on short-term spatial information recall, task sequencing, training, and password recollection. A model of human visual attention prediction that uses saliency maps, scene feature maps, and task-based eye movements to predict regions of interest was also developed. This model was used to automatically select target regions for active gaze guidance to improve search task performance. Finally, a framework for inferring real-world tasks from image features and eye movement data is developed. Overall, this dissertation leads to an overarching framework that combines all three contributions into a continuous feedback system for improving performance on repeated visual search tasks. This research has important applications in data visualization, problem solving, training, and online education.
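    The attention-prediction model described above combines saliency maps, scene feature maps, and task-based eye movement statistics into predicted regions of interest. The abstract gives no implementation details; as a minimal illustrative sketch (the linear weighting scheme and all parameter values below are assumptions, not the dissertation's actual method), such maps might be blended into a single gaze-probability map:

```python
import numpy as np

def combine_attention_maps(saliency, scene_features, task_prior,
                           weights=(0.4, 0.3, 0.3)):
    """Blend normalized maps into one gaze-prediction map.

    All inputs are 2-D arrays of equal shape; the weights are
    illustrative placeholders, not values from the dissertation.
    """
    combined = np.zeros_like(saliency, dtype=float)
    for w, m in zip(weights, (saliency, scene_features, task_prior)):
        m = m.astype(float)
        rng = m.max() - m.min()
        # Min-max normalize each map so no single map dominates by scale.
        norm = (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
        combined += w * norm
    return combined / combined.sum()  # probability map over the scene
```

The argmax of such a combined map could then serve as a candidate target region for active gaze guidance, in the spirit of the search-task application the abstract describes.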

    Recent Advances in Signal Processing

    Signal processing is a critical element of most new technological developments and presents challenges in a wide variety of applications across science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, favoring closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas depending on the application at hand: image processing, speech processing, communication systems, time-series analysis, and educational packages, in that order. The book has the advantage of providing a collection of applications that are completely independent and self-contained, so the interested reader can choose any chapter and skip to another without losing continuity.

    The Future of Humanoid Robots

    This book provides state-of-the-art scientific and engineering research findings and developments in the field of humanoid robotics and its applications. It is expected that humanoids will change the way we interact with machines and will have the ability to blend seamlessly into environments already designed for humans. The book contains chapters that aim to discover the future abilities of humanoid robots by presenting a variety of integrated research in various scientific and engineering fields, such as locomotion, perception, adaptive behavior, human-robot interaction, neuroscience, and machine learning. The book is designed to be accessible and practical, with an emphasis on information useful to those working in robotics, cognitive science, artificial intelligence, computational methods, and other fields directly or indirectly related to the development and use of future humanoid robots. The editor has extensive R&D experience, patents, and publications in the area of humanoid robotics, and this experience is reflected in the editing of the book's content.

    Eye movements in dynamic environments

    The capabilities of the visual system and the biological mechanisms controlling its active nature are still unequaled by modern technology. Despite the spatial and temporal complexity of our environment, we succeed in tasks that demand extracting relevant information from complex, ambiguous, and noisy sensory data. Dynamically distributing visual attention across multiple targets is an important part of this: in many situations, for example when driving a vehicle, success requires switching focus between several targets (e.g., the road ahead, mirrors, control panels). This is further complicated by the fact that most information gathered during active gaze is highly dynamic (e.g., other vehicles on the street, changes in road direction). Hence, while looking at one of the targets, the uncertainty regarding the others increases. Crucially, we manage this despite omnipresent stochastic changes in our surroundings. The mechanisms by which the brain schedules our visual system to access the information we need exactly when we need it are far from understood. In a dynamic world, humans not only have to decide where to look but also when to direct their gaze to potentially informative locations in the visual scene. Our foveated visual apparatus can gather high-resolution information only within a limited area of the visual field; as a consequence, in a changing environment, we constantly and inevitably lose information about the locations not currently brought into focus. Little is known about how the timing of eye movements is related to environmental regularities and how gaze strategies are learned. This is due to three main reasons: First, to relate the scheduling of eye movements to stochastic environmental dynamics, we need access to those statistics; however, they are usually unknown. Second, to apply the powerful framework of statistical learning theory, we require knowledge of the subject's current goals.
    During everyday tasks, the goal structure can be complex and multi-dimensional, and it is only partially accessible. Third, the computational problem is, in general, intractable: it usually involves learning sequences of eye movements, rather than a single action, from delayed rewards under temporal and spatial uncertainty that is further amplified by dynamic changes in the environment. In the present thesis, we propose an experimental paradigm specifically designed to target these problems: First, we use simple stimuli with reduced spatial complexity and controlled stochastic behavior. Second, we give subjects explicit task instructions. Finally, the temporal and spatial statistics are designed in a way that significantly simplifies computation and makes it possible to infer several human properties from the action sequences while still using normative models of behavior. We present results from four different studies that show how this approach can be used to gain insights into the temporal structure of human gaze selection. In a controlled setting in which the crucial quantities are known, we show how environmental dynamics are learned and used to control several components of the visual apparatus by properly scheduling the time course of actions. First, we investigated how endogenous eye blinks are controlled in the presence of nonstationary environmental demands. Eye blinks are linked to dopamine and have therefore been used as a behavioral marker for many internal cognitive processes; they also introduce gaps in the stream of visual information. Empirical results had suggested that 1) blinking behavior is affected by the current activity and 2) it is highly variable between participants. We present a computational approach that quantifies the relationship between blinking behavior and environmental demands. In our psychophysical experiment, we show that blinking results from a trade-off between task demands and the internal urge to blink.
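    The abstract characterizes blinking as a trade-off between task demands and the internal urge to blink. As a hedged illustration only (the logistic functional form and all parameters below are invented, not the thesis's actual model), such a trade-off could be sketched as a competition between an urge that grows with time since the last blink and the expected cost of missing task-relevant information:

```python
import math

def blink_probability(t_since_blink, miss_cost, urge_rate=0.5, cost_weight=1.0):
    """Hypothetical blink-timing trade-off.

    The urge to blink grows linearly with time since the last blink,
    while the expected cost of missing a task-relevant event suppresses
    blinking; a logistic function maps their difference to a probability.
    All parameter values are illustrative assumptions.
    """
    urge = urge_rate * t_since_blink
    return 1.0 / (1.0 + math.exp(-(urge - cost_weight * miss_cost)))
```

Under this sketch, blink probability rises as the interblink interval lengthens and falls when the task makes missed information costly, which is the qualitative pattern the abstract describes.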
    Crucially, we can predict the temporal dynamics of blinking (i.e., the distribution of interblink intervals) for individual blinking patterns. Second, we present behavioral data establishing that humans learn to adjust the timing of their eye movements efficiently: more time is spent at locations where meaningful events are short and therefore easily missed. Our computational model further shows how several properties of the visual system determine the timing of gaze. We present a Bayesian learner that fully explains how eye movement patterns change as the event statistics are learned. Thus, humans use temporal regularities learned from observations to adjust the scheduling of eye movements in a nearly optimal way. This is a first computational account of how eye movements are scheduled in natural behavior. After establishing the connection between temporal eye movement dynamics, reward in the form of task performance, and physiological costs for saccades and endogenous eye blinks, we applied our paradigm to study the variability of temporal eye movement sequences within and across subjects. The experimental design facilitates analyzing the temporal structure of eye movements with full knowledge of the statistics of the environment. Hence, we can quantify the internal beliefs about task-relevant properties and can further study how they, in combination with physiological costs, contribute to the variability of gaze sequences. Crucially, we developed a visual monitoring task in which a subject is confronted with the same stimulus dynamics multiple times while learning effects are kept to a minimum. Hence, we are able to compute the variability not only between subjects but also across trials of the same subject. We present behavioral data and results from our computational model showing how the variability of eye movement sequences is related to task properties.
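    The Bayesian learner mentioned above is not specified in the abstract. One standard way to learn event-duration statistics from observations, consistent with the idea that locations with shorter events warrant more monitoring time, is a conjugate Gamma update for exponentially distributed durations; the sketch below rests on that assumption and is not the thesis's actual model:

```python
import numpy as np

def update_gamma_posterior(alpha, beta, observed_durations):
    """Conjugate update for exponentially distributed event durations.

    With a Gamma(alpha, beta) prior on the event rate lambda, observing
    n durations yields the posterior Gamma(alpha + n, beta + sum(durations)).
    """
    durations = np.asarray(observed_durations, dtype=float)
    return alpha + durations.size, beta + durations.sum()

def expected_duration(alpha, beta):
    """Posterior-mean event duration, E[1/lambda] = beta / (alpha - 1)."""
    return beta / (alpha - 1)  # requires alpha > 1
```

A scheduler built on this would allocate more dwell time to locations whose posterior expected event duration is small, since short events are the ones most easily missed.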
    Having access to the subjects' reward structure, we are able to show how expected rewards influence the variance of visual behavior. Finally, we studied the computational properties underlying the control of eye movement sequences in a visual search task. In particular, we investigated whether eye movements are planned. Research in psychology has shown only that sequences of multiple eye movements are jointly prepared as a scanpath. Here we examine whether humans are capable of finding the optimal scanpath even when it requires incorporating more than just the next eye movement into the decision. For a visual search task, we derive an ideal observer as well as an ideal planner based on the framework of partially observable Markov decision processes (POMDPs). The former always takes the action associated with the maximum immediate reward, while the latter maximizes the total sum of rewards over the whole action sequence. We show that, depending on the shape of the search region, the ideal planner and the ideal observer lead to different scanpaths. Following this paradigm, we found evidence that humans are indeed capable of planning scanpaths: the ideal planner explained our subjects' behavior better than the ideal observer. In particular, the location of the first fixation differed depending on the shape and the time available for the search, a characteristic well predicted by the ideal planner but not by the ideal observer. Overall, our results are the first evidence that the visual system is capable of taking into account future consequences beyond the immediate reward when choosing the next fixation target. In summary, this thesis proposes an experimental paradigm that enables us to study the temporal structure of eye movements in dynamic environments. While approaching this computationally is generally intractable, we reduce the complexity of the stimuli along dimensions that do not contribute to the temporal effects.
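    The contrast between the ideal observer (greedy, maximum immediate reward) and the ideal planner (maximizing the summed reward of the whole scanpath) can be made concrete with a toy example; all locations, information gains, and saccade costs below are invented for illustration and deliberately chosen so that the two strategies disagree, as they do in the thesis's search shapes:

```python
from itertools import permutations

# Toy search display: hypothetical information gain per location and
# symmetric saccade costs between locations (all numbers invented).
GAIN = {"A": 5.0, "B": 4.9, "C": 4.9}
COST = {("S", "A"): 0.0, ("S", "B"): 0.2, ("S", "C"): 0.2,
        ("A", "B"): 5.0, ("A", "C"): 5.1, ("B", "C"): 0.1}

def saccade_cost(a, b):
    """Look up the symmetric saccade cost between two locations."""
    return COST[(a, b)] if (a, b) in COST else COST[(b, a)]

def path_reward(path, start="S"):
    """Total reward of a scanpath: information gains minus saccade costs."""
    total, pos = 0.0, start
    for loc in path:
        total += GAIN[loc] - saccade_cost(pos, loc)
        pos = loc
    return total

def ideal_observer(n_fixations, start="S"):
    """Greedy: always fixate the location with maximal immediate net reward."""
    path, pos, remaining = [], start, sorted(GAIN)
    for _ in range(n_fixations):
        nxt = max(remaining, key=lambda loc: GAIN[loc] - saccade_cost(pos, loc))
        path.append(nxt)
        remaining.remove(nxt)
        pos = nxt
    return path

def ideal_planner(n_fixations, start="S"):
    """Planner: exhaustively maximize the summed reward of the whole scanpath."""
    best = max(permutations(sorted(GAIN), n_fixations),
               key=lambda p: path_reward(p, start))
    return list(best)
```

In this toy display the greedy observer grabs the highest immediate gain at A and then pays a large saccade cost, while the planner accepts a slightly lower first gain at B to reach C cheaply and earns a higher total; the resulting difference in the first fixation mirrors the signature the abstract reports.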
    As a consequence, we can collect eye movement data in tasks with a rich temporal structure while being able to compute the internal beliefs of our subjects in a way that is not possible for natural stimuli. We present four different studies that show how this paradigm can lead to new insights into several properties of the visual system. Our findings have several implications for future work: First, we established several factors that play a crucial role in the generation of gaze behavior and must be accounted for when describing the temporal dynamics of eye movements. Second, future models of eye movements should take into account that delayed rewards can affect behavior. Third, the relationship between behavioral variability and properties of the reward structure is not limited to eye movements; it is a general prediction of the computational framework, so future work can use this approach to study the variability of various other actions. Our computational models have applications in state-of-the-art technology. For example, blink rates are already used in vigilance systems for drivers; our computational model describes the temporal statistics of blinking behavior beyond simple blink rates and also accounts for interindividual differences in eye physiology. Using algorithms that can deal with natural images, e.g., deep neural networks, the environmental statistics can be extracted, and our models can then be used to predict eye movements in everyday situations such as driving a vehicle.

    Drawing, Handwriting Processing Analysis: New Advances and Challenges

    Drawing and handwriting are communication skills that have been fundamental to geopolitical, ideological, and technological developments throughout history, and they remain useful for defining innovative applications in numerous fields. In this regard, researchers must solve new problems, such as those related to how drawing and handwriting can become an efficient way to command various connected objects, or to validating graphomotor skills as objective sources of data for studying human beings, their capabilities, and their limits from birth to decline.

    Konzepte und Guidelines für Applikationen in Cinematic Virtual Reality

    Most people who watch an omnidirectional film for the first time on a head-mounted display (HMD) are fascinated by the new world of experience. The feeling of being in a different place, right in the middle of the action and far from reality, impresses viewers and lets them immerse themselves in another world. The film language developed over decades cannot simply be transferred to this new medium, Cinematic Virtual Reality (CVR). The viewer can freely choose the viewing direction, and thus the visible section of the picture, so it is not always possible to show the viewer what is important for the story. Traditional methods of directing attention, such as close-ups or zooms, are not readily usable; others, such as movement and color, need to be evaluated and adapted. To find new concepts and methods for CVR, research results not only from film but also from other fields, such as virtual and augmented reality (VR and AR), are relevant. To identify suitable techniques for guiding attention in CVR, this work analyzes known methods from film, VR, and AR and presents a unified taxonomy, which makes it possible to examine the various aspects in more detail. The positioning of the camera also cannot simply be transferred from traditional film to CVR. When watching a CVR application, the viewer takes the position of the camera in the virtual world. This can lead to problems if the camera height does not match the viewer's own height. Moreover, resolving a scene through different shot sizes is not readily possible, as it would mean that the viewer jumps around in the virtual world. This work examines the effects of different camera positions on the viewer and presents guidelines for camera positioning. 
    The added spatial component offers new possibilities. Cuts need not depend on elapsed time alone; they can also be based on the viewer's gaze direction. By analogy with the term timeline, we introduce the concept of the spaceline for this method of story construction. While the cuts on the timeline are determined by the filmmaker, the viewer determines the spaceline, within a construct defined by the filmmaker. Through this individual viewer guidance, everyone can discover their own story at their own pace and with their own priorities. The spaceline concept offers new interaction options that can be implemented using various selection techniques. To find techniques suitable for CVR, this work examines gaze- and head-based approaches. Although their effectiveness depends strongly on the chosen parameters and on physiological factors, valuable insights were gained, which feed into a design space for spaceline constructs. This design space makes it possible, when designing a CVR application, to find the attributes best suited to that application. But creating CVR applications is not the only new challenge. The HMD isolates a viewer from the rest of the world, and new methods are needed to make CVR a social experience; some of these are presented and analyzed in this work, and recommendations for a CVR movie player are derived from the experience gained. To develop the concepts and guidelines presented here, several user studies were conducted, some of them recording head and gaze directions. To analyze these data, a tool was developed that visualizes the data on the film. 
    This work presents concepts and guidelines for several fields of Cinematic Virtual Reality: attention guidance, camera positioning, montage, audience experience, and data analysis. In each of these areas, insights were gained that are also of interest to the other fields; the findings are often related and complement each other. The aim of this work is to present the various aspects as a whole.
