    Synchronized Illumination Modulation for Digital Video Compositing

    Information exchange is one of the basic human needs. While wall paintings, handwriting, letterpress printing, and painting served this purpose early on, people later began to create image sequences that conveyed the impression of animation as so-called flip books. These were soon automated using rotating picture discs on which an animation became visible through slit apertures, mirrors, or optics: the so-called phenakistiscopes, zoetropes, and praxinoscopes. With the invention of photography, the first scientists, such as Eadweard Muybridge, Etienne-Jules Marey, and Ottomar Anschütz, began in the second half of the 19th century to capture serial photographs and play them back in rapid succession as film. With the beginning of film production came the first attempts to use this new technology to generate special visual effects and thereby further increase the immersion of moving-image productions. While these effects remained quite limited during the analog phase of film production, up to the 1980s, and had to be created laboriously with enormous manual effort, they gained ever greater importance with the rapidly accelerating development of semiconductor technology and the simplified digital processing it enabled. The enormous possibilities opened up by lossless post-processing in combination with photorealistic three-dimensional renderings have led to virtually every film produced today containing a variety of digital video compositing effects. ...Besides home entertainment and business presentations, video projectors are powerful tools for modulating images spatially as well as temporally. The re-evolving need for stereoscopic displays increases the demand for low-latency projectors, and recent advances in LED technology also offer high modulation frequencies. Combining such high-frequency illumination modules with synchronized, fast cameras makes it possible to develop specialized high-speed illumination systems for visual effects production. In this thesis we present different systems for using spatially as well as temporally modulated illumination in combination with a synchronized camera to simplify the requirements of standard digital video compositing techniques for film and television productions and to offer new possibilities for visual effects generation. After an overview of the basic terminology and a summary of related methods, we discuss and give examples of how modulated light can be applied in a scene-recording context to enable a variety of effects that cannot be realized using standard methods such as virtual studio technology or chroma keying. We propose using high-frequency, synchronized illumination which, in addition to providing illumination, is modulated in terms of intensity and wavelength to encode technical information for visual effects generation. This is carried out in such a way that the technical components do not influence the final composite and are not visible to observers on the film set. Using this approach we present a real-time flash keying system for the generation of perspectively correct augmented composites by projecting imperceptible markers for optical camera tracking. Furthermore, we present a system which enables the generation of various digital video compositing effects outside of completely controlled studio environments such as virtual studios. A third, temporal keying system is presented that aims to overcome the constraints of traditional chroma keying in terms of color spill and color dependency. ...
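    The temporal keying idea above can be made concrete with a minimal sketch. Assuming the coded illumination is toggled on alternating, camera-synchronized frames (the frame pairing, image format, and threshold below are illustrative assumptions, not details taken from the thesis), differencing a lit/unlit frame pair cancels the ambient light and isolates the regions reached by the modulated source:

```python
import numpy as np

def temporal_key(lit: np.ndarray, unlit: np.ndarray,
                 threshold: float = 0.05) -> np.ndarray:
    """Estimate a matte from one synchronized lit/unlit frame pair.

    lit, unlit: float RGB frames in [0, 1], captured on consecutive
    frames while the coded illumination is on and off, respectively.
    Returns a single-channel matte in [0, 1].
    """
    # Ambient lighting is (nearly) identical in both exposures and
    # cancels out; only the modulated light survives the difference.
    diff = np.abs(lit.astype(np.float32) - unlit.astype(np.float32))
    response = diff.mean(axis=-1)              # average over RGB
    return np.clip(response / threshold, 0.0, 1.0)
```

    Because the illumination alternates at (or above) the camera's frame rate, successive frames integrate to a constant for human observers, which matches the abstract's claim that the technical components stay invisible on set.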

    Technology in contemporary cinema and its impact on film production and reception

    This bachelor's thesis focuses on techniques used in the contemporary film industry. The first part deals with chroma keying, motion capture, and computer-generated imagery. The second part examines the impact of these techniques, considering not only their effect on film production but also how they shape the audience's reception of the film.

    2D–3D spatial registration for the remote inspection of power substations

    Remote inspection and supervisory control are critical features for smart factories, civilian surveillance, power systems, and other domains. To reduce decision-making time, operators must combine high situation awareness, which implies presenting a considerable amount of data, with minimal sensory load. Recent research suggests adopting computer vision techniques for automatic inspection, as well as virtual reality (VR) as an alternative to traditional SCADA interfaces. Nevertheless, although VR may provide a good representation of a substation's state, it lacks some real-time information available from online field cameras and microphones. Since these two sources of information (VR and field information) are not integrated into a single solution, we miss the opportunity of using VR as a SCADA-aware remote inspection tool during operation and disaster-response routines. This work discusses a method to augment virtual environments of power substations with field images, enabling operators to promptly see a virtual representation of the inspected area's surroundings. The resulting environment is integrated with an image-based state inference machine that continuously checks the inferred states against the ones reported by the SCADA database. Whenever a discrepancy is found, an alarm is triggered and the virtual camera can be immediately teleported to the affected region, speeding up system reestablishment. The solution is based on a client-server architecture and supports multiple cameras deployed across multiple substations. Our results concern the quality of the 2D–3D registration and the rendering framerate for a simple scenario. The collected quantitative metrics suggest good camera pose estimations and registrations, as well as an arguably optimal rendering framerate for substation equipment inspection.
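    The alarm-and-teleport loop described above can be sketched compactly. The state labels, equipment ids, and VirtualCamera interface below are hypothetical placeholders (the thesis does not publish this API); the sketch only shows the discrepancy check between image-inferred states and SCADA-reported ones:

```python
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    region: str = "overview"

    def teleport_to(self, region: str) -> None:
        # A real client would move the VR viewpoint; here we just record it.
        self.region = region
        print(f"camera teleported to {region}")

def check_discrepancies(inferred: dict, scada: dict,
                        camera: VirtualCamera) -> list:
    """Return equipment ids whose image-inferred state disagrees with SCADA."""
    alarms = []
    for equipment_id, scada_state in scada.items():
        inferred_state = inferred.get(equipment_id)
        if inferred_state is not None and inferred_state != scada_state:
            alarms.append(equipment_id)            # raise an alarm
            camera.teleport_to(equipment_id)       # jump to the affected region
    return alarms

# Example: the vision pipeline sees switch S2 open while SCADA says closed.
cam = VirtualCamera()
print(check_discrepancies({"S1": "closed", "S2": "open"},
                          {"S1": "closed", "S2": "closed"}, cam))
```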

    Proceedings of the 2nd European conference on disability, virtual reality and associated technologies (ECDVRAT 1998)

    The proceedings of the conference.

    Towards Real-time Mixed Reality Matting In Natural Scenes

    In Mixed Reality scenarios, background replacement is a common way to immerse a user in a synthetic environment. Properly identifying the background pixels in an image or video is a difficult problem known as matting. Proper alpha mattes usually come from human guidance, special hardware setups, or color-dependent algorithms. This is a consequence of the under-constrained nature of the per-pixel alpha blending equation. In constant color matting, research identifies and replaces a background that is a single color, known as the chroma key color. Unfortunately, these algorithms force a controlled physical environment and favor constant, uniform lighting. More generic approaches, such as natural image matting, have made progress finding alpha matte solutions in environments with naturally occurring backgrounds. However, even for the quicker algorithms, the generation of trimaps, indicating regions of known foreground and background pixels, normally requires human interaction or offline computation. This research addresses ways to automatically solve an alpha matte for an image, and by extension a video, in real time using a consumer-level GPU. It does so even in the context of noisy environments that yield less reliable constraints than those found in controlled settings. To attack these challenges, we are particularly interested in automatically generating trimaps from depth buffers for dynamic scenes so that algorithms requiring denser constraints may be used. The resulting computation is parallelizable so that it may run on a GPU, and it should work for natural images as well as chroma key backgrounds. Extra input may be required, but when this occurs, commodity hardware available in most Mixed Reality setups should be able to provide it. This allows us to provide real-time alpha mattes for Mixed Reality scenarios that take place in relatively controlled environments. As a consequence, while monochromatic backdrops (such as green screens or retro-reflective material) aid the algorithm's accuracy, they are not an explicit requirement. Finally, we explore a sub-image-based approach to parallelize an existing hierarchical approach on high-resolution imagery. We show that locality can be exploited to significantly reduce the memory and compute requirements previously necessary when computing alpha mattes of high-resolution images. We achieve this using a parallelizable scheme that is independent of both the matting algorithm and image features. Combined, these research topics provide a basis for Mixed Reality scenarios using real-time natural image matting on high-definition video sources.
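    The under-constrained equation referenced above is the per-pixel compositing identity C = αF + (1 − α)B: one observed color C must be split into foreground F, background B, and opacity α, which is why matting solvers need a trimap of known foreground, known background, and unknown pixels. A minimal sketch of the depth-buffer trimap idea follows, with illustrative near/far thresholds and label values that are assumptions rather than the dissertation's actual parameters:

```python
import numpy as np

# Conventional trimap labels: background, unknown, foreground.
BG, UNKNOWN, FG = 0, 128, 255

def trimap_from_depth(depth: np.ndarray, near: float = 0.3,
                      far: float = 0.7) -> np.ndarray:
    """Derive a trimap from a normalized depth buffer (0 = near, 1 = far).

    Pixels clearly closer than `near` become known foreground, pixels
    clearly farther than `far` become known background, and the band in
    between, where depth alone cannot decide, stays unknown for the
    matting solver to resolve.
    """
    trimap = np.full(depth.shape, UNKNOWN, dtype=np.uint8)
    trimap[depth < near] = FG
    trimap[depth > far] = BG
    return trimap

# Example: a synthetic left-to-right depth ramp yields the three bands.
depth = np.linspace(0.0, 1.0, 10, dtype=np.float32).reshape(1, -1)
print(trimap_from_depth(depth))
```

    Each pixel is classified independently of its neighbors, which is what makes this step trivially parallelizable on a GPU.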

    Eliciting Music Performance Anxiety of Vocal and Piano Students Through the Use of Virtual Reality

    Despite the growth of virtual reality technologies, there is a lack of understanding of how to implement these technologies within the collegiate classroom. This case study provides mixed-method insight into a virtual reality (VR) asset deployed in a music performance environment. The study examined the effectiveness of a virtual reality environment as measured by physiological response and user feedback. Ten voice and four piano college students participated in the study. Each participant performed musical works in an authentic practice room and in a virtual concert hall via a VR headset. Data were collected across four criteria. Participants' heart rates were recorded before and after the performances. A State-Trait Anxiety Inventory test was administered to participants before and after the performances. Each performance was recorded and then blindly evaluated by two licensed music adjudicators. After the performances, participants completed a self-evaluation. Results indicated that virtual concert hall sessions produced changes in some physiological, performance, and anxiety measures compared to the authentic practice room. No statistical difference in heart rate was recorded for vocalists between the two environments. This project serves as a proof of concept that VR technologies can effectively elicit change in music performance anxiety. Furthermore, the study could encourage further research on mitigating music performance anxiety through virtual environment exposure.

    Narratives of ocular experience in interactive 360° environments

    Get PDF
    The purpose of this research project was to examine how immersive digital virtual technologies have the potential to expand the genre of interactive film into new forms of audience engagement and narrative production. Aside from addressing the limitations of interactive film, I have explored how interactive digital narratives can be reconfigured in the wake of immersive media. My contribution to knowledge stems from a transdisciplinary synthesis of the interactive systems in film and digital media art, which is embodied in the research framework and theoretical focal point that I have titled Cynematics (chapter 2). Using a methodology that promotes iterative experimentation, I developed a series of works that allowed me to practically explore the limitations of interactive film systems involving non-haptic user interaction. This is evidenced in the following series of works: Virtual Embodiment, Narrative Maze, Eye Artefact Interactions and Routine Error, all of which are discussed in chapter 4 of this thesis. Each of these lab experiments collectively builds towards the development of novel interactive 360° film practices. Funneling my research towards these underexplored processes, I focused on virtual gaze interaction (chapters 4–6), aiming to define and historically contextualise this system of interaction whilst critically engaging with it through my practice. It is here that gaze interaction is cemented as the key focus of this thesis. The potential of interactive 360° film is explored through the creation of three core pieces of practice, titled as follows: Systems of Seeing (chapter 5), Mimesis (chapter 6) and Vanishing Point (chapter 7). Alongside the close readings and theoretical developments explored in these chapters are the interaction designs included in the appendix of the thesis; these provide useful context for readers unable to experience these site-specific installations as virtual reality applications. After creating these systems, I established terms to theoretically unpack some of the processes occurring within them. These include Datascape Mediation (chapter 2), which frames agency as a complex entanglement built on the constantly evolving relationships between human and machine, and Live-Editing Practice (chapter 7), which aims to elucidate how the interactive 360° film practice designed for this research leads to new ways of thinking about how we design, shoot and interact with 360° film. Reflecting on feedback from exhibiting Mimesis, I decided to define and evaluate the key modes of virtual gaze interaction, which led to the development of a chapter and concept referred to as The Reticle Effect (chapter 6). This refers to how a visual overlay used to represent a user's line of sight not only shapes their experience of the work but also dictates their perception of genre. To navigate this, I combined qualitative and quantitative analysis to explore user responses to four different types of gaze interaction. In preparing to collect this data I had to articulate and demarcate the differences between these types of gaze interaction. Stemming from this, I used questionnaires, thematic analysis and data visualisation to explore the use of and response to these systems. The results not only support the idea of the reticle effect but also give insight into how these different types of virtual gaze interaction shape whether these works are viewed as games or as types of interactive film. This allowed me to further expand on interactive 360° film as a genre of immersive media and move beyond the realm of interactive film into new technological discourses, validating the nascent yet expansive reach of interactive 360° film as a form of practice. The thesis concludes by framing this research within the wider discourse of posthuman theory: given that the technologies of immersive media perpetuate a state of extended human experience, the ways we interact with these mediums, and the theories that surround them, must be considered in the same light. The practice and theory developed throughout this thesis contribute to this discourse and allow for new ways of considering filmic language in the wake of interactive 360° film practice.

    Cinematic Experiments
