6 research outputs found

    ipProjector: Designs and Techniques for Geometry-Based Interactive Applications Using a Portable Projector

    We propose an interactive projection system for a virtual studio setup that uses a single self-contained, portable projection device. The system is named ipProjector, short for Interactive Portable Projector. Projection allows the special effects of a virtual studio to be seen by live audiences in real time. The portable device supports 360-degree shooting and projection angles and is easy to integrate into an existing studio setup. We focus on two fundamental requirements of the system and their implementations. First, nonintrusive projection ensures that special-effect projection and environment analysis (for locating the target actors or objects) can be performed simultaneously in real time. Our approach uses Digital Light Processing technology, color-wheel analysis, and a nearest-neighbor search algorithm. Second, the paired projector-camera system is geometrically calibrated with two alternative setups: the first uses a motion sensor for real-time geometric calibration, and the second uses a beam splitter for scene-independent geometric calibration. Experiments in a small-scale laboratory setting evaluate the geometric accuracy of the proposed approaches, and an application demonstrates the proposed ipProjector concept. Techniques for rendering the special effects themselves are beyond the scope of this paper.
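    The abstract does not include an implementation, but the planar projector-camera geometric calibration it describes can be sketched as a homography estimation. The minimal Python/OpenCV example below is illustrative only: the checkerboard layout, file name, and pixel coordinates are hypothetical, and in a real setup the detected corner order must be verified against the rendered pattern.

        # Minimal sketch of planar projector-camera calibration via a homography
        # (hypothetical pattern layout and file name; not the paper's code).
        import cv2
        import numpy as np

        PATTERN = (9, 6)  # inner corners of the projected checkerboard

        # Corner positions in projector pixel space, exactly as the pattern is rendered.
        proj_pts = np.array([[100 + 80 * x, 100 + 80 * y]
                             for y in range(PATTERN[1])
                             for x in range(PATTERN[0])], dtype=np.float32)

        # Camera view of the pattern projected onto the (assumed planar) surface.
        img = cv2.imread("camera_view.png", cv2.IMREAD_GRAYSCALE)
        found, cam_pts = cv2.findChessboardCorners(img, PATTERN)
        if not found:
            raise RuntimeError("calibration pattern not detected")
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
        cam_pts = cv2.cornerSubPix(img, cam_pts, (11, 11), (-1, -1), criteria)

        # Homography mapping camera pixels to projector pixels; valid for this
        # plane only, and corner ordering must match the rendered order.
        H, _ = cv2.findHomography(cam_pts.reshape(-1, 2), proj_pts, cv2.RANSAC)

        # A target tracked at camera pixel (u, v) maps to this projector pixel,
        # telling the projector where to draw a special effect on the target.
        target_cam = np.array([[[320.0, 240.0]]], dtype=np.float32)
        target_proj = cv2.perspectiveTransform(target_cam, H)
        print("draw effect at projector pixel:", target_proj.ravel())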

    Robotic Cameraman for Augmented Reality based Broadcast and Demonstration

    In recent years, a number of large enterprises have gradually begun to use various Augmented Reality technologies to markedly improve how audiences view their products. Among these, the creation of an immersive, interactive virtual scene through projection has received extensive attention; this technique is called projection SAR, short for projection-based spatial augmented reality. However, because existing projection-SAR systems are immobile and have a limited working range, they are difficult to adopt in everyday settings. This thesis therefore proposes a technically feasible optimization scheme so that projection SAR can be practically applied to AR broadcasting and demonstrations. Building on the three main techniques required by state-of-the-art projection-SAR applications, this thesis develops a novel mobile projection-SAR cameraman for AR broadcasting and demonstration. First, by combining a CNN scene-parsing model with multiple contour extractors, the proposed contour extraction pipeline can detect the optimal contour information even in non-HD or blurred images. This algorithm reduces the dependency on high-quality visual sensors and addresses the low contour-extraction accuracy of motion-blurred images. Second, a plane-based visual mapping algorithm is introduced to overcome the difficulties of visual mapping in low-texture scenes. Finally, a complete process for designing the projection-SAR cameraman robot is introduced. This part solves three main problems in mobile projection-SAR applications: (i) a new method for marking contours on the projection model is proposed to replace the model-rendering process; by combining contour features and geometric features, users can easily identify objects on a colourless model. (ii) A camera initial-pose estimation method is developed based on visual tracking algorithms, which registers the robot's starting pose to the whole scene in Unity3D. (iii) A novel data-transmission approach establishes a link between the external robot and its counterpart in the Unity3D simulation workspace, so that the robotic cameraman can simulate its trajectory in Unity3D and project the correct virtual content. The proposed mobile projection-SAR system advances both the academic value and the practicality of existing projection-SAR techniques. First, it solves the problem of limited working range: when the system runs in a large indoor scene, it can follow the user and project dynamic, interactive virtual content automatically, without increasing the number of visual sensors. Second, it creates a more immersive experience for the audience, since it allows the user more body gestures and richer virtual-real interaction. Lastly, a mobile system requires no up-front infrastructure, is cheaper, and offers the public an innovative option for indoor broadcasting and exhibitions.
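    As an illustrative aside, the multi-extractor contour idea (run several extractors, keep the best candidate) can be sketched as below. This is not the thesis pipeline: the CNN scene-parsing stage is omitted and replaced with classical OpenCV extractors, the "best contour" criterion is a simple largest-area stand-in, and OpenCV 4 is assumed.

        # Minimal sketch of a multi-extractor contour pipeline for blurred frames.
        # The thesis pairs a CNN scene-parsing model with several extractors; here
        # the idea is approximated with classical extractors only (OpenCV 4).
        import cv2
        import numpy as np

        def extract_contour(frame_bgr):
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            gray = cv2.bilateralFilter(gray, 9, 75, 75)  # smooth noise, keep edges

            candidates = []
            # Extractor 1: Canny edge map.
            candidates.append(cv2.Canny(gray, 50, 150))
            # Extractor 2: adaptive threshold, more tolerant of motion blur.
            candidates.append(cv2.adaptiveThreshold(
                gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                cv2.THRESH_BINARY_INV, 21, 5))

            best, best_area = None, 0.0
            for mask in candidates:
                contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                               cv2.CHAIN_APPROX_SIMPLE)
                for c in contours:
                    area = cv2.contourArea(c)
                    if area > best_area:
                        best, best_area = c, area
            # Largest closed contour found across all extractors, or None.
            return best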

    Investigating and understanding the theoretical constructs and technical impactors on immersive dome user experience

    This thesis explores the novel area of immersive digital domes (IDEs), a virtual reality medium that allows multiple users, sometimes large groups, to share engaging, interactive, real-time experiences in a single space. Using an active commercial development programme, in the form of a government-backed knowledge transfer programme, this thesis develops a theoretical design framework and evaluates it through iterative technical implementation, with the overall goal of advancing collective knowledge of user experience within immersive digital domes. Between September 2015 and September 2017, the project researched and analysed dedicated developments in the design, development and implementation of improvements to the host company's digital dome product. During this time, the project team established itself as a knowledge leader in both the experiences currently offered within digital domes and the inherent flaws in their technical creation and internal user experience. Through detailed analysis of the available research, this thesis explores the relationship between existing interactive-space paradigms and those found within immersive domes, with the aim of understanding which components contribute to experience within IDEs and how these components act together to influence user perception of the social, the interactive and the experiential. This knowledge is used to drive direct commercial change and impact in design, development and technical advancement, and to identify the inherent impactors on user experience within immersive interactive spaces. This direct connection to immediate commercial implementation and iteration allows for a very pragmatic qualitative methodology, conducted across two phases. Phase one focuses on the experiential aspect of IDEs for the end user. It involved collecting and analysing data from end users, sampling the existing literature and scrutinising previous experiences, thus allowing initial constructs for the evaluation of IDE experiences to be formed. Phase two focuses on refining the developer experience. It does so by using the theoretical learning from the first half of the research to drive rapid prototyping, hands-on iteration and real-world technical adaptations that further refine the understanding of user experience within immersive dome environments. Below, chapters two, three and four explore the existing literature and background, give context to the project and its novel area of study, and propose the initial research questions this work will answer. Chapter four specifically discusses the unique on-site, active, implemented research methodology employed by the project, including how results and developments are validated. Chapter five explores phase one, end-user experience. It expands on findings in the current literature to outline the understanding of, and developments in, user experience design within immersive domes, documenting the elements that contribute to dome experience as extrapolated from the existing literature and from detailed analysis with an assembled expert panel. Chapter five also revisits the initial research hypotheses for further refinement. Chapter six documents phase two, outlining and analysing a number of technical implementations within the host digital dome and how they address the areas identified in the user experience hypotheses.
Technical implementations cover both hardware and software refinements, each with a distinct purpose or issue to address, and each is mapped to the constructs of immersive dome UX. Finally, chapters seven and eight integrate the findings from each phase and explore the wider crossover between the technical and end-user experience areas. Chapter seven analyses the impact of the changes over the 24-month period and discusses what further improvements to user and technical experience are possible on the basis of the previous work and learnings. Chapter eight looks to future research, discussing the implications for the development of IDEs, future areas of work, and the goal of a more measurable framework for capturing and analysing UX within immersive digital domes.

    Automatic Projector Calibration Using Self-Identifying Patterns


    Synchronized Illumination Modulation for Digital Video Compositing

    The exchange of information is one of humanity's basic needs. Where wall paintings, handwriting, printing, and painting once served this purpose, people later began to create image sequences which, as so-called flip books, conveyed the impression of animation. These were quickly automated through rotating picture discs on which an animation became visible with the help of slit shutters, mirrors or optics, in devices known as phenakistiscopes, zoetropes and praxinoscopes. With the invention of photography in the second half of the 19th century, the first scientists, such as Eadweard Muybridge, Etienne-Jules Marey and Ottomar Anschütz, began to capture serial photographs and play them back in rapid succession as film. With the beginning of film production, the first attempts were made to use this new technology to generate special visual effects and thereby further increase the immersion of moving-image productions. While these effects remained quite limited during the analog phase of film production, lasting into the 1980s, and had to be created laboriously with enormous manual effort, they gained ever greater importance with the rapidly accelerating development of semiconductor technology and the simplified digital post-processing it enabled. The enormous possibilities that arose from lossless post-processing combined with photorealistic three-dimensional renderings have led to nearly every film produced today containing a variety of digital video compositing effects. ... Besides home entertainment and business presentations, video projectors are powerful tools for modulating images spatially as well as temporally. The re-emerging need for stereoscopic displays increases the demand for low-latency projectors, and recent advances in LED technology also offer high modulation frequencies. Combining such high-frequency illumination modules with synchronized, fast cameras makes it possible to develop specialized high-speed illumination systems for visual effects production. In this thesis we present different systems that use spatially and temporally modulated illumination in combination with a synchronized camera to simplify the requirements of standard digital video compositing techniques for film and television productions and to offer new possibilities for visual effects generation. After an overview of the basic terminology and a summary of related methods, we discuss and give examples of how modulated light can be applied in a scene-recording context to enable a variety of effects that cannot be realized using standard methods such as virtual studio technology or chroma keying. We propose using high-frequency, synchronized illumination which, in addition to providing illumination, is modulated in intensity and wavelength to encode technical information for visual effects generation. This is done in such a way that the technical components neither influence the final composite nor are visible to observers on the film set. Using this approach, we present a real-time flash-keying system that generates perspectively correct augmented composites by projecting imperceptible markers for optical camera tracking.
Furthermore, we present a system that enables the generation of various digital video compositing effects outside completely controlled studio environments such as virtual studios. A third, temporal keying system is presented that aims to overcome the constraints of traditional chroma keying in terms of color spill and color dependency. ...
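    As a hedged illustration of the temporal keying idea, not the thesis's actual system, the sketch below derives a matte by differencing a synchronized lit/unlit frame pair, which makes the key independent of foreground colour. OpenCV is assumed, and the file names and threshold are hypothetical.

        # Minimal sketch of temporal (flash) keying, assuming the illumination is
        # strobed in sync with the camera: one illuminated frame, one unlit frame.
        import cv2
        import numpy as np

        lit   = cv2.imread("frame_lit.png").astype(np.float32)    # illuminated frame
        unlit = cv2.imread("frame_unlit.png").astype(np.float32)  # matched unlit frame

        # Foreground near the modulated light source changes brightness strongly
        # between the two exposures; the static background barely changes.
        diff = cv2.cvtColor(cv2.absdiff(lit, unlit).astype(np.uint8),
                            cv2.COLOR_BGR2GRAY)
        matte = (diff > 25).astype(np.uint8) * 255           # hard key; refine as needed
        matte = cv2.morphologyEx(matte, cv2.MORPH_CLOSE,
                                 np.ones((5, 5), np.uint8))  # fill small holes

        # Composite the keyed foreground over new background content, with no
        # colour spill or colour dependency, unlike chroma keying.
        background = cv2.imread("virtual_background.png").astype(np.float32)
        alpha = (matte.astype(np.float32) / 255.0)[..., None]
        composite = alpha * lit + (1.0 - alpha) * background
        cv2.imwrite("composite.png", composite.astype(np.uint8))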