168 research outputs found

    On the Hardware/Software Design and Implementation of a High Definition Multiview Video Surveillance System

    Virtual camera synthesis for soccer game replays

    In this paper, we present a set of tools developed during the creation of a platform that allows the automatic generation of virtual views in a live soccer game production. Observing the scene through a multi-camera system, the platform computes a 3D approximation of the players and uses it to synthesise virtual views. The system is suitable both for static scenes, to create bullet-time effects, and for video applications, where the virtual camera moves as the game plays.
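
    The abstract stops at the system level, so as a minimal illustration of the final step (rendering assumed player geometry from a freely chosen viewpoint), the Python sketch below projects hypothetical 3D player positions into a virtual pinhole camera whose centre is interpolated between two real camera centres. The intrinsics K, the camera positions and player_points are placeholder assumptions, not the authors' data or code.

        import numpy as np

        def look_at(cam_pos, target, up=np.array([0.0, 1.0, 0.0])):
            """Build a world-to-camera rotation/translation looking from cam_pos towards target."""
            z = target - cam_pos
            z = z / np.linalg.norm(z)
            x = np.cross(up, z); x = x / np.linalg.norm(x)
            y = np.cross(z, x)
            R = np.stack([x, y, z])      # rows are the camera axes expressed in world coordinates
            return R, -R @ cam_pos

        def project(K, R, t, points_w):
            """Project Nx3 world points with a pinhole model; returns Nx2 pixel coordinates."""
            pc = (R @ points_w.T).T + t  # world -> camera coordinates
            uv = (K @ pc.T).T
            return uv[:, :2] / uv[:, 2:3]

        # Hypothetical setup: two real camera centres and a virtual camera halfway between them.
        K = np.array([[1200.0, 0.0, 960.0], [0.0, 1200.0, 540.0], [0.0, 0.0, 1.0]])
        cam_a, cam_b = np.array([0.0, 8.0, -40.0]), np.array([30.0, 8.0, -40.0])
        cam_virtual = 0.5 * (cam_a + cam_b)

        # Assumed 3D player positions, standing in for the output of the multi-camera reconstruction.
        player_points = np.array([[10.0, 0.9, 5.0], [22.0, 0.9, 12.0], [15.0, 0.9, -3.0]])

        R, t = look_at(cam_virtual, target=np.array([15.0, 0.0, 0.0]))
        print(project(K, R, t, player_points))  # pixel positions of the players in the synthesised view

    Sweeping the interpolation weight between the two real cameras over time is what would give the freely moving virtual camera described above; the real system additionally has to render textured player geometry rather than isolated points.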

    Model based 3D vision synthesis and analysis for production audit of installations.

    PhD thesis. One of the challenging problems in the aerospace industry is to design an automated 3D vision system that can sense the installation components in an assembly environment and check that certain safety constraints are duly respected. This thesis describes a concept application to aid a safety engineer in performing an audit of a production aircraft against safety-driven installation requirements such as segregation, proximity, orientation and trajectory. The capability is achieved using the following steps. The initial step is to capture images of a product and measure the distance between datum points within the product, with or without reference to a planar surface. This gives the safety engineer a means to perform measurements on a set of captured images of the equipment they are interested in. The next step is to reconstruct a digital model of the fabricated product from the multiple captured images, repositioning parts according to the actual build. The safety-related installation constraints defined in the digital mock-up are then projected onto the 3D digital reconstruction, respecting the original intent of those constraints. The differences between the 3D reconstruction of the actual product and the design-time digital mock-up are identified, and the differences/non-conformances that are relevant to the safety-driven installation requirements are reported with reference to the original safety requirement intent. Together, these steps give the safety engineer the ability to overlay a digital reconstruction that is as true to the fabricated product as possible, so that they can see how the product conforms, or fails to conform, to the safety-driven installation requirements. The work has produced a concept demonstrator that will be further developed in future work to address accuracy, workflow and process efficiency. A new depth-based segmentation technique, GrabCutD, is proposed as an improvement on GrabCut, an existing graph-cut-based segmentation method. Conventional GrabCut relies only on color information to achieve segmentation; in stereo or multiview analysis, however, additional information is available that can also be used to improve segmentation. Depth-based approaches carry the discriminative power of ascertaining whether an object is nearer or farther. We show the usefulness of the approach when stereo information is available and evaluate it on standard datasets against state-of-the-art results.
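
    The abstract describes GrabCutD only as adding depth information to GrabCut, without giving the formulation, so the following is a hedged Python/OpenCV sketch rather than the thesis method: it runs the standard colour-only cv2.grabCut and then, purely as an illustrative stand-in for a depth term, discards foreground pixels whose depth is far from the median foreground depth. The function name, the depth_tol parameter and the median heuristic are assumptions made for illustration.

        import cv2
        import numpy as np

        def grabcut_with_depth_hint(bgr, depth, rect, depth_tol=0.5, iters=5):
            """Colour-only GrabCut followed by a crude depth-consistency check.

            bgr   : HxWx3 uint8 image
            depth : HxW float array (e.g. metres), pixel-aligned with bgr
            rect  : (x, y, w, h) rough bounding box around the object
            """
            mask = np.zeros(bgr.shape[:2], np.uint8)
            bgd = np.zeros((1, 65), np.float64)
            fgd = np.zeros((1, 65), np.float64)

            # Standard GrabCut: colour information only.
            cv2.grabCut(bgr, mask, rect, bgd, fgd, iters, cv2.GC_INIT_WITH_RECT)
            fg = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))

            # Illustrative depth term (an assumption, not GrabCutD itself): keep only
            # pixels whose depth is close to the median depth of the colour foreground.
            med = np.median(depth[fg])
            return fg & (np.abs(depth - med) < depth_tol)

    In GrabCutD itself, depth presumably enters the segmentation model alongside colour rather than as a post-hoc filter; the filter here only conveys why a nearer/farther cue can separate an object from a similarly coloured background.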

    Fourteenth Biennial Status Report: March 2017 - February 2019

    A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

    Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data derived from the analysis of such data or from simulations. The advent of new imaging technologies, such as lightsheet microscopy, confronts users with an ever-growing amount of data, with terabytes of imaging data created within a single day. With the possibility of gentler and higher-performance imaging, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of this data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed into or embedded in full applications. To better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and their associated controllers, is becoming an invaluable tool. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh and large volumetric data containing multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies where scenery provides tangible benefit in developmental and systems biology: with Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets by tracking eye gaze in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude. We further introduce ideas for moving towards virtual reality-based laser ablation and perform a user study to gain insight into performance, acceptance and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas on how to render the highly efficient Adaptive Particle Representation, and finally, we present sciview, an ImageJ2/Fiji plugin that makes the features of scenery available to a wider audience.
    Contents: Abstract; Foreword and Acknowledgements; Overview and Contributions; Part I - Introduction (1 Fluorescence Microscopy, 2 Introduction to Visual Processing, 3 A Short Introduction to Cross Reality, 4 Eye Tracking and Gaze-based Interaction); Part II - VR and AR for Systems Biology (5 scenery — VR/AR for Systems Biology, 6 Rendering, 7 Input Handling and Integration of External Hardware, 8 Distributed Rendering, 9 Miscellaneous Subsystems, 10 Future Development Directions); Part III - Case Studies (11 Bionic Tracking: Using Eye Tracking for Cell Tracking, 12 Towards Interactive Virtual Reality Laser Ablation, 13 Rendering the Adaptive Particle Representation, 14 sciview — Integrating scenery into ImageJ2 & Fiji); Part IV - Conclusion (15 Conclusions and Outlook); Backmatter & Appendices (A Questionnaire for VR Ablation User Study, B Full Correlations in VR Ablation Questionnaire, C Questionnaire for Bionic Tracking User Study); List of Tables; List of Figures; Bibliography; Selbstständigkeitserklärung
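
    As a loose, assumption-labelled illustration of the Bionic Tracking idea summarised above (turning a stream of gaze samples inside a 4D volume into a cell track), the Python sketch below snaps each gaze position to the locally brightest voxel of a numpy volume. The function name, the neighbourhood radius and the intensity-maximum heuristic are placeholders, not scenery's implementation.

        import numpy as np

        def gaze_to_track(volume_4d, gaze_points, radius=3):
            """Turn per-timepoint gaze positions into a cell track.

            volume_4d   : (T, Z, Y, X) intensity array
            gaze_points : iterable of integer (t, z, y, x) gaze samples inside the volume
            Returns one refined (t, z, y, x) position per sample, snapped to the
            brightest voxel in a small neighbourhood around the gaze point.
            """
            track = []
            for t, z, y, x in gaze_points:
                vol = volume_4d[t]
                zs = slice(max(z - radius, 0), z + radius + 1)
                ys = slice(max(y - radius, 0), y + radius + 1)
                xs = slice(max(x - radius, 0), x + radius + 1)
                patch = vol[zs, ys, xs]
                dz, dy, dx = np.unravel_index(np.argmax(patch), patch.shape)
                track.append((t, zs.start + dz, ys.start + dy, xs.start + dx))
            return track

    The thesis reports that following a cell with the eyes in this way has the potential to speed up manual tracking by an order of magnitude; the sketch only shows how raw gaze samples could be stabilised into voxel positions.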