
    Immersive Gaming in a Hemispherical Dome. Case Study: Blender Game Engine

    In the following we discuss a cost-effective immersive gaming environment and its implementation in Blender [1], an open-source game engine. This extends traditional approaches to immersive gaming, which tend to concentrate on multiple flat screens, sometimes surrounding the player, or on cylindrical [2] displays. In the former there are unnatural gaps between the displays due to screen framing; in both cases they rarely cover a 180 degree horizontal field of view and are even less likely to cover the vertical field of view required to fully engage the human visual system. The solution introduced here concentrates on seamless hemispherical displays, planetariums in general and the iDome [3] as a specific case study. The methodology discussed is equally appropriate to other realtime 3D environments that are available in source code form or have a suitably powerful means of modifying the rendering pipeline.
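
    Dome displays of this kind are commonly driven by rendering several perspective views and warping them into a 180° angular fisheye for the projector. As an illustrative sketch only, not the paper's Blender implementation, the Python/NumPy function below shows the core of such a warp: mapping a fisheye pixel to the 3D view direction that must be sampled from the rendered views. The function name and axis conventions are assumptions.

```python
import numpy as np

def fisheye_direction(u, v, aperture=np.pi):
    """Map normalised fisheye image coords (u, v) in [-1, 1] to a unit
    view direction, using the angular (equidistant) fisheye model.
    Returns None for pixels outside the fisheye circle."""
    r = np.hypot(u, v)
    if r > 1.0:
        return None                      # outside the projected hemisphere
    phi = np.arctan2(v, u)               # azimuth around the optical axis
    theta = r * aperture / 2.0           # angle away from the optical axis
    # Optical axis along +z; x right, y up (illustrative convention).
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

# Example: the fisheye centre looks straight down the dome axis,
# the rim is 90 degrees off-axis for a 180-degree aperture.
print(fisheye_direction(0.0, 0.0))   # -> [0, 0, 1]
print(fisheye_direction(1.0, 0.0))   # -> [1, 0, ~0]
```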

    Vision Sensors and Edge Detection

    The Vision Sensors and Edge Detection book reflects a selection of recent developments in the area of vision sensors and edge detection. The book has two sections. The first presents vision sensors with applications to panoramic vision sensors, wireless vision sensors, and automated vision sensor inspection; the second covers image processing techniques such as image measurements, image transformations, filtering, and parallel computing.
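
    As a minimal illustration of the edge-detection theme (not an excerpt from the book), the sketch below computes a Sobel gradient-magnitude edge map in Python/NumPy; the function name and the toy input are illustrative.

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map of a 2-D greyscale image via Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            patch = img[y:y + 3, x:x + 3]
            gx = np.sum(patch * kx)      # horizontal gradient
            gy = np.sum(patch * ky)      # vertical gradient
            out[y, x] = np.hypot(gx, gy)
    return out

# Example: a vertical step edge produces a strong response along its column.
img = np.zeros((8, 8)); img[:, 4:] = 1.0
print(sobel_edges(img).max())            # -> 4.0
```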

    Spherical Image Processing for Immersive Visualisation and View Generation

    This research presents the study of processing panoramic spherical images for immersive visualisation of real environments and for the generation of in-between views from two acquired views. For visualisation based on one spherical image, the surrounding environment is modelled by a unit sphere mapped with the spherical image, and the user is then allowed to navigate within the modelled scene. For visualisation based on two spherical images, a view generation algorithm is developed for modelling an indoor man-made environment, and new views can be generated at an arbitrary position with respect to the existing two. This allows the scene to be modelled using multiple spherical images and the user to move smoothly from one sphere-mapped image to another by passing through generated in-between sphere-mapped images.
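
    A minimal sketch of the sphere-mapping step described above, assuming an equirectangular panorama and the axis conventions noted in the comments (both assumptions, not taken from the thesis): it converts unit view directions into panorama pixel lookups, which is the core operation when rendering a perspective view from a sphere-mapped image.

```python
import numpy as np

def sample_equirectangular(pano, directions):
    """Look up colours in an equirectangular panorama for unit view directions.

    pano:        H x W x 3 array; longitude spans [-pi, pi] left to right,
                 latitude spans [+pi/2, -pi/2] top to bottom.
    directions:  N x 3 array of unit vectors (x right, y up, z forward).
    """
    h, w, _ = pano.shape
    x, y, z = directions[:, 0], directions[:, 1], directions[:, 2]
    lon = np.arctan2(x, z)                      # azimuth about the vertical axis
    lat = np.arcsin(np.clip(y, -1.0, 1.0))      # elevation
    u = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
    v = ((0.5 - lat / np.pi) * (h - 1)).astype(int)
    return pano[v, u]

# Example: the forward direction samples the centre of the panorama.
pano = np.random.rand(180, 360, 3)
print(sample_equirectangular(pano, np.array([[0.0, 0.0, 1.0]])))
```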

    3D virtualization of an underground semi-submerged cave system

    Underwater caves represent the most challenging scenario for exploration, mapping and 3D modelling. In such a complex environment, unsuited to humans, highly specialised skills and expensive equipment are normally required. Technological progress and scientific innovation nowadays aim to develop safer and more automated approaches for the virtualization of these complex and not easily accessible environments, which constitute a unique natural, biological and cultural heritage. This paper presents a pilot study carried out for the virtualization of 'Grotta Giusti' (Fig. 1), an underground semi-submerged cave system in central Italy. After an introduction to the virtualization process in the cultural heritage domain and a review of techniques and experiences for the virtualization of underground and submerged environments, the paper focuses on the employed virtualization techniques. In particular, the developed approach for simultaneously surveying the semi-submerged areas of the cave with a stereo camera system, and the resulting virtual cave, will be discussed.
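
    As a hedged illustration of the stereo-camera survey idea (not the authors' processing chain), the sketch below recovers metric depth from a rectified stereo disparity map using the standard pinhole relation Z = fB/d; the focal length and baseline in the example are illustrative values only.

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Depth map (metres) from a rectified stereo disparity map (pixels).

    Uses the standard pinhole relation Z = f * B / d; invalid (zero or
    negative) disparities are returned as NaN."""
    d = np.asarray(disparity, dtype=float)
    depth = np.full_like(d, np.nan)
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# Example: with a 1400 px focal length and a 0.12 m baseline (illustrative
# values only), a 20 px disparity corresponds to 8.4 m of depth.
print(depth_from_disparity(np.array([[20.0]]), focal_px=1400.0, baseline_m=0.12))
```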

    Intraoperative Endoscopic Augmented Reality in Third Ventriculostomy

    In neurosurgery, as a result of brain shift, the preoperative patient models used as an intraoperative reference change during the operation. A meaningful use of the preoperative virtual models during the operation therefore requires a model update. The NEAR project (Neuroendoscopy towards Augmented Reality) describes a new camera calibration model for highly distorted lenses and introduces the concept of active endoscopes endowed with navigation, camera calibration, augmented reality and triangulation modules.
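
    The NEAR calibration model itself is not reproduced here. As a generic point of reference only, the sketch below applies the standard Brown-Conrady radial/tangential distortion model to normalised image points, the kind of mapping a calibration estimates for a strongly distorting endoscope lens; the coefficient values in the example are illustrative.

```python
import numpy as np

def distort_points(pts, k1, k2, p1, p2):
    """Apply the Brown-Conrady radial/tangential distortion model to
    normalised image points (N x 2). This is the generic textbook model,
    not the NEAR project's own calibration model."""
    x, y = pts[:, 0], pts[:, 1]
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.stack([xd, yd], axis=1)

# Example: a strong negative k1 pulls peripheral points inwards (barrel distortion).
pts = np.array([[0.0, 0.0], [0.5, 0.5]])
print(distort_points(pts, k1=-0.3, k2=0.05, p1=0.0, p2=0.0))
```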

    A novel approach to programmable imaging using MOEMS

    New advancements in science are frequently sparked by the invention of new instruments. Possibly the most important scientific instrument of the past fifty years is the digital computer. Among the computer's many uses and impacts, digital imaging has revolutionized images and photography, merging computer processing and optical images. In this thesis, we merge an additional reconfigurable micro-mechanical domain into the digital imaging system, introducing a novel imaging method called Programmable Imaging. With our imaging method, we selectively sample the object plane by utilizing state-of-the-art Micro-Optical-Electrical-Mechanical Systems (MOEMS) mirror arrays. The main concept is to use an array of tiny mirrors that have the ability to tilt in different directions. Each mirror acts as an “eye” which images a scene. The individual images from each mirror are then reassembled such that all of the information is placed into a single image. By exact control of the mirrors, the object plane can be sampled in a desired fashion, so that post-processing effects such as image distortion and digital zoom, which are currently performed in software, can instead be performed in real time in hardware as the image is captured. It is important to note that even for different sampling or imaging functions, no hardware components or settings are changed in the system.

    In this work, we present our programmable imaging system prototype. The MOEMS chipset used in our prototype is the Lucent LambdaRouter mirror array. This device contains 256 individually controlled micro-mirrors, which can be tilted on both the x and y axes by ±8°. We describe the theoretical model of our system, including a system model, capacity model, and diffraction results. We experimentally prototype our programmable imaging system using first a single mirror and then multiple mirrors. With single-mirror imaging, we explore examples related to single projection systems and give details of the required mirror calibration. Using this technique, we show mosaic images, as well as images in which a single pixel was extracted for every mirror tilt. Using this single-pixel approach, the greatest capabilities of our programmable imaging are realized.

    When using multiple mirrors to image an object, new features of our system are demonstrated. In this case, the object plane can be viewed from different perspectives. From these multi-perspective images, virtual 3-D images can be created. In addition, stereo depth estimation can be performed to calculate the distance between the object and the image plane. This depth measurement is significant, as the depth information is obtained with only one image from only one camera.

    Ph.D., Electrical Engineering -- Drexel University, 200
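
    As a geometric sketch of the sampling idea, assuming an idealised flat mirror, exact specular reflection and an illustrative coordinate frame (none of which are taken from the thesis), the following function computes where a camera ray lands on the object plane after reflecting off a micromirror tilted by given x/y angles.

```python
import numpy as np

def reflected_sample_point(tilt_x_deg, tilt_y_deg, plane_z=100.0):
    """Where a camera ray arriving along -z, hitting a micromirror at the
    origin tilted by (tilt_x_deg, tilt_y_deg), lands on the object plane
    z = plane_z.

    Idealised geometry only: a flat mirror, exact specular reflection, no
    offset of the mirror within the array. Tilts rotate the mirror normal
    about the x and y axes (the LambdaRouter mirrors tilt roughly +/-8 deg
    per axis)."""
    ax, ay = np.radians([tilt_x_deg, tilt_y_deg])
    rot_x = np.array([[1, 0, 0],
                      [0, np.cos(ax), -np.sin(ax)],
                      [0, np.sin(ax),  np.cos(ax)]])
    rot_y = np.array([[ np.cos(ay), 0, np.sin(ay)],
                      [0, 1, 0],
                      [-np.sin(ay), 0, np.cos(ay)]])
    n = rot_y @ rot_x @ np.array([0.0, 0.0, 1.0])   # tilted mirror normal
    d = np.array([0.0, 0.0, -1.0])                   # incoming camera ray
    r = d - 2 * np.dot(d, n) * n                     # specular reflection
    t = plane_z / r[2]                               # intersect plane z = plane_z
    return (r * t)[:2]

# Example: a 4-degree tilt on each axis shifts the sampled point by roughly
# 14 units along each axis on a plane 100 units away, since the reflected
# ray deviates by about twice the tilt angle.
print(reflected_sample_point(4.0, 4.0))
```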