Camera positioning for 3D panoramic image rendering
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London. Virtual camera realisation and the proposition of a trapezoidal camera architecture are the two broad contributions of this thesis. Firstly, multiple cameras and their arrangement constitute a critical component that affects the integrity of visual content acquisition for multi-view video. Currently, linear, convergent, and divergent arrays are the prominent camera topologies adopted. However, the large number of cameras required and their synchronisation are two of the prominent challenges usually encountered. The use of virtual cameras can significantly reduce the number of physical cameras used with respect to any of the known
camera structures, hence reducing some of the other implementation issues. This thesis explores the use of image-based rendering, with and without geometry, in implementations leading to the realisation of virtual cameras. The virtual camera implementation was carried out from the perspectives of a depth map (geometry) and of multiple image samples (no geometry). Prior to the virtual camera realisation, depth map generation was investigated using region match measures widely known for solving the image point correspondence problem. The constructed depth maps were compared with those generated
using the dynamic programming approach. In both the geometry and no-geometry approaches, the virtual cameras enable the rendering of views from a textured depth map, the construction of a 3D panoramic image of a scene by stitching multiple image samples and superposing them, and the computation
of a virtual scene from a stereo pair of panoramic images. The quality of these rendered images was assessed through either objective or subjective analysis in the Imatest software. Furthermore, metric reconstruction of a scene was performed by re-projection of the pixel points from multiple image samples with
a single centre of projection. This was done using the sparse bundle adjustment algorithm. The statistical summary obtained after applying this algorithm provides a gauge of the efficiency of the optimisation step. The optimised data was then visualised in the Meshlab software environment, providing the reconstructed scene. Secondly, with any of the well-established camera arrangements, all cameras are usually constrained to the same horizontal plane. Occlusion therefore becomes an extremely challenging problem, and a robust camera set-up is required to reliably resolve the hidden parts of scene objects.
To adequately meet the visibility condition for scene objects, given that occlusion of the same scene objects can occur, a multi-plane camera structure is highly desirable. Therefore, this thesis also explores a trapezoidal camera structure for image acquisition. The approach here is to assess the feasibility and potential
of several physical cameras of the same model being sparsely arranged on the edges of an efficient trapezoid graph. This is implemented in both Matlab and Maya. The depth maps rendered in Matlab are of better quality.
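The correspondence step described in this abstract relies on region match measures; as a rough illustration of the principle (not the thesis's exact measure set or implementation), block matching with the sum-of-absolute-differences (SAD) measure on a rectified grayscale stereo pair can be sketched as:

```python
import numpy as np

def sad_disparity(left, right, max_disp=32, window=5):
    """Estimate a disparity map by region (block) matching with the SAD
    measure on a rectified grayscale stereo pair (NumPy arrays).

    For each left-image window, search the same scanline of the right
    image over candidate disparities and keep the lowest-cost match.
    """
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            # Only disparities that keep the candidate window in-bounds.
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(patch.astype(np.int64)
                              - cand.astype(np.int64)).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

The disparity d at each pixel then yields depth via depth = f·B/d for focal length f and baseline B. The dynamic programming approach the thesis compares against replaces this per-pixel winner-take-all search with an optimal path per scanline, which enforces ordering and smoothness along the row.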
Omnidirectional Stereo Vision for Autonomous Vehicles
Environment perception with cameras is an important requirement for many applications in autonomous vehicles and robotics. This work presents a stereoscopic omnidirectional camera system for autonomous vehicles that resolves the problem of a limited field of view and provides a 360° panoramic view of the environment. We present a new projection model for these cameras and show that the camera setup overcomes major drawbacks of traditional perspective cameras in many applications.
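The paper proposes its own projection model, which this abstract does not detail; as a stand-in, the standard equirectangular mapping below illustrates what any such model must provide: a bidirectional tie between panorama pixels and viewing rays, the prerequisite for omnidirectional stereo triangulation.

```python
import numpy as np

def pixel_to_ray(u, v, width, height):
    """Map an equirectangular panorama pixel to a unit viewing ray.

    Longitude spans [-pi, pi) across the width; latitude runs from
    pi/2 at the top row down to -pi/2 at the bottom.
    """
    lon = (u / width) * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v / height) * np.pi
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.array([x, y, z])

def ray_to_pixel(ray, width, height):
    """Inverse mapping: a (not necessarily unit) ray back to pixel coords."""
    x, y, z = ray / np.linalg.norm(ray)
    lon = np.arctan2(x, z)
    lat = np.arcsin(y)
    u = (lon + np.pi) / (2.0 * np.pi) * width
    v = (np.pi / 2.0 - lat) / np.pi * height
    return u, v
```

Given such a model for each camera of the stereo rig, a scene point's depth follows from intersecting the two rays observing it, exactly as in perspective stereo but without the limited field of view.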
Modelling and development of a smart platform for capturing cylindrical panoramic images
In most robotic applications, vision systems can significantly improve the perception of the environment. The panoramic view is particularly attractive because it allows omnidirectional perception. However, it is rarely used because the methods that provide panoramic views also have significant drawbacks. Most of these omnidirectional vision systems combine a matrix camera with a mirror, rotating matrix cameras, or a wide-angle lens. The major drawbacks of this type of sensor are the large distortions of the images and the heterogeneity of the resolution. Other methods, while providing homogeneous resolution, produce a huge data flow that is difficult to process in real time, and are either too slow or lacking in precision. To address these problems, we propose a smart panoramic vision system that presents technological improvements over rotating linear-sensor methods. It allows homogeneous 360-degree cylindrical imaging with a resolution of 6600 × 2048 pixels and uses a precision turntable to synchronise position with acquisition. We also propose a solution to the bandwidth problem through the implementation of a feature extractor that selects only the invariant features of the image, so that the camera produces a panoramic view at high speed while delivering only relevant information. A general geometric model has been developed to describe the image formation process, and a calibration method specially designed for this kind of sensor is presented. Finally, localisation and structure-from-motion experiments are described to show a practical use of the system in Simultaneous Localization And Mapping (SLAM) applications.
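The on-sensor feature extractor is this work's answer to the bandwidth problem, but the abstract does not specify the extractor itself; the sketch below only illustrates the principle with a Harris-style corner response (all function names and parameters here are illustrative assumptions):

```python
import numpy as np

def harris_response(img, k=0.05):
    """Per-pixel Harris-style corner response from image gradients."""
    img = img.astype(np.float64)
    iy, ix = np.gradient(img)                 # gradients along rows, cols
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box(a):
        # Crude 3x3 box smoothing of the structure tensor entries.
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

def select_features(img, max_features=50):
    """Return the (row, col) coordinates of the strongest responses.

    Transmitting only these coordinates (plus small descriptors) instead
    of full frames is what cuts the data flow.
    """
    r = harris_response(img)
    flat = np.argsort(r, axis=None)[::-1][:max_features]
    ys, xs = np.unravel_index(flat, r.shape)
    return list(zip(ys.tolist(), xs.tolist()))
```

At the stated 6600 × 2048 resolution, shipping a few hundred feature coordinates per revolution instead of ~13.5 million pixels is what makes high-speed acquisition and real-time SLAM use tractable.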
An investigation into web-based panoramic video virtual reality with reference to the virtual zoo.
Panoramic-image Virtual Reality (VR) presents a 360-degree image and has been interpreted as a kind of VR that allows users to navigate, view, hear, and have remote access to a virtual environment. Panoramic video VR builds on this, where filming is done in the real world to create a highly dynamic and immersive environment. This is proving to be a very attractive technology and has introduced many possible applications, but it still presents a number of challenges, which are considered in this research.
An initial literature survey identified limitations in panoramic video to date: these were the technology (e.g. filming and stitching) and the design of effective navigation methods. In particular, there is a tendency for users to become disoriented during way-finding. In addition, an effective interface design to embed contextual information is required.
The research identified the need to have a controllable test environment in order to evaluate the production of the video and the optimal way of presenting and navigating within the scene. Computer Graphics (CG) simulation scenes were developed to establish a method of capturing, editing and stitching the video under controlled conditions. In addition, a novel navigation method, named the “image channel” was proposed and integrated within this environment. This replaced hotspots: the traditional navigational jumps between locations. Initial user testing indicated that the production was appropriate and did significantly improve user perception of position and orientation over jump-based navigation. The interface design combined with the environment view alone was sufficient for users to understand their location without the need to augment the view with an on screen map.
After obtaining optimal methods for building and improving the technology, the research looked for a natural, complex, and dynamic real environment for testing. The web-based virtual zoo (World Association of Zoos and Aquariums) was selected as an ideal production: its purpose was to allow people to get close to animals in their natural habitat, and it created particular interest in developing a system for knowledge delivery, raising protection concerns, and entertaining visitors: all key roles of a zoo.
The design method established from CG was then used to develop a film rig and production unit for filming a real animal habitat: the Formosan rock monkey in Taiwan. A web-based panoramic video of this was built and tested through user experience testing and expert interviews. The results were essentially identical to those from the prototype environment and validated the production. It also successfully attracted users to the site repeatedly.
The research has contributed new knowledge through improvements to the production process and to presentation and navigation within panoramic videos via the proposed Image Channel method, and has demonstrated that a web-based virtual zoo built with this technology can help address the considerable pressures of animal extinction and habitat degradation that affect humans. Directions for further study were identified. The research was sponsored by Taiwan's Government, with Twycross Zoo UK as a collaborator.