
    Omnidirectional Stereo Vision for Autonomous Vehicles

    Environment perception with cameras is an important requirement in many applications for autonomous vehicles and robots. This work presents a stereoscopic omnidirectional camera system for autonomous vehicles which resolves the problem of a limited field of view and provides a 360° panoramic view of the environment. We present a new projection model for these cameras and show that the camera setup overcomes major drawbacks of traditional perspective cameras in many applications.
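The abstract above describes a projection model for a 360° panoramic camera. The paper's actual model is not reproduced here; the following is only a generic equiangular cylindrical projection sketch, with an assumed panorama resolution and vertical field of view, to illustrate how such a mapping gives an unlimited horizontal field of view.

```python
import math

# Assumed (illustrative) panorama resolution and vertical field of view;
# these values are not taken from the paper.
WIDTH, HEIGHT = 4096, 1024
V_FOV = math.pi / 2  # vertical field of view in radians

def project_cylindrical(x, y, z):
    """Map a 3D point in the camera frame to panorama pixel coordinates.

    Azimuth wraps the full 360 degrees around the vertical axis, so the
    horizontal field of view is unlimited -- the property an
    omnidirectional setup provides over a perspective camera.
    """
    theta = math.atan2(x, z)                # azimuth in (-pi, pi]
    phi = math.atan2(y, math.hypot(x, z))   # elevation above the horizon
    u = (theta + math.pi) / (2 * math.pi) * WIDTH   # column: linear in azimuth
    v = (0.5 - phi / V_FOV) * HEIGHT                # row: linear in elevation
    return u, v
```

A point straight ahead of the camera (on the +z axis) lands at the centre of the panorama, and points to either side simply shift the column, with no clipping at any azimuth.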

    Modélisation et développement d'une plateforme intelligente pour la capture d'images panoramiques cylindriques (Modelling and development of a smart platform for capturing cylindrical panoramic images)

    In most robotic applications, vision systems can significantly improve the perception of the environment. The panoramic view is particularly attractive because it allows omnidirectional perception. However, it is rarely used because the methods that provide panoramic views also have significant drawbacks. Most omnidirectional vision systems combine a matrix camera with a mirror, rotating matrix cameras, or a wide-angle lens. The major drawbacks of these sensors are large image distortions and heterogeneous resolution. Other methods, while providing homogeneous resolution, produce a huge data flow that is difficult to process in real time, and are either too slow or lack precision. To address these problems, we propose a smart panoramic vision system that offers technological improvements over rotating linear-sensor methods. It provides homogeneous 360-degree cylindrical imaging at a resolution of 6600 × 2048 pixels, with a precision turntable that synchronizes position with acquisition. We also propose a solution to the bandwidth problem through the implementation of a feature extractor that selects only the invariant features of the image, so that the camera produces a panoramic view at high speed while delivering only relevant information. A general geometric model has been developed to describe the image-formation process, and a calibration method specially designed for this kind of sensor is presented. Finally, localization and structure-from-motion experiments are described to show a practical use of the system in SLAM applications.
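The image-formation geometry described above (a rotating linear sensor building a 6600 × 2048 cylindrical panorama, with the turntable synchronized to acquisition) can be sketched as a mapping between turntable angle and panorama column. The function names and the uniform-rotation assumption below are illustrative, not taken from the thesis.

```python
import math

PANO_WIDTH = 6600   # columns: one per angular step of the turntable
PANO_HEIGHT = 2048  # pixels along the linear sensor

def column_for_angle(theta):
    """Map a turntable angle theta (radians) to a panorama column index.

    Each column of the cylindrical image corresponds to one angular
    position of the linear sensor; resolution is homogeneous because
    the angular step between columns is constant (2*pi / PANO_WIDTH).
    """
    theta = theta % (2 * math.pi)
    return int(theta / (2 * math.pi) * PANO_WIDTH)

def angle_for_column(u):
    """Inverse mapping: column index to the centre angle of that column."""
    return (u + 0.5) * 2 * math.pi / PANO_WIDTH
```

This constant angular step is what distinguishes the rotating linear-sensor approach from catadioptric (camera + mirror) systems, whose resolution varies across the mirror surface.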

    An investigation into web-based panoramic video virtual reality with reference to the virtual zoo.

    Panoramic-image Virtual Reality (VR) is a 360-degree image that has been interpreted as a kind of VR allowing users to navigate, view, hear, and have remote access to a virtual environment. Panoramic-video VR builds on this: filming is done in the real world to create a highly dynamic and immersive environment. This is proving to be a very attractive technology and has introduced many possible applications, but it still presents a number of challenges, considered in this research. An initial literature survey identified limitations in panoramic video to date: the technology (e.g. filming and stitching) and the design of effective navigation methods. In particular, there is a tendency for users to become disoriented during way-finding. In addition, an effective interface design to embed contextual information is required. The research identified the need for a controllable test environment in order to evaluate the production of the video and the optimal way of presenting and navigating within the scene. Computer Graphics (CG) simulation scenes were developed to establish a method of capturing, editing, and stitching the video under controlled conditions. In addition, a novel navigation method, named the "image channel", was proposed and integrated within this environment. This replaced hotspots, the traditional navigational jumps between locations. Initial user testing indicated that the production was appropriate and significantly improved user perception of position and orientation over jump-based navigation. The interface design combined with the environment view alone was sufficient for users to understand their location without the need to augment the view with an on-screen map. After establishing optimal methods for building and improving the technology, the research sought a natural, complex, and dynamic real environment for testing.
    The web-based virtual zoo (World Association of Zoos and Aquariums) was selected as an ideal production: its purpose is to allow people to get close to animals in their natural habitat, and it created particular interest in developing a system for knowledge delivery, raising protection concerns, and entertaining visitors, all key roles of a zoo. The design method established from CG was then used to develop a film rig and production unit for filming a real animal habitat: the Formosan rock monkey in Taiwan. A web-based panoramic video of this habitat was built and tested through user-experience testing and expert interviews. The results were essentially identical to the testing done in the prototype environment and validated the production; the site also successfully attracted repeat visits. The research has contributed new knowledge in improving the production process and in presenting and navigating within panoramic videos through the proposed image-channel method, and has demonstrated that a web-based virtual zoo can be improved with this technology to help address the considerable pressures of animal extinction and animal-habitat degradation that affect humans. Directions for further study were identified. The research was sponsored by Taiwan's Government, with Twycross Zoo UK as a collaborator.