
    Stereography using a single lens

    In this paper we put forth an innovative method for obtaining a stereographic image on a single frame using a single lens. The method has been verified experimentally, and a preliminary prototype was built with an optimized use of the material available in the laboratory. Prospective applications of the technique are also explored in brief. Once commercialized, this method will reduce the expenses incurred in stereo videography. We also propose a simplified method of obtaining an anaglyph. Comment: 8 pages, 3 figures; this paper has been submitted to the American Journal of Physics
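
    The anaglyph mentioned above is conventionally built by channel mixing. A minimal sketch of the standard red-cyan scheme (an assumption for illustration, not necessarily the paper's simplified method), with toy images represented as rows of (r, g, b) tuples:

```python
# Red-cyan anaglyph composition sketch (standard channel-mixing scheme,
# assumed for illustration). Images are lists of rows of (r, g, b) tuples.

def make_anaglyph(left, right):
    """Combine a stereo pair into one red-cyan anaglyph frame.

    Red comes from the left eye's view, green and blue from the right,
    so red-cyan glasses route each view to the matching eye.
    """
    out = []
    for row_l, row_r in zip(left, right):
        out.append([(pl[0], pr[1], pr[2]) for pl, pr in zip(row_l, row_r)])
    return out

# Tiny 1x2 example images.
left_img = [[(255, 0, 0), (10, 20, 30)]]
right_img = [[(0, 255, 255), (40, 50, 60)]]
anaglyph = make_anaglyph(left_img, right_img)
```

    Viewed through red-cyan glasses, the red channel reaches one eye and the cyan channels the other, which is what produces the depth impression on a single frame.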

    Panoramic Depth Imaging: Single Standard Camera Approach

    In this paper we present a panoramic depth imaging system. The system is mosaic-based, meaning that we use a single rotating camera and assemble the captured images into a mosaic. Because the camera's optical center is offset from the rotational center of the system, we are able to capture the motion parallax effect, which enables stereo reconstruction. The camera rotates on a circular path with a step defined by the angle equivalent to one pixel column of the captured image. The equation for depth estimation can be easily extracted from the system geometry. To find the corresponding points on a stereo pair of panoramic images, the epipolar geometry needs to be determined. It can be shown that the epipolar geometry is very simple if the reconstruction is based on a symmetric pair of stereo panoramic images. We get such a pair when we take symmetric pixel columns on the left and on the right side of the captured image's center column. The epipolar lines of a symmetric pair of panoramic images are then simply image rows. The search space on the epipolar line can be additionally constrained. The focus of the paper is mainly on the system analysis. Results of the stereo reconstruction procedure and the quality evaluation of the generated depth images are quite promising. The system performs well for the reconstruction of small indoor spaces. Our final goal is to develop a system for automatic navigation of a mobile robot in a room
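
    The depth equation referred to above follows from triangulating the two rays of a symmetric column pair. A minimal numeric sketch, assuming the optical centre sits at radius r from the rotational centre and the symmetric columns correspond to viewing directions rotated by +phi and -phi from the radial optical axis (names and values are illustrative, not the paper's notation):

```python
import math

# Hedged sketch: depth from a symmetric stereo panorama pair. The camera's
# optical centre is offset by r from the rotational centre; the matched
# symmetric columns are viewing directions rotated by +/-phi from the
# radially outward optical axis. All values are illustrative.

def intersect_rays(p1, d1, p2, d2):
    """Intersect two 2D rays p + t*d; returns the intersection point."""
    # Solve p1 + t1*d1 = p2 + t2*d2 as a 2x2 linear system for t1.
    a, b = d1[0], -d2[0]
    c, e = d1[1], -d2[1]
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    det = a * e - b * c
    t1 = (rx * e - b * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

def panoramic_depth(r, cam_angle, phi):
    """Depth (from the rotational centre) of the point matched in the
    symmetric columns taken at camera rotations -cam_angle and +cam_angle."""
    rays = []
    for rot, col in ((-cam_angle, +phi), (+cam_angle, -phi)):
        pos = (r * math.cos(rot), r * math.sin(rot))  # camera on the circle
        heading = rot + col  # radial axis rotated by the column offset
        rays.append((pos, (math.cos(heading), math.sin(heading))))
    (p1, d1), (p2, d2) = rays
    x, y = intersect_rays(p1, d1, p2, d2)
    return math.hypot(x, y)
```

    By symmetry the triangulated point lies on the bisecting axis, so the numeric intersection agrees with the closed-form expression r·cos(a) + r·sin(a)·cot(phi - a).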

    Under vehicle perception for high level safety measures using a catadioptric camera system

    In recent years, under-vehicle surveillance and the classification of vehicles have become indispensable tasks for security measures in certain areas such as shopping centers, government buildings, and army camps. The main challenge in this task is monitoring the underframes of vehicles. In this paper we present a novel solution to this problem. Our solution consists of three main parts: monitoring, detection, and classification. In the first part we design a new catadioptric camera system in which a perspective camera points downward at a catadioptric mirror mounted on the body of a mobile robot. Thanks to the catadioptric mirror, scenes opposite the camera's optical-axis direction can be viewed. In the second part we use speeded-up robust features (SURF) in an object recognition algorithm. The fast appearance-based mapping algorithm (FAB-MAP) is exploited for the classification of vehicles in the third part. The proposed technique was implemented in a laboratory environment
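
    The SURF-based recognition step matches descriptor vectors between a query view and stored object views; a common acceptance rule in such pipelines is Lowe's ratio test. A minimal sketch with toy 2-D descriptors standing in for real 64-dimensional SURF vectors:

```python
import math

# Descriptor matching with Lowe's ratio test, the usual acceptance rule in
# SURF/SIFT-style recognition pipelines. The descriptors below are toy
# 2-D vectors, not real SURF output.

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_matches(query, train, ratio=0.7):
    """Return (query_idx, train_idx) pairs whose best match is clearly
    better than the second best (distance ratio below `ratio`)."""
    matches = []
    for qi, qd in enumerate(query):
        dists = sorted((euclidean(qd, td), ti) for ti, td in enumerate(train))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches

query_desc = [(1.0, 0.0), (0.0, 1.0)]
train_desc = [(1.0, 0.1), (5.0, 5.0), (0.1, 1.0)]
good = ratio_test_matches(query_desc, train_desc)
```

    The ratio test rejects ambiguous correspondences, which matters for recognition against cluttered under-vehicle scenes.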

    Capturing Panoramic Depth Images with a Single Standard Camera

    In this paper we present a panoramic depth imaging system. The system is mosaic-based, meaning that we use a single rotating camera and assemble the captured images into a mosaic. Because the camera's optical center is offset from the rotational center of the system, we are able to capture the motion parallax effect, which enables stereo reconstruction. The camera rotates on a circular path with a step defined by an angle equivalent to one column of the captured image. The equation for depth estimation can be easily extracted from the system geometry. To find the corresponding points on a stereo pair of panoramic images, the epipolar geometry needs to be determined. It can be shown that the epipolar geometry is very simple if the reconstruction is based on a symmetric pair of stereo panoramic images. We get such a pair when we take symmetric columns on the left and on the right side of the captured image's center column. The epipolar lines of a symmetric pair of panoramic images are image rows. We focused mainly on the system analysis. The system performs well in the reconstruction of small indoor spaces

    Panoramic Stereovision and Scene Reconstruction

    With the advancement of research in robotics and computer vision, an increasingly large number of applications require the understanding of a scene in three dimensions, and a variety of systems have been deployed for this purpose. This thesis explores a novel 3D imaging technique involving the use of catadioptric cameras in a stereoscopic arrangement. A secondary system aims to stabilize the system in the event that the cameras are misaligned during operation. The system offers a stark advantage as a cost-effective alternative to present-day state-of-the-art systems that achieve the same goal of 3D imaging. The compromise lies in the quality of depth estimation, which can be overcome with a different imager and calibration. The result is a panoramic disparity map generated by the system

    Single Cone Mirror Omni-Directional Stereo

    An omni-directional view and stereo information for scene points are both crucial in many computer vision applications. In some demanding applications, such as autonomous robots, we need to acquire both in real time without sacrificing too much image resolution. This work describes a novel method that meets all these stringent demands with a relatively simple setup and off-the-shelf equipment. Only one simple reflective surface and two regular (perspective) camera views are needed. First we describe the novel stereo method; then we discuss some variations in practical implementation and their respective tradeoffs

    Geometrical Calibration for the Panrover: a Stereo Omnidirectional System for Planetary Rover

    Abstract. A novel panoramic stereo imaging system is proposed in this paper. The system is able to provide 360° stereoscopic vision, useful for autonomous rover driving, and simultaneously capture a high-resolution stereo scene. The core of the concept is a novel "bifocal panoramic lens" (BPL) based on the hyper-hemispheric model (Pernechele et al. 2016). This BPL is able to record a panoramic field of view (FoV) and, simultaneously, an area (belonging to the panoramic FoV) with a given degree of magnification, using a single image sensor. This strategy makes it possible to avoid rotational mechanisms. Using two BPLs mounted on a vertical baseline (a system called PANROVER) allows monitoring of the surrounding environment in stereoscopic (3D) mode while simultaneously capturing high-resolution stereoscopic images for the analysis of scientific cases, making it a new paradigm in the planetary-rover framework. Unlike the majority of Mars systems, which are based on rotational mechanisms for the acquisition of panoramic images (mosaicked on the ground), the PANROVER contains no moving components and can record high-rate stereo images of the surrounding panorama. The scope of this work is the geometric calibration of the panoramic acquisition system by omnidirectional calibration methods (Scaramuzza et al. 2006) based on the Zhang calibration grid. The procedures are applied in order to obtain well-rectified, synchronized stereo images suitable for 3D reconstruction. We applied a Zhang chessboard-based approach during the STC/SIMBIO-SYS stereo camera calibration as well (Simioni et al. 2014, 2017). In this case the targets of the calibration are the stereo heads (the BPLs) of the PANROVER, with the aim of extracting the intrinsic parameters of the optical systems. Unlike previous pipelines, the extrinsic parameters are estimated using the same data bench

    A smart wheelchair system using a combination of stereoscopic and spherical vision cameras

    University of Technology, Sydney. Faculty of Engineering and Information Technology. Reports have shown growing numbers of people who fall into the categories of the elderly or those living with some form of disability. Physical and functional impairments are broad-ranging across these groups, and the causes are numerous, including strokes, spinal cord injury, spina bifida, multiple sclerosis, muscular dystrophy, and various degenerative disorders. Rehabilitation technologies offer a solution to many of these impairments and aim to improve the quality of life of the people who require them. Smart wheelchair developments, in particular, have the purpose of assisting those with mobility disabilities. Providing independence in mobility can bring many significant benefits to users in their daily lives, including improved physical, cognitive, confidence, communication, and social skills. Unfortunately for many, particularly those with tetraplegia (partial or total loss of function, through illness or injury, in all four limbs and the torso), there is a serious lack of options for adequately and safely controlling mobility devices such as wheelchairs. There are few options for hands-free control of wheelchairs and, furthermore, no accessible options for intelligent assistance from the wheelchair to make hands-free control easy and safe. This is a serious issue, since the limited hands-free control options available can be difficult to use, resulting in many accidents. There are also new control technology devices emerging in research, such as brain-computer interfaces (BCIs), which could potentially provide an adequate means of control for many people who cannot use currently commercial options, but which require intelligent assistance from the wheelchair to make their use safe. In this thesis, the design and development of a new smart wheelchair, named TIM, is introduced to address these issues. 
The TIM smart wheelchair was created with the intention of providing intelligent assistance during navigation for any hands-free control technology, whether currently commercial or newly produced in research, aiming to vastly improve the options available to the people who need such smart wheelchair developments. A method of utilising stereoscopic cameras for adaptive, real-time vision mapping is presented in this thesis, as cameras are increasingly becoming a more accessible and inexpensive form of artificial sensor. The mapping process in this method involves acquisition from the left and right stereo pair of cameras, whose images undergo a range of pre-processing techniques before stereo processing, including matching and correlation algorithms, produces a disparity image. This disparity image contains depth information about the scene, which is then converted into a 3-dimensional (3D) point map, placing all mapped pixels of the environment, and the features within it, into a 3D space. Unnecessary information, such as the floor and everything above the maximum height of the TIM smart wheelchair, is removed, and the remaining data is extracted into a 2-dimensional (2D) bird’s-eye-view environment map. This map representation assists the wheelchair in the later steps of making intelligent navigational decisions based on the relative placement of objects in the environment. Readings from encoders on the drive wheels are also acquired during operation, and odometry change calculations are performed so that the system can ‘remember’ mapped object points that have passed outside the vision range. This is performed frequently to construct a real-time environment map and to remember the placement of objects that have moved out of the range of vision, in order to further avoid collisions. This is particularly useful for static environments and for creating maps of the static object placements within them. 
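
The disparity-to-map chain described above can be sketched as: disparity to depth via z = f·B/d, back-projection to a 3D point, filtering by height, and flattening into a 2D occupancy grid. The focal length, baseline, principal point, and height band below are illustrative assumptions, not TIM's actual calibration:

```python
# Hedged sketch of a disparity -> 3D points -> bird's-eye grid pipeline.
# All constants are illustrative assumptions, not TIM's calibration.

F_PX = 500.0          # focal length in pixels (assumed)
BASELINE_M = 0.12     # stereo baseline in metres (assumed)
CX, CY = 64.0, 48.0   # principal point (assumed)
FLOOR_M, CHAIR_TOP_M = 0.05, 1.2  # height band kept in the map (assumed)

def pixel_to_point(u, v, disparity):
    """Back-project one pixel with its disparity into camera coordinates."""
    z = F_PX * BASELINE_M / disparity
    x = (u - CX) * z / F_PX
    y = (CY - v) * z / F_PX   # image v grows downward; flip so y is height
    return x, y, z

def birds_eye_cells(pixels, cell_m=0.25):
    """Map (u, v, disparity) samples to occupied (x, z) grid cells,
    dropping points on the floor or above the wheelchair's height."""
    cells = set()
    for u, v, d in pixels:
        if d <= 0:
            continue  # invalid / unmatched disparity
        x, y, z = pixel_to_point(u, v, d)
        if FLOOR_M < y < CHAIR_TOP_M:
            cells.add((int(x // cell_m), int(z // cell_m)))
    return cells

# One point straight ahead at ~1 m depth above the floor band, and one
# point exactly at floor height that gets filtered out.
occupied = birds_eye_cells([(64, 20, 60.0), (64, 48, 60.0)])
```

The height filter is what turns the full 3D point cloud into a map of only the obstacles the wheelchair could actually collide with.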
A wheel parameter correction process was also employed to increase the accuracy of this mapping process and successfully reduce the errors associated with drive wheel specifications, which can in turn affect the mapping process through the wheel encoder information. Correction of these parameters helped optimise the ‘memory mapping’ process and reduce skewing and accumulative errors in the maps. A process for intelligent stereo processing parameter selection was designed, as the quality of disparity images, and hence the quality of environment mapping, is heavily dependent on the stereo processing parameters, which may work well when set for one environment but produce problems in another. The differences that affected performance between environments were found to be mostly the lighting conditions resulting from the varying types of environments. As such, the proposed method involves classifying environment categories in real-time, based on the available image data, and adapting the parameters accordingly. The environment types were separated into four categories to account for most encountered situations: 1) ‘General Illumination Contrast’, 2) ‘Extreme Illumination Contrast’, 3) ‘Consistent Dark’, and 4) ‘Consistent Bright’. The proposed method successfully allowed real-time classification of the environment categories and adaptation of the stereo processing parameters accordingly, producing a system that can change its settings ‘on the fly’ to suit the environment the wheelchair is navigating through. Limited vision and trouble with dynamic objects were found to be shortcomings of the stereoscopic vision, so to address these, methods utilising a spherical vision camera system were introduced for obstacle detection over a wide vision range. Spherical vision is an extension of monoscopic cameras, producing 360° of panoramic vision. 
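
The encoder-based odometry update mentioned above can be sketched as standard differential-drive dead reckoning; the wheel radius, track width, and encoder resolution below are illustrative stand-ins for TIM's corrected wheel parameters:

```python
import math

# Differential-drive odometry sketch: one pose update per encoder reading.
# All wheel parameters are illustrative assumptions, not TIM's values.

TICKS_PER_REV = 1000      # encoder resolution (assumed)
WHEEL_RADIUS_M = 0.15     # drive wheel radius (assumed)
TRACK_M = 0.55            # distance between the drive wheels (assumed)

def odometry_step(pose, left_ticks, right_ticks):
    """Advance (x, y, heading) by one pair of encoder tick counts."""
    dl = 2 * math.pi * WHEEL_RADIUS_M * left_ticks / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_RADIUS_M * right_ticks / TICKS_PER_REV
    d = (dl + dr) / 2.0            # distance travelled by the centre
    dtheta = (dr - dl) / TRACK_M   # heading change
    x, y, th = pose
    # First-order update: move along the old heading, then apply the turn.
    return (x + d * math.cos(th), y + d * math.sin(th), th + dtheta)

pose = (0.0, 0.0, 0.0)
pose = odometry_step(pose, 500, 500)  # a straight half-revolution segment
```

Errors in the wheel radius or track width scale d and dtheta directly, which is why the thesis's parameter correction reduces skew and accumulated drift in the remembered map.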
A strategy to utilise these panoramic images is presented, in which the images are separated into segments and ‘Traffic Light’ zones. The segments display different areas of the image, representing the allocated areas around the wheelchair. The ‘Traffic Light’ zones within the segments are separated into three categories: 1) Red, meaning an obstruction is present around the wheelchair, 2) Yellow, indicating caution as an object is nearby, and 3) Green, meaning there are no objects close to the wheelchair in this segment. Image processing techniques are assembled into a pre-processing strategy, and neural networks are used for intelligent classification of the segmented images into the zone categories. This method provides a wider range of vision than the stereoscopic cameras alone, and also addresses the issue of detecting dynamic obstacles, such as people moving around. A unique combination of the stereoscopic cameras and the spherical vision cameras is then introduced. This combination and system configuration is biologically inspired by the equine vision system: horses inherently have a large vision range, which includes a wide monocular vision range around the body and a binocular vision overlap ahead of the horse. In accordance with this effective vision system, the camera configuration on the TIM smart wheelchair was modelled similarly, and advanced software integration strategies then followed. A method for advanced real-time obstacle avoidance is presented, which adapts algorithms from the research literature, such as the Vector Field Histogram (VFH) and Vector Polar Histogram (VPH) methods, for use with the specified camera configuration. Further improvement upon these algorithms for this application provides safer obstacle avoidance during navigation in unknown environments, with an added emphasis on making automated navigational decisions towards areas with more available free space. 
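
A VFH-style decision of the kind adapted above can be sketched as a polar histogram: bin obstacle points by bearing, mark sectors whose nearest obstacle is too close, and steer toward the free sector nearest the goal direction. The sector count and distance threshold are illustrative, not the thesis's tuned values:

```python
import math

# VFH-style polar-histogram steering sketch. Obstacles are (x, y) points
# in the wheelchair frame; sector count and threshold are assumptions.

N_SECTORS = 36       # 10 degrees per sector (assumed)
BLOCK_DIST_M = 1.0   # obstacles nearer than this block a sector (assumed)

def free_sectors(obstacles):
    """Return the set of sector indices with no obstacle within range."""
    blocked = set()
    for x, y in obstacles:
        if math.hypot(x, y) < BLOCK_DIST_M:
            angle = math.atan2(y, x) % (2 * math.pi)
            blocked.add(int(angle / (2 * math.pi) * N_SECTORS))
    return set(range(N_SECTORS)) - blocked

def steer(obstacles, goal_angle):
    """Pick the free sector whose centre bearing is closest to the goal."""
    def angdiff(a, b):
        return abs((a - b + math.pi) % (2 * math.pi) - math.pi)
    centres = {s: (s + 0.5) * 2 * math.pi / N_SECTORS
               for s in free_sectors(obstacles)}
    return min(centres, key=lambda s: angdiff(centres[s], goal_angle))

# One obstacle dead ahead at 0.5 m; goal straight ahead -> swerve aside.
choice = steer([(0.5, 0.0)], 0.0)
```

Biasing the choice toward the free sector closest to the goal is the simplest version of the thesis's emphasis on steering toward areas with more available free space.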
The speed and manner of obstacle avoidance depend upon the spread of objects in an environment and how close they are to the wheelchair during navigation. Finally, the combination and integration of the automated guidance and obstacle avoidance capabilities of the wheelchair with hands-free control technologies are introduced. The aim of the TIM smart wheelchair system was to provide safe navigation with automated obstacle avoidance in a manner that ultimately executes the user’s intentions for travel. As such, a head-movement control (HMC) device and a brain-computer interface (BCI) device were each separately integrated with the TIM smart wheelchair, providing two new options for hands-free control. Experimental studies were conducted using these two control devices separately to assess the performance of the TIM smart wheelchair, as well as its ability to carry out the user’s navigational intentions safely and effectively. Eight able-bodied participants, four male and four female, with ages ranging from 21 to 56 years, trialled the system. None of the able-bodied participants had any previous experience operating a wheelchair. In addition, two male tetraplegic (C-6 to C-7) participants, aged 20 and 33, also completed the experimental study. Both tetraplegic participants are wheelchair users, so these experiments were of great importance. The same tasks applied to all participants and included navigating obstacle courses using the head-movement control system and the brain-computer interface. Experimental runs were conducted for each control system with automated navigational guidance assistance from TIM, and repeated for some capable participants without that assistance. This process was conducted in two types of obstacle courses, 1) a ‘Static Course’ and 2) a ‘Dynamic Course’, presenting different types of challenges in obstacle avoidance. 
This provided results to assess the performance and safety of the TIM smart wheelchair in a range of environments and situations. Evaluation of the results demonstrated the feasibility and effectiveness of the developed TIM smart wheelchair system. Equipped with a unique camera configuration and reliable obstacle avoidance strategies, the system successfully allowed users to control the wheelchair with research-produced hands-free interface devices and to navigate safely and effectively through challenging environments. The TIM smart wheelchair system is able to adapt to people with various types and levels of physical impairment, and ultimately provides ease of use as well as safety during navigation

    Omnidirectional Stereo Vision for Autonomous Vehicles

    Environment perception with cameras is an important requirement for many applications in autonomous vehicles and robots. This work presents a stereoscopic omnidirectional camera system for autonomous vehicles which resolves the problem of a limited field of view and provides a 360° panoramic view of the environment. We present a new projection model for these cameras and show that the camera setup overcomes major drawbacks of traditional perspective cameras in many applications
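
    The paper's new projection model itself is not reproduced here; as a generic point of reference, a standard equirectangular panorama maps a viewing direction's azimuth to a column and its elevation to a row:

```python
import math

# Generic equirectangular (spherical) projection sketch -- a standard
# 360-degree panorama mapping, not the paper's specific model. Resolution
# is an illustrative assumption.

WIDTH, HEIGHT = 720, 360  # panorama resolution (assumed)

def project(x, y, z):
    """Map a 3D direction in the camera frame to panorama pixel (u, v)."""
    azimuth = math.atan2(y, x)                    # -pi .. pi
    elevation = math.atan2(z, math.hypot(x, y))   # -pi/2 .. pi/2
    u = (azimuth + math.pi) / (2 * math.pi) * WIDTH
    v = (math.pi / 2 - elevation) / math.pi * HEIGHT
    return u, v

# A direction straight along +x lands in the central row of the panorama.
u, v = project(1.0, 0.0, 0.0)
```

    Any omnidirectional model of this kind covers the full horizontal field of view in one image, which is exactly the limitation of perspective cameras the paper targets.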

    Comparing radial and tangential geometries for cylindrical panoramas

    Cameras generally have a field of view only large enough to capture a portion of their surroundings. The goal of immersion is to replace many of your senses with virtual ones, so that the virtual environment feels as real as possible. Panoramic cameras are used to capture the entire 360° view, producing what are known as panoramic images. Virtual reality makes use of these panoramic images to provide a more immersive experience compared to seeing images on a 2D screen. This thesis, which is in the field of computer vision, focuses on establishing a multi-camera geometry to generate a cylindrical panoramic image and successfully implementing it with the cheapest cameras possible. The specific goal of this project is to propose a camera geometry that decreases artifact problems related to parallax in the panoramic image. We present a new approach to cylindrical panoramic imaging from multiple cameras placed evenly around a circle. Instead of looking outward, which is the traditional "radial" configuration, we propose to make the optical axes tangent to the camera circle, a "tangential" configuration. Besides an analysis and comparison of the radial and tangential geometries, we provide an experimental setup with real panoramas obtained in realistic conditions
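
    The radial and tangential layouts compared above differ only in each camera's heading: radially outward versus along the circle's tangent. A minimal sketch of the two configurations (camera count and radius are illustrative):

```python
import math

# Sketch of the two multi-camera layouts: n cameras evenly spaced on a
# circle, optical axes pointing radially outward ("radial") or along the
# circle's tangent ("tangential"). Radius and count are assumptions.

def camera_poses(n, radius, tangential):
    """Return (x, y, heading) for each camera on the circle."""
    poses = []
    for i in range(n):
        a = 2 * math.pi * i / n                    # position angle
        heading = a + (math.pi / 2 if tangential else 0.0)
        poses.append((radius * math.cos(a), radius * math.sin(a), heading))
    return poses

radial = camera_poses(4, 1.0, tangential=False)
tang = camera_poses(4, 1.0, tangential=True)
```

    In the tangential layout, each camera's optical axis grazes the circle rather than radiating from it, which changes where the viewpoints of adjacent cameras overlap and hence how much parallax shows up at the stitching seams.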