
    High-quality Panorama Stitching based on Asymmetric Bidirectional Optical Flow

    In this paper, we propose a panorama stitching algorithm based on asymmetric bidirectional optical flow. The algorithm takes multiple photos captured by fisheye-lens cameras as input and merges them into a high-quality 360-degree spherical panoramic image. For photos taken from a distant perspective, the parallax among them is relatively small, and the resulting panorama can be nearly seamless and undistorted. For photos taken from a close perspective, or with relatively large parallax, a seamless though partially distorted panorama can still be obtained. Moreover, with the help of a Graphics Processing Unit (GPU), the algorithm completes the whole stitching process quickly: typically, it takes less than 30 s to produce a panoramic image of 9000-by-4000 pixels, which makes our panorama stitching algorithm valuable for many real-time applications. Our code is available at https://github.com/MungoMeng/Panorama-OpticalFlow. Comment: Published at the 5th International Conference on Computational Intelligence and Applications (ICCIA 2020).
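The core idea of bidirectional flow blending can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes flow fields in both directions are already available, uses nearest-neighbour warping, and the asymmetric weight `alpha` is a hypothetical parameter controlling how far each image is warped toward the seam.

```python
import numpy as np

def blend_bidirectional(img_a, img_b, flow_ab, flow_ba, alpha):
    """Blend two overlapping images using bidirectional optical flow.

    img_a is warped forward by a fraction `alpha` of the A->B flow,
    img_b is warped back by (1 - alpha) of the B->A flow, and the two
    warped images are cross-faded. Nearest-neighbour sampling keeps
    the sketch short; a real stitcher would interpolate.
    """
    h, w = img_a.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]

    def warp(img, flow, t):
        # Sample each output pixel from its flow-displaced source location.
        sx = np.clip((xs + t * flow[..., 0]).round().astype(int), 0, w - 1)
        sy = np.clip((ys + t * flow[..., 1]).round().astype(int), 0, h - 1)
        return img[sy, sx]

    warped_a = warp(img_a, flow_ab, alpha)        # move A toward the seam
    warped_b = warp(img_b, flow_ba, 1.0 - alpha)  # move B toward the seam
    return (1.0 - alpha) * warped_a + alpha * warped_b
```

With zero flow and `alpha = 0.5` this reduces to a plain 50/50 cross-fade, which is a useful sanity check when testing a flow-based blender.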

    Monocular visual odometry for agricultural robots with fisheye-lens camera(s)

    One of the main challenges in robotics is to develop accurate localization methods that achieve acceptable runtime performance. One of the most common approaches is to use a Global Navigation Satellite System, such as GPS, to localize robots. However, satellite signals are not available at all times in some kinds of environments. The purpose of this dissertation is to develop a localization system for a ground robot. This robot is part of a project called RoMoVi and is intended to perform tasks like crop monitoring and harvesting in steep slope vineyards. These vineyards are located in the Douro region, which is characterized by the presence of high hills. Thus, the context of RoMoVi is not favourable to GPS-based localization systems. Therefore, the main goal of this work is to create a reliable localization system based on vision techniques and low-cost sensors. To do so, a Visual Odometry system will be used. The concept of Visual Odometry is equivalent to wheel odometry, but it has the advantage of not suffering from wheel slip, which is present in this kind of environment due to the harsh terrain conditions. Here, motion is tracked by incrementally computing the homogeneous transformation between camera frames. However, this approach also presents some open issues. Most state-of-the-art methods, especially those that use a monocular camera system, do not estimate motion well under pure rotation; in some of them, motion estimation even degenerates in these situations. Also, computing the motion scale is a difficult task that is widely investigated in this field. This work is intended to address these issues. To do so, fisheye-lens cameras will be used in order to achieve a wide field of view.
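The incremental tracking described above amounts to chaining 4x4 homogeneous transforms: each frame-to-frame estimate is composed onto the running pose. A minimal sketch (not code from the dissertation; function names are illustrative):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and a
    translation vector t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def compose(increments):
    """Chain incremental frame-to-frame transforms T_1, T_2, ... into the
    pose of the latest camera frame expressed in the first frame's
    coordinate system, as incremental visual odometry does."""
    T = np.eye(4)
    for T_i in increments:
        T = T @ T_i  # accumulate; drift also accumulates in practice
    return T
```

Because each increment is multiplied in, per-frame errors accumulate as drift, and for a monocular system the translation part of each `T_i` is only known up to scale, which is exactly the scale-estimation problem the dissertation highlights.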

    Per-Pixel Calibration for RGB-Depth Natural 3D Reconstruction on GPU

    Ever since the Kinect brought low-cost depth cameras into the consumer market, great interest has been invigorated in Red-Green-Blue-Depth (RGBD) sensors. Without calibration, an RGBD camera's horizontal and vertical field of view (FoV) can be used to generate a 3D reconstruction in camera space naturally on the graphics processing unit (GPU); this reconstruction, however, is badly deformed by lens distortions and imperfect depth resolution (depth distortion). Calibrating the camera with a pinhole-camera model and a high-order distortion-removal model requires a lot of calculation in the fragment shader. In order to remove both the lens distortion and the depth distortion while still performing only simple calculations in the GPU fragment shader, a novel per-pixel calibration method with look-up-table-based 3D reconstruction in real time is proposed, using a rail calibration system. This rail calibration system offers the possibility of collecting a practically unlimited number of densely distributed calibration points that can cover all pixels in a sensor, such that not only lens distortions but also depth distortion can be handled by a per-pixel D-to-ZW mapping. Instead of the traditional pinhole camera model, two polynomial mapping models are employed: a two-dimensional high-order polynomial mapping from R/C to XW/YW, respectively, which handles lens distortions, and a per-pixel linear mapping from D to ZW, which handles depth distortion. With only six parameters and three linear equations in the fragment shader, the undistorted 3D world coordinates (XW, YW, ZW) for every single pixel can be generated in real time. The per-pixel calibration method can be applied universally to any RGBD camera. With the alignment of RGB values using a pinhole camera matrix, it can even work on a combination of an arbitrary depth sensor and an arbitrary RGB sensor.
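The "six parameters and three linear equations" per pixel can be sketched as a look-up table of slope/offset pairs, one linear equation in the raw depth D per world axis. This is a hedged reading of the abstract, not the paper's shader code; the coefficient values would come from the rail calibration, and here they are simply arrays:

```python
import numpy as np

def reconstruct(depth, lut):
    """Per-pixel look-up-table 3D reconstruction (illustrative sketch).

    `lut` has shape (rows, cols, 6): six precomputed coefficients per
    pixel, giving three linear equations in the raw depth D:
        XW = aX*D + bX,  YW = aY*D + bY,  ZW = aZ*D + bZ
    Returns an array of shape (rows, cols, 3) of world coordinates.
    """
    a = lut[..., 0:3]  # slopes  (aX, aY, aZ) for each pixel
    b = lut[..., 3:6]  # offsets (bX, bY, bZ) for each pixel
    return a * depth[..., None] + b
```

Evaluating three fused multiply-adds per pixel is trivially cheap in a fragment shader, which is the point of baking the polynomial and depth models into a per-pixel table ahead of time.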

    Real-time indoor assistive localization with mobile omnidirectional vision and cloud GPU acceleration

    In this paper we propose a real-time assistive localization approach to help blind and visually impaired people navigate an indoor environment. The system consists of a mobile vision front end with a portable panoramic lens mounted on a smartphone, and a remote image-feature-based database of the scene on a GPU-enabled server. Compact and effective omnidirectional image features are extracted and represented in the smartphone front end, and then transmitted to the server in the cloud. The features of a short video clip are used to search the database of the indoor environment via image-based indexing to find the location of the current view within the database, which is associated with floor plans of the environment. A median-filter-based multi-frame aggregation strategy is used for single-path modeling, and a 2D multi-frame aggregation strategy based on the candidates' distribution densities is used for multi-path environmental modeling, to provide a final location estimate. To deal with the high computational cost of searching a large database in a realistic navigation application, data-parallelism and task-parallelism properties are identified in the database indexing process, and computation is accelerated using multi-core CPUs and GPUs. A user-friendly HCI, designed particularly for the visually impaired, is implemented on an iPhone; it also supports system configuration and scene modeling for new environments. Experiments on a database of an eight-floor building demonstrate the capacity of the proposed system, with real-time response (14 fps) and robust localization results.
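The median-filter-based multi-frame aggregation for single-path modeling can be illustrated with a sliding median over per-frame location estimates: isolated mismatches are suppressed while the overall trajectory is preserved. This is an illustrative helper, not code from the paper, and the one-dimensional estimates stand in for positions along a single path:

```python
import numpy as np

def aggregate_locations(per_frame_estimates, window=5):
    """Sliding-median aggregation of per-frame location estimates.

    A lone outlier (e.g. one mismatched frame in the video clip) is
    replaced by the median of its neighbourhood, yielding a smoother
    final location estimate. The window is truncated at the ends.
    """
    est = np.asarray(per_frame_estimates, dtype=float)
    half = window // 2
    out = np.empty_like(est)
    for i in range(len(est)):
        lo, hi = max(0, i - half), min(len(est), i + half + 1)
        out[i] = np.median(est[lo:hi])
    return out
```

Each output value depends only on a short window of frames, so the filter adds little latency, which matters for the paper's real-time (14 fps) requirement.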

    Landslide monitoring using mobile device and cloud-based photogrammetry

    PhD Thesis. Landslides are one of the most commonly occurring natural disasters; they can pose a serious threat to human life and society, in addition to causing significant economic loss. Investigation and monitoring of landslides are important tasks in geotechnical engineering for mitigating the hazards created by such phenomena. However, current geomatics approaches used for precise landslide monitoring are largely inappropriate for initial assessment by an engineer over small areas, due to the labour-intensive and costly methods often adopted. Therefore, the development of a cost-effective landslide monitoring system for real-time on-site investigation is essential to aid initial geotechnical interpretation and assessment. In this research, close-range photogrammetric techniques using imagery from a mobile device camera (e.g. a modern smartphone) were investigated as a low-cost, non-contact monitoring approach to on-site landslide investigation. The developed system was implemented on a mobile platform with cloud computing technology to enable the potential for real-time processing. The system comprised the front-end service of a mobile application controlled by the operator and a back-end service employed for photogrammetric measurement and landslide monitoring analysis. In the back-end service, Structure-from-Motion (SfM) photogrammetry was implemented to provide fully automated processing that is user-friendly to non-experts. This was integrated with developed functions used to enhance processing performance and deliver appropriate photogrammetric results for assessing landslide deformation. In order to implement this system with a real-time response, the cloud-based system required data transfer using Internet services via a modern 4G/5G network. Furthermore, the relationship between the number of images and image size was investigated to optimize data processing.
The potential of the developed system for monitoring landslides was investigated at two different real-world UK sites, comprising a natural earth-flow landslide and coastal cliff erosion. These investigations demonstrated that the cloud-based photogrammetric measurement system was capable of providing three-dimensional results to sub-decimetre-level accuracy. The results of the initial assessments for on-site investigation could be effectively presented on the mobile device through visualisation and/or statistical quantification of the landslide changes at a local scale. The Royal Thai Government and Naresuan University provided the scholarship and financial support.

    Assessment of Structure from Motion for Reconnaissance Augmentation and Bandwidth Usage Reduction

    Modern militaries rely upon remote image sensors for real-time intelligence. A typical remote system consists of an unmanned aerial vehicle (UAV) with an attached camera. A video stream is sent from the UAV, through a bandwidth-constrained satellite connection, to an intelligence processing unit. In this research, an upgrade to this method of collection is proposed. A set of synthetic images of a scene captured by a UAV in a virtual environment is sent to a pipeline of computer vision algorithms, collectively known as Structure from Motion. The output of Structure from Motion, a three-dimensional model, is then assessed in a 3D virtual world as a possible replacement for the images from which it was created. This study shows Structure from Motion results from a modifiable spiral flight path and compares the geoaccuracy of each result. A flattening of height is observed, and an automated compensation for this flattening is performed. Each reconstruction is also compressed, and its compressed size is compared with the compressed size of the images from which it was created. A reduction of 49-60% in required space is shown.