    High speed computing of ice thickness equation for ice sheet model

    A two-dimensional (2-D) coupled ice-flow thermodynamics model plays a vital role in visualizing the ice sheet behaviour of the Antarctic region and the climate system. One of the parameters used in this model is ice thickness. The explicit finite difference method (FDM) is used to discretize the ice thickness equation, and the resulting scheme is then implemented with Compute Unified Device Architecture (CUDA) programming on a Graphics Processing Unit (GPU) platform. Demand for GPUs in computational problem solving has been increasing because of their low price and high computational performance. This paper investigates the performance of GPU hardware, supported by CUDA parallel programming, in computing the large sparse system arising from the ice thickness equation of the 2-D ice flow thermodynamics model using multiple cores simultaneously and efficiently. The parallel performance evaluation (PPE) covers execution time, speedup, efficiency, effectiveness and temporal performance
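As a rough illustration of the explicit FDM discretization mentioned in the abstract, the sketch below advances a simplified one-dimensional, diffusion-form thickness equation by one time step. The actual 2-D equation, coefficients, and boundary conditions from the paper are not reproduced here; the constant diffusivity `D`, mass balance `M`, and Dirichlet boundaries are illustrative assumptions.

```python
import numpy as np

def step_thickness(H, D, M, dx, dt):
    """One explicit (FTCS) finite-difference step for a simplified
    1-D diffusion-form thickness equation dH/dt = D * d2H/dx2 + M.
    H: thickness profile, D: constant diffusivity, M: mass balance,
    dx, dt: grid spacing and time step. Endpoint values are held
    fixed (Dirichlet boundaries) and thickness is clipped at zero."""
    H_new = H.copy()
    # centred second difference on the interior points
    d2H = (H[2:] - 2.0 * H[1:-1] + H[:-2]) / dx**2
    H_new[1:-1] = H[1:-1] + dt * (D * d2H + M[1:-1])
    return np.maximum(H_new, 0.0)  # ice thickness cannot be negative
```

The explicit scheme is only stable for dt <= dx**2 / (2 * D), and each interior point is updated independently of the others, which is exactly the structure that maps naturally onto one CUDA thread per grid point in a GPU implementation.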

    Stereo Matching Algorithm Based on Illumination Control to Improve the Accuracy

    High-Level Synthesis: Productivity, Performance, and Software Constraints

    Visual and Camera Sensors

    This book includes 13 papers published in Special Issue ("Visual and Camera Sensors") of the journal Sensors. The goal of this Special Issue was to invite high-quality, state-of-the-art research papers dealing with challenging issues in visual and camera sensors

    Design of Binocular Stereo Vision System Via CNN-based Stereo Matching Algorithm

    Stereo vision is one of the representative technologies of the 3D camera, using multiple cameras to perceive depth information in three-dimensional space. Binocular stereo has become the most widely applied method, so in this thesis we design a binocular stereo vision system based on an adjustable narrow-baseline stereo camera that simultaneously captures the left and right images of a stereo pair. Camera calibration and rectification are first performed to obtain rectified stereo pairs, which serve as the input to the subsequent step: searching for corresponding points between the left and right images. The stereo matching algorithm resolves this correspondence problem and plays a crucial part in our system, producing disparity maps from which depths are predicted via the triangulation principle. We focus on the first stage of this algorithm, proposing a CNN-based approach to computing the matching cost by measuring the similarity between two image patches. Two network architectures are presented, both based on the siamese network. The fast network employs the cosine metric to compute the similarity at a satisfactory accuracy and processing speed. The slow network instead learns a new metric, making the disparity prediction slightly more precise, but at the cost of much longer processing time and more parameters. The output of either network is taken as the initial matching cost and is followed by a series of post-processing methods, including cross-based cost aggregation and semi-global cost aggregation. Using the Winner-Take-All (WTA) strategy, a raw disparity map is obtained and then refined further through interpolation and image filtering. The networks are trained and validated on three standard stereo datasets: Middlebury, KITTI 2012, and KITTI 2015. Comparative tests of the CNN-based methods against the census transform demonstrate that the former outperform the latter on these datasets. The algorithm based on the fast network is adopted in our system. To evaluate the performance of a binocular stereo vision system, two types of error criteria are proposed to determine the proper working-distance range under diverse baseline lengths
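As a rough illustration of the fast network's cosine-metric matching cost followed by the WTA step, the sketch below assumes the per-pixel siamese CNN feature maps are already computed. The function names, the dense cost-volume layout, and the border handling are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def matching_cost_cosine(feat_left, feat_right, max_disp):
    """Build a cost volume of shape (H, W, max_disp) from per-pixel
    feature maps of shape (H, W, C), using cost = 1 - cosine
    similarity. Left pixel x is compared with right pixel x - d."""
    h, w, c = feat_left.shape
    # normalise features so the dot product equals cosine similarity
    fl = feat_left / (np.linalg.norm(feat_left, axis=2, keepdims=True) + 1e-8)
    fr = feat_right / (np.linalg.norm(feat_right, axis=2, keepdims=True) + 1e-8)
    cost = np.ones((h, w, max_disp))  # out-of-range shifts keep max cost
    for d in range(max_disp):
        sim = np.sum(fl[:, d:, :] * fr[:, : w - d, :], axis=2)
        cost[:, d:, d] = 1.0 - sim
    return cost

def wta_disparity(cost):
    """Winner-Take-All: at each pixel pick the disparity of minimum cost."""
    return np.argmin(cost, axis=2)
```

Post-processing stages such as cross-based and semi-global cost aggregation would operate on this cost volume before the WTA step; they are omitted here for brevity.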

    Performance Analysis between Basic Block Matching and Dynamic Programming of Stereo Matching Algorithm

    One of the key steps of stereo vision algorithms is the implementation of the disparity map, which is generally used to decorrelate data and recover the 3D scene structure of stereo image pairs. However, the limited accuracy of the resulting disparity map remains a challenging problem in stereo vision. Thus, various stereo matching methods have been developed and widely investigated for computing the disparity map of stereo image pairs, including the Dynamic Programming (DP) and Basic Block Matching (BBM) methods. This paper presents an evaluation of the DP and BBM stereo matching methods in terms of disparity map accuracy, noise, and smoothness. In this research, BBM uses the Sum of Absolute Differences (SAD) as a basic algorithm to determine corresponding points between the target and reference images, whereas DP is used as a global optimization approach. A performance analysis with graphical results from both methods is also presented, showing that both are applicable to many stereo vision applications
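A minimal, illustrative sketch of the BBM approach described above (SAD costs over a square window followed by a Winner-Take-All choice) is shown below. The window size, disparity range, and border handling are simplifying assumptions, not the settings used in the paper.

```python
import numpy as np

def sad_disparity(left, right, block=3, max_disp=8):
    """Basic Block Matching with the Sum of Absolute Differences.
    left/right are 2-D grayscale arrays. For each pixel, the
    disparity minimising the SAD over a block x block window is
    selected (a Winner-Take-All choice). Border pixels and shifts
    that run off the image keep an infinite cost."""
    h, w = left.shape
    r = block // 2
    cost = np.full((h, w, max_disp + 1), np.inf)
    for d in range(max_disp + 1):
        for y in range(r, h - r):
            for x in range(r + d, w - r):
                lw = left[y - r : y + r + 1, x - r : x + r + 1]
                rw = right[y - r : y + r + 1, x - d - r : x - d + r + 1]
                cost[y, x, d] = np.abs(lw.astype(int) - rw.astype(int)).sum()
    return np.argmin(cost, axis=2)
```

A DP-based matcher would instead optimise the disparities of a whole scanline jointly, typically with a smoothness penalty between neighbouring pixels, rather than picking each pixel's minimum-cost disparity independently as this sketch does.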

    Literature Survey On Stereo Vision Disparity Map Algorithms

    This paper presents a literature survey on existing disparity map algorithms. It focuses on the four main stages of processing proposed by Scharstein and Szeliski in their 2002 taxonomy and evaluation of dense two-frame stereo correspondence algorithms. To assist future researchers in developing their own stereo matching algorithms, a summary of the existing algorithms developed for each stage of processing is also provided. The survey also notes the implementation of previous software-based and hardware-based algorithms. Generally, the main processing module for a software-based implementation uses only a central processing unit. By contrast, a hardware-based implementation requires one or more additional processors for its processing module, such as a graphics processing unit or a field-programmable gate array. This literature survey also presents a method of qualitative measurement that is widely used by researchers in the area of stereo vision disparity mapping