
    Analysis of Edge Detection Technique for Hardware Realization

    Edge detection plays an important role in image processing and computer vision applications. Different edge detection techniques with distinct criteria have been proposed in the literature, so an evaluation of these techniques is essential to measure their effectiveness over a wide range of natural images and applications. Several performance indices for the quantitative evaluation of edge detectors may be found in the literature, among which Edge Mis-Match error (EMM), F-Measure (FM), Figure of Merit (FOM) and the Precision and Recall (PR) curve are the most effective. Experiments on several databases containing a wide range of natural and synthetic images illustrate the effectiveness of the Canny edge detector over other detectors under varying conditions. Moreover, due to the ever-increasing demand for high speed in time-critical image processing tasks, we have implemented an efficient hardware architecture for the Canny edge detector in VHDL. The implementation adopts the parallel architecture of a Field Programmable Gate Array (FPGA) to accelerate edge detection via Canny's algorithm. In this dissertation, we have simulated the considered architecture in the Modelsim 10.4a student edition to demonstrate the potential of parallel processing for edge detection. This analysis and implementation may encourage and serve as a basic building block for several complex computer vision applications.
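The F-Measure and Precision/Recall indices mentioned in this abstract reduce to simple pixel counts once detected edges are matched to ground truth. A minimal sketch (pixel-perfect matching assumed; published benchmarks usually allow a small spatial tolerance, and the function name is illustrative):

```python
# Sketch: Precision, Recall, and F-measure of a binary edge map against a
# ground-truth edge map, counting true/false positives and false negatives.
def edge_pr_f(detected, truth):
    """detected, truth: same-size 2D lists of 0/1 edge labels."""
    tp = fp = fn = 0
    for drow, trow in zip(detected, truth):
        for d, t in zip(drow, trow):
            if d and t:
                tp += 1          # detected edge pixel that is real
            elif d:
                fp += 1          # spurious detection
            elif t:
                fn += 1          # missed edge pixel
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f
```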

    A Review on Implementation of Image Processing Algorithms using Hardware Software Co-simulation

    Edge detection is a necessary tool for extracting information for further image processing operations. Many computer vision applications use edge detectors as primary operators before high-level image processing. Several algorithms based on a derivative approach are available for edge detection; Roberts, Prewitt, Sobel and Canny are some examples of edge detection methods. In this project, edge detection algorithms are implemented on an FPGA board. The proposed architecture offers an alternative through a graphical user interface developed by combining MATLAB, Simulink and the XSG tool. A prototype of an Application Specific Integrated Circuit (ASIC) can be obtained by FPGA-based implementation of an edge detection algorithm. A comparative analysis of software and hardware is carried out. Instead of the traditional approach to programming FPGAs, Xilinx System Generator (XSG) is used for programming and modeling the FPGA. XSG has an integrated design flow that moves directly from the Simulink design environment to the bit stream file needed to program the FPGA. The advantages of using an FPGA are that power-efficient circuits can be fabricated, and that it offers large memory and superior parallel computing capacity. With the use of an FPGA, the design procedure becomes more flexible. DOI: 10.17762/ijritcc2321-8169.15010

    Blockwise Transform Image Coding Enhancement and Edge Detection

    The goal of this thesis is high quality image coding, enhancement and edge detection. A unified approach using novel fast transforms is developed to achieve all three objectives. The requirements are low bit rate, low implementation complexity and parallel processing. The last requirement is achieved by processing the image in small blocks such that all blocks can be processed simultaneously. This is similar to biological vision. A major issue is to minimize the resulting block effects. This is done by using proper transforms and possibly an overlap-save technique. The bit rate in image coding is minimized by developing new results in optimal adaptive multistage transform coding. Newly developed fast trigonometric transforms are also utilized and compared for transform coding, image enhancement and edge detection. Both image enhancement and edge detection involve generalised bandpass filtering with fast transforms. The algorithms have been developed with special attention to the properties of biological vision systems.
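The blockwise processing that enables the parallelism described above can be sketched as follows (illustrative Python, not from the thesis; image dimensions are assumed to be exact multiples of the block size). Because each block is transformed independently, all blocks could be dispatched to parallel workers:

```python
# Sketch: split an image into square blocks, apply a per-block transform
# (e.g. a 2-D DCT in transform coding), and reassemble the result.
def process_blocks(image, bsize, transform):
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, bsize):
        for bx in range(0, w, bsize):
            # Extract one bsize x bsize block.
            block = [row[bx:bx + bsize] for row in image[by:by + bsize]]
            tblock = transform(block)   # independent per-block work
            for i in range(bsize):
                out[by + i][bx:bx + bsize] = tblock[i]
    return out
```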

    Development of an FPGA-based image processing intellectual property core

    Traditional image processing algorithms are sequential in nature. When these algorithms are implemented in a real-time system, the response time is high. On an embedded platform, such algorithms consume more power because of the larger number of clock cycles required to execute them. With the advent of Field Programmable Gate Arrays (FPGA), massively parallel architectures can be developed to accelerate the execution speed of several image processing algorithms. In this work, such a parallel architecture is proposed to accelerate the Sobel edge detection algorithm. To simulate this architecture, a model of a video acquisition system is developed. This model converts the incoming frames to digital composite video signals which can be processed by the edge detection architecture. External software developed in MATLAB converts the frames into hexadecimal format and feeds the video acquisition model. The output of the edge detection processor is a digital composite signal. A display module converts the digital composite video signals into hexadecimal format, and then, with the help of an external MATLAB program, the original image is reconstructed. The results compare the sequential and parallel environments and show significant improvements for FPGA-based implementations. The Modelsim simulation of the Sobel edge detection algorithm for a 256 × 256 frame gave a result in 0.019 seconds at a clock speed of 10 MHz, whereas a MATLAB-based simulation took 0.22 seconds to finish this operation, which is a significant acceleration. Moreover, a new software simulation platform was developed as part of this project, which lets the developer supply an image as input and reproduces the output in the same format, while all the background processing is carried out in VHDL.
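As a software reference for the Sobel operator accelerated above (illustrative Python, not the VHDL architecture): each pixel is convolved with horizontal and vertical 3x3 kernels, and the two responses are combined into a gradient magnitude. Border pixels are left at zero here for simplicity.

```python
# Sobel kernels for horizontal (GX) and vertical (GY) gradients.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """img: 2D list of intensities; returns the gradient magnitude map."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 3x3 neighborhood convolutions with both kernels.
            gx = sum(GX[i][j] * img[y - 1 + i][x - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(GY[i][j] * img[y - 1 + i][x - 1 + j]
                     for i in range(3) for j in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

In the FPGA version each pixel's two convolutions are independent, which is exactly what the parallel architecture exploits.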

    Generic Techniques in General Purpose GPU Programming with Applications to Ant Colony and Image Processing Algorithms

    In 2006 NVIDIA introduced a new unified GPU architecture facilitating general-purpose computation on the GPU. The following year NVIDIA introduced CUDA, a parallel programming architecture for developing general purpose applications for direct execution on the new unified GPU. CUDA exposes the GPU's massively parallel architecture so that parallel code can be written to execute much faster than its sequential counterpart. Although CUDA abstracts the underlying architecture, fully utilising and scheduling the GPU is non-trivial and has given rise to a new active area of research. Due to the inherent complexities pertaining to GPU development, in this thesis we explore and find efficient parallel mappings of existing and new parallel algorithms on the GPU using NVIDIA CUDA. We place particular emphasis on metaheuristics, image processing and designing reusable techniques and mappings that can be applied to other problems and domains. We begin by focusing on Ant Colony Optimisation (ACO), a nature-inspired heuristic approach for solving optimisation problems. We present a versatile improved data-parallel approach for solving the Travelling Salesman Problem using ACO, resulting in significant speedups. By extending our initial work, we show how existing mappings of ACO on the GPU are unable to compete against their sequential counterpart when common CPU optimisation strategies are employed, and detail three distinct candidate set parallelisation strategies for execution on the GPU. By further extending our data-parallel approach we present the first implementation of an ACO-based edge detection algorithm on the GPU to reduce the execution time and improve the viability of ACO-based edge detection. We finish by presenting a new color edge detection technique using the volume of a pixel in the HSI color space, along with a parallel GPU implementation that is able to withstand greater levels of noise than existing algorithms.
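The HSI decomposition underlying the color edge detector above can be sketched with the standard RGB-to-HSI formulas (the thesis's specific pixel "volume" measure is not reproduced here; this is only the conventional conversion):

```python
import math

def rgb_to_hsi(r, g, b):
    """r, g, b in [0, 255]; returns (H in degrees, S in [0, 1], I in [0, 255])."""
    total = r + g + b
    i = total / 3.0                                   # intensity: channel mean
    s = 0.0 if total == 0 else 1.0 - 3.0 * min(r, g, b) / total  # saturation
    # Hue from the standard arccos formulation.
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:
        h = 360.0 - h
    return h, s, i
```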

    A Relaxation Scheme for Mesh Locality in Computer Vision.

    Parallel processing has been considered the key to building the computer systems of the future and has become a mainstream subject in Computer Science. Computer Vision applications are so computationally intensive that they require parallel approaches to exploit their intrinsic parallelism. This research addresses this problem for low-level and intermediate-level vision problems. The contributions of this dissertation are a unified scheme based on probabilistic relaxation labeling that captures the localities of image data, and the use of this scheme to develop efficient parallel algorithms for Computer Vision problems. We begin by investigating the problem of skeletonization. The technique of pattern matching, which exhausts all the possible interaction patterns between a pixel and its neighboring pixels, captures the locality of this problem and leads to an efficient One-pass Parallel Asymmetric Thinning Algorithm (OPATA8). The use of 8-distance in this algorithm, or chessboard distance, not only improves the quality of the resulting skeletons but also improves the efficiency of the computation. This new algorithm plays an important role in a hierarchical route planning system that extracts high-level topological information from cross-country mobility maps, which greatly speeds up route searching over large areas. We generalize the neighborhood interaction description method to include more complicated applications such as edge detection and image restoration. The proposed probabilistic relaxation labeling scheme exploits parallelism by discovering local interactions in neighboring areas and by describing them effectively. The scheme consists of a transformation function and a dictionary construction method. The non-linear transformation function is derived from Markov Random Field theory and efficiently combines evidence from neighborhood interactions. The dictionary construction method provides an efficient way to encode these localities. A case study applies the scheme to the problem of edge detection. The relaxation step of this edge detection algorithm greatly reduces noise effects, yields better edge localization at features such as line ends and corners, and plays a crucial role in refining edge outputs. Experiments on both synthetic and natural images show that our algorithm converges quickly and is robust in noisy environments.
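The 8-distance (chessboard, or Chebyshev, distance) used by OPATA8 is simply the maximum of the coordinate differences, so all eight neighbors of a pixel lie at distance 1. A one-line sketch:

```python
# Chessboard (Chebyshev) distance between two pixels: the number of
# king moves between them on a grid. Under this metric a pixel's
# diagonal neighbors are at distance 1, unlike the 4-distance metric.
def chessboard(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))
```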

    Edge detection in a pixel array circuit

    Edge detection is commonly used to extract important features of an image while providing compression of the image data, and thus it is one of the most important operations in image processing systems. Such detection is usually implemented in a system of data processors with input image signals provided by a camera. However, in many applications in which the constraints on speed, power and hardware size are critical, it is favorable to implement edge detection in a smart sensor, i.e. a pixel array circuit, in which each pixel operates in parallel with the others to perform signal acquisition and processing. In this thesis, the design of a CMOS pixel array for edge detection is presented. The work of this thesis covers two aspects: detection algorithms and circuit implementations. A new version of the edge detection algorithm has been proposed, aiming at facilitating its implementation in an integrated circuit with simple inter-pixel connections and processing units in the pixels. It has been demonstrated that the proposed algorithm can yield detection quality as good as the most commonly used detection algorithms. To implement this algorithm, a pixel array circuit has been designed with simple current-mode modules and logic gates in each pixel. The research effort on the circuit side is on solving the problems of transistor mismatch, which makes identically-designed units behave non-uniformly, and of charge injection in the current-mode circuits. With special compensation schemes implemented in the pixel circuit, the uniformity of the processing units integrated in the pixel array increased more than ten times, which significantly improves the quality of the operations in the detection process. The pixel array circuit can be easily implemented with a standard CMOS technology.

    Analysis of an Ant Colony Optimization Implementation for Edge Detection on the GPU

    ABSTRACT: Edge detection is the process of extracting edge information from an image and is therefore decisive for understanding the image content. The GPU (Graphics Processing Unit) is a specialized processor for graphics processing on a computer. NVIDIA has developed a technology called CUDA (Compute Unified Device Architecture), an architecture of hardware and software for managing parallel computation on the GPU. The Ant Colony Optimization (ACO) algorithm is an optimization algorithm inspired by the behavior of ants searching for the shortest route to food. In this algorithm, a number of ants act as agents that update a pheromone matrix in order to search the solution space. In this research, the data is processed in two stages: preprocessing of the input image and the edge detection process. In the preprocessing stage, the image is prepared so that it can be processed in the next stage: salt & pepper noise and Gaussian noise are added to the RGB image, which is then converted into a grayscale image. After the RGB image has been converted to grayscale, the edge detection process is executed using the Ant Colony Optimization (ACO) algorithm, running on both the CPU and the GPU. The analysis showed no significant difference in the quality of the edge detection results produced on the CPU and the GPU, while the computation time on the GPU was faster than on the CPU, with speedups of 1.24 for a 128x128 pixel image, 1.42 for a 256x256 pixel image, and 1.54 for a 512x512 pixel image. Keywords: Edge detection, Ant Colony Optimization (ACO), GPU, CUDA
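The pheromone-matrix update at the heart of ACO-based edge detection can be sketched as follows (the evaporation rate and deposit rule here are illustrative, not the thesis's exact parameters; in edge detection the deposit is typically proportional to a local intensity-variation measure at pixels the ants visited):

```python
# Sketch: one ACO pheromone update step over a 2D pheromone matrix.
# tau(t+1) = (1 - rho) * tau(t) + deposit, applied per pixel.
def update_pheromone(tau, deposits, rho=0.1):
    """tau: 2D pheromone matrix; deposits: same-size matrix of new deposits;
    rho: evaporation rate in (0, 1)."""
    h, w = len(tau), len(tau[0])
    return [[(1.0 - rho) * tau[y][x] + deposits[y][x] for x in range(w)]
            for y in range(h)]
```

Each pixel's update is independent of the others, which is why this step maps naturally onto one GPU thread per pixel.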