2,446 research outputs found

    Improving the Performance of Thinning Algorithms with Directed Rooted Acyclic Graphs

    In this paper we propose a strategy to optimize the performance of thinning algorithms. The solution combines three proven strategies for binary-image neighborhood exploration: modeling the problem with an optimal decision tree, reusing pixels computed in the previous step of the algorithm, and reducing the code footprint by means of Directed Rooted Acyclic Graphs. A complete and open-source benchmarking suite is also provided. Experimental results confirm that the proposed algorithms clearly outperform classical implementations.
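The core of such decision-tree optimizations is that a 3×3 neighborhood has only 256 configurations, so the deletion rule can be precomputed once into a lookup table. The sketch below illustrates that idea only; the deletion rule shown is a hypothetical toy condition, not the paper's actual masks, and the function names are illustrative:

```python
import numpy as np

# Offsets of the 8 neighbours of (y, x), clockwise from north.
NEIGHBOUR_OFFSETS = [(-1, 0), (-1, 1), (0, 1), (1, 1),
                     (1, 0), (1, -1), (0, -1), (-1, -1)]

def neighbourhood_code(img, y, x):
    """Pack the 8 neighbours of (y, x) into an 8-bit code."""
    code = 0
    for bit, (dy, dx) in enumerate(NEIGHBOUR_OFFSETS):
        if img[y + dy, x + dx]:
            code |= 1 << bit
    return code

def build_lut(deletable):
    """Evaluate the deletion rule once for each of the 256 configurations."""
    return np.array([deletable(c) for c in range(256)], dtype=bool)

# Hypothetical toy rule standing in for a real thinning condition:
# a pixel is deletable when it has between 2 and 6 black neighbours.
lut = build_lut(lambda c: 2 <= bin(c).count("1") <= 6)
```

At run time each pixel then costs one code computation and one table lookup; optimal decision trees and DRAGs go further by sharing comparisons between conditions and between neighboring pixels, which is where the paper's speed-up comes from.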

    A Relaxation Scheme for Mesh Locality in Computer Vision.

    Parallel processing has long been considered the key to building the computer systems of the future and has become a mainstream subject in Computer Science. Computer Vision applications are computationally intensive and require parallel approaches to exploit their intrinsic parallelism. This research addresses this problem for low-level and intermediate-level vision problems. The contributions of this dissertation are a unified scheme based on probabilistic relaxation labeling that captures localities of image data, and the use of this scheme to develop efficient parallel algorithms for Computer Vision problems. We begin by investigating the problem of skeletonization. A pattern-matching technique that exhausts all possible interaction patterns between a pixel and its neighboring pixels captures the locality of this problem and leads to an efficient One-pass Parallel Asymmetric Thinning Algorithm (OPATA8). The use of 8-distance, or chessboard distance, in this algorithm not only improves the quality of the resulting skeletons but also improves the efficiency of the computation. This new algorithm plays an important role in a hierarchical route-planning system that extracts high-level topological information from cross-country mobility maps, which greatly speeds up route searching over large areas. We generalize the neighborhood interaction description method to more complicated applications such as edge detection and image restoration. The proposed probabilistic relaxation labeling scheme exploits parallelism by discovering local interactions in neighboring areas and describing them effectively. The scheme consists of a transformation function and a dictionary construction method. The non-linear transformation function is derived from Markov Random Field theory and efficiently combines evidence from neighborhood interactions. The dictionary construction method provides an efficient way to encode these localities.
A case study applies the scheme to the problem of edge detection. The relaxation step of this edge-detection algorithm greatly reduces noise effects, achieves better edge localization at line ends and corners, and plays a crucial role in refining edge outputs. Experiments on both synthetic and natural images show that our algorithm converges quickly and is robust in noisy environments.
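The 8-distance (chessboard) metric mentioned above is the Chebyshev distance: one step may move diagonally, so it matches the 8-neighbour connectivity used in thinning. A minimal sketch contrasting it with the 4-distance (city-block) metric:

```python
def chessboard(p, q):
    # 8-distance: a single step may be diagonal, so the
    # distance is the larger of the two axis gaps.
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def cityblock(p, q):
    # 4-distance: steps are only horizontal or vertical,
    # so the axis gaps add up.
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

print(chessboard((0, 0), (3, 4)))  # 4
print(cityblock((0, 0), (3, 4)))   # 7
```

Under 8-distance a diagonal neighbour is as close as an orthogonal one, which is why deletion tests over the full 8-neighbourhood yield thinner, better-connected skeletons than 4-neighbour formulations.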

    Towards a Software Prototype Supporting Automatic Recognition of Sketched Business Process Models

    This paper presents a prototype that implements an approach for recognizing the structure of business process models sketched on paper or whiteboards, in order to create digital versions of those models. We explain the steps that lead from ordinary photos of sketched business process models to their digital versions. We modified existing sketch-recognition approaches to fulfill the needs of sketched business process model recognition. To this end, a dataset was generated, which in turn is used to train different classifiers for the different shapes occurring in business process models.

    A parallel thinning method based on image marking

    2000-2001 > Academic research: refereed > Publication in refereed journal. Version of Record. Published.

    Single-iteration image skeletonization based on OPTA and Zhang-Suen algorithms

    This paper addresses the skeletonization of binary images. A mathematical model and the OPCA (One-Pass Combination Algorithm) for single-sub-iteration skeletonization, based on combining and simplifying the single-sub-iteration OPTA model and the two-sub-iteration Zhang–Suen (ZS) model, are proposed for constructing extremely thin connected skeletons of binary images with low computational complexity. Experiments show that the OPCA algorithm increases skeletonization speed and reduces the redundancy of the bonds between skeleton pixels several times over.

    Automatic analysis of electronic drawings using neural network

    Neural network techniques have proved to be a powerful tool in pattern recognition. They capture associations or discover regularities within a set of patterns where the types, number of variables, or diversity of the data are very great; where the relationships between variables are vaguely understood; or where the relationships are difficult to describe adequately with conventional approaches. In this dissertation, which concerns research and system design aimed at recognizing digital gate symbols and characters in electronic drawings, we propose: (1) a modified Kohonen neural network with shift-invariant pattern-recognition capability; (2) an effective approach to optimizing the structure of the back-propagation neural network; (3) candidate-searching and pre-processing techniques that facilitate the automatic analysis of electronic drawings. Analysis and system performance show that when the shift of an image pattern is not large and the rotation is only by n×90° (n = 1, 2, 3), the modified Kohonen neural network is superior to the conventional Kohonen neural network in terms of shift-invariance and limited rotation-invariance. As a result, the dimensionality of the Kohonen layer can be reduced significantly compared with the conventional one for the same performance. Moreover, the size of the subsequent neural network, say a back-propagation feed-forward neural network, can be decreased dramatically. There are no known rules for specifying the number of nodes in the hidden layers of a feed-forward neural network. Increasing the size of the hidden layer usually improves recognition accuracy, while decreasing it generally improves generalization capability. We determine the optimal size by simulation to attain a balance between accuracy and generalization. This optimized back-propagation neural network generally outperforms conventional ones designed by experience.
To further reduce computational complexity and save calculation time in the neural networks, pre-processing techniques have been developed to remove long circuit lines from the electronic drawings, which makes candidate searching more effective.
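The simulation-based sizing described above amounts to a small grid search over candidate hidden-layer widths. The harness below is a hypothetical sketch of that procedure; `evaluate` stands in for training a network of the given width and returning its validation accuracy, and the mock scores are illustrative only:

```python
def select_hidden_size(candidate_sizes, evaluate):
    """Pick the hidden-layer width with the best validation score.

    `evaluate` is a stand-in for training a back-propagation network
    of the given width and scoring it on held-out data.
    """
    scores = {n: evaluate(n) for n in candidate_sizes}
    return max(scores, key=scores.get)

# Mock validation curve: accuracy rises with capacity, then falls
# once the larger network starts to overfit.
mock_curve = {8: 0.81, 16: 0.90, 32: 0.93, 64: 0.89, 128: 0.85}
best = select_hidden_size(mock_curve, mock_curve.get)
print(best)  # 32
```

The chosen width sits at the knee of the accuracy/generalization trade-off the abstract describes: larger layers fit the training set better, smaller ones generalize better.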

    A parallel algorithm for skeletonizing images by using spiking neural P systems

    Skeletonization is a common type of transformation within image analysis. In general, an image B is a skeleton of a black-and-white image A if B is made of fewer black pixels than A while preserving A's topological properties and, in some sense, its meaning. In this paper, we use spiking neural P systems (a computational model in the framework of membrane computing) to solve the skeletonization problem. Based on such devices, parallel software has been implemented on the Graphics Processing Unit (GPU) architecture. Some possible real-world applications and new lines for future research are also discussed. Grants: Ministerio de Ciencia e Innovación TIN2008-04487-E; Ministerio de Ciencia e Innovación TIN-2009-13192; Junta de Andalucía P08-TIC-0420.
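The definition above (fewer black pixels, topology preserved) suggests a simple necessary check on a candidate skeleton. The sketch below tests only one topological property, the number of 8-connected components, so it is a plausibility filter rather than a full verification; function names are illustrative:

```python
def black_count(img):
    """Number of foreground (1) pixels in a list-of-lists image."""
    return sum(row.count(1) for row in img)

def component_count(img):
    """Count 8-connected foreground components with an iterative flood fill."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    n = 0
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] == 1 and not seen[sy][sx]:
                n += 1
                stack = [(sy, sx)]
                seen[sy][sx] = True
                while stack:
                    y, x = stack.pop()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w and \
                               img[ny][nx] == 1 and not seen[ny][nx]:
                                seen[ny][nx] = True
                                stack.append((ny, nx))
    return n

def plausible_skeleton(a, b):
    """Necessary (not sufficient) conditions from the definition above."""
    return black_count(b) < black_count(a) and \
           component_count(b) == component_count(a)
```

A complete check would also compare hole counts (the other half of 2D topology), which a component count alone does not capture.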

    Image skeletonization based on a combination of single- and two-sub-iteration models

    This paper is focused on the skeletonization of binary images. Skeletonization makes it possible to represent a binary image as a set of thin lines whose relative position, size and shape adequately describe the size, shape and spatial orientation of the corresponding image areas. Many skeletonization methods exist. Iterative parallel algorithms provide high-quality skeletons; they can be implemented using one or more sub-iterations. In each iteration, redundant pixels whose neighborhoods meet certain conditions are removed layer by layer along the contour, until only the skeleton remains. Many single-sub-iteration algorithms suffer from broken connectivity and the formation of excess skeleton fragments. The highest-quality skeletons are formed by the well-known single-sub-iteration OPTA algorithm, which is based on 18 binary masks, but it is sensitive to contour noise and has high computational complexity. The two-sub-iteration algorithm of Zhang and Suen (ZS), based on 6 logical conditions, is widely used owing to its relative simplicity, but it blurs diagonal lines of thickness 2 pixels and deletes squares of size 2×2 pixels. Moreover, neither of the algorithms mentioned above achieves unit pixel thickness of the skeleton lines (many non-node pixels have more than two neighbors). A mathematical model and the OPCA (One-Pass Combination Algorithm), based on a combination and simplification of the single-sub-iteration OPTA and the two-sub-iteration ZS, are proposed for constructing extremely thin connected skeletons of binary images with low computational complexity. The model and algorithm also accelerate skeletonization, enhance recoverability of the original image from the skeleton, and reduce the redundancy of the bonds between skeleton elements.
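The ZS flaws described in this abstract are easy to reproduce. Below is a textbook formulation of the Zhang–Suen algorithm (not the proposed OPCA): in each sub-iteration a pixel is deleted when it has 2–6 black neighbours, exactly one 0→1 transition around its neighbourhood, and two boundary conditions hold. Running it on an isolated 2×2 square demonstrates the documented loss, since all four pixels satisfy the first sub-iteration simultaneously:

```python
import numpy as np

def zhang_suen(img):
    """Textbook Zhang-Suen thinning; img is a 0/1 uint8 array."""
    img = img.astype(np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            p = np.pad(img, 1)
            deletions = []
            for y in range(1, p.shape[0] - 1):
                for x in range(1, p.shape[1] - 1):
                    if not p[y, x]:
                        continue
                    # Neighbours P2..P9, clockwise from north.
                    n = [p[y-1, x], p[y-1, x+1], p[y, x+1], p[y+1, x+1],
                         p[y+1, x], p[y+1, x-1], p[y, x-1], p[y-1, x-1]]
                    b = sum(n)  # black neighbours B(P1)
                    # A(P1): 0->1 transitions in the circular sequence P2..P9.
                    a = sum(n[i] == 0 and n[(i + 1) % 8] == 1 for i in range(8))
                    if step == 0:  # south-east boundary conditions
                        c = n[0]*n[2]*n[4] == 0 and n[2]*n[4]*n[6] == 0
                    else:          # north-west boundary conditions
                        c = n[0]*n[2]*n[6] == 0 and n[0]*n[4]*n[6] == 0
                    if 2 <= b <= 6 and a == 1 and c:
                        deletions.append((y - 1, x - 1))
            if deletions:
                changed = True
                for y, x in deletions:
                    img[y, x] = 0
    return img

# The 2x2 loss: an isolated 2x2 black square is erased entirely.
square = np.zeros((4, 4), dtype=np.uint8)
square[1:3, 1:3] = 1
print(zhang_suen(square).sum())  # 0
```

A one-pixel-thick line, by contrast, survives intact (its interior pixels have two 0→1 transitions, so they are never deleted), which is why combining ZS with OPTA-style masks, as OPCA does, targets exactly these corner cases.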

    Two-step skeletonization of binary images based on the Zhang-Suen model and a generating mask

    The aim of this work is to limit excessive thinning and increase the resistance to contour noise of skeletons computed from binary shapes of arbitrary form, while maintaining a high skeletonization rate. A skeleton is a set of thin lines whose relative position, size and shape convey information about the size, shape and spatial orientation of the corresponding homogeneous region of the image. To ensure resistance to contour noise, skeletonization algorithms are built in several steps. The Zhang-Suen algorithm is widely known for its high-quality skeletons and average performance; its disadvantages are the blurring of diagonal lines of thickness 2 pixels and the complete disappearance of 2×2-pixel patterns. To overcome these, this paper proposes a mathematical model that complements the Zhang-Suen model with a generating mask and two logical conditions for evaluating its elements.