
    Recognition of occluded traffic signs based on two-dimensional linear discriminant analysis

    Traffic sign recognition based on digital image analysis is becoming increasingly popular. The main difficulty in visual recognition of traffic signs lies in the often adverse conditions of image acquisition. In this paper we present a solution to the problem of sign occlusion. The presented method belongs to the group of appearance-based approaches, employing template matching in a reduced feature space obtained by Linear Discriminant Analysis. In contrast to commercial systems installed in higher-class cars, which detect only round speed-limit and no-overtaking signs, the method handles all types of signs regardless of their shape and color. Finally, we present experiments performed on benchmark databases with different kinds of occlusion
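The pipeline described above (project into a discriminant subspace, then match a query against class templates) can be sketched as follows. This is a minimal illustration with synthetic stand-in data, not the paper's two-dimensional LDA; the data shapes and class layout are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3 "sign classes", 30 samples each, 16-dim feature vectors
# (stand-ins for vectorised sign images; purely illustrative).
X = np.vstack([rng.normal(loc=3.0 * c, scale=1.0, size=(30, 16)) for c in range(3)])
y = np.repeat(np.arange(3), 30)

# Fisher LDA: within-class (Sw) and between-class (Sb) scatter matrices.
mu = X.mean(axis=0)
Sw = np.zeros((16, 16))
Sb = np.zeros((16, 16))
for c in range(3):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sw += (Xc - mc).T @ (Xc - mc)
    d = (mc - mu).reshape(-1, 1)
    Sb += len(Xc) * (d @ d.T)

# Discriminant directions: leading eigenvectors of inv(Sw) @ Sb.
vals, vecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
order = np.argsort(vals.real)[::-1]
W = vecs[:, order[:2]].real          # keep 2 = n_classes - 1 components

Z = X @ W                            # projected training data
templates = np.array([Z[y == c].mean(axis=0) for c in range(3)])

def match(query):
    """Nearest-template classification in the reduced LDA space."""
    z = query @ W
    return int(np.argmin(np.linalg.norm(templates - z, axis=1)))

print(match(X[0]), match(X[-1]))
```

Handling occlusion, as in the paper, would additionally require restricting the match to the visible part of the sign; the sketch above shows only the reduced-space template matching itself.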

    Strategies for creating covariance matrices in the PCA method for three-dimensional data

    The paper addresses the problem of reducing the dimensionality of data structured as three-dimensional matrices, such as true-color RGB digital images. We consider an application of Principal Component Analysis (PCA) to one of the most typical image processing tasks, namely image compression. Unlike the cases reported in the literature [5,11,12], the compression is performed by three-dimensional PCA on image blocks organized as three-dimensional structures (see Fig. 1), without splitting the data into separate channels. In the first step, an image stored as a three-dimensional matrix is decomposed into non-overlapping 3D blocks. Each block is then projected into a lower-dimensional representation (1D or 2D) according to the chosen strategy: concatenation of rows, concatenation of columns, integration of rows, integration of columns [13], or concatenation of slices. Next, the blocks are centered (by subtracting the mean value) and covariance matrices are calculated. Finally, the eigenproblem is solved on the covariance matrices, giving sets of eigenvalues and eigenvectors from which the transformation matrices are created. Each block is then multiplied by the respective transformation matrix, built from truncated eigenvector matrices, to obtain its reduced representation. The experimental part of the paper compares the strategies for calculating covariance matrices in terms of image reconstruction quality as a function of the number of retained transform coefficients, evaluated by Peak Signal-to-Noise Ratio (PSNR)
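The block-PCA compression loop described above can be sketched end to end for one simple strategy (full vectorisation of each 3D block, akin to concatenation of rows). The image, block size, and number of retained components are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Toy "RGB image": 64x64x3 smooth gradients, so PCA compresses it well.
xx, yy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
img = np.stack([xx, yy, (xx + yy) / 2], axis=2)

B = 8                                    # block size -> 8x8x3 blocks
blocks = (img.reshape(64 // B, B, 64 // B, B, 3)
             .transpose(0, 2, 1, 3, 4)
             .reshape(-1, B * B * 3))    # one row per vectorised 3D block

mean = blocks.mean(axis=0)               # centering
C = np.cov(blocks - mean, rowvar=False)  # covariance over all blocks
vals, vecs = np.linalg.eigh(C)           # solve the eigenproblem
W = vecs[:, ::-1][:, :8]                 # truncate: keep 8 leading eigenvectors

coeffs = (blocks - mean) @ W             # reduced representation
recon = coeffs @ W.T + mean              # reconstruction from truncated basis

mse = np.mean((blocks - recon) ** 2)
psnr = 10 * np.log10(1.0 / max(mse, 1e-12))   # signal peak = 1.0
print(f"PSNR: {psnr:.1f} dB")
```

The paper's other strategies differ only in how each 3D block is reorganised before the covariance step; the centering, eigendecomposition, truncation, and PSNR evaluation proceed as above.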

    Hardware acceleration of multimedia data classification

    In this paper, experimental results of a proposed hardware acceleration of feature extraction and data classification for multimedia are presented. The hardware is based on a multi-core architecture connected by a mesh Network on Chip (NoC). The cores in the system execute both data classifiers and feature extraction for audio and image data. Using various metaheuristics, the system is optimized with regard to different data communication criteria. The system was implemented on an FPGA platform using the ImpulseC hardware description language. The paper also briefly describes the authors' software for generating the NoC code; its graphical user interface is shown in Fig. 1. The tool maps the selected functionalities to individual cores using metaheuristics such as genetic algorithms, simulated annealing, random search, and a gradient algorithm. The optimization criterion can be the minimization of the total data transfer, the minimization of the maximum amount of data transmitted over a single link, or the minimization of the standard deviation of the stream sizes transmitted over individual links. Example results of random optimization for the NoC are presented in Tab. 1, and results obtained with the other approaches in Tab. 2. For the system optimized in this way, ImpulseC code describing it was generated and then used for hardware synthesis on a Xilinx Virtex 5 FPGA. The utilization of the XC5VSX50T device for the three classifiers used is shown in Fig. 3, while Tab. 3 lists the resources used by the high-level synthesis tool for these classifiers. The technique presented in the paper makes it possible to determine the conditions and constraints of a hardware implementation of a multimedia data classification system
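The simplest of the metaheuristics listed above, random search minimizing total data transfer, can be sketched as follows. The mesh size, traffic matrix, and hop-count cost model are assumptions for illustration; the paper's tool supports several criteria and algorithms.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: 9 functional blocks mapped onto a 3x3 mesh NoC.
n = 9
mesh = [(i // 3, i % 3) for i in range(n)]       # core coordinates on the mesh
traffic = rng.integers(0, 10, size=(n, n))       # data volume between blocks
traffic = np.triu(traffic, 1)                    # one entry per block pair

def cost(mapping):
    """Total transfer = sum of traffic * hop count (Manhattan distance)."""
    total = 0
    for a in range(n):
        for b in range(a + 1, n):
            (xa, ya), (xb, yb) = mesh[mapping[a]], mesh[mapping[b]]
            total += traffic[a, b] * (abs(xa - xb) + abs(ya - yb))
    return total

# Random search: keep the best of many random block-to-core permutations.
best = rng.permutation(n)
best_cost = cost(best)
for _ in range(500):
    cand = rng.permutation(n)
    c = cost(cand)
    if c < best_cost:
        best, best_cost = cand, c
print("best total transfer:", best_cost)
```

Swapping the objective for the maximum per-link load or the standard deviation of link loads, or replacing the random sampling with simulated annealing or a genetic algorithm, changes only the `cost` function and the search loop.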

    Detection of critical behaviour on roads by vehicle trajectory analysis

    Detecting restricted or safety-critical behaviour on roads is crucial for safety protection and fluent traffic flow. In the paper we propose mechanisms for analysing the trajectories of moving vehicles using vision-based techniques applied to video sequences captured by road cameras. The effectiveness of the proposed solution is confirmed by experimental studies
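A minimal rule-based sketch of trajectory analysis of the kind described above: given frame-by-frame vehicle centroids from a tracker, flag stopping and wrong-way movement. The lane direction, thresholds, and labels are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Hypothetical lane direction: traffic is assumed to flow in +x.
ALLOWED = np.array([1.0, 0.0])

def flags(traj, stop_thresh=0.2):
    """Flag per-step critical behaviour in a tracked vehicle trajectory.

    traj: (T, 2) sequence of frame-by-frame centroid positions.
    Returns a list of 'ok' / 'stopped' / 'wrong_way' labels per step.
    """
    out = []
    for step in np.diff(np.asarray(traj, dtype=float), axis=0):
        speed = np.linalg.norm(step)
        if speed < stop_thresh:
            out.append("stopped")
        elif step @ ALLOWED < 0:           # moving against the lane direction
            out.append("wrong_way")
        else:
            out.append("ok")
    return out

# A vehicle that drives forward, stops, then reverses.
traj = [(0, 0), (1, 0), (2, 0), (2.05, 0), (1, 0)]
print(flags(traj))  # -> ['ok', 'ok', 'stopped', 'wrong_way']
```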

    Application of cascading two-dimensional canonical correlation analysis to image matching

    The paper presents a novel approach to Canonical Correlation Analysis (CCA) applied to visible and thermal infrared spectrum facial images. In the typical CCA framework, biometric information is transformed from the original feature space into the space of canonical variates, and further processing takes place in this space. The extracted features are maximally correlated in the canonical variate space, making it possible to expose, investigate, and model latent relationships between the measured variables. In the paper, CCA is applied along two directions (along the rows and columns of the M x N pixel matrix) using a cascade scheme. The first stage of the transformation proceeds along the rows of the data matrices. Its results are reorganized by transposition, and the reorganized matrices are the inputs to the second stage, a basic CCA procedure performed along their rows, which in effect proceeds along the columns of the input data matrix. The so-called cascading 2DCCA method also alleviates the Small Sample Size problem, because instead of images of M x N pixels it effectively uses N images of M x 1 pixels and M images of 1 x N pixels. Several numerical experiments performed on the FERET and Equinox databases are presented
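The cascade scheme described above (row-wise CCA, transpose, row-wise CCA again) can be sketched with a plain SVD-based CCA helper. The paired image sets here are synthetic stand-ins for the visible/thermal pairs, and the dimensions and ridge regularisation are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def inv_sqrt(C, eps=1e-6):
    """Inverse square root of a symmetric PSD matrix (ridge-regularised)."""
    vals, vecs = np.linalg.eigh(C)
    return vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T

def cca(X, Y, d):
    """Classical CCA: rows of X, Y are paired samples.

    Returns d projection directions per view and the canonical correlations.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Cxx, Cyy = X.T @ X / n, Y.T @ Y / n
    Cxy = X.T @ Y / n
    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)     # whitened cross-covariance
    U, s, Vt = np.linalg.svd(K)
    return inv_sqrt(Cxx) @ U[:, :d], inv_sqrt(Cyy) @ Vt.T[:, :d], s[:d]

# Toy paired "visible" and "thermal" images: 40 pairs of 12x10 matrices.
M, N, d = 12, 10, 4
base = rng.normal(size=(40, M, N))
vis = base + 0.1 * rng.normal(size=base.shape)
thr = base + 0.1 * rng.normal(size=base.shape)

# Stage 1: CCA along rows (every image row is a sample of dimension N).
Wx1, Wy1, _ = cca(vis.reshape(-1, N), thr.reshape(-1, N), d)
vis1 = (vis @ Wx1).transpose(0, 2, 1)     # project rows, then transpose
thr1 = (thr @ Wy1).transpose(0, 2, 1)

# Stage 2: the same row-wise CCA on the transposed results, which now
# in effect proceeds along the columns of the original matrices.
Wx2, Wy2, corr = cca(vis1.reshape(-1, M), thr1.reshape(-1, M), d)
feat_vis = vis1 @ Wx2                     # final d x d feature matrices
print(feat_vis.shape, np.round(corr, 2))
```

Note how each stage sees many low-dimensional samples (image rows or columns) rather than a few M x N vectors, which is exactly the Small Sample Size relief mentioned in the abstract.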

    People retrieval by means of composite pictures: methods, systems and practical decisions

    We discuss the problem of people retrieval by means of composite pictures and methods of its practical realization. The problem was posed in a previous paper by the authors, and this paper deals with its further development. The starting premise is that, for a successful search of people by their sketches, it is necessary to transform these sketches into populations of sketches imitating the evidence of a "group of witnesses" and evidence with incomplete information in verbal portraits. Variants of structures for benchmark "photo-sketch" databases are presented, intended for the modeling and practical realization of original-photo retrieval by sketches, whose new component is a population of sketches. Problems of preprocessing the initial sketches and original photos, and its influence on the result of their comparison, are discussed. Simple sketch recognition systems (Simple FaRetSys) and the problem of retrieving original photos by sketches are considered. Shortcomings of such systems are shown, and new decisions extending and developing the simple systems (Extended FaRetSys) are presented. Experiments on searching for original photos by sketches in the CUFS sketch database, and similar experiments on the widely known FERET and CUFSF facial databases, are presented. Three frameworks are offered for improving retrieval performance. In the first, the original sketches are transformed into populations, and the sketch most similar to the given sketch (the Forensic Sketch) is then found within these populations. The class of the sketch found in a population "by definition" unambiguously corresponds to the class of the original photo. In the second framework, the Forensic Sketch is transformed into a population of sketches, and all original sketches in the benchmark database are compared to the sketches from the populations of the Forensic Sketch. The class of matches is determined in the same manner as in the first framework. The third framework includes the generation of populations of sketches both from all original sketches and from all Forensic Sketches; the further line of research is obvious: retrieval by matching between the sketches of these two populations
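The first framework above can be sketched as a nearest-neighbour search over generated populations. The `make_population` perturbation below is a hypothetical stand-in for the paper's witness-imitating sketch transformation, and the feature vectors are synthetic; only the retrieval structure is illustrated.

```python
import numpy as np

rng = np.random.default_rng(4)

def make_population(sketch, k=5, noise=0.05):
    """Hypothetical population generator: perturb a sketch k times to
    imitate the evidence of a 'group of witnesses' (a stand-in for the
    paper's sketch-transformation step)."""
    return sketch + noise * rng.normal(size=(k, sketch.size))

# Toy gallery: 10 original sketches (64-dim feature vectors), one per photo.
gallery = rng.normal(size=(10, 64))
populations = [make_population(s) for s in gallery]   # framework 1

def retrieve(forensic_sketch):
    """Find the population member closest to the query; its class
    identifies the original photo 'by definition'."""
    best = min(
        ((np.linalg.norm(m - forensic_sketch), cls)
         for cls, pop in enumerate(populations) for m in pop),
        key=lambda t: t[0])
    return best[1]

query = gallery[3] + 0.05 * rng.normal(size=64)       # noisy forensic sketch
print(retrieve(query))
```

The second and third frameworks swap which side is expanded into a population (the Forensic Sketch, or both sides); the matching loop itself stays the same.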