22 research outputs found

    Building an inexpensive parallel computer


    A color hand gesture database for evaluating and improving algorithms on hand gesture and posture recognition

    With the increase of research activity in vision-based hand posture and gesture recognition, new methods and algorithms are being developed; however, less attention has been paid to developing a standard platform for this purpose. Developing a database of hand gesture images is a necessary first step towards standardizing research on hand gesture recognition. To this end, we have developed an image database of hand posture and gesture images. The database contains hand images captured under different lighting conditions using a digital camera. Details of the automatic segmentation and clipping of the hands are also discussed in this paper.
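    The abstract mentions automatic segmentation and clipping of the hands but does not specify the procedure. A minimal sketch of one common approach, a rule-based skin-colour mask followed by a tight bounding-box crop, is shown below; the thresholds are illustrative assumptions only, not the paper's actual values.

```python
import numpy as np

def segment_skin(rgb):
    """Boolean mask of skin-like pixels.

    Hypothetical rule: a pixel is 'skin' if it is reddish (R > G > B)
    and bright enough. The thresholds are illustrative assumptions.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (g > b) & ((r - b) > 15)

def clip_to_bbox(mask):
    """Coordinates of the tightest box around the mask: (row0, row1, col0, col1)."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return r0, r1 + 1, c0, c1 + 1
```

    In practice a normalised colour space (HSV or YCbCr) is usually preferred for robustness to the varying lighting conditions the database is designed to capture.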

    Assessment of the Local Tchebichef Moments Method for Texture Classification by Fine Tuning Extraction Parameters

    In this paper we use machine learning to study the application of Local Tchebichef Moments (LTM) to the problem of texture classification. The original LTM method was proposed by Mukundan (2014). The LTM method can be used for texture analysis in many different ways: either by using the moment values directly or, more simply, by creating a relationship between the moment values of different orders, producing a histogram similar to those of Local Binary Pattern (LBP) based methods. The original method was not fully tested on large datasets, and there are several parameters whose effect on performance should be characterised, among them the kernel size, the moment orders and the weights for each moment. We implemented the LTM method in a flexible way in order to allow modification of the parameters that affect its performance. Using four subsets of the Outex dataset (a popular benchmark for texture analysis), we used Random Forests to create models and to classify texture images, recording the standard metrics for each classifier. We repeated the process using several variations of the LBP method for comparison. This allowed us to find the best combination of orders and weights for the LTM method for texture classification.
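    The core of the LTM method is a set of discrete orthonormal Tchebichef kernels whose inner products with a local window give the moment values. A minimal sketch is below: it builds the kernels by Gram-Schmidt orthogonalisation of the monomials (equivalent, up to sign, to the closed-form kernels), and computes all 2-D moments of one window at once. The histogram construction over moment orders is not reproduced here.

```python
import numpy as np

def tchebichef_basis(N, max_order):
    """Discrete orthonormal Tchebichef (Gram) polynomials on {0,...,N-1},
    built by Gram-Schmidt on the monomials 1, x, x^2, ..."""
    x = np.arange(N, dtype=float)
    basis = []
    for n in range(max_order + 1):
        v = x ** n
        for b in basis:
            v = v - (v @ b) * b          # remove components along lower orders
        basis.append(v / np.linalg.norm(v))
    return np.array(basis)               # shape (max_order+1, N)

def local_moments(window, kernels):
    """All 2-D moments T[m, n] of one N-by-N window: T = K W K^T."""
    return kernels @ window @ kernels.T
```

    For a constant window only the order-(0,0) moment is non-zero, since every higher-order kernel is orthogonal to the constant; the relationships between the higher-order moments are what carry the texture information.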

    RGB-D And Thermal Sensor Fusion: A Systematic Literature Review

    In the last decade, the computer vision field has seen significant progress in multimodal data fusion and learning, where multiple sensors, including depth, infrared, and visual, are used to capture the environment across diverse spectral ranges. Despite these advancements, there has been no systematic and comprehensive evaluation of fusing RGB-D and thermal modalities to date. While autonomous driving using LiDAR, radar, RGB, and other sensors has garnered substantial research interest, along with the fusion of RGB and depth modalities, the integration of thermal cameras and, specifically, the fusion of RGB-D and thermal data, has received comparatively less attention. This might be partly due to the limited number of publicly available datasets for such applications. This paper provides a comprehensive review of both state-of-the-art and traditional methods used in fusing RGB-D and thermal camera data for various applications, such as site inspection, human tracking, fault detection, and others. The reviewed literature has been categorised into technical areas, such as 3D reconstruction, segmentation, object detection, available datasets, and other related topics. Following a brief introduction and an overview of the methodology, the study delves into calibration and registration techniques, then examines thermal visualisation and 3D reconstruction, before discussing the application of classic feature-based techniques as well as modern deep learning approaches. The paper concludes with a discourse on current limitations and potential future research directions. It is hoped that this survey will serve as a valuable reference for researchers looking to familiarise themselves with the latest advancements and contribute to the RGB-DT research field. Comment: 33 pages, 20 figures.

    Stream Processing for Fast and Efficient Rotated Haar-like Features using Rotated Integral Images

    Abstract: An extended set of Haar-like features for image sensors is introduced, going beyond the standard vertically and horizontally aligned Haar-like features and the 45° twisted Haar-like features. The extended rotated Haar-like features are based on the standard Haar-like features, rotated by whole-integer pixel-based rotations. These rotated feature values can also be calculated using rotated integral images, which means that they can be calculated quickly and efficiently with just 8 operations, irrespective of the feature size. The integral image calculations can be offloaded to the graphics processing unit (GPU) using the stream processing paradigm. The integral image calculation on the GPU is faster than the traditional central processing unit (CPU) implementation of the algorithm for large image sizes, allowing more complex classifiers to be implemented in real time.
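    The constant-time property comes from the integral image: once it is built, any axis-aligned box sum costs 4 table lookups, so a two-rectangle Haar-like feature costs 8, whatever its size. A minimal sketch of the upright case (the rotated integral images of the paper follow the same pattern with shifted lookups):

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img over rows < y and columns < x;
    the extra zero row/column removes all bounds checks."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, y, x, h, w):
    """Sum of any h-by-w box in 4 lookups and 3 additions."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii, y, x, h, w):
    """Vertical two-rectangle Haar-like feature: left half minus right
    half. Two box sums, i.e. 8 lookups, independent of feature size."""
    half = w // 2
    return box_sum(ii, y, x, h, half) - box_sum(ii, y, x + half, h, half)
```

    The two cumulative sums are exactly the prefix-sum passes that the paper offloads to the GPU as stream operations.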

    Feature-based rapid object detection : from feature extraction to parallelisation : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Sciences at Massey University, Auckland, New Zealand

    This thesis studies rapid object detection, focusing on feature-based methods. Firstly, modifications of training and detection of the Viola-Jones method are made to improve performance and overcome some of the current limitations such as rotation, occlusion and articulation. New classifiers produced by training and by converting existing classifiers are tested in face detection and hand detection. Secondly, the nature of invariant features is discussed in terms of computational complexity, discrimination power and invariance to rotation and scaling. A new feature extraction method called Concentric Discs Moment Invariants (CDMI) is developed based on moment invariants and summed-area tables. The dimensionality of this set of features can be increased by using additional concentric discs, rather than higher order moments. The CDMI set has useful properties such as speed, rotation invariance and scaling invariance, and rapid contrast stretching can be easily implemented. The results of experiments with face detection show a clear improvement in accuracy and performance of the CDMI method compared to the standard moment invariants method. Both the CDMI and its variant, using central moments from concentric squares, are used to assess the strength of the method applied to handwritten digit recognition. Finally, the parallelisation of the detection algorithm is discussed. A new model for the specific case of the Viola-Jones method is proposed and tested experimentally. This model takes advantage of the structure of classifiers and of the multi-resolution approach associated with the detection method. The model shows that high speedups can be achieved by broadcasting frames and carrying out the computation of one or more cascades in each node.
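    The combination of concentric regions with summed-area tables can be sketched for the concentric-squares variant mentioned in the abstract: each nested square sum costs 4 lookups, and differences of consecutive squares give "ring" sums that are unchanged by 90° rotations and reflections. This is only an illustrative sketch under those assumptions; the thesis's actual CDMI moment definitions are not reproduced here.

```python
import numpy as np

def summed_area_table(img):
    """sat[y, x] = sum of img over rows < y and columns < x."""
    sat = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    sat[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return sat

def square_sum(sat, cy, cx, r):
    """Sum over the (2r+1)x(2r+1) square centred at (cy, cx): 4 lookups."""
    return (sat[cy + r + 1, cx + r + 1] - sat[cy - r, cx + r + 1]
            - sat[cy + r + 1, cx - r] + sat[cy - r, cx - r])

def ring_features(img, cy, cx, radii):
    """Per-ring sums for nested squares; cheap, partially rotation-
    invariant region statistics in the spirit of concentric regions."""
    sat = summed_area_table(img)
    sums = [square_sum(sat, cy, cx, r) for r in radii]
    return [sums[0]] + [b - a for a, b in zip(sums, sums[1:])]
```

    Adding another radius adds one more feature at the cost of 4 extra lookups, which is the dimensionality-for-free property the abstract contrasts with raising the moment order.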

    Comparison between least-squares and minimum-zone algorithms for roundness deviations

    Advisor: Olivio Novaski. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica.
    Abstract: The precise assessment of out-of-roundness is an important issue in metrology. There are always errors associated with these measurements, caused by different sources. Computer-aided measuring machines have an additional source of error, the algorithm itself, a question raised only recently. The main objective of this work is to determine the behaviour of minimum zone centre (MZC) algorithms for roundness deviations, conceptually the most correct, in comparison with least squares centre (LSC) algorithms, the ones most used in practice. Computer simulations were used to analyse the differences between the algorithms' results. Both algorithms were developed in the C language, and a third program randomly generated thousands of point sets similar to those obtained from a coordinate measuring machine (CMM). The MZC algorithm chosen was that of Voronoi diagrams, for its guarantee of an optimal result, low complexity and ease of implementation. To simulate an industrial inspection environment, the out-of-roundness values, the numbers of points in the set and the radius of the workpiece were varied within the usual intervals. The results showed that the Voronoi algorithm obtains a pair of concentric circles with a separation equal to or smaller than that of the least squares algorithm. The difference between the results can reach 30% or more, depending on the distribution of the points in the set. The behaviour of the maximum deviations between the results can be determined and depends on the number of points sampled from the workpiece and on the interval of roundness being measured; for the simulated values the result is independent of the workpiece radius. The Voronoi diagram algorithm showed stable behaviour with good execution times for an industrial environment. For some CMMs currently used in industry, the uncertainty of the machine is much greater than the maximum error caused by the choice of algorithm for certain roundness values and numbers of points; in these cases the use of least squares is justifiable. When only the least squares algorithm is available on a given machine, the maximum expected error can be reduced by increasing the number of points in the set. For dedicated machines or more accurate CMMs, which have a smaller uncertainty, the algorithms are a significant source of error, and MZC algorithms are necessary in these cases. From the results obtained, one may presume that the development of minimum zone algorithms for other three-dimensional form deviations, such as cylindricity and sphericity, will be of great interest in metrology.
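    The LSC reference method the dissertation compares against can be sketched compactly: the Kasa least-squares circle fit is a linear problem, and the out-of-roundness is then the radial band max(r_i) - min(r_i) about the fitted centre. The minimum-zone Voronoi algorithm itself is not implemented here.

```python
import numpy as np

def lsc_roundness(x, y):
    """Least-squares (Kasa) circle fit plus LSC out-of-roundness.

    From (x-a)^2 + (y-b)^2 = R^2 we get the linear model
    x^2 + y^2 = 2ax + 2by + c with c = R^2 - a^2 - b^2, solved in the
    least-squares sense. Returns (centre, radius, roundness deviation).
    """
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + a**2 + b**2)
    r = np.hypot(x - a, y - b)           # radial distance of each point
    return (a, b), radius, r.max() - r.min()
```

    As the abstract notes, the minimum-zone centre can tighten this band by 30% or more for unfavourable point distributions, which is why MZC algorithms matter on low-uncertainty machines.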

    Fast and Efficient Rotated Haar-like Features using Rotated Integral Images

    This paper introduces an extended set of Haar-like features beyond the standard vertically and horizontally aligned Haar-like features [Viola and Jones, 2001a; 2001b] and the 45° twisted Haar-like features [Lienhart and Maydt, 2002; Lienhart et al., 2003a; 2003b]. The extended rotated Haar-like features are based on the standard Haar-like features, rotated by whole-integer pixel-based rotations. These rotated feature values can also be calculated using rotated integral images, which means that they can be calculated quickly and efficiently with just 8 operations, irrespective of the feature size. In general, each feature requires another 8 operations based on an identity integral image so that appropriate scaling corrections can be applied; these corrections are needed because of the rounding errors associated with scaling the features. The errors introduced by these rotated features on natural images are small enough to allow rotated classifiers to be implemented using a classifier trained only on vertically aligned images. This is a significant improvement in training time for a classifier that is invariant to the rotations represented in the parallel classifier.
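    The "identity integral image" idea can be illustrated as follows: an integral image built over an all-ones image returns the exact pixel count of any looked-up region, so dividing a region's sum by its count normalises away the area changes that scaling and rounding introduce. This sketch uses an upright box for brevity; the paper applies the same 8 extra lookups to its rotated regions.

```python
import numpy as np

def integral(img):
    """Zero-padded integral image: 4-lookup box sums."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box(ii, y, x, h, w):
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def mean_corrected_sum(img, y, x, h, w):
    """Region sum normalised by the region's true pixel count, taken
    from an integral image of all ones (the 'identity' integral image).
    Illustrative sketch of the scaling-correction idea only."""
    ii = integral(img)
    ones = integral(np.ones_like(img))
    return box(ii, y, x, h, w) / box(ones, y, x, h, w)
```

    Because the count comes from the same 4-lookup pattern as the sum, the correction keeps the per-feature cost constant, matching the 8 + 8 operation figure in the abstract.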