
    Polar Fusion Technique Analysis for Evaluating the Performances of Image Fusion of Thermal and Visual Images for Human Face Recognition

    This paper presents a comparative study of two methods based on fusion and polar transformation of visual and thermal images. The investigation addresses the challenges of face recognition, including pose variations, changes in facial expression, partial occlusions, variations in illumination, rotation through different angles, and changes in scale. To overcome these obstacles, we implemented and thoroughly examined two fusion techniques through rigorous experimentation. In the first method, a log-polar transformation is applied to the fused image obtained from the visual and thermal images; in the second, fusion is applied to the log-polar transforms of the individual visual and thermal images. In either case, Principal Component Analysis (PCA) is then applied to reduce the dimensionality of the resulting images. Log-polar transformed images can handle the complications introduced by scaling and rotation. The main objective of fusion is to produce a fused image that provides more detailed and reliable information, overcoming the drawbacks of the individual visual and thermal face images. Finally, the reduced fused images are classified using a multilayer perceptron neural network. The experiments use thermal and visual face images from the Object Tracking and Classification Beyond Visible Spectrum (OTCBVS) benchmark database. The second method showed better performance, with a correct recognition rate of 95.71% (maximum) and 93.81% on average.
    Comment: Proceedings of IEEE Workshop on Computational Intelligence in Biometrics and Identity Management (IEEE CIBIM 2011), Paris, France, April 11 - 15, 2011
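    The second pipeline described above (log-polar transform of each modality, pixel-wise fusion, then PCA) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the averaging fusion rule, image sizes, sampling resolution, and component count are all assumptions.

    ```python
    import numpy as np

    def log_polar(img, n_r=32, n_t=32):
        """Sample an image on a log-polar grid about its center (toy resolution)."""
        h, w = img.shape
        cy, cx = (h - 1) / 2, (w - 1) / 2
        max_r = np.hypot(cy, cx)
        rs = np.exp(np.linspace(0, np.log(max_r), n_r)) - 1   # log-spaced radii
        ts = np.linspace(0, 2 * np.pi, n_t, endpoint=False)   # angles
        ys = np.clip((cy + rs[:, None] * np.sin(ts)).round().astype(int), 0, h - 1)
        xs = np.clip((cx + rs[:, None] * np.cos(ts)).round().astype(int), 0, w - 1)
        return img[ys, xs]

    def fuse(visual, thermal):
        """Pixel-wise average fusion -- one simple rule; the paper may use another."""
        return (visual + thermal) / 2.0

    # Method 2: log-polar transform each modality first, then fuse.
    rng = np.random.default_rng(0)
    batch = np.stack([
        fuse(log_polar(rng.random((64, 64))),    # stand-in visual image
             log_polar(rng.random((64, 64)))     # stand-in thermal image
             ).ravel()
        for _ in range(5)
    ])

    # PCA via SVD on the mean-centered batch of flattened fused images
    batch -= batch.mean(axis=0)
    _, _, vt = np.linalg.svd(batch, full_matrices=False)
    reduced = batch @ vt[:3].T          # keep 3 principal components
    print(reduced.shape)                # (5, 3)
    ```

    The `reduced` vectors would then be fed to the multilayer perceptron classifier.
    
    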

    Examples of Artificial Perceptions in Optical Character Recognition and Iris Recognition

    This paper assumes the hypothesis that human learning is perception-based and, consequently, that the learning process and perceptions should not be represented and investigated independently or modeled in different simulation spaces. To keep the analogy between artificial and human learning, the former is assumed here to be based on artificial perception. Hence, instead of applying or developing a Computational Theory of (human) Perceptions, we choose to mirror human perceptions in a numeric (computational) space as artificial perceptions and to analyze the interdependence between artificial learning and artificial perception in the same numeric space, using one of the simplest tools of Artificial Intelligence and Soft Computing, namely the perceptron. As practical applications, we work through two examples: Optical Character Recognition and Iris Recognition. In both cases, a simple Turing test shows that artificial perceptions of the difference between two characters and between two irides are fuzzy, whereas the corresponding human perceptions are, in fact, crisp.
    Comment: 5th Int. Conf. on Soft Computing and Applications (Szeged, HU), 22-24 Aug 201
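    The perceptron setting the paper works in can be illustrated with a toy sketch: the classic perceptron learning rule separating noisy samples of two hypothetical character bitmaps. The 3x3 bitmaps, noise level, and learning rate below are invented for illustration and are not from the paper.

    ```python
    import numpy as np

    # Two toy 3x3 "characters" (hypothetical bitmaps): an 'I'-like and an 'O'-like glyph.
    glyph_i = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], float).ravel()
    glyph_o = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]], float).ravel()

    # Noisy observations of each character
    rng = np.random.default_rng(1)
    X = np.vstack([g + 0.1 * rng.standard_normal(9)
                   for g in (glyph_i, glyph_o) for _ in range(20)])
    y = np.array([0] * 20 + [1] * 20)

    # Classic perceptron learning rule: w <- w + lr * (target - output) * x
    w, b, lr = np.zeros(9), 0.0, 0.1
    for _ in range(50):
        for xi, yi in zip(X, y):
            out = 1 if xi @ w + b > 0 else 0
            w += lr * (yi - out) * xi
            b += lr * (yi - out)

    preds = (X @ w + b > 0).astype(int)
    acc = (preds == y).mean()
    print(acc)
    ```

    With well-separated bitmaps the perceptron converges to a clean boundary; the paper's point is about the fuzziness of the margin the artificial perception draws, compared with a human's crisp judgment.
    
    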

    Personal Biometric Identification Based on the Photoplethysmography Signal of the Heartbeat

    Biometric systems distinguish individuals by their unique personal characteristics. The most popular identification systems are based on fingerprints, face detection, iris, or hand geometry. This study attempts to improve biometric identification using the photoplethysmography (PPG) signal of the heartbeat. The proposed algorithm uses the contribution of all extracted features for biometric recognition. Its efficiency is demonstrated by experimental results obtained with Multilayer Perceptron, Naïve Bayes, and Random Forest classifiers applied to the extracted features. Fifty-one subjects took part in the experiments; the PPG signal of each person was recorded over two different time spans. Thirty characteristic features were extracted from each period and used for classification. Evaluated with the Multilayer Perceptron, Naïve Bayes, and Random Forest classifier models, the true positive rates were 94.6078%, 92.1569%, and 90.3922%, respectively. The results show that both the proposed algorithm and the biometric identification model based on this PPG signal are very promising for contactless recognition systems.
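    One of the three classifiers named above, Naïve Bayes, is simple enough to sketch end-to-end. The synthetic subject-clustered feature vectors below stand in for the paper's 30 PPG-derived characteristic features; all data, dimensions, and the subject count are made up for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_subjects, n_feats, n_per = 5, 30, 10

    # Hypothetical data: 30 features per recording, clustered per subject,
    # standing in for the PPG-derived characteristic features.
    centers = rng.standard_normal((n_subjects, n_feats))
    X = np.vstack([c + 0.2 * rng.standard_normal((n_per, n_feats)) for c in centers])
    y = np.repeat(np.arange(n_subjects), n_per)

    def fit(X, y):
        """Per-class feature means and variances (Gaussian Naive Bayes)."""
        classes = np.unique(y)
        mu = np.array([X[y == c].mean(0) for c in classes])
        var = np.array([X[y == c].var(0) + 1e-6 for c in classes])  # avoid /0
        return mu, var

    def predict(X, mu, var):
        # log N(x | mu, var) summed over independent features, flat prior
        ll = -0.5 * (((X[:, None] - mu) ** 2) / var + np.log(2 * np.pi * var)).sum(-1)
        return ll.argmax(1)

    mu, var = fit(X, y)
    acc = (predict(X, mu, var) == y).mean()
    print(acc)
    ```

    A real evaluation would of course score held-out recordings rather than the training set, as the paper does with its two separate recording sessions.
    
    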

    Driver Distraction Identification with an Ensemble of Convolutional Neural Networks

    The World Health Organization (WHO) reports 1.25 million deaths yearly worldwide due to road traffic accidents, a number that has been increasing continuously over the last few years. Nearly a fifth of these accidents are caused by distracted drivers. Existing work on distracted-driver detection covers only a small set of distractions (mostly cell phone usage), and unreliable ad-hoc methods are often used. In this paper, we present the first publicly available dataset for driver distraction identification with more distraction postures than existing alternatives. In addition, we propose a reliable deep-learning-based solution that achieves 90% accuracy. The system consists of a genetically weighted ensemble of convolutional neural networks: we show that weighting an ensemble of classifiers with a genetic algorithm yields better classification confidence. We also study the effect of different visual elements on distraction detection by means of face and hand localization and skin segmentation. Finally, we present a thinned version of our ensemble that achieves 84.64% classification accuracy and can operate in real time.
    Comment: arXiv admin note: substantial text overlap with arXiv:1706.0949
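    The genetically weighted ensemble idea can be sketched as a tiny genetic algorithm searching for mixing weights over base-classifier probability outputs. The "classifiers" below are noisy synthetic predictors standing in for CNNs, and the population size, mutation scale, and generation count are arbitrary choices, not the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, k = 200, 3                     # samples, base classifiers
    y = rng.integers(0, 2, n)         # binary labels for simplicity

    # Stand-ins for CNN class-1 probabilities: three experts of varying noise
    probs = np.clip(y[:, None] + rng.normal(0, [0.6, 0.45, 0.3], (n, k)), 0, 1)

    def fitness(w):
        """Accuracy of the weighted-average ensemble under weight vector w."""
        p = probs @ (w / w.sum())
        return ((p > 0.5) == y).mean()

    # Tiny genetic algorithm: score, keep the fittest half, mutate to refill
    pop = rng.random((20, k))
    for _ in range(40):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[-10:]]
        children = np.abs(parents + 0.1 * rng.standard_normal(parents.shape))
        pop = np.vstack([parents, children])

    best = max(pop, key=fitness)
    print(fitness(best), fitness(np.ones(k)))   # GA weights vs uniform ensemble
    ```

    The GA typically learns to down-weight the noisier experts relative to a plain uniform average, which is the intuition behind the genetically weighted ensemble.
    
    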