
    Deep multimodal biometric recognition using contourlet derivative weighted rank fusion with human face, fingerprint and iris images

    The goal of a multimodal biometric recognition system is to make an identification decision from a person's physiological and behavioural traits. Nevertheless, decision making in such a system can be extremely complex due to the high dimension of unimodal features in the temporal domain. This paper describes a deep multimodal biometric system for human recognition using three traits: face, fingerprint and iris. With the objective of reducing the feature vector dimension in the temporal domain, pre-processing is first performed using the Contourlet Transform Model. Next, a Local Derivative Ternary Pattern model is applied to the pre-processed features, improving feature discrimination power by retaining the coefficients that have maximum variation across the pre-processed multimodal features, and therefore improving recognition accuracy. Weighted Rank Level Fusion is then applied to the extracted multimodal features, efficiently combining the biometric matching scores from the several modalities (i.e. face, fingerprint and iris). Finally, a deep learning framework is presented for improving the recognition rate of the multimodal biometric system in the temporal domain. The results of the proposed multimodal biometric recognition framework were compared with other multimodal methods; in these comparisons, the fusion of face, fingerprint and iris offers significant improvements in the recognition rate of the suggested system.
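    The rank-level fusion step can be sketched as a weighted Borda count over per-modality match rankings. This is a minimal illustration with made-up scores and weights, not the paper's exact fusion scheme:

```python
import numpy as np

# Hypothetical matching scores for 4 enrolled identities from three
# unimodal matchers (face, fingerprint, iris); higher = better match.
scores = {
    "face":        np.array([0.61, 0.72, 0.35, 0.50]),
    "fingerprint": np.array([0.55, 0.80, 0.40, 0.30]),
    "iris":        np.array([0.70, 0.65, 0.20, 0.45]),
}
weights = {"face": 0.3, "fingerprint": 0.4, "iris": 0.3}  # assumed weights

def weighted_rank_fusion(scores, weights):
    """Combine per-modality rankings with a weighted Borda count."""
    n = len(next(iter(scores.values())))
    fused = np.zeros(n)
    for modality, s in scores.items():
        # double argsort turns scores into ranks (rank 0 = best match)
        ranks = np.argsort(np.argsort(-s))
        fused += weights[modality] * (n - ranks)  # higher = better
    return fused

fused = weighted_rank_fusion(scores, weights)
print(int(np.argmax(fused)))  # → 1 (identity with the best fused rank)
```

    Rank-level fusion only needs each matcher's ordering of candidates, which makes it robust to matchers whose raw scores live on incompatible scales.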

    Advances in Image Processing, Analysis and Recognition Technology

    For many decades, researchers have been trying to make computer analysis of images as effective as the human visual system. For this purpose, many algorithms and systems have been created. The whole process covers various stages, including image processing, representation and recognition. The results of this work can be applied to many computer-assisted areas of everyday life. They improve particular activities and provide handy tools, which are sometimes only for entertainment, but quite often they significantly increase our safety. Indeed, the range of practical implementations of image processing algorithms is particularly wide. Moreover, the rapid growth of computing power and efficiency has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues remain, resulting in the need for the development of novel approaches.

    BIG DATA and Advanced Analytics: Conference Proceedings

    The proceedings present the results of scientific research and development in the field of BIG DATA and Advanced Analytics for the optimization of IT and business solutions, as well as case studies in medicine, education and ecology.

    Digital Image Processing Applications

    Digital image processing can refer to a wide variety of techniques, concepts, and applications of different types of processing for different purposes. This book provides examples of digital image processing applications and presents recent research on processing concepts and techniques. Chapters cover such topics as image processing in medical physics, binarization, video processing, and more.

    Detection and classification of non-stationary signals using sparse representations in adaptive dictionaries

    Automatic classification of non-stationary radio frequency (RF) signals is of particular interest in persistent surveillance and remote sensing applications. Such signals are often acquired in noisy, cluttered environments, and may be characterized by complex or unknown analytical models, making feature extraction and classification difficult. This thesis proposes an adaptive classification approach for poorly characterized targets and backgrounds based on sparse representations in non-analytical dictionaries learned from data. Conventional analytical orthogonal dictionaries, e.g., Short-Time Fourier and Wavelet Transforms, can be suboptimal for classification of non-stationary signals, as they provide a rigid tiling of the time-frequency space and are not specifically designed for a particular signal class. They generally do not lead to sparse decompositions (i.e., with very few non-zero coefficients), and their use in classification requires separate feature selection algorithms. Pursuit-type decompositions in analytical overcomplete (non-orthogonal) dictionaries yield sparse representations by design, and work well for signals that are similar to the dictionary elements. The pursuit search, however, has a high computational cost, and the method can perform poorly in the presence of realistic noise and clutter. One such overcomplete analytical dictionary method is also analyzed in this thesis for comparative purposes. The main thrust of the thesis is learning discriminative RF dictionaries directly from data, without relying on analytical constraints or additional knowledge about the signal characteristics. A pursuit search is used over the learned dictionaries to generate sparse classification features in order to identify time windows that contain a target pulse. Two state-of-the-art dictionary learning methods, the K-SVD algorithm and Hebbian learning, are compared in terms of their classification performance as a function of dictionary training parameters.
Additionally, a novel hybrid dictionary algorithm is introduced, demonstrating better performance and higher robustness to noise. The issue of dictionary dimensionality is explored, and this thesis demonstrates that undercomplete learned dictionaries are suitable for non-stationary RF classification. Results on simulated data sets with varying background clutter and noise levels are presented. Lastly, unsupervised classification with undercomplete learned dictionaries is also demonstrated in satellite imagery analysis.
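    The pursuit-based feature extraction described above can be sketched with a minimal Orthogonal Matching Pursuit over a toy undercomplete dictionary. The dictionary here is a random orthonormal stand-in, not one actually learned with K-SVD or Hebbian updates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical undercomplete dictionary: 8 unit-norm atoms for
# 16-sample signal windows (QR gives orthonormal columns).
D = np.linalg.qr(rng.standard_normal((16, 8)))[0]

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedy k-sparse code of y over D."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # re-fit all selected atoms jointly, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# A window built from 2 atoms is recovered with exactly 2 non-zeros,
# giving a sparse feature vector usable for classification.
y = 1.5 * D[:, 2] - 0.7 * D[:, 5]
x = omp(D, y, k=2)
print(sorted(np.nonzero(x)[0].tolist()))  # → [2, 5]
```

    The non-zero pattern (and coefficient magnitudes) of such codes is what serves as the classification feature; a learned dictionary makes those patterns discriminative for the target pulses.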

    Deep-learning feature descriptor for tree bark re-identification

    The ability to visually re-identify objects is a fundamental capability of vision systems. Oftentimes, it relies on collections of visual signatures based on descriptors such as SIFT or SURF. However, these traditional descriptors were designed for a certain domain of surface appearances and geometries (limited relief). Consequently, highly-textured surfaces such as tree bark pose a challenge to them. In turn, this makes it more difficult to use trees as identifiable landmarks for navigational purposes (robotics) or to track felled lumber along a supply chain (logistics). We thus propose to use data-driven descriptors, trained on bark images, for tree surface re-identification. To this effect, we collected a large dataset containing 2,400 bark images with strong illumination changes, annotated by surface and with the ability to pixel-align them. We used this dataset to sample more than 2 million 64×64 pixel patches to train our novel local descriptors DeepBark and SqueezeBark. Our DeepBark method has shown a clear advantage over the hand-crafted descriptors SIFT and SURF. For instance, we demonstrated that DeepBark can reach a mAP of 87.2% when retrieving the 11 relevant bark images, i.e. those corresponding to the same physical surface, for a query against 7,900 images. Our work thus suggests that re-identifying tree surfaces in a challenging illumination context is possible. We also make public our dataset, which can be used to benchmark surface re-identification techniques.
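    The retrieval metric quoted above (mAP) is the mean, over queries, of the average precision of a distance-ranked gallery. A toy average-precision computation with made-up 4-D descriptors (illustrative stand-ins, not DeepBark outputs) looks like this:

```python
import numpy as np

def average_precision(query, gallery, relevant):
    """Rank gallery descriptors by L2 distance to the query, then
    average the precision at each rank where a relevant item appears."""
    dists = np.linalg.norm(gallery - query, axis=1)
    order = np.argsort(dists)
    hits, precisions = 0, []
    for rank, idx in enumerate(order, start=1):
        if idx in relevant:
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions))

# Toy descriptors: gallery items 0 and 2 come from the same physical
# surface as the query (hypothetical values for illustration only).
query = np.array([1.0, 0.0, 0.0, 0.0])
gallery = np.array([
    [0.9, 0.1, 0.0, 0.0],   # relevant, close to the query
    [0.0, 1.0, 0.0, 0.0],   # irrelevant
    [0.8, 0.0, 0.2, 0.0],   # relevant, close to the query
    [0.0, 0.0, 0.0, 1.0],   # irrelevant
])
print(average_precision(query, gallery, relevant={0, 2}))  # → 1.0
```

    Here both relevant items rank first and second, so AP is 1.0; averaging such per-query scores over all queries yields the reported mAP.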

    Implementing decision tree-based algorithms in medical diagnostic decision support systems

    As a branch of healthcare, medical diagnosis can be defined as finding the disease based on the signs and symptoms of the patient. To this end, the required information is gathered from different sources such as physical examination, medical history and general information about the patient. The development of smart classification models for medical diagnosis is of great interest among researchers, mainly because machine learning and data mining algorithms are capable of detecting hidden trends among the features of a database. Hence, classifying medical datasets using smart techniques paves the way to designing more efficient medical diagnostic decision support systems. Several databases have been provided in the literature to investigate different aspects of diseases. As an alternative to the available diagnosis tools/methods, this research applies the machine learning algorithms Classification and Regression Tree (CART), Random Forest (RF) and Extremely Randomized Trees, or Extra Trees (ET), to the development of classification models that can be implemented in computer-aided diagnosis systems. As a decision tree (DT), CART is fast to create, and it applies to both quantitative and qualitative data. For classification problems, RF and ET employ a number of weak learners such as CART to develop classification models. We employed the Wisconsin Breast Cancer Database (WBCD), the Z-Alizadeh Sani dataset for coronary artery disease (CAD), and the databanks gathered in Ghaem Hospital's dermatology clinic on the response of patients with common and/or plantar warts to cryotherapy and/or immunotherapy. To classify the breast cancer type based on the WBCD, the RF and ET methods were employed; the developed RF and ET models were found to forecast the WBCD type with 100% accuracy in all cases. To choose the proper treatment approach for warts, as well as for CAD diagnosis, the CART methodology was employed.
The findings of the error analysis revealed that the proposed CART models attain the highest precision for the applications of interest, unmatched by any model in the literature. The outcome of this study supports the idea that methods such as CART, RF and ET not only improve diagnosis precision, but also reduce the time and expense needed to reach a diagnosis. However, since these strategies are highly sensitive to the quality and quantity of the input data, more extensive databases with a greater number of independent parameters might be required for further practical application of the developed models.
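    The three tree-based learners can be sketched with scikit-learn on a small synthetic stand-in dataset (the study itself uses the WBCD, the Z-Alizadeh Sani data and the Ghaem Hospital databanks; the feature/label construction below is invented for illustration):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier          # CART
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))                # 5 hypothetical features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # hypothetical diagnosis label

models = [
    ("CART", DecisionTreeClassifier(max_depth=3, random_state=0)),
    ("RF", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("ET", ExtraTreesClassifier(n_estimators=100, random_state=0)),
]
for name, model in models:
    model.fit(X[:150], y[:150])                  # train on 150 samples
    acc = model.score(X[150:], y[150:])          # evaluate on held-out 50
    print(f"{name}: held-out accuracy = {acc:.2f}")
```

    RF and ET differ mainly in how each tree's splits are chosen: RF searches for the best split among a random feature subset on a bootstrap sample, while ET also randomizes the split thresholds, trading a little bias for lower variance.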

    Fluorescence Spectroscopy in Structural Studies of Plant Cell Walls

    Plant cell walls represent the most abundant, renewable and biodegradable composite on Earth. Their highly complex structure consists mainly of three organic compounds: cellulose, hemicelluloses, and lignin. Cell walls have wide applications in different industries, especially for biofuels and biomaterials. Fluorescence spectroscopy is a method allowing investigation of cell wall structure through monitoring of lignin autofluorescence, and thus of the interactions of lignin with the other cell wall constituents. Deconvolution of fluorescence spectra reveals the number and location of spectral component peaks by calculating the approximation of the probability density (APD) of component positions. A characteristic of complex cell wall (CW) fluorescence is that the emission spectrum contains multiple log-normal components originating from different fluorophores, with shorter wavelengths corresponding to phenolic structures and longer wavelengths to conjugated structures in lignin. Fluorescence spectroscopy has been used for fast screening of cell wall properties from plants of different origin (hardwood, softwood and herbaceous plants), which may be important for the selection of plants for possible applications. Fluorescence spectroscopy may also be applicable in investigating the effect of stress on the cell wall. Lignin fluorescence emission spectra, peak intensities and shifts in the positions of the long-wavelength spectral components may be indicators of changes in cell wall structure during stress. There is an increasing application of quantum dots (QDs) as fluorescent markers in plant science. The isolated cell wall is an appropriate object for the study of interactions with nanoparticles. The results of different physico-chemical techniques, including fluorescence spectroscopy combined with spectral deconvolution, show that in cell walls CdSe QDs predominantly bind to cellulose via OH groups, and to lignin via the conjugated C=C/C–C chains.
The variability of bond types in lignin is related to the involvement of this polymer in the plant response to various types of stress, which introduces local structural modifications in the cell wall. Different lignin model compounds have been used in order to reveal the spectroscopic properties of lignin. Lignin model polymers were synthesized from three monomers, coniferyl alcohol, ferulic acid and p-coumaric acid, mixed in various ratios to simulate lignin synthesis in real cell walls. Further, by using fluorescence spectroscopy and appropriate mathematical methods, it is possible to gain deeper insight into the structural characteristics of the molecule. Future investigations will be based on synthetic cell walls with varying proportions of the three main components, cellulose, hemicelluloses and lignin, also taking into account the results of fine structural modifications in lignin model compounds.
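    The spectral-deconvolution idea can be illustrated by fitting a two-band model to a synthetic spectrum with SciPy. For simplicity this sketch uses Gaussian bands rather than the log-normal components described above, and the band positions (440 nm phenolic, 530 nm conjugated) are assumed values, not measured ones:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_bands(x, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian emission bands (stand-ins for the
    log-normal components used in cell wall spectral deconvolution)."""
    return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

# Synthetic spectrum: a short-wavelength "phenolic" band near 440 nm
# and a long-wavelength "conjugated-structure" band near 530 nm.
wl = np.linspace(380, 650, 300)                      # wavelength, nm
spectrum = two_bands(wl, 1.0, 440.0, 25.0, 0.6, 530.0, 35.0)
spectrum += 0.01 * np.random.default_rng(2).standard_normal(wl.size)

p0 = [1.0, 430.0, 20.0, 0.5, 520.0, 30.0]            # rough initial guesses
popt, _ = curve_fit(two_bands, wl, spectrum, p0=p0)
print(round(popt[1]), round(popt[4]))                # recovered peak positions
```

    Shifts in the fitted long-wavelength peak position (popt[4] here) are the kind of quantity the text proposes as an indicator of stress-induced structural change.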