
    How Scopus is Shaping the Research Publications of Feature Fusion-Based Image Retrieval

    Research trends show a growing preference for feature fusion-based image retrieval. The primary objective of this study is to present the current state of research on image retrieval and feature fusion. Research papers indexed in the Scopus database are considered for quantitative analysis, and a bibliometric analysis of these publications is presented. In total, 461 documents from 276 different sources are obtained. The most important keywords, sources, authors, countries, and funding agencies are identified, which will help guide future researchers toward promising research directions.

    Face Liveness Detection using Feature Fusion Using Block Truncation Code Technique

    Nowadays, systems that hold private and confidential data are protected using biometric authentication such as fingerprint recognition, voice recognition, iris recognition, and face recognition. Face recognition matches the current user's face with the faces stored in the security system's database, but it has one major drawback: it performs poorly without liveness detection. Such face recognition systems can be spoofed using various traits. Spoofing is the act of accessing a system, its software, or its data by defeating the biometric security mechanism. These biometric systems can be attacked easily with spoofs such as printed face photographs, masks, and videos, which are readily available from social media. The proposed work focuses on detecting spoofing attacks by training the system, so that spoofing attempts using a photo, mask, or video can be identified. This paper proposes a fusion technique in which different features of an image are combined to achieve the best accuracy in distinguishing a spoofed face from a live one. A comparative study of machine learning classifiers is also carried out to determine which classifier gives the best accuracy.
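    A minimal sketch of the general idea, not the authors' implementation: Block Truncation Code (BTC) style features are computed per color plane and several off-the-shelf scikit-learn classifiers are compared on them. The `images`/`labels` loader and the classifier choices are assumptions for illustration.

    ```python
    # Hedged sketch: BTC-style upper/lower-mean features per color plane,
    # followed by a comparison of common scikit-learn classifiers.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def btc_features(image_rgb):
        """Return upper and lower BTC means per color plane (6 values for RGB)."""
        feats = []
        for plane in np.moveaxis(image_rgb.astype(float), -1, 0):
            mean = plane.mean()
            upper, lower = plane[plane >= mean], plane[plane < mean]
            feats.append(upper.mean() if upper.size else mean)
            feats.append(lower.mean() if lower.size else mean)
        return np.array(feats)

    # `images` (H x W x 3 uint8 arrays) and `labels` (0 = spoof, 1 = live)
    # are assumed to come from the face dataset loader.
    def compare_classifiers(images, labels):
        X = np.vstack([btc_features(img) for img in images])
        y = np.asarray(labels)
        for name, clf in [("SVM", SVC()),
                          ("k-NN", KNeighborsClassifier()),
                          ("RandomForest", RandomForestClassifier())]:
            scores = cross_val_score(clf, X, y, cv=10)
            print(f"{name}: mean accuracy = {scores.mean():.3f}")
    ```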

    Improved Classification of Histopathological images using the feature fusion of Thepade sorted block truncation code and Niblack thresholding

    Histopathology is the study of disease-affected tissues; it is particularly helpful for diagnosis and for determining how severe a disease is and how rapidly it is spreading. It also shows how to recognize different human tissues and analyze the alterations caused by disease. Certain disease characteristics, such as lymphocytic infiltration in malignancy, can only be determined from histopathological images, and the histopathological image is the "gold standard" for diagnosing practically all forms of cancer. Early diagnosis and prognosis of cancer are essential for treatment and have become a requirement in cancer research. The importance and benefits of classifying cancer patients into higher-risk and lower-risk groups have motivated many researchers to study and improve the application of machine learning (ML) methods, and it is therefore interesting to explore the performance of multiple ML algorithms in classifying these histopathological images. Feature extraction is crucial for differentiating images with ML: features are the distinctive identifiers of an image that summarize its content, and they are extracted using a variety of handcrafted algorithms. This paper presents a fusion of features extracted with Thepade sorted block truncation code (TSBTC) and the Niblack thresholding algorithm for the classification of histopathological images. Experimental validation is performed on the 960 images of the KimiaPath-960 dataset of histopathological images using performance metrics such as sensitivity, specificity, and accuracy. The best performance is observed for an ensemble of TSBTC N-ary and Niblack thresholding features, with 97.92% accuracy under 10-fold cross-validation.
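    As an illustrative sketch only (the paper's exact feature definitions may differ), the fusion can be pictured as concatenating TSBTC-style N-ary luminance features with summary statistics of a Niblack-thresholded image. The window size, `k`, and the three Niblack summary statistics below are assumptions.

    ```python
    # Hedged sketch: TSBTC-style N-ary features fused with Niblack thresholding
    # statistics, using NumPy and scikit-image.
    import numpy as np
    from skimage.filters import threshold_niblack

    def tsbtc_nary_features(plane, n=4):
        """Sort the pixel values and take the mean of each of n equal slices."""
        sorted_vals = np.sort(plane.ravel().astype(float))
        return np.array([s.mean() for s in np.array_split(sorted_vals, n)])

    def niblack_features(gray, window_size=25, k=0.2):
        """Binarize with Niblack's local threshold and summarize both regions."""
        mask = gray > threshold_niblack(gray, window_size=window_size, k=k)
        fg, bg = gray[mask], gray[~mask]
        return np.array([mask.mean(),
                         fg.mean() if fg.size else 0.0,
                         bg.mean() if bg.size else 0.0])

    def fused_features(gray_image, n=4):
        """Concatenate ('fuse') the two handcrafted feature vectors."""
        return np.concatenate([tsbtc_nary_features(gray_image, n),
                               niblack_features(gray_image)])
    ```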

    Expressive Color Visual Secret Sharing with Color to Gray & Back and Cosine Transform

    Color Visual Secret Sharing (VSS) is an essential form of VSS because nowadays most people share visual data as color images. Existing color VSS schemes either handle halftone color images or color images with a limited set of colors, or handle natural color images but recover the secret at low quality. The proposed scheme deals with a color image in the RGB domain and generates gray shares for color images using color to gray and back through compression. These shares are encrypted into an innocent-looking gray cover image using the Discrete Cosine Transform (DCT) to produce meaningful shares. A high-quality color image is then reconstructed from the gray shares extracted from the innocent-looking gray cover image, requiring lower transmission bandwidth and less storage.
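    The sketch below is a hypothetical illustration of one ingredient of such a scheme, namely hiding a gray share inside a gray cover image via the 2-D DCT; it is not the paper's color-to-gray-and-back pipeline, and the scaling factor and coefficient placement are assumptions.

    ```python
    # Hedged sketch: embed/extract a gray share in the high-frequency corner
    # of a cover image's 2-D DCT (illustrative, not the paper's exact scheme).
    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(a):
        return dct(dct(a, axis=0, norm='ortho'), axis=1, norm='ortho')

    def idct2(a):
        return idct(idct(a, axis=0, norm='ortho'), axis=1, norm='ortho')

    def embed_share(cover, share, alpha=0.05):
        """Add a scaled share into the low-energy DCT corner of the cover."""
        C = dct2(cover.astype(float))
        h, w = share.shape
        C[-h:, -w:] += alpha * share.astype(float)
        return np.clip(idct2(C), 0, 255).astype(np.uint8)

    def extract_share(stego, cover, share_shape, alpha=0.05):
        """Approximately recover the share by differencing the two DCT planes."""
        D = dct2(stego.astype(float)) - dct2(cover.astype(float))
        h, w = share_shape
        return np.clip(D[-h:, -w:] / alpha, 0, 255).astype(np.uint8)
    ```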

    Covid19 Identification from Chest X-ray Images using Machine Learning Classifiers with GLCM Features

    From staying quarantined at home and working from home to moving outdoors wearing masks and carrying sanitizers, individuals have adapted to the so-called 'New Normal' after a series of lockdowns across countries. The situation triggered by the novel Coronavirus has changed how every individual behaves toward every other living and non-living entity. In the Wuhan city of China, multiple cases of pneumonia of unknown cause were reported, and the concerned medical authorities confirmed the cause to be Coronavirus. The symptoms seen in these cases were not very different from those of pneumonia. Earlier research has addressed pneumonia identification and classification from chest X-ray images. The difficulty in identifying a Covid19 infection at an early stage lies in the high resemblance of its symptoms to those of pneumonia; hence it is vital to distinguish Coronavirus cases from pneumonia reliably, which may help save patients' lives. This paper uses chest X-ray images to identify Covid19 infection in the lungs using machine learning classifiers and ensembles with Gray-Level Co-occurrence Matrix (GLCM) features. The advocated methodology extracts statistical texture features from X-ray images by computing a GLCM for each image, where the matrix is computed for various stride combinations. These GLCM features are used to train the machine learning classifiers and ensembles. The paper explores both multiclass classification (X-ray images are classified into one of three classes: Covid19-affected, pneumonia-affected, and normal lungs) and binary classification (Covid19-affected and other). The dataset used for evaluating the method is open source and easily accessible. The proposed method, being simple and computationally efficient, achieves noteworthy performance in terms of Accuracy, F-Measure, MCC, PPV, and Sensitivity. In sum, the best stride combination of the GLCM and an ensemble of machine learning classifiers are suggested as the vital outcomes of the proposed method for effective Covid19 identification from chest X-ray images.
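    A minimal sketch of this kind of pipeline, assuming scikit-image and scikit-learn: GLCM texture properties are computed for a few distance/angle (stride) combinations and fed to a voting ensemble. The specific strides, properties, base classifiers, and the `xrays`/`labels` loader are illustrative assumptions, not the paper's exact configuration.

    ```python
    # Hedged sketch: GLCM texture features + a voting ensemble of classifiers.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.ensemble import VotingClassifier, RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    PROPS = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation")

    def glcm_features(gray, distances=(1, 2, 4), angles=(0, np.pi / 2)):
        """Statistical texture features from GLCMs at several strides/angles."""
        glcm = graycomatrix(gray, distances=distances, angles=angles,
                            levels=256, symmetric=True, normed=True)
        return np.concatenate([graycoprops(glcm, p).ravel() for p in PROPS])

    # `xrays` is assumed to be a list of 2-D uint8 chest X-ray arrays and
    # `labels` the class of each image (Covid19 / pneumonia / normal).
    def evaluate(xrays, labels):
        X = np.vstack([glcm_features(img) for img in xrays])
        ensemble = VotingClassifier([("svm", SVC(probability=True)),
                                     ("rf", RandomForestClassifier()),
                                     ("dt", DecisionTreeClassifier())],
                                    voting="soft")
        return cross_val_score(ensemble, X, labels, cv=10).mean()
    ```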

    Gray Image Colorization using Thepade’s Transform Error Vector Rotation With Cosine, Walsh, Haar Transforms and various Similarity Measures

    The paper presents gray image colorization methods based on vector quantization for performing automatic colorization. To colorize a gray target image by extracting color pixels from a source color image, Thepade's Transform Error Vector Rotation vector quantization methods, namely Thepade's Cosine Error Vector Rotation (TCEVR), Thepade's Walsh Error Vector Rotation (TWEVR), and Thepade's Haar Error Vector Rotation (THEVR), are used along with varied similarity measures. The quality of the colorized gray image depends on the source color image and the target gray image (to be colored). A test bed of 25 images is used: the gray equivalents of the original color images are recolored, and the proposed colorization methods are compared using the PSNR between the original color and recolored images. Colorization is performed using nine similarity measures belonging to different families, which map gray image pixels to the corresponding multichrome image pixels. When these similarity measures are compared for colorizing the target gray image, Chebychev outperforms all other similarity measures, while the worst performance is consistently given by the Jaccard and Hamming distances. Among all the considered colorization methods, Thepade's Haar Error Vector Rotation is the most suitable algorithm for gray image colorization. DOI: 10.17762/ijritcc2321-8169.150516
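    As a simplified, hypothetical sketch of similarity-measure-driven colorization (the transform-based error-vector-rotation codebook step is abstracted away): gray target blocks are matched to source-image luminance blocks with a chosen distance, Chebyshev here, and the matched source colors are copied over. Block size, metric, and the brute-force matching are assumptions for illustration.

    ```python
    # Hedged sketch: block-wise colorization by nearest-luminance matching.
    # Small images assumed; the full distance matrix is computed in memory.
    import numpy as np
    from scipy.spatial.distance import cdist

    def to_blocks(img, size=2):
        """Split an (H, W, C) array into flattened size x size blocks."""
        h, w = img.shape[0] - img.shape[0] % size, img.shape[1] - img.shape[1] % size
        img = img[:h, :w]
        return (img.reshape(h // size, size, w // size, size, -1)
                   .swapaxes(1, 2)
                   .reshape(-1, size * size * img.shape[-1]))

    def colorize(gray_target, color_source, size=2, metric="chebyshev"):
        """Give each gray target block the colors of its most similar source block."""
        src_rgb = to_blocks(color_source, size)
        src_lum = to_blocks(color_source.mean(axis=2, keepdims=True), size)
        tgt_lum = to_blocks(gray_target[..., None], size)
        nearest = cdist(tgt_lum, src_lum, metric=metric).argmin(axis=1)
        matched = src_rgb[nearest]
        h = (gray_target.shape[0] // size) * size
        w = (gray_target.shape[1] // size) * size
        return (matched.reshape(h // size, w // size, size, size, 3)
                       .swapaxes(1, 2)
                       .reshape(h, w, 3).astype(np.uint8))
    ```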

    A Bibliometric Analysis of Face Anti Spoofing

    Face recognition systems are widely used in all areas as a medium of authentication; their ease of implementation and accuracy give them broad scope. These systems are nevertheless vulnerable to some extent and are attacked using a variety of techniques. The measures taken to prevent such attacks are collectively known as face anti-spoofing. Research has been carried out for decades to design systems that are robust against these attacks. The focus of this paper is to explore the area of face anti-spoofing and the research done in it through quantitative analysis and an assessment of its impact. The keyword analysis indicates face recognition as the most widely used keyword, followed by biometrics and face anti-spoofing, as per the Scopus dataset search. The citation analysis indicates that texture-based systems have contributed the most to face anti-spoofing detection. The bibliometric analysis presented in this paper aims to provide future research directions in the area of face anti-spoofing and to analyse the trends of the research done.