
    Process of Fingerprint Authentication using Cancelable Biohashed Template

    Template protection using cancelable biometrics prevents data loss and the hacking of stored templates by providing considerable privacy and security. Hashing and salting techniques are used to build resilient systems: the salted-password method protects passwords against attacks such as brute-force, dictionary, and rainbow-table attacks. Salting adds random data to the input of a hash function to ensure a unique output; hashing salts are speed bumps on an attacker's road to breaching a user's data. This research proposes a contemporary two-factor authenticator called Biohashing. The biohashing procedure is implemented as repeated inner products between a key generated by a pseudo-random number generator and the fingerprint features, namely a network of minutiae. Cancelable-template authentication at a fingerprint-based sales counter accelerates the payment process. The fingerhash is the code produced by applying biohashing to a fingerprint: a binary string obtained by setting each bit from the sign of the corresponding projection relative to a preset threshold. Experiments are carried out on the benchmark FVC2002 DB1 dataset. Authentication accuracy is found to be nearly 97%, and comparison with state-of-the-art approaches yields promising results.
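    The biohashing step described above (repeated inner products against a key-seeded pseudo-random basis, followed by sign thresholding) can be sketched as follows. This is a minimal illustration, assuming the minutiae network has already been encoded as a fixed-length real-valued feature vector; the dimensions, seed values, and helper names are illustrative, not the paper's implementation.

```python
import numpy as np

def biohash(features, key_seed, threshold=0.0):
    """Project a real-valued feature vector onto random directions drawn
    from a key-seeded PRNG, then binarise by comparing to a threshold."""
    rng = np.random.default_rng(key_seed)            # user-specific key/token
    n_bits, n_feat = 16, features.shape[0]
    # Orthonormalised random projection basis (n_feat x n_bits).
    Q, _ = np.linalg.qr(rng.standard_normal((n_feat, n_bits)))
    projections = features @ Q                       # repeated inner products
    return (projections > threshold).astype(np.uint8)  # the "fingerhash"

def hamming(a, b):
    """Bit disagreement between two fingerhashes (used for matching)."""
    return int(np.sum(a != b))

# Same finger + same key -> identical code; a new key yields a new,
# revocable template from the same finger.
feat = np.random.default_rng(0).standard_normal(64)
code1 = biohash(feat, key_seed=42)
code2 = biohash(feat, key_seed=42)
code3 = biohash(feat, key_seed=99)
```

    Because the projection basis depends on the user's key, a compromised fingerhash can be revoked simply by issuing a new key, which is the essence of cancelability.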

    The image biomarker standardization initiative: Standardized convolutional filters for reproducible radiomics and enhanced clinical insights

    Standardizing convolutional filters that enhance specific structures and patterns in medical imaging enables reproducible radiomics analyses, improving consistency and reliability for enhanced clinical insights. Filters are commonly used to enhance specific structures and patterns in images, such as vessels or peritumoral regions, to enable clinical insights beyond the visible image using radiomics. However, their lack of standardization restricts reproducibility and clinical translation of radiomics decision support tools. In this special report, teams of researchers who developed radiomics software participated in a three-phase study (September 2020 to December 2022) to establish a standardized set of filters. The first two phases focused on finding reference filtered images and reference feature values for commonly used convolutional filters: mean, Laplacian of Gaussian, Laws and Gabor kernels, separable and nonseparable wavelets (including decomposed forms), and Riesz transformations. In the first phase, 15 teams used digital phantoms to establish 33 reference filtered images of 36 filter configurations. In phase 2, 11 teams used a chest CT image to derive reference values for 323 of 396 features computed from filtered images using 22 filter and image processing configurations. Reference filtered images and feature values for Riesz transformations were not established. Reproducibility of standardized convolutional filters was validated on a public data set of multimodal imaging (CT, fluorodeoxyglucose PET, and T1-weighted MRI) in 51 patients with soft-tissue sarcoma. At validation, reproducibility of 486 features computed from filtered images using nine configurations × three imaging modalities was assessed using the lower bounds of 95% CIs of intraclass correlation coefficients. Out of 486 features, 458 were found to be reproducible across nine teams with lower bounds of 95% CIs of intraclass correlation coefficients greater than 0.75. 
In conclusion, eight filter types were standardized with reference filtered images and reference feature values for verifying and calibrating radiomics software packages. A web-based tool is available for compliance checking.
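    As a concrete example of one standardized filter family, a Laplacian-of-Gaussian kernel can be built analytically and applied by zero-padded convolution. This is a generic sketch, not the IBSI reference implementation: boundary handling, kernel truncation, and normalisation conventions are precisely the details such an initiative standardizes.

```python
import numpy as np

def log_kernel(sigma, size):
    """Analytic Laplacian-of-Gaussian kernel, adjusted to zero sum."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    r2 = x**2 + y**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()              # zero response on flat regions

def filter2d(image, kernel):
    """'Same'-size filtering via zero-padded FFT convolution."""
    ph = image.shape[0] + kernel.shape[0] - 1
    pw = image.shape[1] + kernel.shape[1] - 1
    F = np.fft.rfft2(image, (ph, pw)) * np.fft.rfft2(kernel, (ph, pw))
    full = np.fft.irfft2(F, (ph, pw))
    oy, ox = kernel.shape[0] // 2, kernel.shape[1] // 2
    return full[oy:oy + image.shape[0], ox:ox + image.shape[1]]

# Impulse "digital phantom": the response reproduces the kernel itself,
# a simple way to verify a filter implementation against a reference.
img = np.zeros((32, 32)); img[16, 16] = 1.0
response = filter2d(img, log_kernel(sigma=2.0, size=13))
```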

    Deep ensemble model-based moving object detection and classification using SAR images

    In recent decades, image processing and computer vision models have played a vital role in moving object detection in synthetic aperture radar (SAR) images. Capturing moving objects in SAR images is a difficult task. In this study, a new automated model for detecting moving objects in SAR images is proposed. The proposed model has four main steps, namely preprocessing, segmentation, feature extraction, and classification. Initially, the input SAR image is pre-processed using a histogram equalization technique. Then, a weighted Otsu-based segmentation algorithm is applied to segment the object regions from the pre-processed images. When the weighted Otsu is used, the segmented grayscale images are not only clear but also retain the detailed features of the grayscale images. Next, feature extraction is carried out with the gray-level co-occurrence matrix (GLCM), median binary patterns (MBPs), and the additive harmonic mean estimated local Gabor binary pattern (AHME-LGBP). The final step is classification using deep ensemble models, in which the objects are classified by an ensemble deep learning technique combining models such as the bidirectional long short-term memory (Bi-LSTM) network, recurrent neural network (RNN), and improved deep belief network (IDBN), trained with the previously extracted features. The combined models increase the accuracy of the results significantly. Furthermore, ensemble modeling reduces the variance and modeling bias, which decreases the chance of overfitting. Compared to a single contributing model, ensemble models perform better and make better predictions. Additionally, an ensemble lessens the spread or dispersion of model performance and prediction accuracy. Finally, the performance of the proposed model is compared with that of conventional models with respect to different measures.
In the mean-case scenario, the proposed ensemble model has a minimum error value of 0.032, which is lower than that of the other models. In the median- and best-case studies, the ensemble model has lower error values of 0.029 and 0.015, respectively.
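    The segmentation step builds on Otsu's method, which picks the grey level maximising the between-class variance of the image histogram. The sketch below implements the standard (unweighted) Otsu on a synthetic bimodal image; the paper's weighted variant modifies this criterion.

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Classic Otsu: choose the grey level maximising between-class variance."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    omega = np.cumsum(p)                        # class-0 probability
    mu = np.cumsum(p * np.arange(bins))         # class-0 cumulative mean (bin units)
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        # sigma_b^2 = (mu_t*omega - mu)^2 / (omega*(1-omega))
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    k = np.nanargmax(sigma_b2)                  # nan at empty histogram tails
    return edges[k + 1]

# Bimodal toy image: dark background plus one bright object patch.
rng = np.random.default_rng(1)
img = rng.normal(50, 5, (64, 64))
img[20:40, 20:40] = rng.normal(200, 5, (20, 20))
t = otsu_threshold(img)
mask = img > t                                  # object segmentation
```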

    Lip2Speech: lightweight multi-speaker speech reconstruction with Gabor features

    In environments characterised by noise or the absence of audio signals, visual cues, notably facial and lip movements, serve as valuable substitutes for missing or corrupted speech signals. In these scenarios, speech reconstruction can potentially generate speech from visual data. Recent advancements in this domain have predominantly relied on end-to-end deep learning models, such as Convolutional Neural Networks (CNNs) or Generative Adversarial Networks (GANs). However, these models are encumbered by intricate and opaque architectures and by their lack of speaker independence. Consequently, achieving multi-speaker speech reconstruction without supplementary information is challenging. This research introduces an innovative Gabor-based speech reconstruction system tailored for lightweight and efficient multi-speaker speech restoration. Using our Gabor feature extraction technique, we propose two novel models: GaborCNN2Speech and GaborFea2Speech. These models employ a rapid Gabor feature extraction method to derive low-dimensional mouth-region features, encompassing filtered Gabor mouth images and low-dimensional Gabor features as visual inputs. An encoded spectrogram serves as the audio target, and a Long Short-Term Memory (LSTM)-based model is harnessed to generate coherent speech output. Through comprehensive experiments conducted on the GRID corpus, our proposed Gabor-based models have showcased superior performance in sentence and vocabulary reconstruction compared to traditional end-to-end CNN models. These models stand out for their lightweight design and rapid processing capabilities. Notably, the GaborFea2Speech model presented in this study achieves robust multi-speaker speech reconstruction without necessitating supplementary information, thereby marking a significant milestone in the field of speech reconstruction.
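    A Gabor feature extractor is built from a bank of oriented Gabor kernels (a Gaussian envelope modulated by a sinusoidal carrier). The sketch below constructs such a bank; the kernel sizes, wavelengths, and orientations are illustrative, since the paper's rapid extraction method and exact parameters are not specified here.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, psi=0.0, gamma=0.5):
    """Real part of a 2-D Gabor filter: Gaussian envelope x cosine carrier."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate to orientation theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + (gamma * yr) ** 2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / lam + psi)

# A small bank over four orientations, as might be convolved with the
# mouth-region crop to obtain filtered Gabor images.
bank = [gabor_kernel(21, sigma=4.0, theta=t, lam=8.0)
        for t in np.linspace(0, np.pi, 4, endpoint=False)]
```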

    A Survey on an Effective Identification and Analysis for Brain Tumour Diagnosis using Machine Learning Technique

    Image analysis is among the most active topics in medicine; it has drawn many researchers because it can effectively assess the severity of a condition and forecast its outcome. However, noise-trimming results degrade as the training images become more complex, which tends to lower the prediction accuracy. Therefore, a novel machine-learning prediction framework is built in the present study, which also attempts to predict brain tumours and evaluate their severity using MRI brain scans. Boosting is used to obtain the best error-pruning results. The proposed solution function is then used to complete the feature-analysis and tumour-prediction operations. The intended framework is evaluated in the Python environment, and a comparative analysis is performed to examine the improvement in the prediction score. The proposed MLPM model was found to have the best tumour-prediction precision.

    Detection and diabetic retinopathy grading using digital retinal images

    Diabetic retinopathy is an eye disorder that affects people suffering from diabetes. Higher blood sugar levels lead to damage of the blood vessels in the eyes and may even cause blindness. Diabetic retinopathy is identified by red spots known as microaneurysms and bright yellow lesions called exudates. It has been observed that early detection of exudates and microaneurysms may save the patient's vision, and this paper proposes a simple and effective technique for diabetic retinopathy detection. Both publicly available and real-time datasets of colored images captured by a fundus camera have been used for the empirical analysis. In the proposed work, grading is performed to determine the severity of diabetic retinopathy, i.e. whether it is mild, moderate, or severe, using the exudates and microaneurysms in the fundus images. An automated approach is proposed that uses image processing, feature extraction, and machine learning models to accurately predict the presence of exudates and microaneurysms, which can then be used for grading. The research is carried out in two segments: one for exudates and another for microaneurysms. Grading via exudates is based on their distance from the macula, whereas grading via microaneurysms is done by calculating their count. For grading using exudates, the support vector machine and K-nearest neighbor classifiers show the highest accuracy of 92.1%, and for grading using microaneurysms, the decision tree shows the highest accuracy of 99.9% in predicting the severity levels of the disease.
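    The two grading rules (distance from the macula for exudates, lesion count for microaneurysms) can be sketched as simple decision functions. The cut-off values below are hypothetical placeholders for illustration only; the paper learns its severity levels with the classifiers named above rather than with fixed thresholds.

```python
import math

def grade_by_microaneurysms(ma_count):
    """Severity from microaneurysm count (illustrative cut-offs, not the paper's)."""
    if ma_count == 0:
        return "none"
    if ma_count <= 5:
        return "mild"
    if ma_count <= 15:
        return "moderate"
    return "severe"

def exudate_risk(exudate_xy, macula_xy, disc_diameter):
    """Exudates closer to the macula imply higher risk; distance is expressed
    in optic-disc diameters, a common fundus-image scale."""
    d = math.dist(exudate_xy, macula_xy) / disc_diameter
    return "high" if d < 1.0 else "moderate" if d < 2.0 else "low"
```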

    Deep learning-based tool for radiomics analysis of cancer 3D multicellular spheroids

    Cancer 3D multicellular spheroids are a fundamental in vitro tool for studying in vivo tumors. Volume is the main feature used for evaluating drug and treatment effects, but several other features can be estimated even from a simple 2D image. For high-content screening analysis, the bottleneck is the segmentation stage, which is essential for detecting the spheroids in the images before proceeding to the feature extraction stage for radiomic analysis. Thanks to new deep learning models, it is possible to optimize the process and adapt the analysis to big datasets. One of the most promising approaches is the use of convolutional neural networks (CNNs), which have shown remarkable results in various medical imaging applications. By training a CNN on a large dataset of annotated images, it can learn to recognize patterns and features that are relevant for segmenting spheroids in new images. This approach avoids the drawbacks of manual or semi-automatic segmentation, which are time-consuming and prone to inter-observer variability. Moreover, CNNs can be fine-tuned for specific tasks and can handle different types of data, such as multi-modal or multi-dimensional images. Starting from the first version of Analysis of SPheroids (AnaSP), an open-source software package for estimating morphological features of spheroids, we implemented a new module for automatically segmenting brightfield images by exploiting CNNs. In this work, several deep learning segmentation models were trained and compared using ground truth masks. Then, a module based on an 18-layer deep residual network (ResNet18) was integrated into AnaSP, releasing AnaSP 2.0, a version of the tool optimized for high-content screening analysis.
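    Comparing segmentation models against ground truth masks is commonly done with an overlap metric such as the Dice similarity coefficient; the sketch below shows this standard metric on toy masks (the abstract does not state which metric the authors used).

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

# Toy example: a square "spheroid" ground truth and a slightly
# under-segmented model prediction.
truth = np.zeros((64, 64), bool); truth[16:48, 16:48] = True
pred = np.zeros((64, 64), bool); pred[20:48, 16:48] = True
score = dice(pred, truth)
```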

    A Comprehensive Atlas of Perineuronal Net Distribution and Colocalization with Parvalbumin in the Adult Mouse Brain

    Perineuronal nets (PNNs) surround specific neurons in the brain and are involved in various forms of plasticity and clinical conditions. However, our understanding of the role of PNNs in these phenomena is limited by the lack of highly quantitative maps of PNN distribution and association with specific cell types. Here, we present a comprehensive atlas of Wisteria floribunda agglutinin (WFA)-positive PNNs and their colocalization with parvalbumin (PV) cells for over 600 regions of the adult mouse brain. Data analysis shows that PV expression is a good predictor of PNN aggregation. In the cortex, PNNs are dramatically enriched in layer 4 of all primary sensory areas, in correlation with thalamocortical input density, and their distribution mirrors intracortical connectivity patterns. Gene expression analysis identifies many PNN-correlated genes. Strikingly, PNN-anticorrelated transcripts are enriched in synaptic plasticity genes, generalizing the role of PNNs as circuit-stability factors.

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented after the dissemination of the fourth volume in 2015 in international conferences, seminars, workshops and journals, or they are new. The contributions in each part of this volume are chronologically ordered. The first part of this book presents some theoretical advances in DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of the (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes.
Because more applications of DSmT have emerged in the years since the publication of the fourth volume in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of the Bayes theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
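    The PCR5 rule mentioned above can be sketched for the simplest case: two sources on a dichotomous frame {A, B} with ignorance A∪B. After conjunctive combination, each partial conflict (e.g. m1(A)·m2(B)) is redistributed back to A and B proportionally to the masses that generated it. This is a minimal illustration of the classical two-source rule, not the improved variants or the library code the volume describes.

```python
def pcr5_two_sources(m1, m2):
    """PCR5 fusion of two basic belief assignments on {A, B},
    where key 'AB' denotes the ignorance A ∪ B."""
    # Conjunctive (intersection) terms first.
    m = {
        "A": m1["A"] * m2["A"] + m1["A"] * m2["AB"] + m1["AB"] * m2["A"],
        "B": m1["B"] * m2["B"] + m1["B"] * m2["AB"] + m1["AB"] * m2["B"],
        "AB": m1["AB"] * m2["AB"],
    }
    # Redistribute each partial conflict proportionally to its generators.
    for x, y in (("A", "B"), ("B", "A")):
        c = m1[x] * m2[y]                     # partial conflict mass
        if c > 0:
            m[x] += m1[x] ** 2 * m2[y] / (m1[x] + m2[y])
            m[y] += m2[y] ** 2 * m1[x] / (m1[x] + m2[y])
    return m

m1 = {"A": 0.6, "B": 0.3, "AB": 0.1}
m2 = {"A": 0.5, "B": 0.2, "AB": 0.3}
fused = pcr5_two_sources(m1, m2)
```

    Unlike Dempster's rule, PCR5 keeps the conflict mass inside the frame instead of normalising it away, so the fused masses still sum to one without discarding conflicting evidence.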