On the Application of Data Clustering Algorithm used in Information Retrieval for Satellite Imagery Segmentation
This study proposes an automated technique for segmenting satellite imagery using unsupervised learning. Autoencoders, a type of neural network, are employed for dimensionality reduction and feature extraction. The study evaluates different segmentation architectures and encoders and identifies the best performing combination as the DeepLabv3+ architecture with a ResNet-152 encoder. This approach achieves high performance scores across multiple metrics and can be beneficial in various fields, including agriculture, land use monitoring, and disaster response.
Performance Measure of Scanned Tamil Land Documents using Neural Network Approach
Recognition is the process of recovering an accurate image from a noisy or distorted one. This work applies classification and recognition techniques to scanned Tamil land documents. In the pre-processing stage, the dataset is filtered with a median filter. Segmentation then splits each word image into individual characters, and feature extraction is performed with Gabor wavelets. In the post-processing stage, classification is carried out with a neural network, using either supervised or unsupervised learning, and correct and incorrect classifications are measured with a confusion matrix. Hence, the Gabor wavelet technique is used for feature extraction and selection, and classification is implemented as a pattern recognition technique in MATLAB. Performance is also examined through plots of training state, regression, gradient, and validation.
DOI: 10.17762/ijritcc2321-8169.150519
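The pipeline described above (median-filter denoising followed by Gabor wavelet feature extraction) can be sketched in a few lines of numpy/scipy. This is not the paper's MATLAB implementation; it is a minimal illustration in which the kernel size, sigma, and wavelength values are arbitrary assumptions:

```python
import numpy as np
from scipy import ndimage

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0):
    # Real-valued Gabor kernel: Gaussian envelope times a cosine carrier
    # oriented at angle theta. Parameter values here are illustrative only.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def extract_features(image):
    # Pre-processing: median filter suppresses salt-and-pepper noise
    denoised = ndimage.median_filter(image, size=3)
    # Feature extraction: Gabor responses at four orientations
    thetas = (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)
    responses = [ndimage.convolve(denoised, gabor_kernel(theta=t)) for t in thetas]
    return np.stack(responses, axis=-1)

img = np.random.rand(32, 32)           # stand-in for a character image
feats = extract_features(img)
print(feats.shape)                      # (32, 32, 4): one map per orientation
```

In the paper's setting each segmented character image would pass through such a filter bank, and the resulting feature vectors would feed the neural-network classifier.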
Bio-Inspired Multi-Layer Spiking Neural Network Extracts Discriminative Features from Speech Signals
Spiking neural networks (SNNs) enable power-efficient implementations due to
their sparse, spike-based coding scheme. This paper develops a bio-inspired SNN
that uses unsupervised learning to extract discriminative features from speech
signals, which can subsequently be used in a classifier. The architecture
consists of a spiking convolutional/pooling layer followed by a fully connected
spiking layer for feature discovery. The convolutional layer of leaky,
integrate-and-fire (LIF) neurons represents primary acoustic features. The
fully connected layer is equipped with a probabilistic spike-timing-dependent
plasticity learning rule. This layer represents the discriminative features
through probabilistic, LIF neurons. To assess the discriminative power of the
learned features, they are used in a hidden Markov model (HMM) for spoken digit
recognition. The experimental results show performance above 96% that compares
favorably with popular statistical feature extraction methods. Our results
provide a novel demonstration of unsupervised feature acquisition in an SNN.
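The leaky integrate-and-fire (LIF) dynamics underlying both layers can be illustrated with a minimal discrete-time simulation. This is a generic LIF sketch, not the paper's network; the time constant, threshold, and input values are assumptions chosen for illustration:

```python
import numpy as np

def lif_spikes(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    # Leaky integrate-and-fire: dv/dt = (-v + I) / tau.
    # The membrane potential leaks toward zero, integrates the input,
    # and emits a spike (then resets) whenever it crosses threshold.
    v = 0.0
    spikes = []
    for I in input_current:
        v += dt * (-v + I) / tau
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

# Constant suprathreshold drive produces regular spiking
current = np.full(200, 1.5)
train = lif_spikes(current)
print(int(train.sum()))  # number of spikes over 200 time steps
```

In the paper's architecture, such neurons in the convolutional layer respond to primary acoustic features, while the fully connected layer's probabilistic LIF neurons are shaped by spike-timing-dependent plasticity.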
Multi-modal Medical Neurological Image Fusion using Wavelet Pooled Edge Preserving Autoencoder
Medical image fusion integrates the complementary diagnostic information of
the source image modalities for improved visualization and analysis of
underlying anomalies. Recently, deep learning-based models have excelled the
conventional fusion methods by executing feature extraction, feature selection,
and feature fusion tasks, simultaneously. However, most of the existing
convolutional neural network (CNN) architectures use conventional pooling or
strided convolutional strategies to downsample the feature maps. It causes the
blurring or loss of important diagnostic information and edge details available
in the source images and dilutes the efficacy of the feature extraction
process. Therefore, this paper presents an end-to-end unsupervised fusion model
for multimodal medical images based on an edge-preserving dense autoencoder
network. In the proposed model, feature extraction is improved by using wavelet
decomposition-based attention pooling of feature maps. This helps in preserving
the fine edge detail information present in both the source images and enhances
the visual perception of fused images. Further, the proposed model is trained
on a variety of medical image pairs which helps in capturing the intensity
distributions of the source images and preserves the diagnostic information
effectively. Substantial experiments are conducted which demonstrate that the
proposed method provides improved visual and quantitative results as compared
to the other state-of-the-art fusion methods.
Comment: 8 pages, 5 figures, 6 tables
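The wavelet-decomposition idea behind the paper's pooling can be illustrated with a single-level 2x2 Haar transform, which downsamples a feature map while explicitly separating edge detail from the low-frequency approximation. This is a hedged numpy sketch of plain Haar decomposition, not the paper's attention-pooling layer:

```python
import numpy as np

def haar_wavelet_pool(x):
    # Single-level 2x2 Haar decomposition of a feature map with even
    # height and width. Returns the low-frequency approximation (LL)
    # plus three detail subbands that carry edge information, each at
    # half the spatial resolution.
    a = x[0::2, 0::2]
    b = x[0::2, 1::2]
    c = x[1::2, 0::2]
    d = x[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation (what ordinary average pooling keeps)
    lh = (a + b - c - d) / 4.0   # horizontal-edge detail
    hl = (a - b + c - d) / 4.0   # vertical-edge detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

x = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar_wavelet_pool(x)
print(ll.shape)  # (2, 2): downsampled by 2 in each dimension
```

Because the detail subbands are retained rather than discarded, a pooling layer built on this decomposition can reweight and reuse edge information that conventional strided or max pooling would blur away, which is the motivation the abstract gives for the design.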