Novel CBIR System Based on Ripplet Transform Using Interactive Neuro-Fuzzy Technique
Content-Based Image Retrieval (CBIR) is an emerging research area in effective digital data management and retrieval. In this article, a novel CBIR system based on a new Multiscale Geometric Analysis (MGA) tool, called Ripplet Transform Type-I (RT), is presented. To improve retrieval results and reduce computational complexity, the proposed scheme utilises a Neural Network (NN)-based classifier for image pre-classification, similarity matching using the Manhattan distance measure, and a relevance feedback mechanism (RFM) using a fuzzy-entropy-based feature evaluation technique. Extensive experiments were carried out to evaluate the effectiveness of the proposed technique. The performance of the proposed CBIR system is evaluated using 2 × 5-fold cross-validation followed by a statistical analysis. The experimental results suggest that the proposed system based on RT performs better than many existing CBIR schemes based on other transforms, and that the difference is statistically significant.
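The similarity-matching step described above ranks database images by Manhattan (L1) distance between feature vectors. A minimal sketch of that ranking step, with hypothetical image names and toy 3-dimensional feature vectors standing in for the ripplet-derived features:

```python
def manhattan_distance(a, b):
    """L1 (Manhattan) distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def rank_by_similarity(query, database):
    """Sort (name, feature_vector) pairs by ascending distance to the query."""
    return sorted(database, key=lambda item: manhattan_distance(query, item[1]))

# Toy example: hypothetical images with 3-dimensional feature vectors.
query = [0.2, 0.5, 0.1]
db = [
    ("img_a", [0.9, 0.1, 0.4]),
    ("img_b", [0.25, 0.45, 0.15]),
    ("img_c", [0.0, 0.9, 0.8]),
]
ranked = rank_by_similarity(query, db)
print([name for name, _ in ranked])  # img_b ranks first (smallest distance)
```

The paper's actual system precedes this step with NN-based pre-classification and follows it with relevance feedback; this sketch shows only the distance-based ranking itself.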
Significant medical image compression techniques: a review
Telemedicine applications allow the patient and doctor to communicate with each other through network services. Several medical image compression techniques have been suggested by researchers in past years. This review offers a comparison of these algorithms and their performance by analysing three factors that influence the choice of compression algorithm: image quality, compression ratio, and compression speed. Previous research has shown a need for effective algorithms for medical imaging without data loss, which is why lossless compression is used for medical records. Lossless compression, however, achieves only a modest compression ratio. One way to obtain a better compression ratio is to segment the image into region-of-interest (ROI) and non-ROI zones, so that the power and time needed can be minimised owing to the smaller scale of the losslessly coded region. Recently, several researchers have attempted to create hybrid compression algorithms by integrating different compression techniques to increase the efficiency of compression algorithms.
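The ROI/non-ROI strategy described above can be illustrated with a minimal, hypothetical sketch: pixels inside a binary ROI mask are kept exactly (lossless), while pixels outside it are coarsely quantised (lossy). Real hybrid codecs use far more sophisticated coding for both regions; this only shows the split itself.

```python
def compress_with_roi(image, roi_mask, step=16):
    """Keep ROI pixels lossless; coarsely quantise non-ROI pixels.

    image    -- 2-D list of integer pixel values
    roi_mask -- 2-D list of the same shape; truthy entries mark the ROI
    step     -- quantisation step for the non-ROI region (hypothetical value)
    """
    out = []
    for row, mask_row in zip(image, roi_mask):
        out.append([p if m else (p // step) * step
                    for p, m in zip(row, mask_row)])
    return out

# Toy 2x2 image: top-left and bottom-right pixels are inside the ROI.
image = [[100, 101], [200, 37]]
mask = [[1, 0], [0, 1]]
print(compress_with_roi(image, mask))  # [[100, 96], [192, 37]]
```

ROI pixels (100 and 37) survive unchanged, while non-ROI pixels are snapped to the nearest lower multiple of the step, which is what makes the non-ROI region cheaper to encode.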
A Novel Approach to Retrieve the Actual Image from a Compressed One by Using a Dequantisation Technique
Image compression addresses the problem of reducing the amount of data required to represent a digital image. Image compression and decompression are very common processes in image processing. Image compression is a way in which the data to be transmitted are compressed into a smaller version and then transmitted. Compression is achieved by the removal of one or more of three basic data redundancies: (1) coding redundancy, which is present when less-than-optimal (i.e. longer than the smallest possible length) code words are used; (2) interpixel redundancy, which results from correlations between the pixels of an image; and (3) psychovisual redundancy, which is due to data that the human visual system ignores.
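The dequantisation idea named in the title can be sketched with simple uniform scalar quantisation: quantising maps each coefficient to an integer index (the lossy step), and dequantising multiplies the index back by the step to approximate the original value. This is a generic illustration of the principle, not the paper's specific technique.

```python
def quantise(values, step):
    """Lossy step: map each value to its nearest integer multiple index."""
    return [round(v / step) for v in values]

def dequantise(indices, step):
    """Approximate reconstruction: scale indices back by the step size."""
    return [i * step for i in indices]

coeffs = [12.3, -7.8, 0.4, 25.0]
q = quantise(coeffs, step=5)        # [2, -2, 0, 5]
rec = dequantise(q, step=5)         # [10, -10, 0, 25]
print(q, rec)
```

The reconstruction error per value is bounded by half the step size, which is the usual trade-off between compression ratio and fidelity.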
Alzheimer’s detection through neuro imaging and subsequent fusion for clinical diagnosis
In recent years, vast improvement has been observed in the field of medical research. Alzheimer's disease is the most common cause of dementia. Alzheimer's disease (AD) is a chronic disease with no cure, and it continues to pose a threat to millions of lives worldwide. The main purpose of this study is to detect the presence of AD from magnetic resonance imaging (MRI) scans through neuroimaging, and to fuse the MRI and positron emission tomography (PET) scans of the same patient to obtain a single image with more detailed information. Detection of AD is done by calculating the gray matter and white matter volumes of the brain; the ratio of these volumes then helps doctors decide whether the patient is affected by the disease. Image fusion of the MRI scan with the PET scan is carried out after the preliminary detection of AD. The main objective is to combine these two images into a single image that contains all the available information together. The proposed approach yields good results, with a peak signal-to-noise ratio of 60.6 dB, a mean square error of 0.0176, an entropy of 4.6, and a structural similarity index of 0.8.
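Three of the four quality metrics reported above (MSE, PSNR, and entropy) have standard definitions that can be sketched directly; SSIM is more involved and is omitted here. The images are represented as flat lists of pixel values for simplicity; this is a generic metric sketch, not the paper's evaluation code.

```python
import math
from collections import Counter

def mse(a, b):
    """Mean square error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * math.log10(peak ** 2 / m)

def entropy(pixels):
    """Shannon entropy (bits) of the pixel-value histogram."""
    n = len(pixels)
    counts = Counter(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

reference = [0, 128, 255, 64]
fused = [0, 130, 250, 64]
print(mse(reference, fused), psnr(reference, fused), entropy(fused))
```

Higher PSNR and lower MSE indicate that the fused image stays close to the reference, while entropy measures how much information the fused image carries.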
Smoothing of ultrasound images using a new selective average filter
Ultrasound images are strongly affected by speckle noise, making visual and computational analysis of the structures more difficult. Usually, the interference caused by this kind of noise reduces the efficiency of extraction and interpretation of the structural features of interest. In order to overcome this problem, a new method of selective smoothing based on average filtering and the radiation intensity of the image pixels is proposed. The main idea of this new method is to identify the pixels belonging to the borders of the structures of interest in the image, and then apply a reduced smoothing to these pixels, whilst applying more intense smoothing to the remaining pixels. Experimental tests were conducted using synthetic ultrasound images with speckle noise added and real ultrasound images of the female pelvic cavity. The new smoothing method is able to perform selective smoothing of the input images, enhancing the transitions between the different structures present. The results achieved are promising, as the evaluation performed shows that the developed method is more efficient at removing speckle noise from ultrasound images than other current methods. This improvement arises because the method adapts the filtering process to the image contents, thus avoiding the loss of relevant structural features in the input images.
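The core idea above (light smoothing at structure borders, heavier smoothing elsewhere) can be sketched in one dimension: a pixel whose neighbours differ sharply is treated as a border pixel and averaged over a small window, while other pixels get a wider window. The border test and window radii here are hypothetical stand-ins for the paper's intensity-based criterion.

```python
def moving_average(signal, i, radius):
    """Average of the values within `radius` of index i (clipped at the ends)."""
    lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
    window = signal[lo:hi]
    return sum(window) / len(window)

def selective_smooth(signal, edge_threshold=20, small=1, large=3):
    """Apply light smoothing near strong transitions, heavier smoothing elsewhere."""
    out = []
    for i, v in enumerate(signal):
        left = signal[max(0, i - 1)]
        right = signal[min(len(signal) - 1, i + 1)]
        is_edge = max(abs(v - left), abs(v - right)) > edge_threshold
        out.append(moving_average(signal, i, small if is_edge else large))
    return out
```

A flat region is averaged over the wide window (and so stays flat), while pixels adjacent to a large jump keep a narrow window, preserving the transition better than uniform averaging would.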
Multi-Cross Sampling and Frequency-Division Reconstruction for Image Compressed Sensing
AAAI Technical Track on Computer Vision IV. The lecture presentation, slides, conference paper and transcript are available online at: https://underline.io/lecture/92149-multi-cross-sampling-and-frequency-division-reconstruction-for-image-compressed-sensing. Deep Compressed Sensing (DCS) has attracted considerable interest due to its superior quality and speed compared to traditional algorithms. However, current approaches employ simplistic convolutional downsampling to acquire measurements, making it difficult to retain high-level features of the original signal for better image reconstruction. Furthermore, these approaches often overlook the presence of both high- and low-frequency information within the network, despite their critical role in achieving high-quality reconstruction. To address these challenges, we propose a novel Multi-Cross Sampling and Frequency-Division Network (MCFD-Net) for image CS. The Dynamic Multi-Cross Sampling (DMCS) module, the sampling network of MCFD-Net, incorporates pyramid cross convolution and dual-branch sampling with multi-level pooling. Additionally, it introduces an attention mechanism between perception blocks to enhance adaptive learning. In the second, deep-reconstruction stage, we design a Frequency-Division Reconstruction Module (FDRM). This module employs a discrete wavelet transform to extract high- and low-frequency information from images. It then applies multi-scale convolution and self-similarity attention compensation separately to both types of information before merging the output reconstruction results. MCFD-Net integrates the DMCS and FDRM modules to construct an end-to-end learning network. Extensive CS experiments conducted on multiple benchmark datasets demonstrate that MCFD-Net outperforms state-of-the-art approaches, while also exhibiting superior noise robustness.
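The frequency-division step in the FDRM rests on the discrete wavelet transform, which splits a signal into low- and high-frequency bands. A minimal single-level 1-D Haar transform illustrates that split and its exact inverse; MCFD-Net's actual module applies a 2-D transform inside a learned network, which this sketch does not attempt to reproduce.

```python
def haar_dwt_1d(signal):
    """Single-level 1-D Haar transform: pairwise averages (low band)
    and pairwise half-differences (high band). Length must be even."""
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return low, high

def haar_idwt_1d(low, high):
    """Exact inverse: each (low, high) pair regenerates two samples."""
    out = []
    for l, h in zip(low, high):
        out.extend([l + h, l - h])
    return out

low, high = haar_dwt_1d([2, 4, 6, 8])
print(low, high)                      # [3.0, 7.0] [-1.0, -1.0]
print(haar_idwt_1d(low, high))        # [2.0, 4.0, 6.0, 8.0]
```

The low band carries the smooth (low-frequency) content and the high band the detail (high-frequency) content, which is precisely the separation the FDRM processes with different branches before merging.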