    DTCWTASODCNN: DTCWT based Weighted Fusion Model for Multimodal Medical Image Quality Improvement with ASO Technique & DCNN

    Medical image fusion approaches are sub-categorized into single-mode and multimodal fusion strategies. The limitations of single-mode fusion can be resolved by introducing a multimodal approach, which integrates two or more medical images of the same or different modalities, aiming to enhance image quality while preserving image information. Hence, this paper introduces a new way to meld multimodal medical images using a weighted fusion model based on the Dual Tree Complex Wavelet Transform (DTCWT). Two medical images are considered for the fusion process, and the DTCWT is applied to each to generate sub-band partitions of the source images. A Rényi entropy-based weighted fusion model is then used to combine the weighted DTCWT coefficients of the images. The final fusion step is carried out using an Atom Search Sine Cosine Algorithm (ASSCA)-based Deep Convolutional Neural Network (DCNN). Simulation results demonstrate that the developed fusion model achieves superior outcomes on key indicators, namely Mutual Information (MI), Peak Signal to Noise Ratio (PSNR), and Root Mean Square Error (RMSE), with values of 1.554, 40.45 dB, and 5.554, respectively.
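
    As a rough illustration of the sub-band weighting step, the sketch below decomposes two registered grayscale images with the open-source dtcwt Python package and blends the coefficients with Rényi-entropy-derived weights. This is a minimal sketch, not the authors' implementation: the entropy order alpha, the histogram binning, and the per-level blending rule are assumptions, and the ASSCA-trained DCNN stage is omitted.

```python
import numpy as np
import dtcwt

def renyi_entropy(x, alpha=2.0, bins=256):
    """Renyi entropy of coefficient magnitudes (order alpha=2 assumed)."""
    hist, _ = np.histogram(np.abs(x).ravel(), bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def fuse_dtcwt(img1, img2, nlevels=4, alpha=2.0):
    """Fuse two registered grayscale images via entropy-weighted DTCWT coefficients."""
    t = dtcwt.Transform2d()
    p1 = t.forward(img1.astype(float), nlevels=nlevels)
    p2 = t.forward(img2.astype(float), nlevels=nlevels)

    highpasses = []
    for h1, h2 in zip(p1.highpasses, p2.highpasses):
        e1, e2 = renyi_entropy(h1, alpha), renyi_entropy(h2, alpha)
        w = e1 / (e1 + e2 + 1e-12)          # weight the more informative band higher
        highpasses.append(w * h1 + (1 - w) * h2)

    lowpass = 0.5 * (p1.lowpass + p2.lowpass)  # plain average for the lowpass band
    return t.inverse(dtcwt.Pyramid(lowpass, tuple(highpasses)))
```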

    HoEnTOA: Holoentropy and Taylor Assisted Optimization based Novel Image Quality Enhancement Algorithm for Multi-Focus Image Fusion

    In machine vision and image processing applications, multi-focus image fusion has gained prominent exposure. Image fusion merges information extracted from two or more source images to produce a solitary image that is more informative and better suited for computer processing and visual perception. In this paper the authors devise a novel image quality enhancement algorithm that fuses multi-focus images, termed HoEnTOA. Initially, the contourlet transform is applied to both input images to generate four sub-bands for each. Holoentropy, together with the proposed HoEnTOA optimization, is then used to fuse the sub-bands; the developed HoEnTOA integrates the Taylor series with ASSCA. After fusion, the inverse contourlet transform is applied to obtain the final fused image. The proposed HoEnTOA performs image fusion effectively, demonstrating better performance on five metrics: a minimum Root Mean Square Error of 3.687, a highest universal quality index of 0.984, a maximum Peak Signal to Noise Ratio of 42.08 dB, a maximal structural similarity index measurement of 0.943, and a maximum mutual information of 1.651.
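
    Contourlet implementations are uncommon in Python, so the sketch below substitutes PyWavelets' one-level 2-D DWT, which likewise yields four sub-bands per image; that swap, and the use of plain Shannon entropy as a stand-in for holoentropy, are assumptions for illustration. The Taylor/ASSCA optimization of the weights is omitted.

```python
import numpy as np
import pywt

def band_entropy(band, bins=256):
    """Shannon entropy of a sub-band, used here as a proxy for holoentropy."""
    hist, _ = np.histogram(np.abs(band).ravel(), bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def fuse_multifocus(img1, img2, wavelet="db2"):
    """Fuse two registered multi-focus images by entropy-weighted sub-bands."""
    # One-level 2-D DWT gives four sub-bands: approximation + 3 detail bands.
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img1.astype(float), wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img2.astype(float), wavelet)

    def blend(b1, b2):
        e1, e2 = band_entropy(b1), band_entropy(b2)
        w = e1 / (e1 + e2 + 1e-12)
        return w * b1 + (1 - w) * b2

    fused = (blend(cA1, cA2), (blend(cH1, cH2), blend(cV1, cV2), blend(cD1, cD2)))
    return pywt.idwt2(fused, wavelet)  # inverse transform yields the fused image
```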

    WeAbDeepCNN: Weighted Average Model and ASSCA based Two Level Fusion Scheme For Multi-Focus Images

    Image fusion merges several moderately focused or non-focused images of a single scene to generate a fully focused, clear, and sharp image. The goal of this research is to discover the focused regions of the different source images and then combine them into a solitary image. However, image fusion faces several issues, including contrast reduction, block artifacts, and artificial edges. To solve these issues, a two-level fusion scheme has been devised that combines a weighted average model with an Atom Search Sine Cosine Algorithm-based Deep Convolutional Neural Network (ASSCA-based Deep CNN), abbreviated "WeAbDeepCNN". Two images are fed to the initial fusion module, which uses the weighted average model; fusion scores are generated and their values determined in an optimal manner. Final fusion is then performed using the proposed ASSCA-based Deep CNN, trained with the proposed ASSCA, which is devised by combining the Sine Cosine Algorithm (SCA) with Atom Search Optimization (ASO). The proposed ASSCA-based Deep CNN offers improved performance in contrast to current state-of-the-art techniques, with a highest mutual information (MI) of 1.52, a highest Peak Signal to Noise Ratio (PSNR) of 32.55 dB, and a minimum Root Mean Square Error (RMSE) of 7.59.
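
    A minimal sketch of the first-level weighted-average fusion follows, with a bare-bones Sine Cosine Algorithm searching for the blend weight. The atom-search component of ASSCA and the Deep CNN second stage are omitted, and the variance-of-the-fused-image fitness is an assumption chosen only to make the sketch self-contained.

```python
import numpy as np

def fused(w, img1, img2):
    """First-level fusion: weighted average of two registered source images."""
    return w * img1 + (1.0 - w) * img2

def sca_weight(img1, img2, agents=10, iters=50, a=2.0, seed=0):
    """Bare-bones Sine Cosine Algorithm searching the blend weight in [0, 1].

    Fitness is the variance of the fused image (sharper focus -> higher
    variance); the paper's full ASSCA adds an atom-search term.
    """
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 1.0, agents)           # candidate weights
    fit = np.array([fused(w, img1, img2).var() for w in pos])
    best = pos[fit.argmax()]

    for t in range(iters):
        r1 = a - t * (a / iters)                  # step amplitude decays over time
        for i in range(agents):
            r2, r3, r4 = rng.uniform(0, 2 * np.pi), 2 * rng.uniform(), rng.uniform()
            step = r1 * (np.sin(r2) if r4 < 0.5 else np.cos(r2))
            pos[i] = np.clip(pos[i] + step * abs(r3 * best - pos[i]), 0.0, 1.0)
        fit = np.array([fused(w, img1, img2).var() for w in pos])
        if fit.max() > fused(best, img1, img2).var():
            best = pos[fit.argmax()]
    return best
```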

    Two-photon excitation fluorescence in ophthalmology: safety and improved imaging for functional diagnostics

    Two-photon excitation fluorescence (TPEF) is emerging as a powerful imaging technique with superior penetration in scattering media, allowing functional imaging of biological tissues at a subcellular level. TPEF is commonly used in cancer diagnostics, as it enables the direct observation of metabolism within living cells. The technique is now widely used in various medical fields, including ophthalmology. The eye is a complex and delicate organ with multiple layers of different cell types and tissues. Although this structure is ideal for visual perception, it generates aberrations in TPEF eye imaging. However, adaptive optics can now compensate for these aberrations, allowing improved imaging of the eyes of animal models of human disease. The eye is naturally built to filter out harmful wavelengths, but these wavelengths can be mimicked, and thereby utilized in diagnostics, via two-photon (2Ph) excitation. Recent advances in laser-source manufacturing have made it possible to keep the exposure of in vivo measurements within safety limits while achieving signals sufficient for functional imaging, making TPEF a viable option for human application. This review explores recent advances in wavefront-distortion correction in animal models and the safety of TPEF use on human subjects, both of which make TPEF a potentially powerful tool for ophthalmological diagnostics.

    Haar Adaptive Taylor-ASSCA-DCNN: A Novel Fusion Model for Image Quality Enhancement

    In medical imaging, image fusion has prominent exposure in extracting complementary information from different medical image modalities, whose combined use has markedly improved treatment information; each modality carries specific data about the subject being imaged. Various techniques have been devised to solve the fusion problem, but their major issue is the loss of key features in the fused image, which also leads to unwanted artefacts. This paper devises an adaptive optimization-driven deep fusion model for medical images to obtain the essential information for diagnosis and research. The proposed fusion scheme, based on the Haar wavelet and an Adaptive Taylor ASSCA Deep CNN, develops fusion rules to amalgamate pairs of Magnetic Resonance Imaging (MRI) modalities such as T1 and T2. Experimental analysis shows that the proposed method preserves edge and component-related information and improves tumour detection efficiency. Two MRI images are taken as input, and the Haar wavelet is applied to both to transform the images into low- and high-frequency sub-bands. Fusion is then performed with a correlation-based weighted model, and the output is passed to a final fusion stage executed by a Deep Convolutional Neural Network (DCNN). The Deep CNN is trained using the Adaptive Taylor Atom Search Sine Cosine Algorithm (Adaptive Taylor ASSCA), obtained by integrating an adaptive concept into Taylor ASSCA. The proposed Adaptive Taylor ASO + SCA-based Deep CNN attained the highest MI of 1.672532 using the db2 wavelet for image pair 1, the highest PSNR of 42.20993 dB using the db2 wavelet for image pair 5, and the lowest RMSE of 5.204896 using the sym2 wavelet for image pair 5.
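
    The sketch below illustrates the Haar-domain first stage on a registered T1/T2 pair using PyWavelets. The particular correlation-based weighting rule (energy weight shrunk toward a plain average when the sub-bands are highly correlated) is an assumption for illustration, and the DCNN final-fusion stage is omitted.

```python
import numpy as np
import pywt

def corr_weight(b1, b2):
    """Weight from Pearson correlation between matching sub-bands: the less
    the bands agree, the more the higher-energy band is favored."""
    r = np.corrcoef(b1.ravel(), b2.ravel())[0, 1]
    e1, e2 = np.sum(b1 ** 2), np.sum(b2 ** 2)
    w = e1 / (e1 + e2 + 1e-12)
    # Shrink toward a plain average when the bands are highly correlated.
    return 0.5 + (w - 0.5) * (1.0 - max(r, 0.0))

def fuse_haar(mri_t1, mri_t2):
    """Fuse a registered T1/T2 MRI pair in the Haar wavelet domain."""
    c1 = pywt.dwt2(mri_t1.astype(float), "haar")   # (lowpass, (3 highpass bands))
    c2 = pywt.dwt2(mri_t2.astype(float), "haar")

    def blend(b1, b2):
        w = corr_weight(b1, b2)
        return w * b1 + (1 - w) * b2

    fused = (blend(c1[0], c2[0]),
             tuple(blend(h1, h2) for h1, h2 in zip(c1[1], c2[1])))
    return pywt.idwt2(fused, "haar")
```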

    Role of endoscopic ultrasound-guided fine-needle aspiration in adrenal lesions: analysis of 32 patients

    Objective: Endoscopic ultrasound-guided fine-needle aspiration cytology (EUS-FNAC) is a precise and safe technique that provides both radiological and pathological diagnosis with a good diagnostic yield and minimal adverse events. EUS-FNAC has led to a remarkable increase in the detection rate of incidentalomas found during radiologic staging or follow-up of various malignancies or unrelated conditions. Aims: We conducted this preliminary study to evaluate the role of EUS-FNA in diagnosing and classifying adrenal lesions and its clinical impact, and to compare the outcomes with previously published literature. Materials and Methods: We included 32 consecutive cases (both retrospective and prospective) of EUS-guided adrenal aspirate performed over a period of 3.3 years. The indications for the aspirate, in decreasing order, were metastasis (most commonly carcinoma of the gall bladder) > primary adrenal mass > disseminated tuberculosis > pyrexia of unknown origin. On EUS, 28 cases revealed a space-occupying lesion or mass (two cases bilateral) and four cases revealed diffuse enlargement (two cases bilateral), with a mean size of 21 mm. Results: The cytology reports were benign adrenal aspirate (43.8%), metastatic adenocarcinoma (15.6%), histoplasmosis (9.4%), tuberculosis (9.4%), round cell tumor (6.2%), adrenocortical carcinoma (3.1%), and descriptive (3.1%); three cases (9.4%) yielded inadequate samples. The TNM staging was altered in 22.23% of the cases by the result of the adrenal aspirate. Conclusions: EUS-FNA of the adrenal gland is a safe, quick, and sensitive real-time diagnostic technique that requires an integrated approach by the clinician, endoscopist, and cytopathologist for high diagnostic precision. Although the role of EUS-FNA for the right adrenal is not well described, we found adequate sample yield in all four patients who underwent the procedure.

    Predicting Student Performance with Adaptive Aquila Optimization-based Deep Convolution Neural Network

    Predicting student performance is a major problem in enhancing educational procedures. A student's performance may be influenced by several factors, such as parents' occupation, gender, and average scores obtained in prior years. Predicting student performance is a challenging task that can help educational staff and students of educational institutions follow students' progress in their academic activities. Improving student performance and educational quality are among the most vital responsibilities of educational organizations, so it is essential for an educational organization to predict the performance of its students. Existing methods utilize only previous student performance for prediction, without including other significant student behaviors. To address these problems, a proficient model is proposed for predicting student performance using the proposed Adaptive Aquila Optimization-based Deep Convolutional Neural Network (DCNN). In this process, data transformation is first performed using the Yeo-Johnson transformation. Subsequently, feature selection is performed using the Fisher score to identify the most relevant features, after which data augmentation techniques are applied to enhance the dataset. Finally, student performance is predicted using a DCNN whose network parameters are fine-tuned with the Adaptive Aquila Optimizer (AAO), ensuring the network delivers the best possible predictions of student outcomes. The proposed AAO-based DCNN achieved minimal error values of Mean Square Error, Root Mean Square Error, Mean Absolute Error, Mean Absolute Percentage Error, Mean Absolute Relative Error, Mean Squared Relative Error, and Root Mean Squared Relative Error.
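
    A minimal sketch of the preprocessing front end described above, assuming scikit-learn for the Yeo-Johnson step and the standard Fisher-score definition (between-class over within-class variance per feature). The augmentation, DCNN, and Aquila-based tuning stages are omitted, and the parameter k is illustrative.

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

def fisher_scores(X, y):
    """Fisher score per feature: between-class over within-class variance."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mean_all) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

def preprocess(X, y, k=10):
    """Yeo-Johnson transform, then keep the k features with highest Fisher score."""
    X_t = PowerTransformer(method="yeo-johnson").fit_transform(X)
    top_k = np.argsort(fisher_scores(X_t, y))[::-1][:k]
    return X_t[:, top_k], top_k
```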