10 research outputs found

    Image Simulation in Remote Sensing

    Remote sensing is actively researched in the fields of environment, military, and urban planning through technologies such as monitoring of natural climate phenomena on Earth, land cover classification, and object detection. Recently, satellites equipped with observation cameras of various resolutions have been launched, and remote sensing images are acquired by a variety of observation methods, including cluster satellites. However, the atmospheric and environmental conditions present in the observed scene degrade the quality of images or interrupt the capture of the Earth's surface information. One way to overcome this is to generate synthetic images through image simulation. Synthetic images can be generated using statistical or knowledge-based models, or using spectral and optics-based models, to stand in for images that could not be acquired at the required time. The proposed methodologies provide economical utility in the generation of image training material and time-series data through image simulation. The six published articles cover various topics and applications central to remote sensing image simulation. Although submission to this Special Issue is now closed, the need remains for further in-depth research and development on image simulation at high spatial and spectral resolution, sensor fusion, and colorization. I would like to take this opportunity to express my most profound appreciation to the MDPI Book staff, the editorial team of the Applied Sciences journal, especially Ms. Nimo Lang, the assistant editor of this Special Issue, the talented authors, and the professional reviewers.

    Pan-sharpening Using Spatial-frequency Method

    Over the years, researchers have formulated various techniques for pan-sharpening that attempt to minimize spectral distortion, i.e., to retain the maximum spectral fidelity of the multispectral (MS) images. On the other hand, if the pan-sharpened image is used only to produce maps for better visual interpretation, spectral distortion is of less concern, as the goal is to produce images with high contrast. To solve the color distortion problem, methods based on the spatial-frequency domain have been introduced; they have demonstrated superior performance over spatial-domain methods in producing pan-sharpened images with high spectral fidelity.
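    As a hedged illustration of the spatial-frequency idea (a generic high-pass injection sketch, not the method of any particular paper; the Gaussian cutoff `sigma` and injection `gain` are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def highpass_pansharpen(ms, pan, sigma=2.0, gain=1.0):
    """Toy spatial-frequency pan-sharpening: inject PAN high frequencies into MS.

    ms    : (H/r, W/r, B) low-resolution multispectral image, float in [0, 1]
    pan   : (H, W) high-resolution panchromatic band, float in [0, 1]
    sigma : low-pass cutoff of the Gaussian (illustrative)
    gain  : injection strength for the detail layer (illustrative)
    """
    r = pan.shape[0] / ms.shape[0]            # spatial resolution ratio
    ms_up = zoom(ms, (r, r, 1), order=3)      # upsample MS to the PAN grid
    pan_low = gaussian_filter(pan, sigma)     # low-frequency part of PAN
    detail = pan - pan_low                    # high-frequency detail to inject
    fused = ms_up + gain * detail[..., None]  # add the detail to every band
    return np.clip(fused, 0.0, 1.0)

# usage with random stand-in data
ms = np.random.rand(64, 64, 4)
pan = np.random.rand(256, 256)
print(highpass_pansharpen(ms, pan).shape)  # (256, 256, 4)
```

    The single Gaussian low-pass is the simplest possible frequency separation; band-dependent gains or a wavelet-domain split would be natural refinements.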

    DTCWTASODCNN: DTCWT based Weighted Fusion Model for Multimodal Medical Image Quality Improvement with ASO Technique & DCNN

    Medical image fusion approaches are sub-categorized into single-mode and multimodal fusion strategies. The limitations of single-mode fusion can be resolved by a multimodal fusion approach, which integrates two or more medical images of similar or dissimilar modalities with the aim of enhancing image quality while preserving image information. Hence, this paper introduces a new way to fuse multimodal medical images using a weighted fusion model based on the Dual-Tree Complex Wavelet Transform (DTCWT). Two medical images are considered for the fusion process, and the DTCWT is applied to each to generate four sub-band partitions of the source images. A Renyi-entropy-based weighted fusion model is then used to combine the weighted DTCWT coefficients of the images. The final fusion step is carried out using an Atom Search Sine Cosine Algorithm (ASSCA)-based Deep Convolutional Neural Network (DCNN). Simulation results demonstrate that the developed fusion model attains superior outcomes on key indicators, namely Mutual Information (MI), Peak Signal-to-Noise Ratio (PSNR), and Root Mean Square Error (RMSE), with values of 1.554, 40.45 dB, and 5.554, respectively.
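    A minimal sketch of the DTCWT fusion stage, assuming the open-source `dtcwt` Python package; a simple maximum-magnitude rule stands in for the paper's Renyi-entropy weights, and the ASSCA-tuned DCNN stage is omitted:

```python
import numpy as np
import dtcwt  # pip install dtcwt

def dtcwt_fuse(img_a, img_b, nlevels=4):
    """Fuse two registered grayscale images in the DTCWT domain.

    Highpass sub-bands: keep the coefficient with larger magnitude
    (a stand-in for the paper's Renyi-entropy weighting).
    Lowpass band: plain average.
    """
    t = dtcwt.Transform2d()
    pa = t.forward(img_a, nlevels=nlevels)
    pb = t.forward(img_b, nlevels=nlevels)

    fused_high = []
    for ha, hb in zip(pa.highpasses, pb.highpasses):
        pick_a = np.abs(ha) >= np.abs(hb)      # per-coefficient selection mask
        fused_high.append(np.where(pick_a, ha, hb))
    fused_low = 0.5 * (pa.lowpass + pb.lowpass)

    return t.inverse(dtcwt.Pyramid(fused_low, tuple(fused_high)))

# usage with random stand-in "modalities"
a = np.random.rand(128, 128)
b = np.random.rand(128, 128)
print(dtcwt_fuse(a, b).shape)  # (128, 128)
```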

    A review on the rule-based filtering structure with applications on computational biomedical images

    This review covers fundamental concepts of the rule-based filtering structure, which are crucial for understanding and discussing the principles behind fuzzy filter design procedures. A number of typical fuzzy multichannel filtering approaches are presented to clarify the different fuzzy filter designs and to compare different algorithms. In particular, in most practical applications (e.g., biomedical image analysis), the emphasis is placed primarily on fuzzy filtering algorithms, whose main advantages are the restoration of corrupted medical images and their interpretability, along with edge preservation and the retention of image information relevant for accurate diagnosis of diseases.
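    As a hedged illustration of the fuzzy filtering principle described above (a generic sketch, not a specific filter from the review), each neighbour of a pixel can be weighted by a Gaussian membership of its colour distance to the centre, so noisy outliers contribute little while edges are preserved:

```python
import numpy as np

def fuzzy_multichannel_filter(img, radius=1, sigma=30.0):
    """Toy fuzzy multichannel (colour) filter.

    Each neighbour receives a fuzzy membership weight exp(-(d/sigma)^2),
    where d is its Euclidean colour distance to the centre pixel.
    img : (H, W, C) float array; radius and sigma are illustrative.
    """
    h, w, c = img.shape
    pad = np.pad(img, ((radius, radius), (radius, radius), (0, 0)), mode="reflect")
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]  # local window
            centre = pad[y + radius, x + radius]
            d = np.linalg.norm(patch - centre, axis=-1)   # colour distances
            wgt = np.exp(-(d / sigma) ** 2)               # fuzzy memberships
            out[y, x] = (wgt[..., None] * patch).sum((0, 1)) / wgt.sum()
    return out

noisy = np.random.rand(32, 32, 3) * 255
print(fuzzy_multichannel_filter(noisy).shape)  # (32, 32, 3)
```

    The double loop keeps the logic explicit; a practical implementation would vectorize the window extraction.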

    Fusion of magnetic resonance and ultrasound images for endometriosis detection

    Endometriosis is a gynecologic disorder that typically affects women of reproductive age and is associated with chronic pelvic pain and infertility. In the context of pre-operative diagnosis and guided surgery, endometriosis is a typical example of a pathology that requires the use of both magnetic resonance (MR) and ultrasound (US) modalities. These modalities are used side by side because they contain complementary information. However, MR and US images have different spatial resolutions, fields of view and contrasts, and are corrupted by different kinds of noise, which creates significant challenges for their analysis by radiologists. Fusing MR and US images is a way of facilitating the task of medical experts and improving pre-operative diagnosis and surgical mapping. The objective of this PhD thesis is to propose a new automatic fusion method for MR and US images. First, we assume that the MR and US images to be fused are aligned, i.e., that there is no geometric distortion between them. We propose a fusion method for MR and US images that aims to combine the advantages of each modality, i.e., the good contrast and signal-to-noise ratio of the MR image and the good spatial resolution of the US image. The proposed algorithm is based on an inverse problem, performing super-resolution of the MR image and denoising of the US image. A polynomial function is introduced to model the relationship between the grey levels of the MR and US images. However, this fusion method is very sensitive to registration errors, and registration is a complicated task in practical applications. Thus, in a second step, we introduce a joint fusion and registration method for MR and US images. The proposed MR/US image fusion jointly performs super-resolution of the MR image and despeckling of the US image, and is able to account automatically for registration errors. A polynomial function links the ultrasound and MR images in the fusion process, while an appropriate similarity measure is introduced to handle the registration problem. The proposed registration is based on a non-rigid transformation combining a local elastic B-spline model with a global affine transformation. The fusion and registration operations are performed alternately, simplifying the underlying optimization problem. The interest of the joint fusion and registration is analyzed using synthetic and experimental phantom images.
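    The polynomial link between the grey levels of the two modalities can be illustrated with a small least-squares sketch; the degree-3 fit and the synthetic intensities below are assumptions for illustration, not the thesis's actual choices:

```python
import numpy as np

# toy co-registered intensity samples (stand-ins for real MR/US data)
rng = np.random.default_rng(0)
mr = rng.uniform(0.0, 1.0, 10_000)                                  # MR grey levels
us = 0.2 + 0.9 * mr - 0.4 * mr**2 + rng.normal(0, 0.02, mr.shape)   # synthetic US

# least-squares polynomial linking MR intensities to US intensities
coeffs = np.polyfit(mr, us, deg=3)
predict_us = np.poly1d(coeffs)

residual = us - predict_us(mr)
print(f"RMS residual of the polynomial link: {residual.std():.4f}")
```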

    A Multi-Modal Incompleteness Ontology model (MMIO) to enhance information fusion for image retrieval

    This research has been supported in part by the National Science and Technology Development Agency (NSTDA), Thailand. Project No: SCH-NR2011-851

    Non-Standard Imaging Techniques

    The first objective of the thesis is to investigate the problem of reconstructing a small-scale object (a few millimetres or smaller) in 3D. In Chapter 3, we show how this problem can be solved effectively by a new multifocus multiview 3D reconstruction procedure, which includes a new fixed-lens multifocus image capture and a calibrated image registration technique using an analytic homography transformation. Experimental results on real and synthetic images demonstrate the effectiveness of the proposed solutions, showing that both the fixed-lens image capture and multifocus stacking with calibrated image alignment significantly reduce the errors in the camera poses and produce more complete 3D reconstructed models than conventional moving-lens image capture and multifocus stacking.

    The second objective of the thesis is modelling the dual-pixel (DP) camera. In Chapter 4, to understand the potential of the DP sensor for computer vision applications, we study the formation of the DP pair, which links the blur and the depth information. A mathematical DP model is proposed that can benefit depth estimation from blur. These explorations motivate us to propose an end-to-end DDDNet (DP-based Depth and Deblur Network) to jointly estimate the depth and restore the image. Moreover, we define a reblur loss, which reflects the relationship of the DP image formation process with depth information, to regularize our depth estimate during training. To meet the requirement of a large amount of training data, we propose the first DP image simulator, which allows us to create datasets with DP pairs from any existing RGBD dataset. As a side contribution, we collect a real dataset for further research. Extensive experimental evaluation on both synthetic and real datasets shows that our approach achieves competitive performance compared to state-of-the-art approaches.

    The third objective of this thesis is to tackle the multifocus image fusion problem, particularly for long multifocus image sequences. Multifocus image stacking/fusion produces an in-focus image of a scene from a number of partially focused images of that scene, in order to extend the depth of field (see the sketch after this abstract). One limitation of current state-of-the-art multifocus fusion methods is that they do not consider image registration/alignment before fusion; consequently, fusing unregistered multifocus images produces an in-focus image containing misalignment artefacts. In Chapter 5, we propose image registration by projective transformation before fusion to remove these artefacts. We also propose a method based on 3D deconvolution to retrieve the in-focus image, formulating multifocus image fusion as a 3D deconvolution problem. The proposed method achieves superior performance compared to state-of-the-art methods, and the proposed projective transformation for image registration is shown to improve the quality of the fused images. Moreover, we implement a multifocus simulator to generate synthetic multifocus data from any RGB-D dataset.

    The fourth objective of this thesis is to explore new ways to detect the polarization state of light. To achieve this, in Chapter 6, we investigate a new optical filter, namely an optical rotation filter, for detecting the polarization state from fewer images. The proposed method can estimate the polarization state using two images, one with the filter and one without. The accuracy of estimating the polarization parameters with the proposed method is comparable to that of the existing state-of-the-art method. In addition, the feasibility of detecting the polarization state using only one RGB image captured with the optical rotation filter is also demonstrated, by estimating the filter-free image from the filtered image using a generative adversarial network.
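    A hedged sketch of the multifocus stacking idea referenced in the third objective (a per-pixel sharpest-slice rule with a local Laplacian-energy focus measure, standing in for the thesis's registration-plus-3D-deconvolution formulation):

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def multifocus_stack(stack):
    """Fuse a registered multifocus stack by picking, per pixel, the slice
    with the highest local Laplacian energy (a common focus measure).

    stack : (N, H, W) grayscale images focused at different depths
    """
    # local energy of the Laplacian as a per-pixel sharpness score
    focus = np.stack([uniform_filter(laplace(im) ** 2, size=9) for im in stack])
    best = focus.argmax(axis=0)                       # index of sharpest slice
    return np.take_along_axis(stack, best[None], axis=0)[0]

stack = np.random.rand(5, 64, 64)
print(multifocus_stack(stack).shape)  # (64, 64)
```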

    NSCT‐PCNN image fusion based on image gradient motivation

    The pulse-coupled neural network (PCNN) is widely used in image processing because of its unique biological characteristics, which make it well suited to image fusion. When combined with the non-subsampled contourlet transform (NSCT), it helps overcome the difficulty of selecting coefficients for the sub-bands of the NSCT model. In the original model, however, only the grey values of image pixels are used as input, which fails to account for the sensitivity of human subjective vision to local image features. In this study, the improved pulse-coupled neural network model replaces the grey-scale value of the image with the weighted product of the image's gradient strength and its local phase coherence as the model input. Finally, compared with other multi-scale decomposition-based image fusion methods and other improved NSCT-PCNN algorithms, the algorithm presented in this study outperforms them in terms of objective criteria and visual appearance.
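    To make the PCNN mechanics concrete, the sketch below runs a simplified, commonly used PCNN iteration; all constants are illustrative assumptions, and the stimulus `S` would be the weighted product of gradient strength and local phase coherence rather than raw grey values, per the study:

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_map(S, iters=30, beta=0.2, aL=1.0, aT=0.3, VL=1.0, VT=20.0):
    """Simplified PCNN: returns how many times each neuron fired.

    S : (H, W) external stimulus, e.g. the gradient/phase-coherence product.
    All decay and gain constants are illustrative assumptions.
    """
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])                    # linking kernel
    L = np.zeros_like(S)                               # linking input
    T = np.ones_like(S)                                # dynamic threshold
    Y = np.zeros_like(S)                               # firing output
    fires = np.zeros_like(S)
    for _ in range(iters):
        L = np.exp(-aL) * L + VL * convolve(Y, W, mode="constant")
        U = S * (1.0 + beta * L)                       # modulated internal activity
        Y = (U > T).astype(float)                      # neurons that fire this step
        T = np.exp(-aT) * T + VT * Y                   # raise fired thresholds
        fires += Y
    return fires

S = np.random.rand(64, 64)
print(pcnn_firing_map(S).max())
```

    The accumulated firing map is what coefficient-selection rules typically compare across source images when fusing NSCT sub-bands.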