    Toward reduction of artifacts in fused images

    Most satellite image fusion methodologies at the pixel level introduce false spatial details, i.e. artifacts, in the resulting fused images. In many cases, these artifacts appear because image fusion methods do not consider the differences in roughness or textural characteristics between different land covers; they only consider the digital values associated with single pixels. This effect increases as the spatial resolution of the image increases. To minimize this problem, we propose a new paradigm based on local measurements of the fractal dimension (FD). Fractal dimension maps (FDMs) are generated for each of the source images (the panchromatic image and each band of the multi-spectral image) with the box-counting algorithm and a windowing process. The average of the source-image FDMs, previously indexed between 0 and 1, is used to discriminate the different land covers present in the satellite images. This paradigm has been applied through the fusion methodology based on the discrete wavelet transform (DWT), using the à trous algorithm (WAT). Two scenes registered by optical sensors on board the FORMOSAT-2 and IKONOS satellites were used to study the behaviour of the proposed methodology. The implementation of this approach, using the WAT method, allows the fusion process to adapt to the roughness and shape of the regions present in the images to be fused. This improves the quality of the fused images and their classification results when compared with the original WAT method.
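    The box-counting estimator the abstract relies on can be sketched as follows. This is a minimal illustration of box counting on a binary image, not the paper's full pipeline: the sliding-window FDM generation, the 0-1 indexing, and the WAT fusion itself are omitted, and all function and variable names are illustrative.

    ```python
    import numpy as np

    def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
        """Estimate the fractal dimension of a 2-D binary array by box counting."""
        counts = []
        for s in sizes:
            # Crop to a multiple of the box size, then tile into s-by-s boxes
            h = binary.shape[0] // s * s
            w = binary.shape[1] // s * s
            blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
            # Count boxes containing at least one foreground pixel
            counts.append(blocks.any(axis=(1, 3)).sum())
        # The FD is the slope of log(count) versus log(1/box size)
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    # Sanity check: a filled square is 2-dimensional
    img = np.ones((64, 64), dtype=bool)
    print(round(box_counting_dimension(img), 2))  # → 2.0
    ```

    In the paper this estimate would be computed inside a moving window over each source band, producing a per-pixel FD map rather than a single scalar.
    
    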

    Simultaneous in vivo positron emission tomography and magnetic resonance imaging

    Positron emission tomography (PET) and magnetic resonance imaging (MRI) are widely used in vivo imaging technologies with both clinical and biomedical research applications. The strengths of MRI include high-resolution, high-contrast morphologic imaging of soft tissues; the ability to image physiologic parameters such as diffusion and changes in oxygenation level resulting from neuronal stimulation; and the measurement of metabolites using chemical shift imaging. PET images the distribution of biologically targeted radiotracers with high sensitivity, but the images generally lack anatomic context and are of lower spatial resolution. Integration of these technologies permits the acquisition of temporally correlated data showing the distribution of PET radiotracers and MRI contrast agents or MR-detectable metabolites, registered to the underlying anatomy. An MRI-compatible PET scanner has been built for biomedical research applications that allows data from both modalities to be acquired simultaneously. Experiments demonstrate no effect of the MRI system on the spatial resolution of the PET system and a <10% reduction in the fraction of radioactive decay events detected by the PET scanner inside the MRI. The signal-to-noise ratio and uniformity of the MR images, with the exception of one particular pulse sequence, were little affected by the presence of the PET scanner. In vivo simultaneous PET and MRI studies were performed in mice. Proof-of-principle in vivo MR spectroscopy and functional MRI experiments were also demonstrated with the combined scanner.

    Stereo and ToF Data Fusion by Learning from Synthetic Data

    Time-of-Flight (ToF) sensors and stereo vision systems are both capable of acquiring depth information, but they have complementary characteristics and issues. A more accurate representation of the scene geometry can be obtained by fusing the two depth sources. In this paper, we present a novel framework for data fusion in which the contribution of the two depth sources is controlled by confidence measures that are jointly estimated using a Convolutional Neural Network. The two depth sources are fused while enforcing the local consistency of depth data, taking into account the estimated confidence information. The deep network is trained on a synthetic dataset, and we show that the classifier generalizes to different data, obtaining reliable estimations not only on synthetic data but also on real-world scenes. Experimental results show that the proposed approach increases the accuracy of depth estimation on both synthetic and real data and that it outperforms state-of-the-art methods.
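    The core idea of weighting each depth source by an estimated confidence can be sketched as a per-pixel weighted average. This is only an illustration of the weighting step: the paper's local-consistency enforcement and the CNN that actually produces the confidence maps are not reproduced here, and all names below are hypothetical.

    ```python
    import numpy as np

    def fuse_depth(d_stereo, d_tof, c_stereo, c_tof, eps=1e-6):
        """Confidence-weighted fusion of two depth maps.

        c_stereo and c_tof are per-pixel confidences in [0, 1]; in the paper
        they are jointly estimated by a CNN, here they are assumed given.
        eps avoids division by zero where both confidences vanish.
        """
        weight_sum = c_stereo + c_tof + eps
        return (c_stereo * d_stereo + c_tof * d_tof) / weight_sum

    # Toy example: two 2x2 depth maps with equal confidence everywhere,
    # so the fusion reduces to a plain average of the two sources
    d_stereo = np.array([[1.0, 2.0], [3.0, 4.0]])
    d_tof = np.array([[1.2, 2.2], [2.8, 4.0]])
    c_stereo = np.full((2, 2), 0.5)
    c_tof = np.full((2, 2), 0.5)
    fused = fuse_depth(d_stereo, d_tof, c_stereo, c_tof)
    ```

    In practice the confidence maps vary per pixel (e.g. stereo confidence drops in textureless regions, ToF confidence drops on dark or specular surfaces), which is what lets the fusion favour whichever sensor is reliable at each location.
    
    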