52 research outputs found

    Beyond-Wavelet Analysis and Its Applications

    Although the wavelet transform has achieved good results in fields such as data compression and denoising, the separable two-dimensional wavelet transform is not constructed directly: it is obtained either by applying a one-dimensional wavelet transform along the rows and then along the columns, or by building a two-dimensional transform directly from two separable one-dimensional basis functions. Mathematically, neither yields a true two-dimensional function. The support of the basis functions is extended from an interval to a square, so the basis functions have poor directional selectivity, which limits further applications of the wavelet transform. Moreover, the subsampling it employs blurs information during target extraction and significantly affects how that information can be used. It is well known that if a basis function matches the function being approximated well, the corresponding projection coefficients are large and the transform's energy is highly concentrated. Hence the wavelet transform represents smooth regions efficiently, but for strongly directional edges and textures in images the match is poor and the representation efficiency suffers. In higher dimensions, wavelet analysis cannot fully exploit the intrinsic geometric features of the data and is not an optimal, or "sparsest", representation. The goal and driving force of multiscale geometric analysis is precisely to develop a new, optimal representation for high-dimensional functions. To overcome the shortcomings of wavelet analysis, researchers have kept seeking improved methods, which we refer to collectively as beyond-wavelet analysis (Beyond Wavelet). Beyond-wavelet analysis denotes the family of transforms proposed in recent years to remedy the deficiencies of wavelet analysis, mostly built on wavelet techniques: the Curvelet, Ridgelet, Contourlet, Bandelet, Beamlet, Directionlet, Wedgelet, and Surfacelet transforms, also called X-lets (including the Wavelet). Supported by the National Natural Science Foundation of China (No. 60472081) and the Aeronautical Science Foundation of China (No. 05F07001).
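    The energy-concentration argument above can be made concrete with a minimal sketch, assuming NumPy and PyWavelets and a synthetic test image: it measures what fraction of total energy the largest-magnitude separable 2D wavelet coefficients capture. The wavelet, level, and keep fraction are illustrative choices, not taken from the paper.

```python
# Minimal sketch (assumes NumPy and PyWavelets): measure how much signal energy
# the largest-magnitude coefficients of a separable 2D wavelet transform capture.
import numpy as np
import pywt

def energy_concentration(image, wavelet="db2", level=3, keep_fraction=0.05):
    """Fraction of total energy captured by the top `keep_fraction` coefficients."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)   # separable 2D DWT
    arr, _ = pywt.coeffs_to_array(coeffs)                 # flatten to one array
    mags = np.sort(np.abs(arr).ravel())[::-1]             # sort by magnitude
    k = max(1, int(keep_fraction * mags.size))
    return np.sum(mags[:k] ** 2) / np.sum(mags ** 2)

# Example: a smooth surface versus a strongly directional (diagonal) edge.
x = np.linspace(0, 1, 256)
smooth = np.outer(x, x)                                   # smooth region
edge = (np.add.outer(x, x) > 1.0).astype(float)           # diagonal step edge
print("smooth region :", energy_concentration(smooth))
print("diagonal edge :", energy_concentration(edge))
```

    On inputs like these, the smooth surface typically concentrates its energy in far fewer coefficients than the diagonal edge, which is the motivation stated above for directional, beyond-wavelet bases.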

    Image Sparse Inpainting Based on Directional Wavelet Transform

    Image inpainting has important applications. As an advanced signal processing technique, sparse representation has already been used for image inpainting. However, traditional methods sparsify the image over a fixed, pre-specified basis that cannot adapt to the image, so their sparse representation capability is limited. This paper estimates the best geometric directions of the image from a reference image, so that the sparsifying transform adapts to the image geometry and provides a sparser representation. The sparse inpainting problem is solved by minimizing an l1-norm model. Experimental results show that, compared with the traditional 2D wavelet transform, the proposed method better preserves edges and texture and achieves a higher peak signal-to-noise ratio. Supported by the Open Fund of the Guangdong Provincial Key Laboratory of Digital Signal and Image Processing (54600321) and the Major Science and Technology Project for University-Industry Cooperation of Fujian Province (2011H6025).
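    A minimal sketch of l1-regularized inpainting with a known missing-pixel mask, solved by iterative soft-thresholding (ISTA), is given below. The paper adapts a directional wavelet to geometry estimated from a reference image; here a plain separable wavelet from PyWavelets stands in as the sparsifying transform, and the step size and regularization weight are illustrative.

```python
# Sketch of l1-norm inpainting: min ||W x||_1 subject to agreement with the
# observed pixels, solved by iterative soft-thresholding. Assumes NumPy and
# PyWavelets; pixel values roughly in [0, 1].
import numpy as np
import pywt

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def inpaint_l1(observed, mask, wavelet="db4", lam=0.05, n_iter=200):
    """observed: image with missing pixels zeroed; mask: 1 where pixels are known."""
    x = observed.copy()
    for _ in range(n_iter):
        # Gradient step on the data-fit term (mask is 0/1, so step size 1 is safe).
        x = x + mask * (observed - x)
        # Proximal step for the l1 term: soft-threshold the wavelet coefficients.
        coeffs = pywt.wavedec2(x, wavelet, level=3)
        arr, slices = pywt.coeffs_to_array(coeffs)
        arr = soft(arr, lam)
        x = pywt.waverec2(
            pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet)
        x = x[:observed.shape[0], :observed.shape[1]]   # crop possible padding
        # Keep the known pixels exactly.
        x = mask * observed + (1 - mask) * x
    return x
```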

    New Image Interpolation Method Based on Ramp Edge Model

    The classical interpolation method based on the ramp edge model treats all edges as strong edges, so weak edges are over-enhanced and distorted. To address this problem, a new image interpolation method based on the ramp edge model (NIIBRED) is proposed: it handles strong and weak edges with different strategies, accounts for the fact that edge width grows as the image is enlarged, and restores the enlarged image accordingly. Experimental results show that NIIBRED produces more natural and sharper edges in the enlarged image and achieves better texture. Supported by the National Natural Science Foundation of China (60472081) and the Aeronautical Science Foundation of China (05F07001).
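    The strong/weak edge split mentioned above could look roughly like the sketch below; the classification rule and thresholds are hypothetical illustrations, not the paper's actual criterion.

```python
# Hypothetical illustration of splitting edge pixels into strong and weak
# classes by gradient magnitude, so the two classes can be treated differently.
import numpy as np

def classify_edges(image, strong_thresh=0.25, edge_thresh=0.05):
    """Return boolean masks of strong and weak edge pixels (thresholds illustrative)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    mag = mag / (mag.max() + 1e-12)          # normalize to [0, 1]
    edges = mag > edge_thresh                # any noticeable gradient
    strong = edges & (mag >= strong_thresh)
    weak = edges & (mag < strong_thresh)
    return strong, weak
```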

    A Novel Video Denoising Method with 3D Context Model Based on Surfacelet Transform

    We propose a novel video denoising method that applies a 3D context model in the Surfacelet transform domain (3DCMST). The Surfacelet transform (ST) is a new 3D transform with multidirectional decomposition, anisotropy, and low redundancy. Based on the distribution of signal and noise coefficients in the ST domain, the 2D context model is extended to 3D: the ST coefficients are divided into several blocks according to their energy distribution, and each block has its own energy estimate and threshold. Experimental results show that the proposed method suppresses noise markedly better than hierarchical 2D denoising methods and other existing 3D methods, improving the PSNR of the denoised video by about 2 dB. Visually, the method removes noise while preserving image detail, keeps moving objects smooth, and effectively eliminates the trailing and flickering artifacts of traditional algorithms; it is especially suitable for video with intense motion and rich texture. Supported by the National Natural Science Foundation of China (No. 60472081) and the Aeronautical Science Foundation of China (No. 05F07001).
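    A simplified sketch of the grouping-and-thresholding step described above is given below. A Surfacelet transform is not implemented here; the sketch operates on an arbitrary 3D coefficient array, groups coefficients by the energy of their local 3D context, and soft-thresholds each group with its own threshold. The BayesShrink-style threshold rule is an assumption for illustration, not the paper's exact estimator.

```python
# Sketch of context-model-style thresholding on a 3D coefficient array.
# Assumes NumPy and SciPy; `coeffs` would be one 3D subband of a 3D transform.
import numpy as np
from scipy.ndimage import uniform_filter

def context_threshold_3d(coeffs, sigma, n_groups=4):
    """coeffs: 3D array of transform coefficients; sigma: noise std estimate."""
    # Local context energy: mean of squared coefficients in a 3x3x3 neighbourhood.
    context = uniform_filter(coeffs ** 2, size=3)
    # Split coefficients into groups of roughly equal population by context energy.
    edges = np.quantile(context, np.linspace(0, 1, n_groups + 1))
    out = np.zeros_like(coeffs)
    for g in range(n_groups):
        mask = (context >= edges[g]) & (context <= edges[g + 1])
        if not np.any(mask):
            continue
        # Per-group energy estimate and a BayesShrink-style threshold (assumption).
        energy = np.mean(coeffs[mask] ** 2)
        signal_std = np.sqrt(max(energy - sigma ** 2, 1e-12))
        t = sigma ** 2 / signal_std
        out[mask] = np.sign(coeffs[mask]) * np.maximum(np.abs(coeffs[mask]) - t, 0.0)
    return out
```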

    A Greedy-Operator Genetic Algorithm for the Competition Event Ordering Problem

    Abstract: The problem of ordering competition events is transformed into the traveling salesman problem (TSP) from graph theory and solved with the relatively mature genetic algorithm for the TSP, which prevents the search from becoming trapped in local optima. To address the slow convergence of the genetic algorithm, the algorithm is improved by introducing a greedy crossover operator to speed up convergence, yielding a good result with a total conflict count of 8 person-times. In the soundness analysis, the strengths and weaknesses of the algorithm are argued theoretically.
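    A minimal sketch of a genetic algorithm with a greedy crossover operator on a TSP-style tour is given below. The paper's specific conflict-count objective and encoding of competition events are not reproduced; plain Euclidean tour length stands in as the cost, and population and rate settings are illustrative.

```python
# Sketch of a GA for the TSP with a greedy crossover: the child always takes
# the nearer unvisited successor offered by either parent. Assumes NumPy.
import random
import numpy as np

def tour_length(tour, dist):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def greedy_crossover(p1, p2, dist):
    n = len(p1)
    succ1 = {p1[i]: p1[(i + 1) % n] for i in range(n)}
    succ2 = {p2[i]: p2[(i + 1) % n] for i in range(n)}
    current = p1[0]
    child, visited = [current], {current}
    while len(child) < n:
        candidates = [c for c in (succ1[current], succ2[current]) if c not in visited]
        if candidates:
            nxt = min(candidates, key=lambda c: dist[current, c])   # greedy choice
        else:
            nxt = random.choice([c for c in p1 if c not in visited])
        child.append(nxt)
        visited.add(nxt)
        current = nxt
    return child

def ga_tsp(dist, pop_size=50, generations=300, mutation_rate=0.1):
    n = dist.shape[0]
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = greedy_crossover(a, b, dist)
            if random.random() < mutation_rate:          # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda t: tour_length(t, dist))

# Usage: random cities with symmetric Euclidean distances.
pts = np.random.rand(20, 2)
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
best = ga_tsp(dist)
print("best tour length:", tour_length(best, dist))
```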

    Optimized Local Superposition in Wireless Sensor Networks with t-average-mutual-coherence

    Compressed sensing (CS) is a new technique for recovering sparse data from undersampled measurements and shows great potential for reducing energy consumption in sensor networks. First, a basic global superposition model is proposed to obtain measurements of the sensor data, in which the sampling matrix is modeled as the channel impulse response (CIR) matrix and the sparsifying matrix is expressed as the distributed wavelet transform (DWT). However, both the sampling and sparsifying matrices depend on the locations of the sensors, so this model is highly coherent; this violates the assumptions of CS and easily produces large recovery errors. In this paper, to reduce the coherence, we propose controlling the transmit power of some nodes with the help of the t-average mutual coherence, which greatly improves recovery quality. Finally, to make the approach more realistic and energy-efficient, the CIR superposition is restricted to local clusters. Two key parameters, the radius of the power-control region and the radius of the local clusters, are optimized based on coherence and resource considerations in sensor networks. Simulation results demonstrate that the proposed scheme provides high recovery quality for networked data and verify that the t-average mutual coherence is a good criterion for optimizing the performance of CS in our scenario. Qualcomm-Tsinghua-Xiamen University Joint Research Program; National Natural Science Foundation of China under grant 61172097; Fellowship of Postgraduates' Oversea Study Program for Building High-Level Universities from the China Scholarship Council.
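    The t-average mutual coherence used as the optimization criterion above can be sketched as follows: for the equivalent dictionary (sampling matrix times sparsifying matrix), it is the average magnitude of the normalized off-diagonal Gram entries that exceed the threshold t. The threshold value and the random matrices in the usage example are illustrative, not the paper's CIR or DWT matrices.

```python
# Sketch of the t-average mutual coherence of an equivalent dictionary D = A @ Psi.
import numpy as np

def t_average_mutual_coherence(A, Psi, t=0.2):
    D = A @ Psi
    D = D / (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)   # unit-norm columns
    G = np.abs(D.T @ D)                                          # Gram magnitudes
    off = G[~np.eye(G.shape[0], dtype=bool)]                     # drop the diagonal
    selected = off[off >= t]
    return selected.mean() if selected.size else 0.0

# Usage: a random sampling matrix and an orthonormal sparsifying basis.
rng = np.random.default_rng(0)
A = rng.standard_normal((32, 128))
Psi = np.linalg.qr(rng.standard_normal((128, 128)))[0]
print(t_average_mutual_coherence(A, Psi))
```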

    A Novel Video Denoising Method with a 3D Context Model Based on the Surfacelet Transform

    We propose a novel video denoising method that applies a 3D context model in the Surfacelet transform domain (3DCMST). The Surfacelet transform (ST) is a new 3D transform with multidirectional decomposition, anisotropy, and low redundancy. Based on the distribution of signal and noise coefficients in the ST domain, the 2D context model is extended to 3D: the ST coefficients are divided into several blocks according to their energy distribution, and each block has its own energy estimate and threshold. Experimental results show that the proposed method suppresses noise markedly better than hierarchical 2D denoising methods and other existing 3D methods, improving the PSNR of the denoised video by about 2 dB. Visually, the method removes noise while preserving image detail, keeps moving objects smooth, and effectively eliminates the trailing and flickering artifacts of traditional algorithms; it is especially suitable for video with intense motion and rich texture. Keywords: video denoising; Surfacelet transform; 3D context model; directional filter bank. Supported by the National Natural Science Foundation of China (No. 60472081) and the Aeronautical Science Foundation of China (No. 05F07001).

    Image Fusion Algorithm Based on Features Motivated Multi-channel Pulse Coupled Neural Networks

    The pulse coupled neural network (PCNN) is an artificial neural network inspired by the mammalian visual cortex. Owing to the coupling links between neurons, the PCNN makes good use of local information and has therefore been successfully employed in image fusion. However, in the traditional PCNN for image fusion, the value of each pixel is used to motivate the corresponding neuron. In this paper, image features of each pixel, e.g. gradient and local energy, are used to motivate the neurons and generate firing maps, each corresponding to one type of feature. Furthermore, a new multi-channel PCNN is presented to combine these firing maps via a weighting function that measures the contribution of each feature to the quality of the fused image. Finally, when the firing times of the source images are compared, the pixels with the maximum firing times are selected as the pixels of the fused image. Experimental results demonstrate that the proposed algorithm outperforms wavelet-based and wavelet-PCNN-based fusion algorithms. Supported by the Navigation Science Foundation of China under grant no. 05F07001 and the National Natural Science Foundation of China under grant no. 60472081.
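    A sketch of a standard simplified PCNN generating a firing map from a per-pixel stimulus is given below; in the scheme described above the stimulus would be an image feature such as gradient magnitude or local energy rather than raw intensity. Parameter values are illustrative, and the multi-channel weighting of several firing maps is not shown.

```python
# Sketch of a simplified PCNN: feeding and linking inputs, internal activity,
# a dynamic threshold, and an accumulated firing map. Assumes NumPy and SciPy.
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_map(stimulus, n_iter=30, beta=0.2,
                    alpha_f=0.1, alpha_l=0.3, alpha_e=0.2,
                    v_f=0.5, v_l=0.5, v_e=20.0):
    S = stimulus / (stimulus.max() + 1e-12)
    W = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])  # link weights
    F = np.zeros_like(S); L = np.zeros_like(S)
    Y = np.zeros_like(S); E = np.ones_like(S)
    firing = np.zeros_like(S)
    for _ in range(n_iter):
        link = convolve(Y, W, mode="constant")
        F = np.exp(-alpha_f) * F + v_f * link + S      # feeding input
        L = np.exp(-alpha_l) * L + v_l * link          # linking input
        U = F * (1.0 + beta * L)                       # internal activity
        Y = (U > E).astype(float)                      # pulse output
        E = np.exp(-alpha_e) * E + v_e * Y             # dynamic threshold
        firing += Y                                    # accumulate firing times
    return firing
```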

    Sum-Modified-Laplacian-Based Multifocus Image Fusion Method in the Sharp Frequency Localized Contourlet Domain

    In order to suppress the pseudo-Gibbs phenomena around singularities of the fused image and to reduce the significant aliasing components located far away from the desired support that arise when the original Contourlet is used for image fusion, a multifocus image fusion method in the sharp frequency localized Contourlet transform (SFLCT) domain based on the sum-modified-Laplacian (SML) is proposed. The SFLCT, instead of the original Contourlet, is used as the multiscale transform to decompose the source multifocus images into subbands. Typical focus measures from spatial-domain multifocus fusion are then carried over to the Contourlet domain, and the SML, as the criterion for deciding whether SFLCT coefficients come from clear or blurry parts of the images, is employed in the SFLCT subbands to select the transform coefficients. Finally, the inverse SFLCT reconstructs the fused image, and cycle spinning is applied to compensate for the lack of translation invariance and to suppress pseudo-Gibbs phenomena in the fused image. Experimental results demonstrate that, with the proposed method, the mutual information improves by 5.87% and the transferred edge information QAB/F by 2.70% compared with the cycle-spinning wavelet method, and by 1.77% and 1.29% respectively compared with the cycle-spinning Contourlet method; the method also yields better visual quality than the block-based spatial SML method and the shift-invariant wavelet method.

    Sum-modified-Laplacian-based Multifocus Image Fusion Method in Sharp Frequency Localized Contourlet Transform Domain

    In order to suppress the pseudo-Gibbs phenomena around singularities of the fused image and to reduce the aliasing components located far away from the desired support that arise when the original Contourlet is used for image fusion, a sum-modified-Laplacian-based multifocus image fusion method in the sharp frequency localized Contourlet transform (SFLCT) domain is proposed. First, the SFLCT, instead of the original Contourlet, is used as the multiscale transform to decompose the source multifocus images into subbands. Second, typical sharpness measures from spatial-domain multifocus fusion are introduced into the Contourlet domain, and the sum-modified-Laplacian (SML), shown in this paper to best distinguish whether SFLCT coefficients come from the clear or the blurry parts of the images, is employed in the SFLCT subbands as the measurement for selecting the transform coefficients. Third, the inverse SFLCT reconstructs the fused image. Finally, cycle spinning is applied to compensate for the lack of translation invariance and to suppress pseudo-Gibbs phenomena in the fused image. Experimental results demonstrate that, with the proposed method, the mutual information improves by 5.87% and the transferred edge information QAB/F by 2.70% compared with the cycle-spinning wavelet method, and by 1.77% and 1.29% respectively compared with the cycle-spinning Contourlet method; the method also outperforms the block-based spatial SML method and the shift-invariant wavelet method in terms of visual appearance. Supported by the National Natural Science Foundation of China (No. 60472081) and the Aeronautical Science Foundation of China (No. 05F07001).
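    The sum-modified-Laplacian focus measure used above as the coefficient-selection criterion can be sketched as follows: the modified Laplacian is summed over a small window around each position, and the coefficient with the larger SML wins. The step and window sizes are illustrative, and the sketch applies the measure to a generic subband array rather than actual SFLCT subbands.

```python
# Sketch of the sum-modified-Laplacian (SML) focus measure and a per-position
# "choose the sharper source" rule. Assumes NumPy and SciPy.
import numpy as np
from scipy.ndimage import uniform_filter

def sum_modified_laplacian(band, step=1, window=3):
    b = band.astype(float)
    ml = (np.abs(2 * b - np.roll(b, step, axis=0) - np.roll(b, -step, axis=0)) +
          np.abs(2 * b - np.roll(b, step, axis=1) - np.roll(b, -step, axis=1)))
    return uniform_filter(ml, size=window) * window ** 2   # windowed sum of ML

def fuse_by_sml(band_a, band_b):
    """Pick, per position, the coefficient from the subband with the larger SML."""
    choose_a = sum_modified_laplacian(band_a) >= sum_modified_laplacian(band_b)
    return np.where(choose_a, band_a, band_b)
```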