
    Blind Curvelet based Denoising of Seismic Surveys in Coherent and Incoherent Noise Environments

    The localized nature of curvelet functions, together with their frequency and dip characteristics, makes the curvelet transform an excellent choice for processing seismic data. In this work, a denoising method is proposed that combines the curvelet transform with a whitening filter and a procedure for noise variance estimation. The whitening filter is added to obtain the best performance of the curvelet transform under both coherent and incoherent correlated noise; furthermore, it simplifies noise estimation and makes it possible to apply the standard thresholding methodology without estimating noise levels inside the curvelet domain. The proposed method is tested on pseudo-synthetic data, created by adding noise to a noise-free real data set from the Netherlands offshore F3 block, and on a field data set from east Texas, USA, containing ground-roll noise. Our experimental results show that the proposed algorithm achieves the best results under all noise types considered (incoherent, i.e., uncorrelated or random, and coherent noise).
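    A minimal sketch of the whiten-then-threshold idea described above is given below. It is not the paper's implementation: a separable wavelet transform (PyWavelets) stands in for the curvelet transform, since curvelet implementations typically require external bindings such as CurveLab; the AR-model whitening filter fitted with statsmodels' `yule_walker` and the universal soft threshold are illustrative choices.

```python
import numpy as np
import pywt
from scipy.signal import lfilter
from statsmodels.regression.linear_model import yule_walker

def whiten(section, ar_order=10):
    """Fit an AR model to the data (ideally a noise-dominated window) and
    apply the inverse filter so correlated noise becomes approximately white."""
    rho, sigma = yule_walker(section.ravel(), order=ar_order)
    b = np.concatenate(([1.0], -rho))            # whitening FIR filter
    return lfilter(b, [1.0], section, axis=-1), sigma

def denoise(section, wavelet="db4", level=3):
    """Whiten, threshold in the transform domain, and reconstruct."""
    white, sigma = whiten(section)
    coeffs = pywt.wavedec2(white, wavelet, level=level)
    # Universal threshold; sigma comes from the whitening step, so no separate
    # per-scale noise estimation inside the transform domain is needed.
    t = sigma * np.sqrt(2.0 * np.log(white.size))
    coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, t, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(coeffs, wavelet)

# Usage: a synthetic "section" of dipping events plus AR-correlated noise.
rng = np.random.default_rng(0)
clean = np.sin(0.2 * (np.arange(256)[None, :] + 2 * np.arange(128)[:, None]))
noise = lfilter([1.0], [1.0, -0.7], rng.standard_normal(clean.shape), axis=-1)
denoised = denoise(clean + 0.5 * noise)
```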

    Recent Progress in Image Deblurring

    This paper comprehensively reviews recent developments in image deblurring, covering non-blind/blind and spatially invariant/variant techniques. These techniques share the objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally estimate an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must deliver high-quality images under complex conditions such as motion, poor lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how the ill-posedness, a crucial issue in deblurring tasks, is handled, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite a certain level of progress, image deblurring, especially the blind case, remains limited by complex application conditions that make the blur kernel hard to obtain and often spatially variant. This review provides a holistic understanding of and deep insight into image deblurring. An analysis of the empirical evidence for representative methods, practical issues, and a discussion of promising future directions are also presented. (Comment: 53 pages, 17 figures)
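    As a point of reference for the simplest case mentioned above (non-blind, spatially invariant deblurring with a known kernel), the sketch below shows frequency-domain Wiener deconvolution in plain NumPy. The Gaussian PSF and the regularization constant `k` are illustrative choices, not taken from any of the reviewed methods.

```python
import numpy as np

def gaussian_psf(shape, sigma=2.0):
    """Illustrative Gaussian point-spread function, normalized to unit sum."""
    h, w = shape
    y, x = np.mgrid[:h, :w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, k=1e-2):
    """Non-blind Wiener deconvolution: X = conj(H) / (|H|^2 + k) * Y,
    where k acts as a noise-to-signal power ratio (regularization)."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X = np.conj(H) / (np.abs(H) ** 2 + k) * Y
    return np.real(np.fft.ifft2(X))

# Usage: simulate circular blur plus noise, then restore.
rng = np.random.default_rng(0)
sharp = rng.random((128, 128))
psf = gaussian_psf(sharp.shape, sigma=2.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(np.fft.ifftshift(psf))))
blurred += 0.01 * rng.standard_normal(blurred.shape)
restored = wiener_deconvolve(blurred, psf, k=1e-2)
```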

    An Algorithm for Real-Time Blind Image Quality Comparison and Assessment

    This research aims to provide a means of comparing images produced by different image processing algorithms for performance assessment purposes. Reconstruction of images corrupted by blur and noise requires specialized filtering techniques, and because of the strong effect of these corruptive factors, it is often impossible to evaluate the quality of a reconstructed image produced by one technique versus another. The algorithm presented here performs this comparison analytically and quantitatively at low computational cost (real-time) and high efficiency. The parameters used for comparison are the degree of blurriness, the information content, and the amount of various types of noise associated with the reconstructed image. Based on a heuristic analysis of these parameters, the algorithm assesses the reconstructed image and quantifies its quality by characterizing important aspects of visual quality. Extensive effort was made to obtain real-world noise and blur conditions so that the test cases presented here validate the approach. The tests performed on the image database produced consistent, valid results for the algorithms. This paper presents the description and validation (along with test results) of the proposed algorithm for blind image quality assessment. DOI: http://dx.doi.org/10.11591/ijece.v2i1.112
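    The abstract does not spell out how the three quantities it mentions are computed, so the sketch below only illustrates common no-reference stand-ins for them: variance of the Laplacian for blurriness, histogram entropy for information content, and a median-absolute-deviation estimate of noise level. These are assumptions for illustration, not the paper's heuristics.

```python
import numpy as np
from scipy import ndimage

def blurriness(img):
    """Sharpness proxy: variance of the Laplacian (lower = blurrier)."""
    return float(ndimage.laplace(img.astype(float)).var())

def information_content(img, bins=256):
    """Shannon entropy of the grey-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0].astype(float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def noise_sigma(img):
    """Robust noise estimate: MAD of a high-pass (median-filter) residual."""
    img = img.astype(float)
    residual = img - ndimage.median_filter(img, size=3)
    return float(1.4826 * np.median(np.abs(residual - np.median(residual))))

# Usage: score two candidate reconstructions of the same scene.
rng = np.random.default_rng(1)
recon_a = rng.random((64, 64))
recon_b = ndimage.gaussian_filter(recon_a, sigma=1.5)   # a blurrier candidate
for name, im in [("A", recon_a), ("B", recon_b)]:
    print(name, blurriness(im), information_content(im), noise_sigma(im))
```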

    Independent component analysis (ICA) applied to ultrasound image processing and tissue characterization

    As a complicated, ubiquitous phenomenon in ultrasound imaging, speckle can be treated either as annoying noise to be reduced or as a source from which diagnostic information can be extracted to reveal the underlying properties of tissue. In this study, the application of Independent Component Analysis (ICA), a relatively recent statistical signal processing tool, to both speckle texture analysis and despeckling of B-mode ultrasound images was investigated. It is believed that higher order statistics may provide information about the speckle texture beyond that provided by first and second order statistics alone. However, the higher order statistics of speckle texture are still not well understood and are very difficult to model analytically, and any direct treatment of higher order statistics is computationally prohibitive. On the one hand, many conventional ultrasound speckle texture analysis algorithms use only first or second order statistics; on the other hand, many multichannel filtering approaches use pre-defined analytical filters that are not adaptive to the data.

    In this study, an ICA-based multichannel filtering texture analysis algorithm, which exploits higher order statistics and adapts to the data, was proposed and tested on numerically simulated homogeneous speckle textures. The ICA filters were learned directly from the training images. Histogram regularization was applied to make the speckle images quasi-stationary in the wide sense and thus amenable to an ICA algorithm. Both Principal Component Analysis (PCA) and a greedy algorithm were used to reduce the dimension of the feature space. Finally, Support Vector Machines (SVM) with a Radial Basis Function (RBF) kernel were chosen as the classifier to achieve the best classification accuracy. Several representative conventional methods, including both low and high order statistics based methods, and both filtering and non-filtering methods, were chosen for comparison. The numerical experiments show that the proposed ICA-based algorithm outperforms the comparison algorithms in many cases. Two-component texture segmentation experiments were conducted, and the proposed algorithm showed a strong capability to segment two visually very similar yet different texture regions with rather fuzzy boundaries and almost the same mean and variance. By simulating speckle whose first order statistics approach the Rayleigh model gradually from different non-Rayleigh models, the experiments reveal, to some extent, how the behavior of higher order statistics changes with the underlying tissue properties. It is demonstrated that when the speckle approaches the Rayleigh model, both second and higher order statistics lose their texture differentiation capability; however, when the speckle tends toward non-Rayleigh models, methods based on higher order statistics show a strong advantage over those based solely on first or second order statistics. The proposed algorithm may find clinical application in the early detection of soft tissue disease and may also help in understanding the ultrasound speckle phenomenon from the perspective of higher order statistics.

    For the despeckling problem, an algorithm was proposed that adapts the ICA Sparse Code Shrinkage (ICA-SCS) method to ultrasound B-mode image despeckling by applying an appropriate preprocessing step proposed by other researchers. The preprocessing step makes the speckle noise much closer to white Gaussian noise (WGN) and hence more amenable to a denoising algorithm such as ICA-SCS, which is designed strictly for additive WGN. A discussion is given on how to obtain noise-free training image samples in various ways. The experimental results show that the proposed method outperforms several classical methods chosen for comparison, including first or second order statistics based methods (such as the Wiener filter) and multichannel filtering methods (such as wavelet shrinkage), in both speckle reduction and edge preservation.
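    A minimal sketch of the ICA Sparse Code Shrinkage idea on image patches follows, assuming the speckle has already been preprocessed so that the residual noise is approximately additive white Gaussian. The patch size, the soft-thresholding shrinkage rule, and the use of scikit-learn's FastICA are illustrative choices, not the thesis implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def ica_scs_denoise(noisy, clean_training, patch_size=(8, 8), n_components=48, sigma=0.05):
    """Learn ICA filters from noise-free training patches, then shrink the
    sparse codes of the noisy image's patches (sparse code shrinkage)."""
    train = extract_patches_2d(clean_training, patch_size, max_patches=2000, random_state=0)
    train = train.reshape(train.shape[0], -1)
    train = train - train.mean(axis=1, keepdims=True)        # remove patch DC

    ica = FastICA(n_components=n_components, random_state=0)
    ica.fit(train)

    patches = extract_patches_2d(noisy, patch_size)
    flat = patches.reshape(patches.shape[0], -1)
    dc = flat.mean(axis=1, keepdims=True)
    codes = ica.transform(flat - dc)

    # Illustrative shrinkage: soft-threshold each sparse code at ~sigma level.
    t = sigma * np.sqrt(2.0 * np.log(flat.shape[1]))
    codes = np.sign(codes) * np.maximum(np.abs(codes) - t, 0.0)

    denoised = (ica.inverse_transform(codes) + dc).reshape(patches.shape)
    return reconstruct_from_patches_2d(denoised, noisy.shape)

# Usage sketch with synthetic data (real use: preprocessed B-mode images).
rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
restored = ica_scs_denoise(noisy, clean)
```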

    Dirty RF Signal Processing for Mitigation of Receiver Front-end Non-linearity

    Today's wireless communication systems place high and partly conflicting requirements on the radio hardware, such as low power consumption, wide bandwidth, and high linearity. Achieving sufficient linearity, among other analogue characteristics, is a challenging issue in practical transceiver design. The focus of this thesis is on wideband receiver RF front-ends for software defined radio technology, which has become commercially available in recent years. Practical challenges and limitations are revealed in real-world experiments with these radios; one of the main problems is ensuring sufficient RF performance of the front-end over a wide bandwidth. Among other RF impairments, the thesis covers the analysis and mitigation of receiver non-linearity in typical direct-conversion receiver architectures.

    The main focus is on DSP-based algorithms for mitigating non-linearly induced interference, an approach also known as "Dirty RF" signal processing. The proposed digital feedforward mitigation algorithm is verified through extensive simulations, RF measurements, and implementation in real hardware. Various studies are carried out to bridge the gap between theory and practical applicability, especially with the aim of integrating the technique into real devices. To this end, an advanced baseband behavioural model is developed that matches direct-conversion receiver architectures as closely as possible and thus captures all distortions generated at RF and baseband. In addition, the algorithm's performance is verified under challenging fading conditions. Moreover, the thesis presents a resource-efficient real-time implementation of the proposed solution on an FPGA. Finally, the thesis covers several use cases, including spectrum monitoring (sensing), GSM downlink reception, and GSM-based passive radar. It is shown that non-linear distortions can be successfully mitigated at the system level in the digital domain, thereby decreasing the bit error rate of distorted modulated signals and reducing the amount of non-linearly induced interference. As a result, the effective linearity of the front-end is increased substantially, making reliable operation of a low-cost radio possible in the presence of receiver non-linearity. Due to its flexible design, the algorithm is generally applicable to wideband receivers and is not restricted to software defined radios.
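    A toy NumPy sketch of the feedforward idea described above is given below, under simplifying assumptions: the receiver is modelled as a memoryless third-order polynomial, the cubic coefficient is estimated by least squares against a known reference signal, and the regenerated third-order distortion is then subtracted from the received baseband samples. The thesis's actual behavioural model covers RF- and baseband-generated distortion and effects that this sketch ignores.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated complex baseband signal through a weakly non-linear front-end:
# y = x + a3 * |x|^2 * x  (memoryless third-order model, coefficient unknown).
n = 4096
x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
a3_true = 0.05 - 0.02j
y = x + a3_true * np.abs(x) ** 2 * x \
      + 0.001 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Feedforward mitigation: regenerate the distortion basis from the reference
# signal, estimate its coefficient by least squares, and subtract it.
basis = np.abs(x) ** 2 * x                       # third-order distortion reference
a3_hat = np.vdot(basis, y - x) / np.vdot(basis, basis)
y_clean = y - a3_hat * basis

print("estimated a3:", a3_hat)
print("residual distortion power (dB):",
      10 * np.log10(np.mean(np.abs(y_clean - x) ** 2) / np.mean(np.abs(x) ** 2)))
```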