350 research outputs found

    Directional edge and texture representations for image processing

    An efficient representation for natural images is of fundamental importance in image processing and analysis. Commonly used separable transforms such as wavelets are not best suited to images because they cannot exploit directional regularities such as edges and oriented textural patterns, while most recently proposed directional schemes cannot represent these two types of feature in a unified transform. This thesis develops directional representations for images that capture both edges and textures in a multiresolution manner. It first considers the problem of extracting linear features with the multiresolution Fourier transform (MFT). Building on a previous MFT-based linear feature model, the work extends the extraction method to images corrupted by noise. The problem is tackled by combining a "Signal+Noise" frequency model, a refinement stage and a robust classification scheme. As a result, the MFT can perform linear feature analysis on noisy images on which previous methods failed. A new set of transforms, the multiscale polar cosine transforms (MPCT), is also proposed in order to represent textures. The MPCT can be regarded as a real-valued MFT with similar basis functions of oriented sinusoids, and is shown to represent textural patches more efficiently than the conventional Fourier basis. With a directional best cosine basis, the MPCT packet (MPCPT) is shown to be an efficient representation for edges and textures, despite its high computational burden. The problem of representing edges and textures in a fixed transform with lower complexity is then considered. This is achieved by applying a Gaussian frequency filter, matched to the dispersion of the magnitude spectrum, to the local MFT coefficients. The approach is particularly effective in denoising natural images because it preserves both types of feature. Further improvements are obtained by using information from the linear feature extraction process to configure the filter. The denoising results compare favourably against other state-of-the-art directional representations.
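    The Gaussian spectral filtering idea can be illustrated on a single signal block: weight each frequency bin by a Gaussian whose centre and width are the first and second moments (centroid and dispersion) of the magnitude spectrum, then invert. This is a minimal sketch with a plain FFT standing in for the MFT and no linear-feature guidance; the block size, test signal and noise level are arbitrary illustrative choices.

```python
import numpy as np

def gaussian_spectral_filter(block):
    """Attenuate each FFT bin by a Gaussian matched to the magnitude
    spectrum's centroid and dispersion, then transform back."""
    X = np.fft.fft(block)
    mag = np.abs(X)
    freqs = np.fft.fftfreq(len(block))
    centroid = np.sum(freqs * mag) / np.sum(mag)          # first moment
    spread = np.sqrt(np.sum((freqs - centroid) ** 2 * mag)
                     / np.sum(mag))                        # dispersion
    weights = np.exp(-0.5 * ((freqs - centroid) / spread) ** 2)
    return np.real(np.fft.ifft(X * weights))

# Toy demonstration: a noisy sinusoid is pulled back toward the clean signal.
t = np.linspace(0, 1, 256, endpoint=False)
clean = np.sin(2 * np.pi * 8 * t)
noisy = clean + 0.3 * np.random.default_rng(1).standard_normal(256)
denoised = gaussian_spectral_filter(noisy)
```

Because the Gaussian is matched to where the spectral energy actually sits, a narrowband feature keeps its dominant bins while broadband noise in the tails is attenuated.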

    Audio watermarking using transformation techniques

    Watermarking is a technique used to protect digital information such as images, videos and audio by establishing copyright and ownership. Audio watermarking is more challenging than image watermarking because the human auditory system is more sensitive than the visual system. This thesis develops a quantization-based audio watermarking technique built on both the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT). The underlying system exploits the statistical characteristics of the signal. The study considers different wavelet filters and quantization techniques, and a comparison across diverse algorithms and audio signals is performed to examine the performance of the proposed method. The embedded watermark is a binary image, and different encryption techniques such as the Arnold Transform and the Linear Feedback Shift Register (LFSR) are considered. The watermark is distributed uniformly in the low-frequency, i.e. high-energy, regions, which increases its robustness, and spreading the watermark throughout the audio signal makes the technique robust against desynchronization attacks. Experimental results show that the signals generated by the proposed algorithm are inaudible and robust against signal processing operations such as quantization, compression and resampling. Matlab (version 2009b) is used to implement the algorithms discussed in this thesis. Audio transformation tools for compression in Linux (Ubuntu 9.10) are applied to the signal to simulate attacks such as re-sampling, re-quantization and mp3 compression, whereas a Matlab program simulates de-synchronization attacks such as jittering and cropping. We envision that the proposed algorithm may serve as a tool for securing the intellectual property of musicians and audio distribution companies because of its high robustness and imperceptibility.
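    The quantization idea behind such schemes can be sketched with quantization index modulation (QIM) on the approximation coefficients of a one-level Haar DWT. This is an illustrative toy, not the thesis's exact DCT+DWT configuration; the step size `delta`, the Haar front end and the random test signal are assumptions made for the example.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT of an even-length signal."""
    x = x.reshape(-1, 2)
    a = (x[:, 0] + x[:, 1]) / np.sqrt(2)   # approximation (low-pass)
    d = (x[:, 0] - x[:, 1]) / np.sqrt(2)   # detail (high-pass)
    return a, d

def haar_idwt(a, d):
    """Exact inverse of haar_dwt."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def qim_embed(coeff, bit, delta):
    # Snap the coefficient to a lattice shifted by 0 or delta/2 per the bit.
    offset = bit * delta / 2
    return np.round((coeff - offset) / delta) * delta + offset

def qim_extract(coeff, delta):
    # Decide which shifted lattice the received coefficient is closer to.
    d0 = np.abs(coeff - np.round(coeff / delta) * delta)
    d1 = np.abs(coeff - (np.round((coeff - delta / 2) / delta) * delta
                         + delta / 2))
    return int(d1 < d0)

rng = np.random.default_rng(0)
audio = rng.standard_normal(16)                 # stand-in for an audio frame
bits = [1, 0, 1, 1, 0, 1, 0, 0]                 # watermark payload
a, d = haar_dwt(audio)
a_marked = np.array([qim_embed(c, b, 0.5) for c, b in zip(a, bits)])
marked = haar_idwt(a_marked, d)                 # watermarked signal

a2, _ = haar_dwt(marked)
recovered = [qim_extract(c, 0.5) for c in a2]   # blind extraction
```

Embedding in the low-frequency (high-energy) coefficients, as the abstract notes, is what gives the mark resilience: lossy processing tends to disturb those coefficients least.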

    Guided Matching Pursuit and its Application to Sound Source Separation

    In the last couple of decades there has been increasing interest in applying source separation technologies to musical signal processing. Given a signal that consists of a mixture of musical sources, source separation aims to extract and/or isolate the signals that correspond to the original sources. A system capable of high-quality source separation could be an invaluable tool for the sound engineer as well as the end user. Applications of source separation include, but are not limited to, remixing, up-mixing, spatial re-configuration, individual source modification such as filtering, pitch detection/correction and time stretching, music transcription, voice recognition and source-specific audio coding. Of particular interest is the problem of separating sources from a mixture comprising two channels (2.0 format), since this is still the most commonly used format in the music industry and in most domestic listening environments. When the number of sources is greater than the number of mixtures (as is usually the case with stereophonic recordings), the source separation problem becomes under-determined, and traditional techniques such as Independent Component Analysis (ICA) cannot be successfully applied. In such cases a family of techniques known as Sparse Component Analysis (SCA) is better suited. In short, the mixture signal is decomposed in a new domain where the individual sources are sparsely represented, which implies that their corresponding coefficients have disjoint (or almost disjoint) supports. Taking advantage of this property, along with the spatial information within the mixture and any other available prior information, it is possible to identify the sources in the new domain and separate them by transforming back to the time domain. Sparser representations lead to higher-quality separation. Nevertheless, the most commonly used front-end for an SCA system is the ubiquitous short-time Fourier transform (STFT), which, although a sparsifying transform, is not the best choice for this task. A better alternative is the matching pursuit (MP) decomposition. MP is an iterative algorithm that decomposes a signal into a set of elementary waveforms called atoms, chosen from an over-complete dictionary so that they represent the inherent signal structures. A crucial part of MP is the creation of the dictionary, which directly affects the results of the decomposition and consequently the quality of source separation. Selecting an appropriate dictionary can prove a difficult task, and an adaptive approach is appropriate. This work proposes a new MP variant termed guided matching pursuit (GMP), which adds a new pre-processing step to the main sequence of the MP algorithm. The purpose of this step is to analyse the signal and extract important features, termed guide maps, that are used to create dynamic mini-dictionaries comprising atoms expected to correlate well with the underlying signal structures, leading to focused and more efficient searches around particular supports of the signal. The algorithm is accompanied by a modular and highly flexible MATLAB implementation suited to the processing of long-duration audio signals. Finally, the new algorithm is applied to the source separation of two-channel linear instantaneous mixtures, and preliminary testing demonstrates that the performance of GMP is on par with that of state-of-the-art systems.
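    Plain matching pursuit, on which GMP builds, can be sketched in a few lines: repeatedly correlate the residual with every atom, pick the strongest match, and subtract its contribution. The toy over-complete cosine dictionary and 2-sparse test signal below are assumptions for illustration; they are not GMP's guide-map mini-dictionaries.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy MP over a dictionary with unit-norm columns.
    Returns the atom coefficients and the final residual."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        corr = dictionary.T @ residual           # inner products with all atoms
        k = int(np.argmax(np.abs(corr)))         # best-matching atom
        coeffs[k] += corr[k]                     # record its contribution
        residual -= corr[k] * dictionary[:, k]   # peel it off the residual
    return coeffs, residual

# Over-complete dictionary: cosine atoms at twice as many frequencies
# as there are samples, normalised to unit norm.
n = 64
D = np.cos(np.outer(np.arange(n), np.arange(1, 2 * n + 1)) * np.pi / n)
D /= np.linalg.norm(D, axis=0)

x = 3.0 * D[:, 4] - 1.5 * D[:, 40]               # 2-sparse synthetic signal
coeffs, residual = matching_pursuit(x, D, n_iter=5)
# The residual shrinks toward zero once both active atoms are found.
```

GMP's guide maps address exactly the expensive step here: the full correlation `dictionary.T @ residual` over every atom, which the mini-dictionaries restrict to promising supports.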

    Separation of musical sources and structure from single-channel polyphonic recordings

    EThOS - Electronic Theses Online Service, United Kingdom

    Recent Advances in Signal Processing

    Signal processing is a critical component of most new technological developments and of challenging applications across science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary and Gaussian, favouring closed-form tractability over real-world accuracy, constraints that were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments and applications have matured rapidly and now draw on tools from many areas of mathematics, computer science, physics and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be grouped into five areas depending on the application at hand, addressing image processing, speech processing, communication systems, time-series analysis and educational packages respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained, so the interested reader can choose any chapter and skip to another without losing continuity.

    Iterative Separation of Note Events from Single-Channel Polyphonic Recordings

    This thesis is concerned with the separation of audio sources from single-channel polyphonic musical recordings using the iterative estimation and separation of note events. Each event is defined as a section of audio containing largely harmonic energy identified as coming from a single sound source; multiple events can be clustered to form separated sources. The solution is a model-based algorithm that can be applied to a large variety of audio recordings without requiring prior training stages. The proposed system comprises two principal stages. The first considers the iterative detection and separation of note events from within the input mixture. In every iteration, the pitch trajectory of the predominant note event is automatically selected from an array of fundamental frequency estimates and used to guide the separation of the event's spectral content using two different methods: time-frequency masking and time-domain subtraction. A residual signal is then generated and used as the input mixture for the next iteration. After convergence, the second stage clusters all detected note events into individual audio sources. Performance evaluation is carried out at three levels. Firstly, the accuracy of the note-event-based multipitch estimator is compared with that of the baseline algorithm used in every iteration to generate the initial set of pitch estimates. Secondly, the performance of the semi-supervised source separation process is compared with that of another semi-automatic algorithm. Finally, a listening test is conducted to assess the audio quality and naturalness of the separated sources when they are used to create stereo mixes from monaural recordings. Future directions for this research focus on applying the proposed system to other music-related tasks. A preliminary optimisation-based approach is also presented as an alternative method for separating overlapping partials, and as a high-resolution time-frequency representation for digital signals.
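    The time-frequency masking method can be illustrated with a hand-rolled STFT and an oracle binary mask. The mask below is computed from the true sources purely for demonstration (the thesis instead derives it from estimated pitch trajectories), and the two-tone mixture, frame length and non-overlapping frames are simplifying assumptions.

```python
import numpy as np

def stft(x, win=64):
    # Non-overlapping rectangular frames, for brevity only.
    return np.fft.rfft(x.reshape(-1, win), axis=1)

def istft(X, win=64):
    return np.fft.irfft(X, n=win, axis=1).reshape(-1)

t = np.arange(1024) / 1024
s1 = np.sin(2 * np.pi * 40 * t)              # low tone  (source 1)
s2 = 0.8 * np.sin(2 * np.pi * 200 * t)       # high tone (source 2)
mix = s1 + s2                                # single-channel mixture

X = stft(mix)
# Oracle binary mask: keep each bin for whichever source dominates there.
mask = (np.abs(stft(s1)) > np.abs(stft(s2))).astype(float)
est1 = istft(X * mask)                       # estimate of source 1
est2 = istft(X * (1 - mask))                 # estimate of source 2
```

Masking works here because the two tones occupy nearly disjoint frequency bins in each frame; when partials of different notes overlap, the thesis's subtraction and optimisation-based alternatives become relevant.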

    Models and analysis of vocal emissions for biomedical applications

    This book of proceedings collects the papers presented at the 3rd International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications, MAVEBA 2003, held 10-12 December 2003 in Firenze, Italy. The workshop is organised every two years and aims to stimulate contacts between specialists active in research and industrial developments in the area of voice analysis for biomedical applications. The scope of the workshop includes all aspects of voice modelling and analysis, ranging from fundamental research to all kinds of biomedical applications and related established and advanced technologies.

    A survey of the application of soft computing to investment and financial trading


    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. Particular emphasis is placed on the tensor train (TT) and Hierarchical Tucker (HT) decompositions and their physically meaningful interpretations, which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks can perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher-order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone texts or as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
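    The tensor train format can be computed for a dense tensor with the standard TT-SVD recipe: sequentially reshape and take a truncated SVD, emitting one 3-way core per mode. The sketch below is a generic textbook version with a simple absolute singular-value tolerance, not the monograph's optimized routines; the tensor shape and tolerance are arbitrary choices.

```python
import numpy as np

def tt_svd(T, tol=1e-10):
    """Decompose a dense ndarray T into TT cores of shape (r_prev, n_k, r_k)."""
    dims = T.shape
    cores, r_prev = [], 1
    M = T.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > tol)))                  # truncation rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        # Carry the remainder forward, folded for the next mode.
        M = (np.diag(s[:r]) @ Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))          # last core
    return cores

def tt_to_full(cores):
    """Contract the train back into a dense tensor (for checking only)."""
    out = cores[0]
    for c in cores[1:]:
        out = np.tensordot(out, c, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6, 3))
cores = tt_svd(T)
err = np.linalg.norm(tt_to_full(cores) - T)   # exact up to rounding here
```

For a generic tensor the ranks stay full and nothing is compressed; the "super-compression" the text describes appears when the data has genuine low-rank TT structure, so the truncation step keeps only a few singular values per unfolding.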