
    Application and Theory of Multimedia Signal Processing Using Machine Learning or Advanced Methods

    This Special Issue is a book collecting peer-reviewed papers on advanced technologies related to the application and theory of signal processing for multimedia systems using machine learning or other advanced methods. Multimedia signals include image, video, and audio, and the topics extend to character recognition and the optimization of communication channels for networks. The specific subjects covered in this book are data hiding, encryption, object detection, image classification, and character recognition. Academics and colleagues interested in these topics will find it a worthwhile read.

    Recent Advances in Signal Processing

    Signal processing is a critical issue in the majority of new technological inventions and challenges across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas depending on the application at hand; these five categories address image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.

    Vector Quantization Techniques for Approximate Nearest Neighbor Search on Large-Scale Datasets

    The technological developments of the last twenty years are leading the world into a new era. The invention of the internet, mobile phones, and smart devices has resulted in an exponential increase in data. As data grows every day, finding similar patterns or matching samples to a query is no longer a simple task because of the computational cost and storage requirements involved. Special signal processing techniques are required to handle this growth, as simply adding more and more computers cannot keep up.
    Nearest neighbor search, also called similarity search, proximity search, or near-item search, is the problem of finding the item nearest or most similar to a query according to a distance or similarity measure. When the reference set is very large, or the distance or similarity calculation is complex, performing the nearest neighbor search can be computationally demanding. Considering today's ever-growing datasets, where the number of samples also keeps increasing, a growing interest in approximate methods has emerged in the research community.
    Vector Quantization for Approximate Nearest Neighbor Search (VQ for ANN) has proven to be one of the most efficient and successful approaches to this problem. It compresses vectors into short binary codes and approximates the distances between vectors using look-up tables. With this approach, the approximation of distances is very fast, while the storage requirement of the dataset is minimized thanks to the extreme compression levels. The distance approximation performance of VQ for ANN has been shown to be sufficient for retrieval and classification tasks, demonstrating that VQ for ANN techniques can be a good replacement for exact distance calculation methods.
    This thesis contributes to the VQ for ANN literature by proposing five advanced techniques that aim to provide fast and efficient approximate nearest neighbor search on very large-scale datasets. The proposed methods can be divided into two groups. The first group consists of two techniques that introduce subspace clustering to VQ for ANN; these are shown to give state-of-the-art performance on prevalent large-scale benchmarks. The second group consists of three methods that improve on residual vector quantization; these are also shown to outperform their predecessors. Apart from these, a sixth contribution of this thesis is a demonstration of VQ for ANN in an image classification application on large-scale datasets. A k-NN classifier based on VQ for ANN is shown to perform on par with exact k-NN classifiers while requiring much less storage space and computation.
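    The encode-then-look-up scheme described above can be sketched generically. The sketch below is a minimal product-quantization variant of VQ for ANN, not any of the thesis's proposed techniques: each vector is split into subvectors, each subspace gets its own small k-means codebook, and query-to-database distances are approximated purely by table look-ups over the stored codes. All function names and parameters (`n_sub`, `k`) are illustrative assumptions.

```python
import numpy as np

def train_codebooks(data, n_sub, k, iters=10, seed=0):
    """Train one k-means codebook per subspace (illustrative, tiny k-means)."""
    rng = np.random.default_rng(seed)
    subdim = data.shape[1] // n_sub
    books = []
    for s in range(n_sub):
        sub = data[:, s * subdim:(s + 1) * subdim]
        cent = sub[rng.choice(len(sub), k, replace=False)]  # random init
        for _ in range(iters):
            # assign every subvector to its nearest centroid, then update
            d = ((sub[:, None, :] - cent[None, :, :]) ** 2).sum(-1)
            assign = d.argmin(1)
            for c in range(k):
                pts = sub[assign == c]
                if len(pts):
                    cent[c] = pts.mean(0)
        books.append(cent.copy())
    return books

def encode(data, books):
    """Compress each vector to one codeword index per subspace."""
    n_sub = len(books)
    subdim = data.shape[1] // n_sub
    codes = np.empty((len(data), n_sub), dtype=np.uint8)
    for s, cent in enumerate(books):
        sub = data[:, s * subdim:(s + 1) * subdim]
        d = ((sub[:, None, :] - cent[None, :, :]) ** 2).sum(-1)
        codes[:, s] = d.argmin(1)
    return codes

def search(query, codes, books):
    """Asymmetric distance: build one look-up table per subspace from the
    query, then score every database vector with table look-ups only."""
    n_sub = len(books)
    subdim = len(query) // n_sub
    tables = [((books[s] - query[s * subdim:(s + 1) * subdim]) ** 2).sum(1)
              for s in range(n_sub)]
    dist = np.zeros(len(codes))
    for s in range(n_sub):
        dist += tables[s][codes[:, s]]
    return dist.argmin()  # index of the approximate nearest neighbor
```

The key property is that `search` never touches the original vectors: distances come entirely from the compact codes plus per-query tables, which is what makes the approach fast and storage-efficient at scale.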

    Optimisation of Tamper Localisation and Recovery Watermarking Techniques

    Digital watermarking has found applications in many fields, such as copyright tracking, media authentication, tamper localisation and recovery, hardware control, and data hiding. The idea of digital watermarking is to embed arbitrary data inside a multimedia cover without affecting the perceptibility of the cover itself. The main advantage of digital watermarking over other techniques, such as signature-based techniques, is that the watermark is embedded into the multimedia cover itself and will not be removed even when the format changes. Image watermarking techniques are categorised according to their robustness against modification into fragile, semi-fragile, and robust watermarking. In fragile watermarking, any change to the image will affect the watermark; this makes fragile watermarking very useful in image authentication applications, as in the medical and forensic fields, where any tampering of the image must be detected, localised, and possibly recovered. Fragile watermarking techniques are also characterised by a higher capacity compared to semi-fragile and robust watermarking. Semi-fragile watermarking techniques resist some modifications, such as lossy compression and low-pass filtering, and can be used in authentication and copyright validation applications whenever the amount of embedded information is small and the expected modifications are not severe. Robust watermarking techniques are expected to withstand more severe modifications, such as rotation and geometric bending, and are used in copyright validation applications, where copyright information in the image must remain accessible even after severe modification. This research focuses on the application of image watermarking to tamper localisation and recovery, and it aims to optimise some of its aspects.
    The optimisation aims to produce watermarking techniques that improve one or more of the following aspects: consuming less payload, achieving better recovery quality, recovering a larger tampered area, requiring fewer calculations, and being robust against different counterfeiting attacks. A survey of the main existing techniques found that most of them use two separate sets of data for the localisation and the recovery of the tampered area, which is redundant. The main focus of this research is to investigate employing image filtering techniques so that only one set of data serves both purposes, reducing redundancy in the watermark embedding and enhancing capacity. Four tamper localisation and recovery techniques were proposed: three use one set of data for both localisation and recovery, while the fourth is designed to be optimised and gives better performance even though it uses separate sets of data for localisation and recovery. The four techniques were analysed and compared to two recent techniques from the literature. Performance varies from one proposed technique to another: the fourth technique shows the best results in recovery quality and Probability of False Acceptance (PFA) among both the proposed techniques and the two from the literature, and all proposed techniques show better recovery quality than the two techniques from the literature.
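    The fragile-watermarking idea described above, where any change to the image breaks the embedded mark and localises the tampered blocks, can be sketched generically. The block-hash LSB scheme below is a common textbook construction, not one of the four proposed techniques; the 8x8 block size, the SHA-256 hash, and the function names are all assumptions for illustration.

```python
import hashlib
import numpy as np

BLOCK = 8  # assumed block size for localisation granularity

def block_signature(block):
    """Derive BLOCK*BLOCK check bits from the block's upper 7 bit planes,
    so the signature is independent of the LSB plane it will occupy."""
    digest = hashlib.sha256((block >> 1).tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    return bits[:BLOCK * BLOCK].reshape(BLOCK, BLOCK)

def embed(img):
    """Write each block's signature into that block's own LSB plane."""
    out = img.copy()
    for y in range(0, img.shape[0], BLOCK):
        for x in range(0, img.shape[1], BLOCK):
            blk = out[y:y + BLOCK, x:x + BLOCK]
            blk[:] = (blk & 0xFE) | block_signature(blk)
    return out

def locate_tamper(img):
    """Return a per-block boolean map: True where a block's LSBs no
    longer match the signature recomputed from its upper bits."""
    h, w = img.shape[0] // BLOCK, img.shape[1] // BLOCK
    bad = np.zeros((h, w), dtype=bool)
    for by in range(h):
        for bx in range(w):
            blk = img[by * BLOCK:(by + 1) * BLOCK,
                      bx * BLOCK:(bx + 1) * BLOCK]
            bad[by, bx] = not np.array_equal(blk & 1, block_signature(blk))
    return bad
```

Note that this sketch uses a single set of embedded data for localisation only; the thesis's contribution of serving both localisation and recovery from one data set is not reproduced here.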

    Proceedings of the 7th Sound and Music Computing Conference

    Proceedings of the SMC2010 - 7th Sound and Music Computing Conference, July 21st - July 24th 2010