149 research outputs found

    Application and Theory of Multimedia Signal Processing Using Machine Learning or Advanced Methods

    This Special Issue is a book that collects peer-reviewed articles on various advanced technologies related to applications and theories of signal processing for multimedia systems using machine learning or other advanced methods. Multimedia signals include image, video, audio, character recognition, and the optimization of communication channels for networks. The specific topics covered in this book are data hiding, encryption, object detection, image classification, and character recognition. Academics and colleagues interested in these topics will find it a worthwhile read.

    Adaptive OFDM Radar for Target Detection and Tracking

    We develop algorithms to detect and track targets by employing a wideband orthogonal frequency division multiplexing (OFDM) radar signal. The frequency diversity of the OFDM signal improves the sensing performance, since the scattering centers of a target resonate variably at different frequencies. In addition, being a wideband signal, OFDM improves the range resolution and provides spectral efficiency. We first design the spectrum of the OFDM signal to improve the radar's wideband ambiguity function. Our designed waveform enhances the range resolution and motivates us to use adaptive OFDM waveforms in specific problems, such as the detection and tracking of targets. We develop methods for detecting a moving target in the presence of multipath, which exists, for example, in urban environments. We exploit the multipath reflections by utilizing different Doppler shifts. We analytically evaluate the asymptotic performance of the detector and adaptively design the OFDM waveform, by maximizing the noncentrality-parameter expression, to further improve the detection performance. Next, we transform the detection problem into the task of sparse-signal estimation by making use of the sparsity of the multiple paths. We propose an efficient sparse-recovery algorithm that employs a collection of multiple small Dantzig selectors, and analytically compute the reconstruction performance in terms of the ℓ1-constrained minimal singular value. We solve a constrained multi-objective optimization problem to design the OFDM waveform and infer that the resultant signal-energy distribution is proportional to the distribution of the target energy across the different subcarriers. Then, we develop tracking methods for both single and multiple targets. We propose a tracking method for a low-grazing-angle target that realistically models different physical and statistical effects, such as the meteorological conditions in the troposphere, the curved surface of the earth, and the roughness of the sea surface. To further enhance the tracking performance, we integrate a maximum mutual information based waveform design technique into the tracker. To track multiple targets, we exploit the inherent sparsity of the delay-Doppler plane to develop a computationally efficient procedure. For computational efficiency, we use prior information to dynamically partition a small portion of the delay-Doppler plane. We utilize the block-sparsity property to propose a block version of the CoSaMP algorithm in the tracking filter.
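
    The block-sparse recovery step of the multi-target tracker can be pictured with a short sketch. Below is a minimal, illustrative block version of CoSaMP for a generic measurement model y = Ax + n, where x is block-sparse over a partitioned delay-Doppler grid; the dictionary A, the block size, and the sparsity level k are hypothetical stand-ins, not the dissertation's actual implementation.

```python
# Minimal block-CoSaMP sketch for y = A x + n with x block-sparse over a
# partitioned delay-Doppler grid. A, block_size, and k are illustrative
# stand-ins, not the dissertation's actual dictionary or settings.
import numpy as np

def block_cosamp(A, y, block_size, k, n_iter=20):
    """Recover a block-sparse x (k active blocks) from y = A @ x + n."""
    m, n = A.shape
    blocks = np.arange(n).reshape(n // block_size, block_size)
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(n_iter):
        # Proxy energies: correlate the residual with every column, then
        # aggregate the correlations block by block.
        proxy = A.T @ residual
        energy = np.linalg.norm(proxy[blocks], axis=1)
        # Merge the 2k strongest candidate blocks with the current support.
        candidates = np.argsort(energy)[-2 * k:]
        active = np.flatnonzero(np.linalg.norm(x[blocks], axis=1) > 0)
        merged = np.union1d(candidates, active)
        support = blocks[merged].ravel()
        # Least squares on the merged support, then prune to the k strongest blocks.
        x_ls = np.zeros(n)
        x_ls[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        keep = blocks[np.argsort(np.linalg.norm(x_ls[blocks], axis=1))[-k:]]
        x = np.zeros(n)
        x[keep.ravel()] = x_ls[keep.ravel()]
        residual = y - A @ x
        if np.linalg.norm(residual) < 1e-9 * np.linalg.norm(y):
            break
    return x

# Usage with synthetic data: 3 active blocks of size 4 on a 160-wide grid.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 160)) / np.sqrt(80)
x_true = np.zeros(160)
x_true[8:12] = x_true[40:44] = x_true[100:104] = 1.0
x_hat = block_cosamp(A, A @ x_true, block_size=4, k=3)
```

    Restricting the search to blocks, rather than individual grid cells, mirrors the exploitation of delay-Doppler structure described above and keeps the per-iteration least-squares subproblems small.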

    Neural network based image representation for small scale object recognition

    Object recognition can be abstractly viewed as a two-stage process. The feature learning stage selects key information that can represent the input image in a compact, robust, and discriminative manner in some feature space. Then the classification stage learns the rules to differentiate object classes based on the representations of their images in feature space. Consequently, if the first stage can produce a highly separable feature set, simple and cost-effective classifiers can be used to make the recognition system more applicable in practice. Features, or representations, used to be engineered manually with different assumptions about the data population to keep the complexity within a manageable range. As more practical problems are tackled, those assumptions are no longer valid, and neither are the representations built on them. More parameters and test cases have to be considered in those new challenges, which makes manual engineering too complicated. Machine learning approaches ease those difficulties by allowing computers to learn to identify the appropriate representation automatically. As the number of parameters increases with the diversity of the data, it is always beneficial to eliminate irrelevant information from the input data to reduce the complexity of learning. Chapter 3 of the thesis reports a case study in which the removal of colour leads to an improvement in recognition accuracy. Deep learning appears to be a very strong representation learner, with new achievements arriving on a monthly basis. While the training phase of deep structures requires huge amounts of data, tremendous computation, and careful calibration, the inference phase is affordable and straightforward. Utilizing the knowledge in trained deep networks is therefore promising for efficient feature extraction in smaller systems. Many approaches have been proposed under the name of "transfer learning", aimed at taking advantage of that "deep knowledge". However, the results achieved so far still leave room for improvement. Chapter 4 presents a new method to utilize a trained deep convolutional structure as a feature extractor and achieves state-of-the-art accuracy on the Washington RGBD dataset. Despite some good results, the potential of transfer learning is only barely exploited. On the one hand, dimensionality reduction can make the deep neural network representation even more computationally efficient and allow a wider range of use cases. Inspired by the structure of the network itself, a new random orthogonal projection method for dimensionality reduction is presented in the first half of Chapter 5. A t-SNE mimicking neural network for low-dimensional embedding is also discussed in this part, with promising results. On the other hand, feature encoding can be used to improve deep neural network features for classification applications. Thanks to their spatially organized structure, deep neural network features can be considered as local image descriptors, and thus traditional feature encoding approaches such as the Fisher vector can be applied to improve them. This method combines the advantages of both discriminative learning and generative learning to boost feature performance in difficult scenarios, such as when data is noisy or incomplete. The problem of high dimensionality in deep neural network features is alleviated by the use of a Fisher vector based on sparse coding, where an infinite number of Gaussian mixtures is used to model the feature space. In the second half of Chapter 5, the regularized Fisher encoding is shown to be effective in improving classification results on difficult classes. Also, low-cost incremental k-means learning is shown to be a potential dictionary learning approach that can replace the slow and computationally expensive sparse coding method.
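
    As a concrete illustration of the random orthogonal projection idea, the sketch below builds a projection with orthonormal columns from a Gaussian matrix via QR and applies it to a batch of deep-network descriptors. The feature and target dimensions (4096 and 256) are hypothetical choices for the example, not the thesis settings.

```python
# Minimal sketch of a random orthogonal projection for reducing the
# dimensionality of deep-network features. Dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal_projection(d_in, d_out, rng):
    """Return a (d_in, d_out) matrix with orthonormal columns."""
    G = rng.standard_normal((d_in, d_out))
    Q, _ = np.linalg.qr(G)   # QR factorization orthonormalizes the columns
    return Q

# features: rows are deep-network descriptors for a batch of images.
features = rng.standard_normal((10, 4096))
P = random_orthogonal_projection(4096, 256, rng)
reduced = features @ P       # shape (10, 256)
```

    Because the columns of the projection are orthonormal, pairwise distances between descriptors are approximately preserved, so inexpensive classifiers trained on the reduced features should lose little accuracy.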

    Adaptive steady-state analysis of circuits using wavelets

    This thesis presents research into utilizing the sparse representations of waveforms that are possible in the wavelet domain to increase the computational efficiency of the steady-state analysis of electric circuits. The system of non-linear equations that represents the circuit is formulated in the wavelet domain and solved using the Newton-Raphson method. Factoring the Jacobian matrix at each iteration is a major contributor to the computational time required for solving the circuit equations with the Newton-Raphson method. This research aims to reduce the computational time of factoring the Jacobian matrix and has led to the following contributions:
    1. A study of the effect of wavelet selection on the sparsity of the Jacobian matrix and the nodal variable vectors. Results show that no one wavelet provides the sparsest Jacobian matrices in every case, but the Haar wavelet tends to be a good choice if Jacobian matrix sparsity is a concern. However, the time domain provides sparser Jacobian matrices than all of the wavelets tested. Selecting a wavelet to provide the sparsest nodal variable vectors is much more difficult, and no one wavelet stood out as providing sparser vectors than the others.
    2. A method for increasing the sparsity of the Jacobian matrix by removing low-amplitude entries. The threshold that determines which elements to remove is adaptively controlled during the simulation. Results show that adaptive thresholding can significantly decrease Jacobian matrix density, with the Haar wavelet tending to provide the sparsest matrices on the test cases. The results also show that adaptive Jacobian matrix thresholding can lead to a speedup over non-thresholded wavelet-domain steady-state analysis; in some cases, this speedup was enough to beat the time domain even when the non-thresholded simulations ran slower than the time domain.
    3. Two new methods that reduce the problem size by taking advantage of the sparse representations that are possible for the nodal variable vectors in the wavelet domain. A unique feature of one of these methods is that it allows the automatic selection of a wavelet for each nodal variable. Results show a speedup over wavelet-domain steady-state analysis for some test cases; in others there was a slowdown, caused by the computational overhead associated with these methods. With one circuit, the number of columns in the Jacobian matrix was not reduced for most iterations. More work is required to determine whether this is due to the method used to select columns from the Jacobian matrix, the method used to control the error introduced into the update vectors by the column reduction, or whether some problems simply cannot benefit from the column reduction method.
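
    The Jacobian-thresholding idea in contribution 2 can be sketched in a few lines: at each Newton-Raphson iteration, entries of the Jacobian below a relative threshold are dropped before a sparse LU factorization. In the sketch the threshold is fixed, whereas the thesis adapts it during the simulation; the test system and tolerances are illustrative, not taken from the thesis.

```python
# Illustrative Newton-Raphson iteration with Jacobian thresholding and
# sparse LU. The system f, threshold rule, and tolerances are hypothetical.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def newton_thresholded(f, jac, x0, rel_tau=1e-6, tol=1e-10, max_iter=50):
    x = x0.copy()
    for _ in range(max_iter):
        r = f(x)
        if np.linalg.norm(r) < tol:
            break
        J = jac(x)
        # Drop entries small relative to the largest Jacobian entry
        # (adaptively controlled in the thesis, fixed here).
        J[np.abs(J) < rel_tau * np.abs(J).max()] = 0.0
        dx = spla.splu(sp.csc_matrix(J)).solve(-r)  # sparse LU factor + solve
        x = x + dx
    return x

# Tiny usage example: solve x0^2 + x1 = 3, x0 + x1^2 = 5 (solution (1, 2)).
f = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
jac = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
x = newton_thresholded(f, jac, np.array([1.0, 1.0]))
```

    The payoff comes when the thresholded Jacobian is much sparser than the original, so the LU factorization, the dominant cost named above, is cheaper at each iteration.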

    Intelligent Numerical Software for MIMD Computers

    For most scientific and engineering problems simulated on computers, solving problems of computational mathematics with approximately given initial data constitutes an intermediate or final stage. Basic problems of computational mathematics include the investigation and solution of linear algebraic systems, the evaluation of eigenvalues and eigenvectors of matrices, the solution of systems of non-linear equations, and the numerical integration of initial-value problems for systems of ordinary differential equations.
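
    For concreteness, each of the four problem classes named above has a standard one-call solution in common numerical libraries; the small systems below are illustrative examples only, not drawn from the abstract.

```python
# The four basic problem classes, each solved with a standard routine.
import numpy as np
from scipy.optimize import fsolve
from scipy.integrate import solve_ivp

# 1. Linear algebraic system A x = b
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.linalg.solve(A, b)

# 2. Eigenvalues and eigenvectors of a matrix
eigvals, eigvecs = np.linalg.eig(A)

# 3. System of non-linear equations g(v) = 0
g = lambda v: [v[0]**2 + v[1] - 2.0, v[0] - v[1]]
root = fsolve(g, [1.0, 1.0])

# 4. Initial-value problem y' = -y, y(0) = 1
sol = solve_ivp(lambda t, y: -y, (0.0, 1.0), [1.0])
```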

    Non-convex Optimization for Machine Learning

    A vast majority of machine learning algorithms train their models and perform inference by solving optimization problems. In order to capture the learning and prediction problems accurately, structural constraints such as sparsity or low rank are frequently imposed, or else the objective itself is designed to be a non-convex function. This is especially true of algorithms that operate in high-dimensional spaces or that train non-linear models such as tensor models and deep networks. The freedom to express the learning problem as a non-convex optimization problem gives immense modeling power to the algorithm designer, but such problems are often NP-hard to solve. A popular workaround has been to relax non-convex problems to convex ones and use traditional methods to solve the (convex) relaxed optimization problems. However, this approach may be lossy and nevertheless presents significant challenges for large-scale optimization. On the other hand, direct approaches to non-convex optimization have met with resounding success in several domains and remain the methods of choice for the practitioner, as they frequently outperform relaxation-based techniques; popular heuristics include projected gradient descent and alternating minimization. However, these are often poorly understood in terms of their convergence and other properties. This monograph presents a selection of recent advances that bridge a long-standing gap in our understanding of these heuristics. The monograph leads the reader through several widely used non-convex optimization techniques, as well as applications thereof. The goal of this monograph is both to introduce the rich literature in this area and to equip the reader with the tools and techniques needed to analyze these simple procedures for non-convex problems.
    Comment: The official publication is available from now publishers via http://dx.doi.org/10.1561/220000005
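
    As a concrete instance of the heuristics the monograph analyzes, the sketch below runs projected gradient descent for sparse least squares (iterative hard thresholding): a gradient step on the smooth objective followed by projection onto the set of k-sparse vectors. The step size, problem sizes, and synthetic data are illustrative assumptions, not taken from the monograph.

```python
# Minimal sketch of projected gradient descent under a sparsity constraint
# (iterative hard thresholding for least squares). Parameters are illustrative.
import numpy as np

def hard_threshold(x, k):
    """Projection onto the set of k-sparse vectors: keep the k largest entries."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def pgd_sparse(A, y, k, step=0.5, n_iter=200):
    """Heuristically minimize ||A x - y||^2 subject to x being k-sparse."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                 # gradient of the smooth objective
        x = hard_threshold(x - step * grad, k)   # gradient step, then project
    return x

# Usage: recover a 5-sparse vector from 80 random measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0
x_hat = pgd_sparse(A, A @ x_true, k=5)
```

    The constrained problem is NP-hard in general, yet for well-conditioned random measurement matrices this simple projected iteration recovers the planted sparse vector; characterizing when and why such heuristics succeed is exactly the gap the monograph addresses.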

    Connected Attribute Filtering Based on Contour Smoothness
