1,533 research outputs found

    The Application of Blind Source Separation to Feature Decorrelation and Normalizations

    Get PDF
    We apply a Blind Source Separation (BSS) algorithm to the decorrelation of Mel-warped cepstra. The observed cepstra are modeled as a convolutive mixture of independent source cepstra. The algorithm aims to minimize the cross-spectral correlation at different lags in order to reconstruct the source cepstra. Results show that using "independent" cepstra as features leads to a reduction in the word error rate (WER). Finally, we present three different enhancements to the BSS algorithm, along with results for these variants of the original algorithm.
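
    As a rough illustration of the quantity this BSS scheme minimizes, the sketch below computes the normalized cross-correlation between two cepstral coefficient trajectories at several lags; the toy data, function name, and lag range are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: the lagged cross-correlation between two cepstral coefficient
# trajectories -- the quantity this class of BSS algorithm drives toward zero.
import numpy as np

def lagged_cross_correlation(c_i, c_j, max_lag):
    """Normalized cross-correlation between two trajectories for lags 0..max_lag."""
    c_i = c_i - c_i.mean()
    c_j = c_j - c_j.mean()
    denom = np.sqrt(np.dot(c_i, c_i) * np.dot(c_j, c_j))
    return np.array([np.dot(c_i[:len(c_i) - k], c_j[k:]) / denom
                     for k in range(max_lag + 1)])

# Toy example: two correlated cepstral trajectories over 200 frames.
rng = np.random.default_rng(0)
src = rng.standard_normal(200)
c1 = src + 0.1 * rng.standard_normal(200)
c2 = np.roll(src, 3) + 0.1 * rng.standard_normal(200)  # delayed copy: peak at lag 3
print(lagged_cross_correlation(c1, c2, max_lag=5))
```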

    Filtering Random Graph Processes Over Random Time-Varying Graphs

    Get PDF
    Graph filters play a key role in processing the graph spectra of signals supported on the vertices of a graph. However, despite their widespread use, graph filters have been analyzed only in the deterministic setting, ignoring the impact of stochasticity in both the graph topology and the signal itself. To bridge this gap, we examine the statistical behavior of the two key filter types, finite impulse response (FIR) and autoregressive moving average (ARMA) graph filters, when operating on random time-varying graph signals (or random graph processes) over random time-varying graphs. Our analysis shows that (i) in expectation, the filters behave as the same deterministic filters operating on the expected graph, with the expected signal as input, and (ii) there are meaningful upper bounds for the variance of the filter output. We conclude the paper by proposing two novel ways of exploiting randomness to improve (joint graph-time) noise cancellation, as well as to reduce the computational complexity of graph filtering. As demonstrated by numerical results, these methods outperform the disjoint average-and-denoise algorithm, and yield an (up to) four-fold complexity reduction, with very little difference from the optimal solution.
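
    For concreteness, the sketch below implements a standard FIR graph filter of the kind analyzed above, y = Σ_k h_k S^k x, on a toy cycle graph; the shift operator, taps, and signal are illustrative choices, not the paper's experimental setup.

```python
# Minimal sketch of an FIR graph filter: the output is a polynomial in the
# graph shift operator S applied to the input signal x, y = sum_k h[k] S^k x.
import numpy as np

def fir_graph_filter(S, x, h):
    """Apply an FIR graph filter with taps h over shift operator S."""
    y = np.zeros_like(x, dtype=float)
    Skx = x.astype(float)          # S^0 x
    for hk in h:
        y += hk * Skx
        Skx = S @ Skx              # advance to S^(k+1) x
    return y

# Toy example: adjacency matrix of a 5-node cycle as the shift operator.
N = 5
S = np.zeros((N, N))
for i in range(N):
    S[i, (i + 1) % N] = S[i, (i - 1) % N] = 1.0
x = np.random.default_rng(1).standard_normal(N)
print(fir_graph_filter(S, x, h=[0.5, 0.3, 0.2]))
```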

    Forecasting high waters at Venice Lagoon using chaotic time series analysis and nonlinear neural networks

    Get PDF
    Time series analysis using nonlinear dynamical systems theory and multilayer neural network models has been applied to the sequence of water level data recorded every hour at 'Punta della Salute' in the Venice Lagoon during the years 1980-1994. The first method is based on the reconstruction of the state space attractor using time delay embedding vectors and on the characterisation of invariant properties which define its dynamics. The results suggest the existence of a low-dimensional chaotic attractor with a Lyapunov dimension, DL, of around 6.6 and a predictability between 8 and 13 hours ahead. Furthermore, once the attractor has been reconstructed, it is possible to make predictions by mapping local neighbourhoods to local neighbourhoods in the reconstructed phase space. To compare the prediction results with another nonlinear method, two nonlinear autoregressive (NAR) models based on multilayer feedforward neural networks have been developed. From the study, it can be observed that nonlinear forecasting produces adequate results for the 'normal' dynamic behaviour of the water level of the Venice Lagoon, outperforming linear algorithms. However, both methods fail to forecast the 'high water' phenomenon more than 2-3 hours ahead.
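
    The sketch below illustrates the time-delay embedding step the first method relies on, unfolding a scalar series into state vectors; the embedding dimension, delay, and stand-in series are placeholders rather than the values identified for the Venice data.

```python
# Hedged sketch of time-delay embedding: the scalar series is unfolded into
# vectors [s(t), s(t+tau), ..., s(t+(m-1)tau)] used to reconstruct the attractor.
import numpy as np

def delay_embed(series, m, tau):
    """Return the (len(series)-(m-1)*tau) x m matrix of delay vectors."""
    n = len(series) - (m - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(m)])

hourly_levels = np.sin(np.arange(1000) * 0.1)   # stand-in for the tide gauge data
X = delay_embed(hourly_levels, m=7, tau=6)      # m and tau are placeholder values
print(X.shape)                                   # (964, 7)
```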

    On adaptive filter structure and performance

    Get PDF
    SIGLE. Available from the British Library Document Supply Centre (DSC: D75686/87), United Kingdom.

    Digital Signal Processing and Machine Learning System Design using Stochastic Logic

    Get PDF
    University of Minnesota Ph.D. dissertation. July 2017. Major: Electrical/Computer Engineering. Advisor: Keshab Parhi. 1 computer file (PDF); xxii, 172 pages.

    Digital signal processing (DSP) and machine learning systems play a crucial role in the fields of big data and artificial intelligence. The hardware design of these systems is critical to meeting stringent application requirements such as extremely small size, low power consumption, and high reliability. Following the path of Moore's Law, the density and performance of hardware systems have improved dramatically at an exponential pace. The increase in the number of transistors on a chip, the main driver of this increase in circuit density, causes a rapid increase in circuit complexity. Therefore, low area consumption is one of the key challenges for IC design, especially for portable devices. Another important challenge for hardware design is reliability. A chip fabricated using nanoscale complementary metal-oxide-semiconductor (CMOS) technologies will be prone to errors caused by fluctuations in threshold voltage, supply voltage, doping levels, aging, timing errors and soft errors. The design of nanoscale failure-resistant systems is currently of significant interest, especially as the technology scales below 10 nm. Stochastic Computing (SC) is a novel approach to address these challenges in system and circuit design. This dissertation considers the design of digital signal processing and machine learning systems in stochastic logic. Stochastic implementations of finite impulse response (FIR) and infinite impulse response (IIR) filters based on various lattice structures are presented. Implementations of complex functions such as trigonometric, exponential, and sigmoid functions are derived from truncated versions of their Maclaurin series expansions. We also present stochastic computation of polynomials using stochastic subtractors and factorization. Machine learning systems in stochastic logic, including the artificial neural network (ANN) and support vector machine (SVM), are also presented.

    First, we propose novel implementations for linear-phase FIR filters in stochastic logic. The proposed design is based on lattice structures. Compared to direct-form linear-phase FIR filters, linear-phase lattice filters require twice the number of multipliers but the same number of adders. The hardware complexities of stochastic implementations of linear-phase FIR filters for direct-form and lattice structures are comparable. We propose a stochastic implementation of IIR filters using lattice structures where the states are orthogonal and uncorrelated. We present stochastic IIR filters using basic, normalized and modified lattice structures. Simulation results demonstrate high signal-to-error ratio and fault tolerance in these structures. Furthermore, hardware synthesis results show that these filter structures require lower hardware area and power compared to two's complement realizations.

    Second, we present stochastic logic implementations of complex arithmetic functions based on truncated versions of their Maclaurin series expansions. It is shown that a polynomial can be implemented using multiple levels of NAND gates based on Horner's rule, if the coefficients are alternately positive and negative and their magnitudes are monotonically decreasing. Truncated Maclaurin series expansions of arithmetic functions are used to generate polynomials which satisfy these constraints. The input and output of these functions are represented in unipolar format. A polynomial that does not satisfy these constraints can still be implemented based on Horner's rule if each factor of the polynomial satisfies them. Format conversion is proposed for arithmetic functions with input and output represented in different formats, such as cos πx given x ∈ [0, 1] and sigmoid(x) given x ∈ [−1, 1]. Polynomials are transformed to equivalent forms that naturally exploit format conversions. The proposed stochastic logic circuits outperform the well-known Bernstein polynomial based and finite-state-machine (FSM) based implementations. Furthermore, the hardware complexity and the critical path of the proposed implementations are less than those of the Bernstein polynomial based and FSM based implementations in most cases.

    Third, we address subtraction and polynomial computations using unipolar stochastic logic. It is shown that stochastic computation of polynomials can be implemented by using a stochastic subtractor and factorization. Two approaches are proposed to compute subtraction in stochastic unipolar representation. In the first approach, the subtraction operation is approximated by cascading multiple levels of OR and AND gates; the accuracy of the approximation improves as the number of stages increases. In the second approach, the stochastic subtraction is implemented using a multiplexer and a stochastic divider. We propose stochastic computation of polynomials using factorization. Stochastic implementations of first-order and second-order factors are presented for different locations of polynomial roots. Experimental results show that the proposed stochastic logic circuits require less hardware complexity than the previous stochastic polynomial implementation using Bernstein polynomials.

    Finally, this thesis presents novel architectures for machine learning based classifiers using stochastic logic. Three types of classifiers are considered: the linear support vector machine (SVM), the artificial neural network (ANN) and the radial basis function (RBF) SVM. These architectures are validated using seizure prediction from electroencephalogram (EEG) signals as an application example. To improve the accuracy of the proposed stochastic classifiers, a data-oriented linear transform of the input data is proposed for EEG signal classification using linear SVM classifiers. Simulation results in terms of classification accuracy are presented for the proposed stochastic computing and the traditional binary implementations, based on datasets from two patients. It is shown that the accuracies of the proposed stochastic linear SVM are improved by 3.88% and 85.49% for the datasets from patient-1 and patient-2, respectively, by using the proposed linear transform of the input data. Compared to the conventional binary implementation, the accuracy of the proposed stochastic ANN is improved by 5.89% for the datasets from patient-1. For patient-2, the accuracy of the proposed stochastic ANN is improved by 7.49% by using the proposed linear transform of the input data. Additionally, compared to the traditional binary linear SVM and ANN, the hardware complexity, power consumption and critical path of the proposed stochastic implementations are reduced significantly.
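
    To make the unipolar stochastic-logic primitives used throughout the dissertation concrete, the following sketch encodes values in [0, 1] as random bitstreams, multiplies with a bitwise AND, and forms a scaled sum with a multiplexer; the stream length and inputs are arbitrary illustrative choices, not the dissertation's circuits.

```python
# Minimal sketch of unipolar stochastic computing: a value p in [0,1] becomes a
# random bitstream with P(bit=1)=p; multiplication reduces to a bitwise AND,
# and scaled addition to a multiplexer with a p=0.5 select stream.
import numpy as np

rng = np.random.default_rng(42)
L = 2**16                                     # bitstream length (arbitrary)

def to_stream(p):
    return rng.random(L) < p                  # Bernoulli(p) bitstream

def from_stream(bits):
    return bits.mean()                        # estimate of the encoded value

a, b = to_stream(0.6), to_stream(0.7)
prod = a & b                                  # AND gate: multiplies probabilities
sel = to_stream(0.5)
scaled_sum = np.where(sel, a, b)              # MUX: computes (0.6 + 0.7) / 2

print(from_stream(prod))        # ~0.42
print(from_stream(scaled_sum))  # ~0.65
```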

    Chaotic multi-objective optimization based design of fractional order PIλDμ controller in AVR system

    Get PDF
    In this paper, a fractional order (FO) PIλDμ controller is designed to take care of various contradictory objective functions for an Automatic Voltage Regulator (AVR) system. An improved evolutionary Non-dominated Sorting Genetic Algorithm II (NSGA II), which is augmented with a chaotic map for greater effectiveness, is used for the multi-objective optimization problem. The Pareto fronts showing the trade-off between different design criteria are obtained for the PIλDμ and PID controllers. A comparative analysis with respect to the standard PID controller demonstrates the merits and demerits of the fractional order PIλDμ controller. Comment: 30 pages, 14 figures.
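
    As a hedged sketch of what a PIλDμ control law computes (not the paper's implementation), the code below approximates the fractional integral and derivative with truncated Grünwald-Letnikov sums; all gains, orders, and the step size are placeholder values.

```python
# Hedged sketch of the PI^lambda D^mu law
#   u(t) = Kp e(t) + Ki D^(-lambda) e(t) + Kd D^(mu) e(t),
# with the fractional operators approximated by truncated Grunwald-Letnikov sums.
import numpy as np

def gl_weights(alpha, n):
    """First n Grunwald-Letnikov binomial weights (-1)^j C(alpha, j)."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def gl_derivative(e, alpha, h):
    """Truncated GL estimate of D^alpha e at the latest sample."""
    w = gl_weights(alpha, len(e))
    return (h ** -alpha) * np.dot(w, e[::-1])   # most recent sample gets w[0]

def fopid(e_history, h, Kp=1.0, Ki=0.5, Kd=0.2, lam=0.9, mu=0.8):
    # Gains and fractional orders are placeholders, not tuned values.
    e = np.asarray(e_history, dtype=float)
    return (Kp * e[-1]
            + Ki * gl_derivative(e, -lam, h)    # order -lambda: fractional integral
            + Kd * gl_derivative(e, mu, h))

errors = np.exp(-0.05 * np.arange(100))         # stand-in error signal
print(fopid(errors, h=0.1))
```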

    Intelligent control for scalable video processing

    Get PDF
    In this thesis we study a problem related to cost-effective video processing in software by consumer electronics devices, such as digital TVs. Video processing is the task of transforming an input video signal into an output video signal, for example to improve the quality of the signal. This transformation is described by a video algorithm. At a high level, video processing can be seen as the task of processing a sequence of still pictures, called frames. Video processing in consumer electronics devices is subject to strict time constraints. In general, the successively processed frames are needed periodically in time. If a frame is not processed in time, a quality reduction of the output signal may be perceived. Video processing in software is often characterized by highly fluctuating, content-dependent processing times of frames. There is often a considerable gap between the worst-case and average-case processing times of frames. In general, assigning processing time to a software video processing task based on its worst-case needs is not cost-effective. We consider a software video processing task that has been assigned insufficient processing time to process the most compute-intensive frames in time. As a result, a severe quality reduction of the output signal may occur. To optimize the quality of the output signal, given the limited amount of processing time that is available to the task, we do the following. First, we use a technique called asynchronous processing, which allows the task to make more effective use of the available processing time by working ahead. Second, we make use of scalable video algorithms. A scalable video algorithm can process frames at different quality levels: the higher the applied quality level for a frame, the higher the resulting picture quality, but also the more processing time needed. Due to the combination of asynchronous processing and scalable processing, a larger fraction of the frames can be processed in time, though sometimes at a lower picture quality. The problem we consider is to select the quality level for each frame. The objective that we try to optimize reflects the user-perceived quality, and is given by a combination of the number of frames that are not processed in time, the quality levels applied for the processed frames, and the changes in the applied quality level between successive frames. The video signal to be processed is not known in advance, which means that we have to make a quality-level decision for each frame without knowing what processing time this will result in, and without knowing the complexity of the subsequent frames. As a first solution approach we modeled this problem as a Markov decision process. The input of the model is given by the budgeted processing time for the task, and statistics on the processing times of frames at the different quality levels. Solving the Markov decision process results in a Markov strategy that can be used to select a quality level for each frame to be processed, based on the amount of time available for processing until the deadline of the frame. Our first solution approach works well if the processing times of successive frames are independent. In practice, however, the processing times of successive frames can be highly correlated, because successive frames are often very similar. Our second solution approach, which can be seen as an extension of the first, takes care of the dependencies in the processing times of successive frames.
    The idea is that we introduce a measure for the complexity of successively processed frames, based on structural fluctuations in the processing times of the frames. Before processing, we solve the Markov decision process several times, for different values of the complexity measure. During video processing we regularly determine the complexity measure for the frames that have just been processed, and based on this measure we dynamically adapt the Markov policy that is applied to select the quality level for the next frame. The Markov strategies that we use are computed from processing-time statistics of a particular collection of video sequences. Hence, these statistics can differ from the statistics of the video sequence that is actually processed. Therefore we also worked out a third solution approach in which we use a learning algorithm to select the quality levels for frames. The algorithm starts with hardly any processing-time statistics, and has to learn these statistics from run-time experience. Basically, the learning algorithm implicitly solves the Markov decision process at run time, making use of the increasing amount of information that becomes available. The algorithm also takes care of dependencies in the processing times of successive frames, using the same complexity measure as in our second solution approach. From computer simulations we learned that our second and third solution approaches perform close to a theoretical upper bound, determined by a reference strategy that selects the quality levels for frames based on complete knowledge of the processing times of all frames to be processed. Although our solutions are successful in computer simulations, they still have to be tested in a real system.
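
    The toy model below sketches the flavor of the first solution approach: value iteration on a tiny Markov decision process whose state is the discretized time budget before the next deadline and whose actions are quality levels. Every number in it (budgets, cost distributions, rewards, penalties) is invented for illustration and does not come from the thesis.

```python
# Illustrative MDP: state = slack (time slots) before the next deadline,
# action = quality level, frame processing cost is random per level.
import numpy as np

BUDGETS = np.arange(0, 11)                        # slack states, in time slots
LEVELS = {0: ([2, 3], 1.0), 1: ([3, 5], 2.0)}     # level -> (possible costs, reward)
REPLENISH = 4                                     # slots gained per frame period
MISS_PENALTY = 5.0                                # deadline-miss penalty

def value_iteration(gamma=0.95, iters=500):
    V = np.zeros(len(BUDGETS))
    for _ in range(iters):
        newV = np.empty_like(V)
        for s, b in enumerate(BUDGETS):
            q = []
            for costs, reward in LEVELS.values():
                val = 0.0
                for c in costs:                   # costs assumed uniform
                    nb = b + REPLENISH - c
                    missed = nb < 0
                    nb = int(np.clip(nb, 0, BUDGETS[-1]))
                    r = -MISS_PENALTY if missed else reward
                    val += (r + gamma * V[nb]) / len(costs)
                q.append(val)
            newV[s] = max(q)                      # best quality level per state
        V = newV
    return V

print(np.round(value_iteration(), 2))
```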

    Towards Enhanced Diagnosis of Diseases using Statistical Analysis of Genomic Copy Number Data

    Get PDF
    Genomic copy number data are a rich source of information about the biological systems they are collected from. They can be used for the diagnosis of various diseases by identifying the locations and extent of aberrations in DNA sequences. However, copy number data are often contaminated with measurement noise, which drastically affects the quality and usefulness of the data. The objective of this project is to apply statistical filtering and fault detection techniques, such as multiscale wavelet-based filtering and hypothesis-testing-based fault detection, to improve the accuracy of disease diagnosis by enhancing the accuracy of determining the locations of such aberrations. The filtering techniques include Mean Filtering (MF), the Exponentially Weighted Moving Average (EWMA), Standard Multiscale Filtering (SMF) and Boundary Corrected Translation Invariant filtering (BCTI). The fault detection techniques include the Shewhart chart, EWMA and the Generalized Likelihood Ratio (GLR). The performance of these techniques is illustrated using Monte Carlo simulations and through their application to real copy number data. Based on the Monte Carlo simulations, the nonlinear filtering techniques performed better than the linear techniques, with BCTI yielding the lowest error: at an SNR of 1, BCTI had an average mean squared error of 2.34%, whereas mean filtering had the highest error, 5.24%. As for the fault detection techniques, GLR had the lowest missed detection rate, 1.88%, at a fixed false alarm rate of around 4%. At around the same false alarm rate, the Shewhart chart had the highest missed detection rate, 67.4%. Furthermore, these techniques were applied to real genomic copy number data sets, including data from breast cancer cell lines (MPE600) and colorectal cancer cell lines (SW837).
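
    As an example of the simplest of the linear techniques listed, the sketch below applies an EWMA filter to a synthetic piecewise-constant copy-number profile; the smoothing factor and profile are illustrative, not the study's settings.

```python
# Minimal EWMA sketch: s[t] = lam*x[t] + (1-lam)*s[t-1], smoothing a noisy
# piecewise-constant profile with one aberrant segment.
import numpy as np

def ewma(x, lam=0.2):
    """Exponentially weighted moving average of x with smoothing factor lam."""
    s = np.empty_like(x, dtype=float)
    s[0] = x[0]
    for t in range(1, len(x)):
        s[t] = lam * x[t] + (1 - lam) * s[t - 1]
    return s

rng = np.random.default_rng(7)
profile = np.r_[np.zeros(100), 0.8 * np.ones(50), np.zeros(100)]  # one aberration
noisy = profile + rng.standard_normal(len(profile))               # heavy noise
print(np.round(ewma(noisy)[95:110], 2))   # smoothed values around the breakpoint
```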

    Differentiable Artificial Reverberation

    Full text link
    Artificial reverberation (AR) models play a central role in various audio applications. Therefore, estimating the AR model parameters (ARPs) of a target reverberation is a crucial task. Although a few recent deep-learning-based approaches have shown promising performance, their non-end-to-end training scheme prevents them from fully exploiting the potential of deep neural networks. This motivates us to introduce differentiable artificial reverberation (DAR) models, which allow loss gradients to be back-propagated end-to-end. However, implementing the AR models with their difference equations "as is" in a deep-learning framework severely bottlenecks the training speed when executed on a parallel processor like a GPU, due to their infinite impulse response (IIR) components. We tackle this problem by replacing the IIR filters with finite impulse response (FIR) approximations obtained with the frequency-sampling method (FSM). Using the FSM, we implement three DAR models -- differentiable Filtered Velvet Noise (FVN), Advanced Filtered Velvet Noise (AFVN), and Feedback Delay Network (FDN). For each AR model, we train its ARP estimation networks for the analysis-synthesis (RIR-to-ARP) and blind estimation (reverberant-speech-to-ARP) tasks in an end-to-end manner with its DAR model counterpart. Experimental results show that the proposed method achieves consistent performance improvement over the non-end-to-end approaches in both objective metrics and subjective listening test results. Comment: Manuscript submitted to TASL.
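
    The sketch below illustrates the general frequency-sampling idea the paper builds on: sample an IIR filter's frequency response at N points and inverse-FFT to obtain an N-tap FIR approximation that can run in parallel. The one-pole filter and tap count are illustrative assumptions, not the paper's models.

```python
# Hedged FSM sketch: FIR approximation of an IIR filter H(z) = B(z)/A(z) by
# sampling its frequency response and taking the inverse FFT (time-aliased
# impulse response; aliasing is small when the response decays within N taps).
import numpy as np

def fsm_fir_from_iir(b, a, n_taps):
    """N-tap FIR approximation of H(z)=B(z)/A(z) via frequency sampling."""
    w = 2 * np.pi * np.arange(n_taps) / n_taps
    z = np.exp(1j * w)
    H = np.polyval(b[::-1], 1 / z) / np.polyval(a[::-1], 1 / z)  # H(e^{jw})
    return np.real(np.fft.ifft(H))            # impulse-response taps

# One-pole lowpass y[t] = x[t] + 0.9 y[t-1]: b = [1], a = [1, -0.9].
taps = fsm_fir_from_iir(np.array([1.0]), np.array([1.0, -0.9]), n_taps=64)
print(np.round(taps[:8], 4))                   # ~0.9^k, the true impulse response
```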

    Applied Signal Processing

    Get PDF
    Being an interdisciplinary subject, signal processing has applications in almost all scientific fields. Applied Signal Processing aims to link the analog and digital signal processing domains. Since digital signal processing techniques evolved from their analog counterparts, this book begins by explaining the fundamental concepts of analog signal processing and then progresses towards digital signal processing. This helps the reader gain a general overview of the whole subject and establish links between its various fundamental concepts. While the focus of this book is on the fundamentals of signal processing, an understanding of these topics greatly enhances the confident use, as well as the further development, of the design and analysis of digital systems for various engineering and medical applications. Applied Signal Processing also prepares readers to further their knowledge in advanced topics within the field of signal processing.