
    A Survey of Prediction and Classification Techniques in Multicore Processor Systems

    In multicore processor systems, being able to accurately predict the future provides new optimization opportunities that otherwise could not be exploited. For example, an oracle able to predict a certain application's behavior running on a smartphone could direct the power manager to switch to the appropriate dynamic voltage and frequency scaling (DVFS) mode, guaranteeing a minimum level of desired performance while reducing energy consumption and thereby prolonging battery life. Using predictions enables systems to become proactive rather than continuing to operate in a reactive manner. This prediction-based proactive approach has become increasingly popular in the design and optimization of integrated circuits and of multicore processor systems. Prediction has evolved from simple forecasting to sophisticated machine-learning-based prediction and classification that learns from existing data, employs data mining, and predicts future behavior; this can be exploited by novel optimization techniques spanning all layers of the computing stack. In this survey paper, we discuss the most popular prediction and classification techniques in the general context of computing systems, with emphasis on multicore processors. The paper is far from comprehensive, but it will help the reader interested in employing prediction in the optimization of multicore processor systems.
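To make the proactive idea concrete, here is a minimal Python sketch (not from the survey; the predictor, mode names, and thresholds are all hypothetical) in which a simple exponentially weighted moving-average predictor of core utilization drives a DVFS mode decision before the load change actually arrives:

```python
# Hedged sketch: an EWMA utilization predictor feeding a hypothetical
# DVFS mode chooser, illustrating the reactive -> proactive shift.

def choose_dvfs_mode(predicted_util, thresholds=(0.3, 0.7)):
    """Map a predicted utilization in [0, 1] to a frequency mode."""
    low, high = thresholds
    if predicted_util < low:
        return "powersave"      # low predicted load: scale down early
    if predicted_util < high:
        return "balanced"
    return "performance"        # high predicted load: scale up ahead of time

class EwmaPredictor:
    """Exponentially weighted moving average: u_hat = a*u + (1-a)*u_hat."""
    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self.u_hat = 0.0

    def update(self, observed_util):
        self.u_hat = self.alpha * observed_util + (1 - self.alpha) * self.u_hat
        return self.u_hat

predictor = EwmaPredictor(alpha=0.5)
for u in [0.2, 0.25, 0.8, 0.9, 0.85]:          # per-interval core utilization
    mode = choose_dvfs_mode(predictor.update(u))
    print(f"observed={u:.2f} -> mode={mode}")
```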

    Quaternion Information Theoretic Learning Adaptive Algorithms for Nonlinear Adaptive Filtering

    Information Theoretic Learning (ITL) is gaining popularity for designing adaptive filters for non-stationary or non-Gaussian environments [1][2]. ITL cost functions such as the Minimum Error Entropy (MEE) have been applied to both linear and nonlinear adaptive filtering with better overall performance than typical mean squared error (MSE) and least-squares adaptive filtering, especially for nonlinear systems in higher-order-statistic noise environments [3]. Quaternion-valued data processing is beneficial in applications such as robotics and image processing, particularly for performing transformations in 3-dimensional space: quaternions handle data transformations in 3- or 4-dimensional space more conveniently than vector algebra [4, 5, 6, 7, 8]. Adaptive filtering in the quaternion domain operates intrinsically on augmented statistics, in which the covariance of the quaternion input vector is taken into account naturally; as a result, it incorporates the component-wise real-valued cross-correlations, i.e., the coupling among the dimensions of the quaternion input [9]. The generalized Hamilton-real (GHR) calculus for quaternion data simplifies the product and chain rules and allows us to calculate the gradient and Hessian of quaternion-based cost functions of the learning algorithms efficiently [10][11]. Quaternion reproducing kernel Hilbert spaces and their uniqueness provide a mathematical foundation for developing quaternion-valued kernel learning algorithms [12]. The reproducing property of the feature space replaces the inner product of feature samples with a kernel evaluation.
In this dissertation, we first propose a kernel adaptive filter for quaternion data based on the minimum error entropy cost function, referred to as the quaternion kernel minimum error entropy (QKMEE) algorithm [13]. We apply the generalized Hamilton-real (GHR) calculus, which is applicable to the quaternion Hilbert space, to evaluate the cost function gradient and develop the QKMEE algorithm. The minimum error entropy (MEE) algorithm [3, 14, 15] minimizes Renyi's quadratic entropy of the error between the filter output and the desired response, or, indirectly, maximizes the error information potential. The ITL methodology improves the performance of adaptive algorithms in biased or non-Gaussian signal and noise environments compared to mean squared error (MSE) criterion algorithms such as the kernel least mean square algorithm. Second, we develop a kernel adaptive filter for quaternion data based on the normalized minimum error entropy cost function [14]. We again apply the GHR calculus to evaluate the cost function gradient and develop the quaternion kernel normalized minimum error entropy (QKNMEE) algorithm [16]. The new algorithm enhances the QKMEE algorithm in that the selection of the filter update step-size is independent of the input power and the kernel size. Third, we develop a kernel adaptive filter for quaternion-domain data based on an ITL cost function, useful for quaternion-based kernel applications of nonlinear filtering. The new algorithm is based on the error entropy function with a fiducial point and is referred to as the quaternion kernel minimum error entropy with fiducial point (QKMEEF) algorithm [17].
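As a hedged illustration of the MEE criterion underlying the QKMEE family, the following NumPy sketch estimates the error information potential with a Parzen/Gaussian kernel. It is real-valued and non-kernelized for brevity, whereas the dissertation's algorithms operate on quaternion data in a reproducing kernel Hilbert space via the GHR calculus:

```python
import numpy as np

def information_potential(errors, sigma=1.0):
    """Parzen estimate of the information potential
    V(e) = (1/N^2) * sum_{i,j} G_sigma(e_i - e_j).
    Renyi's quadratic entropy is H2(e) = -log V(e), so minimizing the
    error entropy is equivalent to maximizing V(e)."""
    e = np.asarray(errors, dtype=float)
    diffs = e[:, None] - e[None, :]                  # all pairwise e_i - e_j
    gauss = np.exp(-diffs**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    return gauss.mean()                              # (1/N^2) double sum

errors = 0.3 * np.random.default_rng(0).standard_normal(200)
V = information_potential(errors, sigma=0.5)
print("V(e) =", V, "  H2(e) =", -np.log(V))
```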
In our previous work we developed a quaternion kernel adaptive filter based on minimum error entropy, referred to as the QKMEE algorithm [13]. Since entropy does not change with the mean of the distribution, the algorithm may converge to a set of optimal weights without having zero-mean error. Traditionally, to obtain a zero-mean output error, the output during the testing session was biased by the mean of the errors from the training session. However, for a non-symmetric or heavy-tailed error PDF, estimating the error mean is problematic [18]. The minimum error entropy criterion minimizes Renyi's quadratic entropy of the error between the filter output and the desired response, or, indirectly, maximizes the error information potential [19]. Here, the approach is applied to quaternions: adaptive filtering in the quaternion domain intrinsically incorporates the component-wise real-valued cross-correlations, i.e., the coupling among the dimensions of the quaternion input. We apply the GHR calculus, applicable to the quaternion Hilbert space, to evaluate the cost function gradient and develop the quaternion kernel minimum error entropy algorithm with fiducial point. Simulation results show the behavior of the new algorithm (QKMEEF) when the signal is non-Gaussian, in the presence of unimodal versus bi-modal noise distributions. Simulation results also show that QKMEEF can track and predict 4-dimensional non-stationary process signals with correlated components better than the quadruple real-valued KMEEF and Quat-KLMS algorithms.
Fourth, we develop a kernel adaptive filter for quaternion data using a stochastic information gradient (SIG) cost function based on the information theoretic learning (ITL) approach. The new algorithm (QKSIG) is useful for quaternion-based kernel applications of nonlinear filtering [20]. As before, we apply the GHR calculus, applicable to the quaternion Hilbert space, to evaluate the cost function gradient. The QKSIG algorithm minimizes Shannon's entropy of the error between the filter output and the desired response, and minimizes the divergence between the joint densities of the input-desired and input-output pairs. The SIG technique reduces the computational complexity of the error entropy estimation. Here, the ITL approach with SIG is applied to quaternion adaptive filtering for three reasons. First, it reduces the computational complexity of the algorithm compared to our earlier quaternion kernel minimum error entropy (QKMEE) algorithm. Second, it improves the filtering performance by considering the coupling among the dimensions of the quaternion input. Third, it performs better in biased or non-Gaussian signal and noise environments owing to the ITL approach. We present convergence and steady-state performance analyses of the new algorithm (QKSIG). Simulation results show the behavior of QKSIG in quaternion non-Gaussian signal and noise environments compared to existing algorithms such as the quadruple real-valued kernel stochastic information gradient (KSIG) and quaternion kernel LMS (QKLMS) algorithms. Fifth, we develop a kernel adaptive filter for quaternion data based on a stochastic information gradient (SIG) cost function with a self-adjusting step-size.
The new algorithm (QKSIG-SAS) is based on the information theoretic learning (ITL) approach and achieves faster convergence than our previous QKSIG algorithm.
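To make the SIG idea concrete, here is a hedged real-valued sketch of one stochastic information gradient step for a plain linear filter (the dissertation's QKSIG works in the quaternion RKHS with the GHR calculus; the step-size, window length, and kernel width below are illustrative). Only the newest error is paired with the previous L errors, so the per-step cost is O(L) instead of the O(N^2) of the full MEE estimator:

```python
import numpy as np

def sig_step(w, x_win, e_win, eta=0.1, sigma=1.0):
    """One SIG update for y = w . x: gradient ascent on the windowed
    information potential (equivalently, descent on the error entropy).
    x_win: (L+1, d) inputs; e_win: (L+1,) errors; newest sample is last."""
    x_k, e_k = x_win[-1], e_win[-1]
    grad = np.zeros_like(w)
    for x_i, e_i in zip(x_win[:-1], e_win[:-1]):
        d = e_k - e_i
        grad += np.exp(-d**2 / (2 * sigma**2)) * d * (x_k - x_i)
    return w + eta / (len(e_win) * sigma**2) * grad

# Toy identification demo: recover a 3-tap filter from noisy data.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
des = X @ np.array([0.5, -0.2, 0.1]) + 0.05 * rng.standard_normal(500)
w, L = np.zeros(3), 10
for k in range(L, 500):
    e_win = des[k - L:k + 1] - X[k - L:k + 1] @ w
    w = sig_step(w, X[k - L:k + 1], e_win)
print("estimated taps:", w)
```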

    Data-driven deconvolution for large eddy simulations of Kraichnan turbulence

    In this article, we demonstrate the use of artificial neural networks as optimal maps which are utilized for convolution and deconvolution of coarse-grained fields to account for sub-grid scale turbulence effects. We demonstrate that an effective eddy viscosity is predicted by our purely data-driven large eddy simulation framework without explicit utilization of phenomenological arguments. In addition, our data-driven framework precludes the need for true sub-grid stress information during the training phase, owing to its focus on estimating an effective filter and its inverse, so that grid-resolved variables may be related to direct numerical simulation data statistically. The proposed predictive framework is also combined with a statistical truncation mechanism for ensuring numerical realizability in an explicit formulation. Through this we seek to unite structural and functional modeling strategies for modeling non-linear partial differential equations using reduced degrees of freedom. Both a priori and a posteriori results are shown for a two-dimensional decaying turbulence case, in addition to a detailed description of validation and testing. A hyperparameter sensitivity study also shows that the proposed dual-network framework simplifies learning complexity and is viable with exceedingly simple network architectures. Our findings indicate that the proposed framework approximates a robust and stable sub-grid closure which compares favorably to the Smagorinsky and Leith hypotheses for capturing the theoretical k^{-3} scaling in Kraichnan turbulence.
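As a rough, hedged analogue of this idea (not the authors' code), the sketch below trains a small scikit-learn MLP to act as an approximate inverse filter on a synthetic 1-D signal: a Gaussian kernel stands in for the LES filter, and the network learns to map a local stencil of the filtered field back to the unfiltered value:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 2048
u = np.cumsum(rng.standard_normal(n))        # synthetic "DNS" signal (random walk)
u -= u.mean()
kern = np.exp(-0.5 * (np.arange(-4, 5) / 2.0) ** 2)
kern /= kern.sum()                           # Gaussian stand-in for the LES filter
u_bar = np.convolve(u, kern, mode="same")    # coarse-grained field

half = 3                                     # 7-point input stencil
X = np.stack([u_bar[i - half:i + half + 1] for i in range(half, n - half)])
y = u[half:n - half]                         # target: the unfiltered value

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
net.fit(X[:1500], y[:1500])                  # train on one region of the signal
print("held-out R^2:", net.score(X[1500:], y[1500:]))
```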

    On the choice of parameters of the cost function in nested modular RNN's

    We address the choice of the coefficients in the cost function of a modular nested recurrent neural-network (RNN) architecture, known as the pipelined recurrent neural network (PRNN). Such a network can cope with the problem of vanishing gradient experienced in prediction with RNN’s. Constraints on the coefficients of the cost function, in the form of a vector norm, are considered. Unlike the previous cost function for the PRNN, which included a forgetting factor motivated by the recursive least squares (RLS) strategy, the proposed forms of cost function provide “forgetting” of the outputs of adjacent modules based upon the network architecture. Such an approach takes into account the number of modules in the PRNN through the unit norm constraint on the coefficients of its cost function. This is shown to be particularly suitable since, due to the inherent nesting in the PRNN, every module gives its full contribution to the learning process, while the unit-norm-constrained cost function introduces a sense of forgetting in the memory management of the PRNN. The PRNN based upon the modified cost function outperforms existing PRNN schemes in the time series prediction simulations presented.
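A hedged sketch of such a coefficient choice: module-wise weights lambda_i for a PRNN cost of the form E(k) = sum_i lambda_i e_i(k)^2, constrained to unit norm so that forgetting across the M nested modules comes from the architecture rather than from an RLS-style forgetting factor. The geometric profile and the choice of norm here are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

def prnn_cost_weights(M, beta=0.8, norm_order=2):
    """Geometric forgetting across the M modules, projected onto the
    unit-norm constraint ||lambda|| = 1 (the norm order is a choice)."""
    lam = beta ** np.arange(M)
    return lam / np.linalg.norm(lam, ord=norm_order)

def prnn_cost(module_errors, lam):
    """Overall PRNN cost: weighted sum of squared per-module errors."""
    return float(np.sum(lam * np.asarray(module_errors) ** 2))

lam = prnn_cost_weights(M=5)
print("weights:", lam, "  norm:", np.linalg.norm(lam))
print("cost:", prnn_cost([0.10, 0.20, 0.05, 0.30, 0.15], lam))
```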

    Data-driven sub-grid model development for large eddy simulations of turbulence

    Turbulence modeling remains an active area of research due to its significant impact on a diverse set of challenges, such as those pertaining to the aerospace and geophysical communities. Researchers continue to search for modeling strategies that improve the representation of high-wavenumber content in practical computational fluid dynamics applications. The recent successes of machine learning in the physical sciences have motivated a number of studies into the modeling of turbulence from a data-driven point of view. In this research, we utilize physics-informed machine learning to reconstruct the effect of unresolved frequencies (i.e., small-scale turbulence) on grid-resolved flow variables obtained through large eddy simulation. In general, the successful development of any data-driven strategy relies on two phases: learning and a posteriori deployment. The former requires the synthesis of labeled data from direct numerical simulations of the target phenomenon, whereas the latter requires stability-preserving modifications instead of a direct deployment of learning predictions. These stability-preserving techniques may work through prediction modulation, where learning outputs are deployed via an intermediate statistical truncation. They may also work through model classifiers, where the traditional L_2-minimization strategy is replaced by a categorical cross-entropy error that selects the most stable model deployment at each point of the computational grid. In this thesis, we outline several investigations utilizing the aforementioned philosophies and conclude that sub-grid turbulence models built through machine learning are capable of recovering viable statistical trends in stabilized a posteriori deployments for Kraichnan and Kolmogorov turbulence. They therefore represent a promising tool for generating closures that may be utilized in flows belonging to different configurations with different sub-grid modeling requirements.
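As a hedged illustration of the prediction-modulation route mentioned above, a statistical truncation can zero any learned sub-grid prediction that would inject energy into the resolved field. The sign test below, an eddy-viscosity-style dissipation check for 2-D vorticity-form LES, is an assumption chosen for clarity rather than the thesis's exact criterion:

```python
import numpy as np

def truncate_sgs(pi_ml, lap_omega):
    """Keep an ML sub-grid source only where it acts dissipatively:
    matching signs of pi and laplacian(omega) imply a positive
    effective eddy viscosity nu_e = pi / laplacian(omega)."""
    return np.where(pi_ml * lap_omega > 0, pi_ml, 0.0)

pi_ml = np.array([0.4, -0.3, 0.1, -0.2])   # learned SGS source samples
lap_w = np.array([0.5, 0.2, -0.1, -0.6])   # resolved vorticity Laplacian
print(truncate_sgs(pi_ml, lap_w))          # -> [ 0.4  0.   0.  -0.2]
```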

    The Estimation Methods for an Integrated INS/GPS UXO Geolocation System

    This work was supported by a project funded by the US Army Corps of Engineers, Strategic Environmental Research and Development Program, contract number W912HQ-08-C-0044. This report was also submitted to the Graduate School of the Ohio State University in partial fulfillment of the PhD degree in Geodetic Science.
Unexploded ordnance (UXO) refers to explosive weapons such as mines, bombs, bullets, shells, and grenades that failed to explode when they were employed. In North America, especially in the US, UXO is the result of weapon-system testing and troop training by the DoD. The traditional UXO detection method employs metal detectors, which measure distorted signals of local magnetic fields; based on the detected magnetic signals, holes are dug to remove buried UXO. However, the detection and remediation of UXO-contaminated sites using traditional methods are extremely inefficient, in that it is difficult to distinguish buried UXO from the noise of geologic magnetic sources or anthropic clutter items. Reliable discrimination performance of a UXO detection system depends on the sensor technology employed as well as on the data-processing methods that invert the collected data to infer the UXO. Detection systems require very accurate positioning (geolocation) and orientation of the detection units to detect and discriminate candidate UXO from non-hazardous clutter, because the inversion of magnetic or EMI data relies on precise relative locations, orientations, and depths. The position-accuracy requirements for MEC geolocation and characterization using typical state-of-the-art detection instrumentation are classified into three levels: screening, with a position tolerance of 0.5 m (as standard deviation); area mapping (less than 0.05 m); and characterization and discrimination (less than 0.02 m). The primary geolocation system considered is a dual-frequency GPS integrated with a three-dimensional inertial measurement unit (IMU), i.e., an INS/GPS system. Selecting the appropriate estimation method has been the key problem in obtaining highly precise INS/GPS geolocation for UXO detection in dynamic environments. For this purpose, the Extended Kalman Filter (EKF) has been used as the conventional algorithm for the optimal integration of the INS/GPS system. However, newly introduced nonlinear filters can deal with the nonlinear nature of the positioning dynamics as well as the non-Gaussian statistics of the instrument errors, and such nonlinear estimation methods (filtering/smoothing) have been developed and proposed. Therefore, this study focused on optimal estimation methods for highly precise INS/GPS geolocation, using simulations and analyses of two laboratory tests (cart-based and handheld geolocation systems). First, the nonlinear filters (UKF and UPF) were shown to yield superior performance to the EKF in various simulation tests designed to resemble the UXO geolocation environment (highly dynamic, small area). The UKF yields a 50% improvement in position accuracy over the EKF, particularly in the curved sections (medium-grade IMU case). The UPF also performed significantly better than the EKF and shows comparable improvement over the UKF whether the IMU noise probability density function is symmetric or non-symmetric.
Also, since UXO detection surveys do not require real-time operation, each of the developed filters was modified to accommodate the standard Rauch-Tung-Striebel (RTS) smoothing algorithm. The smoothing methods were applied to a typical UXO detection trajectory; the position error was reduced significantly using a minimal number of control points. Finally, these simulation tests confirmed that tactical-grade IMUs (e.g., HG1700 or HG1900) are required to bridge gaps in high-accuracy ranging solutions longer than 1 second. Second, the results of the simulation tests were validated in laboratory tests using navigation-grade and medium-grade IMUs. To overcome inaccurate a priori knowledge of the system's process noise, adaptive filtering methods were applied to the EKF and UKF; the resulting smoothers are called the AEKS and AUKS. The neural-network-aided adaptive nonlinear filtering/smoothing methods (NN-AEKS and NN-AUKS), which are augmented with the RTS smoothing method, were compared with the AEKS and AUKS. Each neural-network-aided adaptive filter/smoother improved the position accuracy in both straight and curved sections. The navigation-grade IMU (H764G) can achieve the area-mapping level of accuracy when the gap between control points is about 8 seconds, and the medium-grade IMUs (HG1700 and HG1900) with NN-AUKS can maintain position errors of less than 10 cm under the same conditions. Neural network aiding also decreases the difference in position error between the straight and curved sections. Third, in the earlier simulation tests, the UPF performed better than the other filters; however, since the UPF needs a large number of samples to represent the a posteriori statistics in a high-dimensional space, the Rao-Blackwellized particle filter (RBPF) can be used as an alternative to avoid the inefficiency of the particle filter. The RBPF was tailored to precise geolocation for UXO detection using the IMU/GPS system and yielded improved estimation results with a small number of samples. The handheld geolocation system using the HG1900 with a nonlinear-filter-based smoother can achieve the discrimination level of accuracy if the update rate of control points is less than 0.5 Hz and 1 Hz for the sweep and swing motions, respectively. Also, the sweep operation is preferred over the swing motion because the position accuracy of the sweep test was better than that of the swing test.
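For readers unfamiliar with the RTS step attached to each filter above, here is a minimal linear Kalman-filter-plus-RTS-smoother sketch (the actual system wraps EKF/UKF variants and INS/GPS dynamics; the constant-velocity model and noise levels below are illustrative):

```python
import numpy as np

def rts_smooth(F, xf, Pf, xp, Pp):
    """Backward Rauch-Tung-Striebel pass. xf/Pf: filtered means/covs at k;
    xp/Pp: one-step predictions for k (made from the filtered state at k-1)."""
    N = len(xf)
    xs, Ps = list(xf), list(Pf)                    # final entry is already smoothed
    for k in range(N - 2, -1, -1):
        C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])  # smoother gain
        xs[k] = xf[k] + C @ (xs[k + 1] - xp[k + 1])
        Ps[k] = Pf[k] + C @ (Ps[k + 1] - Pp[k + 1]) @ C.T
    return xs, Ps

# Forward pass: constant-velocity model, position-only measurements.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.5]])
rng = np.random.default_rng(2)
x_true, x, P = np.zeros(2), np.zeros(2), np.eye(2)
xf, Pf, xp, Pp = [], [], [], []
for _ in range(50):
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    xp.append(x.copy()); Pp.append(P.copy())
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x, P = x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P
    xf.append(x.copy()); Pf.append(P.copy())

xs, Ps = rts_smooth(F, xf, Pf, xp, Pp)
print("smoothed first state:", xs[0])
```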

    Towards Robust and Adaptive Speech Recognition Models
