
    A comparison of univariate methods for forecasting electricity demand up to a day ahead

    This empirical paper compares the accuracy of six univariate methods for short-term electricity demand forecasting for lead times up to a day ahead. The very short lead times are of particular interest because univariate methods are often replaced by multivariate methods for prediction beyond about six hours ahead. The methods considered include the recently proposed exponential smoothing method for double seasonality and a new method based on principal component analysis (PCA). The methods are compared using a time series of hourly demand for Rio de Janeiro and a series of half-hourly demand for England and Wales. The PCA method performed well, but, overall, the best results were achieved with the exponential smoothing method, leading us to conclude that simpler and more robust methods, which require little domain knowledge, can outperform more complex alternatives.
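
    The double-seasonal exponential smoothing referred to above can be summarised by a handful of recursive updates. The sketch below is a minimal additive variant for hourly data with a within-day cycle of 24 and a within-week cycle of 168; the smoothing parameters, initialisation and function name are illustrative assumptions rather than the paper's exact formulation, in which the parameters would be estimated by minimising one-step-ahead forecast errors.

        import numpy as np

        def double_seasonal_es(y, s1=24, s2=168, alpha=0.1, delta=0.2, omega=0.2, h=24):
            """Additive exponential smoothing with intraday (s1) and intraweek (s2)
            cycles. y is an hourly demand series spanning several weeks; h is the
            forecast horizon in hours."""
            y = np.asarray(y, dtype=float)
            n = len(y)
            level = y[:s2].mean()                  # crude initial level
            week = np.zeros(n)
            week[:s2] = y[:s2] - level             # first week's deviations
            day = np.zeros(n)                      # daily seasonals start at zero

            for t in range(s2, n):
                d_prev, w_prev = day[t - s1], week[t - s2]
                new_level = alpha * (y[t] - d_prev - w_prev) + (1 - alpha) * level
                day[t] = delta * (y[t] - new_level - w_prev) + (1 - delta) * d_prev
                week[t] = omega * (y[t] - new_level - d_prev) + (1 - omega) * w_prev
                level = new_level

            # h-step-ahead forecasts reuse the most recently updated seasonal states
            return np.array([level + day[n - s1 + k % s1] + week[n - s2 + k % s2]
                             for k in range(h)])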

    Relating Objective and Subjective Performance Measures for AAM-based Visual Speech Synthesizers

    We compare two approaches for synthesizing visual speech using Active Appearance Models (AAMs): one that uses acoustic features as input, and one that uses a phonetic transcription as input. Both synthesizers are trained using the same data and their performance is measured using both objective and subjective testing. We investigate the impact of likely sources of error in the synthesized visual speech by introducing typical errors into real visual speech sequences and subjectively measuring the perceived degradation. When only a small region (e.g. a single syllable) of ground-truth visual speech is incorrect, we find that the subjective score for the entire sequence is lower than that of sequences generated by our synthesizers. This observation motivates further consideration of an often ignored issue: to what extent are subjective measures of performance correlated with objective ones? Significantly, we find that the most commonly used objective measures are not necessarily the best indicators of viewer-perceived quality. We empirically evaluate alternatives and show that the cost of a dynamic time warp of synthesized visual speech parameters to the respective ground-truth parameters is a better indicator of subjective quality.
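
    The dynamic-time-warp cost mentioned in the final sentence can be computed with standard dynamic programming over the synthesized and ground-truth AAM parameter trajectories. The sketch below is a generic implementation assuming frame-wise Euclidean distance and path-length normalisation; the paper's exact distance measure and normalisation are not specified in the abstract.

        import numpy as np

        def dtw_cost(synth, truth):
            """DTW cost between two AAM parameter trajectories.
            synth, truth: arrays of shape (T1, d) and (T2, d), one row per video
            frame, one column per AAM parameter."""
            t1, t2 = len(synth), len(truth)
            # pairwise Euclidean distances between frames
            dist = np.linalg.norm(synth[:, None, :] - truth[None, :, :], axis=-1)

            acc = np.full((t1 + 1, t2 + 1), np.inf)
            acc[0, 0] = 0.0
            for i in range(1, t1 + 1):
                for j in range(1, t2 + 1):
                    acc[i, j] = dist[i - 1, j - 1] + min(acc[i - 1, j],      # insertion
                                                         acc[i, j - 1],      # deletion
                                                         acc[i - 1, j - 1])  # match
            # normalise by path length so sequences of different durations compare
            return acc[t1, t2] / (t1 + t2)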

    Accelerating Deep Learning with Shrinkage and Recall

    Deep Learning is a very powerful class of machine learning models, but training the large number of parameters spread across many layers is very slow when the data and the architecture are large. Inspired by the shrinking technique used to accelerate the Support Vector Machine (SVM) algorithm and the screening technique used in LASSO, we propose a shrinking Deep Learning with recall (sDLr) approach to speed up deep learning computation. We evaluate sDLr with a Deep Neural Network (DNN), a Deep Belief Network (DBN) and a Convolutional Neural Network (CNN) on 4 data sets. Results show that the speedup from sDLr can exceed 2.0 while still giving competitive classification performance. Comment: The 22nd IEEE International Conference on Parallel and Distributed Systems (ICPADS 2016).
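
    The abstract does not spell out the shrinking rule, so the sketch below only illustrates the general idea it borrows from SVM shrinking and LASSO screening: periodically drop training samples the network already fits well, and periodically recall the full training set so nothing is discarded for good. The thresholds, schedule, toy data and PyTorch model are assumptions for illustration, not the authors' sDLr procedure.

        import torch
        import torch.nn as nn

        torch.manual_seed(0)
        X = torch.randn(2000, 20)                        # toy stand-in data set
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).long()

        model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
        opt = torch.optim.SGD(model.parameters(), lr=0.1)
        loss_fn = nn.CrossEntropyLoss(reduction="none")  # per-sample losses

        active = torch.arange(len(X))                    # indices currently trained on
        SHRINK_EVERY, RECALL_EVERY, EPS = 5, 20, 0.05    # illustrative schedule

        for epoch in range(1, 61):
            losses = loss_fn(model(X[active]), y[active])
            opt.zero_grad()
            losses.mean().backward()
            opt.step()

            if epoch % RECALL_EVERY == 0:
                active = torch.arange(len(X))            # recall: restore every sample
            elif epoch % SHRINK_EVERY == 0:
                keep = losses.detach() > EPS             # shrink: drop well-fit samples
                if keep.any():
                    active = active[keep]

        print("final active training set size:", len(active))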

    A Long Short-Term Memory Recurrent Neural Network Framework for Network Traffic Matrix Prediction

    Network Traffic Matrix (TM) prediction is defined as the problem of estimating future network traffic from previously observed network traffic data. It is widely used in network planning, resource management and network security. Long Short-Term Memory (LSTM) is a specific recurrent neural network (RNN) architecture that is well-suited to learning from experience to classify, process and predict time series with time lags of unknown size. LSTMs have been shown to model temporal sequences and their long-range dependencies more accurately than conventional RNNs. In this paper, we propose an LSTM RNN framework for predicting short- and long-term traffic matrices in large networks. By validating our framework on real-world data from the GEANT network, we show that our LSTM models converge quickly and give state-of-the-art TM prediction performance for relatively small model sizes. Comment: Submitted for peer review. arXiv admin note: text overlap with arXiv:1402.1128 by other authors.
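
    A minimal version of such a predictor can be written in a few lines of PyTorch: feed a window of flattened traffic-matrix snapshots into a stacked LSTM and regress the next snapshot from the final hidden state. The window length, hidden size and the 23-node topology (529 origin-destination flows, matching GEANT's size) in the sketch below are illustrative choices, not the paper's exact configuration.

        import torch
        import torch.nn as nn

        class TMPredictor(nn.Module):
            """Many-to-one LSTM: a window of past traffic-matrix snapshots in,
            the next snapshot out; each snapshot is flattened to a vector of
            n_nodes * n_nodes origin-destination flows."""
            def __init__(self, n_flows, hidden=128, layers=2):
                super().__init__()
                self.lstm = nn.LSTM(n_flows, hidden, num_layers=layers, batch_first=True)
                self.head = nn.Linear(hidden, n_flows)

            def forward(self, x):                  # x: (batch, window, n_flows)
                out, _ = self.lstm(x)
                return self.head(out[:, -1, :])    # predict from the last time step

        n_flows, window = 23 * 23, 10              # 23 nodes -> 529 OD flows
        model = TMPredictor(n_flows)
        x = torch.randn(32, window, n_flows)       # a batch of 32 sliding windows
        y_hat = model(x)                           # predicted next TM, shape (32, 529)
        loss = nn.functional.mse_loss(y_hat, torch.randn(32, n_flows))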

    Hand classification of fMRI ICA noise components

    We present a practical "how-to" guide to help determine whether single-subject fMRI independent components (ICs) characterise structured noise or not. Manual identification of signal and noise after ICA decomposition is required for efficient data denoising: to train supervised algorithms, to check the results of unsupervised ones, or to manually clean the data. In this paper we describe the main spatial and temporal features of ICs and provide general guidelines on how to evaluate them. Examples of signal and noise components are provided from a wide range of datasets (3T data, including examples from the UK Biobank and the Human Connectome Project, and 7T data), together with practical guidelines for their identification. Finally, we discuss how data quality, data type and preprocessing can influence the characteristics of the ICs and present examples of particularly challenging datasets.
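
    One temporal feature such a guide typically leans on is the frequency content of a component's time course: BOLD fluctuations of neural origin are concentrated at low frequencies, so a component dominated by high-frequency power is a noise candidate. The helper below computes that power fraction; the 0.1 Hz cutoff, the flagging threshold and the example TR are illustrative assumptions, and in practice this is only one feature weighed alongside the spatial map and the other criteria the paper describes.

        import numpy as np

        def high_freq_fraction(timecourse, tr, cutoff_hz=0.1):
            """Fraction of an IC time course's spectral power above cutoff_hz.
            timecourse: 1-D array (one component's time series); tr: repetition
            time in seconds."""
            freqs = np.fft.rfftfreq(len(timecourse), d=tr)
            power = np.abs(np.fft.rfft(timecourse - timecourse.mean())) ** 2
            return power[freqs > cutoff_hz].sum() / power.sum()

        # illustrative use: flag components whose power sits mostly above 0.1 Hz
        # tc = ...   # one component's time course from the ICA mixing matrix
        # if high_freq_fraction(tc, tr=0.72) > 0.5:
        #     print("likely noise component")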

    Aggregated functional data model for Near-Infrared Spectroscopy calibration and prediction

    Calibration and prediction for NIR spectroscopy data are performed based on a functional interpretation of the Beer-Lambert formula. Considering that, for each chemical sample, the resulting spectrum is a continuous curve obtained as the summation of overlapped absorption spectra from each analyte plus a Gaussian error, we assume that each individual spectrum can be expanded as a linear combination of B-spline basis functions. Calibration is then performed using two procedures for estimating the individual analyte curves: basis smoothing and smoothing splines. Prediction is done by minimizing the squared prediction error. To assess the variance of the predicted values, we use a leave-one-out jackknife technique. Departures from the standard error models are discussed through a simulation study, in particular how correlated errors impact the calibration step and consequently the prediction of the analytes' concentrations. Finally, the performance of our methodology is demonstrated through the analysis of two publicly available datasets. Comment: 27 pages, 7 figures, 7 tables.
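
    The basis-smoothing route can be sketched as two least-squares steps: regress the training spectra on the known concentrations to obtain raw analyte curves, project those curves onto a B-spline basis to smooth them, and then estimate concentrations for a new spectrum by least squares against the smoothed curves. Function names, basis size and knot placement below are illustrative, and the smoothing-splines alternative and the jackknife variance step are omitted.

        import numpy as np
        from scipy.interpolate import splev

        def bspline_basis(x, n_basis=20, degree=3):
            """Evaluate a clamped B-spline basis at the points x (wavelengths)."""
            n_inner = n_basis - degree + 1
            inner = np.linspace(x.min(), x.max(), n_inner)
            knots = np.r_[[x.min()] * degree, inner, [x.max()] * degree]
            basis = np.empty((len(x), n_basis))
            for j in range(n_basis):
                coef = np.zeros(n_basis)
                coef[j] = 1.0                       # evaluate one basis function
                basis[:, j] = splev(x, (knots, coef, degree))
            return basis

        def calibrate(C, Y, B):
            """C: (samples x analytes) concentrations, Y: (samples x wavelengths)
            spectra, B: B-spline basis; returns smooth analyte curves S."""
            S_raw, *_ = np.linalg.lstsq(C, Y, rcond=None)    # Y ~ C @ S
            A, *_ = np.linalg.lstsq(B, S_raw.T, rcond=None)  # smooth each curve
            return (B @ A).T

        def predict(y_new, S):
            """Least-squares concentration estimate for one new spectrum."""
            c_hat, *_ = np.linalg.lstsq(S.T, y_new, rcond=None)
            return c_hat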

    Decoding social intentions in human prehensile actions: Insights from a combined kinematics-fMRI study

    Consistent evidence suggests that the way we reach and grasp an object is modulated not only by object properties (e.g., size, shape, texture, fragility and weight), but also by the type of intention driving the action, among which is the intention to interact with another agent (i.e., social intention). Action observation studies ascribe the neural substrate of this 'intentional' component to the putative mirror neuron system (pMNS) and the mentalizing system (MS). How social intentions are translated into executed actions, however, has yet to be addressed. We conducted a kinematic and a functional Magnetic Resonance Imaging (fMRI) study considering a reach-to-grasp movement performed towards the same object positioned at the same location but with different intentions: passing it to another person (social condition) or putting it on a concave base (individual condition). Kinematics showed that individual and social intentions are characterized by different profiles, with a slower movement at the level of both the reaching (i.e., arm movement) and the grasping (i.e., hand aperture) components. fMRI results showed that: (i) distinct voxel pattern activity for the social and the individual condition is present within the pMNS and the MS during action execution; (ii) decoding accuracies of regions belonging to the pMNS and the MS are correlated, suggesting that these two systems could interact to generate appropriate motor commands. Results are discussed in terms of motor simulation and inferential processes as part of a hierarchical generative model for action intention understanding and the generation of appropriate motor commands.
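
    The voxel-pattern decoding in point (i) is typically implemented as a cross-validated multivariate classification. The sketch below uses a linear SVM with leave-one-run-out cross-validation on simulated ROI data; the abstract does not specify the classifier, features or cross-validation scheme actually used, so everything here is a generic stand-in.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

        # X: one row per trial, one column per voxel in an ROI of the pMNS or MS
        # y: 0 = individual intention, 1 = social intention; runs: scanning runs
        rng = np.random.default_rng(0)
        n_trials, n_voxels, n_runs = 80, 300, 8
        X = rng.standard_normal((n_trials, n_voxels))   # stand-in for trial betas
        y = np.tile([0, 1], n_trials // 2)
        runs = np.repeat(np.arange(n_runs), n_trials // n_runs)

        # leave-one-run-out keeps training and test trials from separate runs
        acc = cross_val_score(SVC(kernel="linear"), X, y, groups=runs,
                              cv=LeaveOneGroupOut(), scoring="accuracy")
        print("mean decoding accuracy:", acc.mean())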