    Realtime MEG source localization

    Iterative gradient methods such as Levenberg-Marquardt (LM) are in widespread use for source localization from electroencephalographic (EEG) and magnetoencephalographic (MEG) signals. Unfortunately, LM depends sensitively on the initial guess, particularly (and counterintuitively) at higher signal-to-noise ratios, necessitating repeated runs from different starting points. This, combined with LM's high per-step cost, makes its computational burden quite high. To reduce this burden, we trained a multilayer perceptron (MLP) as a real-time localizer. We used an analytical model of quasistatic electromagnetic propagation through the head to map randomly chosen dipoles to sensor activities, and trained an MLP to invert this mapping in the presence of various sorts of noise. With realistic noise, the MLP alone is about five hundred times faster than n-start-LM (with n = 4 needed to match its accuracy), while our hybrid MLP-start-LM is about four times more accurate and thirteen times faster than 4-start-LM.
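
    The pipeline the abstract describes (synthesize sensor data from a forward model, train an MLP to invert it, then refine with LM) can be sketched roughly as below. This is a minimal illustration only: it uses a placeholder point-source forward model, scikit-learn's MLPRegressor, and SciPy's LM solver rather than the paper's analytical quasistatic head model and network details, and all sizes and noise levels are assumptions.

        # A rough sketch, not the paper's implementation: placeholder forward model,
        # scikit-learn MLP, SciPy Levenberg-Marquardt. All constants are assumptions.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from scipy.optimize import least_squares

        rng = np.random.default_rng(0)
        sensors = rng.uniform(-1.0, 1.0, size=(32, 3))   # hypothetical sensor positions

        def forward(pos):
            # stand-in for the analytical quasistatic head model:
            # scalar 1/r^2 falloff from a point source to each sensor
            r = np.linalg.norm(sensors - pos, axis=1)
            return 1.0 / (r ** 2 + 0.1)

        # 1) map randomly chosen source positions to (noisy) sensor activities
        dipoles = rng.uniform(-0.8, 0.8, size=(5000, 3))
        X = np.stack([forward(p) for p in dipoles])
        X += 0.05 * rng.standard_normal(X.shape)

        # 2) train an MLP to invert the forward mapping (the real-time localizer)
        mlp = MLPRegressor(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
        mlp.fit(X, dipoles)

        # 3) hybrid MLP-start-LM: the MLP's output seeds a single LM refinement
        true_pos = np.array([0.2, -0.3, 0.4])
        meas = forward(true_pos) + 0.05 * rng.standard_normal(len(sensors))
        mlp_guess = mlp.predict(meas[None, :])[0]
        refined = least_squares(lambda p: forward(p) - meas, mlp_guess, method="lm")
        print("MLP guess:", mlp_guess, "LM-refined:", refined.x)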

    ForeNet: Fourier recurrent neural networks for time series prediction

    Ying-Qian Zhang. Thesis (M.Phil.), Chinese University of Hong Kong, 2001. Includes bibliographical references (leaves 115-124). Abstracts in English and Chinese.

    Contents:
    1. Introduction: Background; Objective; Contributions; Thesis Overview
    2. Literature Review: Takens' Theorem; Linear Models for Prediction (Autoregressive, Moving Average, and Autoregressive-Moving Average Models; Fitting a Linear Model to a Given Time Series; State-space Reconstruction); Neural Network Models for Time Series Processing (Feed-forward Neural Networks; Recurrent Neural Networks; Training Algorithms for Recurrent Networks); Combining Neural Networks and Other Approximation Techniques
    3. ForeNet: Model and Representation: Fourier Recursive Prediction Equation (Fourier Analysis of Time Series; Recursive Form); Fourier Recurrent Neural Network Model (Neural Network Representation; Architecture of ForeNet)
    4. ForeNet: Implementation: Improvements on ForeNet (Number of Hidden Neurons; Real-valued Outputs); Parameter Initialization; Application of ForeNet: the Process of Time Series Prediction; Some Implications
    5. ForeNet: Initialization: Unfolded Form of ForeNet; Coefficient Analysis (the coefficient sets v_n and μ_n(d)); Initialization Experiments (Objective and Experiment Setting; Sunspot Series; Mackey-Glass Series; Laser Data; Three More Series); Implications of the Proposed Initialization Method
    6. ForeNet: Learning Algorithms: Complex Real-Time Recurrent Learning (CRTRL); Batch-mode Learning; Time Complexity; Property Analysis and Experimental Results (efficient vs. random initialization; complex-valued vs. real-valued network; simple architecture vs. ring-structure RNN; linear vs. nonlinear ForeNet; small number of hidden units); Comparison with Other Models (AR model; TDNN and FIR networks; a few more results); Summary
    7. Learning and Prediction: On-line Training: On-line Learning Algorithm (Advantages and Disadvantages; Training Process); Experiments; Predicting Stock Time Series
    8. Discussions and Conclusions: Limitations of ForeNet; Advantages of ForeNet; Future Work
    Bibliography
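
    The chapter headings point to the model's core ingredient: the discrete Fourier coefficients of a sliding window obey a simple one-step recursion, which (as the headings suggest) the thesis casts as a complex-valued recurrent network and uses to initialize its weights before training with CRTRL. Below is a minimal numpy sketch of that sliding-window recursion only, under standard DFT conventions; the window length is an arbitrary illustrative choice, and ForeNet's actual architecture, hidden-unit reduction, and training are not reproduced here.

        # A minimal sketch (assumptions: numpy FFT conventions, window length N chosen
        # arbitrarily). It shows only the sliding-window DFT recursion that a Fourier
        # recurrent predictor can be built on, not ForeNet itself.
        import numpy as np

        N = 16                                        # sliding-window length (illustrative)
        W = np.exp(2j * np.pi * np.arange(N) / N)     # per-frequency complex "recurrent weights"
        x = np.random.default_rng(1).standard_normal(200)

        F = np.fft.fft(x[:N])                         # DFT of the first window
        for t in range(N, len(x)):
            # slide the window by one sample: drop x[t-N], absorb x[t],
            # then rotate coefficient k by exp(2*pi*i*k/N)
            F = (F - x[t - N] + x[t]) * W
            assert np.allclose(F, np.fft.fft(x[t - N + 1 : t + 1]))  # matches a fresh DFT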