Identifying the roles of theta and beta during empathy: an EEG power spectra and laterality index study
Previous research into empathy has focused on behavioural or fMRI-based methodologies, with very few electroencephalographic (EEG) studies on the topic. In particular, there is a need to clarify, firstly, the differences in EEG activity between emotional and cognitive empathic tasks; secondly, whether theta power is more closely linked to cognitive demand or to the degree of stimulus valence experienced during empathy; and lastly, whether beta is more linked to empathy-specific processing, emotional valence processing or willingness to engage with the task. To examine these issues, the current study recorded the EEG activity of university students whilst they completed six tasks which differed in whether emotional or cognitive processing was required, how much cognitive demand the task imposed and whether the task required empathy. The results showed that theta, beta and alpha activity were higher in non-empathy tasks than in empathy tasks, and that theta activity was asynchronous during the non-empathic emotional task. This led to the conclusions that theta is more likely linked to stimulus valence in empathy tasks, and that beta is more likely linked to emotional valence processing or willingness to engage in a task than to empathy-specific processing
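The abstract refers to band power and a laterality index without giving formulas; a minimal sketch of how such quantities are commonly computed, assuming Welch PSD estimation, conventional band limits, and a (right − left)/(right + left) laterality definition (all assumptions, not details taken from the study):

```python
# Minimal sketch: band power and laterality index for one EEG electrode pair.
# Assumptions (not from the abstract): Welch PSD, theta 4-8 Hz, alpha 8-13 Hz,
# beta 13-30 Hz, and laterality index = (right - left) / (right + left).
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4.0, 8.0), "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def band_power(signal, fs, band):
    """Integrate the Welch PSD of one channel over a frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])

def laterality_index(left, right, fs, band):
    """(right - left) / (right + left) band power for a homologous pair."""
    p_l = band_power(left, fs, band)
    p_r = band_power(right, fs, band)
    return (p_r - p_l) / (p_r + p_l)

# Usage with synthetic signals standing in for a homologous electrode pair.
fs = 256
t = np.arange(0, 10, 1 / fs)
left = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)
right = 0.8 * np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)
print({name: laterality_index(left, right, fs, b) for name, b in BANDS.items()})
```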
A model for prediction of Course VI core course registration.
Massachusetts Institute of Technology. Dept. of Electrical Engineering. Thesis. 1972. B.S. Microfiche copy also available in Barker Engineering Library. Bibliography: leaf 52.
Real-time speckle sensing and suppression with project 1640 at Palomar
Palomar’s Project 1640 (P1640) is the first stellar coronagraph to regularly use active coronagraphic wavefront control (CWFC). For this, it has a hierarchy of offset wavefront sensors (WFS), the most important of which is the higher-order WFS (called CAL), which tracks quasi-static modes between 2 and 35 cycles per aperture. The wavefront is measured in the coronagraph at 0.01 Hz rates, providing slope targets to the upstream Palm 3000 adaptive optics (AO) system. The CWFC handles all non-common-path distortions up to the coronagraphic focal plane mask, but does not sense second-order modes between the WFSs and the science integral field unit (IFU); these modes determine the system’s current limit. We have two CWFC operating modes: (1) P-mode, where we control only phases, generating double-sided dark holes by correcting out to the largest controllable spatial frequencies, and (2) E-mode, where we control both amplitudes and phases, generating single-sided dark holes in specified regions of interest. We describe the performance and limitations of both these modes, and discuss the improvements we are considering going forward
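The abstract distinguishes phase-only (P-mode) from amplitude-and-phase (E-mode) dark-hole generation but does not spell out the control step; a minimal sketch of a generic regularized least-squares dark-hole update, assuming a known Jacobian from deformable-mirror commands to the focal-plane field in a region of interest (the names, the damping term and the solve itself are illustrative assumptions, not the P1640 pipeline):

```python
# Generic dark-hole control step (sketch, not the P1640 implementation):
# given the measured focal-plane field E_roi in a region of interest and a
# Jacobian G (d E_roi / d actuator command), pick the DM command update that
# best cancels E_roi in a damped least-squares sense.
import numpy as np

def dark_hole_update(G, E_roi, damping=1e-3):
    """Solve min_u ||G u + E_roi||^2 + damping * ||u||^2 for the DM update u.

    G     : complex (n_pixels, n_actuators) Jacobian over the region of interest
    E_roi : complex (n_pixels,) measured field in the region of interest
    """
    # Stack real and imaginary parts so the solve is over real actuator commands.
    A = np.vstack([G.real, G.imag])
    b = -np.concatenate([E_roi.real, E_roi.imag])
    lhs = A.T @ A + damping * np.eye(A.shape[1])
    rhs = A.T @ b
    return np.linalg.solve(lhs, rhs)

# Toy usage: random Jacobian and field, one correction step.
rng = np.random.default_rng(0)
G = rng.standard_normal((50, 20)) + 1j * rng.standard_normal((50, 20))
E = rng.standard_normal(50) + 1j * rng.standard_normal(50)
u = dark_hole_update(G, E)
print("residual energy:", np.sum(np.abs(G @ u + E) ** 2), "vs", np.sum(np.abs(E) ** 2))
```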
Hypercube matrix computation task
A major objective of the Hypercube Matrix Computation effort at the Jet Propulsion Laboratory (JPL) is to investigate the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Three scattering analysis codes are being implemented and assessed on a JPL/California Institute of Technology (Caltech) Mark 3 Hypercube. The codes, which utilize different underlying algorithms, provide a means of evaluating the general applicability of this parallel architecture. The three analysis codes being implemented are a frequency domain method of moments code, a time domain finite difference code, and a frequency domain finite elements code. These analysis capabilities are being integrated into an electromagnetics interactive analysis workstation which can serve as a design tool for the construction of antennas and other radiating or scattering structures. The first two years of work on the Hypercube Matrix Computation effort are summarized, including both new developments and results and work previously reported in the Hypercube Matrix Computation Task: Final Report for 1986 to 1987 (JPL Publication 87-18)
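The abstract assumes familiarity with the hypercube architecture it targets; a minimal sketch of the standard hypercube addressing scheme, in which two of the 2^d nodes are directly connected exactly when their binary addresses differ in one bit (illustrative only, not taken from the Mark 3 documentation):

```python
# Sketch of hypercube connectivity: in a d-dimensional hypercube of 2**d nodes,
# two nodes are neighbours exactly when their binary addresses differ in one bit.
# (Illustrative; not from the Mark 3 Hypercube documentation.)

def neighbours(node: int, dim: int) -> list[int]:
    """Direct neighbours of `node` in a `dim`-dimensional hypercube."""
    return [node ^ (1 << bit) for bit in range(dim)]

def route(src: int, dst: int) -> list[int]:
    """One shortest path from src to dst, fixing differing address bits in turn."""
    path, current = [src], src
    diff = src ^ dst
    bit = 0
    while diff:
        if diff & 1:
            current ^= 1 << bit
            path.append(current)
        diff >>= 1
        bit += 1
    return path

print(neighbours(0b101, dim=3))  # [4, 7, 1]
print(route(0b000, 0b101))       # [0, 1, 5]
```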
Approximate exploitability: Learning a best response in large games
A standard metric used to measure the approximate optimality of policies in imperfect information games is exploitability, i.e. the performance of a policy against its worst-case opponent. However, exploitability is intractable to compute in large games as it requires a full traversal of the game tree to calculate a best response to the given policy. We introduce a new metric, approximate exploitability, that calculates an analogous metric using an approximate best response; the approximation is done by using search and reinforcement learning. This is a generalization of local best response, a domain-specific evaluation metric used in poker. We provide empirical results for a specific instance of the method, demonstrating that our method converges to exploitability in the tabular and function approximation settings for small games. In large games, our method learns to exploit both strong and weak agents, learning to exploit an AlphaZero agent
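The exact quantity the authors approximate has a simple closed form in small games; a minimal sketch for a two-player zero-sum matrix game, assuming the common definition exploitability = NashConv / 2 (the example game and function names are illustrative, not the paper's benchmark domains):

```python
# Exact exploitability of a mixed-strategy profile (x, y) in a two-player
# zero-sum matrix game with row-player payoff matrix A.  In large games this
# exact best-response computation is intractable, which motivates the
# approximate best response learned with search and RL in the abstract.
import numpy as np

def exploitability(A, x, y):
    """NashConv / 2: average gain a best-responding opponent could obtain."""
    A = np.asarray(A, dtype=float)
    br_row = np.max(A @ y)     # best response value for the row player vs y
    br_col = -np.min(x @ A)    # best response value for the column player vs x
    value_row = x @ A @ y      # current value for the row player
    nash_conv = (br_row - value_row) + (br_col + value_row)
    return nash_conv / 2.0

# Rock-paper-scissors: the uniform profile is a Nash equilibrium
# (exploitability 0), while always playing rock is exploitable.
A = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
uniform = np.ones(3) / 3
rock = np.array([1.0, 0.0, 0.0])
print(exploitability(A, uniform, uniform))  # ~0.0
print(exploitability(A, rock, uniform))     # 0.5
```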
- …