29 research outputs found
Population level inference for multivariate MEG analysis
Multivariate analysis is a very general and powerful technique for analysing magnetoencephalography (MEG) data. An outstanding problem, however, is how to make inferences that are consistent over a group of subjects about whether there are condition-specific differences in data features, and which features maximise those differences. Here we propose a solution based on Canonical Variates Analysis (CVA) model scoring at the subject level and random effects Bayesian model selection at the group level. We apply this approach to beamformer-reconstructed MEG data in source space. CVA estimates the multivariate patterns of activation that correlate most highly with the experimental design; the order of a CVA model is then determined by the number of significant canonical vectors. Random effects Bayesian model comparison then provides machinery for inferring the optimal order over the group of subjects. Absence of a multivariate dependence is indicated by the null model being the most likely. This approach can also be applied to CVA models with a fixed number of canonical vectors but supplied with different feature sets. We illustrate the method by identifying feature sets based on variable-dimension MEG power spectra in the primary visual cortex and fusiform gyrus that are maximally discriminative of data epochs before versus after visual stimulation.
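As an illustration of the subject-level step, the sketch below computes canonical correlations between a design matrix and source-space features; it uses scikit-learn's CCA as a stand-in for CVA, and all names, shapes, and data are assumptions rather than the authors' implementation. The group-level random effects Bayesian model selection step is only noted in a comment.

```python
# Hypothetical sketch of the subject-level CVA step (not the authors' code).
# Y: trials x source-power features, X: trials x experimental-design regressors.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
Y = rng.standard_normal((200, 40))   # e.g. beamformer power features per trial
X = rng.standard_normal((200, 3))    # e.g. condition regressors per trial

def canonical_correlations(X, Y, n_components=3):
    """Correlation between paired canonical variates for each component."""
    cca = CCA(n_components=n_components).fit(X, Y)
    Xc, Yc = cca.transform(X, Y)
    return np.array([np.corrcoef(Xc[:, i], Yc[:, i])[0, 1]
                     for i in range(n_components)])

print(canonical_correlations(X, Y))
# Model order would then be scored per subject (e.g. an approximate model
# evidence for 0..K canonical vectors) and compared across subjects with
# random-effects Bayesian model selection; that step is not sketched here.
```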
Working memory replay prioritizes weakly attended events
One view of working memory posits that maintaining a series of events requires their sequential and equal mnemonic replay. Another view is that the content of working memory maintenance is prioritized by attention. We decoded the dynamics of retaining a sequence of items using magnetoencephalography: participants encoded sequences of three stimuli depicting a face, a manufactured object, or a natural item and maintained them in working memory for 5000 ms. Memory for sequence position and stimulus details was probed at the end of the maintenance period. Decoding of brain activity revealed that one of the three stimuli dominated maintenance, independent of its sequence position or category, and memory was enhanced for the selectively replayed stimulus. Analysis of event-related responses during encoding of the sequence showed that the selectively replayed stimuli were determined by the degree of attention at encoding: they were those with the weakest initial encoding, as indexed by weaker visual attention signals. These findings do not rule out sequential mnemonic replay but reveal that attention influences the content of working memory maintenance by prioritizing replay of weakly encoded events. We propose that the prioritization of weakly encoded stimuli protects them from interference during the maintenance period, whereas the more strongly encoded stimuli can be retrieved from long-term memory at the end of the delay period.
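As a point of reference, the kind of time-resolved decoding described here can be sketched as follows; the array shapes, category coding, and classifier choice are illustrative assumptions, not the study's actual pipeline.

```python
# Illustrative time-resolved decoding of stimulus category from MEG epochs
# (a generic sketch, not the study's pipeline). Assumed shapes:
#   epochs: trials x sensors x time points; labels: 0 = face, 1 = object, 2 = natural
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, 64, 100))
labels = rng.integers(0, 3, size=120)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = np.array([
    cross_val_score(clf, epochs[:, :, t], labels, cv=5).mean()
    for t in range(epochs.shape[2])
])  # cross-validated decoding accuracy as a function of time during maintenance
```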
Replay of very early encoding representations during recollection
Long-term memories are linked to cortical representations of perceived events, but it is unclear which types of representations can later be recollected. Using magnetoencephalography-based decoding, we examined which brain activity patterns elicited during encoding are later replayed during recollection in the human brain. The results show that the recollection of images depicting faces and scenes is associated with a replay of neural representations that are formed at very early (180 ms) stages of encoding. This replay occurs rapidly, 500 ms after the onset of a cue that prompts recollection, and correlates with source memory accuracy. Therefore, long-term memories are rapidly replayed during recollection and involve representations that were formed at very early stages of encoding. These findings indicate that very early representational information can be preserved in the memory engram and can be faithfully and rapidly reinstated during recollection. These novel insights into the nature of the memory engram provide constraints for mechanistic models of long-term memory function.
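A hedged sketch of the encoding-to-recollection generalization analysis implied here (train a classifier at each encoding time point, test it at each recollection time point) might look as follows; the data shapes, classifier, and simulated arrays are assumptions for illustration only, not the authors' code.

```python
# Sketch of encoding-to-recollection temporal generalization (hypothetical data).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
enc = rng.standard_normal((100, 64, 60))  # encoding epochs: trials x sensors x time
rec = rng.standard_normal((100, 64, 60))  # recollection epochs, same trials
y = rng.integers(0, 2, size=100)          # 0 = face, 1 = scene

gen = np.zeros((enc.shape[2], rec.shape[2]))
for t_enc in range(enc.shape[2]):
    clf = make_pipeline(StandardScaler(), LinearSVC()).fit(enc[:, :, t_enc], y)
    for t_rec in range(rec.shape[2]):
        gen[t_enc, t_rec] = clf.score(rec[:, :, t_rec], y)
# Above-chance generalization from early encoding time points (~180 ms) to
# post-cue recollection time points (~500 ms) would indicate replay of early
# encoding representations during recollection.
```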
The optimized Rayleigh-Ritz scheme for determining the quantum-mechanical spectrum
The convergence of the Rayleigh-Ritz method with nonlinear parameters optimized through minimization of the trace of the truncated matrix is demonstrated by comparison with analytically known eigenstates of various quasi-solvable systems. We show that the basis of harmonic oscillator eigenfunctions with optimized frequency Ω enables determination of bound-state energies of one-dimensional oscillators to an arbitrary accuracy, even in the case of highly anharmonic multi-well potentials. The same is true in the spherically symmetric case of V(r) = ω²r²/2 + λr^k, if k > 0. For spiked oscillators with k < -1, the basis of the pseudoharmonic oscillator eigenfunctions with two parameters Ω and γ is more suitable, and optimization of the latter appears crucial for a precise determination of the spectrum. Comment: 22 pages, 8 figures
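The scheme lends itself to a short numerical illustration. The sketch below treats an assumed one-dimensional example, V(x) = x²/2 + λx⁴ (i.e. ω = 1, k = 4 in the notation above): the basis frequency Ω is fixed by minimizing the trace of the truncated Hamiltonian, and the truncated block is then diagonalized. Basis sizes and parameter values are illustrative and not taken from the paper.

```python
# Minimal sketch of the optimized Rayleigh-Ritz scheme for an illustrative
# 1D anharmonic oscillator H = p^2/2 + x^2/2 + lam*x^4 (not the paper's code).
import numpy as np
from scipy.optimize import minimize_scalar

def hamiltonian(Omega, lam=1.0, n_aux=120):
    """H in the harmonic-oscillator basis of frequency Omega (large auxiliary basis)."""
    a = np.diag(np.sqrt(np.arange(1.0, n_aux)), 1)   # annihilation operator
    x = (a + a.T) / np.sqrt(2.0 * Omega)             # position operator
    p2 = -0.5 * Omega * (a.T - a) @ (a.T - a)        # momentum-squared matrix (real)
    x2 = x @ x
    return 0.5 * p2 + 0.5 * x2 + lam * (x2 @ x2)

def truncated_trace(Omega, n_basis):
    # Trace of the upper-left n_basis x n_basis block: the quantity minimized
    # to fix the nonlinear basis parameter Omega.
    return np.trace(hamiltonian(Omega)[:n_basis, :n_basis])

n_basis = 20
res = minimize_scalar(truncated_trace, bounds=(0.1, 20.0), args=(n_basis,),
                      method="bounded")
H = hamiltonian(res.x)[:n_basis, :n_basis]
print("optimal Omega:", round(res.x, 3))
print("lowest bound-state energies:", np.linalg.eigvalsh(H)[:4])
```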
Designing organometallic compounds for catalysis and therapy
Bioorganometallic chemistry is a rapidly developing area of research. In recent years organometallic compounds have provided a rich platform for the design of effective catalysts, e.g. for olefin metathesis and transfer hydrogenation. Electronic and steric effects are used to control both the thermodynamics and kinetics of ligand substitution and redox reactions of metal ions, especially Ru(II). Can similar features be incorporated into the design of targeted organometallic drugs? Such complexes offer potential for novel mechanisms of drug action through incorporation of outer-sphere recognition of targets and controlled activation features based on ligand substitution as well as metal- and ligand-based redox processes. We focus here on η⁶-arene, η⁵-cyclopentadienyl sandwich and half-sandwich complexes of Fe(II), Ru(II), Os(II) and Ir(III) with promising activity towards cancer, malaria, and other conditions. © 2012 The Royal Society of Chemistry
Smart segmentation supports transfer learning
Speeded learning based on previous experience, referred to as transfer learning, is thought to depend upon identification of a stable task structure. However, many common tasks contain a hierarchical or nested stable structure, in which subtasks vary in behavioral relevance. How we identify a nested stable task structure, and the extent to which transfer learning is linked to task segmentation, are currently unknown. We examined learning of a fixed sequence of tasks in which some subtasks acted as distractors and other subtasks determined the outcome of successful goal-oriented navigation. The relationship between subsequent memory, as a marker of task segmentation, and reaction time during learning, together with computational modeling of learning and segmentation, revealed that participants who demonstrated transfer learning adopted a smart task segmentation strategy, including separation of distracting subtasks.