
    Nuclear Spin Dynamics in Double Quantum Dots: Fixed Points, Transients, and Intermittency

    Transport through spin-blockaded quantum dots provides a means for electrical control and detection of nuclear spin dynamics in the host material. Although such experiments have become increasingly popular in recent years, interpretation of their results in terms of the underlying nuclear spin dynamics remains challenging. Here we point out a fundamental process in which nuclear spin dynamics can be driven by electron shot noise; fast electric current fluctuations generate much slower nuclear polarization dynamics, which in turn affect electron dynamics via the Overhauser field. The resulting extremely slow intermittent current fluctuations account for a variety of observed phenomena that were not previously understood. Comment: version accepted for publication in Physical Review B, figure repaired
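    To make the separation of timescales in this mechanism concrete, the toy simulation below drives a slow polarization-like variable with fast Poisson-distributed kicks whose rate depends on that variable, standing in for feedback through the Overhauser field. It is a hedged illustration of the feedback loop only, with made-up names and parameter values, not the paper's model.

```python
# Toy illustration only (not the paper's model): fast "shot-noise" kicks whose
# rate depends on a slow polarization-like variable P, which relaxes slowly,
# closing the feedback loop. All names and parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
dt, relax, kick = 1.0, 1e-4, 1e-3        # time step, slow relaxation rate, kick size
n_steps = 200_000
P = np.zeros(n_steps)

for i in range(1, n_steps):
    rate = 1.0 / (1.0 + P[i - 1] ** 2)   # stand-in for the electron current set by P
    kicks = rng.poisson(rate * dt)       # fast current fluctuations in this step
    # Fast fluctuations accumulate into much slower polarization dynamics.
    P[i] = P[i - 1] - relax * P[i - 1] * dt + kick * (kicks - rate * dt)

print(f"polarization drifts slowly: P[1000] = {P[1000]:.4f}, P[-1] = {P[-1]:.4f}")
```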

    Anomalous random multipolar driven insulators

    It is by now well established that periodically driven quantum many-body systems can realize topological nonequilibrium phases without any equilibrium counterpart. Here we show that, even in the absence of time translation symmetry, nonequilibrium topological phases of matter can exist in aperiodically driven systems for parametrically long, tunable prethermal lifetimes. As a prerequisite, we first demonstrate the existence of long-lived prethermal Anderson localization in two dimensions under random multipolar driving. We then show that the localization may be topologically nontrivial, with a quantized bulk orbital magnetization, even though there are no well-defined Floquet operators. We further confirm the existence of this anomalous random multipolar driven insulator by detecting quantized charge pumping at the boundaries, which renders it experimentally observable.
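    As a sketch of what a random multipolar drive sequence can look like in practice, the snippet below builds an n-multipolar block of two elementary drive labels by recursively pairing a block with its sign-reversed partner, then randomly concatenates the block and its anti-block. This construction is an assumption made for illustration and may differ in detail from the one used in the paper.

```python
# Hedged sketch: one way to build an n-multipolar drive sequence out of two
# elementary Hamiltonian labels "+" and "-" via recursive pairing. Treat the
# exact construction as an assumption, not the paper's definition.
import random

def multipole(n):
    """Return the n-th multipole block and its 'anti' block as label tuples."""
    if n == 0:
        return ("+",), ("-",)
    block, anti = multipole(n - 1)
    return block + anti, anti + block

def random_multipolar_sequence(n, length, seed=0):
    """Randomly concatenate the n-multipole block and its anti-block."""
    rng = random.Random(seed)
    block, anti = multipole(n)
    out = []
    for _ in range(length):
        out.extend(block if rng.random() < 0.5 else anti)
    return out

# Example: a random quadrupolar (n = 2) sequence of elementary drive steps.
print("".join(random_multipolar_sequence(n=2, length=4)))
```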

    The Organization of Working Memory Networks is Shaped by Early Sensory Experience

    Early deafness results in crossmodal reorganization of the superior temporal cortex (STC). Here, we investigated the effect of deafness on cognitive processing. Specifically, we studied the reorganization, due to deafness and sign language (SL) knowledge, of linguistic and nonlinguistic visual working memory (WM). We conducted an fMRI experiment in groups that differed in their hearing status and SL knowledge: deaf native signers, hearing native signers, and hearing nonsigners. Participants performed a 2-back WM task and a control task. Stimuli were signs from British Sign Language (BSL) or moving nonsense objects in the form of point-light displays. We found characteristic WM activations in fronto-parietal regions in all groups. However, deaf participants also recruited bilateral posterior STC during the WM task, independently of the linguistic content of the stimuli, and showed less activation in fronto-parietal regions. Resting-state connectivity analysis showed increased connectivity between frontal regions and STC in deaf compared to hearing individuals. WM for signs did not elicit differential activations, suggesting that SL WM does not rely on modality-specific linguistic processing. These findings suggest that WM networks are reorganized by early deafness, and that the organization of cognitive networks is shaped by the nature of the sensory inputs available during development.

    Should We Learn Most Likely Functions or Parameters?

    Standard regularized training procedures correspond to maximizing a posterior distribution over parameters, known as maximum a posteriori (MAP) estimation. However, model parameters are of interest only insofar as they combine with the functional form of a model to provide a function that can make good predictions. Moreover, the most likely parameters under the parameter posterior do not generally correspond to the most likely function induced by the parameter posterior. In fact, we can re-parametrize a model such that any setting of parameters can maximize the parameter posterior. As an alternative, we investigate the benefits and drawbacks of directly estimating the most likely function implied by the model and the data. We show that this procedure leads to pathological solutions when using neural networks, prove conditions under which the procedure is well-behaved, and provide a scalable approximation. Under these conditions, we find that function-space MAP estimation can lead to flatter minima, better generalization, and improved robustness to overfitting. Comment: NeurIPS 2023. Code available at https://github.com/activatedgeek/function-space-ma
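    The reparametrization point can be seen in a one-dimensional toy problem (an assumption for illustration, not taken from the paper): because the parameter posterior picks up a Jacobian under a change of variables, the parameter-space MAP moves even though the set of representable functions does not.

```python
# Hedged toy example (not from the paper): the parameter-space MAP is not
# invariant under reparametrization. Model: y = w + noise, prior w ~ N(0, 1),
# one observation y = 2. Reparametrize w = theta**3 and compare MAPs.
import numpy as np

y, sigma = 2.0, 1.0

def log_post_w(w):
    # log p(w | y) up to a constant: Gaussian prior times Gaussian likelihood.
    return -0.5 * w**2 - 0.5 * ((y - w) / sigma) ** 2

def log_post_theta(theta):
    # Change of variables w = theta**3 brings in a Jacobian |dw/dtheta| = 3 theta^2.
    w = theta**3
    return log_post_w(w) + np.log(3.0 * theta**2 + 1e-300)

w_grid = np.linspace(-3, 3, 200001)
theta_grid = np.linspace(-3, 3, 200001)

w_map = w_grid[np.argmax(log_post_w(w_grid))]
theta_map = theta_grid[np.argmax(log_post_theta(theta_grid))]

print(f"MAP in w-space:                  w* = {w_map:.3f}")
print(f"MAP in theta-space, mapped back: w(theta*) = {theta_map**3:.3f}")
# The two disagree, even though both parameterizations define the same functions.
```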

    Spectrum of the Nuclear Environment for GaAs Spin Qubits

    Using a singlet-triplet spin qubit as a sensitive spectrometer of the GaAs nuclear spin bath, we demonstrate that the spectrum of Overhauser noise agrees with a classical spin diffusion model over six orders of magnitude in frequency, from 1 mHz to 1 kHz, is flat below 10 mHz, and falls as $1/f^2$ for frequency $f \gtrsim 1$ Hz. Increasing the applied magnetic field from 0.1 T to 0.75 T suppresses electron-mediated spin diffusion, which decreases spectral content in the $1/f^2$ region and lowers the saturation frequency, each by an order of magnitude, consistent with a numerical model. Spectral content at megahertz frequencies is accessed using dynamical decoupling, which shows a crossover from the few-pulse regime ($\lesssim 16$ $\pi$-pulses), where transverse Overhauser fluctuations dominate dephasing, to the many-pulse regime ($\gtrsim 32$ $\pi$-pulses), where longitudinal Overhauser fluctuations with a $1/f$ spectrum dominate. Comment: 6 pages, 4 figures, 8 pages of supplementary material, 5 supplementary figures
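    The reported shape of the low-frequency spectrum (flat below a saturation frequency and falling as $1/f^2$ well above it) is the generic shape of a Lorentzian. The sketch below evaluates such a stand-in spectrum and checks the high-frequency slope; the corner frequency and amplitude are illustrative assumptions, not the measured values.

```python
# Illustrative sketch only: a Lorentzian-shaped spectrum is flat below a corner
# frequency f_c and falls as 1/f^2 above it, matching the shape described in
# the abstract. Parameter values are assumptions, not the measured ones.
import numpy as np

def overhauser_like_spectrum(f, S0=1.0, f_c=0.05):
    """S(f) = S0 / (1 + (f / f_c)^2): flat for f << f_c, ~1/f^2 for f >> f_c."""
    return S0 / (1.0 + (f / f_c) ** 2)

f = np.logspace(-3, 3, 7)                 # 1 mHz ... 1 kHz
S = overhauser_like_spectrum(f)

for fi, Si in zip(f, S):
    print(f"f = {fi:8.3f} Hz   S(f) = {Si:.3e}")

# Check the high-frequency slope on a log-log scale: should approach -2.
slope = np.log(S[-1] / S[-2]) / np.log(f[-1] / f[-2])
print(f"log-log slope at high f ~ {slope:.2f}")
```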

    Postmortem Changes in Myoglobin Content in Organs

    Postmortem changes in myoglobin concentrations in blood and organs were investigated by enzyme immunoassay in animal experiments, combined with immunohistochemical staining in human cases. Blood myoglobin concentrations increased drastically within a very short time after death. Concentrations in striated muscle, however, did not change through day 14 postmortem. Myoglobin content in the liver and kidney increased slightly by day 5 postmortem, and more markedly from day 7 onward. However, almost no change was observed through day 5 in the kidney when the renal artery and vein had been ligated just after death. In the thyroid gland and the lung, the myoglobin content increased markedly by day 7 postmortem, with the logarithmic values rising nearly linearly with time after death. In the thyroid gland, concentrations reached the level found in striated muscle. The mechanisms of the postmortem myoglobin increase in organs are thought to be direct diffusion from striated muscle and/or distribution through the blood. For estimating the postmortem interval, determination of the myoglobin content in the thyroid gland or the lung appears to be useful.

    On sequential Bayesian inference for continual learning

    Sequential Bayesian inference can be used for continual learning to prevent catastrophic forgetting of past tasks and to provide an informative prior when learning new tasks. We revisit sequential Bayesian inference and assess whether using the previous task's posterior as a prior for a new task can prevent catastrophic forgetting in Bayesian neural networks. Our first contribution is to perform sequential Bayesian inference using Hamiltonian Monte Carlo: we propagate the posterior as a prior for new tasks by fitting a density estimator to Hamiltonian Monte Carlo samples. We find that this approach fails to prevent catastrophic forgetting, demonstrating the difficulty of performing sequential Bayesian inference in neural networks. We then study simple analytical examples of sequential Bayesian inference and continual learning, and highlight the issue of model misspecification, which can lead to sub-optimal continual learning performance despite exact inference. We also discuss how task data imbalances can cause forgetting. From these limitations, we argue that we need probabilistic models of the continual learning generative process rather than relying on sequential Bayesian inference over Bayesian neural network weights. Our final contribution is to propose a simple baseline called Prototypical Bayesian Continual Learning, which is competitive with the best-performing Bayesian continual learning methods on class-incremental continual learning computer vision benchmarks.
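    The recursion being tested (each task's posterior becomes the next task's prior) can be written down exactly in a conjugate toy model. The sketch below does this for a single Gaussian "weight"; it is a minimal illustration of the recursion only, not the paper's HMC-plus-density-estimator procedure.

```python
# Minimal sketch of sequential Bayesian updating in a setting where it is
# exact: a Gaussian posterior over one scalar weight, with each task's
# posterior reused as the next task's prior. Names and values are illustrative.
import numpy as np

def update(prior_mean, prior_var, data, noise_var=1.0):
    """Conjugate Gaussian update: returns the posterior mean and variance."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / noise_var)
    return post_mean, post_var

rng = np.random.default_rng(0)
mean, var = 0.0, 10.0                       # broad initial prior
tasks = [rng.normal(loc, 1.0, size=20) for loc in (1.0, 1.5, 0.5)]

for t, data in enumerate(tasks, start=1):
    mean, var = update(mean, var, data)     # posterior becomes the next prior
    print(f"after task {t}: posterior mean = {mean:.3f}, variance = {var:.4f}")
```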

    Outcome-Driven Reinforcement Learning via Variational Inference

    While reinforcement learning algorithms provide automated acquisition of optimal policies, practical application of such methods requires a number of design decisions, such as manually designing reward functions that not only define the task, but also provide sufficient shaping to accomplish it. In this paper, we view reinforcement learning as inferring policies that achieve desired outcomes, rather than as a problem of maximizing rewards. To solve this inference problem, we establish a novel variational inference formulation that allows us to derive a well-shaped reward function which can be learned directly from environment interactions. From the corresponding variational objective, we also derive a new probabilistic Bellman backup operator and use it to develop an off-policy algorithm to solve goal-directed tasks. We empirically demonstrate that this method eliminates the need to hand-craft reward functions for a suite of diverse manipulation and locomotion tasks and leads to effective goal-directed behaviors. Comment: Published in Advances in Neural Information Processing Systems 34 (NeurIPS 2021)
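    The general framing of goal-reaching as inference can be illustrated with a reward defined as the log-likelihood of achieving the desired outcome. The sketch below uses a Gaussian outcome likelihood around the goal state; this is an assumed stand-in for the general idea, not the paper's variational objective or probabilistic Bellman operator.

```python
# Hedged sketch of treating goal-reaching as inference: score a transition by
# how likely it is to have achieved the desired outcome, here via a Gaussian
# "outcome likelihood" around the goal state. Names and scales are assumptions.
import numpy as np

def log_outcome_likelihood(next_state, goal, scale=0.5):
    """log p(outcome = goal | next_state), a Gaussian stand-in."""
    return -0.5 * np.sum(((next_state - goal) / scale) ** 2)

def shaped_reward(state, action, next_state, goal):
    # The inferred reward is the log-likelihood of achieving the outcome,
    # which is dense and informative even far from the goal.
    return log_outcome_likelihood(next_state, goal)

# Example: a point moving along a line toward the goal x = 1.0.
goal = np.array([1.0])
for x in (0.0, 0.5, 0.9, 1.0):
    r = shaped_reward(np.array([x - 0.1]), None, np.array([x]), goal)
    print(f"next_state = {x:.1f}  shaped reward = {r:.3f}")
```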