
    Processing of unpredictability in fear learning and memory

    Unpredictability is one of the major drivers of associative learning. While unpredictability in the timing of events can enhance fear memory strength, the neural substrates involved in generating and processing the resulting prediction errors remain largely unknown. We first showed that unpredictability, generated by varying the timing of the aversive event following the predictive cue, greatly enhanced fear memory strength (Chapter 3). The unpredictability-processing neural network in the basal and lateral amygdala (BLA) was then studied using time-lapse microendoscopy to monitor neuronal calcium responses across fear conditioning and recall (Chapter 4). We identified four distinct functional classes of neurons based on their activity patterns during fear conditioning and long-term recall. “Memory Winner” neurons outcompeted “Memory Loser” neurons to encode the fear memories; nonetheless, both classes exhibited learning-related plasticity during fear conditioning. In contrast, Fear Expression neurons did not display learning-related plasticity during fear conditioning but did respond to the tone presentation during auditory fear recall. Introducing temporal unpredictability during fear conditioning increased the percentages of both Memory Winner and Fear Expression neurons and decreased the percentage of Memory Loser neurons. Furthermore (Chapter 5), pharmacological inhibition of the dorsal hippocampus and optogenetic silencing of CA1 revealed the essential involvement of the dorsal hippocampus in processing the negative prediction errors generated by unpredictability in timing. Collectively, our data suggest that processing the temporal unpredictability of aversive events requires dorsal hippocampal activation to handle the negative prediction errors, together with a rearrangement of the BLA neural representation of fear learning and memory. Taken together, these processes underlie the mechanism of unpredictability-enhanced fear memory strength.

    Growth hormone biases amygdala network activation after fear learning

    Prolonged stress exposure is a risk factor for developing posttraumatic stress disorder, a disorder characterized by the ‘over-encoding’ of a traumatic experience. A potential mechanism by which this occurs is through upregulation of growth hormone (GH) in the amygdala. Here we test the hypotheses that GH promotes the over-encoding of fearful memories by increasing the number of neurons activated during memory encoding and by biasing the allocation of neuronal activation, one aspect of the process by which neurons compete to encode memories, to favor neurons that have stronger inputs. Viral overexpression of GH in the amygdala increased the number of amygdala cells activated by fear memory formation. GH-overexpressing cells were especially biased to express the immediate early gene c-Fos after fear conditioning, revealing strong autocrine actions of GH in the amygdala. In addition, we observed dramatically enhanced dendritic spine density in GH-overexpressing neurons. These data elucidate a previously unrecognized autocrine role for GH in the regulation of amygdala neuron function and identify specific mechanisms by which chronic stress, by enhancing GH in the amygdala, may predispose an individual to excessive fear memory formation. (National Institute of Mental Health (U.S.), NIMH R01 MH084966; United States Defense Advanced Research Projects Agency, DARPA grant W911NF-10-1-0059; United States Army Research Office)

    Risk Portfolio Optimization Using the Markowitz MVO Model, Related to Human Limitations in Predicting the Future, from the Perspective of the Al-Qur`an

    Risk portfolio management in modern finance has become increasingly technical, requiring sophisticated mathematical tools in both research and practice. Since companies cannot insure themselves completely against risk, given the human inability to predict the future precisely, as written in Al-Quran surah Luqman verse 34, they have to manage it to yield an optimal portfolio. The objective is to minimize the variance among all portfolios, or alternatively, to maximize the expected return among all portfolios that have at least a certain expected return. This study focuses on optimizing the risk portfolio using Markowitz MVO (Mean-Variance Optimization). The theoretical frameworks for the analysis are the arithmetic mean, geometric mean, variance, covariance, linear programming, and quadratic programming. Finding a minimum-variance portfolio yields a convex quadratic program: minimize the objective function xᵀQx subject to the constraints μᵀx ≥ r and Ax = b. The outcome of this research is the solution of the optimal risk portfolio for some investments, obtained using MATLAB R2007b software together with its graphical analysis.
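    The minimum-variance problem above can be sketched numerically. A minimal sketch in Python (the study itself uses MATLAB R2007b; the covariance matrix Q below is illustrative, not from the study): with only the full-investment equality constraint 1ᵀx = 1, the minimizer of xᵀQx has the closed form x = Q⁻¹1 / (1ᵀQ⁻¹1).

```python
import numpy as np

# Illustrative covariance matrix Q for three hypothetical assets
Q = np.array([[0.10, 0.02, 0.04],
              [0.02, 0.08, 0.01],
              [0.04, 0.01, 0.12]])

ones = np.ones(len(Q))
q_inv_ones = np.linalg.solve(Q, ones)   # Q^{-1} 1, via a linear solve
x = q_inv_ones / (ones @ q_inv_ones)    # minimum-variance weights, sum to 1

portfolio_variance = x @ Q @ x          # objective value x'Qx at the optimum
```

    Adding the expected-return constraint μᵀx ≥ r turns this into a general quadratic program, for which a QP solver (MATLAB's quadprog in the study's setting) is used instead of the closed form.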

    Search for heavy resonances decaying to two Higgs bosons in final states containing four b quarks

    A search is presented for narrow heavy resonances X decaying into pairs of Higgs bosons (H) in proton-proton collisions collected by the CMS experiment at the LHC at √s = 8 TeV. The data correspond to an integrated luminosity of 19.7 fb⁻¹. The search considers HH resonances with masses between 1 and 3 TeV, having final states of two b quark pairs. Each Higgs boson is produced with large momentum, and the hadronization products of the pair of b quarks can usually be reconstructed as single large jets. The background from multijet and tt̄ events is significantly reduced by applying requirements related to the flavor of the jet, its mass, and its substructure. The signal would be identified as a peak on top of the dijet invariant mass spectrum of the remaining background events. No evidence is observed for such a signal. Upper limits obtained at 95% confidence level for the product of the production cross section and branching fraction σ(gg → X) B(X → HH → bb̄bb̄) range from 10 to 1.5 fb for the mass of X from 1.15 to 2.0 TeV, significantly extending previous searches. For a warped extra dimension theory with a mass scale Λ_R = 1 TeV, the data exclude radion scalar masses between 1.15 and 1.55 TeV.
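    The dijet invariant mass used as the search variable follows from the two jet four-momenta as m² = (E₁+E₂)² − |p⃗₁+p⃗₂|². A minimal sketch, with made-up jet four-vectors rather than CMS data:

```python
import math

def invariant_mass(j1, j2):
    """Invariant mass of a two-jet system; jets given as (E, px, py, pz) in GeV."""
    E = j1[0] + j2[0]
    px, py, pz = (j1[i] + j2[i] for i in (1, 2, 3))
    # max(..., 0.0) guards against tiny negative values from rounding
    return math.sqrt(max(E**2 - (px**2 + py**2 + pz**2), 0.0))

# Two back-to-back jets, each with E = 600 GeV and |p| = 590 GeV (illustrative)
m = invariant_mass((600.0, 590.0, 0.0, 0.0), (600.0, -590.0, 0.0, 0.0))
# m -> 1200.0 GeV, in the TeV range probed by the search
```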

    Search for supersymmetry in events with one lepton and multiple jets in proton-proton collisions at √s = 13 TeV

    Peer reviewed

    Measurement of the top quark forward-backward production asymmetry and the anomalous chromoelectric and chromomagnetic moments in pp collisions at √s = 13 TeV

    The parton-level top quark (t) forward-backward asymmetry and the anomalous chromoelectric (d̂t) and chromomagnetic (μ̂t) moments have been measured using LHC pp collisions at a center-of-mass energy of 13 TeV, collected in the CMS detector in a data sample corresponding to an integrated luminosity of 35.9 fb⁻¹. The linearized variable A_FB^(1) is used to approximate the asymmetry. Candidate tt̄ events decaying to a muon or electron and jets in final states with low and high Lorentz boosts are selected and reconstructed using a fit of the kinematic distributions of the decay products to those expected for tt̄ final states. The values found for the parameters are A_FB^(1) = 0.048 +0.095/−0.087 (stat) +0.020/−0.029 (syst) and μ̂t = −0.024 +0.013/−0.009 (stat) +0.016/−0.011 (syst), and a limit is placed on the magnitude |d̂t| < 0.03 at 95% confidence level.
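    A forward-backward asymmetry of this kind is, at its simplest, a normalized difference of event counts, A_FB = (N_F − N_B)/(N_F + N_B). A toy computation (the counts are invented, chosen only so the toy value lands at the scale of the measured central one):

```python
def afb(n_forward, n_backward):
    """Forward-backward asymmetry from forward/backward event counts."""
    return (n_forward - n_backward) / (n_forward + n_backward)

# Invented toy counts, not CMS data
a = afb(5240, 4760)   # (5240 - 4760) / 10000 -> 0.048
```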

    Measurement of the top quark mass using charged particles in pp collisions at √s = 8 TeV

    Peer reviewed

    Measurement of tt̄ normalised multi-differential cross sections in pp collisions at √s = 13 TeV, and simultaneous determination of the strong coupling strength, top quark pole mass, and parton distribution functions

    Peer reviewed

    An embedding technique to determine ττ backgrounds in proton-proton collision data

    An embedding technique is presented to estimate standard model ττ backgrounds from data with minimal simulation input. In the data, the muons are removed from reconstructed μμ events and replaced with simulated τ leptons with the same kinematic properties. In this way, a set of hybrid events is obtained that does not rely on simulation except for the decay of the τ leptons. The challenges in describing the underlying event or the production of associated jets in the simulation are avoided. The technique described in this paper was developed for CMS. Its validation and the inherent uncertainties are also discussed. The demonstration of the performance of the technique is based on a sample of proton-proton collisions collected by CMS in 2017 at √s = 13 TeV corresponding to an integrated luminosity of 41.5 fb⁻¹.
    Peer reviewed

    Measurement of the Splitting Function in pp and PbPb Collisions at √s_NN = 5.02 TeV

    Data from heavy ion collisions suggest that the evolution of a parton shower is modified by interactions with the color charges in the dense partonic medium created in these collisions, but it is not known where in the shower evolution the modifications occur. The momentum ratio of the two leading partons, resolved as subjets, provides information about the parton shower evolution. This substructure observable, known as the splitting function, reflects the process of a parton splitting into two other partons and has been measured for jets with transverse momentum between 140 and 500 GeV, in pp and PbPb collisions at a center-of-mass energy of 5.02 TeV per nucleon pair. In central PbPb collisions, the splitting function indicates a more unbalanced momentum ratio, compared to peripheral PbPb and pp collisions. The measurements are compared to various predictions from event generators and analytical calculations.
    Peer reviewed
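    The momentum ratio described above is commonly written as z = min(pT1, pT2)/(pT1 + pT2) for the two leading subjets, so z runs from 0 (maximally unbalanced) to 0.5 (balanced). A minimal sketch with invented subjet transverse momenta:

```python
def momentum_sharing(pt1, pt2):
    """Splitting-function observable z from two subjet transverse momenta (GeV)."""
    return min(pt1, pt2) / (pt1 + pt2)

# Invented subjet pT values, not measured data
z = momentum_sharing(180.0, 120.0)   # 120 / 300 -> 0.4, a fairly balanced split
```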