6 research outputs found

    Cannabinoids Enhance Subsecond Dopamine Release in the Nucleus Accumbens of Awake Rats

    Dopaminergic neurotransmission has been strongly implicated in the reinforcing properties of many substances of abuse, including marijuana. Cannabinoids activate ventral tegmental area dopaminergic neurons, the origin of the main ascending projections of the mesocorticolimbic dopamine system, and change their spiking pattern by increasing the number of impulses per burst and the frequency of bursts. Although cannabinoids also elevate time-averaged striatal dopamine levels for extended periods, little is known about the temporal structure of this change. To elucidate this, fast-scan cyclic voltammetry was used to monitor extracellular dopamine in the nucleus accumbens of freely moving rats with subsecond resolution. Intravenous administration of the central cannabinoid (C

    Phasic Dopamine Release Evoked by Abused Substances Requires Cannabinoid Receptor Activation

    Transient surges of dopamine in the nucleus accumbens are associated with drug seeking. Using a voltammetric sensor with high temporal and spatial resolution, we demonstrate differences in the temporal profile of dopamine concentration transients caused by acute doses of nicotine, ethanol, and cocaine in the nucleus accumbens shell of freely moving rats. Despite differential release dynamics, all drug effects are uniformly inhibited by administration of rimonabant, a cannabinoid receptor (C

    Crowd computing as a cooperation problem: an evolutionary approach

    Cooperation is one of the socio-economic issues that has received the most attention from the physics community. The problem has mostly been studied through games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on principal-agent theory in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work within an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain (not very restrictive) conditions, the master can ensure the reliability of the answer resulting from the process. We then study the model through numerical simulations, finding that convergence, meaning that the system reaches a point at which it always produces reliable answers, may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory provides no information; the discussion shows that the system works even with very large tolerances. We conclude with a discussion of our results and possible directions for carrying this research further.
    This work is supported by the Cyprus Research Promotion Foundation grant TE/HPO/0609(BE)/05, the National Science Foundation (CCF-0937829, CCF-1114930), Comunidad de Madrid grant S2009TIC-1692 and MODELICO-CM, Spanish MOSAICO, PRODIEVO and RESINEE grants and MICINN grant TEC2011-29688-C02-01, and National Natural Science Foundation of China grant 61020106002.
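    The master-worker dynamics sketched in this abstract (workers who may compute honestly or defect, a master who may audit, and reinforcement-learning updates on both sides) can be illustrated with a short simulation. The sketch below is not the authors' implementation: the payoff values, the learning rule, the acceptance rule, and the tolerance threshold are illustrative assumptions chosen only to show the shape of the dynamics.

```python
import random

# Minimal sketch of a master-worker crowd-computing loop with reinforcement
# learning on both sides. All numeric parameters are illustrative assumptions.
N_WORKERS = 9          # odd, so a majority of answers is always well defined
AUDIT_COST = 0.5       # master's cost of verifying the answer itself
REWARD = 1.0           # payment to a worker whose answer is accepted
COMPUTE_COST = 0.3     # worker's cost of actually computing the task
LEARN_RATE = 0.1
TOLERANCE = 0.4        # fraction of wrong answers the master tolerates

def clamp(p):
    """Keep a probability away from 0 and 1 so no action is ever ruled out."""
    return min(0.99, max(0.01, p))

p_honest = [0.5] * N_WORKERS   # each worker's probability of computing honestly
p_audit = 0.5                  # master's probability of auditing a round

for _ in range(2000):
    honest = [random.random() < p for p in p_honest]
    audited = random.random() < p_audit
    wrong_fraction = 1.0 - sum(honest) / N_WORKERS

    for i, h in enumerate(honest):
        if audited:
            # Auditing reveals the truth: honest workers are paid, cheaters fined.
            payoff = (REWARD - COMPUTE_COST) if h else -REWARD
        else:
            # Without an audit every reported answer is paid for.
            payoff = REWARD - (COMPUTE_COST if h else 0.0)
        # Reinforce the action actually taken, in proportion to its payoff.
        p_honest[i] = clamp(p_honest[i] + LEARN_RATE * payoff * (1 if h else -1))

    # Master audits more often when too many defectors slipped through.
    # (Simplification: here the master reacts to the true defection rate.)
    if wrong_fraction > TOLERANCE:
        p_audit = clamp(p_audit + LEARN_RATE)
    else:
        p_audit = clamp(p_audit - LEARN_RATE)

print(f"mean P(honest) = {sum(p_honest) / N_WORKERS:.2f}, P(audit) = {p_audit:.2f}")
```

    Under these assumed payoffs the worker population drifts toward honesty while the master's audit rate settles near its floor, which mirrors the qualitative behaviour the abstract reports: the system keeps producing reliable answers even when the master tolerates a sizeable fraction of defectors.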