Behavioural and physiological correlates of impulsivity in the domestic dog (Canis familiaris)
Impulsivity is a trait related to inhibitory control that is expressed in a range of behaviours. Impulsive individuals show a decreased ability to tolerate delay of reinforcement, and more impulsive behaviour has been linked to decreased levels of serotonin and dopamine in a number of species. In domestic dogs, impulsivity is implicated in problem behaviours that result from a lack of self-control, but there are currently no published studies assessing behavioural and physiological measures of impulsivity in relation to this trait. Impulsivity scores were calculated for 41 dogs using an owner-report assessment, the Dog Impulsivity Assessment Scale (DIAS). Twenty-three of these subjects completed an operant choice task based on a delayed-reward paradigm to assess their tolerance to delay of reinforcement. High-performance liquid chromatography (HPLC) with fluorometric detection was used to measure levels of the metabolites of serotonin (5-HIAA) and dopamine (HVA) in the urine of 17 of the subjects. Higher impulsivity scores were significantly correlated with more impulsive behaviour (reduced tolerance to delay of reinforcement) in the behaviour tests and with lower levels of urinary 5-HIAA and a lower 5-HIAA/HVA ratio. The results demonstrate convergent validity between impulsivity (as assessed by the DIAS) and behavioural and physiological parameters.
A Delay-Discounting Primer
Given the importance of research findings and the potential of further research to aid in the prediction and control of impulsivity, the primary focus of this chapter (and this book) is on choice and the failure of future events to affect current decisions. In this primer chapter, we consider two types of impulsive choice: (a) preferring a smaller-sooner reward while forgoing a larger-later one and (b) preferring a larger-later aversive outcome over a smaller-sooner one. The first of these is exemplified by the toy-pilfering child with whom we opened this chapter. Taking the toy is immediately rewarded, but it is a short-lived reward because the caretaker soon returns the toy to the victimized peer. Undoubtedly, the child would prefer to play with the toy for a longer period of time, but waiting until the toy is dropped by the peer seems a weak reinforcer when compared with brief access now. To put an economic term on this phenomenon, the child appears to have discounted the value of the delayed but otherwise preferred reward. Delay discounting describes the process of devaluing behavioral outcomes, be they rewarding or aversive events, that happen in the future (and perhaps the past; see chap. 7, this volume). This chapter provides a primer in delay discounting; it is intended for readers who have only a limited background in the procedures, measures, and outcomes of studies examining this form of impulsive choice. Following an overview of the delay-discounting process, its quantification, and its implications for the human condition, emphasis is placed on procedures (and critiques of these procedures). The remainder of the book is concerned with experimental findings, and for the most part, we do not review these here
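The discounting process sketched above is usually quantified with Mazur's hyperbolic form, V = A/(1 + kD), where A is the reward amount, D its delay, and k a free discounting-rate parameter. A minimal Python sketch; the function names and example values below are illustrative, not drawn from the chapter:

```python
def hyperbolic_value(amount, delay, k):
    """Mazur's hyperbolic discounting: V = A / (1 + k*D). A delayed reward's
    subjective value falls steeply at short delays, then flattens."""
    return amount / (1.0 + k * delay)

def prefers_smaller_sooner(ss_amount, ss_delay, ll_amount, ll_delay, k):
    """Impulsive choice of type (a) above: the smaller-sooner reward is chosen
    when its discounted value exceeds that of the larger-later reward."""
    return hyperbolic_value(ss_amount, ss_delay, k) > hyperbolic_value(ll_amount, ll_delay, k)
```

With a steep rate (k = 0.5 per second) an immediate single reward beats three rewards in 20 s, since 3/(1 + 0.5 × 20) ≈ 0.27 < 1; with a shallow rate (k = 0.05) the preference reverses. This is the sense in which k indexes impulsive choice.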
Neuropsychology of reinforcement processes in the rat
This thesis investigated the role played by regions of the prefrontal cortex and ventral striatum in the control of rats’ behaviour by Pavlovian conditioned stimuli, and in their capacity to choose delayed reinforcement.
First, the function of the anterior cingulate cortex (ACC) in simple Pavlovian conditioning tasks was addressed. The ACC is a subdivision of prefrontal cortex that has previously been suggested to be critical for the formation of stimulus–reward associations. It was found that lesions of the ACC did not prevent rats from learning a simple conditioned approach response to a conditioned stimulus (CS) predictive of food reward, or from utilizing that CS as a conditioned reinforcer subsequently. Additionally, these subjects successfully acquired a conditioned freezing response to a CS predicting footshock. However, the same animals were impaired at the acquisition of autoshaped behaviour, an impairment that has been demonstrated previously. An autoshaping deficit was also observed when lesions were made following training. The phenomenon of Pavlovian–instrumental transfer was intact in these subjects. The hypothesis was developed that the ACC is not critical for the formation of stimulus–reward associations per se, but is critical when multiple stimuli must be discriminated on the basis of their differential association with reward. In support of this hypothesis, animals with lesions of the ACC were impaired on a version of the conditioned approach task in which a second, neutral stimulus, perceptually similar to the CS, was added; the lesioned subjects exhibited reduced discrimination.
Second, the role of the nucleus accumbens (Acb) in Pavlovian–instrumental transfer was investigated. The nucleus accumbens core, together with a larger amygdalar–striatal network of which it is a component, has previously been shown to be necessary for the expression of ‘simple’ Pavlovian–instrumental transfer. Rats with lesions of the nucleus accumbens core (AcbC) and shell (AcbSh) were tested on a ‘response-specific’ Pavlovian–instrumental transfer task, in which a Pavlovian CS selectively enhances instrumental responding for the outcome with which the CS was originally paired. AcbC lesions impaired the response specificity of this effect, while AcbSh lesions abolished Pavlovian–instrumental transfer entirely. These results are consistent with some — but not all — previous results in suggesting that the shell provides ‘vigour’ and the core provides ‘direction’ for the potentiation of behaviour by Pavlovian CSs.
Third, an attempt was made to train rats on a task for assessing preference for delayed reinforcement, using the ‘adjusting-delay’ paradigm. It was not immediately apparent that the rats reacted to the contingencies operative in this task, and mathematical analysis of their behaviour was conducted to establish whether their behaviour was sensitive to the delay, and what ‘molar’ features of performance on this task could be explained by delay-independent processes.
Fourth, a different delayed reinforcement choice task was developed, modifying a previously published task in which the subject is repeatedly offered a choice, in discrete trials, of a small reward delivered immediately, and a large reward delivered after a delay, with the delays systematically varied by the experimenter. Rats were trained on versions of this task in which the large, delayed reinforcer was or was not explicitly signalled by a cue present during the delay. The behavioural basis of performance on this task was examined, and d-amphetamine, chlordiazepoxide, and alpha-flupenthixol were administered systemically. It was found that the effects of d-amphetamine depended on whether the delayed reinforcer was signalled or unsignalled, increasing preference for signalled delayed reinforcement at some doses, but decreasing preference for unsignalled delayed reinforcement. These results may resolve contradictions in the literature, and are suggested to reflect the known effect of amphetamine to potentiate responding for conditioned reinforcers.
Fifth, rats that had been trained on this task (with no explicit signals present during the delay) were given lesions of the ACC, AcbC, or medial prefrontal cortex (mPFC). ACC-lesioned rats were no different from sham-operated controls in their ability to choose a large, delayed reinforcer. Lesions of mPFC reduced the tendency of subjects to shift from one lever to the other during the course of a session, but mPFC-lesioned subjects responded normally to removal of the delays, suggesting a loss of stimulus control. However, rats with lesions of the AcbC were severely impaired on this task, preferring the small, immediate reward even though they discriminated between the two reinforcers. Additionally, the effects of intra-Acb amphetamine were assessed using a different version of the delayed reinforcement choice task; amphetamine had slight and inconsistent effects that tended to reduce preference for the delayed reinforcer, and this effect did not depend on whether the delayed reward was signalled or unsignalled. These results suggest that the AcbC contributes significantly to the rat's ability to choose a delayed reward, a finding that has important implications for the understanding of Acb function. It is suggested that dysfunction of the AcbC may be a key element in the pathology of impulsivity.
Supported by a UK Medical Research Council (MRC) research studentship, 1997–2000, and a James Baird award, University of Cambridge School of Clinical Medicine, 1997–2000.
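The adjusting-delay paradigm used in the third study above titrates the delay to the larger reward against the subject's own choices. A minimal sketch of one titration step; the step size and bounds are illustrative assumptions, not the thesis's parameters:

```python
def adjust_delay(delay, chose_delayed, step=1.0, min_delay=0.0):
    """One titration step of an adjusting-delay procedure: choosing the
    delayed (larger) reward lengthens its delay on the next trial; choosing
    the immediate (smaller) reward shortens it. A delay-sensitive subject is
    thereby driven toward its indifference point."""
    if chose_delayed:
        return delay + step
    return max(min_delay, delay - step)
```

If choice is sensitive to delay, the titrated delay stabilizes near the indifference point; if it is not, the delay simply drifts, which is one way delay-independent processes could produce the 'molar' patterns of performance discussed above.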
Impulsivity in rodents with a genetic predisposition for excessive alcohol consumption is associated with a lack of a prospective strategy
Increasing evidence supports the hypothesis that impulsive decision-making is a heritable risk factor for alcohol use disorder (AUD). Clearly identifying a link between impulsivity and AUD risk, however, is complicated by the fact that both AUDs and impulsivity are heterogeneous constructs. Understanding the link between the two requires identifying the underlying cognitive factors that lead to impulsive choices. Rodent models have established that a family history of excessive drinking can lead to the expression of a transgenerational impulsive phenotype, suggesting heritable alterations in the decision-making process. In the present study, we explored the cognitive processes underlying impulsive choice in a validated, selectively bred rodent model of excessive drinking: the alcohol-preferring ("P") rat. Impulsivity was measured via delay discounting (DD), and P rats exhibited an impulsive phenotype compared with their outbred foundation strain, Wistar rats. Steeper discounting in P rats was associated with the lack of a prospective behavioral strategy; such a strategy was observed in Wistar rats and was directly related to DD. To further explore the underlying cognitive factors mediating these observations, a drift diffusion model of DD was constructed. These simulations supported the hypothesis that prospective memory of the delayed reward guided choice decisions, slowed discounting, and optimized the fit of the model to the experimental data. Collectively, these data suggest that a deficit in forming or maintaining a prospective behavioral plan is a critical intermediary to delaying reward and, by extension, may underlie the inability to delay reward in those with increased AUD risk.
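The drift diffusion model invoked above treats each choice as noisy evidence accumulation toward one of two decision boundaries. The following is a minimal single-trial sketch, not the authors' fitted model; the drift, threshold, and noise values, and the mapping of the upper boundary onto the larger-later choice, are illustrative assumptions:

```python
import random

def drift_diffusion_choice(drift, threshold=1.0, noise=0.1, dt=0.01, rng=None):
    """Simulate one choice trial as a noisy accumulator: evidence x drifts
    toward the upper boundary (+threshold, taken here as the larger-later
    choice) or the lower boundary (-threshold, the smaller-sooner choice).
    Returns (choice, decision_time)."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        # Euler step: deterministic drift plus Gaussian diffusion noise.
        x += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return ("larger-later" if x > 0 else "smaller-sooner", t)
```

With a strong positive drift (e.g., robust prospective memory of the delayed reward) nearly every trial terminates at the larger-later boundary; weakening or reversing the drift shifts choices toward smaller-sooner, which is how such a model links a prospective strategy to shallower discounting.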
Probability Discounting of Lewis and Fischer 344 rats: Strain Comparisons at Baseline and Following Acute Administration of d-Amphetamine
Risky choice can be defined as choice for a larger, uncertain reinforcer over a smaller, certain reinforcer when choosing the smaller alternative maximizes reinforcement. Risky choice is studied using various procedures in the animal laboratory; one such procedure is called probability discounting. There are many variables that contribute to risky decision-making, including biological and pharmacological determinants. The present study assessed both of these variables by evaluating dose-response effects of d-amphetamine on risky choice of Lewis (LEW) and Fischer 344 (F344) rats. The probability-discounting procedure included discrete-trials choices between one food pellet delivered 100% of the time and three food pellets delivered following one of varying probabilities. The probability of three food pellets being delivered decreased systematically across blocks within each session. At baseline, risky choice did not differ between LEW and F344. However, choice for LEW became significantly less risky throughout extended training while choice for F344 remained relatively stable over time. d-Amphetamine significantly increased risky choice for both rat strains at low-to-moderate doses (0.1 and 0.3 mg/kg), although it did so at a lower dose for F344 (0.1 and 0.3 mg/kg) than LEW (0.3 mg/kg only), suggesting greater behavioral sensitivity to effects of d-amphetamine for F344. High doses of d-amphetamine (1.0 and 1.8 mg/kg) produced overall disruptions in choice for both strains, indicated by reductions in choice for the larger, uncertain alternative when the probability of delivery was relatively high and increases when the probability was relatively low. Results from the current study stand in contrast to previous reports investigating impulsive choice (i.e., choice involving temporal delays rather than uncertainty) of LEW and F344. 
Thus, the present work underscores the importance of considering risky and impulsive choice as two separate, but related, behavioral processes.
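The definition above can be made concrete: under the task described, the smaller certain option maximizes reinforcement whenever its amount exceeds the expected value of the risky option, and probability discounting is conventionally modelled hyperbolically in odds against, V = A/(1 + hθ) with θ = (1 − p)/p. A minimal sketch; the function names and the h parameter are illustrative, not taken from the study:

```python
def odds_against(p):
    """Convert probability of reinforcement into odds against: theta = (1-p)/p."""
    return (1.0 - p) / p

def discounted_value(amount, p, h):
    """Hyperbolic probability discounting: V = A / (1 + h*theta). A larger h
    means steeper devaluation of uncertain outcomes (less risky choice)."""
    return amount / (1.0 + h * odds_against(p))

def risky_choice_is_suboptimal(certain_amount, risky_amount, p):
    """Per the definition above, choosing the larger, uncertain option is
    'risky' when the smaller, certain option has the higher expected value."""
    return certain_amount > risky_amount * p
```

For the one-pellet-certain versus three-pellets-probabilistic arrangement described above, the certain option maximizes reinforcement whenever p < 1/3, so persistent choice of the uncertain option below that probability is the risky behaviour of interest.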
Changing Nonhuman Impulsive Choice
Preference for smaller-sooner over larger-later rewards characterizes one type of impulsivity: impulsive choice. Impulsive choice is related to a number of maladaptive behaviors, including substance abuse, pathological gambling, and poor health behaviors. As such, interventions designed to reduce impulsive choice may have therapeutic benefits. The purpose of this dissertation was to explore two methods of changing nonhuman impulsive choice. In doing so, we hope to provide a baseline that future research can use to assess variables that are less amenable to human research (e.g., drug self-administration following reductions in impulsive choice). In Chapter 2, we failed to reduce nonhuman impulsive choice using working-memory training, a finding that is both consistent and inconsistent with the extant human literature. Chapters 3-5 sought to better understand a training regimen that generates large between-group differences in nonhuman impulsive choice: delay- and immediacy-exposure training. The results from Chapters 3 and 4 suggest that prolonged exposure to delayed food rewards produces large and long-lasting reductions in impulsive choice. Chapter 5 showed that the delay-exposure training effect can be obtained in fewer sessions than have previously been employed. A better understanding of the effects of delay-exposure training on nonhuman impulsive choice may have implications for the design and implementation of a human analog.
CONTRIBUTION OF NUCLEUS ACCUMBENS CORE TO IMPULSIVE CHOICE: ROLE OF DOPAMINE AND GLUTAMATE SYSTEMS
Impulsive choice refers to the inability to delay gratification and is associated with increased drug abuse vulnerability. Understanding the underlying neural mechanisms linking impulsive choice and drug abuse can contribute to improved treatment options for individuals with substance use disorders. Evidence suggests a major role for nucleus accumbens core (NAcc) in impulsive choice and the reinforcing effects of drugs of abuse. The neurotransmitters glutamate (Glu) and dopamine (DA) are implicated in the neural adaptations observed in drug addiction; however, the role of intra-NAcc Glu and DA in impulsive choice is unclear. Rats were trained in a delay discounting task, in which animals chose between a small, immediate reinforcer and large, delayed reinforcer. Consistently choosing the small, immediate reinforcer was considered to reflect increased impulsivity. Following delay discounting, in vitro receptor autoradiography was performed to quantify the number of N-methyl-D-aspartate (NMDA) receptors and dopamine transporters (DAT) in NAcc and nucleus accumbens shell (NAcSh). In a separate experiment, rats were trained in delay discounting and were implanted with guide cannulae into NAcc. Following surgery, rats received microinfusions of either a) the Glu-selective ligands MK-801 (noncompetitive NMDA receptor channel blocker; 0, 0.3, and 1.0 μg), AP-5 (competitive NMDA receptor antagonist; 0, 0.3, and 1.0 μg), ifenprodil (NMDA NR2B subunit antagonist; 0, 0.3, and 1.0 μg), and CNQX (AMPA receptor antagonist; 0, 0.2, and 0.5 μg) or b) the DA-selective ligands SKF 38393 (D1-like receptor agonist; 0, 0.03, and 0.1 μg), SCH 23390 (D1-like receptor antagonist; 0, 0.3, and 1.0 μg), quinpirole (D2-like receptor agonist; 0, 0.3, and 1.0 μg), and eticlopride (D2-like receptor antagonist; 0, 0.3, and 1.0 μg). In NAcc and NAcSh, NMDA receptor and DAT expression did not differ between high and low impulsive rats. 
Furthermore, intra-NAcc administration of NMDA and DA receptor ligands did not significantly alter impulsive choice. These results suggest that Glu and DA systems within NAcc do not directly mediate impulsive decision-making. Future work is needed to determine the precise role of NAcc in mediating impulsive choice.
Mechanisms of Individual Differences in Impulsive and Risky Choice in Rats
Citation: Kirkpatrick, K., Marshall, A. T., & Smith, A. P. (2015). Mechanisms of Individual Differences in Impulsive and Risky Choice in Rats. Comparative Cognition & Behavior Reviews, 10. Retrieved from http://comparative-cognition-and-behavior-reviews.org/2015/vol10_kirkpatrick_marshall_smith/
Limits of behavioral control by temporally extended response-reinforcer relations
Three experiments were performed to determine the extent to which the behavior of rats can be controlled by response-reinforcer relations that are extended in time. In Experiment 1, bonus pellets delivered at the end of the session were contingent upon a shift in choice responding within the session. Experiment 2 examined control of aggregated responses by a delayed consequence over a much shorter time period than an entire session. The reinforcing efficacy of bonus pellets was assessed using a chained-schedule procedure. The relation between aggregated responses and a delayed reinforcing consequence was assessed several times per session and with shorter delays than in Experiment 1. Experiment 3 used an adjusting-delay procedure to assess whether differential reinforcer magnitudes have a differential effect on choice behavior when the delay between choice and subsequent reinforcement is equal for the two alternatives. The experiment was designed to determine the longest delay at which differential reinforcement is effective. Taken together, these three experiments were designed to determine the extent to which aggregated responses may be controlled by aggregated reinforcers or a single reinforcing event, and the extent to which a single response may be reinforced by its delayed consequence. Experiment 1 failed to produce reliable control of choice responding by the post-session consequence. Experiment 2 established control of responding by the delayed reinforcer, but such control was reliable for all rats only at delays of 40 s and less. Experiment 3 was unsuccessful in establishing discriminated choice performance by the large reinforcer, even at short delays, preventing the determination of the temporal limit of control by differential reinforcer magnitude. Overall, the results of this series of experiments suggest that the operant behavior of rats can be controlled by delayed consequences, but a finite limit to such control exists. 
It seems that reinforcers delayed on the order of several minutes or more are unlikely to control the behavior that produces them. Thus, response-reinforcer contiguity determines whether response-reinforcer correlations can control behavior.