
    Timing and space usage are disrupted by amphetamine in rats maintained on DRL 24-s and DRL 72-s schedules of reinforcement

    RATIONALE: A differential-reinforcement-of-low-rate (DRL) schedule delivers reinforcement only when the interresponse time (IRT) exceeds a fixed time interval, thereby shaping rats to discriminate the timing of their responses. However, little is known about the motor behavior and location of the rats in the chamber during the IRTs that lead to reinforcement. Although amphetamine is known to disrupt DRL timing behavior, the effects of this drug on non-operant motor behavior during DRL performance have not yet been quantified. OBJECTIVE: The purpose of this research was to measure motor behavior (movement trajectories in the horizontal plane and spatial location in the plane) during longer IRTs after either vehicle or amphetamine treatment. METHOD: Experimental chambers were constructed with a force-plate actometer as the floor, and while the rats performed the operant task, their motor behaviors were measured continuously with high temporal and spatial resolution. Separate groups of 8 male Sprague Dawley rats were maintained on either DRL 24-s or DRL 72-s schedules of water reinforcement in 4-hr recording sessions. RESULTS: Analyses of IRT distributions showed that the rats' timing behavior conformed to their respective DRL requirements. In the absence of drug, analysis of motor behavior in pre-reinforcement intervals showed that rats located themselves away from the operandum and exhibited very low levels of movement. Rats exhibited a significant temporal diminution of horizontal movement that reached a minimum 4–8 s before they moved to the operandum to execute operant responses. Amphetamine treatment increased locomotion, abolished the temporal movement gradient, and brought the rats closer to the operandum compared to vehicle treatment. Movement changes induced by amphetamine were accompanied by degraded timing behavior. CONCLUSIONS: Taken together, the data show that DRL training induced rats to locate themselves away from the operandum and to remain nearly motionless during longer IRTs, and that amphetamine treatment interfered with this complex of behavioral features.
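    The DRL contingency described above reduces to a simple rule: a response earns reinforcement only if the time since the previous response meets the criterion. The sketch below is a minimal Python illustration of that rule, not the authors' session software; the function name, the example response times, and the 24-s criterion are assumptions made for demonstration only.

        def drl_reinforced_responses(response_times, criterion_s=24.0):
            """Return the indices of responses meeting a DRL criterion.

            A response is reinforced only when the interresponse time (IRT),
            i.e., the time since the previous response, is at least
            `criterion_s`. The first response of a session has no IRT and is
            never reinforced here.
            """
            reinforced = []
            for i in range(1, len(response_times)):
                irt = response_times[i] - response_times[i - 1]
                if irt >= criterion_s:
                    reinforced.append(i)
            return reinforced

        # With a DRL 24-s criterion, only the last response (IRT = 30 s) earns water.
        print(drl_reinforced_responses([0.0, 10.0, 25.0, 55.0]))  # -> [3]

    Under a DRL 72-s criterion the same response stream would earn nothing, which is why the longer schedule produces the extended pauses whose motor correlates the study measured.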

    Effects of Pramipexole on Impulsive Choice in Male Wistar Rats

    Clinical reports, primarily with Parkinson's patients, note an association between the prescribed use of pramipexole (and other direct-acting dopamine agonist medications) and impulse control disorders, particularly pathological gambling. Two experiments examined the effects of acute pramipexole on rats' impulsive choices, where impulsivity was defined as selecting a smaller-sooner over a larger-later food reward. In Experiment 1, pramipexole (0.1 to 0.3 mg/kg) significantly increased impulsive choices in a condition in which few impulsive choices were made during a stable baseline. In a control condition, in which impulsive choices predominated during baseline, pramipexole did not significantly change the same rats' choices. Experiment 2 explored a wider range of doses (0.01 to 0.3 mg/kg) using a choice procedure in which delays to the larger-later reinforcer increased across trial blocks within each session. At the doses used in Experiment 1, pramipexole shifted choice toward indifference regardless of the operative delay. At the lower doses (0.01 and 0.03 mg/kg), a trend toward more impulsive choice was observed at 0.03 mg/kg. The difference in outcomes across experiments may be due to the more complex discriminations required in Experiment 2, i.e., multiple discriminations between changing delays within each session.
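    The within-session procedure of Experiment 2, in which the delay to the larger-later reward grows across trial blocks, can be outlined in a few lines. The block structure, delay values, reward sizes, and toy decision rule below are illustrative assumptions, not the parameters reported in the article.

        def choice_session(delays_s=(0, 10, 20, 40, 60), trials_per_block=6, tolerated_delay_s=30):
            """Illustrative within-session schedule for an impulsive-choice task.

            Each block offers repeated choices between a smaller-sooner reward
            (1 pellet now) and a larger-later reward (3 pellets after a delay
            that increases across blocks). A toy subject stands in for the rat:
            it takes the larger reward only while the delay is still tolerable.
            Returns (block, delay, choice) tuples.
            """
            records = []
            for block, delay in enumerate(delays_s):
                for _ in range(trials_per_block):
                    choice = "larger_later" if delay <= tolerated_delay_s else "smaller_sooner"
                    records.append((block, delay, choice))
            return records

        impulsive = sum(1 for _, _, c in choice_session() if c == "smaller_sooner")
        print(f"impulsive choices: {impulsive} of 30")  # 12 of 30 for this toy subject

    A drug that shifted choice toward indifference, as reported at the higher doses, would show up in such a record as choices that no longer track the delay value.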

    Effects of White and Infrared Lighting on Apomorphine-Induced Pecking in Pigeons

    This experiment was concerned with the role of the environment in the production and form of apomorphine-induced pecking in pigeons. Earlier literature has suggested that the pecking occurs even when pigeons are placed in complete darkness, but there are no systematic or quantitative reports of such pecking. Six pigeons were tested with doses of 0.1, 0.3, and 1.0 mg/kg apomorphine. Tests were made in conditions of white and infrared light. The apparatus used novel force transduction measures that provided for both the detection of a peck and its peak forcefulness. At the lowest dose tested, apomorphine elicited pecking when the pigeon was placed in white light, but not under infrared lighting. As the dose increased, however, pecking was observed regardless of lighting condition. No consistent differences were found in the forcefulness of pecking as a function of lighting condition or dose. Though response output was seemingly unaffected by the lighting condition at higher doses, videotaped analysis revealed important changes in the formal characteristics of pecking. In white light, apomorphine elicited pecking at stimuli in the chamber (e.g., screw heads or the pigeon's own toes), whereas in infrared light pecking was directed at the floor directly in front of the pigeon. Such differences may be attributable to shifts in control to other stimulus modalities when vision is limited. Additionally, apomorphine may have direct effects on retinal dopamine function that modulate the expression of pecking in the dark.
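    Because pecks and their peak forces were scored from a force transducer rather than a microswitch, the event definition can be illustrated with a simple threshold-crossing routine. The threshold value and the sample trace below are placeholders for illustration; the published apparatus used its own calibration and sampling scheme.

        def detect_pecks(force_trace, threshold=5.0):
            """Extract pecks from a sampled force signal (arbitrary units).

            A peck begins when the force rises to `threshold` or above; its
            peak force is the maximum value reached before the signal falls
            back below the threshold. Returns one peak force per peck.
            """
            peaks = []
            in_peck = False
            peak = 0.0
            for f in force_trace:
                if f >= threshold:
                    in_peck = True
                    peak = max(peak, f)
                elif in_peck:
                    peaks.append(peak)          # peck ended; record its peak force
                    in_peck, peak = False, 0.0
            if in_peck:                         # peck still in progress at end of trace
                peaks.append(peak)
            return peaks

        # Two pecks, with peak forces 12.0 and 7.5
        print(detect_pecks([0, 1, 12.0, 3, 0, 0, 6, 7.5, 2, 0]))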

    Analysis of the Value-Altering Effect of Motivating Operations

    Motivating operations (MOs) may affect behavior in two ways: (a) an MO momentarily alters the frequency of behavior for which a particular consequence has served as reinforcement (the evocative effect), and (b) an MO momentarily alters the behavioral effects of the relevant consequence (the value-altering effect). Many studies have empirically demonstrated the evocative function of MOs; however, few if any have attempted to systematically manipulate and measure the value-altering effect. The focus of this study was to investigate the value-altering effect by measuring choice and response allocation across two alternative tasks. Participants were two girls diagnosed with autism. During conditioning sessions, experimenters created a history for the children in which clicking on a moving square on a computer monitor produced a small edible item. Prior to some conditions, the participants were allowed 5 min of free access to the edibles; in other sessions, access to the edibles before the session was restricted. During these sessions, the square was either red or blue depending on the condition type (pre-access or restricted access). During probe sessions, both colored squares were concurrently available and participants were allowed to allocate their responding to whichever square they chose. One participant preferred the square associated with restricted access, which may support the notion of the value-altering effect. Difficulties during conditioning sessions prevented sufficient probes with the other participant to evaluate a value-altering effect. Results suggest that these procedures may be useful for differentiating the evocative and function-altering effects of MOs.
    In Keller and Schoenfeld's Principles of Psychology (1950), the authors refer to "drives" and "establishing operations" (EOs) as precursors to behavior. For the next three decades the concepts of drive and motivation received little attention in the behavior-analytic literature until Michael addressed them in the early 1980s. Michael (2000) reported that his 1982 and 1993 papers on establishing operations were attempts to make motivational concepts a more important part of current behavior-analytic theory. Despite the small number of articles specifically referring to MOs during this period, their applied relevance was becoming more apparent. Behavior analysts have identified two general types of antecedent influences on operant behavior. One class of variables that can alter the probability of behavior is discriminative in nature: these stimuli or conditions affect operant behavior because of historical correlations between the presence or absence of the stimulus when a reinforcing or punishing event occurs. Another class of variables is related to an organism's motivation with respect to specific environmental stimuli. This class of variables is known as motivating operations (MOs). [Footnote 1: Here the author introduces the term "motivating operation," or "MO," which is the currently accepted umbrella term for motivating variables. As discussed later in the manuscript, the change in terminology was suggested by Laraway, Snycerski, Michael, and Poling (2003) because the earlier terminology was confusing for a number of reasons. Prior to their article, behavior analysts used the term establishing operation (EO) to refer to MOs (whether evocative or abative). This author will use the term MO (unless quoting directly) to refer to the broad category of motivating operations, and EO or AO to refer to specific variables.] An MO is defined as a stimulus or condition that affects an organism in two ways ... suggesting that much of applied psychology is concerned with getting people to do things that they know how to do but don't want to do. Some authors have suggested that the complexity of antecedent influences on behavior requires new ways of conceptualizing these variables, and that operant principles of discrimination and motivation simply cannot adequately account for these influences (e.g., Kantor, 1970; Wahler & Fox, 1981). [Footnote 2: The term "satiation" is used in other scientific fields to describe hormonal changes (e.g., a rise in blood glucose levels, distension of the belly, and more) due to ingestion, and it can be said to produce behavioral changes as well. "Habituation" may refer to a change in behavior due to repeated exposure to a specific stimulus, such that over time an organism may not react to the stimulus as it did initially. In any given example it is difficult to say which of these is responsible for a behavioral change; it may be beneficial to refer to both as agents of change, and more inquiry into the extent to which each plays a role in behavioral change would be beneficial.] Vollmer and Iwata (1991) directly investigated the effects of deprivation and satiation operations on the behavior of five adult males with developmental delays.
    Access to small edibles, music, and social praise was manipulated in order to examine the effects of presession access versus no access on the performance of arbitrary tasks (stacking blocks or closing switches) when the stimuli were presented as consequences. Results showed that "…response rates during reinforcement conditions varied as a function of relative deprivation versus satiation" across subjects (p. 289), with periods of access associated with lower rates of responding than when access was withheld. The authors suggested that failures of reinforcement may sometimes be a function of motivational variables, and that therapy providers should carefully consider the scheduling and timing of potentially reinforcing events throughout an individual's day. Said another way, when a positive reinforcement procedure fails, it may not be because the procedure was performed incorrectly or because the consequence would not otherwise function as a reinforcer; rather, the consequence may not have been serving as reinforcement at the moment it was delivered because of motivationally relevant variables, such as recent contact with the stimulus. For example, the delivery of snack items as reinforcement for appropriate behavior directly after a mealtime may not be as effective as if the same items were contingently delivered before a meal or a few hours after a meal. Results of this and other investigations of deprivation operations suggest that these effects can be produced through relatively minor adjustments to clients' daily schedules, and that such manipulations might represent easy, efficient, and nonintrusive ways to maximize reinforcement effects. McGill Sy and Borrero (2009) evaluated the effects of pre-session access to reinforcing stimuli on rates of problem behavior. Specifically, access to small, medium, or large amounts of edible and non-edible reinforcers was provided immediately before sessions in which identical reinforcers were contingent on problem behavior. Results showed that pre-session access to edibles produced different results across individuals, reducing responding for some but increasing responding for others. In addition, results varied across parameters of access, suggesting that "…duration may influence the reinforcing efficacy of some stimuli" (p. 836). The outcomes of this study suggest that prior access to stimuli that are subsequently presented contingent on behavior may produce both sensitization and habituation, that these effects may differ across participants, and that they may be mediated by the extent of access. As such, assessments that track both the occurrence and the effects of outside access to reinforcers may assist in effective programming. [Footnote 3: "Discriminated" in this context refers to the child choosing the correct picture to exchange. In other words, the children could discriminate between the pictures that represented the items they were deprived of (and were actually seeking) and other pictures.] O'Reilly and colleagues (2009) examined the effects of three different types of pre-session conditions on problem behavior. Analogue FAs identified delivery of tangible items as the variable maintaining problem behavior in two students with autism. Next, the students were exposed to (a) brief-access, (b) no-access, and (c) satiation conditions with the tangible item prior to a tangible-condition session. Lower levels of problem behavior were emitted following the satiation condition than following either the brief-access or no-access conditions.
    The findings were discussed in terms of how best to define satiation in similar types of evaluations. Behavioral indicators of satiation were noted as one important measure, and latency to first response might be another indicator of interest. Michael has been influential in shaping the ongoing discussion about motivation in behavior analysis. From 1982 to 1999, the cumulative number of citations to ... In practice, MOs may be manipulated to achieve at least two effects. As noted by Michael ... According to Michael's definitions, behavior that produces a reinforcing consequence while a strong EO is in effect should be more effectively, or strongly, reinforced than when behavior produces the same consequence under conditions in which motivation is weak. That is, behavior reinforced during strong EOs should be more likely to occur later, under similar conditions, than behavior reinforced during weak EOs. Thus, to demonstrate this effect, it would be necessary to show that a history of earning reinforcement while an EO was in effect increases the future occurrence of a response relative to a history of reinforcement during which the EO was not in effect. One potential tactic for evaluating the value-altering effect might take the form of a choice test. Given the option to choose between two stimuli that have been paired with different histories of access to a reinforcer prior to sessions in which the reinforcer was presented contingent on identical tasks, an MO account would predict that participants would choose the stimulus that was paired with deprivation from the reinforcer. In the current experiment, food was used to reinforce an arbitrary response (clicking on a moving square on a computer screen) under two conditions: one in which the participant did not have access to the particular food for at least 24 hr (zero access) and one in which a substantial amount of the food was freely available immediately prior to sessions (free access). Two previously neutral stimuli (the colors of the moving squares on the computer screen) were paired with the two condition types, which varied from one another only according to pre-session access (i.e., sessions were controlled so that the tasks were identical, the same number of responses were reinforced, and the same number of reinforcers were earned across access versus no-access conditions). Following three yoked pairs of sessions, participants were permitted to choose which color of square (the color associated with restricted access or the color associated with pre-session access) to view and/or click on during probe sessions. If deprivation from the food functioned to increase its reinforcing effectiveness, one would expect the stimulus paired with restricted access to be chosen more frequently and perhaps also to support more durable behavior (i.e., resistance to extinction). Results showing responding in the opposite direction might raise interesting questions regarding the effects of satiation and deprivation procedures, as well as about the putative function-altering characteristic of MOs. In other words, in order to test the value of some stimulus, it is advisable to offer the participants a choice between the stimuli; if one stimulus is chosen more often or attracts more responses than the other, this indicates that the participant preferred it.
    The current study is set apart from previous work because it attempts to specifically evaluate the value-altering function of MOs by analyzing the choice behavior of participants after a conditioning history has been built in which MOs were manipulated. This procedure allows analysis of the lasting effects of manipulating prior access to edibles on the value of the squares associated with differing availabilities of edibles. [Footnote 4: The number of reinforcers earned across conditioning sessions was not always held constant for all participants; a more in-depth discussion of this follows in the Method, Results, and Discussion sections.]
    CHAPTER 2: METHOD
    Participants and Setting
    Two females who attended a school for children with developmental disabilities served as participants for this study. Possible participants were identified by the school's director and various teachers. Participants were recruited by flyers sent home in the children's daily take-home folders. To be eligible for participation in this study, a student must have had (a) a guardian who gave permission to participate in the study, (b) an extensive history of using a computer with a mouse, and (c) a reported preference for some snack items. The participants had demonstrated the ability to work independently on a variety of academic assignments using a computer, mouse, and keyboard with minimal assistance. Ellen was a 13-year-old female who attended a classroom with a 6.5:1 student/teacher ratio. She was diagnosed with autism by an independent medical practitioner and had an intelligence quotient (IQ) of 71 according to the Wechsler Intelligence Scale for Children, fourth edition (WISC-IV). She functioned on a 6.9 grade level according to the Woodcock-Johnson III Test of Achievement. Beatrix was a 6-year-old female who attended a different classroom with a 5:1 student/teacher ratio. She was diagnosed with autism by an independent practitioner and had an IQ of 108 according to the WISC-IV. She functioned on a 1.9 grade level according to the Woodcock-Johnson III, and her age equivalent according to the same test was 6.83. She could follow multi-step instructions, answer complex questions, engage in age-typical conversations, and ask questions if she did not understand instructions or needed assistance. All experimental sessions were conducted in a room (approximately 1.83 m x 3.05 m) containing two desks, four chairs, a bookshelf, and the study apparatus. Sessions were conducted three to five days a week. Because the sessions involved consumption of edible items, sessions were conducted only once daily for all participants.
    Apparatus and Task
    The apparatus consisted of a cubicle constructed from hardy board nailed together in an "H" shape, providing a participant side and an experimenter side. A 6.35-cm diameter PVC tube protruded through the front of the apparatus, and a 35.56 cm x 30.48 cm color computer monitor was mounted above the tube on the participant side. Cables and control wires passed out of sight of the participant to the experimenter side via a hole behind the monitor. Starting, stopping, and data logging of the experimental session were accomplished via a keyboard on the experimenter side of the cubicle. Images including exact measurements are shown in ... (Madden & Perone, 1999). The target response was mouse clicks on the square.
    For both the "red" and "blue" conditioning sessions, every five consecutive clicks on the operative square resulted in the delivery of an edible (see below) by the experimenter. Clicks off the square reset the counter. Sessions stopped automatically after ten minutes, or when the experimenter pressed the "S" key in certain circumstances (see below). Probe trials were intermingled with training sessions. Probe trials were similar to conditioning sessions in appearance except that both colored tabs at the top of the screen were concurrently operative, and the participant could choose between the "red" and "blue" conditions. The schedule requirement remained the same during probe trials. In effect, probe trials were Findley concurrent arrangements. Changing between components did not affect the current count of clicks; as before, only clicks off the square reset the response counter. All other parameters were likewise the same as in training sessions. For both components, each time a square was clicked five times in a row without error, a brief chime sounded.
    Measurement and Interobserver Agreement
    Pre-Conditioning Assessments
    During the color preference assessment, a color selection response was defined as touching, pointing, or otherwise making contact with the hand to a single colored square. During the edible preference assessment, a selection response was defined as the participant making physical contact of the hand with the edible on the plate. During the reinforcer assessment, a response was defined as grasping a colored ball with one hand and removing it from the crate in which it was housed to deposit it into the other crate. A response was not tallied unless the child let go of the ball with all fingers upon releasing it into the other crate. Data were collected independently by trained observers who circled colors, scored tally marks, or wrote the appropriate number using pencil and paper. Interobserver agreement (IOA) data were scored for each participant during the edible preference assessment and the reinforcer assessment. No IOA data were collected for either participant during the color preference assessment. IOA was measured for the edible preference assessment by calculating the point-by-point agreement of each edible selection per round. So if observers 1 and 2 disagreed about whether Ellen chose Goldfish crackers first, this would count as one disagreement; if they both agreed that Cheetos were selected third, this would count as one agreement. Agreements were divided by agreements plus disagreements, and that number was multiplied by 100. For Ellen, IOA was 95% during the edible preference assessment. For Beatrix, IOA was 100% during the edible preference assessment. IOA was measured for the reinforcer assessment by dividing the smaller number of responses by the larger number of responses and multiplying by 100. For Ellen, IOA was 100% during the reinforcer assessment. For Beatrix, IOA was 95% during the first reinforcer assessment and 100% during the second reinforcer assessment.
    Conditioning Sessions and Probes
    During conditioning and probe sessions, target responses were clicks made with a standard mouse on a PC. Target clicks were defined as clicks that occurred when the arrow was located on a box that moved continuously and randomly across the computer screen. Off-target clicks were defined as clicks on any other portion of the computer screen. Data were collected automatically and saved by the computer during all sessions.
    The computer program recorded all responses in real time, and data were summarized for each session as frequencies of target and off-target responses (conditioning and probes), the number of minutes spent with each square (conditioning and probes), and the number of switch-overs between stimulus conditions (probes).
    Procedure
    Color Preference/Side Bias Assessment
    Prior to conditioning sessions, several pre-conditioning assessments were conducted. A paired-choice assessment was conducted to determine whether preferences existed among five differently colored squares and whether participants exhibited a left or right side bias. Although squares of only two colors were used in the experiment, additional colors were evaluated (1) in case any pre-experimental preferences for red or blue were observed ...
    Edible Preference Assessment
    The experimenter asked the children's parents to list some of the child's prefe…
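    The two interobserver agreement calculations described above (point-by-point agreement and total-count agreement) reduce to a few lines of arithmetic. The sketch below is a generic illustration with made-up observer records, not the study's data.

        def point_by_point_ioa(obs1, obs2):
            """Point-by-point IOA: agreements / (agreements + disagreements) x 100."""
            agreements = sum(1 for a, b in zip(obs1, obs2) if a == b)
            return 100.0 * agreements / len(obs1)

        def total_count_ioa(count1, count2):
            """Total-count IOA: smaller count divided by larger count, x 100."""
            return 100.0 * min(count1, count2) / max(count1, count2)

        # Hypothetical records: selection order scored by two observers, and response totals
        print(point_by_point_ioa(["goldfish", "cheetos", "pretzel"],
                                 ["goldfish", "cheetos", "raisin"]))   # about 66.7
        print(total_count_ioa(19, 20))                                  # 95.0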

    An Inexpensive Infrared Detector to Verify The Delivery of Food Pellets

    The reproducibility of experimental outcomes depends on consistent control of independent variables. In food-maintained operant performance, it is of utmost importance that the quantity of food delivered is reliable. To that end, some commercial food pellet dispensers have add-on attachments to sense the delivery of pellets. Not all companies, however, offer such add-ons. Aside from availability, cost and a temporary reduction in throughput may be a problem for smaller labs. The present paper outlines our recent development of a simple, inexpensive infrared device to detect and confirm the delivery of pellets. The in-line construction of the detector routes the falling pellet through a barrel so that it passes between an infrared emitter and receiver. The circuitry was designed to be compatible with all commercially available behavioral measurement systems, and so may be retrofitted to any existing system. Our tests with the detector so far have shown that it is 100% accurate in detecting pellet delivery. The individual unit cost is approximately 25 dollars. The low cost and versatility of the device offer an easy way to ensure the integrity of food delivery in operant settings.
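    On the software side, a beam-break detector of this kind can be polled with very little code. The sketch below is a hypothetical illustration, not the circuit or firmware described in the paper: `read_beam` stands in for whatever digital input the measurement system exposes (True while the infrared beam is unbroken), and the timeout value is arbitrary.

        import time

        def confirm_pellet(read_beam, timeout_s=1.0, poll_s=0.001):
            """Confirm a pellet delivery by watching for an infrared beam break.

            `read_beam` is a callable returning True while the beam is intact.
            After the dispenser is triggered, a falling pellet should break the
            beam within `timeout_s`; otherwise the delivery is flagged as missed.
            """
            deadline = time.monotonic() + timeout_s
            while time.monotonic() < deadline:
                if not read_beam():        # beam interrupted: pellet passed through
                    return True
                time.sleep(poll_s)
            return False                   # no break detected: log a missed delivery

        # Example with a fake sensor whose beam "breaks" 50 ms after the trigger
        start = time.monotonic()
        fake_sensor = lambda: time.monotonic() - start < 0.05
        print(confirm_pellet(fake_sensor))  # True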