
    Computational models of ontology evolution in legal reasoning

    This thesis analyses the problem of creating computational models of ontology evolution in legal reasoning. Ontology evolution is the process of change that a theory undergoes as it is used by agents within a domain. In the legal domain these theories are the laws that define acceptable behaviours and the meta-legal theories that govern the application of those laws. We survey the background subjects required to understand the problem and the relevant literature within AI and Law. We argue that context and commonsense are necessary features of a model of ontology evolution in legal reasoning, and we propose a model of legal reasoning based upon creating a discourse context. We conclude by arguing that there is a distinction between prescriptive and descriptive models of ontology evolution: a prescriptive model is a social and philosophical problem rather than a technical one, whereas a descriptive model is an AI-complete problem.

    Inhibiting function of reinforcement: magnitude effects on variable-interval schedules

    In two experiments, the performance of rats under constant-probability and arithmetic variable-interval schedules, respectively, was compared when the concentration of a liquid reinforcer was varied within sessions; in other sessions, half of the reinforcers were randomly omitted. When the discriminative function of the reinforcer as a signal for a decrease in the probability of reinforcement was attenuated (the constant-probability schedule), the postreinforcement pause duration was nevertheless an increasing function of reinforcer magnitude. This relationship was also present, and more marked, when the temporal discriminative function of the reinforcer was enhanced (the arithmetic schedule). These results suggested that reinforcement has an unconditioned suppressive effect on the reinforced response, distinct from any discriminative function it may acquire. The reinforcement-omission effect, in which response rate accelerates following omission, was observed when the reinforcer functioned as an effective temporal discriminative stimulus, but not when such temporal control was absent.
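
    As a rough illustration of the distinction drawn here, the sketch below (Python; all parameter values and function names are illustrative assumptions, not taken from the study) approximates a constant-probability VI schedule with exponentially distributed intervals, so that the momentary likelihood that reinforcement is set up stays roughly constant, and an arithmetic VI schedule with an evenly spaced series of intervals, which preserves the reinforcer's value as a temporal cue.

```python
import random

def constant_probability_intervals(mean_s, n, seed=None):
    """Approximate a constant-probability VI schedule: with exponentially
    distributed intervals, the momentary probability that reinforcement
    becomes available stays roughly constant after each reinforcer."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / mean_s) for _ in range(n)]

def arithmetic_intervals(mean_s, n):
    """Arithmetic VI schedule: an evenly spaced series of intervals whose
    mean equals the schedule value, so time since reinforcement remains an
    informative cue about when the next reinforcer can occur."""
    step = 2.0 * mean_s / n          # spacing chosen so the series averages mean_s
    return [step * (i + 0.5) for i in range(n)]

if __name__ == "__main__":
    vi = 60.0                         # illustrative VI 60-s schedule value
    cp = constant_probability_intervals(vi, 12, seed=1)
    ar = arithmetic_intervals(vi, 12)
    print("constant-probability mean:", round(sum(cp) / len(cp), 1))
    print("arithmetic mean:          ", round(sum(ar) / len(ar), 1))
```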

    Pausing under variable-ratio schedules: Interaction of reinforcer magnitude, variable-ratio size, and lowest ratio

    Pigeons pecked a key under two-component multiple variable-ratio schedules that offered 8-s or 2-s access to grain. Postreinforcement pausing and the rates of responding following the pause (run rates) in each component were measured as a function of variable-ratio size and the size of the lowest ratio in the configuration of ratios making up each schedule. In one group of subjects, variable-ratio size was varied while the size of the lowest ratio was held constant. In a second group, the size of the lowest ratio was varied while variable-ratio size was held constant. For all subjects, the mean duration of postreinforcement pausing increased in the 2-s component but not in the 8-s component. Postreinforcement pauses increased with increases in variable-ratio size (Group 1) and with increases in the lowest ratio (Group 2). In both groups, run rates were slightly higher in the 8-s component than in the 2-s component. Run rates decreased slightly as variable-ratio size increased, but were unaffected by increases in the size of the lowest ratio. These results suggest that variable-ratio size, the size of the lowest ratio, and reinforcer magnitude interact to determine the duration of postreinforcement pauses.
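
    The sketch below is a hypothetical illustration of a variable-ratio configuration in which the mean ratio and the lowest ratio can be manipulated independently, as in the two groups described above. The way the remaining ratios are spaced, the number of values, and the function names are assumptions for demonstration, not the progressions used in the study.

```python
import random

def make_vr_config(mean_ratio, lowest_ratio, n=10):
    """Illustrative VR configuration: the lowest ratio is fixed explicitly and
    the other n-1 requirements are spaced evenly above it so that the whole
    list averages mean_ratio (a hypothetical construction for demonstration)."""
    rest_mean = (mean_ratio * n - lowest_ratio) / (n - 1)
    half_span = rest_mean - lowest_ratio            # keeps every value >= lowest_ratio
    rest = [rest_mean + half_span * (2 * k / (n - 2) - 1) for k in range(n - 1)]
    return [float(lowest_ratio)] + rest

def next_requirement(config, rng=random):
    """Each reinforcer's response requirement is sampled from the configuration,
    which is what makes the schedule variable-ratio."""
    return rng.choice(config)

if __name__ == "__main__":
    cfg = make_vr_config(mean_ratio=40, lowest_ratio=7)
    print("requirements:", [round(r) for r in cfg])
    print("mean of configuration:", round(sum(cfg) / len(cfg), 2))   # ~40
    print("sampled requirement:", round(next_requirement(cfg)))
```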

    Determinants of pausing under variable-ratio schedules: Reinforcer magnitude, ratio size, and schedule configuration

    Pigeons pecked a key under two-component multiple variable-ratio schedules that offered 8-s or 2-s access to grain. Phase 1 assessed the effects of differences in reinforcer magnitude on postreinforcement pausing as a function of ratio size. In Phase 2, postreinforcement pausing and the first five interresponse times (IRTs) in each ratio were measured as a function of differences in reinforcer magnitude under equal variable-ratio schedules consisting of different configurations of individual ratios. Rates were also calculated exclusive of postreinforcement pause times in both phases. The results of Phase 1 showed that as ratio size increased, the differences in pausing educed by unequal reinforcer magnitudes also increased. The results of Phase 2 showed that the effects of reinforcer magnitude on pausing and IRT durations were a function of schedule configuration. Under one configuration, in which the smallest ratio was a fixed-ratio 1, pauses were unaffected by magnitude but the first five IRTs were affected. Under the other configuration, in which the smallest ratio was a fixed-ratio 7, pauses were affected by reinforcer magnitude but the first five IRTs were not. The effect of each configuration seemed to be determined by the value of the smallest individual ratio. Rates calculated exclusive of postreinforcement pause times were, in general, directly related to reinforcer magnitude, and this relation was shown to be a function of schedule configuration.
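
    The following sketch shows one way the measures named here could be computed from a hypothetical event record: the postreinforcement pause, the first five IRTs of a ratio, and a response rate calculated exclusive of the pause. The timestamps and function names are illustrative assumptions, not data or code from the study.

```python
def pause_and_irts(reinforcer_time, response_times, n_irts=5):
    """Return the postreinforcement pause (time from reinforcer delivery to
    the first response) and the first n_irts interresponse times (gaps
    between successive responses) for one ratio."""
    pause = response_times[0] - reinforcer_time
    irts = [t1 - t0 for t0, t1 in zip(response_times, response_times[1:])][:n_irts]
    return pause, irts

def run_rate(response_times, next_reinforcer_time):
    """Rate exclusive of the postreinforcement pause: responses emitted
    divided by the time from the first response to the next reinforcer."""
    return len(response_times) / (next_reinforcer_time - response_times[0])

if __name__ == "__main__":
    # Hypothetical event record (seconds): reinforcer at t=0, pecks follow.
    sr_time = 0.0
    pecks = [4.2, 4.9, 5.3, 5.8, 6.4, 7.0, 7.7]
    next_sr = 8.0
    pause, irts = pause_and_irts(sr_time, pecks)
    print("postreinforcement pause (s):", pause)                 # 4.2
    print("first five IRTs (s):", [round(x, 1) for x in irts])   # [0.7, 0.4, 0.5, 0.6, 0.6]
    print("run rate (resp/s):", round(run_rate(pecks, next_sr), 2))
```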

    Aftereffects of reinforcement on variable-ratio schedules

    On each of variable-ratio 10, 40, and 80 schedules of reinforcement, once rats' lever-pressing rates were stable, the concentration of a liquid reinforcer was varied within sessions. The duration of the postreinforcement pause was an increasing function of reinforcer concentration, and this effect was more marked the higher the schedule parameter. The running rate, calculated by excluding the postreinforcement pause, was unaffected by concentration. The duration of the postreinforcement pause increased with the schedule parameter, but the proportion of the interreinforcement interval taken up by the pause decreased. Consequently, the overall response rate was an increasing function of the schedule parameter; i.e., it was inversely related to reinforcement frequency, contrary to the law of effect. The running rate, however, decreased as reinforcement frequency decreased, in accord with the law of effect. When 50% of reinforcements were randomly omitted, the postomission pause was shorter than the postreinforcement pause, but the running rate of responses was not affected.
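
    To make the two rate measures concrete, the sketch below computes the overall rate (pause included) and the running rate (pause excluded) from hypothetical per-schedule summaries chosen only to mirror the qualitative pattern reported here: pauses lengthen with the schedule parameter yet occupy a smaller proportion of the interreinforcement interval, so the overall rate rises while the running rate falls. The numbers are not data from the study.

```python
def overall_rate(responses, interval_s):
    """Overall response rate: all responses divided by the whole
    interreinforcement interval, postreinforcement pause included."""
    return responses / interval_s

def running_rate(responses, interval_s, pause_s):
    """Running rate: the same responses divided by the interval with the
    postreinforcement pause excluded."""
    return responses / (interval_s - pause_s)

if __name__ == "__main__":
    # Hypothetical per-schedule summaries: (responses, interreinforcement
    # interval in s, postreinforcement pause in s), illustrative values only.
    summaries = {"VR 10": (10, 10.0, 5.0),
                 "VR 40": (40, 29.0, 7.0),
                 "VR 80": (80, 56.0, 8.0)}
    for label, (n, interval, pause) in summaries.items():
        print(label,
              "pause proportion:", round(pause / interval, 2),
              "overall rate:", round(overall_rate(n, interval), 2),
              "running rate:", round(running_rate(n, interval, pause), 2))
```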