
    Departure from the onset-onset rule

    The generality of Turvey's (1973) onset-onset rule was tested in four experiments using a signal-detection task. After seeing, in succession, (1) one or two letters (target display), (2) a multiletter detection display, and (3) a mask display, subjects decided whether or not the letter or letters in the target display had reappeared in the detection display, at several levels of detection-display duration and under various conditions. The subjects' sensitivity was inconsistent with the onset-onset rule. More specifically, sensitivity increased with increases in display duration within a fixed stimulus onset asynchrony of 150 msec. Display duration, however, had no effect on response bias, nor was there any interaction between display duration and display size in terms of either sensitivity or response bias. The more complicated relationship between display duration and display size does not invalidate the departure from the onset-onset rule.
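
    The sensitivity and response-bias measures referred to above are presumably the standard signal-detection indices, d' and the criterion c. The following is a minimal sketch, assuming those standard indices and using illustrative hit and false-alarm rates rather than values from the study:

        # Minimal sketch of the standard signal-detection indices, assuming
        #   d' = z(hit rate) - z(false-alarm rate)
        #   c  = -0.5 * (z(hit rate) + z(false-alarm rate))
        # The rates below are made up for illustration, not data from the experiments.
        from statistics import NormalDist

        def sdt_indices(hit_rate: float, false_alarm_rate: float) -> tuple[float, float]:
            """Return (d_prime, criterion_c) for one detection condition."""
            z = NormalDist().inv_cdf  # inverse of the standard normal CDF
            d_prime = z(hit_rate) - z(false_alarm_rate)
            criterion_c = -0.5 * (z(hit_rate) + z(false_alarm_rate))
            return d_prime, criterion_c

        # Example: a longer display duration raising the hit rate while bias stays put.
        print(sdt_indices(0.80, 0.20))  # d' about 1.68, c = 0.0
        print(sdt_indices(0.90, 0.20))  # d' about 2.12 (higher sensitivity)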

    Iconic memory, location information, and partial report

    It has been suggested that the systematic decline of partial report as the delay of the partial-report cue increases is due to a time-related loss of location information. Moreover, the backward masking effect is said to be precipitated by the disruption of location information before and after identification. Results from three experiments do not support these claims when new indices of location information and of item information are used. Instead, it was found that (a) the systematic decline in partial report was due to a time-related loss of item information, and (b) location information was affected neither by the delay of the partial-report cue nor by the delay of backward masking. Subjects adopted the "select-then-identify" mode of processing.

    Iconic store and partial report

    The iconic store has recently been challenged on the grounds that data in its favor may have resulted from some procedural artifacts. The display-instruction compatibility and perceptual grouping hypotheses were reexamined in two experiments with the partial-report paradigm. When care was taken to rectify some procedural problems found in Merikle's (1980) study, it was established that the iconic store (as a hypothetical mechanism) can still be validly entertained. This report demonstrates one important procedural point in studying the iconic store with the partial-report task, namely, that subjects must be given more than token training on the partial-report task.

    Methods in Psychological Research

    Psychologists collect empirical data with various methods for different reasons. These diverse methods have their strengths as well as weaknesses. Nonetheless, it is possible to rank them in terms of different criteria. For example, the experimental method is used to obtain the least ambiguous conclusion. Hence, it is the method best suited to corroborating conceptual, explanatory hypotheses. The interview method, on the other hand, gives research participants a kind of empathic experience that may be important to them. It is for this reason the best method to use in a clinical setting. All non-experimental methods owe their origin to the interview method. Quasi-experiments are suited for answering practical questions when ecological validity is important.

    Experimentation in Psychology--Rationale, Concepts and Issues

    An experiment is made up of two or more data-collection conditions that are identical in all aspects but one. It owes its design to an inductive principle and its hypothesis to deductive logic. It is best suited for corroborating explanatory theories, ascertaining functional relationships, or assessing the substantive effectiveness of a manipulation. Also discussed are (a) the three meanings of 'control,' (b) the issue of ecological validity, (c) the distinction between theory-corroboration and agricultural-model experiments, and (d) the distinction among the hypotheses at four levels of abstraction that are implicit in an experiment.

    Some meta-theoretical issues relating to statistical inference

    This paper is a reply to some comments made by Green (2002) on Chow's (2002) critique of Wilkinson and the Task Force's (1999) report on statistical inference. Issues raised are (a) the inappropriateness of accepting methodological prescriptions on authority, (b) the vacuity of non-falsifiable theories, (c) the need to distinguish between experiment and meta-experiment, and (d) the probability foundation of the null-hypothesis significance-test procedure (NHSTP). This reply is intended to foster a better understanding of research methods in general, and of the role of NHSTP in empirical research in particular.

    Automatic detection, consistent mapping, and training

    Results from two experiments showed that a flat display-size function was found under the consistent mapping (CM) condition despite the facts that there was no extensive CM training and that the stimulus-response (S-R) consistency was only an intrasession manipulation. A confounding factor might be responsible for the fact that the consistent and the varied S-R mapping conditions gave rise to different display-size functions in Schneider and Shiffrin's (1977) study. Their claim that automatic detection and controlled search are qualitatively different is also discussed.

    Issues in Statistical Inference

    The APA Task Force's treatment of research methods is critically examined. The present defense of the experiment rests on showing that (a) the control group cannot be replaced by the contrast group, (b) experimental psychologists have valid reasons to use non-randomly selected subjects, (c) there is no evidential support for the experimenter expectancy effect, (d) the Task Force had misrepresented the role of inductive and deductive logic, and (e) the validity of experimental data does not require appealing to effect size or statistical power.

    Cognitive Science and Psychology

    The protocol algorithm abstracted from a human cognizer's own narrative in the course of doing a cognitive task is an explanation of the corresponding mental activity in Pylyshyn's (1984) virtual machine model of mind. Strong equivalence between an analytic algorithm and the protocol algorithm is an index of the validity of the explanatory model. Cognitive psychologists may not find the strong-equivalence index useful as a means to ensure that a theory is not circular because (a) research data are also used as foundation data, (b) there is no justification for the relationship between a to-be-validated theory and its criterion of validity, and (c) foundation data, the validation criterion, and the to-be-validated theory are not independent in cognitive science. There is also the difficulty of not knowing what the psychological primitives are.

    Iconic memory or icon?

    The objectives of the present commentary are to show that (1) one important theoretical property of iconic memory is inconsistent with a retinotopic icon, (2) data that are difficult for the notion of an icon do not necessarily challenge the notion of an iconic store, (3) the iconic store, as a theoretical mechanism, is an ecologically valid one, and (4) the rationale of experimentation is such that the experimental task need not mimic the phenomenon being studied.