4,817 research outputs found

    Methods in Psychological Research

    Get PDF
    Psychologists collect empirical data with various methods for different reasons. These diverse methods have their strengths as well as weaknesses. Nonetheless, it is possible to rank them in terms of different criteria. For example, the experimental method is used to obtain the least ambiguous conclusion. Hence, it is best suited to corroborate conceptual, explanatory hypotheses. The interview method, on the other hand, gives research participants a kind of empathic experience that may be important to them. It is for this reason the best method to use in a clinical setting. All non-experimental methods owe their origin to the interview method. Quasi-experiments are suited for answering practical questions when ecological validity is important

    Experimentation in Psychology--Rationale, Concepts and Issues

    Get PDF
    An experiment is made up of two or more data-collection conditions that are identical in all respects but one. It owes its design to an inductive principle and its hypothesis to deductive logic. It is best suited for corroborating explanatory theories, ascertaining functional relationships, or assessing the substantive effectiveness of a manipulation. Also discussed are (a) the three meanings of 'control,' (b) the issue of ecological validity, (c) the distinction between theory-corroboration and agricultural-model experiments, and (d) the distinction among the hypotheses at four levels of abstraction that are implicit in an experiment

    Some meta-theoretical issues relating to statistical inference

    Get PDF
    This paper is a reply to some comments made by Green (2002) on Chow's (2002) critique of Wilkinson and the Task Force's (1999) report on statistical inference. Issues raised are (a) the inappropriateness of accepting methodological prescriptions on authority, (b) the vacuity of non-falsifiable theories, (c) the need to distinguish between experiment and meta-experiment, and (d) the probability foundation of the null-hypothesis significance-test procedure (NHSTP). This reply is intended to foster a better understanding of research methods in general, and of the role of NHSTP in empirical research in particular

    Automatic detection, consistent mapping, and training

    Get PDF
    Results from two experiments showed that a flat display-size function was found under the consistent mapping (CM) condition despite the facts that there was no extensive CM training and that the stimulus-response (S-R) consistency was only an intrasession manipulation. A confounding factor might be responsible for the fact that the consistent and the varied S-R mapping conditions gave rise to different display-size functions in Schneider and Shiffrin's (1977) study. Their claim that automatic detection and controlled search are qualitatively different is also discussed

    Issues in Statistical Inference

    Get PDF
    The APA Task Force’s treatment of research methods is critically examined. The present defense of the experiment rests on showing that (a) the control group cannot be replaced by the contrast group, (b) experimental psychologists have valid reasons to use non-randomly selected subjects, (c) there is no evidential support for the experimenter expectancy effect, (d) the Task Force had misrepresented the role of inductive and deductive logic, and (e) the validity of experimental data does not require appealing to the effect size or statistical power

    Cognitive Science and Psychology

    Get PDF
    The protocol algorithm abstracted from a human cognizer's own narrative in the course of doing a cognitive task is an explanation of the corresponding mental activity in Pylyshyn's (1984) virtual machine model of mind. Strong equivalence between an analytic algorithm and the protocol algorithm is an index of the validity of the explanatory model. Cognitive psychologists may not find the strong-equivalence index useful as a means to ensure that a theory is not circular because (a) research data are also used as foundation data, (b) there is no justification for the relationship between a to-be-validated theory and its criterion of validity, and (c) foundation data, the validation criterion, and the to-be-validated theory are not independent in cognitive science. There is also the difficulty of not knowing what psychological primitives are

    Iconic memory or icon?

    Get PDF
    The objectives of the present commentary are to show that (1) one important theoretical property of iconic memory is inconsistent with a retinotopic icon, (2) data difficult for the notion of an icon do not necessarily challenge the notion of an iconic store, (3) the iconic store, as a theoretical mechanism, is an ecologically valid one, and (4) the rationale of experimentation is such that the experimental task need not mimic the phenomenon being studied

    Application of Subset Simulation to Seismic Risk Analysis

    Get PDF
    This paper presents the application of a new reliability method called Subset Simulation to seismic risk analysis of a structure, where the exceedance of some performance quantity, such as the peak interstory drift, above a specified threshold level is considered for the case of uncertain seismic excitation. This involves analyzing the well-known but difficult first-passage failure problem. Failure analysis is also carried out using results from Subset Simulation, which yields information about the probable scenarios that may occur in case of failure. The results show that for given magnitude and epicentral distance (which are related to the 'intensity' of shaking), the probable mode of failure is due to a 'resonance effect.' On the other hand, when the magnitude and epicentral distance are considered to be uncertain, the probable failure mode corresponds to the occurrence of 'large-magnitude, small epicentral distance' earthquakes
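    The core idea of Subset Simulation is to express a rare failure probability as a product of larger conditional probabilities, each estimated level by level with Markov chain Monte Carlo. A minimal one-dimensional sketch follows; the identity limit-state function, the sample sizes, and the simple Metropolis step are illustrative assumptions, not the implementation or the structural model used in the paper:

    ```python
    import math
    import random

    def subset_simulation(limit_state, threshold, n=1000, p0=0.1, seed=0):
        """Estimate P(limit_state(X) > threshold) for X ~ N(0, 1) by Subset Simulation."""
        rng = random.Random(seed)
        samples = [rng.gauss(0.0, 1.0) for _ in range(n)]
        prob = 1.0
        for _ in range(20):                      # cap on the number of levels
            exceed = sum(limit_state(x) > threshold for x in samples)
            if exceed >= p0 * n:                 # target threshold reached: finish
                return prob * exceed / n
            nseed = int(p0 * n)
            vals = sorted((limit_state(x) for x in samples), reverse=True)
            level = vals[nseed - 1]              # intermediate threshold (top p0 fraction)
            prob *= p0
            seeds = [x for x in samples if limit_state(x) >= level][:nseed]
            # Metropolis chains sampling N(0, 1) conditioned on exceeding `level`
            samples = []
            for x in seeds:
                for _ in range(n // nseed):
                    cand = x + rng.gauss(0.0, 1.0)
                    if (rng.random() < math.exp((x * x - cand * cand) / 2.0)
                            and limit_state(cand) >= level):
                        x = cand
                    samples.append(x)
        return prob
    ```

    With `limit_state(x) = x` and `threshold = 3.0`, the estimate should land near the exact value P(X > 3) ≈ 1.35e-3 while using only a few thousand samples, which is the method's advantage over direct Monte Carlo for small probabilities.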

    One Dimensional n-ary Density Classification Using Two Cellular Automaton Rules

    Suppose each site on a one-dimensional chain with periodic boundary condition may take on any one of the states 0, 1, ..., n-1; can you find out the most frequently occurring state using a cellular automaton? Here, we prove that while the above density classification task cannot be resolved by a single cellular automaton, this task can be performed efficiently by applying two cellular automaton rules in succession.
    Comment: RevTeX, 4 pages, uses amsfonts
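    The two-rule scheme can be illustrated in the binary (n = 2) special case, where the known solution applies elementary rule 184 (traffic) for floor((N-2)/2) steps and then rule 232 (local majority) for floor((N-1)/2) steps on a ring of odd size N; the rule numbers and step counts below come from that binary result, not necessarily from the n-ary construction of this paper:

    ```python
    def step(cells, rule):
        """One synchronous update of an elementary CA rule on a ring of 0/1 cells."""
        n = len(cells)
        return [(rule >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % n])) & 1
                for i in range(n)]

    def classify_density(cells):
        """Relax to all 1s if 1s are the majority, all 0s otherwise (odd ring size)."""
        n = len(cells)
        for _ in range((n - 2) // 2):   # rule 184 spreads the 1s out like cars
            cells = step(cells, 184)
        for _ in range((n - 1) // 2):   # rule 232 then settles each local vote
            cells = step(cells, 232)
        return cells

    # A 5-site ring with three 1s relaxes to all 1s; with two 1s, to all 0s.
    print(classify_density([1, 1, 1, 0, 0]))  # → [1, 1, 1, 1, 1]
    print(classify_density([1, 1, 0, 0, 0]))  # → [0, 0, 0, 0, 0]
    ```

    Neither rule classifies density on its own; it is the composition that works, which mirrors the paper's point that two rules in succession succeed where any single automaton fails.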