Methods in Psychological Research
Psychologists collect empirical data with various methods for different reasons. These diverse methods have their strengths as well as weaknesses. Nonetheless, it is possible to rank them in terms of different criteria. For example, the experimental method yields the least ambiguous conclusions. Hence, it is best suited to corroborating conceptual, explanatory hypotheses. The interview method, on the other hand, gives research participants a kind of empathic experience that may be important to them. It is for this reason the best method to use in a clinical setting. All non-experimental methods owe their origin to the interview method. Quasi-experiments are suited to answering practical questions when ecological validity is important
Research and Applications of the Processes of Performance Appraisal: A Bibliography of Recent Literature, 1981-1989
[Excerpt] There have been several recent reviews of different subtopics within the general performance appraisal literature. The reader of these reviews will find, however, that the accompanying citations may be of limited utility for one or more reasons. For example, the reference sections of these reviews are usually composed of citations which support a specific theory or practical approach to the evaluation of human performance. Consequently, the citation lists for these reviews are, as they must be, highly selective and do not include works that may have only a peripheral relationship to a given reviewer's target concerns. Another problem is that the citations are out of date. That is, review articles frequently contain many citations that are fifteen or more years old. The generation of new studies and knowledge in this field occurs very rapidly. This creates a need for additional reference information solely devoted to identifying the wealth of new research, ideas, and writing that is changing the field
The construction of colorimetry by committee
This paper explores the confrontation of physical and contextual factors involved in the emergence of the subject of color measurement, which stabilized in essentially its present form during the interwar period. The contentions surrounding the specialty had both a national and a disciplinary dimension. German dominance was curtailed by American and British contributions after World War I. Particularly in America, communities of physicists and psychologists had different commitments to divergent views of nature and human perception. They therefore had to negotiate a compromise between their desire for a quantitative system of description and the perceived complexity and human-centeredness of color judgement. These debates were played out not in the laboratory but rather in institutionalized encounters on standards committees. Groups such as this constitute a relatively unexplored historiographic and social site of investigation. The heterogeneity of such committees, and of their products, highlights the problems of identifying and following such ephemeral historical 'actors'
Justice Expectations and Applicant Perceptions
Expectations, which are beliefs about a future state of affairs, constitute a basic psychological mechanism that underlies virtually all human behavior. Although expectations serve as a central component in many theories of organizational behavior, they have received limited attention in the organizational justice literature. The goal of this paper is to introduce the concept of justice expectations and explore its implications for understanding applicant perceptions. To conceptualize justice expectations, we draw on research on expectations conducted in multiple disciplines. We discuss the three sources of expectations – direct experience, indirect influences, and other beliefs – and use this typology to identify the likely antecedents of justice expectations in selection contexts. We also discuss the impact of expectations on attitudes, cognitions, and behaviors, focusing specifically on outcomes tied to selection environments. Finally, we explore the theoretical implications of incorporating expectations into research on applicant perceptions and discuss the practical significance of justice expectations in selection contexts
Assessing the payback from health R & D: From ad hoc studies to regular monitoring
Chapter 1 : Introduction
• The increasing demands for the benefits of payback from publicly funded R&D to be assessed are based partly on the need to justify or account for expenditure on R&D, and partly on the desire for information to assist resource allocation and the better management of R&D funds. The former consideration is particularly strong in relation to the R&D expenditure that comes out of the wider NHS budget.
• In this report a range of categories of payback will be identified along with a variety of methods for assessing them.
• The aim of the report is to make recommendations as to how the outcomes from health research might best be monitored on a regular basis. The specific context of the report is the NHS R&D Programme but many of the issues will be relevant for a wide range of funders of health R&D.
• The introduction sets out not only a plan of the report but also suggests that readers familiar with the general arguments and existing literature may choose to jump to Chapter 6.
Chapter 2 : Review of Existing Approaches to Assessing the Payback from Research
• Existing work describes various approaches to valuing research. Some are ex ante and attempt to predict the outcomes of research being considered, others are ex post or retrospective.
• The five categories of benefit or payback from health R&D that have been identified involve contributions: to knowledge; to research capacity and future research; to improved information for decision making; to the efficiency, efficacy and equity of health care services; and to the nation’s economic performance. These are shown in Table 1 of the report.
• The process by which R&D generates final outcomes can be modelled as a sequence. This includes primary outputs such as publications; secondary outputs in the form of policy or administrative decisions; and final outcomes which comprise the health and economic benefits. Feedback loops are also introduced and mitigate the limitations of a linear approach.
• Qualitative and quantitative approaches can be used but there are immense problems with time lags and attributing outcomes, and sometimes even outputs, to specific items of research funding.
• Four common methods of measuring payback can be used. Expert review, by peers or, sometimes, users is the traditional way of assessing the quality of research. Bibliometric techniques can involve not only counting publications but also using datasets such as the Science Citation Index and Wellcome’s Research Outputs Database (ROD). The various methods of economic analysis of payback are difficult to undertake given the costs and problems of acquiring relevant information and estimating benefits. Social science methods include case studies, which can provide useful information but are resource intensive, and questionnaires to researchers and potential research users.
Chapter 3 : Characteristics of a Routine Monitoring System
• In moving from ad hoc or research studies of payback towards more regular monitoring, it is noted that, whereas there has always been a tradition of evaluating research, public services in general now place greater emphasis on audit, performance measurement and indicators. A review of these various systems suggests we should look to develop a system of outcomes monitoring that incorporates performance indicators (PIs) and measurement, rather than an audit system that monitors activities against predetermined targets.
• Standard characteristics of performance measurement systems do not necessarily apply to research where, for example, there are non-standard outputs. Difficulties have arisen in the USA in attempting to apply the Government Performance and Results Act to research funding agencies. It is shown that because the findings of basic research, in particular, enter a knowledge pool in which people and ideas interact, it is difficult to use a PIs’ approach to track eventual outcomes. However, for some types of health research it has proved more feasible to trace the flow between research outputs and outcomes.
• An outcomes monitoring system could be useful if it met the following criteria: relevance to, with as comprehensive coverage as possible of, the funder's objectives; relevance to the funder’s decision-making processes; encouraging accurate compliance; minimising unintended consequences; and having acceptable costs.
Chapter 4 : Differences Between Research Types
• The range of differences between types of research can be relevant for the design of a routine monitoring system. The OECD distinguishes between basic research, applied research and experimental development. Most DH/NHS research is applied. There might be more of a tradition of publication of findings in applied research in health than in other fields. Nevertheless, the publication and incentives patterns operating in basic research mean that it would be inappropriate to use bibliometric indicators in a simple way across all fields even in health research.
• Despite having some differences from health research in publication patterns and in the detailed categories of payback, the broad approach proposed in Chapter 6 could be applied to social care research.
• Research that is commissioned, especially by the government, has some of the minimum conditions built into it that are associated with outcomes being generated, in particular because the funder has identified that a contribution in this area will be valuable.
Chapter 5 : What Units of Research?
• The term programme has various meanings including being used to describe a collection of projects on a common theme and to describe a block of funding for a research unit.
• Three main streams or modes of funding can be identified: projects, which are administratively grouped into programmes including a responsive programme; institutions/centres/units; and individual researchers. These three streams are displayed in Figure 1. It is probable that the regular data-gathering for a monitoring system would operate at the basic level of each stream or mode.
• Previous work demonstrates that the full range of benefits can sometimes be applied at the level of projects, either in the responsive mode or in programmes, through the use of questionnaires to researchers. Expert and user review and user surveys have also been applied.
• Institutions and centres increasingly have experience not only of traditional periodic expert review but also of producing annual reports, although there are debates about what dimensions to include in such reviews and reports.
• Individuals in receipt of research development awards have completed questionnaires during and after the awards. These concentrate on the development of research capacity but can go wider.
Chapter 6 : A Possible Comprehensive Outcomes Monitoring System
• The proposed system is intended for DH/NHS to monitor the outcomes from its R&D in order to justify the R&D expenditure and assist with managing the portfolio. More detailed information is required for the latter purpose.
• We propose that a multidimensional approach be adopted to cover all the dimensions of payback and that information be gathered from three sets of sources. Table 3 shows which methods would cover which output/outcome categories.
• First, possibly annually, a questionnaire (possibly electronic) covering most payback categories should gather data from the basic level of each funding stream, i.e. from lead researchers of projects, from research institutions/centres, and from individual award holders.
• Second, supplementary information should be gathered from external databases (including the citation indices and Wellcome’s ROD).
• Third, a range of approaches, i.e. user surveys, reviews by experts and peers, case studies including economic evaluations, and analysis of sources used in policy documents such as NICE guidelines, would be undertaken on a sample basis. These would not only provide supplementary information but, like the external databases, would also verify the data collected directly from researchers.
• These proposals can be evaluated against the criteria set out in Chapter 3:
• The system is relevant to DH’s objectives of generating payback in a range of categories.
• Various problems have to be overcome before the system could be fully decision relevant. First, it might be necessary to ask researchers to apportion the contribution made to specific outputs from various funding streams. Second, to be decision relevant the information would have to be analysed and presented in a manner consistent with funders’ decision-making processes. This would involve a) showing how, for each output and outcome (for example publications), data from one project or stream could be compared with those from another, and b) demonstrating how different outputs and outcomes could be aggregated.
• The questions of accuracy of data, minimisation of unintended consequences and the acceptability of the net costs are also addressed.
Chapter 7 : Research and Monitoring
• Whilst this report is primarily concerned with moving from ad hoc studies towards a routine monitoring system there are issues that need further research.
• Before embarking on full implementation, the feasibility of items such as on-line recording of data and asking researchers to attribute proportions of research outputs to separate funding agencies needs to be tested.
• Once the system is implemented the value of some items can be better assessed, for example the additional value provided by self reporting of publications beyond that gained from relying on external databases.
• The data provided by the system would provide opportunities for further payback research on, for example, links between publications and other categories of payback.
• Some items such as network analysis could potentially be added to the monitoring system after further examination of them.
• Finally, the benefit from the monitoring system itself should be assessed.
Department of Health; Wellcome Trust
Econometrics: A bird's eye view
As a unified discipline, econometrics is still relatively young and has been transforming and expanding very rapidly over the past few decades. Major advances have taken place in the analysis of cross sectional data by means of semi-parametric and non-parametric techniques. Heterogeneity of economic relations across individuals, firms and industries is increasingly acknowledged, and attempts have been made to take it into account either by integrating out its effects or by modeling the sources of heterogeneity when suitable panel data exist. The counterfactual considerations that underlie policy analysis and treatment evaluation have been given a more satisfactory foundation. New time series econometric techniques have been developed and employed extensively in the areas of macroeconometrics and finance. Non-linear econometric techniques are used increasingly in the analysis of cross section and time series observations. Applications of Bayesian techniques to econometric problems have been given new impetus, largely thanks to advances in computer power and computational techniques. The use of Bayesian techniques has in turn provided investigators with a unifying framework in which the tasks of forecasting, decision making, model evaluation and learning can be considered as parts of the same interactive and iterative process, thus paving the way for establishing the foundations of "real time econometrics". This paper attempts to provide an overview of some of these developments
Measuring Moral Reasoning using Moral Dilemmas: Evaluating Reliability, Validity, and Differential Item Functioning of the Behavioral Defining Issues Test (bDIT)
We evaluated the reliability, validity, and differential item functioning (DIF) of a shorter version of the Defining Issues Test-1 (DIT-1), the behavioral DIT (bDIT), measuring the development of moral reasoning. 353 college students (81 males, 271 females, 1 not reported; age M = 18.64 years, SD = 1.20 years) who were taking introductory psychology classes at a public university in a suburban area in the Southern United States participated in the present study. First, we examined the reliability of the bDIT using Cronbach’s α and its concurrent validity with the original DIT-1 using disattenuated correlation. Second, we compared the test duration between the two measures. Third, we tested the DIF of each question between males and females. Findings indicated, first, that the bDIT showed acceptable reliability and good concurrent validity. Second, the test duration could be significantly shortened by employing the bDIT. Third, DIF results indicated that the bDIT items did not favour either gender. Practical implications of the present study based on the reported findings are discussed
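The two psychometric statistics named in this abstract are simple to compute. The sketch below is an illustration with made-up toy data, not code or data from the study: Cronbach's α from item-level scores, and the disattenuated correlation that corrects an observed correlation for unreliability in both measures.

```python
import math
from statistics import variance

def cronbach_alpha(items):
    # items: one list of scores per item, all equal length (one score per respondent)
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

def disattenuated_r(r_xy, rel_x, rel_y):
    # Observed correlation divided by the geometric mean of the two reliabilities
    return r_xy / math.sqrt(rel_x * rel_y)

# Hypothetical toy data: three items answered by four respondents
items = [[1, 2, 3, 4], [2, 2, 3, 5], [1, 3, 3, 4]]
alpha = cronbach_alpha(items)                # ≈ 0.95 for this toy data
r_true = disattenuated_r(0.50, 0.80, 0.80)   # 0.625
```

The correction shows why disattenuated correlation suits a concurrent-validity comparison between two imperfectly reliable tests: an observed r of .50 between measures each with reliability .80 implies a true-score correlation of about .63.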
An Overview of Models for Response Times and Processes in Cognitive Tests.
Response times (RTs) are a natural kind of data for investigating the cognitive processes underlying cognitive test performance. We give an overview of modeling approaches and of findings obtained with these approaches. Four types of models are discussed: response time models (RT as the sole dependent variable), joint models (RT together with other variables as dependent variables), local dependency models (with remaining dependencies between RT and accuracy), and response-time-as-covariate models (RT as independent variable). The evidence from these approaches is often not very informative about the specific kind of processes (other than problem solving, information accumulation, and rapid guessing), but the findings do suggest dual processing: automated processing (e.g., knowledge retrieval) vs. controlled processing (e.g., sequential reasoning steps), although alternative explanations for the same results exist. While it seems quite possible to differentiate rapid guessing from normal problem solving (which can be based on automated or controlled processing), further decompositions of response times are rarely made, although they are possible based on some of the modeling approaches
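To make the first model class concrete, a widely used pure response time model is the lognormal model, in which log response time is item time intensity minus person speed plus noise. The sketch below is my own illustration, not code from the overview; parameter names and values are hypothetical.

```python
import math
import random

def simulate_lognormal_rts(taus, betas, alpha=2.0, seed=0):
    # Lognormal RT model: ln T_pi = beta_i - tau_p + e, with e ~ Normal(0, 1/alpha)
    # taus: person speed parameters; betas: item time intensities;
    # alpha: discrimination (inverse residual SD)
    rng = random.Random(seed)
    return [[math.exp(b - tau + rng.gauss(0, 1 / alpha)) for b in betas]
            for tau in taus]

# Two hypothetical test takers (the second is faster), 200 items of equal intensity
rts = simulate_lognormal_rts(taus=[0.0, 1.0], betas=[1.0] * 200)
mean_slow = sum(rts[0]) / len(rts[0])
mean_fast = sum(rts[1]) / len(rts[1])   # higher tau -> systematically shorter times
```

Because RT is the sole dependent variable here, speed differences between persons are captured, but nothing is said about accuracy; that is exactly the gap the joint and local dependency models in the overview are meant to fill.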
Standardized or simple effect size: what should be reported?
It is regarded as best practice for psychologists to report effect size when disseminating quantitative research findings. Reporting of effect size in the psychological literature is patchy – though this may be changing – and when reported it is far from clear that appropriate effect size statistics are employed. This paper considers the practice of reporting point estimates of standardized effect size and explores factors such as reliability, range restriction and differences in design that distort standardized effect size unless suitable corrections are employed. For most purposes simple (unstandardized) effect size is more robust and versatile than standardized effect size. Guidelines for deciding what effect size metric to use and how to report it are outlined. Foremost among these are: i) a preference for simple effect size over standardized effect size, and ii) the use of confidence intervals to indicate a plausible range of values the effect might take. Deciding on the appropriate effect size statistic to report always requires careful thought and should be influenced by the goals of the researcher, the context of the research and the potential needs of readers
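The paper's two foremost guidelines, preferring simple effect size and reporting a confidence interval, can be made concrete in a few lines. The sketch below is my own illustration with made-up group scores, not code from the paper; the critical value 2.0 is a rough large-sample approximation.

```python
import math
from statistics import mean, variance

def simple_effect_with_ci(a, b, t_crit=2.0):
    # Unstandardized mean difference with an approximate 95% CI (Welch-style SE).
    # t_crit = 2.0 is a rough large-sample value; use a t table for small samples.
    diff = mean(a) - mean(b)
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return diff, (diff - t_crit * se, diff + t_crit * se)

def cohens_d(a, b):
    # Standardized mean difference using the pooled standard deviation
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled)

# Hypothetical scores for two groups
a, b = [5, 6, 7, 8], [3, 4, 5, 6]
diff, ci = simple_effect_with_ci(a, b)   # 2.0 scale points, CI roughly (0.2, 3.8)
d = cohens_d(a, b)                       # ≈ 1.55 standard deviations
```

Reporting the 2.0-point difference with its interval keeps the result in the original units of the measure, whereas d depends on sample variability and so is distorted by reliability and range restriction, which is the paper's core argument for preferring the simple effect size.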
Studying Attitudes and Social Norms in Agile Software Development
The purpose of this paper is to review research on attitudes and social norms and connect it to the agile software development context. Furthermore, I propose additional theories from social psychology (mainly the theory of planned behavior and the degree of internalization of social norms) that would most certainly be useful for further sense-making of human factors-related research on agile teams.
Comment: Accepted at XP2018 Poster Track Session