Feeling the future: A meta-analysis of 90 experiments on the anomalous anticipation of random future events
In 2011, one of the authors (DJB) published a report of nine experiments in the Journal of Personality and Social Psychology purporting to demonstrate that an individual's cognitive and affective responses can be influenced by randomly selected stimulus events that do not occur until after his or her responses have already been made and recorded, a generalized variant of the phenomenon traditionally denoted by the term precognition. To encourage replications, all materials needed to conduct them were made available on request. We here report a meta-analysis of 90 experiments from 33 laboratories in 14 countries which yielded an overall effect greater than 6 sigma, z = 6.40, p = 1.2 × 10⁻¹⁰, with an effect size (Hedges' g) of 0.09. A Bayesian analysis yielded a Bayes Factor of 5.1 × 10⁹, greatly exceeding the criterion value of 100 for "decisive evidence" in support of the experimental hypothesis. When DJB's original experiments are excluded, the combined effect size for replications by independent investigators is 0.06, z = 4.16, p = 1.1 × 10⁻⁵, and the BF value is 3,853, again exceeding the criterion for "decisive evidence." The number of potentially unretrieved experiments required to reduce the overall effect size of the complete database to a trivial value of 0.01 is 544, and seven of eight additional statistical tests support the conclusion that the database is not significantly compromised by either selection bias or by intense "p-hacking", the selective suppression of findings or analyses that failed to yield statistical significance. P-curve analysis, a recently introduced statistical technique, estimates the true effect size of the experiments to be 0.20 for the complete database and 0.24 for the independent replications, virtually identical to the effect size of DJB's original experiments (0.22) and the closely related "presentiment" experiments (0.21). We discuss the controversial status of precognition and other anomalous effects collectively known as psi.
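The combined z-statistic reported in this abstract is the kind of quantity produced by Stouffer's method for pooling independent studies. A minimal illustrative sketch (unweighted; the meta-analysis itself may have used a weighted variant) is:

```python
import math
from statistics import NormalDist

def stouffer_z(z_scores):
    """Combine independent per-study z-scores with Stouffer's
    (unweighted) method: sum the z's and divide by sqrt(k)."""
    return sum(z_scores) / math.sqrt(len(z_scores))

def one_tailed_p(z):
    """One-tailed p-value for a combined z under the standard normal."""
    return 1.0 - NormalDist().cdf(z)

# Hypothetical per-study z-scores, for illustration only.
combined = stouffer_z([1.1, 0.4, 1.8, 0.9, 1.3])
p = one_tailed_p(combined)
```

Even individually non-significant studies can yield a large combined z, which is why the abstract's file-drawer and p-hacking checks matter: the pooled statistic is only as trustworthy as the completeness of the study set.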
Discovery and Communication of Important Marketing Findings: Evidence and Proposals
My review of empirical research on scientific publication led to the following conclusions. Three criteria are useful for identifying whether findings are important: replication, validity, and usefulness. A fourth criterion, surprise, applies in some situations. Based on these criteria, important findings resulting from academic research in marketing seem to be rare. To a large extent, this rarity is due to a reward system that is built around subjective peer review. Rather than using peer review as a secret screening process, using an open process likely will improve papers and inform readers. Researchers, journals, business schools, funding agencies, and professional organizations can all contribute to improving the process. For example, researchers should do directed research on papers that contribute to principles. Journals should invite papers that contribute to principles. Business school administrators should reward researchers who make important findings. Funding agencies should base decisions on researchers' prior success in making important findings, and professional organizations should maintain web sites that describe what is known about principles and what research is needed on principles.
Green Grass, High Cotton: Reflections on the Evolution of the Journal of Advertising
This article reflects on my time as the fifth editor of the Journal of Advertising, makes observations about the evolution of scholarship in the Journal over the past decades, offers suggestions for how JA might advance in the coming years, and provides some “words of wisdom” to advertising researchers. Because it is the first in an invited article series of editor reflections, a bit of historical context is provided.
Developments in information technology and their implications for psychological research: Disruptive or diffusive change?
The notion of technology-induced disruptive change has generally been applied within academia to teaching and learning. Less explored is the disruption that occurs to research as mainstream technology develops. This article examines the effects of technological change on research in psychology, in particular focussing on the development of web-based empirical research procedures over the past 15 years or so. I discuss the history, challenges and potential of these developments, and put forward some qualified suggestions for some of the future directions that technology will allow research in psychology to take.
Simulating Wide-area Replication
We describe our experiences with simulating replication algorithms for use in far-flung distributed systems. The algorithms under scrutiny mimic epidemics. Epidemic algorithms seem to scale and adapt to change (such as varying replica sets) well. The loose consistency guarantees they make seem more useful in applications where availability strongly outweighs correctness; e.g., distributed name service.
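The epidemic dissemination this abstract alludes to can be sketched as a minimal push-gossip simulation (illustrative only; function and variable names are assumptions, not the paper's actual algorithms):

```python
import random

def simulate_push_gossip(n, seed=None):
    """Simulate push-style epidemic spread of one update among n replicas.

    Each round, every replica that already holds the update pushes it to
    one peer chosen uniformly at random. Returns the number of rounds
    until every replica holds the update.
    """
    rng = random.Random(seed)
    infected = {0}          # replica 0 originates the update
    rounds = 0
    while len(infected) < n:
        newly = set()
        for node in infected:
            peer = rng.randrange(n)   # may pick itself or an infected peer
            if peer not in infected:
                newly.add(peer)
        infected |= newly
        rounds += 1
    return rounds

# Typical use: estimate spread time over many trials.
rounds_for_100 = simulate_push_gossip(100, seed=42)
```

The expected number of rounds grows roughly logarithmically in n, which is the scaling property that makes epidemic algorithms attractive for wide-area replication despite their only-eventual consistency.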