QRTEngine: An easy solution for running online reaction time experiments using Qualtrics
Performing online behavioral research is gaining increased popularity among researchers in psychological and cognitive science. However, the currently available methods for conducting online reaction time experiments are often complicated and typically require advanced technical skills. In this article, we introduce the Qualtrics Reaction Time Engine (QRTEngine), an open-source JavaScript engine that can be embedded in the online survey development environment Qualtrics. The QRTEngine can be used to easily develop browser-based online reaction time experiments with accurate timing within current browser capabilities, and it requires only minimal programming skills. After introducing the QRTEngine, we briefly discuss how to create and distribute a Stroop task. Next, we describe a study in which we investigated the timing accuracy of the engine under different processor loads using external chronometry. Finally, we show that the QRTEngine can be used to reproduce classic behavioral effects in three reaction time paradigms: a Stroop task, an attentional blink task, and a masked-priming task. These findings demonstrate that the QRTEngine can be used as a tool for conducting online behavioral research, even when this requires accurate stimulus presentation times.
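The core idea behind browser-based reaction time measurement can be sketched in a few lines of JavaScript. This is a minimal illustration built on the standard high-resolution `performance.now()` timer, not QRTEngine's actual implementation; the helper and function names below are hypothetical:

```javascript
// Minimal sketch of browser-based reaction time measurement
// (illustrative only; not QRTEngine's API).
function makeTrial() {
  let onset = null;
  return {
    // Call at stimulus onset, e.g. inside a requestAnimationFrame
    // callback so the timestamp aligns with the screen refresh.
    showStimulus() {
      onset = performance.now(); // high-resolution, monotonic clock
    },
    // Call from a response handler (e.g. a keydown listener);
    // returns the reaction time in milliseconds.
    respond() {
      if (onset === null) throw new Error("stimulus not yet shown");
      return performance.now() - onset;
    },
  };
}

// In a real experiment the calls are driven by display and keyboard
// events, e.g.:
//   const trial = makeTrial();
//   requestAnimationFrame(() => trial.showStimulus());
//   document.addEventListener("keydown", () => log(trial.respond()));
```

Using `performance.now()` rather than `Date.now()` matters here: it is monotonic and offers sub-millisecond resolution, whereas wall-clock time can jump if the system clock is adjusted mid-trial.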
Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments
Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specs, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.
Participant Nonnaiveté and the reproducibility of cognitive psychology
Many argue that there is a reproducibility crisis in psychology. We investigated nine well-known effects from the cognitive psychology literature—three each from the domains of perception/action, memory, and language—and found that they are highly reproducible. Not only can they be reproduced in online environments, but they also can be reproduced with nonnaïve participants with no reduction of effect size. Apparently, some cognitive tasks are so constraining that they encapsulate behavior from external influences, such as the testing situation and recent prior experience with the experiment, yielding highly robust effects.
Quality versus quantity of social ties in experimental cooperative networks
Recent studies suggest that allowing individuals to choose their partners can help to maintain cooperation in human social networks; this behaviour can supplement behavioural reciprocity, whereby humans are influenced to cooperate by peer pressure. However, it is unknown how the rate of forming and breaking social ties affects our capacity to cooperate. Here we use a series of online experiments involving 1,529 unique participants embedded in 90 experimental networks to show that there is a ‘Goldilocks’ effect of network dynamism on cooperation. When the rate of change in social ties is too low, subjects choose to have many ties, even if they attach to defectors. When the rate is too high, cooperators cannot detach from defectors as much as defectors re-attach and, hence, subjects resort to behavioural reciprocity and switch their behaviour to defection. Optimal levels of cooperation are achieved at intermediate levels of change in social ties.
Sex differences in the Simon task help to interpret sex differences in selective attention.
In the last decade, a number of studies have reported sex differences in selective attention, but a unified explanation for these effects is still missing. This study aims to better understand these differences and put them in an evolutionary psychological context. A total of 418 adult participants performed a computer-based Simon task, in which they responded to the direction of a left- or right-pointing arrow appearing left or right of a fixation point. Women were more strongly influenced by task-irrelevant spatial information than men (i.e., the Simon effect was larger in women, Cohen's d = 0.39). Further, the analysis of sex differences in behavioral adjustment to errors revealed that women slow down more than men following mistakes (d = 0.53). Based on the combined results of previous studies and the current data, it is proposed that sex differences in selective attention are caused by underlying sex differences in core abilities, such as spatial or verbal cognition.
Are all ‘research fields’ equal? Rethinking practice for the use of data from crowd-sourcing market places
New technologies like large-scale social media sites (e.g., Facebook and Twitter) and crowdsourcing services (e.g., Amazon Mechanical Turk, Crowdflower, Clickworker) impact social science research and provide many new and interesting avenues for research. The use of these new technologies for research has not been without challenges, and a recently published psychological study on Facebook led to a widespread discussion on the ethics of conducting large-scale experiments online. Surprisingly little has been said about the ethics of conducting research using commercial crowdsourcing market places. In this paper, I focus on the ethical questions raised by data collection with crowdsourcing tools. I briefly draw on implications of internet research more generally and then focus on the specific challenges that research with crowdsourcing tools faces. I identify fair pay and related issues of respect for autonomy, as well as problems with power dynamics between researcher and participant, which have implications for ‘withdrawal without prejudice’, as the major ethical challenges with crowdsourced data. Finally, I draw attention to how we can develop a ‘best practice’ for researchers using crowdsourcing tools.