20 research outputs found

    The size of the oddball effect depends on the degree of novelty between the oddball and the repeated stimulus.

    <p>(A) Cartoon depicting two possible trials. Top: the oddball is very different from the standard; bottom: the oddball is very similar to the standard. (B) The size of the oddball effect increases linearly with the difference in angle between the oddball and standard stimuli. (C) The effect of novelty saturates by ∼50° of difference.</p>

    The size of the oddball effect depends on the number of repetitions of the standard stimulus.

    <p>(A) Cartoon depicting the experimental design. Participants viewed a stream of repeated lines with an oddball line that appeared anywhere from the 2<sup>nd</sup> to the 6<sup>th</sup> position. (B) The number of repetitions of the standard stimulus modulates the size of the temporal oddball effect.</p>

    The oddball effect is subject to experimental context.

    <p>(A) The oddball effect at the 7<sup>th</sup> position was tested in 3 different blocks – among interleaved trials in which the oddball appeared between the 4<sup>th</sup> and 7<sup>th</sup> position, among interleaved trials in which it appeared between the 7<sup>th</sup> and 10<sup>th</sup> position, and in trials in which it appeared only at the 7<sup>th</sup> position. (B) The size of the oddball effect at the 7<sup>th</sup> position differs depending on the expectations set within the experimental block of trials.</p>

    Grapheme-color synesthesia in 6588 participants.

    <p>The letter-color pairings across the whole population are shown, with rows corresponding to participants and columns to letters. The colors along the bottom represent the most frequently chosen (modal) color label for each letter after interpolating from subjects’ RGB coordinates to labels. Letters not assigned a color by a participant are given a random color.</p>
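The caption describes interpolating from each participant's RGB choice to the nearest color label and then taking the modal label per letter. A minimal sketch of that mapping, assuming a small illustrative palette (the study's actual label set and RGB anchor points are not given here):

```python
import math
from collections import Counter

# Illustrative reference palette; the label set and RGB anchor points used
# in the study are assumptions here.
PALETTE = {
    "red": (255, 0, 0),
    "green": (0, 128, 0),
    "blue": (0, 0, 255),
    "yellow": (255, 255, 0),
    "black": (0, 0, 0),
    "white": (255, 255, 255),
}

def nearest_label(rgb):
    """Map an RGB triple to the nearest palette label by Euclidean distance."""
    return min(PALETTE, key=lambda name: math.dist(rgb, PALETTE[name]))

def modal_label(rgb_choices):
    """Modal color label for one letter across many participants."""
    labels = (nearest_label(rgb) for rgb in rgb_choices)
    return Counter(labels).most_common(1)[0][0]
```

For example, `modal_label([(250, 10, 10), (240, 0, 0), (0, 0, 230)])` returns `"red"`, since two of the three choices interpolate to the red anchor.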

    Synesthetes with learned matches are subject to the same influences as the larger population.

    <p><b>A</b>. The colors in the toy (upper row), the modal color choice for each letter from the 6188 synesthetes (middle row), and the most commonly assigned color for each letter for the 400 synesthetes from <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0118996#pone.0118996.g002" target="_blank">Fig. 2C</a>, when the choice does not match the toy (bottom row). Letters for which the magnet set color is the same as the modal color for all synesthetes are indicated in gray. When the 400 synesthetes have letter-color matches that don’t match the toy, they generally follow the same pattern as the rest of the population (15/20 letters the same). <b>B</b>. Relationship between the probability that a letter is matched to the modal choice in the 400 synesthetes from <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0118996#pone.0118996.g002" target="_blank">Fig. 2C</a> and the other 6188 synesthetes. The strong positive correlation indicates that the two populations are subject to similar influences. <b>C</b>. Correlation between the probability that a letter matches the magnet set in the 400 synesthetes and the probability that a letter matches the modal choice in the other 6188 synesthetes. The negative correlation signals competition between the two sources of influence in the magnet synesthetes: the toy versus general cultural/linguistic tendencies. Analyses in this section are limited to the subset of 20 letters which are not the same for the magnet set and modal choice (excludes A, C, H, M, N, and W; grayed out in the bottom row of panel A).</p>
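The quantities correlated in panels B and C reduce to per-letter match frequencies and a Pearson correlation. A minimal sketch, with hypothetical data structures standing in for the study's matching data:

```python
import math

def match_probabilities(matches_by_letter):
    """Fraction of participants whose choice matched, per letter.

    matches_by_letter: {letter: list of booleans, one per participant}.
    """
    return {letter: sum(hits) / len(hits)
            for letter, hits in matches_by_letter.items()}

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A positive `pearson_r` over the 20 analyzed letters corresponds to panel B's shared influences; a negative one corresponds to panel C's competition between the toy and population-level tendencies.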

    No evidence for fear-induced increase in temporal resolution.

    <p>(a) Participants' estimates of the duration of the free-fall were expanded by 36%. The actual duration of the fall was 2.49 sec. (b) If a duration expansion of 36% caused a corresponding increase in temporal resolution, a 79% accuracy in digit identification during the fall would be predicted (left bar, see text). However, participants' accuracy in-flight was significantly less than expected based on this theory (middle bar, p<2×10<sup>−6</sup>). In-flight performance was no better than ground-based controls (right bar, p = 0.86), in which the experimental sequence was identical except that the participants did not perform the free fall. The performance scores are averaged over participants, each of whom performed the experiment only once and had a potential performance of 100% (correctly reported both digits), 50%, or 0%. Note that participants did show better-than-chance performance on both the in-flight experiment and ground-based control (chance = 10% accuracy) even though the alternation period had been set to 6 ms below their threshold. This performance gain might be attributable to perceptual learning; it may also be because movement of the chronometer makes it slightly easier to read due to separation of successive frames, and participants sometimes moved the device involuntarily as they hit the net. To ensure parity between the comparisons, we applied a small jerk to control participants' wrists to mimic how the device moved when free-fall participants hit the net. Asterisks represent p<0.05.</p>
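The prediction tested in panel (b) rests on one scaling assumption: if subjective duration dilates by 36%, the perceived alternation period of the chronometer should stretch by the same factor, pushing a display set 6 ms below threshold back toward readability. A sketch of that arithmetic, with an illustrative threshold value (the 79% figure itself comes from the authors' psychometric fits, not from this calculation):

```python
EXPANSION = 0.36  # reported expansion of perceived fall duration

def perceived_period_ms(physical_period_ms, expansion=EXPANSION):
    """Perceived alternation period if subjective time dilates uniformly."""
    return physical_period_ms * (1.0 + expansion)

# Illustrative numbers: a 20 ms fusion threshold, display set 6 ms below it.
threshold_ms = 20.0
in_flight_ms = threshold_ms - 6.0
# Under the dilation hypothesis the 14 ms alternation should feel like ~19 ms,
# i.e. near threshold, so digits should become partly readable in-flight.
predicted_ms = perceived_period_ms(in_flight_ms)
```

The observed in-flight accuracy matched the undilated prediction instead, which is the caption's argument against a fear-induced gain in temporal resolution.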

    Clustering analysis.

    <p>9 clusters obtained by applying the k-medoids algorithm to participants’ letter-color matching data (k = 9). A large cluster (n = 500) with data clearly resembling the magnet set can be seen at bottom right.</p>
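The clustering step can be sketched as a minimal PAM-style k-medoids loop that alternates assignment and medoid updates. This is an illustrative implementation for generic points with a user-supplied distance function, not the study's analysis code:

```python
import random

def k_medoids(points, k, dist, n_iter=100, seed=0):
    """Minimal alternating k-medoids (PAM-style) sketch.

    points: list of items; dist: pairwise distance function.
    Returns (medoid indices, cluster label per point).
    """
    rng = random.Random(seed)
    medoids = rng.sample(range(len(points)), k)
    for _ in range(n_iter):
        # Assignment step: each point joins its nearest medoid.
        labels = [min(range(k), key=lambda j: dist(p, points[medoids[j]]))
                  for p in points]
        # Update step: each cluster's new medoid is the member that
        # minimizes total distance to the rest of the cluster.
        new_medoids = []
        for j in range(k):
            members = [i for i, lab in enumerate(labels) if lab == j]
            if not members:  # keep the old medoid for an empty cluster
                new_medoids.append(medoids[j])
                continue
            new_medoids.append(min(
                members,
                key=lambda i: sum(dist(points[i], points[m]) for m in members)))
        if new_medoids == medoids:  # converged
            break
        medoids = new_medoids
    return medoids, labels
```

For letter-color data, `points` would be participants' matching vectors and `dist` a distance between two participants' matches; k-medoids is a natural choice there because medoids are actual participants, so each cluster has an interpretable exemplar.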

    Measuring temporal resolution during a fearful event.

    <p>(a) When a digit is alternated slowly with its negative image, it is easy to identify. (b) As the rate of alternation increases, the patterns fuse into a uniform field, indistinguishable from any other digit and its negative. (c) The perceptual chronometer is engineered to display digits defined by rapidly alternating LED lights on two 8×8 arrays. The internal microprocessor randomizes the digits and can display them adjustably from 1–166 Hz. (d) The Suspended Catch Air Device (SCAD) diving tower at the Zero Gravity amusement park in Dallas, Texas (<a href="http://www.gojump.com" target="_blank">www.gojump.com</a>). Participants are released from the apex of the tower and fall backward for 31 m before landing safely in a net below.</p>

    Video_2_Empathic Neural Responses Predict Group Allegiance.MP4

    <p>Watching another person in pain activates brain areas involved in the sensation of our own pain. Importantly, this neural mirroring is not constant; rather, it is modulated by our beliefs about their intentions, circumstances, and group allegiances. We investigated if the neural empathic response is modulated by minimally-differentiating information (e.g., a simple text label indicating another's religious belief), and if neural activity changes predict ingroups and outgroups across independent paradigms. We found that the empathic response was larger when participants viewed a painful event occurring to a hand labeled with their own religion (ingroup) than to a hand labeled with a different religion (outgroup). Counterintuitively, the magnitude of this bias correlated positively with the magnitude of participants' self-reported empathy. A multivariate classifier, using mean activity in empathy-related brain regions as features, discriminated ingroup from outgroup with 72% accuracy; the classifier's confidence correlated with belief certainty. This classifier generalized successfully to validation experiments in which the ingroup condition was based on an arbitrary group assignment. Empathy networks thus allow for the classification of long-held, newly-modified and arbitrarily-formed ingroups and outgroups. This is the first report of a single machine learning model on neural activation that generalizes to multiple representations of ingroup and outgroup. The current findings may prove useful as an objective diagnostic tool to measure the magnitude of one's group affiliations, and the effectiveness of interventions to reduce ingroup biases.</p>
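The abstract's classifier takes mean activity in empathy-related regions as per-trial features and reports a graded confidence. Its general shape can be sketched as a minimal logistic-regression classifier trained by stochastic gradient descent; the features, labels, and hyperparameters below are hypothetical, not the study's actual model:

```python
import math

def _sigmoid(z):
    z = max(min(z, 30.0), -30.0)  # clamp for numerical safety
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.1, n_epochs=2000):
    """Fit logistic regression by SGD.

    X rows: mean ROI activations for one trial (illustrative features);
    y: 1 for ingroup trials, 0 for outgroup trials (illustrative coding).
    """
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(n_epochs):
        for xi, yi in zip(X, y):
            err = _sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def ingroup_confidence(w, b, x):
    """Predicted probability of the ingroup label; >0.5 classifies ingroup."""
    return _sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
```

A probabilistic output of this kind is what makes the abstract's confidence analysis possible: the same number that drives the ingroup/outgroup decision can be correlated with participants' belief certainty.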

    Data_Sheet_1_Empathic Neural Responses Predict Group Allegiance.pdf
