23 research outputs found

    Who Shares and Comments on News?: A Cross-National Comparative Analysis of Online and Social Media Participation

    In this article, we present a cross-national comparative analysis of which online news users in practice engage with the participatory potential for sharing and commenting on news afforded by interactive features on news websites and social media technologies, across a strategic sample of six different countries. Based on data from the 2016 Reuters Institute Digital News Report, and controlling for a range of factors, we find that (1) people who use social media for news, and who use a high number of different social media platforms, are also more likely to engage actively with news outside social media by commenting on news sites and sharing news via email; (2) political partisans on both sides are more likely to engage in sharing and commenting, particularly on news stories in social media; and (3) people with a high interest in hard news are more likely to comment on news on both news sites and social media and to share stories via social media (and people with a high interest in any kind of news [hard or soft] are more likely to share stories via email). Our analysis suggests that the online environment reinforces some long-standing inequalities in participation while countering others. The findings indicate a self-reinforcing positive spiral, in which the already motivated are more likely in practice to engage with the potential for participation offered by digital media, and a negative spiral, in which those who are less engaged participate less.

    Auditory event-related potentials

    Auditory event-related potentials are electric potentials (AERP, AEP) and magnetic fields (AEF) generated by the synchronous activity of large neural populations in the brain, time-locked to some actual or expected sound event.

    The superiority in voice processing of the blind arises from neural plasticity at sensory processing stages.

    Blind people rely much more on voices than sighted individuals do when identifying other people. Previous research has suggested faster processing of auditory input in blind individuals than in sighted controls, and enhanced activation of temporal cortical regions during voice processing. The present study used event-related potentials (ERPs) to single out the sub-processes of auditory person identification that change and allow for superior voice processing after congenital blindness. A priming paradigm was employed in which two successive voices (S1 and S2) of either the same (50% of the trials) or different actors were presented. Congenitally blind and matched sighted participants made an old-young decision on the S2. During the pre-experimental familiarization with the stimuli, congenitally blind individuals showed faster learning rates than sighted controls. Reaction times were shorter in person-congruent trials than in person-incongruent trials in both groups. ERPs to S2 stimuli in person-incongruent as compared to person-congruent trials were significantly enhanced at early processing stages (100-160 ms) in congenitally blind participants only. A later negative ERP effect (>200 ms) was found in both groups. The scalp topographies of the experimental effects were characterized by a central and parietal distribution in the sighted but a more posterior distribution in the congenitally blind. These results provide evidence for an improvement of early voice processing stages and a reorganization of the person identification system as a neural correlate of compensatory behavioral improvements following congenital blindness.
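The condition contrast reported in this and the following abstracts (person-incongruent minus person-congruent ERPs, quantified as a mean amplitude in a time window such as 100-160 ms) follows a standard analysis pattern. The sketch below illustrates that pattern on simulated single-channel data; the sampling rate, epoch timing, array names, and data are illustrative assumptions, not details taken from the studies.

```python
import numpy as np

FS = 500            # sampling rate in Hz (assumed)
EPOCH_START = -0.1  # epochs assumed to start 100 ms before S2 onset

def mean_amplitude(epochs, t_start, t_end, fs=FS, epoch_start=EPOCH_START):
    """Average epochs (trials x samples) into an ERP, then return the
    mean amplitude within the [t_start, t_end] window (times in seconds)."""
    erp = epochs.mean(axis=0)                       # trial-averaged ERP
    i0 = int(round((t_start - epoch_start) * fs))   # window start sample
    i1 = int(round((t_end - epoch_start) * fs))     # window end sample
    return erp[i0:i1].mean()

# Simulated data: 40 trials x 300 samples (-100 to +500 ms epochs)
rng = np.random.default_rng(0)
congruent = rng.normal(0.0, 1.0, size=(40, 300))
incongruent = rng.normal(0.0, 1.0, size=(40, 300))

# Person Match effect = incongruent minus congruent, 100-160 ms window
effect = (mean_amplitude(incongruent, 0.100, 0.160)
          - mean_amplitude(congruent, 0.100, 0.160))
print(f"mean-amplitude effect in 100-160 ms window: {effect:.3f}")
```

In practice such per-participant effect values would then be entered into a group-level statistical test; here the simulated noise yields an effect near zero.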

    Neural plasticity of voice processing: Evidence from event-related potentials in late-onset blind and sighted individuals.

    PURPOSE: Intra- and crossmodal neuroplasticity have been reported to underlie superior voice processing skills in congenitally blind individuals. The present study used event-related potentials (ERPs) to test whether such compensatory plasticity is limited to the developing brain. METHODS: Late blind individuals were compared to sighted controls in their ability to identify human voices. A priming paradigm was employed in which two successive voices (S1, S2) of the same (person-congruent) or different speakers (person-incongruent) were presented. Participants made an old-young decision on the S2. RESULTS: In both groups, ERPs to the auditory S2 were more negative in person-incongruent than in person-congruent trials between 200 and 300 ms. A topographic analysis suggested a more posteriorly shifted distribution of the Person Match effect (person-incongruent minus person-congruent trials) in late blind individuals compared to sighted controls. CONCLUSION: In contrast to congenitally blind individuals, late blind individuals did not show an early Person Match effect in the time range of the N1, suggesting that crossmodal compensation is mediated by later processing steps rather than by changes at early perceptual levels.

    Crossmodal interaction of facial and vocal person identity information: an event-related potential study.

    Hearing a voice and seeing a face are essential parts of person identification and social interaction. It has been suggested that the two types of information interact not only at late processing stages but already at the level of perceptual encoding (<200 ms). The present study analysed when visual and auditory representations of person identity modulate the processing of voices. In unimodal trials, two successive voices (S1-S2) of the same or of two different speakers were presented. In the crossmodal condition, the S1 consisted of the face of the same or a different person with respect to the following voice stimulus. Participants had to decide whether the voice probe (S2) was from an elderly or a young person. Reaction times to the S2 were shorter when these stimuli were person-congruent, in both the uni- and crossmodal conditions. ERPs recorded to the person-incongruent as compared to the person-congruent trials (S2) were enhanced at early (100-140 ms) and later processing stages (270-530 ms) in the crossmodal condition. A similar later negative ERP effect (270-530 ms) was found in the unimodal condition as well. These results suggest that identity information conveyed by a face is capable of modulating the sensory processing of voice stimuli.

    Crossmodal plasticity in the fusiform gyrus of late blind individuals during voice recognition.

    Blind individuals are trained in identifying other people through their voices. In congenitally blind adults, the anterior fusiform gyrus has been shown to be active during voice recognition, and such crossmodal changes have been associated with a superiority of blind adults in voice perception. The key question of the present functional magnetic resonance imaging (fMRI) study was whether visual deprivation that occurs in adulthood is followed by similar adaptive changes of the voice identification system. Late blind individuals and matched sighted participants were tested in a priming paradigm in which two voice stimuli were presented in succession. The prime (S1) and the target (S2) were either from the same speaker (person-congruent voices) or from two different speakers (person-incongruent voices). Participants had to classify the S2 as coming from either an old or a young person. Only in late blind individuals, but not in matched sighted controls, was activation in the anterior fusiform gyrus modulated by voice identity: late blind volunteers showed an increase of the BOLD signal in response to person-incongruent compared with person-congruent trials. These results suggest that the fusiform gyrus adapts to input of a new modality even in the mature brain and thus demonstrate an adult type of crossmodal plasticity.