    Neuroprotective Properties of Topiramate in the Lithium-Pilocarpine Model of Epilepsy

    ABSTRACT: The lithium-pilocarpine model reproduces the main characteristics of human temporal lobe epilepsy. After status epilepticus (SE), rats exhibit a latent seizure-free phase characterized by the development of extensive damage in limbic areas and the subsequent occurrence of spontaneous recurrent seizures. The neuroprotective and antiepileptogenic effects of topiramate were investigated in this model. SE was induced in adult male rats by LiCl (3 mEq/kg) followed 20 h later by pilocarpine (25 mg/kg). Topiramate (10, 30, or 60 mg/kg) was injected at 1 and 10 h of SE. Injections were repeated twice a day for six additional days. Another group received two injections of diazepam on the day of SE and of vehicle for 6 days. Neuronal damage was assessed 14 days after SE by cell counting on thionin-stained sections. The occurrence of spontaneous recurrent seizures (SRS) was video-recorded for 10 h per day in other groups of rats. In diazepam-treated rats, the number of neurons was dramatically reduced after SE in all subregions of the hippocampus and in layers II-IV of the ventral cortices. At all doses, topiramate induced 24 to 30% neuroprotection in layer CA1 of the hippocampus (p < 0.05). In CA3b, the 30-mg/kg dose prevented neuronal death. All rats subjected to SE became epileptic. The latency to SRS (14-17 days) and their frequency were similar in topiramate- and diazepam-treated rats. The high mortality in the 30-mg/kg topiramate group (84%) was possibly the result of an interaction between lithium and topiramate. In conclusion, topiramate displayed neuroprotective properties only in CA1 and CA3, which were not sufficient to prevent epileptogenesis.

    Seeing Emotion with Your Ears: Emotional Prosody Implicitly Guides Visual Attention to Faces

    Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality), which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior toward facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally inflected pseudo-utterance ("Someone migged the pazing") uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the faces they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0-1250 ms], [1250-2500 ms], [2500-5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with the greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.

    Implicit and Explicit Object Recognition at Very Large Visual Eccentricities: No Improvement after Loss of Central Vision

    Little is known about the ability of human observers to process objects in the far periphery of their visual field, and nothing about how this ability evolves after central vision loss. We investigated implicit and explicit recognition at two large visual eccentricities. Pictures of objects were centred at 30° or 50° eccentricity. Implicit recognition was tested through a priming paradigm. Participants (normally sighted observers and people with 10-20 years of central vision loss) categorized pictures as animal/transport in both a study phase (Block 1) and a test phase (Block 2). In the explicit recognition task, participants decided for each picture presented in Block 2 whether it had been displayed in Block 1 ("yes"/"no"). Both visual (identical) and conceptual/lexical (same-name) priming occurred at 30° and at 50°. Explicit recognition was observed only at 30°. In people with central vision loss, testing was performed only at 50° eccentricity. The pattern of results was similar to that of normally sighted observers, but overall performance was lower. The results suggest that vision at large eccentricity is mainly based on nonconscious coarse representations. Moreover, after 10-20 years of central vision loss, no evidence was found for an increased ability to use peripheral information in object recognition.