ANALYSIS OF IDIOMATIC EMOTION EXPRESSIONS DETECTED FROM ONLINE MOVIE REVIEWS
A large number of idiomatic emotion expressions in Korean are composed of certain nouns
of human body parts accompanied by selected predicates, which represent a "physiological
metonymy" of sentiment (Lakoff 1987, Ungerer & Schmid 1996). For instance, kasum-i ttwita
literally denotes a physiological reaction (i.e. one's heart beating) but can also represent an emotion
such as being thrilled to bits. We compared idiomatic emotion expressions used in English online movie
reviews with those observed in Korean, and noticed that nouns of body parts such as kasum
'heart', maum 'mind' or nwun 'eyes' emerge frequently in both languages, whereas ekkay
'shoulder', kancang 'intestines' or ppye 'bones' seem to be rather reserved for Korean emotion
expressions.
In this study, we extract idiomatic emotion expressions based on the 13 nouns of body parts
listed by Lim (2001) from Korean online movie reviews. For instance, nouns such as meli 'head', ip
'mouth' or simcang 'cardia' frequently constitute emotion expressions with
POSITIVE values, as in ip-ul tamwul-swu epsta 'be with open mouth (with delight)'. These
nouns hardly occur in NEGATIVE emotion expressions, which is not predictable from their semantic
features but reveals their lexical idiosyncrasy. The frequent emotion expressions observed in online
movie reviews will be analyzed and classified according to their semantic properties. We will show
what salient traits of Korean emotion expressions can be remarked in current online subjective
documents such as users' reviews, blogs or opinion texts.
The Role of Multiple Articulatory Channels of Sign-Supported Speech Revealed by Visual Processing
Purpose
The use of sign-supported speech (SSS) in the education of deaf students has been recently discussed in relation to its usefulness with deaf children using cochlear implants. To clarify the benefits of SSS for comprehension, 2 eye-tracking experiments aimed to detect the extent to which signs are actively processed in this mode of communication.
Method
Participants were 36 deaf adolescents, including cochlear implant users and native deaf signers. Experiment 1 attempted to shift observers' foveal attention to the linguistic source in SSS from which most information is extracted, lip movements or signs, by magnifying the face area, thus modifying lip movements perceptual accessibility (magnified condition), and by constraining the visual field to either the face or the sign through a moving window paradigm (gaze contingent condition). Experiment 2 aimed to explore the reliance on signs in SSS by occasionally producing a mismatch between sign and speech. Participants were required to concentrate upon the orally transmitted message.
Results
In Experiment 1, analyses revealed a greater number of fixations toward the signs and a reduction in accuracy in the gaze contingent condition across all participants. Fixations toward signs were also increased in the magnified condition. In Experiment 2, results indicated less accuracy in the mismatching condition across all participants. Participants looked more at the sign when it was inconsistent with speech.
Conclusions
All participants, even those with residual hearing, rely on signs when attending SSS, either peripherally or through overt attention, depending on the perceptual conditions.
Unión Europea, Grant Agreement 31674
Training of the pre-school blind child in India.
Thesis (Ed.M.)--Boston University
Human Sensation of Transcranial Electric Stimulation.
Noninvasive transcranial electric stimulation is increasingly being used as an advantageous therapy alternative that may activate deep tissues while avoiding drug side-effects. However, not only is there limited evidence for activation of deep tissues by transcranial electric stimulation, but its evoked human sensation is understudied and often dismissed as a placebo or secondary effect. By systematically characterizing the human sensation evoked by transcranial alternating-current stimulation, we observed not only stimulus frequency and electrode position dependencies specific to auditory and visual sensation but also a broader presence of somatic sensation, ranging from touch and vibration to pain and pressure. We found generally monotonic input-output functions at suprathreshold levels, and often multiple types of sensation occurring simultaneously in response to the same electric stimulation. We further used a recording circuit embedded in a cochlear implant to directly and objectively measure the amount of transcranial electric stimulation reaching the auditory nerve, a deep intracranial target located in the densest bone of the skull. We found an optimal configuration using an ear canal electrode and low-frequency (<300 Hz) sinusoids that delivered maximally ~1% of the transcranial current to the auditory nerve, which was sufficient to produce sound sensation even in deafened ears. Our results suggest that frequency resonance due to neuronal intrinsic electric properties needs to be explored for targeted deep brain stimulation and novel brain-computer interfaces.