3 research outputs found

    Selfies of Twitter Data Stream through the Lens of Information Theory: A Comparative Case Study of Tweet-trails with Healthcare Hashtags

    Little research in information systems has examined users' choices among different components when composing a tweet through the analytical lens of information theory. This study employs a comparative case study approach to examine the use of medical-terminology versus lay-language hashtags in tweet-trails and (1) introduces a novel H(x) index to reveal the complexity of the statistical structure and the variety of the composition of a tweet-trail, (2) applies radar graphs and scatter plots as intuitive data visualization aids, and (3) proposes a methodological framework for structural analysis of the Twitter data stream as a supplemental tool for profile analysis of Twitter users and content analysis of tweets. This systematic framework can unveil patterns in the structure of tweet-trails and provide quick, preliminary snapshots (selfies) of the Twitter data stream because it is an automatic, objective approach that requires no human intervention.
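    The abstract does not give the exact definition of the H(x) index, but an index grounded in information theory would plausibly build on Shannon entropy over the distribution of tweet components. A minimal sketch, assuming a hypothetical representation where each tweet in a trail is reduced to its component types (text, hashtag, mention, URL):

    ```python
    import math
    from collections import Counter

    def shannon_entropy(components):
        """Shannon entropy H(x) in bits over the empirical distribution
        of component types in a tweet-trail."""
        counts = Counter(components)
        total = sum(counts.values())
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    # Hypothetical tweet-trail flattened into component-type labels.
    trail = ["text", "hashtag", "hashtag", "mention", "text", "url"]
    print(round(shannon_entropy(trail), 3))  # higher value = more varied composition
    ```

    A trail composed of a single component type yields H(x) = 0, while a trail whose components are spread evenly across many types yields higher entropy, matching the abstract's notion of "variety in the composition."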

    Windows into Sensory Integration and Rates in Language Processing: Insights from Signed and Spoken Languages

    This dissertation explores the hypothesis that language processing proceeds in "windows" that correspond to representational units, where sensory signals are integrated according to time-scales that correspond to the rate of the input. To investigate universal mechanisms, a comparison of signed and spoken languages is necessary. Underlying the seemingly effortless process of language comprehension is the perceiver's knowledge about the rate at which linguistic form and meaning unfold in time and the ability to adapt to variations in the input. The vast body of work in this area has focused on speech perception, where the goal is to determine how linguistic information is recovered from acoustic signals. Testing some of these theories in the visual processing of American Sign Language (ASL) provides a unique opportunity to better understand how sign languages are processed and which aspects of speech perception models are in fact about language perception across modalities. The first part of the dissertation presents three psychophysical experiments investigating temporal integration windows in sign language perception by testing the intelligibility of locally time-reversed sentences. The findings demonstrate the contribution of modality to the time-scales of these windows, where signing is successfully integrated over longer durations (~250-300 ms) than speech (~50-60 ms), while also pointing to modality-independent mechanisms, where integration occurs over durations that correspond to the size of linguistic units. The second part of the dissertation focuses on production rates in sentences taken from natural conversations in English, Korean, and ASL. Data from word, sign, morpheme, and syllable rates suggest that while the rate of words and signs can vary from language to language, the relationship between the rate of syllables and morphemes is relatively consistent among these typologically diverse languages.
The results from rates in ASL also complement the findings of the perception experiments by confirming that the time-scales at which phonological units fluctuate in production match the temporal integration windows in perception. These results are consistent with the hypothesis that there are modality-independent time pressures on language processing; the discussion synthesizes converging findings from other domains of research and proposes ideas for future investigations.
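    The local time-reversal manipulation described above can be sketched concretely: the signal is cut into consecutive fixed-duration windows and the samples inside each window are reversed, while the windows themselves stay in order. Intelligibility typically degrades as the window grows beyond the integration time-scale. A minimal sketch (function name and parameters are illustrative, not from the dissertation):

    ```python
    import numpy as np

    def locally_reverse(signal, fs, window_ms):
        """Reverse the sample order inside consecutive windows of window_ms
        milliseconds, keeping the windows themselves in original order."""
        win = max(1, int(fs * window_ms / 1000))  # window length in samples
        out = signal.copy()
        for start in range(0, len(signal), win):
            out[start:start + win] = signal[start:start + win][::-1]
        return out

    # Toy example: 10 samples at 1000 Hz with 5 ms windows -> two
    # 5-sample segments, each reversed in place.
    x = np.arange(10)
    print(locally_reverse(x, fs=1000, window_ms=5))  # [4 3 2 1 0 9 8 7 6 5]
    ```

    With a window spanning the whole signal this reduces to full time reversal; with a one-sample window it returns the signal unchanged, which is why intelligibility studies sweep the window duration between these extremes.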