
    Paralinguistic and Rhetorical Capabilities of Emojis in Marketing Communication

    Consumers and social media marketers have over 3,000 emojis at their fingertips. Despite the popularity of emojis on social media, marketing research on emojis remains limited. The marketing research on emojis that does exist focuses primarily on their emotional and reinforcement capabilities, a remnant of the limitations of their emoticon ancestors, and largely ignores the additional paralinguistic and rhetorical potential of emojis. In this dissertation, emojis are explored as a paralanguage, with a particular focus on the creation of meaning on social media (Essay 1), and as a full (Essay 2) and partial (Essay 3) substitute for text in marketing communication. Essay 1 is a conceptual piece that examines the perpetual evolution of emoji meaning on social media through the lens of symbolic interactionism and liquid consumption. Essay 2 looks at how consumers evaluate strings of emojis and shows that emoji-only communication has a negative (positive) effect on brand attitude via processing fluency (fun) when compared to the equivalent textual translation. Essay 3 focuses on emojis as partial substitutes for text in promotions on social media (e.g., “buy one get one” becomes “buy ☝ get ☝”). This essay demonstrates a positive effect of gesture emojis on promotion evaluation via heightened processing fluency, compared to object emojis. However, when the message includes haptic imagery, processing fluency and promotion evaluation are similar for gesture and object emojis. Overall, this dissertation explores the paralinguistic and rhetorical potential of emojis in marketing communication and provides insights to marketers who use emojis on social media.

    Smart detection of offensive words in social media using the soundex algorithm and permuterm index

    Offensive posts on social media that are inappropriate for a given age or level of maturity quite often reach non-adult rather than adult participants. The growing number of masked offensive words on social media is an ethically challenging problem, so there has been increasing interest in developing methods that can automatically detect posts containing such words. This study aimed to develop a method that can detect masked offensive words, in which partial alteration of a word may trick conventional monitoring systems when the word is posted on social media. The proposed method proceeds in a series of phases: a pre-processing phase, which includes filtering, tokenization, and stemming; an offensive-word extraction phase, which relies on the Soundex algorithm and a permuterm index; and a post-processing phase that classifies users’ posts in order to highlight offensive content. Accordingly, the method detects masked offensive words in written text and can thus prevent certain types of offensive words from being published. An evaluation of the proposed method indicates a 99% detection accuracy for offensive words.
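    As a rough illustration of the extraction phase described above, the Python sketch below matches post tokens against a small, hypothetical blocklist by comparing Soundex codes, so that misspelled or partially masked variants that still sound like a listed word are flagged. It covers only the Soundex-matching step; the paper's full pipeline also uses filtering, stemming, a permuterm index, and post-classification, and the blocklist entries here are placeholders.

```python
import re

def soundex(word: str) -> str:
    """Simplified Soundex: keep the first letter, map remaining letters to
    digit classes, collapse adjacent duplicates, pad/truncate to 4 chars."""
    codes = {}
    for letters, digit in [("BFPV", "1"), ("CGJKQSXZ", "2"), ("DT", "3"),
                           ("L", "4"), ("MN", "5"), ("R", "6")]:
        for ch in letters:
            codes[ch] = digit
    word = word.upper()
    result, prev = [], codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            result.append(code)
        prev = code
    return (word[0] + "".join(result) + "000")[:4]

# Hypothetical blocklist; a real system would load a curated lexicon.
BLOCKLIST = {"offensive", "abusive"}
BLOCK_CODES = {soundex(w): w for w in BLOCKLIST}

def flag_post(text: str) -> list[str]:
    """Return blocklist words phonetically matched by tokens in a post."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [BLOCK_CODES[soundex(t)] for t in tokens if soundex(t) in BLOCK_CODES]

print(flag_post("this ofensive post slips past exact-match filters"))  # ['offensive']
```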

    TMX markup: a challenge when adapting SMT to the localisation environment

    Translation memory (TM) plays an important role in localisation workflows and is used as an efficient and fundamental tool to carry out translation. In recent years, statistical machine translation (SMT) techniques have developed rapidly, and translation quality and speed have improved significantly as well. However, when applying SMT techniques to facilitate post-editing in the localisation industry, we need to adapt SMT to TM data, which is formatted with special markup. In this paper, we explore some issues that arise when adapting SMT to Symantec-formatted TM data. Three different methods are proposed to handle the Translation Memory eXchange (TMX) markup, and a comparative study is carried out between them. Furthermore, we also compare the TMX-based SMT systems with a customised SYSTRAN system through human evaluation and automatic evaluation metrics. The experimental results on the French–English language pair show that SMT can perform well using TMX as the input format, either during training or at runtime.
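    As one plausible way to handle the markup before training (a sketch of a strip-markup strategy only; the paper's three methods and the Symantec data are not reproduced here), the following Python snippet parses a TMX file and extracts plain source/target segment pairs, discarding inline markup elements such as <bpt>, <ept>, and <ph>. The language codes and file path are illustrative assumptions.

```python
import xml.etree.ElementTree as ET

XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def extract_pairs(tmx_path, src_lang="en", tgt_lang="fr"):
    """Extract plain-text source/target segment pairs from a TMX file,
    dropping inline markup by concatenating only the segment's text nodes."""
    tree = ET.parse(tmx_path)
    pairs = []
    for tu in tree.iter("tu"):
        segs = {}
        for tuv in tu.findall("tuv"):
            lang = (tuv.get(XML_LANG) or tuv.get("lang") or "").lower()
            seg = tuv.find("seg")
            if seg is not None:
                # itertext() walks the <seg> subtree and returns text only,
                # so <bpt>, <ept>, <ph> and similar inline tags are discarded.
                segs[lang[:2]] = "".join(seg.itertext()).strip()
        if src_lang in segs and tgt_lang in segs:
            pairs.append((segs[src_lang], segs[tgt_lang]))
    return pairs

# Usage (hypothetical file): bilingual segments ready for SMT training.
# for src, tgt in extract_pairs("memory.tmx"):
#     print(src, "|||", tgt)
```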

    MORA - an architecture and programming model for a resource efficient coarse grained reconfigurable processor

    This paper presents the architecture and implementation details of MORA, a novel coarse-grained reconfigurable processor for accelerating media processing applications. The MORA architecture comprises a 2-D array of such processors, delivering low-cost, high-throughput performance in media processing applications. A distinguishing feature of the MORA architecture is the co-design of the hardware architecture and a low-level programming language throughout the design cycle. Implementation details for a single MORA processor, along with a benchmark evaluation using a cycle-accurate simulator, are presented.
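    The abstract does not describe the MORA instruction set or its programming language, so the toy Python sketch below only illustrates the general idea behind a coarse-grained reconfigurable 2-D array: each processing element is configured with a single word-level operation and, on every simulated cycle, combines data from its west neighbour (or an external input in column 0) with its own registered output. All names and behaviour here are illustrative assumptions, not the actual MORA design.

```python
# Illustrative only: a generic coarse-grained reconfigurable 2-D array, not MORA itself.
class PE:
    """Processing element: one configured word-level operation and a registered output."""
    def __init__(self, op):
        self.op = op          # binary function applied each cycle
        self.out = 0          # registered output, visible to neighbours next cycle

def step(grid, west_inputs):
    """Advance the array one cycle; column 0 reads external inputs, other
    columns read the registered output of their west neighbour."""
    new = [[pe.op(west_inputs[r] if c == 0 else grid[r][c - 1].out, pe.out)
            for c, pe in enumerate(row)]
           for r, row in enumerate(grid)]
    for r, row in enumerate(grid):
        for c, pe in enumerate(row):
            pe.out = new[r][c]

def add(a, b): return a + b      # accumulate the incoming stream
def fwd(a, b): return a          # forward the west neighbour's value

# 2x2 array: first column accumulates per-row input streams, second column forwards them.
grid = [[PE(add), PE(fwd)], [PE(add), PE(fwd)]]
for sample in ([1, 2], [3, 4], [5, 6]):
    step(grid, sample)
print([[pe.out for pe in row] for row in grid])   # [[9, 4], [12, 6]]
```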

    Query and Output: Generating Words by Querying Distributed Word Representations for Paraphrase Generation

    Most recent approaches use the sequence-to-sequence model for paraphrase generation. The existing sequence-to-sequence model tends to memorize the words and patterns in the training dataset instead of learning the meaning of the words. Therefore, the generated sentences are often grammatically correct but semantically improper. In this work, we introduce a novel model based on the encoder-decoder framework, called Word Embedding Attention Network (WEAN). Our proposed model generates words by querying distributed word representations (i.e. neural word embeddings), with the aim of capturing the meaning of the corresponding words. Following previous work, we evaluate our model on two paraphrase-oriented tasks, namely text simplification and short text abstractive summarization. Experimental results show that our model outperforms the sequence-to-sequence baseline by 6.3 and 5.5 BLEU points on two English text simplification datasets, and by 5.7 ROUGE-2 F1 points on a Chinese summarization dataset. Moreover, our model achieves state-of-the-art performance on these three benchmark datasets.
    Comment: arXiv admin note: text overlap with arXiv:1710.0231
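    A minimal sketch of the querying idea described in the abstract, written in PyTorch with illustrative class and parameter names (an interpretation of the description, not the authors' reference implementation): the decoder state is projected into a query and scored against the shared word-embedding matrix, so the emitted word is the one whose embedding best answers the query.

```python
import torch
import torch.nn as nn

class EmbeddingQueryOutput(nn.Module):
    """Output layer that queries the word-embedding matrix instead of using a
    conventional vocabulary-sized softmax projection (illustrative sketch)."""
    def __init__(self, hidden_size: int, embed_dim: int, vocab_size: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)      # shared word embeddings
        self.query_proj = nn.Linear(hidden_size, embed_dim)   # decoder state -> query

    def forward(self, decoder_state: torch.Tensor) -> torch.Tensor:
        # decoder_state: (batch, hidden_size)
        query = self.query_proj(decoder_state)                # (batch, embed_dim)
        scores = query @ self.embed.weight.T                  # (batch, vocab_size)
        return torch.log_softmax(scores, dim=-1)              # log-probs over the vocabulary

# Usage: pick the word whose embedding best matches the decoder's query.
layer = EmbeddingQueryOutput(hidden_size=512, embed_dim=300, vocab_size=20000)
state = torch.randn(2, 512)
next_word = layer(state).argmax(dim=-1)                        # (batch,) word indices
```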