12 research outputs found
On the context-free ambiguity of emoji
Due to their pictographic nature, emojis come with baked-in, grounded semantics. Although this makes emojis promising candidates for new forms of more accessible communication, it is still unknown to what degree humans agree on the inherent meaning of emojis when encountering them outside of concrete textual contexts. To bridge this gap, we collected a crowdsourced dataset (made publicly available) of one-word descriptions for 1,289 emojis presented to participants with no surrounding text. The emojis and their interpretations were then examined for ambiguity. We find that, with 30 annotations per emoji, 16 emojis (1.2%) are completely unambiguous, whereas 55 emojis (4.3%) are so ambiguous that the variation in their descriptions is as high as that in randomly chosen descriptions. Most emojis lie between these two extremes. Furthermore, investigating the ambiguity of different types of emojis, we find that emojis representing symbols from established, yet not cross-culturally familiar code books (e.g., zodiac signs, Chinese characters) are most ambiguous. We conclude by discussing design implications.
Understanding the Role of Nonverbal Tokens in the Spread of Online Information
Individuals and society continue to suffer as the fake-news infodemic continues unabated. Current research has focused largely on the verbal part (plain text) of fake news, while the nuances of nonverbal communication (emojis and other semiotic tokens) remain largely understudied. In this work, we explore the relationship between fake news and emojis through two studies. The first study found that information with emojis is retweeted 1.28 times more and liked 1.41 times more than information without them. Additionally, our research finds that tweets with emojis are more common in fake news (49%) than in true news (33%), and that emojis are more popular with fake news than with true news. In our second study, we conducted an online experiment with true and fake news (N=99) to understand how the functional usage (replace/emphasize) of emoji affects the spread of information. We find that when an emoji replaces a verbal token, it is liked less (p < 0.05).
“Blissfully Happy” or “Ready to Fight”: Varying Interpretations of Emoji
Emoji are commonly used in modern text communication. However, as graphics with nuanced details, emoji may be open to interpretation. Emoji also render differently on different viewing platforms (e.g., Apple’s iPhone vs. Google’s Nexus phone), potentially leading to communication errors. We explore whether emoji renderings, or differences in rendering across platforms, give rise to diverse interpretations of emoji. Through an online survey, we solicit people’s interpretations of a sample of the most popular emoji characters, each rendered for multiple platforms. We analyze the variance in interpretation of the emoji, in terms of both sentiment and semantics, quantifying which emoji are most (and least) likely to be misinterpreted. In cases in which participants rated the same emoji rendering, they disagreed on whether the sentiment was positive, neutral, or negative 25% of the time. When considering renderings across platforms, these disagreements only increase. Overall, we find significant potential for miscommunication, both for individual emoji renderings and for different emoji renderings across platforms.
Using methods across generations: researcher reflections from a research project involving young people and their parents
In recent years, geographical research has seen a trend towards ‘child-friendly’ or ‘young people-friendly’ research methods, often involving creativity and participation. Meanwhile, traditional methods such as interviews and focus groups continue to dominate research with adult participants. This paper draws and reflects on fieldnotes documented during a study which used participatory design workshops with activity-based methods to contemporaneously, but separately, engage with young people with Adolescent Idiopathic Scoliosis (AIS) and their parents. This paper contributes to the body of literature concerned with intergenerational practice in children’s geographies and geographical work more broadly. It does so not by focusing on intergenerational perspectives of the research topic, but by teasing out intergenerational engagement in research that used the same methods across generations (with young people and their parents). Finding that participants across generations engaged with the activities with similar depth and commitment, we argue for a loosening of the artificial packaging of young people-friendly and adult-oriented methods.
Individual differences in emoji comprehension: Gender, age, and culture
Emoji are an important substitute for non-verbal cues (such as facial expressions) in online written communication. So far, however, little is known about individual differences in how they are perceived. In the current study, we examined the influence of gender, age, and culture on emoji comprehension. Specifically, a sample of 523 participants across the UK and China completed an emoji classification task. In this task, they were presented with a series of emoji, each representing one of six facial emotional expressions, across four commonly used platforms (Apple, Android, WeChat, and Windows). Their task was to choose from one of six labels (happy, sad, angry, surprised, fearful, disgusted) which emotion was represented by each emoji. Results showed that all factors (age, gender, and culture) had a significant impact on how emoji were classified by participants. This has important implications when considering emoji use, for example, in conversations with partners from different cultures.
Creating emoji lexica from unsupervised sentiment analysis of their descriptions
Online media, such as blogs and social networking sites, generate massive volumes of unstructured data of great interest for analyzing the opinions and sentiments of individuals and organizations. Novel approaches beyond Natural Language Processing are necessary to quantify these opinions with polarity metrics. So far, the sentiment expressed by emojis has received little attention. The use of these symbols, however, has boomed in the past four years. About twenty billion are typed on Twitter nowadays, and new emojis keep appearing in each new Unicode version, making them increasingly relevant to sentiment analysis tasks. This has motivated us to propose a novel approach to predict the sentiments expressed by emojis in online textual messages, such as tweets, that does not require human effort to manually annotate data and saves valuable time for other analysis tasks. For this purpose, we automatically constructed a novel emoji sentiment lexicon using an unsupervised sentiment analysis system based on the definitions given by emoji creators in Emojipedia. Additionally, we automatically created lexicon variants by also considering the sentiment distribution of the informal texts accompanying emojis. All these lexica are evaluated and compared regarding the improvement obtained by including them in sentiment analysis of the annotated datasets provided by Kralj Novak, Smailovic, Sluban and Mozetic (2015). The results confirm the competitiveness of our approach. Agencia Estatal de Investigación | Ref. TEC2016-76465-C2-2-R; Xunta de Galicia | Ref. GRC2014/046; Xunta de Galicia | Ref. ED341D R2016/01
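The core idea described in the abstract above, deriving an emoji's sentiment from the text of its description rather than from human annotation, can be sketched as follows. This is a minimal toy illustration, not the authors' system: the word lists, descriptions, and scoring function are hypothetical placeholders standing in for a real unsupervised sentiment analyzer applied to Emojipedia definitions.

```python
import re

# Toy polar word lists; a real system would use a full sentiment
# analyzer rather than hand-picked vocabularies.
POSITIVE = {"happy", "love", "joy", "smiling", "celebration"}
NEGATIVE = {"angry", "sad", "crying", "fear", "disgust"}

def description_polarity(description: str) -> float:
    """Score a description in [-1, 1] by counting polar words."""
    tokens = re.findall(r"\w+", description.lower())
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def build_lexicon(descriptions: dict[str, str]) -> dict[str, float]:
    """Map each emoji to the polarity of its textual description."""
    return {e: description_polarity(d) for e, d in descriptions.items()}

# Illustrative descriptions (not taken from Emojipedia).
lexicon = build_lexicon({
    "😂": "a face with tears of joy, smiling broadly",
    "😡": "an angry red face",
})
```

The same pipeline extends naturally to the lexicon variants mentioned in the abstract: instead of scoring only the curated definition, one would also score the informal texts observed alongside each emoji and combine the distributions.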