20 research outputs found

    Who says what about the most-discussed articles of Altmetric?

    In altmetrics, tweets are considered important potential indicators of the immediate social impact of scholarly articles. However, it is still unclear to what extent Twitter captures actual scholarly impact. It is therefore necessary to investigate comprehensively both the people who mention the articles and the attitudes expressed in the content of their tweets. In this paper, we combine different indicators to identify opinion leaders in the spread of the articles, and use sentiment analysis to quantify the sentiment polarity of tweets. Altmetrics should highlight the positive role of scientific research results for the public, which is more valuable than simple numbers.
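
    As a rough sketch of the two steps this abstract combines, the snippet below scores tweet polarity with the off-the-shelf VADER lexicon and ranks tweeters with a toy opinion-leader score; the weights and field names are illustrative assumptions, not the authors' method.

```python
# A minimal sketch of sentiment scoring plus opinion-leader ranking.
# VADER stands in for whatever sentiment tool the authors used; the
# opinion-leader weighting below is a made-up illustration.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def tweet_polarity(text: str) -> float:
    """Compound score in [-1, 1]; > 0 positive, < 0 negative."""
    return analyzer.polarity_scores(text)["compound"]

def opinion_leader_score(followers: int, retweets: int, mentions: int) -> float:
    # Hypothetical combination of reach and engagement indicators.
    return 0.5 * followers + 0.3 * retweets + 0.2 * mentions

tweets = [
    {"user": "a", "text": "Great paper on open peer review!",
     "followers": 12000, "retweets": 40, "mentions": 3},
    {"user": "b", "text": "The methodology looks shaky to me.",
     "followers": 300, "retweets": 2, "mentions": 0},
]
for t in tweets:
    print(t["user"], tweet_polarity(t["text"]),
          opinion_leader_score(t["followers"], t["retweets"], t["mentions"]))
```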

    Tweeting Library and Information Science: a socio-topical distance analysis

    The aim of this paper is to demonstrate how topical distance and social distance can provide meaningful results when analysing scholars’ tweets linking to scholarly publications. To do so, we analyse the social and topical distance between tweeted information science papers and their academic tweeters. This allows us to characterize the tweets of scientific papers, the tweeting behaviour of scholars, and the relationship between tweets and citations.
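
    One plausible way to operationalize the two distances paired above is sketched below: topical distance as cosine distance between TF-IDF text profiles, and social distance as path length in a co-authorship graph. The toy data and this specific operationalization are assumptions for illustration; the paper's own measures may differ.

```python
# Topical distance: cosine distance between TF-IDF profiles of the
# tweeted paper and the tweeter's publications. Social distance:
# shortest co-authorship path between tweeter and paper author.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

paper_abstract = "altmetrics twitter citation analysis of information science"
tweeter_profile = "bibliometrics citation impact indicators of science"

tfidf = TfidfVectorizer().fit_transform([paper_abstract, tweeter_profile])
topical_distance = 1 - cosine_similarity(tfidf[0], tfidf[1])[0, 0]

# Co-authorship graph: edges link scholars who have published together.
G = nx.Graph([("tweeter", "x"), ("x", "paper_author")])
social_distance = nx.shortest_path_length(G, "tweeter", "paper_author")

print(f"topical distance: {topical_distance:.2f}, social distance: {social_distance}")
```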

    Analysis of Tweets Mentioning Scholarly Works from an Institutional Repository

    Altmetrics derived from Twitter have potential benefits for institutional repository (IR) stakeholders (faculty, students, administrators, and academic libraries) when metrics aggregators (Altmetric, Plum Analytics) are integrated with IRs. There is limited research on tweets mentioning works in IRs and on how the results affect IR stakeholders, specifically libraries. To address this gap in the literature, the author conducted a content analysis of tweets tracked by a metrics aggregator (PlumX Metrics) in a Digital Commons IR. The study found that the majority of tweets were neutral in attitude, intended for a general audience, included no hashtags, and were written by users unaffiliated with the works. The results are similar to findings from other studies, including low numbers of tweeted works, high numbers of attitudinally neutral tweets, and evidence of self-tweets. The discussion addresses these results in relation to the value of tweets and suggests improvements to Twitter metrics based on IR stakeholders’ needs.
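
    A minimal sketch of the kind of coding such a content analysis implies, tallying hashtag use and self-tweets over tracked tweets; the field names and the author-matching rule are illustrative, not the author's codebook.

```python
# Tally two of the coded attributes mentioned in the abstract: whether a
# tweet carries hashtags, and whether it is a self-tweet (the tweeter is
# among the work's authors). Toy data throughout.
from collections import Counter

tweets = [
    {"text": "New IR study out #openaccess", "user": "jsmith",
     "work_authors": {"jsmith", "alee"}},
    {"text": "Interesting repository paper", "user": "reader42",
     "work_authors": {"jsmith", "alee"}},
]

tally = Counter()
for t in tweets:
    tally["has_hashtag" if "#" in t["text"] else "no_hashtag"] += 1
    tally["self_tweet" if t["user"] in t["work_authors"] else "unaffiliated"] += 1

print(dict(tally))
```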

    Open Source Intelligence for Cybersecurity Events via Twitter Data

    Open-Source Intelligence (OSINT) is widely regarded as a necessary component of cybersecurity intelligence gathering for securing network systems. With the advancement of artificial intelligence (AI) and the increasing use of social media such as Twitter, we have a unique opportunity to obtain and aggregate information from social media. In this study, we propose an AI-based scheme capable of automatically pulling information from Twitter, filtering out security-irrelevant tweets, performing natural language analysis to correlate the tweets about each cybersecurity event (e.g., a malware campaign), and validating the information. This scheme has many applications, such as giving security operators insight into ongoing events and helping them prioritize which vulnerabilities to address. To illustrate possible uses, we present three case studies demonstrating the event discovery and investigation processes. We also examine the potential of OSINT for identifying the network protocols associated with specific events, which can aid mitigation by informing operators whether a vulnerability is exploitable given their system's network configuration.
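
    Two of the pipeline stages named above, filtering security-irrelevant tweets and correlating tweets per event, might look like the toy sketch below, with a keyword filter and CVE identifiers as correlation keys; the authors' AI-based scheme is far richer than this.

```python
# Toy OSINT pipeline: keyword-based relevance filter, then grouping of
# relevant tweets by CVE identifier as a stand-in for event correlation.
import re
from collections import defaultdict

SECURITY_TERMS = {"malware", "vulnerability", "exploit", "cve", "ransomware"}
CVE_RE = re.compile(r"CVE-\d{4}-\d{4,}", re.IGNORECASE)

def is_security_relevant(text: str) -> bool:
    words = set(text.lower().split())
    return bool(words & SECURITY_TERMS) or bool(CVE_RE.search(text))

def correlate_by_event(tweets):
    events = defaultdict(list)
    for t in tweets:
        for cve in CVE_RE.findall(t):
            events[cve.upper()].append(t)
    return events

tweets = [
    "New ransomware campaign exploits CVE-2021-44228 in the wild",
    "Patch now: CVE-2021-44228 actively exploited",
    "My cat did something funny today",
]
relevant = [t for t in tweets if is_security_relevant(t)]
print(correlate_by_event(relevant))
```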

    Do you cite what you tweet? Investigating the relationship between tweeting and citing research articles

    The last decade of altmetrics research has demonstrated that altmetrics have a low to moderate correlation with citations, depending on the platform and the discipline, among other factors. Most past studies used academic works as their unit of analysis to determine whether the attention they received on Twitter was a good predictor of academic engagement. Our work revisits the relationship between tweets and citations with the tweet itself as the unit of analysis, asking whether, at the individual level, the act of tweeting an academic work can shed light on the likelihood of citing that same work. We model this relationship by considering the research activity of the tweeter and their relationship to the tweeted work. Results show that tweeters are more likely to cite works affiliated with their own institution, works published in journals in which they have also published, and works on which they hold authorship. The older a tweeter's academic age, the less likely they are to cite what they tweet, though there is a positive relationship between citations and the number of works and references they have accumulated over time.
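
    A hedged sketch of the modelling setup described: a logistic regression predicting whether a tweeter cites the tweeted work from features of the tweeter-work relationship. The feature set and the synthetic data are illustrative assumptions, not the study's actual model specification.

```python
# Predict citing behaviour from tweeter-work relationship features.
# Columns: same_institution, published_in_same_journal, is_author,
# academic_age_years, n_publications. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([
    [1, 1, 1, 3, 12],
    [0, 0, 0, 20, 150],
    [1, 0, 0, 5, 30],
    [0, 1, 0, 12, 80],
    [0, 0, 0, 25, 10],
    [1, 1, 0, 2, 5],
])
y = np.array([1, 0, 1, 1, 0, 1])  # 1 = tweeter later cited the tweeted work

model = LogisticRegression().fit(X, y)
print(dict(zip(
    ["same_inst", "same_journal", "is_author", "academic_age", "n_pubs"],
    model.coef_[0].round(2),
)))
```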

    Predicting literature’s early impact with sentiment analysis in Twitter

    Traditional bibliometric techniques gauge the impact of research through quantitative indices based on citation data. However, because of the lag inherent in citation-based indices, it may take years to comprehend the full impact of an article. This paper seeks to measure the early impact of research articles through the sentiments expressed in tweets about them. We posit that articles cited in positive or neutral tweets have a more significant impact than those not cited at all or cited in negative tweets. We used the SentiStrength tool and improved it by adding new opinion-bearing words from scientific domains to its sentiment lexicon. We then classified the sentiment of 6,482,260 tweets linked to 1,083,535 publications covered by Altmetric.com. Using positive and negative tweets as independent variables and the citation count as the dependent variable, linear regression analysis showed a weak positive prediction of high citation counts across 16 broad Scopus disciplines. Adding a further indicator to the regression model, the number of unique Twitter users, improved the adjusted R-squared of the regression in several disciplines. Overall, an encouraging positive correlation between tweet sentiments and citation counts suggests that Twitter-based opinion may be exploited as a complementary predictor of literature’s early impact.
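
    The regression design reads as follows in code: citation counts regressed on positive and negative tweet counts, then again with the number of unique Twitter users added, comparing adjusted R-squared. The data below are synthetic, so the numbers only illustrate the comparison, not the paper's results.

```python
# Compare the base model (positive/negative tweet counts) with the
# extended model (plus unique tweeting users) via adjusted R^2.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
pos = rng.poisson(5, n)                       # positive tweets per paper
neg = rng.poisson(1, n)                       # negative tweets per paper
users = pos + neg - rng.binomial(pos + neg, 0.2)  # unique tweeting users
citations = 2 * pos - neg + 0.5 * users + rng.normal(0, 5, n)

base = sm.OLS(citations, sm.add_constant(np.column_stack([pos, neg]))).fit()
extended = sm.OLS(citations, sm.add_constant(np.column_stack([pos, neg, users]))).fit()
print(f"adjusted R^2: base={base.rsquared_adj:.3f}, "
      f"+unique users={extended.rsquared_adj:.3f}")
```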

    Sentiment analysis of tweets through Altmetrics: A machine learning approach

    The purpose of the study is to (a) contribute an annotated Altmetrics dataset spanning five disciplines, (b) undertake sentiment analysis using various machine learning and natural language processing algorithms, (c) identify the best-performing model and (d) provide a Python library for sentiment analysis of an Altmetrics dataset. First, the researchers gave a set of guidelines to two human annotators familiar with annotating tweets related to scientific literature. They labelled the sentiments, achieving an inter-annotator agreement (IAA) of 0.80 (Cohen’s Kappa). The same experiments were then run on two versions of the dataset: one with tweets in English and the other with tweets in 23 languages, including English. Using 6388 tweets about 300 papers indexed in Web of Science, the effectiveness of the machine learning and natural language processing models was measured against well-known sentiment analysis models, SentiStrength and Sentiment140, as baselines. Support Vector Machine with unigram features outperformed all other classifiers and baseline methods, with an accuracy of over 85%, followed by Logistic Regression at 83% and Naïve Bayes at 80%. The precision, recall and F1 scores for Support Vector Machine, Logistic Regression and Naïve Bayes were (0.89, 0.86, 0.86), (0.86, 0.83, 0.80) and (0.85, 0.81, 0.76), respectively.
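
    The reported comparison can be sketched with scikit-learn: unigram features feeding a Support Vector Machine, Logistic Regression and Naïve Bayes classifier. The six labelled tweets below stand in for the annotated Altmetrics data; this is an illustrative setup, not the study's exact configuration.

```python
# Three classifiers over identical unigram (bag-of-words) features,
# mirroring the comparison described in the abstract. Toy training data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "great paper, important results", "excellent study", "useful method",
    "flawed analysis", "weak evidence", "misleading conclusions",
]
labels = ["pos", "pos", "pos", "neg", "neg", "neg"]

for clf in (LinearSVC(), LogisticRegression(), MultinomialNB()):
    model = make_pipeline(CountVectorizer(ngram_range=(1, 1)), clf)
    model.fit(texts, labels)
    print(type(clf).__name__, model.predict(["a really great contribution"]))
```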