439 research outputs found

    Classification of socially generated medical data

    Get PDF
    The growth of online health communities, particularly those involving socially generated content, can provide considerable value for society. Participants can gain knowledge of medical information or interact with peers on medical forum platforms. However, the sheer volume of information so generated – and the consequent ‘noise’ associated with large data volumes – can create difficulties for information consumers. We propose a solution to this problem by applying high-level analytics to the data – primarily sentiment analysis, but also content and topic analysis – for accurate classification. We believe that such analysis can be of significant value to data users: identifying a particular aspect of an information space, determining the themes that predominate in a large dataset, and allowing people to summarize topics within a big dataset. In this thesis, we apply machine learning strategies to identify sentiments expressed in online medical forums that discuss Lyme Disease. As part of this process, we define a complete and relevant set of categories that can be used to characterize Lyme Disease discourse. We present a feature-based model that employs supervised learning algorithms and assess the feasibility and accuracy of this sentiment classification model. We further evaluate our model by assessing its ability to adapt to an online medical forum discussing a disease with similar characteristics, Lupus. The experimental results demonstrate the effectiveness of our approach. In many sentiment analysis applications, labelled training datasets are expensive to obtain, whereas unlabelled datasets are readily available. Therefore, we present an adaptation of a well-known semi-supervised learning technique, in which co-training is implemented by combining labelled and unlabelled data. Our results suggest that the model can learn even with limited labelled data. In addition, we investigate complementary analytic techniques – content and topic analysis – to make the best use of the data for various consumer groups. Within the work described in this thesis, the following research issues are addressed, specifically as applied to socially generated medical/health datasets:
    • When applying binary sentiment analysis to short-form text data (e.g. Twitter), could meta-level features improve classification performance?
    • When applying more complex multi-class sentiment analysis to long-form, content-rich text data, would meta-level features be a useful addition to more conventional features?
    • Can this multi-class analysis approach be generalised to other medical/health domains?
    • How would alternative classification strategies benefit different groups of information consumers?
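
    The co-training adaptation mentioned in this abstract can be illustrated with a minimal sketch. The sketch below assumes scikit-learn and dense NumPy feature matrices, and uses two hypothetical feature "views" (e.g. TF-IDF n-grams and meta-level features); the function and its parameters are illustrative, not the thesis's actual implementation.

```python
# Minimal co-training sketch, assuming two dense NumPy feature "views" per
# post (e.g. TF-IDF n-grams and meta-level features). Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(X1_lab, X2_lab, y_lab, X1_unlab, X2_unlab,
             rounds=10, per_round=20):
    """Grow the labelled set by letting two view-specific classifiers
    pseudo-label the unlabelled examples they are most confident about."""
    X1, X2, y = X1_lab, X2_lab, y_lab
    U1, U2 = X1_unlab, X2_unlab
    for _ in range(rounds):
        if len(U1) == 0:
            break
        c1 = LogisticRegression(max_iter=1000).fit(X1, y)
        c2 = LogisticRegression(max_iter=1000).fit(X2, y)
        # Each classifier nominates its most confident unlabelled examples.
        conf1 = c1.predict_proba(U1).max(axis=1)
        conf2 = c2.predict_proba(U2).max(axis=1)
        picks = np.unique(np.concatenate([np.argsort(-conf1)[:per_round],
                                          np.argsort(-conf2)[:per_round]]))
        # Adopt, for each pick, the label from the more confident view.
        labels = np.where(conf1[picks] >= conf2[picks],
                          c1.predict(U1[picks]), c2.predict(U2[picks]))
        X1 = np.vstack([X1, U1[picks]])
        X2 = np.vstack([X2, U2[picks]])
        y = np.concatenate([y, labels])
        keep = np.setdiff1d(np.arange(len(U1)), picks)
        U1, U2 = U1[keep], U2[keep]
    # Final classifiers trained on the expanded labelled set.
    return (LogisticRegression(max_iter=1000).fit(X1, y),
            LogisticRegression(max_iter=1000).fit(X2, y))
```

    At prediction time, the two views' probability estimates can simply be averaged, which is the usual way co-trained classifiers are combined.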

    Sentiment analysis of blogs by combining lexical knowledge with text classification

    Full text link

    A Biased Topic Modeling Approach for Case Control Study from Health Related Social Media Postings

    Get PDF
    Online social networks are the hubs of social activity in cyberspace, and using them to exchange knowledge, experiences, and opinions is common. In this work, an advanced topic modeling framework is designed to analyze complex longitudinal health information from social media with minimal human annotation. Adverse Drug Event and Reaction (ADR) information is extracted and automatically processed using a biased topic modeling method, which improves and extends existing topic modeling algorithms that incorporate background knowledge. With this approach, background knowledge such as ADR terms and other biomedical knowledge can be incorporated during the text mining process, and scores indicating the presence of ADRs are generated. A case control study was performed on a data set of Twitter timelines of women who announced their pregnancy; the goal of the study is to compare the ADR risk of medication usage from each medication category during pregnancy. In addition, to evaluate the predictive power of this approach, another important aspect of personalized medicine was addressed: the prediction of medication usage through the identification of risk groups. During the prediction process, the health information from a Twitter timeline, such as diseases, symptoms, treatments, and effects, is summarized by the topic modeling processes, and the summarization results are used for prediction. Dimension reduction and topic similarity measurement are integrated into this framework for timeline classification and prediction. This work could be applied to provide guidelines for FDA drug risk categories, a process currently based on laboratory results and reported cases. Finally, a multi-dimensional text data warehouse (MTD) is proposed to manage the output of the topic modeling, and some attempts have also been made to incorporate topic structure (ontology) into the MTD hierarchy. Results demonstrate that the proposed methods show promise and that this system represents a low-cost approach to drug-safety early warning.
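
    As a rough illustration of biasing topics with background knowledge, the sketch below fits a plain LDA model and then scores topics and documents by the probability mass assigned to a small ADR seed lexicon. It assumes scikit-learn; the corpus and lexicon are hypothetical, and this is a simplified stand-in for the dissertation's biased topic modeling method.

```python
# Sketch: score LDA topics and documents by the probability mass they put
# on ADR seed terms. A simplified stand-in for a biased topic model; the
# corpus and seed lexicon below are hypothetical.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "started sertraline last week and the nausea is awful",
    "great weather today, walked the dog in the park",
    "the new medication gives me headaches and dizziness",
]
adr_seeds = {"nausea", "headaches", "dizziness", "rash"}

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

vocab = vec.get_feature_names_out()
seed_idx = [i for i, w in enumerate(vocab) if w in adr_seeds]

# Normalised per-topic word distributions; each row sums to 1.
topic_word = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
adr_topic_score = topic_word[:, seed_idx].sum(axis=1)  # ADR mass per topic

# Per-document score: topic mixture weighted by each topic's ADR mass.
doc_topic = lda.transform(X)
print(doc_topic @ adr_topic_score)  # higher => more ADR-like content
```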

    NLP-Based Techniques for Cyber Threat Intelligence

    Full text link
    In the digital era, threat actors employ sophisticated techniques that often leave digital traces in the form of textual data. Cyber Threat Intelligence (CTI) covers the solutions for data collection, processing, and analysis that help analysts understand a threat actor's targets and attack behavior. CTI is assuming an increasingly crucial role in identifying and mitigating threats and enabling proactive defense strategies. In this context, Natural Language Processing (NLP), a branch of artificial intelligence, has emerged as a powerful tool for enhancing threat intelligence capabilities. This survey paper provides a comprehensive overview of NLP-based techniques applied in the context of threat intelligence. It begins by describing the foundational definitions and principles of CTI as a major tool for safeguarding digital assets. It then undertakes a thorough examination of NLP-based techniques for CTI data crawling from Web sources, CTI data analysis, relation extraction from cybersecurity data, CTI sharing and collaboration, and security threats to CTI. Finally, the challenges and limitations of NLP in threat intelligence are exhaustively examined, including data quality issues and ethical considerations. This survey draws a complete framework and serves as a valuable resource for security professionals and researchers seeking to understand state-of-the-art NLP-based threat intelligence techniques and their potential impact on cybersecurity.
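
    One of the simplest NLP steps within the survey's scope is pulling structured indicators out of unstructured report text. The sketch below is an illustrative indicator-of-compromise extractor; the regex patterns are assumptions for demonstration, not the API of any named CTI tool.

```python
# Illustrative indicator-of-compromise (IoC) extraction from report text.
# The regexes are simplified assumptions (the naive domain pattern also
# matches bare IPs) and would need hardening for production use.
import re

IOC_PATTERNS = {
    "ipv4":   re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "md5":    re.compile(r"\b[a-fA-F0-9]{32}\b"),
    "domain": re.compile(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b"),
    "cve":    re.compile(r"\bCVE-\d{4}-\d{4,7}\b"),
}

def extract_iocs(text):
    """Return {ioc_type: sorted unique matches} for one report."""
    return {name: sorted(set(p.findall(text)))
            for name, p in IOC_PATTERNS.items()}

report = ("The actor exploited CVE-2021-44228, beaconing to evil-cdn.example "
          "at 203.0.113.7; dropper hash 9e107d9d372bb6826bd81d3542a419d6.")
print(extract_iocs(report))
```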

    Self-disclosure model for classifying & predicting text-based online disclosure

    Full text link
    Social media and social networking sites have evolved into digital billboards for internet users due to their rapid expansion. As these sites encourage consumers to expose personal information via profiles and postings, increased use of social media has generated privacy concerns. There have been notable efforts from researchers to detect self-disclosure using information extraction (IE) techniques, and recent research on machine learning and natural language processing methods shows that understanding the contextual meaning of words can yield better accuracy than traditional data extraction methods. Because users are often ignorant of the quantity of personal information they publish in online forums, there is a need to detect various disclosures in natural language and give users the choice to test the possibility of disclosure before posting. For this purpose, this work proposes "SD_ELECTRA," a context-specific language model to detect Interest, Personal, Education and Work, Relationship, Personality, Residence, Travel plan, and Hospitality disclosures in social media data.
    The goal is to create a context-specific language model for a social media platform that performs better than general language models. Moreover, recent advancements in transformer models paved the way to train language models from scratch and achieve higher scores. Experimental results show that SD_ELECTRA outperformed the base model in all considered metrics for the standard text classification method. In addition, the results show that training a language model with a smaller context-specific pre-training corpus on a single GPU can improve its performance. An illustrative web application is designed to allow users to test the disclosure possibilities in their social media posts. As a result, by utilizing the efficiency of the suggested model, users can get real-time feedback on self-disclosure.
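
    A minimal sketch of the classification side follows, assuming the Hugging Face transformers library and the public google/electra-small-discriminator checkpoint. SD_ELECTRA itself is pre-trained from scratch on a context-specific corpus, which this sketch does not reproduce; the label set follows the eight disclosure categories named above, and the example post and gold labels are hypothetical.

```python
# Sketch: multi-label disclosure classification with a public ELECTRA
# checkpoint. SD_ELECTRA itself is pre-trained from scratch on a
# context-specific corpus; this sketch only fine-tunes an existing model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["Interest", "Personal", "EducationWork", "Relationship",
          "Personality", "Residence", "TravelPlan", "Hospitality"]

tok = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = AutoModelForSequenceClassification.from_pretrained(
    "google/electra-small-discriminator",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # sigmoid + BCE loss
)

posts = ["Just moved to Montreal and started my CS degree!"]
# Hypothetical gold labels: EducationWork and Residence disclosed.
targets = torch.tensor([[0, 0, 1, 0, 0, 1, 0, 0]], dtype=torch.float)

batch = tok(posts, padding=True, truncation=True, return_tensors="pt")
out = model(**batch, labels=targets)
out.loss.backward()                # one illustrative training step

probs = torch.sigmoid(out.logits)  # per-category disclosure probability
print({l: round(p.item(), 2) for l, p in zip(LABELS, probs[0])})
```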

    AI-powered Fraud Detection in Decentralized Finance: A Project Life Cycle Perspective

    Full text link
    In recent years, blockchain technology has introduced decentralized finance (DeFi) as an alternative to traditional financial systems. DeFi aims to create a transparent and efficient financial ecosystem using smart contracts and emerging decentralized applications. However, the growing popularity of DeFi has made it a target for fraudulent activities, resulting in losses of billions of dollars to various types of fraud. To address these issues, researchers have explored the potential of artificial intelligence (AI) approaches to detect such fraudulent activities. Yet, there is a lack of a systematic survey that organizes and summarizes existing work and identifies future research opportunities. In this survey, we provide a systematic taxonomy of frauds in the DeFi ecosystem, categorized by the stages of a DeFi project's life cycle: project development, introduction, growth, maturity, and decline. This taxonomy is based on our finding that many frauds correlate strongly with a specific stage of a DeFi project. Following the taxonomy, we review existing AI-powered detection methods, including statistical modeling, natural language processing, and other machine learning techniques. We find that fraud detection in different stages employs distinct types of methods, and we observe the strong performance of tree-based and graph-based models on fraud detection tasks. By analyzing the challenges and trends, we present findings that provide proactive suggestions and guide future research in DeFi fraud detection. We believe that this survey can support researchers, practitioners, and regulators in establishing a secure and trustworthy DeFi ecosystem.
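
    The survey's observation that tree-based models perform well can be made concrete with a minimal baseline. The sketch below assumes scikit-learn; the transaction-level feature names and the synthetic labels are hypothetical placeholders, not features or data from any surveyed paper.

```python
# Minimal tree-based baseline for fraud detection. The feature names and
# synthetic labels are hypothetical placeholders, not data from any paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.exponential(5.0, n),   # tx_count_per_day
    rng.exponential(1.0, n),   # avg_transfer_value_eth
    rng.uniform(0, 1, n),      # share_of_new_counterparties
    rng.integers(0, 2, n),     # contract_verified (0/1)
])
y = rng.integers(0, 2, n)      # placeholder fraud labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```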

    Comparing sentiment analysis tools on GitHub project discussions

    Get PDF
    Double-diploma master's with UTFPR - Universidade Tecnológica Federal do Paraná. The context of this work is situated in the rapidly evolving sphere of Natural Language Processing (NLP) within the scope of software engineering, focusing on sentiment analysis in software repositories. Sentiment analysis, a subfield of NLP, provides a potent method to parse, understand, and categorize the sentiments expressed in text. By applying sentiment analysis to software repositories, we can decode developers' opinions and sentiments, providing key insights into team dynamics, project health, and potential areas of conflict or collaboration. However, the application of sentiment analysis in software engineering comes with its own set of challenges: technical jargon, code-specific ambiguities, and the brevity of software-related communications demand tailored NLP tools for effective analysis. The study unfolds in two primary phases. In the initial phase, we investigate the impact of expanding the training sets of two prominent sentiment analysis tools, SentiCR and SentiSW, to delineate the correlation between training-set size and tool performance and thereby reveal any potential performance gains. The subsequent phase applies the enhanced tools to categorize discussions drawn from issue tickets in a varied array of open-source projects, ranging from relatively small repositories to large, well-established ones, thus providing a rich and diverse sampling ground.
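
    The first phase's question, how training-set size relates to tool performance, corresponds to a standard learning-curve experiment. The sketch below uses a generic TF-IDF plus logistic regression stand-in with toy data, assuming scikit-learn; it is not SentiCR or SentiSW, whose implementations are not reproduced here.

```python
# Learning-curve sketch: macro-F1 versus training-set size, with a generic
# TF-IDF + logistic regression stand-in (not SentiCR/SentiSW) and toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

train_texts = ["this patch is great", "awful flaky test", "thanks, merged",
               "this breaks the build", "nice refactor", "worst API ever"] * 50
train_labels = [1, 0, 1, 0, 1, 0] * 50
test_texts, test_labels = ["great work", "build broken again"], [1, 0]

for size in (30, 60, 120, 300):
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(train_texts[:size], train_labels[:size])
    f1 = f1_score(test_labels, clf.predict(test_texts), average="macro")
    print(f"train size {size:4d} -> macro-F1 {f1:.2f}")
```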