202 research outputs found

    Self-disclosure model for classifying & predicting text-based online disclosure

    Full text link
    Social media and social networking sites have evolved into digital billboards for internet users due to their rapid expansion. As these sites encourage consumers to expose personal information via profiles and postings, the increased use of social media has generated privacy concerns. Researchers have made notable efforts to detect self-disclosure using information extraction (IE) techniques. Recent research on machine learning and natural language processing methods shows that understanding the contextual meaning of words can yield better accuracy than traditional data extraction methods. Since users are often unaware of the quantity of personal information they publish in online forums, there is a need to detect the various disclosures in natural language and give users a way to test for possible disclosure before posting. For this purpose, this work proposes "SD_ELECTRA," a context-specific language model that detects Interest, Personal, Education and Work, Relationship, Personality, Residence, Travel plan, and Hospitality disclosures in social media data. The goal is to create a context-specific language model for a social media platform that performs better than general language models. Moreover, recent advances in transformer models have paved the way to train language models from scratch and achieve higher scores. Experimental results show that SD_ELECTRA outperformed the base model on all considered metrics for the standard text classification method. The results also show that training a language model with a smaller context-specific pre-training corpus on a single GPU can improve its performance. An illustrative web application is designed to let users test the disclosure possibilities in their social media posts. As a result, by utilizing the efficiency of the suggested model, users can get real-time feedback on self-disclosure.
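    To make the classification setup concrete, the sketch below shows how a multi-label disclosure classifier over the eight categories named in the abstract could be assembled with the Hugging Face transformers library. It is a minimal illustration, not the authors' SD_ELECTRA code: the ELECTRA checkpoint, the 0.5 threshold, and the untrained classification head are assumptions.

```python
# Hypothetical sketch: an ELECTRA checkpoint as a multi-label disclosure
# classifier. Checkpoint, labels, and threshold are placeholders, not the
# authors' actual SD_ELECTRA setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["Interest", "Personal", "Education and Work", "Relationship",
          "Personality", "Residence", "Travel plan", "Hospitality"]

tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = AutoModelForSequenceClassification.from_pretrained(
    "google/electra-small-discriminator",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # sigmoid scores + BCE loss
)

def predict_disclosures(post: str, threshold: float = 0.5) -> list[str]:
    """Return the disclosure categories whose sigmoid score exceeds threshold."""
    inputs = tokenizer(post, return_tensors="pt", truncation=True)
    with torch.no_grad():
        scores = torch.sigmoid(model(**inputs).logits)[0]
    return [label for label, s in zip(LABELS, scores) if s >= threshold]

print(predict_disclosures("Just moved to Montreal and started my PhD!"))
```

    Fine-tuning on labeled social media posts (e.g., with the Trainer API and the binary cross-entropy objective that problem_type="multi_label_classification" selects) would be needed before the predictions are meaningful.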

    EcomGPT: Instruction-tuning Large Language Model with Chain-of-Task Tasks for E-commerce

    Full text link
    Recently, instruction-following Large Language Models (LLMs), represented by ChatGPT, have exhibited exceptional performance in general Natural Language Processing (NLP) tasks. However, the unique characteristics of E-commerce data pose significant challenges to general LLMs. An LLM tailored specifically for E-commerce scenarios, possessing robust cross-dataset/task generalization capabilities, is a pressing necessity. To address this issue, in this work we propose the first e-commerce instruction dataset, EcomInstruct, with a total of 2.5 million instruction examples. EcomInstruct scales up the data size and task diversity by constructing atomic tasks from basic E-commerce data types, such as product information and user reviews. Atomic tasks are defined as intermediate tasks implicitly involved in solving a final task, which we also call Chain-of-Task tasks. We developed EcomGPT at different parameter scales by training the backbone model BLOOMZ on EcomInstruct. Benefiting from the fundamental semantic understanding capabilities acquired from the Chain-of-Task tasks, EcomGPT exhibits excellent zero-shot generalization capabilities. Extensive experiments and human evaluations demonstrate that EcomGPT outperforms ChatGPT in terms of cross-dataset/task generalization on E-commerce tasks. Comment: Initial version of EcomGPT.
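    As a rough sketch of what one example in such a dataset might look like (the field names below are invented for illustration, not the actual EcomInstruct schema), an atomic-task record pairing an instruction with basic E-commerce data can be serialized as a JSON line:

```python
# Hypothetical sketch of an atomic ("Chain-of-Task") instruction record
# built from basic e-commerce data; the schema is illustrative only.
import json

def make_instruction_record(task: str, instruction: str,
                            text: str, answer: str) -> str:
    """Serialize one instruction-tuning example as a JSON line."""
    return json.dumps({
        "task": task,                # atomic task name
        "instruction": instruction,  # natural-language task description
        "input": text,               # basic data: product title, review, ...
        "output": answer,            # target the model learns to generate
    }, ensure_ascii=False)

record = make_instruction_record(
    task="product_attribute_extraction",
    instruction="Extract the brand mentioned in the product title.",
    text="Apple iPhone 14 Pro 256GB Deep Purple",
    answer="Apple",
)
print(record)
```

    Instruction tuning then trains the backbone (BLOOMZ, in this work) to map instruction plus input to output across millions of such records.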

    Global rule induction for information extraction

    Get PDF
    Ph.D. (Doctor of Philosophy)

    Aspect and Entity Extraction for Opinion Mining

    Get PDF
    Abstract: Opinion mining or sentiment analysis is the computational study of people's opinions, appraisals, attitudes, and emotions toward entities such as products, services, organizations, individuals, events, and their different aspects. It has been an active research area in natural language processing and Web mining in recent years. Researchers have studied opinion mining at the document, sentence, and aspect levels. Aspect-level analysis (called aspect-based opinion mining) is often desired in practical applications, as it provides the detailed opinions or sentiments about different aspects of entities, and about the entities themselves, which are usually required for action. Aspect extraction and entity extraction are thus two core tasks of aspect-based opinion mining. In this chapter, we provide a broad overview of these tasks and the current state-of-the-art extraction techniques.
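    As a generic illustration of one classic aspect extraction technique covered by such overviews (frequent noun phrases as aspect candidates; this is not code from the chapter itself), the sketch below uses spaCy and assumes the en_core_web_sm model is installed:

```python
# Minimal sketch of frequency-based aspect extraction: frequent noun
# phrases across reviews become candidate aspects. Generic illustration.
# Requires: python -m spacy download en_core_web_sm
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")

reviews = [
    "The battery life is great but the screen is too dim.",
    "Battery life could be better; the camera is excellent.",
    "Screen quality is superb and the camera works well.",
]

# Count lemmatized noun chunks (determiners stripped) across reviews.
counts = Counter()
for doc in nlp.pipe(reviews):
    for chunk in doc.noun_chunks:
        phrase = " ".join(t.lemma_.lower() for t in chunk if t.pos_ != "DET")
        counts[phrase] += 1

# Phrases seen at least twice are kept as aspect candidates.
aspects = [phrase for phrase, n in counts.most_common() if n >= 2]
print(aspects)  # e.g. ['battery life', 'camera']
```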

    A Unified Generative Approach to Product Attribute-Value Identification

    Full text link
    Product attribute-value identification (PAVI) has been studied to link products on e-commerce sites with their attribute values (e.g., <Material, Cotton>) using product text as clues. Technical demands from real-world e-commerce platforms require PAVI methods to handle unseen values, multi-attribute values, and canonicalized values, which are only partly addressed by existing extraction- and classification-based approaches. Motivated by this, we explore a generative approach to the PAVI task. We fine-tune a pre-trained generative model, T5, to decode a set of attribute-value pairs as a target sequence from the given product text. Since the attribute-value pairs are elements of an unordered set, how to linearize them matters; we thus explore methods of composing an attribute-value pair and of ordering the pairs for the task. Experimental results confirm that our generation-based approach outperforms the existing extraction- and classification-based methods on large-scale real-world datasets meant for those methods. Comment: Accepted to the Findings of ACL 2023.
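    The core generative idea is easy to sketch: serialize the unordered set of attribute-value pairs into a target string under some chosen composition and ordering, then fine-tune T5 on (product text, target) pairs. The separators and the alphabetical ordering below are illustrative assumptions, not necessarily the choices the paper settled on:

```python
# Hedged sketch of generative PAVI with T5: linearize attribute-value
# pairs into a target string and score it against the product text.
# Requires: pip install transformers sentencepiece torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def linearize(pairs: list[tuple[str, str]]) -> str:
    """Order pairs (here: alphabetically by attribute) and join them."""
    return " ; ".join(f"{a} : {v}" for a, v in sorted(pairs))

source = "Men's classic crew-neck t-shirt, 100% cotton, machine washable"
target = linearize([("Material", "Cotton"), ("Neckline", "Crew neck")])

# One training step would minimize the loss of `target` given `source`.
batch = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids
loss = model(**batch, labels=labels).loss
print(float(loss))
```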

    Towards Effective Extraction and Linking of Software Mentions from User-Generated Support Tickets

    Get PDF
    Software support tickets contain short and noisy text from customers. Software products are often referred to by various surface forms and informal abbreviations. Automatically identifying software mentions in support tickets and determining the official names and versions is helpful for many downstream applications, e.g., routing the support tickets to the right expert groups. In this work, we study the problem of software product name extraction and linking from support tickets. We first annotate and analyze sampled tickets to understand the language patterns. Next, we design features using local, contextual, and external information sources for the extraction and linking models. In experiments, we show that linear models with the proposed features deliver better and more consistent results than state-of-the-art baseline models, even on datasets with sparse labels.
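    A minimal sketch of the kind of feature-based linear model described above (local and contextual features feeding a per-token classifier; the features and toy data below are placeholders, not the paper's feature set) could look like this with scikit-learn:

```python
# Illustrative per-token linear tagger: local and contextual features
# feed a logistic regression that marks tokens in software mentions.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def token_features(tokens: list[str], i: int) -> dict:
    """Local + contextual features for token i."""
    return {
        "word": tokens[i].lower(),
        "is_capitalized": tokens[i][:1].isupper(),
        "has_digit": any(c.isdigit() for c in tokens[i]),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

# Toy ticket with per-token labels (1 = part of a software mention).
tokens = ["Outlook", "2016", "crashes", "when", "opening", "attachments"]
labels = [1, 1, 0, 0, 0, 0]

X = [token_features(tokens, i) for i in range(len(tokens))]
clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(X, labels)
print(clf.predict(X))
```

    External information sources (e.g., a catalog of official product names) would enter as additional dictionary features, and the linking step would map predicted mentions to canonical names and versions.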

    Extracting Medical Entities from Radiology Reports with Ontology-based Distant Supervision

    Get PDF
    Doctors need to review a substantial amount of medical documents, such as radiology reports, to make medical decisions. Named Entity Recognition (NER) structures raw medical text by detecting and classifying medical-related entities. Structured documents with medical concepts improve doctors' work effectiveness and contain medical information that benefits the extraction of important information. Nevertheless, deploying NER on Finnish medical text is still challenging because of data annotation, in-domain adaptation, the label incompletion problem, and label noise. To solve these problems, we develop a NER system called the Auto-labeling and Noise-suppressed Network (ANT). An automated annotation mechanism provides supervision signals for the training samples of the NER dataset. Domain-continual pretraining transfers in-domain knowledge to the NER model for better performance. We leverage a weak-label completion scheme to complete the weak labels generated by the automated annotation mechanism, and apply several noise suppression approaches to further reduce label noise. Experimental results show that our model achieves relatively strong performance on a silver-standard dataset. We also conduct ablation experiments to explore the effectiveness of our framework's components.
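    The distant supervision step is straightforward to sketch: entity names drawn from an ontology are matched against report text to produce weak BIO labels that stand in for manual annotation. The term list below is a stand-in for a real ontology lookup, and the greedy longest-match rule is an assumption, not necessarily ANT's exact mechanism:

```python
# Rough sketch of ontology-based distant supervision for NER: dictionary
# matching over report text yields weak BIO labels for training.
ONTOLOGY_TERMS = {
    "pleural effusion": "Finding",
    "pneumothorax": "Finding",
    "right lung": "Anatomy",
}

def weak_bio_labels(tokens: list[str]) -> list[str]:
    """Greedy longest-match dictionary tagging; unmatched tokens get 'O'."""
    labels = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        for j in range(len(tokens), i, -1):  # try longest span first
            span = " ".join(tokens[i:j]).lower()
            if span in ONTOLOGY_TERMS:
                tag = ONTOLOGY_TERMS[span]
                labels[i] = f"B-{tag}"
                for k in range(i + 1, j):
                    labels[k] = f"I-{tag}"
                i = j
                break
        else:  # no span starting at i matched
            i += 1
    return labels

tokens = "Small pleural effusion in the right lung .".split()
print(list(zip(tokens, weak_bio_labels(tokens))))
```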