220,224 research outputs found

    Identifying User Innovations through AI in Online Communities– A Transfer Learning Approach

    Get PDF
    Identifying innovative users and their ideas is crucial, for example, in crowdsourcing. However, analyzing large amounts of unstructured textual data from such online communities poses a challenge for organizations. Therefore, researchers have started developing automated approaches to identify innovative users. Our study introduces an advanced machine-learning approach that minimizes manual work by combining transfer learning with a transformer-based design. We train the model on separate datasets, including an online maker community and various internet texts. The maker community posts represent need-solution pairs, which express needs and describe fitting prototypes. Then, we transfer the model and identify potential user innovations in a kitesurfing community. We validate the identified posts by manually checking a subsample and analyzing how words affect the model's classification decision. This study contributes to the growing portfolio of user innovation identification by combining state-of-the-art natural language processing and transfer learning to improve automated identification.
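
    The approach described above follows a common transfer-learning recipe: fine-tune a pre-trained transformer on posts labelled as need-solution pairs, then apply it to posts from a different community. The sketch below illustrates only that general recipe; the base model (roberta-base), column names, toy posts, and hyperparameters are assumptions rather than the authors' implementation.

```python
# Hypothetical sketch: fine-tune a pre-trained transformer on posts labelled as
# need-solution pairs (1) or other (0), then score posts from a second community.
# Model name, toy data, and hyperparameters are illustrative assumptions.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

# Toy stand-ins for the labelled source-community posts and unlabelled target posts.
maker_posts = [
    {"text": "I needed a quick-release mount, so I 3D-printed this bracket.", "label": 1},
    {"text": "Does anyone know when the next meetup is?", "label": 0},
]
kitesurf_posts = [{"text": "My harness hook kept slipping, so I built a custom spreader bar."}]

train_ds = Dataset.from_list(maker_posts).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="innovation-clf", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_ds,
)
trainer.train()

# Transfer step: score posts from the target community with the fine-tuned model.
kite_ds = Dataset.from_list(kitesurf_posts).map(tokenize, batched=True)
preds = trainer.predict(kite_ds).predictions.argmax(axis=-1)  # 1 = candidate user innovation
print(preds)
```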

    Certain Investigation of Fake News Detection from Facebook and Twitter Using Artificial Intelligence Approach

    Get PDF
    News consumption has shifted from traditional newspapers to online communities in the technologically advanced era of Artificial Intelligence. Because Twitter and Facebook allow us to consume news much faster and with less editorial restriction, false information continues to spread at an alarming rate and volume. Online fake news detection is a promising research field that captivates the attention of researchers, as the spread of large volumes of misinformation on social network platforms poses a global risk. This article recommends a Machine Learning optimization technique for automated news article classification on Facebook and Twitter. The work is driven by the strategic application of Natural Language Processing to fake news detection on social forums, in order to distinguish news reports from non-recurrent outlets. The results of the study are outstanding: word-frequency features extracted from the text documents serve as the attributes, and classification is performed by a Hybrid Support Vector Machine that achieves 91.23% accuracy.
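
    The pipeline the abstract outlines, word-frequency features feeding a support-vector classifier, corresponds to a standard text-classification pattern. The sketch below shows that generic pattern with scikit-learn; the toy posts, TF-IDF settings, and plain linear SVM stand in for the paper's Hybrid Support Vector Machine and do not reproduce its 91.23% accuracy.

```python
# Hypothetical sketch of a term-frequency + SVM fake-news classifier.
# Toy data and model settings are assumptions; the paper's "Hybrid Support
# Vector Machine" and its reported accuracy are not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

posts = [
    "Scientists confirm new vaccine passes phase-3 trial",          # real
    "Celebrity cures cancer with miracle fruit, doctors hate it",   # fake
    "Central bank announces interest rate decision",                # real
    "Secret moon base hidden from public for decades",              # fake
]
labels = [0, 1, 0, 1]  # 0 = real, 1 = fake

# Word and bigram frequencies as features, linear SVM as the classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), SVC(kernel="linear"))
clf.fit(posts, labels)

print(clf.predict(["Miracle pill melts fat overnight, experts shocked"]))  # expected: [1]
```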

    Symbol Emergence in Robotics: A Survey

    Full text link
    Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. It is very important to obtain a computational understanding of how humans can form a symbol system and obtain semiotic skills through their autonomous mental development. Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and other systems. Understanding the dynamics of symbol systems is crucially important both for understanding human social interactions and for developing a robot that can communicate smoothly with human users in the long term. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system. The emergent symbol system is socially self-organized through both semiotic communications and physical interactions with autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe some state-of-the-art research topics concerning SER, e.g., multimodal categorization, word discovery, and double articulation analysis, that enable a robot to obtain words and their embodied meanings from raw sensory-motor information, including visual information, haptic information, auditory information, and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions of research in SER.
    Comment: submitted to Advanced Robotics
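
    As a toy illustration of one of the surveyed topics, multimodal categorization can be framed as unsupervised clustering over features concatenated from several modalities. The sketch below uses synthetic visual, haptic, and auditory vectors with Gaussian-mixture clustering; it is only a simplified stand-in for the richer probabilistic models used in the SER literature (e.g., multimodal extensions of latent Dirichlet allocation).

```python
# Toy illustration of multimodal categorization: cluster objects by concatenating
# feature vectors from several modalities. The data is synthetic and the Gaussian
# mixture is a simplification of the probabilistic models surveyed in the paper.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_objects = 60
visual = rng.normal(size=(n_objects, 8))   # e.g., colour/shape descriptors
haptic = rng.normal(size=(n_objects, 4))   # e.g., hardness, texture
audio = rng.normal(size=(n_objects, 6))    # e.g., impact-sound spectrum

features = StandardScaler().fit_transform(np.hstack([visual, haptic, audio]))
categories = GaussianMixture(n_components=3, random_state=0).fit_predict(features)
print(categories[:10])  # unsupervised category index per object
```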

    Automatic identification of information quality metrics in health news stories

    Get PDF
    Objective: Many online and printed media publish health news of questionable trustworthiness, and it may be difficult for laypersons to determine the information quality of such articles. The purpose of this work was to propose a methodology for the automatic assessment of the quality of health-related news stories using natural language processing and machine learning. Materials and Methods: We used a database from the website HealthNewsReview.org, which aims to improve the public dialogue about health care. HealthNewsReview.org developed a set of criteria to critically analyze health care interventions' claims. In this work, we attempt to automate the evaluation process by identifying the indicators of those criteria using natural language processing-based machine learning on a corpus of more than 1,300 news stories. We explored features ranging from simple n-grams to more advanced linguistic features and optimized the feature selection for each task. Additionally, we experimented with the pre-trained language model BERT. Results: For some criteria, such as mention of costs, benefits, harms, and “disease-mongering,” the evaluation results were promising, with an F1 measure reaching 81.94%, while for others the results were less satisfactory due to the dataset size, the need for external knowledge, or the subjectivity of the evaluation process. Conclusion: The criteria used here are more challenging than those addressed in previous work, and our aim was to investigate how much more difficult the machine learning task was, and how and why it varied between criteria. For some criteria, the obtained results were promising; however, automated evaluation of the other criteria may not yet replace the manual evaluation process, where human experts interpret text senses and make use of external knowledge in their assessment.
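
    The setup the abstract describes amounts to one binary classifier per quality criterion trained on news-story text. The sketch below shows such a per-criterion classifier for a single illustrative criterion ("mentions costs") using n-gram features and logistic regression; the toy stories, labels, and model choice are assumptions, and the paper additionally experiments with the pre-trained BERT model.

```python
# Hypothetical sketch: one binary classifier per quality criterion
# (here, "mentions costs"). Toy data and the n-gram/logistic-regression
# setup are assumptions, not the paper's optimized feature sets or BERT runs.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

stories = [
    "The new drug will cost about $12,000 per year, insurers say.",
    "Researchers presented the therapy at a cardiology conference.",
    "Treatment price was not disclosed by the manufacturer.",
    "The study followed 200 patients for six months.",
]
mentions_costs = [1, 0, 1, 0]  # labels for this single criterion

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(stories, mentions_costs)

pred = clf.predict(["Patients will pay roughly 500 dollars per dose."])
print("mentions costs:", bool(pred[0]))
```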

    Changes to Captions: An Attentive Network for Remote Sensing Change Captioning

    Full text link
    In recent years, advanced research has focused on the direct learning and analysis of remote sensing images using natural language processing (NLP) techniques. The ability to accurately describe changes occurring in multi-temporal remote sensing images is becoming increasingly important for geospatial understanding and land planning. Unlike natural image change captioning tasks, remote sensing change captioning aims to capture the most significant changes, irrespective of various influential factors such as illumination, seasonal effects, and complex land covers. In this study, we highlight the significance of accurately describing changes in remote sensing images and compare the change captioning task on natural and synthetic images with that on remote sensing images. To address the challenge of generating accurate captions, we propose an attentive changes-to-captions network, called Chg2Cap for short, for bi-temporal remote sensing images. The network comprises three main components: 1) a Siamese CNN-based feature extractor to collect high-level representations for each image pair; 2) an attentive decoder that includes a hierarchical self-attention block to locate change-related features and a residual block to generate the image embedding; and 3) a transformer-based caption generator to decode the relationship between the image embedding and the word embedding into a description. The proposed Chg2Cap network is evaluated on two representative remote sensing datasets, and a comprehensive experimental analysis is provided. The code and pre-trained models will be available online at https://github.com/ShizhenChang/Chg2Cap
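
    The three-stage design summarized above (a shared Siamese CNN encoder, attention over the bi-temporal features, and a transformer-based caption decoder) can be sketched schematically as follows. The ResNet-18 backbone, layer sizes, vocabulary, and exact wiring are assumptions; the authors' actual implementation is the one released at https://github.com/ShizhenChang/Chg2Cap.

```python
# Schematic sketch of a Siamese-encoder / attention / transformer-decoder change
# captioner, in the spirit of the design described above. All architectural details
# here are assumptions, not the released Chg2Cap code.
import torch
import torch.nn as nn
import torchvision.models as models

class ChangeCaptionSketch(nn.Module):
    def __init__(self, vocab_size=1000, d_model=256):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # shared weights -> Siamese
        self.proj = nn.Conv2d(512, d_model, kernel_size=1)
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.embed = nn.Embedding(vocab_size, d_model)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
        self.out = nn.Linear(d_model, vocab_size)

    def encode(self, img):
        f = self.proj(self.cnn(img))            # (B, d, h, w)
        return f.flatten(2).transpose(1, 2)     # (B, h*w, d)

    def forward(self, img_t1, img_t2, captions):
        f1, f2 = self.encode(img_t1), self.encode(img_t2)
        # Attend from the "after" features to the "before" features to locate changes.
        change_feats, _ = self.cross_attn(f2, f1, f1)
        tgt = self.embed(captions)              # (B, L, d)
        L = captions.size(1)
        causal = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
        dec = self.decoder(tgt, memory=change_feats, tgt_mask=causal)
        return self.out(dec)                    # (B, L, vocab)

model = ChangeCaptionSketch()
img_before = torch.randn(2, 3, 256, 256)
img_after = torch.randn(2, 3, 256, 256)
caps = torch.randint(0, 1000, (2, 30))
print(model(img_before, img_after, caps).shape)  # torch.Size([2, 30, 1000])
```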

    Sequence Mining and Pattern Analysis in Drilling Reports with Deep Natural Language Processing

    Full text link
    Drilling activities in the oil and gas industry have been reported over decades for thousands of wells on a daily basis, yet the analysis of this text at large scale for information retrieval, sequence mining, and pattern analysis is very challenging. Drilling reports contain interpretations written by drillers based on measurements from downhole sensors and surface equipment, and can be used for operation optimization and accident mitigation. In this initial work, a methodology is proposed for the automatic classification of sentences written in drilling reports into three relevant labels (EVENT, SYMPTOM, and ACTION) for hundreds of wells in an actual field. Some of the main challenges in the text corpus were overcome, including the high frequency of technical symbols, the mistyping and abbreviation of technical terms, and the presence of incomplete sentences in the drilling reports. We obtain state-of-the-art classification accuracy within this technical language and illustrate advanced queries enabled by the tool.
    Comment: 7 pages, 14 figures, technical report
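
    A simple baseline for the sentence-classification task described above (labelling drilling-report sentences as EVENT, SYMPTOM, or ACTION) is sketched below with TF-IDF features and a linear SVM. The toy sentences, abbreviation map, and shallow model are assumptions; the paper itself applies deep natural language processing to this task.

```python
# Hypothetical shallow baseline for EVENT / SYMPTOM / ACTION sentence classification.
# Toy sentences, the abbreviation map, and the TF-IDF + linear-SVM model are
# assumptions; the paper uses deep NLP rather than this baseline.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

ABBREVIATIONS = {"bha": "bottom hole assembly", "rop": "rate of penetration"}

def normalize(sentence):
    """Lowercase, strip stray technical symbols, and expand known abbreviations."""
    s = re.sub(r"[^a-z0-9\s./-]", " ", sentence.lower())
    return " ".join(ABBREVIATIONS.get(tok, tok) for tok in s.split())

sentences = [
    "Stuck pipe observed at 3,200 m while tripping out",
    "High torque and erratic ROP on bottom",
    "Circulated and worked string free, resumed drilling",
    "Pulled BHA to surface for inspection",
]
labels = ["EVENT", "SYMPTOM", "ACTION", "ACTION"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit([normalize(s) for s in sentences], labels)

print(clf.predict([normalize("Lost circulation noted while drilling ahead")]))
```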
