153 research outputs found

    Leveraging multi-dimensional, multi-source knowledge for user preference modeling and event summarization in social media

    An unprecedented development of social media platforms such as Twitter, Facebook and Foursquare has been witnessed in recent years. This huge amount of user-generated data is multi-dimensional in nature. Some dimensions are explicitly observed, such as user profiles, the text of social media posts, time, and location information. Others are implicit and need to be inferred, reflecting the inherent structures of social media data; examples include popular topics discussed on Twitter or Facebook, or geographical clusters based on user check-in activities from Foursquare. It is of great interest to both research communities and commercial organizations to understand such heterogeneous data and leverage information from multiple dimensions to facilitate social media applications such as user preference modeling and event summarization. This dissertation first presents a general discriminative learning approach for modeling multi-dimensional knowledge in a supervised setting. A learning protocol is established to model both explicit and implicit knowledge in a unified manner, applicable to general classification/prediction tasks. The approach accommodates heterogeneous data dimensions and significantly boosts the expressiveness of existing discriminative learning approaches; it stands out for its capability to model latent features, for which arbitrary generative assumptions are allowed. Besides their multi-dimensional nature, social media data are unstructured, fragmented and noisy, and mining them is made even more challenging by the fact that many real applications come with no available annotation, i.e., an unsupervised setting. This dissertation addresses this issue from a novel angle: external sources such as news media and knowledge bases are exploited to provide supervision. I describe a unified framework which links traditional news data to Twitter and enables effective knowledge discovery such as event detection and summarization.
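
    A toy illustration of the unified treatment of explicit and latent dimensions described above may help fix ideas. The sketch below is not the dissertation's learning protocol; it simply pairs an observed dimension (TF-IDF of post text) with an inferred one (LDA topic proportions) inside a single discriminative classifier, using made-up posts and labels.

        # Minimal sketch (not the dissertation's model): one explicit dimension
        # (post text) and one implicit, inferred dimension (LDA topics) feeding
        # a single discriminative classifier. Posts and labels are invented.
        from scipy.sparse import csr_matrix, hstack
        from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.linear_model import LogisticRegression

        posts = ["great coffee shop downtown", "traffic jam on the highway again",
                 "new phone launch event today", "quiet cafe and good coffee this morning"]
        labels = [1, 0, 0, 1]  # e.g., whether the user would like a venue category

        # Explicit dimension: the observed post text.
        X_explicit = TfidfVectorizer().fit_transform(posts)

        # Implicit dimension: latent topics inferred from word counts.
        counts = CountVectorizer().fit_transform(posts)
        X_latent = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

        # One discriminative model over both feature blocks.
        X = hstack([X_explicit, csr_matrix(X_latent)])
        clf = LogisticRegression().fit(X, labels)
        print(clf.predict(X))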

    NLP-Based Techniques for Cyber Threat Intelligence

    In the digital era, threat actors employ sophisticated techniques for which digital traces, often in the form of textual data, are available. Cyber Threat Intelligence (CTI) covers the solutions for data collection, processing, and analysis that are needed to understand a threat actor's targets and attack behavior. CTI is assuming an increasingly crucial role in identifying and mitigating threats and enabling proactive defense strategies. In this context, NLP, a branch of artificial intelligence, has emerged as a powerful tool for enhancing threat intelligence capabilities. This survey paper provides a comprehensive overview of NLP-based techniques applied in the context of threat intelligence. It begins by describing the foundational definitions and principles of CTI as a major tool for safeguarding digital assets. It then undertakes a thorough examination of NLP-based techniques for CTI data crawling from Web sources, CTI data analysis, relation extraction from cybersecurity data, CTI sharing and collaboration, and security threats of CTI. Finally, the challenges and limitations of NLP in threat intelligence are exhaustively examined, including data quality issues and ethical considerations. This survey draws a complete framework and serves as a valuable resource for security professionals and researchers seeking to understand state-of-the-art NLP-based threat intelligence techniques and their potential impact on cybersecurity.
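
    As a concrete flavour of the text processing such pipelines start from (the survey itself prescribes no code), the hypothetical sketch below pulls a few indicators of compromise out of an unstructured report excerpt with simple patterns; the report text and regexes are invented for illustration and are far cruder than the NLP techniques the survey covers.

        # Toy CTI text-analysis step: extracting indicator-of-compromise strings
        # from an unstructured threat-report excerpt. Patterns are deliberately
        # naive (e.g. the domain pattern also matches the IP address).
        import re

        report = ("The actor staged payloads on 203.0.113.7, used the domain "
                  "update-check.example.com, and dropped a file with MD5 "
                  "44d88612fea8a8f36de82e1278abb02f.")

        patterns = {
            "ipv4":   r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
            "domain": r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b",
            "md5":    r"\b[a-f0-9]{32}\b",
        }

        for kind, pattern in patterns.items():
            print(kind, re.findall(pattern, report, flags=re.IGNORECASE))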

    ISP/PhD Comprehensive Examination


    ANALYZING IMAGE TWEETS IN MICROBLOGS

    Ph.D. (Doctor of Philosophy)

    HTSS: A novel hybrid text summarisation and simplification architecture

    Text simplification and text summarisation are related but different sub-tasks of Natural Language Generation. Whereas summarisation attempts to reduce the length of a document while keeping its original meaning, simplification attempts to reduce its complexity. In this work, we combine both tasks using HTSS, a novel hybrid architecture of abstractive and extractive summarisation. We extend the well-known pointer-generator model to the combined task of summarisation and simplification. We collected our parallel corpus from simplified summaries written by domain experts and published on the science news website EurekaAlert (www.eurekalert.org). Our results show that the proposed HTSS model outperforms neural text simplification (NTS) on the SARI score and abstractive text summarisation (ATS) on the ROUGE score. We further introduce a new metric (CSS1), which combines SARI and ROUGE, and demonstrate that the proposed HTSS model outperforms NTS and ATS on the joint task of simplification and summarisation by 38.94% and 53.40%, respectively. We provide all code, models and corpora to the scientific community for future research at the following URL: https://github.com/slab-itu/HTSS/
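
    Since HTSS extends the pointer-generator model, a brief numeric sketch of that model's core mixing step may be useful context. This is not HTSS itself: it only shows how a pointer-generator blends vocabulary generation with copying from the source via attention, using made-up numbers.

        # Pointer-generator mixing step with toy numbers: the final distribution
        # is p_gen * P_vocab + (1 - p_gen) * P_copy, where P_copy projects the
        # attention weights onto the vocabulary.
        import numpy as np

        vocab = ["the", "cells", "divide", "grow", "<unk>"]
        p_vocab = np.array([0.30, 0.10, 0.05, 0.50, 0.05])   # decoder's vocabulary distribution
        p_gen = 0.6                                           # soft switch: generate vs. copy

        source_tokens = ["the", "cells", "divide"]
        attention = np.array([0.2, 0.7, 0.1])                 # attention over the source

        # Copy distribution: attention mass projected onto the vocabulary.
        p_copy = np.zeros(len(vocab))
        for tok, a in zip(source_tokens, attention):
            p_copy[vocab.index(tok)] += a

        p_final = p_gen * p_vocab + (1 - p_gen) * p_copy
        print(dict(zip(vocab, p_final.round(3))))             # still sums to 1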

    Understanding patient experience from online medium

    Improving patient experience at hospitals leads to better health outcomes. To improve it, we must first understand and interpret patients' written feedback. Patient-generated texts, such as patient reviews on RateMD or posts on online health forums such as WebMD, are where patients describe their experiences. Given the massive amount of patient-generated text that exists online, an automated approach to identifying topics from a patient experience taxonomy is the only realistic option for analyzing these texts. However, not only is there a lack of annotated taxonomy for these media, but word usage is also colloquial, making it challenging to apply standard NLP techniques to identify the topics present in patient-generated texts. Furthermore, patients may describe multiple topics in a single text, which drastically increases the complexity of the task. In this thesis, we address the challenges of comprehensively and automatically understanding the patient experience from patient-generated texts. We first built a set of rich semantic features to represent the corpus, which helps capture meanings that are not typically captured by the bag-of-words (BOW) model. Unlike the BOW model, the semantic feature representation captures the context and in-depth meaning behind each word in the corpus. To the best of our knowledge, no existing work on understanding patient experience from patient-generated texts examines which semantic features help capture the characteristics of the corpus. Furthermore, patients generally talk about multiple topics when they write, and these topics are frequently interdependent. There are two types of topic interdependencies: those that are semantically similar and those that are not. We built a constraint-based deep neural network classifier to capture both types of topic interdependencies and empirically show its classification performance improvement over baseline approaches. Past research has also indicated that patient experiences differ across patient segments [1-4]. Segments can be based on demographics, for instance race, gender, or geographical location. Similarly, segments can be based on health status, for example whether or not the patient is taking medication, has a particular disease, or is readmitted to the hospital. To better understand patient experiences, we built an automated approach to identify patient segments, focusing on whether or not a person has stopped taking their medication. The technique used to identify this segment is general enough that we envision the approach being applicable to other types of patient segments. With a comprehensive understanding of patient experiences, we envision an application system in which clinicians can directly read the most relevant patient-generated texts pertaining to their interests. The system can capture topics from the patient experience taxonomy that are of interest to each clinician or designated expert, and we believe it is one of many approaches that can ultimately help improve the patient experience.
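
    The constraint-based classifier described above is easiest to picture as a multi-label network whose loss carries an extra term for interdependent topics. The sketch below is a hypothetical stand-in, not the thesis's model: the feature size, topic set, related pair, and the particular soft-constraint term are all assumptions for illustration.

        # Hypothetical multi-label topic classifier with a soft constraint that
        # nudges related topics toward agreeing predictions. Inputs are random
        # stand-ins for the semantic feature representation.
        import torch
        import torch.nn as nn

        topics = ["staff", "communication", "wait_time", "billing"]
        related_pairs = [(0, 1)]   # assume "staff" and "communication" are interdependent
        n_features = 50

        model = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                              nn.Linear(32, len(topics)))
        bce = nn.BCEWithLogitsLoss()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)

        X = torch.randn(64, n_features)                     # stand-in features
        Y = torch.randint(0, 2, (64, len(topics))).float()  # multi-label targets

        for _ in range(200):
            logits = model(X)
            probs = torch.sigmoid(logits)
            # Soft constraint: penalise disagreement between related topic pairs.
            constraint = sum((probs[:, i] - probs[:, j]).abs().mean()
                             for i, j in related_pairs)
            loss = bce(logits, Y) + 0.1 * constraint
            opt.zero_grad(); loss.backward(); opt.step()

        print(float(loss))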