
    Scalable Text and Link Analysis with Mixed-Topic Link Models

    Many data sets contain rich information about objects, as well as pairwise relations between them. For instance, in networks of websites, scientific papers, and other documents, each node has content consisting of a collection of words, as well as hyperlinks or citations to other nodes. In order to perform inference on such data sets, and make predictions and recommendations, it is useful to have models that are able to capture the processes which generate the text at each node and the links between them. In this paper, we combine classic ideas in topic modeling with a variant of the mixed-membership block model recently developed in the statistical physics community. The resulting model has the advantage that its parameters, including the mixture of topics of each document and the resulting overlapping communities, can be inferred with a simple and scalable expectation-maximization algorithm. We test our model on three data sets, performing unsupervised topic classification and link prediction. For both tasks, our model outperforms several existing state-of-the-art methods, achieving higher accuracy with significantly less computation, analyzing a data set with 1.3 million words and 44 thousand links in a few minutes.
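    To make the inference procedure concrete, below is a minimal PLSA-style EM sketch in which a single topic mixture per document generates both its words and its outgoing links. The update rules and variable names are illustrative assumptions, not the authors' exact algorithm; a real implementation would iterate only over nonzero word counts and observed links to stay scalable.

```python
import numpy as np

def em_text_link(W, A, K, iters=100, seed=0, eps=1e-12):
    """Toy EM for a PLSA-style mixed-topic text+link model (a sketch,
    not the paper's exact updates). W: (D, V) word counts; A: (D, D)
    link counts; K: number of topics. One mixture theta[d] generates
    both document d's words and its outgoing links."""
    rng = np.random.default_rng(seed)
    D, V = W.shape
    theta = rng.dirichlet(np.ones(K), size=D)   # (D, K) doc-topic mixtures
    phi = rng.dirichlet(np.ones(V), size=K)     # (K, V) topic-word dists
    psi = rng.dirichlet(np.ones(D), size=K)     # (K, D) topic-link-target dists

    for _ in range(iters):
        # E-step: topic responsibilities p(k | d, word v) and p(k | link d->e)
        r_w = theta[:, :, None] * phi[None, :, :]          # (D, K, V)
        r_w /= r_w.sum(axis=1, keepdims=True) + eps
        r_l = theta[:, :, None] * psi[None, :, :]          # (D, K, D)
        r_l /= r_l.sum(axis=1, keepdims=True) + eps

        # M-step: re-estimate parameters from expected word and link counts
        theta = (np.einsum('dv,dkv->dk', W, r_w)
                 + np.einsum('de,dke->dk', A, r_l))
        theta /= theta.sum(axis=1, keepdims=True) + eps
        phi = np.einsum('dv,dkv->kv', W, r_w)
        phi /= phi.sum(axis=1, keepdims=True) + eps
        psi = np.einsum('de,dke->kd', A, r_l)
        psi /= psi.sum(axis=1, keepdims=True) + eps
    return theta, phi, psi
```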

    Automated Deductive Content Analysis of Text: A Deep Contrastive and Active Learning Based Approach

    Content analysis traditionally involves human coders manually combing through text documents to search for relevant concepts and categories. However, this approach is time-intensive and not scalable, particularly for secondary data like social media content, news articles, or corporate reports. To address this problem, the paper presents an automated framework called Automated Deductive Content Analysis of Text (ADCAT) that uses deep learning-based semantic techniques, an ontology of validated construct measures, a large language model, human-in-the-loop disambiguation, and a novel augmentation-based weighted contrastive learning approach for improved language representations to build a scalable approach for deductive content analysis. We demonstrate the effectiveness of the proposed approach by identifying firms' innovation strategies from their 10-K reports, obtaining inferences reasonably close to human coding.
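    The augmentation-based weighted contrastive learning step can be pictured as a supervised contrastive loss with per-anchor weights. The PyTorch sketch below is a generic stand-in under that assumption; the exact ADCAT objective is not reproduced here.

```python
import torch
import torch.nn.functional as F

def weighted_contrastive_loss(z, labels, weights, tau=0.1):
    """Sketch of a weighted supervised contrastive loss (an assumption in
    the spirit of the abstract, not the ADCAT formulation). z: (N, d)
    text embeddings; labels: (N,) construct ids; weights: (N,) per-anchor
    weights, e.g. down-weighting heavily augmented views."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau                                    # scaled cosine sims
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))          # exclude self-pairs
    pos = (labels[:, None] == labels[None, :]) & ~self_mask  # same-label pairs

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # mean log-probability of an anchor's positives (0 if it has none)
    per_anchor = -log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return (weights * per_anchor).sum() / weights.sum()

# e.g.: weighted_contrastive_loss(torch.randn(8, 64),
#                                 torch.randint(0, 3, (8,)), torch.ones(8))
```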

    A framework for applying natural language processing in digital health interventions

    BACKGROUND: Digital health interventions (DHIs) are poised to reduce target symptoms in a scalable, affordable, and empirically supported way. DHIs that involve coaching or clinical support often collect text data from 2 sources: (1) open correspondence between users and the trained practitioners supporting them through a messaging system and (2) text data recorded during the intervention by users, such as diary entries. Natural language processing (NLP) offers methods for analyzing text, augmenting the understanding of intervention effects, and informing therapeutic decision making. OBJECTIVE: This study aimed to present a technical framework that supports the automated analysis of both types of text data often present in DHIs. This framework generates text features and helps to build statistical models to predict target variables, including user engagement, symptom change, and therapeutic outcomes. METHODS: We first discussed various NLP techniques and demonstrated how they are implemented in the presented framework. We then applied the framework in a case study of the Healthy Body Image Program, a Web-based intervention trial for eating disorders (EDs). A total of 372 participants who screened positive for an ED received a DHI aimed at reducing ED psychopathology (including binge eating and purging behaviors) and improving body image. These users generated 37,228 intervention text snippets and exchanged 4285 user-coach messages, which were analyzed using the proposed model. RESULTS: We applied the framework to predict binge eating behavior, resulting in an area under the curve between 0.57 (when applied to new users) and 0.72 (when applied to new symptom reports of known users). In addition, initial evidence indicated that specific text features predicted the therapeutic outcome of reducing ED symptoms. CONCLUSIONS: The case study demonstrates the usefulness of a structured approach to text data analytics. NLP techniques improve the prediction of symptom changes in DHIs. We present a technical framework that can be easily applied in other clinical trials and clinical presentations and encourage other groups to apply the framework in similar contexts.
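    As a concrete illustration of the framework's feature-extraction-plus-prediction pattern, here is a minimal scikit-learn sketch that scores a binary symptom label from text snippets with AUC, holding out whole users so the test set mimics the "new users" setting. The TF-IDF features and logistic regression model are illustrative stand-ins for the richer feature set the paper describes.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupShuffleSplit
from sklearn.pipeline import make_pipeline

def auc_on_new_users(texts, y, user_ids):
    """Hold out entire users so test users are unseen during training,
    mirroring the abstract's 'new users' evaluation setting."""
    y = np.asarray(y)
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
    train, test = next(splitter.split(texts, y, groups=user_ids))
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # n-gram text features
        LogisticRegression(max_iter=1000, class_weight='balanced'),
    )
    model.fit([texts[i] for i in train], y[train])
    scores = model.predict_proba([texts[i] for i in test])[:, 1]
    return roc_auc_score(y[test], scores)
```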

    Closing the gap: Sequence mining at scale

    Frequent sequence mining is one of the fundamental building blocks in data mining. While the problem has been extensively studied, few of the available techniques are sufficiently scalable to handle datasets with billions of sequences; such large-scale datasets arise, for instance, in text mining and session analysis. In this article, we propose MG-FSM, a scalable algorithm for frequent sequence mining on MapReduce. MG-FSM can handle so-called "gap constraints", which can be used to limit the output to a controlled set of frequent sequences. Both positional and temporal gap constraints, as well as appropriate maximality and closedness constraints, are supported. At its heart, MG-FSM partitions the input database in a way that allows us to mine each partition independently using any existing frequent sequence mining algorithm. We introduce the notion of ω-equivalency, which is a generalization of the notion of a "projected database" used by many frequent pattern mining algorithms. We also present a number of optimization techniques that minimize partition size, and therefore computational and communication costs, while still maintaining correctness. Our experimental study in the contexts of text mining and session analysis suggests that MG-FSM is significantly more efficient and scalable than alternative approaches.
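    The per-partition mining step can be illustrated with a small single-machine sketch of gap-constrained frequent sequence mining; MG-FSM's MapReduce partitioning and its maximality and closedness handling are omitted, and the brute-force enumeration below is an illustrative assumption, not the paper's algorithm.

```python
from collections import defaultdict

def gap_constrained_patterns(db, sigma=2, gamma=1, max_len=3):
    """Toy gap-constrained frequent sequence miner: a pattern is frequent
    if at least `sigma` input sequences contain it with at most `gamma`
    items skipped between consecutive pattern items."""
    support = defaultdict(set)
    for sid, seq in enumerate(db):
        found = set()
        stack = [((), -1)]              # (pattern so far, last match position)
        while stack:
            pat, last = stack.pop()
            if pat:
                found.add(pat)
            if len(pat) == max_len:
                continue
            lo = 0 if last < 0 else last + 1
            hi = len(seq) if last < 0 else min(len(seq), last + gamma + 2)
            for pos in range(lo, hi):   # extend within the gap window
                stack.append((pat + (seq[pos],), pos))
        for pat in found:
            support[pat].add(sid)       # count each input sequence once
    return {p: len(s) for p, s in support.items() if len(s) >= sigma}

# e.g. gap_constrained_patterns([list("abcab"), list("axbc")], sigma=2, gamma=1)
# finds ('a', 'b') in both sequences: adjacent in the first, one item
# skipped in the second, which the gap constraint gamma=1 still allows.
```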

    Analysis of Statistical QoS in Half Duplex and Full Duplex Dense Heterogeneous Cellular Networks

    Statistical QoS provisioning, an important performance metric for analyzing next-generation mobile cellular networks (5G), is investigated. In this context, by quantifying the performance in terms of the effective capacity, we introduce a lower bound for the system performance that facilitates an efficient analysis. Based on the proposed lower bound, which is mainly built on a per-resource-block analysis, we build a basic mathematical framework to analyze effective capacity in an ultra-dense heterogeneous cellular network. We use our proposed scalable approach to give insights about the possible enhancements of the statistical QoS experienced by end users if heterogeneous cellular networks migrate from a conventional half-duplex to an imperfect full-duplex mode of operation. Numerical results and analysis are provided, where the network is modeled as a Matérn point process. The results demonstrate the accuracy and computational efficiency of the proposed scheme, especially in large-scale wireless systems. Moreover, the minimum level of self-interference cancellation required for the full-duplex system to start outperforming its half-duplex counterpart is investigated.
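    The abstract does not spell out its effective-capacity formulation; for reference, the standard definition (due to Wu and Negi) is shown below, together with the per-block special case that a per-resource-block analysis would typically start from. Whether the paper uses exactly this form is an assumption.

```latex
% S(t): cumulative service in bits by time t; theta > 0: QoS exponent,
% where larger theta corresponds to a stricter delay/backlog constraint.
\[
  E_C(\theta) \;=\; -\lim_{t \to \infty} \frac{1}{\theta t}
      \ln \mathbb{E}\!\left[ e^{-\theta S(t)} \right]
\]
% For block fading with i.i.d. per-block service rate R this reduces to
\[
  E_C(\theta) \;=\; -\frac{1}{\theta}
      \ln \mathbb{E}\!\left[ e^{-\theta R} \right].
\]
```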