
    Fine-grained Multimodal Sentiment Analysis Based on Gating and Attention Mechanism

    In recent years, more and more people express their feelings through both images and text, boosting the growth of multimodal data. Multimodal data contains richer semantics and is more conducive to judging people's real emotions. To fully learn the features of each single modality and integrate the modal information, this paper proposes FCLAG, a fine-grained multimodal sentiment analysis method based on gating and attention mechanisms. First, the text is processed at both the character level and the word level: a CNN is used to extract more fine-grained emotional information from characters, and an attention mechanism is used to improve the expressiveness of keywords. For images, a gating mechanism is added to control the flow of image information between networks. The image and text vectors together represent the original data. A bidirectional LSTM then performs further learning, which enhances the information interaction between the modalities. Finally, the multimodal feature representation is fed into the classifier. The method is verified on a self-built image-text dataset. The experimental results show that, compared with other sentiment classification models, this method achieves notable improvements in accuracy and F1 score and can effectively improve the performance of multimodal sentiment analysis.
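    The gating idea in the abstract above can be sketched in a few lines. This is a minimal illustration, not the FCLAG implementation; the dimension `d`, the weights, and the `gated_fusion` helper are all hypothetical. A learned sigmoid gate decides, per feature, how much image information flows into the fused representation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(text_vec, image_vec, W, b):
    # The sigmoid gate decides, per feature, how much image
    # information flows into the fused representation.
    gate = sigmoid(W @ np.concatenate([text_vec, image_vec]) + b)
    return gate * image_vec + (1.0 - gate) * text_vec

d = 8                              # hypothetical feature dimension
W = rng.normal(size=(d, 2 * d))    # gate weights (would be learned)
b = np.zeros(d)
t = rng.normal(size=d)             # stand-in text feature vector
v = rng.normal(size=d)             # stand-in image feature vector
fused = gated_fusion(t, v, W, b)
print(fused.shape)                 # (8,)
```

    Because the gate is a convex combination per feature, each fused value lies between the corresponding text and image values, so neither modality can be fully overwritten.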

    Role of Gp120 Glycosylation in Sexual Transmission of HIV

    Background: In chronic HIV patients, the viral populations are genetically diverse due to mutations introduced by the viral reverse transcriptase during HIV replication. However, more than 80% of new infections result from single transmitted founder (TF) viruses; therefore, targeting the TFs is key to controlling AIDS worldwide. Gp120 is a glycosylated envelope protein required for HIV infection, propagation, and transmission. Glycans on gp120 influence HIV infectivity through their interactions with lectins, the carbohydrate-binding immune proteins in the host mucosa. To transmit sexually, viruses must overcome the lectin traps to access more target T cells. Hypothesis: TF viruses are less likely to be trapped by host lectins due to their reduced gp120 glycosylation, and are thus more infectious. Methods: We aim to characterize and compare the gp120 glycosylation signatures of TF and chronic HIV strains, B4 and Q0 respectively, using mass spectrometry (MS), surface plasmon resonance (SPR), and capillary electrophoresis (CE). To date, we have established a workflow to purify gp120 glycoproteins, perform MS using EThcD methods, and analyze raw MS data using the GlycoPAT software. We are currently analyzing MS data for three replicates of B4 and the first replicate of Q0. We will then compare the glycosylation patterns between the two strains. CE and SPR will be performed to test the glycan enrichment and the functional interactions between gp120 and lectins, respectively. Discussion: Our results will provide qualitative and quantitative details about the gp120 glycosylation underlying the strong infectivity of TF viruses, shedding light on new strategies for developing HIV vaccines.

    Node Copying: A Random Graph Model for Effective Graph Sampling

    There has been increased interest in applying machine learning techniques to relational structured data based on an observed graph. Often, this graph is not fully representative of the true relationships among nodes. In these settings, building a generative model conditioned on the observed graph makes it possible to take the graph uncertainty into account. Various existing techniques either rely on restrictive assumptions, fail to preserve topological properties within the samples, or are prohibitively expensive for larger graphs. In this work, we introduce the node copying model for constructing a distribution over graphs. A random graph is sampled by replacing each node's neighbors with those of a randomly sampled similar node. The sampled graphs preserve key characteristics of the graph structure without explicitly targeting them. Additionally, sampling from this model is extremely simple and scales linearly with the number of nodes. We show the usefulness of the copying model in three tasks. First, in node classification, a Bayesian formulation based on node copying achieves higher accuracy in sparse data settings. Second, we employ the proposed model to mitigate the effect of adversarial attacks on the graph topology. Last, incorporating the model in a recommendation system setting improves recall over state-of-the-art methods.
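    The sampling step described above is simple enough to sketch directly. This is a toy sketch, not the authors' implementation: it assumes a dict-of-sets adjacency and a precomputed list of similar nodes per node (how similarity is obtained, e.g. from node embeddings, is left out), and the copy probability `p` is a made-up parameter.

```python
import random

def node_copying_sample(adj, similar, p=0.5, seed=0):
    # adj: dict mapping node -> set of neighbor nodes
    # similar: dict mapping node -> list of "similar" nodes
    # With probability p, a node's neighborhood is replaced by
    # that of a randomly chosen similar node; otherwise it is kept.
    rng = random.Random(seed)
    sampled = {}
    for v, nbrs in adj.items():
        if similar.get(v) and rng.random() < p:
            u = rng.choice(similar[v])
            sampled[v] = set(adj[u])   # copy u's neighborhood
        else:
            sampled[v] = set(nbrs)     # keep the observed edges
    return sampled

adj = {0: {1, 2}, 1: {0}, 2: {0, 3}, 3: {2}}
similar = {0: [2], 1: [3], 2: [0], 3: [1]}
g = node_copying_sample(adj, similar, p=1.0)   # every node copies
```

    One pass over the nodes with a constant amount of work per node is what makes sampling linear in the number of nodes.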

    Memory Augmented Graph Neural Networks for Sequential Recommendation

    The chronological order of user-item interactions can reveal time-evolving and sequential user behaviors in many recommender systems. The items that users will interact with may depend on the items accessed in the past. However, the substantial increase in users and items means that sequential recommender systems still face non-trivial challenges: (1) the difficulty of modeling short-term user interests; (2) the difficulty of capturing long-term user interests; (3) the effective modeling of item co-occurrence patterns. To tackle these challenges, we propose a memory augmented graph neural network (MA-GNN) to capture both long- and short-term user interests. Specifically, we apply a graph neural network to model the item contextual information within a short-term period and utilize a shared memory network to capture the long-range dependencies between items. In addition to modeling user interests, we employ a bilinear function to capture the co-occurrence patterns of related items. We extensively evaluate our model on five real-world datasets, comparing it with several state-of-the-art methods and using a variety of performance metrics. The experimental results demonstrate the effectiveness of our model for the task of Top-K sequential recommendation.
    Comment: Accepted by the 34th AAAI Conference on Artificial Intelligence (AAAI 2020).
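    The bilinear co-occurrence term mentioned in the abstract has a standard form, score(i, j) = e_i^T W e_j. The sketch below is only an illustration of that form with random stand-in embeddings; the names, dimensions, and values are hypothetical and not taken from the MA-GNN code.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_items = 4, 5                    # hypothetical sizes
E = rng.normal(size=(n_items, d))    # stand-in item embeddings
W = rng.normal(size=(d, d))          # bilinear weight (would be learned)

def cooccurrence_score(i, j):
    # Bilinear score e_i^T W e_j: how strongly item j tends to
    # co-occur with (follow) item i.
    return float(E[i] @ W @ E[j])

# Score every candidate item against item 0.
scores = [cooccurrence_score(0, j) for j in range(n_items)]
```

    Because W is a full matrix rather than the identity, the score can reward item pairs whose embeddings are related without being similar, which is the point of modeling co-occurrence separately from user interest.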

    Clinical presentation of hemophagocytic lymphohistiocytosis in adults is less typical than in children

    OBJECTIVE: Hemophagocytic lymphohistiocytosis in adults is largely underdiagnosed. To improve the rate and accuracy of diagnosis in adults, the clinical and laboratory characteristics of hemophagocytic lymphohistiocytosis were analyzed and compared between adults and children in a Chinese cohort. METHOD: Data from 50 hemophagocytic lymphohistiocytosis patients, including 34 adults and 16 children who fulfilled the 2004 hemophagocytic lymphohistiocytosis diagnostic criteria, were collected and analyzed. RESULTS: 1. Etiological factors: The proportion of Epstein-Barr virus infection was lower in adults compared with children, whereas fungal infection and natural killer/T cell lymphoma were more frequent in adults (…)

    Variation in the Analysis of Positively Selected Sites Using Nonsynonymous/Synonymous Rate Ratios: An Example Using Influenza Virus

    Sites in a gene showing a nonsynonymous/synonymous rate ratio (ω) >1 have frequently been identified as being under positive selection. To examine the performance of such analysis, sites with an ω ratio >1 in the HA1 gene of H3N2 subtype human influenza viruses were identified from seven overlapping sequence data sets in this study. Our results showed that the sites with an ω ratio >1 varied significantly among the data sets even though they targeted similar clusters, indicating that the analysis is likely to be either of low sensitivity or of low specificity in identifying sites under positive selection. Most (43/45) of the sites showing ω >1 calculated from at least one data set are involved in B-cell epitopes, which cover fewer than half of the sites in the protein, suggesting that the analysis is likely to be of low sensitivity rather than of low specificity. It was further found that the sensitivity of the analysis could not be enhanced by including more sequences or covering longer time intervals. Previous reports have likewise probably identified only a portion of the sites under positive selection in the viral gene using the ω ratio. The low sensitivity of the analysis may arise because some sites under positive selection in the gene are simultaneously under negative (purifying) selection due to functional constraints, so their ω ratios can be <1. Theoretically, sites under the two opposite selection forces at the same time favor only certain nonsynonymous changes, e.g. those that change the antigenicity of the gene while maintaining its function. This study also suggests that we can sometimes identify more sites under positive selection using the ω ratio by integrating the positively selected sites estimated from multiple data sets.
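    As a concrete illustration of the ω ratio itself: ω = dN/dS, the nonsynonymous substitution rate per nonsynonymous site divided by the synonymous rate per synonymous site, with ω > 1 read as evidence of positive selection. The counts below are made up for illustration and are not data from this study.

```python
def omega(nonsyn_subs, nonsyn_sites, syn_subs, syn_sites):
    # dN: nonsynonymous substitutions per nonsynonymous site
    # dS: synonymous substitutions per synonymous site
    dN = nonsyn_subs / nonsyn_sites
    dS = syn_subs / syn_sites
    return dN / dS

# Toy counts only: 6 nonsynonymous changes over 30 nonsynonymous
# sites vs 2 synonymous changes over 20 synonymous sites.
w = omega(6, 30, 2, 20)
print(w)   # 2.0 -> would suggest positive selection
```

    The study's point is that a site can be genuinely under positive selection yet still show ω ≤ 1 here, because simultaneous purifying selection suppresses most nonsynonymous changes and drags dN down.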

    A Workflow to Analyze EThcD Mass Spectrometry Data for Studying HIV gp120 Glycosylation

    The great heterogeneity of HIV populations and the richness of their surface glycan clouds make it difficult to locate a conserved and exposed protein epitope as an effective vaccine target. However, more than 80% of new infections result from single transmitted founder (T/F) viruses. We set out to design a workflow to study the traits of T/Fs that underlie their superior infectivity, specifically the glycosylation patterns of gp120, a subunit of the HIV envelope protein responsible for binding to host cell receptors. Our main research methods are Western blot and mass spectrometry. Our current understanding of the mass spectrometry data indicates that our T/F and chronic HIV strains have differential distributions of glycan density at several key N-sites throughout the gp120 peptide backbones, which may be related to the differential transmission fitness of the two strains and could potentially be used as novel glycopeptide-based HIV vaccine targets.

    Application of Deep Learning for Early Screening of Colorectal Precancerous Lesions under White Light Endoscopy

    Background and Objective. Colorectal cancer (CRC) is a common gastrointestinal tumour with high morbidity and mortality. Endoscopic examination is an effective method for the early detection of digestive system tumours. However, for various reasons, missed diagnoses and misdiagnoses are common. Our goal is to use deep learning methods to establish colorectal lesion detection, localization, and classification models based on white light endoscopic images and to design a computer-aided diagnosis (CAD) system that helps physicians reduce the missed-diagnosis rate and improve detection accuracy. Methods. We collected and curated white light endoscopic images from patients undergoing colonoscopy. A convolutional neural network model is used to detect whether an image contains lesions: CRC, colorectal adenoma (CRA), or colorectal polyps. Accuracy, sensitivity, and specificity are used as indicators to evaluate this model. Then, an instance segmentation model is used to locate and classify the lesions in the images that contain them, with mAP (mean average precision), AP50, and AP75 used to evaluate its performance. Results. In detecting whether an image contains lesions, we compared ResNet50 with four other models: AlexNet, VGG19, ResNet18, and GoogLeNet. ResNet50 performed better than the other models, scoring an accuracy of 93.0%, a sensitivity of 94.3%, and a specificity of 90.6%. In localizing and classifying lesions with Mask R-CNN, the mAP, AP50, and AP75 were 0.676, 0.903, and 0.833, respectively. Conclusion. We developed and compared five models for the detection of lesions in white light endoscopic images. ResNet50 showed the best performance, and the Mask R-CNN model can be used to locate and classify lesions in images containing lesions.
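    The screening metrics reported above reduce to confusion-matrix ratios. The sketch below uses made-up counts, not the paper's data, purely to show how accuracy, sensitivity, and specificity relate to true/false positives and negatives.

```python
def screening_metrics(tp, fp, tn, fn):
    # Accuracy: all correct calls over all images.
    # Sensitivity: fraction of lesion images correctly flagged.
    # Specificity: fraction of normal images correctly cleared.
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Made-up confusion-matrix counts for illustration.
acc, sens, spec = screening_metrics(tp=90, fp=8, tn=85, fn=5)
```

    For a screening tool, sensitivity is the metric most directly tied to the missed-diagnosis rate: every false negative is a lesion the physician never gets shown.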