
    De Novo Drug Design with Joint Transformers

    De novo drug design requires simultaneously generating novel molecules outside of the training data and predicting their target properties, making it a hard task for generative models. To address this, we propose Joint Transformer, which combines a Transformer decoder, a Transformer encoder, and a predictor in a joint generative model with shared weights. We formulate a probabilistic black-box optimization algorithm that employs Joint Transformer to generate novel molecules with improved target properties and outperforms other SMILES-based optimization methods in de novo drug design.
    Comment: Accepted to the NeurIPS 2023 Generative AI and Biology Workshop.
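    As a rough illustration of the shared-weights idea described above (a simplified sketch, not the authors' Joint Transformer architecture), the PyTorch snippet below shares a single Transformer trunk between a next-token head and a property-prediction head; the `JointModelSketch` name and all layer sizes are arbitrary choices for the example.

```python
# Illustrative sketch only: one shared Transformer trunk feeding both a
# token-generation head and a property-prediction head.
import torch
import torch.nn as nn

class JointModelSketch(nn.Module):
    def __init__(self, vocab_size=64, d_model=128, nhead=4, num_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers)   # shared weights
        self.lm_head = nn.Linear(d_model, vocab_size)           # next-token logits
        self.prop_head = nn.Linear(d_model, 1)                  # target-property estimate

    def forward(self, tokens):
        h = self.trunk(self.embed(tokens))                      # (batch, seq, d_model)
        return self.lm_head(h), self.prop_head(h.mean(dim=1))

model = JointModelSketch()
tokens = torch.randint(0, 64, (2, 20))                          # dummy SMILES token ids
logits, prop = model(tokens)
print(logits.shape, prop.shape)                                 # (2, 20, 64) and (2, 1)
```

    In the paper's setting, a black-box optimizer would query the property head to rank sampled molecules; here the forward pass only demonstrates both outputs coming from the same weights.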

    Analyzing Sentiment with Self-Organizing Map and Long Short-Term Memory Algorithms

    This research delves into the impact of Chat Generative Pre-trained Transformer, one of the Open Artificial Intelligence (OpenAI) Generative Pre-trained Transformer models. This model underwent extensive training on a vast corpus of internet text to gain insights into the mechanics of human language and its role in forming phrases, sentences, and paragraphs. The urgency of this inquiry arises from the emergence of Chat Generative Pre-trained Transformer, which has stirred significant debate and captured widespread attention in both research and educational circles. Since its debut in November 2022, Chat Generative Pre-trained Transformer has demonstrated substantial potential across numerous domains. However, concerns voiced on Twitter have centered on potential negative consequences, such as increased forgery and misinformation. Consequently, understanding public sentiment toward Chat Generative Pre-trained Transformer technology through sentiment analysis has become crucial. The research’s primary objective is to classify public sentiment toward Chat Generative Pre-trained Transformer expressed on Twitter in Indonesia. This goal involves quantifying and categorizing public sentiment from Twitter’s vast data pool into three clusters: positive, negative, and neutral. In the data clustering stage, the Self-Organizing Map technique is used. After the text data has been weighted and clustered, the next step uses a Long Short-Term Memory classifier to determine the public sentiment outcomes resulting from the presence of Chat Generative Pre-trained Transformer technology. Rigorous testing has demonstrated the robust performance of the model with optimal parameters: a ReLU activation function, a SOM size of 5, and 128 training epochs for both the SOM and the LSTM, yielding an accuracy rate of 95.07%.
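    The pipeline below is a generic illustration of the stages the abstract describes (term weighting, SOM clustering into pseudo-labels, then LSTM classification), not the authors' code. It assumes the third-party minisom package, scikit-learn, and TensorFlow/Keras; the example tweets and the mapping from SOM units to three sentiment clusters are toy placeholders.

```python
# Illustrative sketch only: TF-IDF weighting -> SOM clustering -> LSTM classifier.
import numpy as np
import tensorflow as tf
from minisom import MiniSom
from sklearn.feature_extraction.text import TfidfVectorizer

texts = ["layanan ini sangat membantu", "saya khawatir soal hoaks", "biasa saja"]  # dummy tweets
tfidf = TfidfVectorizer().fit_transform(texts).toarray()            # term weighting

som = MiniSom(5, 5, tfidf.shape[1], sigma=1.0, learning_rate=0.5)   # SOM size of 5
som.train_random(tfidf, 128)                                        # 128 SOM epochs
labels = np.array([som.winner(v)[0] % 3 for v in tfidf])            # toy rule: map unit row -> 3 clusters

vectorize = tf.keras.layers.TextVectorization(max_tokens=1000, output_sequence_length=20)
vectorize.adapt(texts)
seqs = vectorize(tf.constant(texts))                                # integer token sequences

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(1000, 32),
    tf.keras.layers.LSTM(64, activation="relu"),                    # ReLU activation, as in the abstract
    tf.keras.layers.Dense(3, activation="softmax"),                 # positive / negative / neutral
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(seqs, labels, epochs=128, verbose=0)                      # 128 LSTM epochs
```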

    LightLM: A Lightweight Deep and Narrow Language Model for Generative Recommendation

    This paper presents LightLM, a lightweight Transformer-based language model for generative recommendation. While Transformer-based generative modeling has gained importance in various AI sub-fields such as NLP and vision, generative recommendation is still in its infancy due to its unique demand for personalized generative modeling. Existing works on generative recommendation often use NLP-oriented Transformer architectures such as T5, GPT, LLaMA, and M6, which are heavyweight and are not specifically designed for recommendation tasks. LightLM tackles this issue by introducing a lightweight, deep and narrow Transformer architecture specifically tailored for direct generation of recommendation items. This structure is especially apt for straightforward generative recommendation and stems from the observation that the language model does not have to be too wide for this task, as the input predominantly consists of short tokens that are well suited to the model's capacity. We also show that our devised user and item ID indexing methods, i.e., Spectral Collaborative Indexing (SCI) and Graph Collaborative Indexing (GCI), enable the deep and narrow Transformer architecture to outperform large-scale language models for recommendation. In addition, to address the hallucination problem of generating items as output, we propose a constrained generation process for generative recommenders. Experiments on real-world datasets show that LightLM outperforms various competitive baselines in terms of both recommendation accuracy and efficiency. The code can be found at https://github.com/dongyuanjushi/LightLM.
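    To make the constrained-generation idea concrete, here is a minimal hypothetical sketch (not LightLM's implementation): valid item IDs are stored as a prefix table, and decoding masks the logits so that only tokens extending a valid item ID can be emitted. The token IDs and the `constrained_step` helper are invented for illustration.

```python
# Illustrative sketch only: logits masking so generated sequences always form valid item IDs.
import torch

valid_items = [[5, 2, 9], [5, 3, 1], [7, 4, 4]]                      # tokenised item IDs (toy catalogue)
prefixes = {tuple(item[:i]): set() for item in valid_items for i in range(len(item))}
for item in valid_items:
    for i in range(len(item)):
        prefixes[tuple(item[:i])].add(item[i])                       # prefix -> allowed next tokens

def constrained_step(logits, generated):
    """Mask logits so the next token keeps the sequence a valid item-ID prefix.

    An empty allowed set means the partial sequence is already a complete item
    (or invalid), so every token would be masked out.
    """
    allowed = prefixes.get(tuple(generated), set())
    mask = torch.full_like(logits, float("-inf"))
    mask[list(allowed)] = 0.0
    return logits + mask

logits = torch.randn(10)                                             # dummy vocabulary of 10 tokens
generated = [5]                                                      # first token already emitted
next_token = int(torch.argmax(constrained_step(logits, generated)))
print(next_token)                                                    # one of {2, 3}
```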

    A Conditional Generative Chatbot using Transformer Model

    A chatbot serves as a communication tool between a human user and a machine, producing an appropriate answer based on the human input. In more recent approaches, a combination of natural language processing and sequential models is used to build a generative chatbot. The main challenge of these models is their sequential nature, which leads to less accurate results. To tackle this challenge, this paper proposes a novel end-to-end architecture that uses conditional Wasserstein Generative Adversarial Networks and a transformer model for answer generation in chatbots. While the generator of the proposed model consists of a full transformer model to generate an answer, the discriminator includes only the encoder part of a transformer model followed by a classifier. To the best of our knowledge, this is the first time a generative chatbot has been proposed with a transformer embedded in both the generator and the discriminator. Relying on the parallel computing of the transformer model, results on the Cornell Movie-Dialog corpus and the Chit-Chat datasets confirm the superiority of the proposed model compared to state-of-the-art alternatives under different evaluation metrics.
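    As a hedged sketch of the discriminator side described above (an encoder-only transformer followed by a scoring head), the PyTorch snippet below scores real versus generated answer embeddings with a Wasserstein-style critic loss. It is not the paper's implementation; the `TransformerCritic` name and all dimensions are arbitrary.

```python
# Illustrative sketch only: encoder-only transformer critic with a WGAN-style objective.
import torch
import torch.nn as nn

class TransformerCritic(nn.Module):
    def __init__(self, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.score = nn.Linear(d_model, 1)                 # scalar critic score

    def forward(self, x):                                  # x: (batch, seq, d_model)
        return self.score(self.encoder(x).mean(dim=1))

critic = TransformerCritic()
real = torch.randn(8, 16, 128)                             # embedded ground-truth answers (dummy)
fake = torch.randn(8, 16, 128)                             # embedded generator outputs (dummy)
# Wasserstein critic objective: maximise score(real) - score(fake)
critic_loss = -(critic(real).mean() - critic(fake).mean())
print(critic_loss.item())
```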

    The generative quantum eigensolver (GQE) and its application for ground state search

    We introduce the generative quantum eigensolver (GQE), a novel method for applying classical generative models to quantum simulation. The GQE algorithm optimizes a classical generative model to produce quantum circuits with desired properties. Here, we develop a transformer-based implementation, which we name the generative pre-trained transformer-based (GPT) quantum eigensolver (GPT-QE), leveraging both pre-training on existing datasets and training without any prior knowledge. We demonstrate the effectiveness of training and pre-training GPT-QE in the search for ground states of electronic structure Hamiltonians. GQE strategies can extend beyond the problem of Hamiltonian simulation into other application areas of quantum computing.
    Comment: 16 pages, 7 figures.
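    A minimal toy sketch of the generate-and-score loop implied above (not GPT-QE itself): gate sequences are sampled from a small operator pool, applied to a single-qubit state, and scored by their energy under a toy Hamiltonian. In the actual method, a transformer would be trained so that low-energy sequences become more probable; the gate pool and Hamiltonian below are invented for illustration.

```python
# Illustrative sketch only: sample gate sequences, apply to |0>, score by energy.
import numpy as np

theta = np.pi / 8
pool = {
    "rx": np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                    [-1j * np.sin(theta / 2), np.cos(theta / 2)]]),
    "rz": np.array([[np.exp(-1j * theta / 2), 0],
                    [0, np.exp(1j * theta / 2)]]),
}
H = np.array([[1.0, 0.5], [0.5, -1.0]])                    # toy Hermitian "Hamiltonian"

def energy(sequence):
    state = np.array([1.0 + 0j, 0.0])                      # start in |0>
    for gate in sequence:
        state = pool[gate] @ state
    return float(np.real(state.conj() @ H @ state))

rng = np.random.default_rng(0)
samples = [rng.choice(list(pool), size=5).tolist() for _ in range(4)]
for seq in samples:
    print(seq, round(energy(seq), 4))                      # lower energy = better circuit
```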

    Dual-Space Optimization: Improved Molecule Sequence Design by Latent Prompt Transformer

    Designing molecules with desirable properties, such as drug-likeness and high binding affinity towards protein targets, is a challenging problem. In this paper, we propose the Dual-Space Optimization (DSO) method, which integrates latent-space sampling and data-space selection to solve this problem. DSO iteratively updates a latent space generative model and a synthetic dataset in an optimization process that gradually shifts the generative model and the synthetic data towards regions of desired property values. Our generative model takes the form of a Latent Prompt Transformer (LPT), in which the latent vector serves as the prompt of a causal transformer. Our extensive experiments demonstrate the effectiveness of the proposed method, which sets new performance benchmarks across single-objective, multi-objective, and constrained molecule design tasks.
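    The loop below is a heavily simplified, illustrative rendering of the alternating scheme described above (latent-space sampling followed by data-space selection), with toy stand-ins for the Latent Prompt Transformer decoder and the property oracle; the `generate` and `score` functions and all sizes are hypothetical.

```python
# Illustrative sketch only: alternate latent sampling and elite selection,
# shifting the sampler toward high-property regions.
import numpy as np

rng = np.random.default_rng(0)
mu = np.zeros(8)                                           # latent prior mean (toy stand-in for the model)

def generate(z):                                           # toy decoder: latent -> "molecule" vector
    return np.tanh(z)

def score(x):                                              # toy property oracle: higher is better
    return -np.sum((x - 0.5) ** 2)

dataset = []
for step in range(20):
    latents = rng.normal(mu, 1.0, size=(64, 8))            # latent-space sampling
    candidates = [generate(z) for z in latents]
    ranked = sorted(zip(latents, candidates), key=lambda p: score(p[1]), reverse=True)
    elite = ranked[:8]                                     # data-space selection of top candidates
    dataset.extend(x for _, x in elite)                    # grow the synthetic dataset
    mu = np.mean([z for z, _ in elite], axis=0)            # shift the sampler toward good regions
print(round(score(generate(mu)), 4))
```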