
    Towards automatic generation of Piping and Instrumentation Diagrams (P&IDs) with Artificial Intelligence

    Full text link
    Developing Piping and Instrumentation Diagrams (P&IDs) is a crucial step during the development of chemical processes. Currently, this is a tedious, manual, and time-consuming task. We propose a novel, completely data-driven method for the prediction of control structures. Our methodology is inspired by end-to-end transformer-based human language translation models. We cast the control structure prediction as a translation task where Process Flow Diagrams (PFDs) are translated to P&IDs. To use established transformer-based language translation models, we represent the P&IDs and PFDs as strings using our recently proposed SFILES 2.0 notation. Model training follows a transfer learning approach: first, we pre-train the model on generated P&IDs to learn the grammatical structure of the process diagrams; thereafter, it is fine-tuned on real P&IDs. The model achieved a top-5 accuracy of 74.8% on 10,000 generated P&IDs and 89.2% on 100,000 generated P&IDs. These promising results show great potential for AI-assisted process engineering. Tests on a dataset of 312 real P&IDs indicate the need for a larger P&ID dataset for industrial applications.
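    As a rough illustration of the translation framing described in this abstract, the sketch below tokenizes toy SFILES-like strings and trains a standard PyTorch encoder-decoder transformer with teacher forcing. The vocabulary, model size, and example strings are illustrative assumptions; the paper's actual SFILES 2.0 tokenization, pre-training, and fine-tuning setup are not reproduced here.

    import torch
    import torch.nn as nn

    class Seq2SeqTranslator(nn.Module):
        def __init__(self, vocab_size, d_model=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            self.transformer = nn.Transformer(
                d_model=d_model, nhead=4,
                num_encoder_layers=2, num_decoder_layers=2,
                batch_first=True,
            )
            self.out = nn.Linear(d_model, vocab_size)

        def forward(self, src_ids, tgt_ids):
            # src_ids: tokenized PFD string, tgt_ids: shifted P&ID string
            src = self.embed(src_ids)
            tgt = self.embed(tgt_ids)
            mask = self.transformer.generate_square_subsequent_mask(tgt_ids.size(1))
            hidden = self.transformer(src, tgt, tgt_mask=mask)
            return self.out(hidden)  # next-token logits over the P&ID vocabulary

    # Toy, purely illustrative strings standing in for SFILES 2.0 representations.
    pfd_str = "(raw)(hex)(r)(prod)"
    pid_str = "(raw)(hex){TC}(r)(prod)"
    vocab = {ch: i for i, ch in enumerate(sorted(set(pfd_str + pid_str)))}
    pfd = torch.tensor([[vocab[c] for c in pfd_str]])
    pid = torch.tensor([[vocab[c] for c in pid_str]])

    model = Seq2SeqTranslator(vocab_size=len(vocab))
    logits = model(pfd, pid[:, :-1])  # teacher forcing: predict pid[:, 1:]

    In a real setting the character-level vocabulary above would be replaced by a proper SFILES 2.0 tokenizer and the model trained with a cross-entropy loss over the shifted target sequence.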

    Hierarchical Quantized Representations for Script Generation

    Full text link
    Scripts define knowledge about how everyday scenarios (such as going to a restaurant) are expected to unfold. One of the challenges in learning scripts is the hierarchical nature of the knowledge. For example, a suspect who is arrested might plead innocent or guilty, and a very different track of events is then expected to happen. To capture this type of information, we propose an autoencoder model with a latent space defined by a hierarchy of categorical variables. We utilize a recently proposed vector quantization based approach, which allows continuous embeddings to be associated with each latent variable value. This permits the decoder to softly decide which portions of the latent hierarchy to condition on by attending over the value embeddings for a given setting. Our model effectively encodes and generates scripts, outperforming a recent language modeling-based method on several standard tasks and achieving substantially lower perplexity than that baseline. Comment: EMNLP 201
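    The vector quantization step this abstract relies on can be sketched as a nearest-neighbour lookup into a learned codebook with a straight-through gradient, shown below in PyTorch. The codebook size, dimensionality, and flat (non-hierarchical) latent here are illustrative assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn

    class VectorQuantizer(nn.Module):
        def __init__(self, num_codes=32, dim=64):
            super().__init__()
            # one continuous embedding per discrete latent value
            self.codebook = nn.Embedding(num_codes, dim)

        def forward(self, z_e):
            # z_e: (batch, dim) continuous encoder output
            # squared Euclidean distance to every codebook entry
            dists = (z_e.unsqueeze(1) - self.codebook.weight.unsqueeze(0)).pow(2).sum(-1)
            codes = dists.argmin(dim=1)      # discrete latent value per example
            z_q = self.codebook(codes)       # its associated continuous embedding
            # straight-through estimator: gradients flow back to the encoder
            z_q = z_e + (z_q - z_e).detach()
            return codes, z_q

    vq = VectorQuantizer()
    codes, z_q = vq(torch.randn(8, 64))  # 8 encoder outputs snapped to 8 codebook entries

    The continuous embeddings z_q are what a decoder could attend over when deciding which parts of a latent hierarchy to condition on.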

    Vulnerable code repair using Deep Learning

    Get PDF

    Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns

    Full text link
    Attention, specifically scaled dot-product attention, has proven effective for natural language, but it does not have a mechanism for handling hierarchical patterns of arbitrary nesting depth, which limits its ability to recognize certain syntactic structures. To address this shortcoming, we propose stack attention: an attention operator that incorporates stacks, inspired by their theoretical connections to context-free languages (CFLs). We show that stack attention is analogous to standard attention, but with a latent model of syntax that requires no syntactic supervision. We propose two variants: one related to deterministic pushdown automata (PDAs) and one based on nondeterministic PDAs, which allows transformers to recognize arbitrary CFLs. We show that transformers with stack attention are very effective at learning CFLs that standard transformers struggle with, achieving strong results on a CFL with theoretically maximal parsing difficulty. We also show that stack attention is more effective at natural language modeling under a constrained parameter budget, and we include results on machine translation. Comment: 20 pages, 4 figures. Published as a spotlight paper at ICLR 202
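    A minimal sketch of the differentiable-stack idea such an operator builds on: at each step the model emits soft push/pop/no-op probabilities and the stack is updated as a weighted superposition of the three outcomes. This follows the generic soft-stack construction, not the paper's exact deterministic or nondeterministic PDA-based attention; the depth and dimensions below are illustrative.

    import torch

    def soft_stack_step(stack, actions, push_vec):
        # stack:    (batch, depth, dim) stack contents; index 0 is the top
        # actions:  (batch, 3) probabilities of (push, pop, no-op), summing to 1
        # push_vec: (batch, dim) vector to push onto the stack
        p_push = actions[:, 0:1, None]
        p_pop = actions[:, 1:2, None]
        p_noop = actions[:, 2:3, None]
        pushed = torch.cat([push_vec.unsqueeze(1), stack[:, :-1]], dim=1)          # new top, rest shifted down
        popped = torch.cat([stack[:, 1:], torch.zeros_like(stack[:, :1])], dim=1)  # top removed, rest shifted up
        return p_push * pushed + p_pop * popped + p_noop * stack

    stack = torch.zeros(2, 5, 8)                        # batch of 2, depth 5, element dim 8
    actions = torch.softmax(torch.randn(2, 3), dim=-1)  # soft push/pop/no-op decision
    stack = soft_stack_step(stack, actions, torch.randn(2, 8))
    top_read = stack[:, 0]                              # soft read of the stack top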