
    CrunchGPT: A chatGPT assisted framework for scientific machine learning

    Scientific Machine Learning (SciML) has recently advanced across many areas of computational science and engineering. The objective is to integrate data and physics seamlessly, without employing elaborate and computationally taxing data-assimilation schemes. However, preprocessing, problem formulation, code generation, postprocessing, and analysis are still time-consuming and may prevent SciML from achieving wide applicability in industrial settings and in digital-twin frameworks. Here, we integrate the various stages of SciML under the umbrella of ChatGPT to formulate CrunchGPT, which plays the role of a conductor orchestrating the entire SciML workflow from simple user prompts. Specifically, we present two examples that demonstrate the potential use of CrunchGPT in optimizing airfoils in aerodynamics and in obtaining flow fields for various geometries in interactive mode, with emphasis on the validation stage. To demonstrate the flow of CrunchGPT, and to create an infrastructure that can facilitate a broader vision, we built a web-app-based guided user interface that includes options for a comprehensive summary report. The overall objective is to extend CrunchGPT to handle diverse problems in computational mechanics, design, optimization and controls, and the general scientific computing tasks involved in SciML, so that it serves not only as a research assistant but also as an educational tool. While the examples here focus on fluid mechanics, future versions will target solid mechanics and materials science, geophysics, systems biology, and bioinformatics.
    Comment: 20 pages, 26 figures
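
    The orchestration idea above, where a language model routes a user prompt through the stages of a SciML pipeline, can be sketched in a few lines. The following Python sketch is hypothetical: the stage names, function signatures, and routing logic are illustrative assumptions, not CrunchGPT's actual implementation.

        # Hypothetical sketch of prompt-driven SciML orchestration in the
        # spirit of CrunchGPT; all names here are illustrative assumptions.

        def preprocess(prompt):
            """Parse the user's request into a problem specification."""
            return {"task": "airfoil_optimization", "prompt": prompt}

        def generate_code(spec):
            """Stand-in for the LLM call that emits solver code for the spec."""
            return "# solver code for " + spec["task"]

        def run_and_validate(code):
            """Execute the generated code and collect validation metrics."""
            return {"converged": True, "error": 1e-3}

        def report(results):
            """Summarize the run, as the guided interface's report would."""
            return "converged=%s, error=%.1e" % (results["converged"],
                                                 results["error"])

        def orchestrate(prompt):
            """Run preprocessing, code generation, validation and reporting."""
            return report(run_and_validate(generate_code(preprocess(prompt))))

        print(orchestrate("Optimize a NACA airfoil for minimum drag"))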

    A review of NLIDB with deep learning: findings, challenges and open issues

    Relational databases store massive amounts of data, and knowledge of a structured query language (SQL) is a prerequisite for accessing it. That is not feasible for all non-technical personnel, which motivates systems that translate text into SQL queries automatically rather than leaving the task to the user. The text-to-SQL task is also important because of its economic and industrial value. A Natural Language Interface to Database (NLIDB) is a system that supports the text-to-SQL task. Developing NLIDB systems is a long-standing problem; previously they were built on domain-specific ontologies via pipelined methods. Recently, a growing variety of deep learning ideas and techniques has brought this area back to attention, and end-to-end deep learning models are now being proposed for the task. Several publicly available datasets are used for experimentation, making comparison across contributions convenient. In this paper, we review current work, summarize research trends, and highlight challenging open issues of NLIDB with deep learning models. We discuss the importance of datasets, prediction-model approaches, and open challenges. In addition, methods and techniques are summarized, along with their influence on the overall structure and performance of NLIDB systems. This paper can help future researchers start with prior knowledge of the findings and challenges in NLIDB with deep learning approaches.
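
    To make the task concrete, an NLIDB system maps a natural-language question over a database schema to an executable SQL query. The pair below is a hypothetical illustration, not an example from any benchmark discussed in the paper; sqlite3 is used only to show that the target query executes.

        # Illustrative text-to-SQL pair; the schema, data, and question are
        # hypothetical examples, not drawn from a benchmark.
        import sqlite3

        question = "How many employees earn more than 50000?"
        target_sql = "SELECT COUNT(*) FROM employee WHERE salary > 50000"

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE employee (id INTEGER, name TEXT, salary REAL)")
        conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                         [(1, "Ada", 60000), (2, "Lin", 45000), (3, "Sam", 72000)])

        # An NLIDB system would map `question` to `target_sql`; executing the
        # query against the database answers the question (here: 2).
        print(question, "->", conn.execute(target_sql).fetchone()[0])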

    Deep representation learning: Fundamentals, Perspectives, Applications, and Open Challenges

    Machine learning algorithms have had a profound impact on the field of computer science over the past few decades. Their performance is greatly influenced by the representations derived from the data during learning. The representations learned in a successful learning process should be concise, discrete, meaningful, and applicable across a variety of tasks. Recent effort has been directed toward developing deep learning models, which have proven particularly effective at capturing high-dimensional, non-linear, and multi-modal characteristics. In this work, we discuss the principles and developments in learning representations and converting them into desirable applications. In addition, for each framework or model, the key issues and open challenges, as well as the advantages, are examined.

    Accurate and Robust Text-to-SQL Parsing using Intermediate Representation

    Text-to-SQL studies how to translate natural language descriptions into SQL queries. The key challenge is addressing the mismatch between natural language and SQL queries. To bridge this gap, we propose an SQL intermediate representation (IR) called Natural SQL (NatSQL), which makes inferring SQL easier for models and improves the performance of existing models. We also study the robustness of existing models with respect to schema linking and compositional generalization. Specifically, NatSQL preserves the core functionality of SQL while simplifying queries as follows: (1) dispensing with operators and keywords such as GROUP BY, HAVING, FROM, and JOIN ON, for which it is usually hard to find counterparts in the text descriptions; (2) removing the need for nested subqueries and set operators; and (3) making schema linking easier by reducing the number of required schema items. On Spider, a challenging text-to-SQL benchmark that contains complex and nested SQL queries, NatSQL outperforms other IRs and significantly improves the performance of several previous state-of-the-art models. Furthermore, for existing models that do not support executable SQL generation, NatSQL easily enables them to generate executable SQL queries.

    This thesis also discusses the robustness of text-to-SQL models. Recently, there has been significant progress in studying neural networks that translate text descriptions into SQL queries. Despite achieving good performance on some public benchmarks, existing text-to-SQL models typically rely on lexical matching between words in natural language (NL) questions and tokens in table schemas, which may render models vulnerable to attacks that break the schema linking mechanism. In particular, this thesis introduces Spider-Syn, a human-curated dataset based on the Spider benchmark for text-to-SQL translation. NL questions in Spider-Syn were modified from Spider by replacing their schema-related words with manually selected synonyms that reflect real-world question paraphrases. Experiments show that accuracy drops dramatically when such explicit correspondence between NL questions and table schemas is eliminated, even if the synonyms are not adversarially selected to conduct worst-case attacks. We present two categories of approaches to improve model robustness. The first utilizes additional synonym annotations for table schemas by modifying the model input, whereas the second is based on adversarial training. Experiments illustrate that both categories significantly outperform their counterparts without the defense, and that the approaches in the first category are more effective.

    Based on the above results, we further discuss Exact Match based Schema Linking (EMSL), which has become standard in text-to-SQL: many state-of-the-art models employ EMSL, and their performance drops significantly when the EMSL component is removed. However, we show that EMSL reduces robustness, rendering models vulnerable to synonym substitution and typos. Instead of relying on EMSL to make up for deficiencies in question-schema encoding, we show that using a pre-trained language model as an encoder can improve performance without EMSL, yielding a more robust model. We also study the design choices of the schema linking module, finding that a suitable design benefits performance and interpretability. Our experiments show that a better understanding of the schema linking mechanism can improve model interpretability, robustness, and performance.

    This thesis finally discusses the text-to-SQL compositional generalization challenge: neural networks struggle to generalize compositionally when training and test distributions differ. We propose a clause-level compositional example generation method. We first split the sentences in the Spider text-to-SQL dataset into sub-sentences, annotating each sub-sentence with its corresponding SQL clause, resulting in a new dataset, Spider-SS. We then construct a further dataset, Spider-CG, by composing Spider-SS sub-sentences in different combinations, to test the ability of models to generalize compositionally. Experiments show that existing models suffer significant performance degradation when evaluated on Spider-CG, even though every sub-sentence is seen during training. To address this problem, we modify a number of state-of-the-art models to train on the segmented data of Spider-SS, and we show that this method improves generalization performance.
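
    The clause-removal idea behind NatSQL can be illustrated on a small example. The NatSQL rendering below is an approximation for illustration only; the exact grammar is defined in the thesis, so treat the flattened form as an assumption rather than the paper's notation.

        # Hypothetical illustration of NatSQL-style simplification: the SQL
        # query needs JOIN ON, GROUP BY, and HAVING, whose tokens rarely have
        # counterparts in the question, while the IR keeps one flat clause.
        question = "Which countries have singers with more than 3 concerts?"

        sql = ("SELECT singer.country FROM singer "
               "JOIN concert ON singer.id = concert.singer_id "
               "GROUP BY singer.country HAVING COUNT(*) > 3")

        # NatSQL-style IR (approximate): no JOIN/GROUP BY/HAVING keywords and
        # fewer schema items; the join path and grouping are recovered when
        # the IR is translated back to executable SQL.
        natsql = "SELECT singer.country WHERE count(concert.*) > 3"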

    Neural Graph Transfer Learning in Natural Language Processing Tasks

    Natural language is essential in our daily lives, as we rely on language to communicate and exchange information. A fundamental goal of natural language processing (NLP) is to let machines understand natural language in order to help or replace human experts in mining knowledge and completing tasks. Many NLP tasks deal with sequential data; for example, a sentence is a sequence of words. Recently, deep learning-based language models (e.g., BERT \citep{devlin2018bert}) have achieved significant improvements on many existing tasks, including text classification and natural language inference. However, not all tasks can be formulated using sequence models. Graph-structured data is also fundamental in NLP, including entity linking, entity classification, relation extraction, abstract meaning representation, and knowledge graphs \citep{santoro2017simple,hamilton2017representation,kipf2016semi}. In this scenario, BERT-based pretrained models may not be suitable. The Graph Convolutional Network (GCN) \citep{kipf2016semi} is a deep neural network model designed for graphs, and it has shown great potential in text classification, link prediction, question answering, and more. This dissertation presents novel graph models for NLP tasks, including text classification, prerequisite chain learning, and coreference resolution. We focus on different perspectives of graph convolutional network modeling: for text classification, a novel graph-construction method is proposed that makes predictions interpretable; for prerequisite chain learning, we propose multiple aggregation functions that utilize neighbors for better information exchange; for coreference resolution, we study how graph pretraining can help when labeled data is limited. Moreover, an important branch of this work applies pretrained language models to the mentioned tasks, so this dissertation also focuses on transfer learning methods that generalize pretrained models to other domains, including medical, cross-lingual, and web data. Finally, we propose a new task called unsupervised cross-domain prerequisite chain learning and study novel graph-based methods to transfer knowledge over graphs.
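
    The GCN layer \citep{kipf2016semi} at the core of these models is compact: each layer mixes a node's features with those of its neighbors through the symmetrically normalized adjacency matrix. Below is a minimal NumPy sketch of one forward pass; the toy graph, feature sizes, and random weights are assumptions chosen for illustration.

        # One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)
        # (Kipf & Welling); the toy graph and dimensions are illustrative.
        import numpy as np

        def gcn_layer(A, H, W):
            A_hat = A + np.eye(A.shape[0])            # add self-loops
            d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
            A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt  # normalize adjacency
            return np.maximum(A_norm @ H @ W, 0.0)    # propagate + ReLU

        # 4-node toy graph (nodes could be words, entities, or documents).
        A = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        H = np.random.randn(4, 8)  # input node features (dim 8)
        W = np.random.randn(8, 4)  # learned projection (dim 8 -> 4)
        print(gcn_layer(A, H, W).shape)  # (4, 4)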