    Thermal simulation method of solidification process in heavy ingot

    Learning Transferable Adversarial Examples via Ghost Networks

    Recent developments in adversarial attacks have shown that ensemble-based methods outperform traditional, non-ensemble ones in black-box attacks. However, because it is computationally prohibitive to acquire a family of diverse models, these methods are constrained by the limited number of models available for ensembling and therefore achieve inferior performance. In this paper, we propose Ghost Networks to improve the transferability of adversarial examples. The critical principle of ghost networks is to apply feature-level perturbations to an existing model to potentially create a huge set of diverse models. These models are then fused by longitudinal ensemble. Extensive experimental results suggest that the number of networks is essential for improving the transferability of adversarial examples, but that it is less necessary to independently train different networks and ensemble them through intensive aggregation. Instead, our work can be used as a computationally cheap and easily applied plug-in to improve adversarial approaches in both single-model and multi-model attacks, and it is compatible with residual and non-residual networks. In a reproduction of the NeurIPS 2017 adversarial competition, our method outperforms the No. 1 attack submission by a large margin, demonstrating its effectiveness and efficiency. Code is available at https://github.com/LiYingwei/ghost-network.
    Comment: To appear in AAAI-2
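
    A minimal sketch of the feature-level perturbation idea described in the abstract, assuming a PyTorch surrogate model; the hook placement, dropout rate, and the iterative sign-gradient loop below are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

def make_ghost_hooks(model, drop_prob=0.1):
    # Perturb intermediate features with dropout so that every forward pass
    # behaves like a slightly different "ghost" of the same base network.
    handles = []
    for module in model.modules():
        if isinstance(module, nn.ReLU):
            handles.append(module.register_forward_hook(
                lambda mod, inp, out: F.dropout(out, p=drop_prob, training=True)))
    return handles

def ghost_attack(model, x, y, eps=8 / 255, steps=10):
    # Iterative sign-gradient attack; each step implicitly samples a new ghost
    # (a fresh dropout mask), so the accumulated updates act as a longitudinal
    # ensemble over many ghosts without training any extra model.
    alpha = eps / steps
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.clamp(x + (x_adv - x).clamp(-eps, eps), 0.0, 1.0).detach()
    return x_adv

model = models.resnet18(weights=None).eval()   # any surrogate classifier
hooks = make_ghost_hooks(model)
x, y = torch.rand(1, 3, 224, 224), torch.tensor([0])
x_adv = ghost_attack(model, x, y)
for h in hooks:
    h.remove()
```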

    Reboost Large Language Model-based Text-to-SQL, Text-to-Python, and Text-to-Function -- with Real Applications in Traffic Domain

    The previous state-of-the-art (SOTA) method achieved remarkable execution accuracy on the Spider dataset, one of the largest and most diverse datasets in the Text-to-SQL domain. However, when reproducing it on our business dataset, we observed a significant drop in performance. We examined the differences in dataset complexity, as well as in the clarity of the questions' intentions, and assessed how those differences could impact the performance of prompting methods. We then developed a more adaptable and more general prompting method, consisting mainly of query rewriting, which transforms vague information into exact and precise information, and SQL boosting, which enhances the SQL itself by incorporating execution feedback and query results from the database content. To prevent information gaps, we include column comments, value types, and value samples as part of the database description in the prompt. Our experiments with Large Language Models (LLMs) show a significant performance improvement on the business dataset and demonstrate the substantial potential of our method: in terms of execution accuracy on the business dataset, the SOTA method scored 21.05, while our approach scored 65.79. As a result, our approach achieved a notable performance improvement even when using a less capable pre-trained language model. Last but not least, we also explore the Text-to-Python and Text-to-Function options and analyze their pros and cons in depth, offering valuable insights to the community.
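
    A minimal sketch of the prompting flow described above, under stated assumptions: call_llm stands in for whatever LLM client is used, the SQLite schema and prompt wording are illustrative guesses based on the abstract, and the comments dictionary plays the role of the column comments included in the database description.

```python
import sqlite3

def describe_database(conn, comments):
    # Schema description with column comments, value types, and a few sample
    # values per column, to close information gaps in the prompt.
    lines = []
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        lines.append(f"Table {table}:")
        for _, col, col_type, *_ in conn.execute(
                f"PRAGMA table_info({table})").fetchall():
            samples = [r[0] for r in conn.execute(
                f"SELECT DISTINCT {col} FROM {table} LIMIT 3")]
            note = comments.get((table, col), "")
            lines.append(f"  {col} ({col_type}) {note} samples: {samples}")
    return "\n".join(lines)

def call_llm(prompt):
    raise NotImplementedError("plug in an actual LLM client here")

def text_to_sql(question, conn, comments, rounds=2):
    schema = describe_database(conn, comments)
    # Query rewriting: turn a vague question into an exact, precise one.
    question = call_llm(f"{schema}\nRewrite this question so it is exact "
                        f"and unambiguous:\n{question}")
    sql = call_llm(f"{schema}\nWrite one SQL query answering:\n{question}")
    # SQL boosting: refine the SQL using execution feedback and query results.
    for _ in range(rounds):
        try:
            rows = conn.execute(sql).fetchall()
            feedback = f"The query ran and returned: {rows[:5]}"
        except sqlite3.Error as exc:
            feedback = f"The query failed with: {exc}"
        sql = call_llm(f"{schema}\nQuestion: {question}\nCurrent SQL: {sql}\n"
                       f"{feedback}\nReturn an improved SQL query.")
    return sql
```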

    Stellar Parameters of Main Sequence Turn-off Star Candidates Observed with the LAMOST and Kepler

    Main sequence turn-off (MSTO) stars have advantages as indicators of Galactic evolution, since their ages can be robustly estimated from atmospheric parameters. Hundreds of thousands of MSTO stars have been selected from the LAMOST Galactic survey to study the evolution of the Galaxy, so it is vital to derive accurate stellar parameters for them. In this work, we select 150 MSTO star candidates from the MSTO stars sample of Xiang that have asteroseismic parameters and determine accurate stellar parameters for these stars by combining the asteroseismic parameters deduced from the Kepler photometry with the atmospheric parameters deduced from the LAMOST spectra. With this sample, we examine the age determination as well as the contamination rate of the MSTO stars sample. A comparison of the ages between this work and Xiang shows a mean difference of 0.53 Gyr (7%) and a dispersion of 2.71 Gyr (28%). The results show that 79 of the candidates are MSTO stars, while the others are contaminants from either main sequence or sub-giant stars. The contamination rate for the oldest stars is much higher than that for the younger stars. The main cause of the high contamination rate is found to be the relatively large systematic bias in the LAMOST surface gravity estimates.
    Comment: accepted by RA
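
    The standard asteroseismic scaling relations give a concrete picture of how Kepler seismic parameters (nu_max, delta_nu) can be combined with a spectroscopic Teff from LAMOST; the short sketch below uses commonly adopted solar reference values and is not necessarily the exact calibration applied in the paper.

```python
import math

# Commonly adopted solar reference values (muHz, muHz, K, dex).
NU_MAX_SUN, DELTA_NU_SUN, TEFF_SUN, LOGG_SUN = 3090.0, 135.1, 5777.0, 4.438

def seismic_parameters(nu_max, delta_nu, teff):
    # Scaling relations: M ~ nu_max^3 dnu^-4 Teff^1.5, R ~ nu_max dnu^-2 Teff^0.5,
    # g ~ nu_max Teff^0.5; mass and radius come out in solar units, logg in dex.
    n, d, t = nu_max / NU_MAX_SUN, delta_nu / DELTA_NU_SUN, teff / TEFF_SUN
    mass = n**3 * d**-4 * t**1.5
    radius = n * d**-2 * t**0.5
    logg = LOGG_SUN + math.log10(n) + 0.5 * math.log10(t)
    return mass, radius, logg

# A Sun-like input recovers roughly solar values; comparing the seismic logg
# with the LAMOST spectroscopic logg is one way to expose the systematic bias
# mentioned in the abstract.
print(seismic_parameters(3090.0, 135.1, 5777.0))
```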