60 research outputs found

    The Changqing Style, the Meicun Style, and "Benshi Shi": A Brief Discussion of the Narrative Forms of Chinese Poetry (長慶體、梅村體與“本事詩”：略論中國詩體的叙事形態)

    Based on the concepts of fu (赋), benshi (本事), and jishi (記事) in Chinese poetics, this paper distinguishes the admonitory purpose of the Tang-dynasty xin yuefu from the narrative purpose of the changqing style of Yuan Zhen and Bai Juyi. Drawing on Xu Qiu's Benshi Shi, it further traces the afterlife of the changqing style in the Ming Dynasty and the success of the meicun style in the early Qing Dynasty. This poetic style persisted through the mid-Qing into the early years of the Republic of China, becoming the major form of China's narrative poetry.

    How do firms form inflation expectations? Empirical evidence from the United States

    Firms' inflation expectations affect their micro-level decisions and therefore the macro-economy, so a deeper understanding of how firms form these expectations helps central banks achieve their policy objectives for macro-economic sustainability and development. In this paper, we focus on survey-based measures of firms' inflation expectations. Specifically, we adopt the Naïve Expectation, Adaptive Expectation, Rational Expectation, VAR, and Heterogeneous Static Expectation formation models to test which mechanisms firms use to form inflation expectations. Empirically, the paper reveals heterogeneity between the formation mechanisms of households and those of firms. The results then reject the rational-expectation hypothesis for firms' inflation expectations, meaning that firms are not perfectly rational. Finally, we find that inflation perception is a non-negligible factor in the formation of firms' inflation expectations. Monetary policies that shape firms' inflation perceptions can therefore be a useful tool for regulating their inflation expectations, which should benefit macro-economic stability.
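    One of the formation models named above, the adaptive-expectation rule, can be sketched in a few lines: each period, the expectation moves toward last period's realized inflation by a fraction of the forecast error. The adjustment speed `lam` and the series below are illustrative, not the paper's estimates.

    ```python
    # Adaptive expectations: E_t = E_{t-1} + lam * (pi_{t-1} - E_{t-1}).

    def adaptive_expectations(inflation, lam=0.5, e0=0.0):
        """Return the sequence of adaptive inflation expectations.

        inflation: realized inflation pi_0, pi_1, ...
        lam: fraction of last period's forecast error incorporated each period.
        e0: initial expectation.
        """
        expectations = [e0]
        for pi in inflation[:-1]:
            prev = expectations[-1]
            expectations.append(prev + lam * (pi - prev))
        return expectations

    # Expectations chase a persistent rise in inflation with a lag.
    print(adaptive_expectations([2.0, 2.0, 4.0, 4.0], lam=0.5))
    # → [0.0, 1.0, 1.5, 2.75]
    ```

    Testing which such rule fits the survey data amounts to estimating `lam` (and analogous parameters of the other models) and comparing fit.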

    Investor attention and carbon return: evidence from the EU-ETS

    This paper proposes, for the first time, employing investor attention obtained from Google Trends to explain and forecast carbon futures returns in the European Union Emissions Trading Scheme (EU-ETS). Our empirical results show that investor attention Granger-causes changes in carbon returns and exerts both linear and non-linear effects on them, demonstrating strong explanatory power. Moreover, we conduct several out-of-sample exercises to explore the predictive power of investor attention. The results indicate that incorporating investor attention improves out-of-sample forecast accuracy at both short and long horizons and generates significant economic value. All results demonstrate that investor attention is a non-negligible pricing factor in the carbon market.
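    The Granger-causality idea behind the paper's first result can be sketched as a comparison of two lag regressions: does adding lagged attention reduce the residual error of a regression of returns on their own lag? The data, single-lag order, and threshold below are synthetic illustrations, not the paper's EU-ETS series or test specification.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T = 300
    attention = rng.normal(size=T)
    # Synthetic returns partly driven by yesterday's attention plus noise.
    returns = 0.5 * np.roll(attention, 1) + 0.1 * rng.normal(size=T)
    returns[0] = 0.0

    def rss(y, X):
        """Residual sum of squares from an OLS fit of y on X."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return float(resid @ resid)

    y = returns[1:]
    ones = np.ones_like(y)
    own_lag = returns[:-1]
    attn_lag = attention[:-1]

    rss_restricted = rss(y, np.column_stack([ones, own_lag]))      # returns ~ own lag
    rss_full = rss(y, np.column_stack([ones, own_lag, attn_lag]))  # + attention lag

    # One extra regressor: a large F-statistic suggests attention Granger-causes returns.
    f_stat = (rss_restricted - rss_full) / (rss_full / (len(y) - 3))
    print(f_stat > 10.0)  # → True for this synthetic series
    ```

    In practice one would use a proper multi-lag specification with p-values, e.g. `statsmodels`' `grangercausalitytests`, rather than this single-lag sketch.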

    Evaluating the Robustness of Text-to-image Diffusion Models against Real-world Attacks

    Text-to-image (T2I) diffusion models (DMs) have shown promise in generating high-quality images from textual descriptions. Real-world applications of these models require particular attention to their safety and fidelity, but this has not been sufficiently explored. One fundamental question is whether existing T2I DMs are robust to variations in their input texts. To answer this question, this work provides the first robustness evaluation of T2I DMs against real-world attacks. Unlike prior studies that focus on malicious attacks involving apocryphal alterations to the input texts, we consider an attack space spanned by realistic errors that humans can make (e.g., typos, glyph substitutions, phonetic misspellings), ensuring semantic consistency. Given the inherent randomness of the generation process, we develop novel distribution-based attack objectives to mislead T2I DMs. We perform the attacks in a black-box manner, without any knowledge of the model. Extensive experiments demonstrate the effectiveness of our method at attacking popular T2I DMs and reveal their non-trivial robustness issues. Moreover, we provide an in-depth analysis showing that our method is not designed solely to attack the text encoder in T2I DMs.
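    The "realistic error" attack space can be illustrated by generating human-plausible variants of a prompt. The three perturbation types mirror the categories named above (typo, glyph, phonetic); the specific substitution tables here are illustrative, not the paper's.

    ```python
    import random

    GLYPH = {"o": "0", "l": "1", "a": "@"}          # visually similar characters
    PHONETIC = {"photo": "foto", "night": "nite"}   # sound-alike respellings

    def typo_variants(prompt, seed=0):
        """Return one candidate prompt per perturbation type."""
        rng = random.Random(seed)
        variants = []
        # Typo: swap two adjacent characters.
        chars = list(prompt)
        i = rng.randrange(len(chars) - 1)
        swapped = chars[:]
        swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
        variants.append("".join(swapped))
        # Glyph: replace look-alike characters.
        variants.append("".join(GLYPH.get(c, c) for c in prompt))
        # Phonetic: replace sound-alike words.
        variants.append(" ".join(PHONETIC.get(w, w) for w in prompt.split()))
        return variants

    for v in typo_variants("a photo of a red apple"):
        print(v)
    ```

    An attack would score many such candidates against the model's output distribution and keep the most misleading ones, which is where the distribution-based objectives come in.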

    BayesAdapter: Being Bayesian, Inexpensively and Reliably, via Bayesian Fine-tuning

    Despite their theoretical appeal, Bayesian neural networks (BNNs) lag behind in real-world adoption due to persistent concerns about their scalability, accessibility, and reliability. In this work, we aim to relieve these concerns by developing the BayesAdapter framework for learning variational BNNs. In particular, we propose adapting pre-trained deterministic NNs into BNNs via cost-effective Bayesian fine-tuning. To make BayesAdapter more practical, we contribute 1) a modularized, user-friendly implementation for learning variational BNNs under two representative variational distributions, 2) a generally applicable strategy for reducing gradient variance in stochastic variational inference, and 3) an explanation of the unreliability of BNNs' uncertainty estimates, together with a corresponding prescription. Through extensive experiments on diverse benchmarks, we show that BayesAdapter consistently induces posteriors of higher quality than from-scratch variational inference and other competitive baselines, especially in large-scale settings, while significantly reducing training overhead.
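    The Bayesian fine-tuning idea can be sketched as follows: initialize a mean-field Gaussian variational posterior q(w) = N(mu, sigma²) at the pre-trained deterministic weights, then draw weights via the reparameterization trick so gradients can flow to (mu, log sigma). The layer shape and initialization scale below are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    pretrained_w = rng.normal(size=(4, 3))   # stands in for a pre-trained layer

    # Variational parameters: mean at the deterministic weights, log-std small
    # so q starts near a point mass at the pre-trained solution.
    mu = pretrained_w.copy()
    log_sigma = np.full_like(pretrained_w, -3.0)

    def sample_weights(mu, log_sigma, rng):
        """Reparameterized draw w = mu + sigma * eps, differentiable in (mu, log_sigma)."""
        eps = rng.normal(size=mu.shape)
        return mu + np.exp(log_sigma) * eps

    w = sample_weights(mu, log_sigma, rng)
    # Samples stay close to the pre-trained weights at initialization.
    print(np.abs(w - pretrained_w).max() < 0.5)  # → True
    ```

    Fine-tuning then optimizes the usual evidence lower bound over (mu, log_sigma) instead of training a variational posterior from scratch.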

    Learning Sample Difficulty from Pre-trained Models for Reliable Prediction

    Large-scale pre-trained models have achieved remarkable success in many applications, but how to leverage them to improve the prediction reliability of downstream models remains under-explored. Moreover, modern neural networks have been found to be poorly calibrated, making overconfident predictions regardless of inherent sample difficulty and data uncertainty. To address this issue, we propose using large-scale pre-trained models to guide downstream model training with sample-difficulty-aware entropy regularization. Because pre-trained models have been exposed to large-scale datasets and do not overfit the downstream training classes, they enable us to measure each training sample's difficulty via feature-space Gaussian modeling and relative Mahalanobis distance computation. Importantly, by adaptively penalizing overconfident predictions based on sample difficulty, we simultaneously improve accuracy and uncertainty calibration across challenging benchmarks (e.g., +0.55% ACC and -3.7% ECE on ImageNet1k using ResNet34), consistently surpassing competitive baselines for reliable prediction. The improved uncertainty estimates further improve selective classification (abstaining from erroneous predictions) and out-of-distribution detection. Comment: NeurIPS 202
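    The relative Mahalanobis distance (RMD) used above for scoring difficulty is the per-class Mahalanobis distance in feature space minus the distance under a class-agnostic "background" Gaussian. A minimal sketch on synthetic features (the data, dimensions, and thresholds are assumptions, not the paper's setup):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic per-class features: two well-separated Gaussian classes in 5-D.
    feats = {0: rng.normal(0.0, 1.0, size=(200, 5)),
             1: rng.normal(3.0, 1.0, size=(200, 5))}

    # Class-agnostic background Gaussian over all features.
    all_feats = np.vstack(list(feats.values()))
    bg_mean = all_feats.mean(axis=0)
    bg_cov_inv = np.linalg.inv(np.cov(all_feats, rowvar=False))

    # Per-class Gaussians.
    class_stats = {c: (x.mean(axis=0), np.linalg.inv(np.cov(x, rowvar=False)))
                   for c, x in feats.items()}

    def mahalanobis_sq(z, mean, cov_inv):
        d = z - mean
        return float(d @ cov_inv @ d)

    def relative_mahalanobis(z, label):
        """RMD = class distance minus background distance; lower = easier sample."""
        mean, cov_inv = class_stats[label]
        return mahalanobis_sq(z, mean, cov_inv) - mahalanobis_sq(z, bg_mean, bg_cov_inv)

    easy = np.zeros(5)        # prototypical class-0 point
    hard = np.full(5, 1.5)    # sits between the two classes
    print(relative_mahalanobis(easy, 0) < relative_mahalanobis(hard, 0))  # → True
    ```

    The difficulty score would then set a per-sample weight on an entropy regularizer, penalizing overconfidence more on high-RMD samples.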

    Improving transferability of 3D adversarial attacks with scale and shear transformations

    Previous work has shown that 3D point cloud classifiers can be vulnerable to adversarial examples. However, most existing methods target white-box attacks, in which the classifier's parameters and other information are known to the attacker, which is unrealistic for real-world applications. To improve attack performance against black-box classifiers, the research community generally uses transfer-based black-box attacks; however, the transferability of current 3D attacks is still relatively low. To this end, this paper proposes the Scale and Shear (SS) attack to generate 3D adversarial examples with strong transferability. Specifically, we randomly scale or shear the input point cloud so that the attack does not overfit the white-box model, thereby improving transferability. Extensive experiments show that the SS attack can be seamlessly combined with existing state-of-the-art (SOTA) 3D point cloud attack methods to form more powerful attacks, and that it improves transferability by more than 3.6 times compared to the baseline. Moreover, while substantially outperforming the baseline methods, the SS attack achieves SOTA transferability under various defenses. Our code will be available online at https://github.com/cuge1995/SS-attack Comment: 10 pages
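    The random scale-and-shear input transformation described above can be sketched as one linear map applied to an (N, 3) point cloud per attack iteration. The sampling ranges below are illustrative assumptions, not the paper's exact hyperparameters.

    ```python
    import numpy as np

    def random_scale_shear(points, rng, scale_range=(0.8, 1.2), shear_max=0.1):
        """Apply a random per-axis scale and random shear to an (N, 3) point cloud."""
        scales = rng.uniform(*scale_range, size=3)
        S = np.diag(scales)                          # scaling matrix
        H = np.eye(3)                                # shear matrix
        # Random small off-diagonal shear coefficients.
        for i, j in [(0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1)]:
            H[i, j] = rng.uniform(-shear_max, shear_max)
        # Apply the combined map to every point.
        return points @ (S @ H).T

    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(1024, 3))
    transformed = random_scale_shear(cloud, rng)
    print(transformed.shape)  # → (1024, 3)
    ```

    Computing adversarial gradients on freshly transformed copies each step discourages perturbations that only work for the exact white-box geometry, which is the stated mechanism for better transfer.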