    The Optimal Mix of Monetary and Climate Policy

    Given central banks' recent interest in “greening the financial system”, this research theoretically investigates the relationship between monetary and climate policy and seeks their “optimal mix”. We build an Environmental Dynamic Stochastic General Equilibrium (E-DSGE) model that incorporates illegal emissions, which are pervasive in many countries. The model yields three findings. First, the dynamics of monetary policy are shaped by the choice of climate-policy regime and by how effectively environmental regulation is enforced. Second, for a given climate-policy regime, the coefficients of the traditional Taylor rule can be tuned to enhance welfare, which yields constrained optima for the policy mix. Third, augmenting the mitigation of climate change into the monetary-policy target can enhance welfare; under certain circumstances, however, such a rule faces a dilemma that makes it incompatible with the central bank's traditional mandate.
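    As a rough illustration, a “climate-augmented” Taylor rule could add an emissions term to the standard rule. The sketch below is an assumption for exposition, not the paper's specification: the emissions gap and its coefficient \(\phi_e\) are hypothetical.

    ```latex
    % Standard Taylor rule plus a hypothetical climate-augmented term.
    % i_t: policy rate, \pi_t: inflation, y_t: output gap, e_t: emissions.
    i_t = \rho + \phi_\pi (\pi_t - \pi^*) + \phi_y \, y_t + \phi_e (e_t - e^*)
    ```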

    Engaging Central Banks in Climate Change? The Mix of Monetary and Climate Policy

    Given the recent debate over central banks' role under climate change, this research theoretically investigates the mix of monetary and climate policy and offers insights for central banks considering engagement with the climate change issue. We propose and study a “climate-augmented” monetary policy, using an extended Environmental Dynamic Stochastic General Equilibrium (E-DSGE) model. The model yields three results. First, monetary policy-making should take the existing climate policy and environmental regulation into account. Second, when climate policy is given, the coefficients of traditional monetary policy can be tuned to enhance welfare, which provides a way to optimise the policy mix. Third, augmenting a typical climate target into the monetary-policy rule can create a dilemma, meaning it is risky for central banks to address climate change proactively through narrow monetary policy alone. At the current stage, central banks could and should use other measures to support the climate and financial stability.

    Contrastive Triple Extraction with Generative Transformer

    Triple extraction is an essential task in information extraction for natural language processing and knowledge graph construction. In this paper, we revisit end-to-end triple extraction as a sequence-generation task. Since generative triple extraction may struggle to capture long-term dependencies and can generate unfaithful triples, we introduce a novel model: contrastive triple extraction with a generative transformer. Specifically, we use a single shared transformer module for encoder-decoder generation. To produce faithful results, we propose a novel triplet contrastive training objective. Moreover, we introduce two mechanisms to further improve model performance: batch-wise dynamic attention masking and triple-wise calibration. Experimental results on three datasets (NYT, WebNLG, and MIE) show that our approach outperforms the baselines. Comment: Accepted by AAAI 202
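    One common way to instantiate a triplet contrastive objective is a margin loss that scores gold triples above corrupted ones. The sketch below is an illustration in PyTorch, not the paper's exact loss; the margin value and the corruption scheme are assumptions.

    ```python
    import torch
    import torch.nn.functional as F

    def triplet_contrastive_loss(gold_scores: torch.Tensor,
                                 corrupt_scores: torch.Tensor,
                                 margin: float = 1.0) -> torch.Tensor:
        # Hinge loss: push the model's score for each gold triple at least
        # `margin` above the score of a corrupted (unfaithful) triple.
        return F.relu(margin - gold_scores + corrupt_scores).mean()
    ```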

    Relation Extraction as Open-book Examination: Retrieval-enhanced Prompt Tuning

    Pre-trained language models have contributed significantly to relation extraction by demonstrating remarkable few-shot learning abilities. However, prompt-tuning methods for relation extraction may still fail to generalize to rare or hard patterns. Note that the previous parametric learning paradigm can be viewed as memorization: training data is the book, and inference is the closed-book test. Long-tailed or hard patterns can hardly be memorized in parameters given only few-shot instances. To this end, we regard RE as an open-book examination and propose a new semiparametric paradigm of retrieval-enhanced prompt tuning for relation extraction. We construct an open-book datastore for retrieval, storing prompt-based instance representations and the corresponding relation labels as memorized key-value pairs. During inference, the model infers relations by linearly interpolating the base output of the PLM with the non-parametric nearest-neighbor distribution over the datastore. In this way, our model not only infers relations through knowledge stored in the weights during training but also assists decision-making by unwinding and querying examples in the open-book datastore. Extensive experiments on benchmark datasets show that our method achieves state-of-the-art results in both standard supervised and few-shot settings. Code is available at https://github.com/zjunlp/PromptKG/tree/main/research/RetrievalRE. Comment: Accepted by SIGIR 2022, short paper
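    The interpolation step described above follows the kNN-LM pattern. A minimal PyTorch sketch, assuming L2 distance over datastore keys and a softmax temperature (both are implementation choices not specified in the abstract):

    ```python
    import torch
    import torch.nn.functional as F

    def knn_relation_probs(query, keys, labels, num_relations, k=16, temp=1.0):
        # query: (d,) prompt-based representation of the test instance.
        # keys: (N, d) datastore representations; labels: (N,) relation ids.
        dists = torch.cdist(query.unsqueeze(0), keys).squeeze(0)      # (N,)
        top = dists.topk(k, largest=False)                            # k nearest
        weights = F.softmax(-top.values / temp, dim=0)                # (k,)
        probs = torch.zeros(num_relations)
        probs.scatter_add_(0, labels[top.indices], weights)           # p_kNN
        return probs

    def interpolate(plm_probs, knn_probs, lam=0.5):
        # Linear interpolation of the PLM and nearest-neighbor distributions.
        return lam * knn_probs + (1.0 - lam) * plm_probs
    ```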

    Optimizing Closed Month Accounting by Utilizing Leases on Data Access and Editing

    The present disclosure describes computer-implemented systems and methods for improving the efficiency of closed-month accounting procedures. More particularly, the systems and methods can reduce system queries pertaining to a closed month by utilizing time-based locks, hereinafter “leases,” when preparing entries for the closed month. When a month begins to close, it is provided with a lease and an expiration time for the lease. During the lease, a user can commit entries to the closing month without creating data errors.
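    A minimal sketch of the lease mechanism in Python, with hypothetical names (MonthLease, commit_entry) and a hypothetical duration; the disclosure does not prescribe an implementation:

    ```python
    import time

    class MonthLease:
        """Time-based lock granting write access to a month that is closing."""
        def __init__(self, month: str, duration_s: float = 300.0):
            self.month = month
            self.expires_at = time.monotonic() + duration_s  # lease expiry

        def is_active(self) -> bool:
            return time.monotonic() < self.expires_at

    def commit_entry(ledger: dict, lease: MonthLease, month: str, entry: dict):
        # Entries for the closing month are accepted only while its lease is
        # live; after expiry the month is fully closed and rejects writes.
        if month == lease.month and not lease.is_active():
            raise PermissionError(f"Lease on {month} expired; month is closed.")
        ledger.setdefault(month, []).append(entry)
    ```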

    Learning to Ask for Data-Efficient Event Argument Extraction

    Event argument extraction (EAE) is an important information extraction task for discovering specific argument roles. In this study, we cast EAE as a question-based cloze task and empirically analyze the performance of fixed discrete token templates. As generating human-annotated question templates is often time-consuming and labor-intensive, we further propose a novel approach called "Learning to Ask," which can learn optimized question templates for EAE without human annotations. Experiments on the ACE-2005 dataset demonstrate that our method based on optimized questions achieves state-of-the-art performance in both the few-shot and supervised settings. Comment: work in progress
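    To make the question-based cloze framing concrete, a fixed discrete template might look like the following. This is a hypothetical template for illustration; the templates in the paper, whether fixed or learned, may differ.

    ```python
    def cloze_question(sentence: str, trigger: str, role: str,
                       mask_token: str = "[MASK]") -> str:
        # Fixed discrete template: the model fills the mask with the
        # argument span playing `role` in the event evoked by `trigger`.
        return (f"{sentence} Question: what is the {role} of the "
                f"{trigger} event? Answer: {mask_token}.")

    # e.g. cloze_question("Troops stormed the city.", "stormed", "attacker")
    ```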

    Normal vs. Adversarial: Salience-based Analysis of Adversarial Samples for Relation Extraction

    Recent neural relation extraction approaches, though achieving promising improvements on benchmark datasets, have been reported to be vulnerable to adversarial attacks. Thus far, efforts have mostly focused on generating adversarial samples or defending against adversarial attacks, but little is known about the difference between normal and adversarial samples. In this work, we take the first step in leveraging a salience-based method to analyze adversarial samples. We observe that salient tokens correlate directly with adversarial perturbations. We further find that the adversarial perturbations are either tokens that do not exist in the training set or superficial cues associated with relation labels. To some extent, our approach unveils the characteristics of adversarial samples. We release an open-source testbed, "DiagnoseAdv", at https://github.com/zjunlp/DiagnoseAdv. Comment: IJCKG 202
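    A common salience score is the gradient of the loss with respect to the input embeddings. A minimal sketch assuming a HuggingFace-style sequence classifier; the paper's exact salience method may differ:

    ```python
    import torch
    import torch.nn.functional as F

    def token_salience(model, input_ids: torch.Tensor, label: int) -> torch.Tensor:
        # Gradient-norm salience: a larger norm means the token matters
        # more to the predicted relation label.
        embeds = model.get_input_embeddings()(input_ids)          # (seq, hidden)
        embeds = embeds.detach().requires_grad_(True)
        logits = model(inputs_embeds=embeds.unsqueeze(0)).logits  # (1, n_labels)
        loss = F.cross_entropy(logits, torch.tensor([label]))
        loss.backward()
        return embeds.grad.norm(dim=-1)                           # (seq,)
    ```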

    One Model for All Domains: Collaborative Domain-Prefix Tuning for Cross-Domain NER

    Cross-domain NER is a challenging task that addresses the low-resource problem in practical scenarios. Previous typical solutions obtain an NER model from pre-trained language models (PLMs) using data from a resource-rich domain and adapt it to the target domain. Owing to the mismatch among entity types in different domains, previous approaches normally tune all parameters of the PLM, ending up with an entirely new NER model for each domain. Moreover, current models focus on leveraging knowledge from one general source domain and fail to transfer knowledge from multiple sources to the target. To address these issues, we introduce Collaborative Domain-Prefix Tuning for cross-domain NER (CP-NER) based on text-to-text generative PLMs. Specifically, we present text-to-text generation grounded in domain-related instructors to transfer knowledge to new-domain NER tasks without structural modifications. We keep the PLM frozen and conduct collaborative domain-prefix tuning to stimulate the PLM's potential to handle NER tasks across various domains. Experimental results on the Cross-NER benchmark show that the proposed approach has flexible transfer ability and performs better on both one-source and multiple-source cross-domain NER tasks. Code will be available at https://github.com/zjunlp/DeepKE/tree/main/example/ner/cross. Comment: Work in progress
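    Prefix tuning keeps the PLM frozen and learns small per-layer key/value tensors that are prepended to attention. A minimal PyTorch sketch of one domain's prefix parameters; the dimensions and the collaborative combination across domains are assumptions, not CP-NER's exact design:

    ```python
    import torch
    import torch.nn as nn

    class DomainPrefix(nn.Module):
        # Learned per-domain prefix: key/value vectors prepended to each
        # attention layer of a frozen text-to-text PLM.
        def __init__(self, n_layers: int, n_heads: int, head_dim: int,
                     prefix_len: int = 10):
            super().__init__()
            self.kv = nn.Parameter(
                torch.randn(n_layers, 2, n_heads, prefix_len, head_dim) * 0.02)

        def forward(self, batch_size: int) -> torch.Tensor:
            # Expand to the batch: (n_layers, 2, B, n_heads, prefix_len, d).
            return self.kv.unsqueeze(2).expand(-1, -1, batch_size, -1, -1, -1)
    ```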