
    A Partial Order on Preference Profiles

    We propose a theoretical framework under which preference profiles can be meaningfully compared. Specifically, given a finite set of feasible allocations and a preference profile, we first define the ranking vector of an allocation as the vector of all individuals' rankings of this allocation. We then define a partial order on preference profiles and write $P \geq P'$ if there exists an onto mapping $\psi$ from the Pareto frontier of $P'$ onto the Pareto frontier of $P$, such that the ranking vector of any Pareto efficient allocation $x$ under $P'$ is weakly dominated by the ranking vector of the image allocation $\psi(x)$ under $P$. We provide a characterization of the maximal and minimal elements under the partial order. In particular, we illustrate how an individualistic form of social preferences can be maximal in a specific setting. We also discuss how the framework can be further generalized to incorporate additional economic ingredients.
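
    The comparison can be restated compactly. In the display below, $\mathrm{PF}(\cdot)$ denotes the Pareto frontier and $r_P(x) = (r_P^1(x), \ldots, r_P^n(x))$ collects each individual's ranking of allocation $x$ under profile $P$; the shorthand $\mathrm{PF}$ and $r_P$ is mine, not necessarily the paper's notation:

    $P \geq P' \iff \exists\, \psi : \mathrm{PF}(P') \to \mathrm{PF}(P)$ onto such that, for every $x \in \mathrm{PF}(P')$, $r_{P'}(x)$ is weakly dominated by $r_P(\psi(x))$.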

    IV Regressions without Exclusion Restrictions

    We study identification and estimation of endogenous linear and nonlinear regression models without excluded instrumental variables, based on the standard mean independence condition and a nonlinear relevance condition. Building on the identification results, we propose two semiparametric estimators as well as a discretization-based estimator that does not require any nonparametric regressions. We establish their asymptotic normality and demonstrate via simulations their robust finite-sample performance with respect to violations of exclusion restrictions and endogeneity. Our approach is applied to study the returns to education, and to test the direct effects of college proximity indicators as well as family background variables on the outcome.
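
    As a minimal sketch of the kind of setup the abstract describes (my notation and a linear special case, not necessarily the paper's exact model), suppose the would-be instrument $W$ is not excluded from the outcome equation:

    $Y = \beta X + \gamma W + \varepsilon, \qquad \mathbb{E}[\varepsilon \mid W] = 0.$

    Taking conditional expectations gives $\mathbb{E}[Y \mid W] = \beta\, \mathbb{E}[X \mid W] + \gamma W$, so when $\mathbb{E}[X \mid W]$ is nonlinear in $W$ (a relevance condition of the nonlinear kind), $(\beta, \gamma)$ is pinned down by regressing $\mathbb{E}[Y \mid W]$ on $(\mathbb{E}[X \mid W], W)$, with no exclusion restriction on $W$.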

    Two-Stage Maximum Score Estimator

    This paper considers the asymptotic theory of a semiparametric M-estimator that is generally applicable to models that satisfy a monotonicity condition in one or several parametric indexes. We call the estimator the two-stage maximum score (TSMS) estimator, since it involves a first-stage nonparametric regression when applied to the binary choice model of Manski (1975, 1985). We characterize the asymptotic distribution of the TSMS estimator, which features phase transitions depending on the dimension, and thus the convergence rate, of the first-stage estimation. We show that the TSMS estimator is asymptotically equivalent to the smoothed maximum score estimator (Horowitz, 1992) when the dimension of the first-step estimation is relatively low, while still achieving partial rate acceleration relative to the cubic-root rate when the dimension is not too high. Effectively, the first-stage nonparametric estimator serves as an imperfect smoothing function on a non-smooth criterion function, leading to the pivotality of the first-stage estimation error with respect to the second-stage convergence rate and asymptotic distribution.
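
    For reference, Manski's maximum score estimator for the binary choice model solves

    $\hat\beta_{\mathrm{MS}} = \arg\max_{\beta} \sum_{i=1}^{n} (2 y_i - 1)\, \mathbf{1}\{x_i'\beta \geq 0\},$

    which converges at the cubic-root rate. A two-stage construction in the spirit of the abstract would first estimate the choice probability $\hat p(x)$ by a nonparametric regression of $y$ on $x$ and then maximize a criterion such as $\sum_i (\hat p(x_i) - 1/2)\, \mathbf{1}\{x_i'\beta \geq 0\}$; this second display is an illustrative reading of the two-stage idea, not necessarily the paper's exact criterion.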

    How Flexible is that Functional Form? Quantifying the Restrictiveness of Theories

    We propose a new way to quantify the restrictiveness of an economic model, based on how well the model fits simulated, hypothetical data sets. The data sets are drawn at random from a distribution that satisfies some application-dependent content restrictions (such as that people prefer more money to less). Models that can fit almost all hypothetical data well are not restrictive. To illustrate our approach, we evaluate the restrictiveness of two widely used behavioral models, Cumulative Prospect Theory and the Poisson Cognitive Hierarchy Model, and explain how restrictiveness reveals new insights about them.
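
    The procedure lends itself to a short simulation sketch. Everything below is illustrative: the one-parameter model, the monotonicity restriction, and the squared-error fit measure are stand-ins of my own choosing rather than the paper's definitions.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_monotone_data(x):
        # Draw one hypothetical data set obeying the content restriction that
        # value is weakly increasing in money (more money preferred to less).
        steps = rng.random(len(x))
        v = np.cumsum(steps)
        return v / v[-1]

    def best_fit_error(x, v):
        # Best squared-error fit of an illustrative one-parameter model
        # u(x) = x ** alpha, searched over a grid of alpha values.
        alphas = np.linspace(0.05, 3.0, 300)
        return min(np.mean((x ** a - v) ** 2) for a in alphas)

    x = np.linspace(0.01, 1.0, 50)
    errors = [best_fit_error(x, random_monotone_data(x)) for _ in range(200)]

    # A model that fits almost every hypothetical data set well (small errors)
    # is not restrictive; larger typical errors signal a more restrictive theory.
    print("average fit error on hypothetical data:", round(float(np.mean(errors)), 4))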

    Distinct regions of ATF/CREB proteins Atf1 and Pcr1 control recombination hotspot ade6-M26 and the osmotic stress response

    The Atf1 protein of Schizosaccharomyces pombe contains a bZIP (DNA-binding/protein dimerization) domain characteristic of ATF/CREB proteins, but no other functional domains or clear homologs have been reported. Atf1-containing bZIP protein dimers bind to CRE-like DNA sites, regulate numerous stress responses, and activate meiotic recombination at hotspots like ade6-M26. We systematically defined the organization of Atf1 and its heterodimer partner Pcr1, which is required for a subset of Atf1-dependent functions. Surprisingly, only the bZIP domain of Pcr1 is required for hotspot activity, and tethering of Atf1 to ade6 promotes recombination in the absence of its bZIP domain and the Pcr1 protein. Therefore, the recombination-activation domain of the Atf1-Pcr1 heterodimer resides exclusively in Atf1, and Pcr1 confers DNA-binding site specificity in vivo. Atf1 has a modular organization in which distinct regions differentially affect the osmotic stress response (OSA) and meiotic recombination (HRA, HRR). The HRA and HRR regions are necessary and sufficient to activate and repress recombination, respectively. Moreover, Atf1 defines a family of conserved proteins with discrete sequence motifs in the functional domains (OSA, HRA, HRR, bZIP). These findings reveal the functional organization of Atf1 and Pcr1, and illustrate several mechanisms by which bZIP proteins can regulate multiple, seemingly disparate activities.

    Scaling Pre-trained Language Models to Deeper via Parameter-efficient Architecture

    In this paper, we propose a highly parameter-efficient approach to scaling pre-trained language models (PLMs) to a deeper model depth. Unlike prior work that shares all parameters or uses extra blocks, we design a more capable parameter-sharing architecture based on the matrix product operator (MPO). MPO decomposition can reorganize and factorize the information of a parameter matrix into two parts: the major part that contains the major information (central tensor) and the supplementary part that only has a small proportion of parameters (auxiliary tensors). Based on such a decomposition, our architecture shares the central tensor across all layers to reduce the model size, and meanwhile keeps layer-specific auxiliary tensors (also using adapters) to enhance adaptation flexibility. To improve model training, we further propose a stable initialization algorithm tailored to the MPO-based architecture. Extensive experiments demonstrate the effectiveness of our proposed model in reducing the model size and achieving highly competitive performance.
    Comment: 14 pages, 4 figures, 6 tables
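
    A rough numerical sketch of the sharing idea follows, using a plain low-rank factorization as a stand-in for a genuine MPO (tensor-train) decomposition; the dimensions, names, and scaling choices below are hypothetical.

    import numpy as np

    d_model, rank, n_layers = 768, 64, 12
    rng = np.random.default_rng(0)

    # One "central" factor shared by every layer (the bulk of each weight's
    # information in the MPO story), plus small layer-specific "auxiliary" factors.
    central = rng.standard_normal((rank, rank)) / rank
    aux_in = [rng.standard_normal((d_model, rank)) / np.sqrt(d_model) for _ in range(n_layers)]
    aux_out = [rng.standard_normal((rank, d_model)) / np.sqrt(rank) for _ in range(n_layers)]

    def layer_weight(layer):
        # Reassemble a layer's effective weight from the shared core and
        # that layer's own auxiliary factors.
        return aux_in[layer] @ central @ aux_out[layer]

    dense_params = n_layers * d_model * d_model
    shared_params = rank * rank + n_layers * 2 * d_model * rank
    print(f"dense: {dense_params:,} parameters  vs  shared-core: {shared_params:,}")
    print("one layer's assembled weight shape:", layer_weight(0).shape)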