
    Fano schemes of determinants and permanents

    Let $D_{m,n}^r$ and $P_{m,n}^r$ denote the subschemes of $\mathbb{P}^{mn-1}$ given by the $r\times r$ determinants (respectively the $r\times r$ permanents) of an $m\times n$ matrix of indeterminates. In this paper, we study the geometry of the Fano schemes $\mathbf{F}_k(D_{m,n}^r)$ and $\mathbf{F}_k(P_{m,n}^r)$ parametrizing the $k$-dimensional planes in $\mathbb{P}^{mn-1}$ lying on $D_{m,n}^r$ and $P_{m,n}^r$, respectively. We prove results characterizing which of these Fano schemes are smooth, irreducible, and connected; and we give examples showing that they need not be reduced. We show that $\mathbf{F}_1(D_{n,n}^n)$ always has the expected dimension, and we describe its components exactly. Finally, we give a detailed study of the Fano schemes of $k$-planes on the $3\times 3$ determinantal and permanental hypersurfaces. Comment: 43 pages; v2 minor revisions. To appear in AN
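
    For orientation, the schemes in this abstract can be described set-theoretically as follows; the Grassmannian notation $\mathbb{G}(k, mn-1)$ is shorthand introduced here, not notation from the paper.

        \[
          % Set-theoretically, D_{m,n}^r is the locus of m x n matrices of rank at most r-1:
          D_{m,n}^r = \bigl\{\, [X] \in \mathbb{P}^{mn-1} : \operatorname{rank}(X) \le r-1 \,\bigr\},
          \qquad
          % and F_k(D_{m,n}^r) is the subscheme of the Grassmannian of k-planes in P^{mn-1}
          % consisting of the k-planes contained in D_{m,n}^r:
          \mathbf{F}_k(D_{m,n}^r) = \bigl\{\, \Lambda \in \mathbb{G}(k,\, mn-1) : \Lambda \subseteq D_{m,n}^r \,\bigr\}.
        \]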

    Relative Richardson Varieties

    A Richardson variety in a flag variety is an intersection of two Schubert varieties defined by transverse flags. We define and study relative Richardson varieties, which are defined over a base scheme with a vector bundle and two flags. To do so, we generalize transversality of flags to a relative notion, versality, that allows the flags to be non-transverse over some fibers. Relative Richardson varieties share many of the geometric properties of Richardson varieties. We generalize several geometric and cohomological facts about Richardson varieties to relative Richardson varieties. We also prove that the local geometry of a relative Richardson variety is governed, in a precise sense, by the two intersecting Schubert varieties, giving a generalization, in the flag variety case, of a theorem of Knutson-Woo-Yong; we also generalize this result to intersections of arbitrarily many relative Schubert varieties. We give an application to Brill-Noether varieties on elliptic curves, and a conjectural generalization to higher genus curves. Comment: 21 pages
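
    In symbols, for two transverse flags $F_\bullet$, $G_\bullet$ on a fixed vector space and Weyl group elements $u$, $v$ (notation introduced here, not the paper's), a Richardson variety is the intersection of the corresponding Schubert varieties:

        \[
          % Richardson variety: intersection of two Schubert varieties defined by transverse flags
          R_{u,v} = X_u(F_\bullet) \cap X_v(G_\bullet).
        \]

    The relative construction in the paper replaces the two fixed flags by two flags of subbundles of a vector bundle over a base scheme, with transversality relaxed to versality.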

    Misclassification in Automated Content Analysis Causes Bias in Regression. Can We Fix It? Yes We Can!

    Automated classifiers (ACs), often built via supervised machine learning (SML), can categorize large, statistically powerful samples of data ranging from text to images and video, and have become widely popular measurement devices in communication science and related fields. Despite this popularity, even highly accurate classifiers make errors that cause misclassification bias and misleading results in downstream analyses, unless such analyses account for these errors. As we show in a systematic literature review of SML applications, communication scholars largely ignore misclassification bias. In principle, existing statistical methods can use "gold standard" validation data, such as that created by human annotators, to correct misclassification bias and produce consistent estimates. We introduce and test such methods, including a new method we design and implement in the R package misclassificationmodels, via Monte Carlo simulations designed to reveal each method's limitations, which we also release. Based on our results, we recommend our new error correction method as it is versatile and efficient. In sum, automated classifiers, even those below common accuracy standards or making systematic misclassifications, can be useful for measurement with careful study design and appropriate error correction methods. Comment: 41 pages, 21 figures, Top Paper Award from the 2023 Annual Meeting of the International Communication Association Computational Methods Division
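
    The abstract does not spell out the correction methods, but the central point (a misclassified classifier output biases a downstream regression, and a gold-standard validation subsample can correct it) can be sketched in a few lines. The Python below is a toy illustration using a simple regression-calibration correction; it does not use the paper's misclassificationmodels package, and all numbers are made up.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # "True" binary label (think: the hand-coded category) and a downstream outcome.
        x = rng.binomial(1, 0.4, size=n)
        y = 1.0 + 2.0 * x + rng.normal(scale=1.0, size=n)   # true slope = 2.0

        # Automated classifier output w: non-differential misclassification of x.
        sens, spec = 0.85, 0.90                              # illustrative error rates
        w = np.where(x == 1, rng.binomial(1, sens, n), rng.binomial(1, 1.0 - spec, n))

        def ols_slope(z, y):
            design = np.column_stack([np.ones(len(z)), z])
            return np.linalg.lstsq(design, y, rcond=None)[0][1]

        print("naive slope (y on classifier output):", ols_slope(w.astype(float), y))

        # Gold-standard validation subsample where both x and w are observed.
        val = rng.choice(n, size=2_000, replace=False)
        p_x1_w1 = x[val][w[val] == 1].mean()
        p_x1_w0 = x[val][w[val] == 0].mean()

        # Regression-calibration style correction: regress y on E[x | w] instead of w.
        z = np.where(w == 1, p_x1_w1, p_x1_w0)
        print("corrected slope:", ols_slope(z, y))

    The naive slope is attenuated toward zero, while the corrected slope recovers the true coefficient up to sampling noise, mirroring the bias-then-correction pattern the paper studies at scale.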

    Cost-(in)effective public good provision: an experimental exploration

    This paper investigates the determinants of cost-(in)effective giving to public goods. We conduct a pre-registered experiment to elucidate how factors at the institutional and individual levels shape individual contributions and the cost-effectiveness of those contributions in a novel public good game. In particular, we examine the role of consequential uncertainty over the value of public good contributions (institutional level) as well as individual characteristics like risk and ambiguity attitudes, giving type, and demographics (individual level). We find cost-ineffective contributions in all institutions, but total contribution levels and the degree of cost-ineffectiveness are similar across institutions. Meanwhile, cost-effectiveness varies by giving type—which is a novel result that is consistent with hypotheses we generate from theory—but other individual characteristics have little influence on the cost-effectiveness of contributions. Our work has important positive and normative implications for charitable giving and public good provision in the real world, and it is particularly germane to emerging online crowdfunding and patronage platforms that confront users with a multitude of competing opportunities for giving.

    Simulating non-unitary dynamics using quantum signal processing with unitary block encoding

    We adapt a recent advance in resource-frugal quantum signal processing - the Quantum Eigenvalue Transform with Unitary matrices (QET-U) - to explore non-unitary imaginary time evolution on early fault-tolerant quantum computers using exactly emulated quantum circuits. We test strategies for optimising the circuit depth and the probability of successfully preparing the desired imaginary-time evolved states. For the task of ground state preparation, we confirm that the probability of successful post-selection is quadratic in the initial reference state overlap $\gamma$, scaling as $O(\gamma^2)$. When applied instead to thermal state preparation, we show QET-U can directly estimate partition functions at exponential cost. Finally, we combine QET-U with the Trotter product formula to perform non-normal Hamiltonian simulation in the propagation of Lindbladian open quantum system dynamics. We find that QET-U for non-unitary dynamics is flexible, intuitive and straightforward to use, and suggest ways for delivering quantum advantage in simulation tasks. Comment: 14 pages, 10 figures, minor corrections and updated citations
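
    The quadratic overlap dependence can be illustrated classically. The numpy sketch below is a toy example that applies the imaginary-time propagator directly as a matrix (no QET-U circuit construction); the Hamiltonian and reference state are random stand-ins.

        import numpy as np

        rng = np.random.default_rng(1)
        dim = 16

        # Random Hermitian "Hamiltonian", its ground state, and a reference state |phi>
        # with ground-state overlap gamma.
        a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
        h = (a + a.conj().T) / 2
        evals, evecs = np.linalg.eigh(h)
        ground = evecs[:, 0]

        phi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
        phi /= np.linalg.norm(phi)
        gamma = abs(np.vdot(ground, phi))

        # Non-unitary imaginary-time evolution e^{-tau (H - E0)} applied to |phi>.
        # A QET-U implementation would realize (an approximation of) this map inside
        # a larger unitary and post-select; here it is applied directly as a matrix.
        tau = 8.0
        prop = evecs @ np.diag(np.exp(-tau * (evals - evals[0]))) @ evecs.conj().T
        psi = prop @ phi

        # The post-selection success probability tends to gamma^2 as tau grows.
        print("success probability:", np.linalg.norm(psi) ** 2)
        print("gamma^2            :", gamma ** 2)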

    Learning Risk Preferences in Markov Decision Processes: an Application to the Fourth Down Decision in Football

    For decades, National Football League (NFL) coaches' observed fourth down decisions have been largely inconsistent with prescriptions based on statistical models. In this paper, we develop a framework to explain this discrepancy using a novel inverse optimization approach. We model the fourth down decision and the subsequent sequence of plays in a game as a Markov decision process (MDP), the dynamics of which we estimate from NFL play-by-play data from the 2014 through 2022 seasons. We assume that coaches' observed decisions are optimal but that the risk preferences governing their decisions are unknown. This yields a novel inverse decision problem for which the optimality criterion, or risk measure, of the MDP is the estimand. Using the quantile function to parameterize risk, we estimate the quantile-optimal policy under which the coaches' observed decisions are minimally suboptimal. In general, we find that coaches' fourth-down behavior is consistent with optimizing low quantiles of the next-state value distribution, which corresponds to conservative risk preferences. We also find that coaches exhibit higher risk tolerances when making decisions in the opponent's half of the field than in their own, and that league-average fourth down risk tolerances have increased over the seasons in our data. Comment: 33 pages, 9 figures
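
    The idea of a quantile-optimal policy is easy to illustrate in miniature. In the Python sketch below the action values are invented for illustration, not estimated from play-by-play data: the aggressive choice wins at the median of the next-state value distribution, while a coach optimizing a low quantile prefers the safer one.

        import numpy as np

        # Toy samples of next-state value (e.g., win probability) for two fourth-down
        # actions; the numbers are illustrative only.
        values = {
            "go for it": np.array([0.10, 0.15, 0.20, 0.70, 0.75, 0.80]),
            "punt":      np.array([0.35, 0.38, 0.40, 0.42, 0.45, 0.48]),
        }

        def best_action(values_by_action, q):
            """Pick the action maximizing the q-th quantile of next-state value."""
            scores = {a: float(np.quantile(v, q)) for a, v in values_by_action.items()}
            return max(scores, key=scores.get), scores

        print(best_action(values, q=0.5))  # the median favors going for it
        print(best_action(values, q=0.1))  # a low, risk-averse quantile favors punting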