    Explaining Gabriel-Zisman localization to the computer

    This paper explains a computer formulation of Gabriel-Zisman localization of categories in the proof assistant Coq. It covers the general localization construction, with a proof of GZ's Lemma 1.2, as well as the construction using a calculus of fractions. The proof files are bundled with the companion preprint "Files for GZ localization" posted simultaneously.
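
    For orientation, the universal property that any such formalization has to capture can be stated in its standard textbook form; the notation below is chosen for illustration and is not a transcription of the Coq development:

        % Gabriel-Zisman localization: the standard universal property
        % (illustrative notation, not taken from the Coq files).
        Given a category $\mathcal{C}$ and a class of morphisms $S$, the
        localization consists of a category $\mathcal{C}[S^{-1}]$ and a functor
        \[
          Q \colon \mathcal{C} \longrightarrow \mathcal{C}[S^{-1}]
        \]
        sending every $s \in S$ to an isomorphism, such that any functor
        $F \colon \mathcal{C} \to \mathcal{D}$ inverting $S$ factors uniquely as
        $F = \bar{F} \circ Q$ with $\bar{F} \colon \mathcal{C}[S^{-1}] \to \mathcal{D}$.
        When $S$ admits a calculus of fractions, a morphism $X \to Y$ in
        $\mathcal{C}[S^{-1}]$ is represented (in one common convention) by a roof
        \[
          X \xleftarrow{\;s\;} W \xrightarrow{\;f\;} Y, \qquad s \in S,
        \]
        read as the formal fraction $f \circ s^{-1}$.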

    Identification in Differentiated Products Markets Using Market Level Data

    We consider nonparametric identification in models of differentiated products markets, using only market level observables. On the demand side we consider a nonparametric random utility model nesting random coefficients discrete choice models widely used in applied work. We allow for product/market-specific unobservables, endogenous product characteristics (e.g., prices), and high-dimensional taste shocks with arbitrary correlation and heteroskedasticity. On the supply side we specify marginal costs nonparametrically, allow for unobserved firm heterogeneity, and nest a variety of equilibrium oligopoly models. We pursue two approaches to identification. One relies on instrumental variables conditions used previously to demonstrate identification in a nonparametric regression framework. With this approach we can show identification of the demand side without reference to a particular supply model. Adding the supply side allows identification of firms' marginal costs as well. Our second approach, more closely linked to classical identification arguments for supply and demand models, employs a change of variables approach. This leads to constructive identification results relying on exclusion and support conditions. Our results lead to a testable restriction that provides the first general formalization of Bresnahan's (1982) intuition for empirically discriminating between alternative models of oligopoly competition.
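
    As a concrete illustration of the kind of demand model the nonparametric framework nests, the sketch below simulates market-level shares in a random-coefficients logit; the variable names, dimensions, and the logit error assumption are illustrative choices, not the paper's specification:

        import numpy as np

        # Minimal sketch of a random-coefficients logit demand system, the kind of
        # parametric model nested by the nonparametric framework described above.
        # Names, dimensions, and the logit error assumption are illustrative only.

        rng = np.random.default_rng(0)

        J, R = 3, 5000                             # products per market, simulated consumers
        x = rng.normal(size=J)                     # observed product characteristic
        p = np.abs(rng.normal(1.0, 0.3, size=J))   # prices (endogenous in the paper; drawn exogenously here)
        xi = rng.normal(scale=0.1, size=J)         # product/market-specific unobservable

        beta_mean, beta_sd, alpha = 1.0, 0.5, 2.0
        beta_i = rng.normal(beta_mean, beta_sd, size=R)   # random taste coefficients

        # Consumer-level indirect utilities: u_ij = beta_i * x_j - alpha * p_j + xi_j
        u = beta_i[:, None] * x[None, :] - alpha * p[None, :] + xi[None, :]

        # Logit choice probabilities with an outside good normalized to utility 0,
        # averaged over consumers to obtain market-level shares.
        expu = np.exp(u)
        probs = expu / (1.0 + expu.sum(axis=1, keepdims=True))
        shares = probs.mean(axis=0)

        print("market shares:", shares, "outside share:", 1 - shares.sum())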

    Constraint Satisfaction Techniques for Combinatorial Problems

    The last two decades have seen extraordinary advances in tools and techniques for constraint satisfaction. These advances have in turn created great interest in their industrial applications. As a result, tools and techniques are often tailored to meet the needs of industrial applications out of the box. We claim that in the case of abstract combinatorial problems in discrete mathematics, the standard tools and techniques require special considerations in order to be applied effectively. The main objective of this thesis is to help researchers in discrete mathematics weave through the landscape of constraint satisfaction techniques in order to pick the right tool for the job. We consider constraint satisfaction paradigms like satisfiability of Boolean formulas and answer set programming, and techniques like symmetry breaking. Our contributions range from theoretical results to practical issues regarding the application of these tools to combinatorial problems. We prove search-versus-decision complexity results for problems about backbones and backdoors of Boolean formulas. We consider applications of constraint satisfaction techniques to problems in graph arrowing (specifically in Ramsey and Folkman theory) and computational social choice. Our contributions show how applying constraint satisfaction techniques to abstract combinatorial problems poses additional challenges, and how these challenges can be addressed. Additionally, we consider the issue of trusting the results of applying constraint satisfaction techniques to combinatorial problems by relying on verified computations.
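
    As a toy example of casting a graph-arrowing question as constraint satisfaction (not one of the encodings used in the thesis), the sketch below emits a DIMACS CNF asking whether the edges of K_n can be 2-colored without a monochromatic triangle; the instance is satisfiable exactly when n < 6, reflecting the Ramsey number R(3,3) = 6:

        from itertools import combinations

        # Toy SAT encoding of a Ramsey-type arrowing question (illustrative only,
        # not the encodings used in the thesis): can the edges of K_n be 2-colored
        # with no monochromatic triangle?  One Boolean variable per edge; True and
        # False play the role of the two colors.  Satisfiable iff n < R(3,3) = 6.

        def ramsey33_cnf(n):
            var = {e: i + 1 for i, e in enumerate(combinations(range(n), 2))}
            clauses = []
            for a, b, c in combinations(range(n), 3):
                e1, e2, e3 = var[(a, b)], var[(a, c)], var[(b, c)]
                clauses.append([e1, e2, e3])       # triangle not entirely "False"-colored
                clauses.append([-e1, -e2, -e3])    # triangle not entirely "True"-colored
            return len(var), clauses

        def to_dimacs(n_vars, clauses):
            lines = [f"p cnf {n_vars} {len(clauses)}"]
            lines += [" ".join(map(str, cl)) + " 0" for cl in clauses]
            return "\n".join(lines)

        n_vars, clauses = ramsey33_cnf(6)
        print(to_dimacs(n_vars, clauses))   # feed to any DIMACS-compatible SAT solver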

    The Michaelis-Menten-Stueckelberg Theorem

    We study chemical reactions with complex mechanisms under two assumptions: (i) intermediates are present in small amounts (the quasi-steady-state hypothesis, QSS) and (ii) they are in equilibrium relations with substrates (the quasi-equilibrium hypothesis, QE). Under these assumptions, we prove the generalized mass action law together with the basic relations between kinetic factors, which are sufficient for the positivity of the entropy production but hold even without microreversibility, when detailed balance is not applicable. Although QE and QSS produce useful approximations by themselves, only their combination makes it possible to go beyond the "rarefied gas" limit or the "molecular chaos" hypotheses. We do not assume any a priori form of the kinetic law for the chemical reactions and describe their equilibria by thermodynamic relations. The transformations of the intermediate compounds can be described by Markov kinetics because of their low density (low density of elementary events). This combination of assumptions was introduced by Michaelis and Menten in 1913. In 1952, Stueckelberg used the same assumptions for gas kinetics and produced the remarkable semi-detailed balance relations between collision rates in the Boltzmann equation, which are weaker than the detailed balance conditions but still sufficient for the Boltzmann H-theorem to hold. Our results are obtained within the Michaelis-Menten-Stueckelberg conceptual framework.
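
    For readers who want the classical special case behind the general theorem, the original 1913 reduction under the quasi-steady-state assumption runs as follows; this is the standard textbook derivation, not the paper's general argument:

        % Classical Michaelis-Menten mechanism and its QSS reduction (textbook derivation).
        \[
          E + S \;\underset{k_{-1}}{\overset{k_{1}}{\rightleftharpoons}}\; ES
                \;\xrightarrow{\;k_{2}\;}\; E + P
        \]
        % Quasi-steady state for the intermediate compound ES, with total enzyme [E]_0:
        \[
          \frac{d[ES]}{dt} = k_{1}[E][S] - (k_{-1}+k_{2})[ES] \approx 0,
          \qquad [E]_{0} = [E] + [ES].
        \]
        % Solving for [ES] and inserting it into the production rate v = k_2 [ES]:
        \[
          v = \frac{V_{\max}\,[S]}{K_{M} + [S]},
          \qquad K_{M} = \frac{k_{-1}+k_{2}}{k_{1}},
          \qquad V_{\max} = k_{2}\,[E]_{0}.
        \]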

    Kolmogorov Complexity in perspective. Part II: Classification, Information Processing and Duality

    We survey diverse approaches to the notion of information, from Shannon entropy to Kolmogorov complexity. Two of the main applications of Kolmogorov complexity are presented: randomness and classification. The survey is divided into two parts published in the same volume. Part II is devoted to the relation between logic and information systems, within the scope of Kolmogorov algorithmic information theory. We present a recent application of Kolmogorov complexity: classification using compression, an idea with a provocative implementation by authors such as Bennett, Vitanyi and Cilibrasi. This stresses how Kolmogorov complexity, besides being a foundation for randomness, is also related to classification. Another approach to classification is also considered: the so-called "Google classification". It rests on another original and attractive idea, connected to classification using compression and to Kolmogorov complexity from a conceptual point of view. We present and unify these different approaches to classification in terms of Bottom-Up versus Top-Down operational modes, whose fundamental principles and underlying duality we point out. We look at the way these two dual modes are used in different approaches to information systems, in particular the relational model for databases introduced by Codd in the 1970s. This allows us to point out diverse forms of a fundamental duality. These operational modes are also reinterpreted in the context of the comprehension schema of axiomatic set theory ZF. This leads us to show how Kolmogorov complexity is linked to intensionality, abstraction, classification and information systems.
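
    The compression-based classification mentioned above can be illustrated with the Normalized Compression Distance of Cilibrasi and Vitanyi; in the sketch below, zlib stands in for the uncomputable Kolmogorov complexity, and the sample strings are purely illustrative:

        import zlib

        # Sketch of the compression-based classification idea: the Normalized
        # Compression Distance of Cilibrasi and Vitanyi, with zlib standing in
        # for the (uncomputable) Kolmogorov complexity C(x).

        def C(data: bytes) -> int:
            """Compressed size as a computable proxy for Kolmogorov complexity."""
            return len(zlib.compress(data, level=9))

        def ncd(x: bytes, y: bytes) -> float:
            cx, cy, cxy = C(x), C(y), C(x + y)
            return (cxy - min(cx, cy)) / max(cx, cy)

        s1 = b"the quick brown fox jumps over the lazy dog " * 20
        s2 = b"the quick brown fox leaps over the lazy cat " * 20
        s3 = bytes(range(256)) * 4   # unrelated data with different structure

        print(ncd(s1, s2))   # small: similar texts compress well together
        print(ncd(s1, s3))   # larger: unrelated data share little structure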