25 research outputs found

    Approximability in the GPAC

    Most of the physical processes arising in nature are modeled by either ordinary or partial differential equations. From the point of view of analog computability, the existence of an effective way to obtain solutions of these systems is essential. A pioneering model of analog computation is the General Purpose Analog Computer (GPAC), introduced by Shannon as a model of the Differential Analyzer and improved by Pour-El, Lipshitz and Rubel, Costa and Graça, and others. Its power is known to be characterized by the class of differentially algebraic functions, which includes the solutions of initial value problems for ordinary differential equations. We address one of the limitations of this model, concerning the notion of approximability, a desirable property in computation over continuous spaces that is however absent in the GPAC. In particular, the Shannon GPAC cannot be used to generate non-differentially algebraic functions which can be approximately computed in other models of computation. We extend the class of data types using networks with channels which carry information on a general complete metric space $X$; for example $X = C(\mathbb{R},\mathbb{R})$, the class of continuous functions of one real (spatial) variable. We consider the original modules in Shannon's construction (constants, adders, multipliers, integrators) and we add \emph{(continuous or discrete) limit} modules which have one input and one output. We then define an L-GPAC to be a network built with $X$-stream channels and the above-mentioned modules. This leads us to a framework in which the specifications of such analog systems are given by fixed points of certain operators on continuous data streams. We study these analog systems and their associated operators, and show how some classically non-generable functions, such as the gamma function and the zeta function, can be captured with the L-GPAC.
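
    As an informal illustration of how the Shannon modules compose (not the paper's formal construction), the Python sketch below wires a single integrator into a feedback loop so that the network generates the differentially algebraic function exp(t); the `integrate` helper and the Euler step size are assumptions made for the sketch, and the comments note where the paper's limit modules would come in.

```python
# A minimal numerical sketch (not the paper's formal construction) of how
# Shannon's GPAC modules compose. Wiring a single integrator's output back
# into its own input realises the network  y' = y, y(0) = 1, which generates
# the differentially algebraic function exp(t). The step size and the
# `integrate` helper are assumptions made for this illustration.

def integrate(feed, y0, t_end, dt=1e-4):
    """Integrator module: accumulate the input signal feed(y, t) over time."""
    y, t = y0, 0.0
    while t < t_end:
        y += feed(y, t) * dt   # forward-Euler approximation of the integral
        t += dt
    return y

# Feedback loop: the integrator's input is its own output, so y(t) = exp(t).
print(integrate(lambda y, t: y, y0=1.0, t_end=1.0))   # ~ 2.718...

# Non-differentially-algebraic targets such as the gamma function cannot be
# produced this way; the paper's (continuous or discrete) limit modules are
# introduced precisely to take limits of streams of GPAC-generable approximants.
```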

    Solving Smullyan puzzles with formal systems


    Robust Revenue Maximization Under Minimal Statistical Information

    We study the problem of multi-dimensional revenue maximization when selling $m$ items to a buyer that has additive valuations for them, drawn from a (possibly correlated) prior distribution. Unlike traditional Bayesian auction design, we assume that the seller has a very restricted knowledge of this prior: they only know the mean $\mu_j$ and an upper bound $\sigma_j$ on the standard deviation of each item's marginal distribution. Our goal is to design mechanisms that achieve good revenue against an ideal optimal auction that has full knowledge of the distribution in advance. Informally, our main contribution is a tight quantification of the interplay between the dispersity of the priors and the aforementioned robust approximation ratio. Furthermore, this can be achieved by very simple selling mechanisms. More precisely, we show that selling the items via separate price lotteries achieves an $O(\log r)$ approximation ratio where $r = \max_j(\sigma_j/\mu_j)$ is the maximum coefficient of variation across the items. If forced to restrict ourselves to deterministic mechanisms, this guarantee degrades to $O(r^2)$. Assuming independence of the item valuations, these ratios can be further improved by pricing the full bundle. For the case of identical means and variances, in particular, we get a guarantee of $O(\log(r/m))$ which converges to optimality as the number of items grows large. We demonstrate the optimality of the above mechanisms by providing matching lower bounds. Our tight analysis for the deterministic case resolves an open gap from the work of Azar and Micali [ITCS'13]. As a by-product, we also show how one can directly use our upper bounds to improve and extend previous results related to the parametric auctions of Azar et al. [SODA'13].
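
    For concreteness, the small Python sketch below computes the quantity the guarantees above are stated in, the maximum coefficient of variation $r = \max_j(\sigma_j/\mu_j)$, from the mean/standard-deviation pairs the seller is assumed to know; the item data and the helper name are hypothetical, and the mechanisms themselves (separate price lotteries, bundle pricing) are not implemented here.

```python
# Illustrative sketch only: the seller knows, for each item j, the mean mu_j
# and an upper bound sigma_j on the standard deviation of its marginal value
# distribution. The abstract's guarantees are stated in terms of the maximum
# coefficient of variation r = max_j (sigma_j / mu_j).

def max_coefficient_of_variation(means, sigmas):
    return max(s / m for m, s in zip(means, sigmas))

means  = [10.0, 4.0, 25.0]   # hypothetical item means mu_j
sigmas = [ 5.0, 8.0, 10.0]   # hypothetical std-dev upper bounds sigma_j

r = max_coefficient_of_variation(means, sigmas)
print(f"r = {r:.2f}")
# With this r, the abstract's results say (constants suppressed): separate
# price lotteries achieve an O(log r) approximation, deterministic mechanisms
# only O(r^2), and bundle pricing improves this further when the item
# valuations are independent.
```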

    System $F^\mu_\omega$ with Context-free Session Types

    We study increasingly expressive type systems, from $F^\mu$ -- an extension of the polymorphic lambda calculus with equirecursive types -- to $F^{\mu;}_\omega$ -- the higher-order polymorphic lambda calculus with equirecursive types and context-free session types. Type equivalence is given by a standard bisimulation defined over a novel labelled transition system for types. Our system subsumes the contractive fragment of $F^\mu_\omega$ as studied in the literature. Decidability results for type equivalence of the various type languages are obtained from the translation of types into objects of an appropriate computational model: finite-state automata, simple grammars and deterministic pushdown automata. We show that type equivalence is decidable for a significant fragment of the type language. We further propose a message-passing, concurrent functional language equipped with the expressive type language and show that it enjoys preservation and absence of runtime errors for typable processes.
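
    As a rough illustration of the bisimulation-based view of type equivalence (far simpler than the simple-grammar and pushdown-automata translations the paper relies on), the Python sketch below runs a naive greatest-fixed-point bisimilarity check on an explicit finite labelled transition system; it covers only a regular fragment, and the state names and helper function are invented for the example.

```python
# Toy illustration only: the paper decides type equivalence by a bisimulation
# over a labelled transition system (LTS) for types, via translations into
# finite-state automata, simple grammars and deterministic pushdown automata.
# This naive check works on an explicit finite LTS, so it handles only a
# regular fragment; state names and the helper are invented for the example.

def bisimilar(states, transitions, s, t):
    """Greatest-fixed-point bisimilarity check on a finite LTS.

    `transitions` maps (state, label) to the set of successor states.
    """
    labels = {label for (_, label) in transitions}
    # Start from the full relation and drop pairs that violate the transfer
    # conditions until nothing changes.
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            for label in labels:
                ps = transitions.get((p, label), set())
                qs = transitions.get((q, label), set())
                forth = all(any((p2, q2) in rel for q2 in qs) for p2 in ps)
                back = all(any((p2, q2) in rel for p2 in ps) for q2 in qs)
                if not (forth and back):
                    rel.discard((p, q))
                    changed = True
                    break
    return (s, t) in rel

# Two presentations of the equirecursive session type "send an int forever":
#   T = !int.T          U = !int.!int.U
states = {"T", "U", "U2"}
transitions = {("T", "!int"): {"T"}, ("U", "!int"): {"U2"}, ("U2", "!int"): {"U"}}
print(bisimilar(states, transitions, "T", "U"))   # True: the types are equivalent
```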

    Oracles that measure thresholds: The Turing machine and the broken balance
