
    Computational Efficiency Requires Simple Taxation

    We characterize the communication complexity of truthful mechanisms. Our departure point is the well-known taxation principle. The taxation principle asserts that every truthful mechanism can be interpreted as follows: every player is presented with a menu that consists of a price for each bundle (the prices depend only on the valuations of the other players). Each player is allocated a bundle that maximizes his profit according to this menu. We define the taxation complexity of a truthful mechanism to be the logarithm of the maximum number of menus that may be presented to a player. Our main finding is that in general the taxation complexity essentially equals the communication complexity. The proof consists of two main steps. First, we prove that for rich enough domains the taxation complexity is at most the communication complexity. We then show that the taxation complexity is much smaller than the communication complexity only in "pathological" cases and provide a formal description of these extreme cases. Next, we study mechanisms that access the valuations via value queries only. In this setting we establish that the menu complexity -- a notion that was already studied in several different contexts -- characterizes the number of value queries that the mechanism makes in exactly the same way that the taxation complexity characterizes the communication complexity. Our approach yields several applications, including strengthening the solution concept with low communication overhead, fast computation of prices, and hardness of approximation by computationally efficient truthful mechanisms.
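    The taxation complexity defined above can be made concrete on a toy single-item example. A minimal sketch, assuming a second-price auction with two bidders and integer valuations (all names and numbers below are illustrative, not the combinatorial-auction domains studied in the paper):

import math

# Hypothetical toy setting: one item, two bidders, integer valuations in {0, ..., V}.
V = 7
valuations = range(V + 1)

# A second-price auction is truthful; by the taxation principle, player 1 is
# presented with the menu {empty bundle: 0, item: p}, where p is player 2's bid.
def menu_for_player_1(v2):
    return (0, v2)          # (price of the empty bundle, price of the item)

distinct_menus = {menu_for_player_1(v2) for v2 in valuations}

# Taxation complexity: log of the maximum number of menus a player may face.
print(len(distinct_menus))              # V + 1 = 8 menus
print(math.log2(len(distinct_menus)))   # 3.0 bits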

    On the efficiency of computational imaging with structured illumination

    A generic computational imaging setup is considered which assumes sequential illumination of a semi-transparent object by an arbitrary set of structured illumination patterns. For each incident illumination pattern, all transmitted light is collected by a photon-counting bucket (single-pixel) detector. The transmission coefficients measured in this way are then used to reconstruct the spatial distribution of the object's projected transmission. It is demonstrated that the squared spatial resolution of such a setup is usually equal to the ratio of the image area to the number of linearly independent illumination patterns. If the noise in the measured transmission coefficients is dominated by photon shot noise, then the ratio of the spatially-averaged squared mean signal to the spatially-averaged noise variance in the "flat" distribution reconstructed in the absence of the object is equal to the average number of registered photons when the illumination patterns are orthogonal. The signal-to-noise ratio in a reconstructed transmission distribution is always lower in the case of non-orthogonal illumination patterns due to spatial correlations in the measured data. Examples of imaging methods relevant to the presented analysis include conventional imaging with a pixelated detector, computational ghost imaging, compressive sensing, super-resolution imaging and computed tomography. Comment: Minor corrections and clarifications compared to the original version.
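    The resolution argument above (number of resolvable pixels roughly equals the number of linearly independent patterns) can be checked numerically in the noiseless case. A minimal sketch, assuming a random orthonormal pattern set rather than any specific experimental patterns; sizes and names are illustrative:

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: object with N pixels, probed by K illumination patterns.
N, K = 64, 64
obj = rng.random(N)                      # projected transmission of the object

# Orthonormal illumination patterns (rows of Q). Real patterns are non-negative
# intensities; a random orthonormal set is enough to illustrate the counting.
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
patterns = Q[:K]

# Bucket (single-pixel) measurements: total transmitted signal per pattern.
measurements = patterns @ obj

# Back-projection reconstruction; exact when the K patterns are orthonormal and
# K = N, and only a K-dimensional projection of the object when K < N.
recon = patterns.T @ measurements
print(np.allclose(recon, obj))           # True for K = N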

    Computational efficiency of staggered Wilson fermions: A first look

    Results on the computational efficiency of 2-flavor staggered Wilson fermions compared to usual Wilson fermions in a quenched lattice QCD simulation on a $16^3\times32$ lattice at $\beta=6$ are reported. We compare the cost of inverting the Dirac matrix on a source by the conjugate gradient (CG) method for both of these fermion formulations, at the same pion masses, and without preconditioning. We find that the number of CG iterations required for convergence, averaged over the ensemble, is less by a factor of almost 2 for staggered Wilson fermions, with only a mild dependence on the pion mass. We also compute the condition number of the fermion matrix and find that it is less by a factor of 4 for staggered Wilson fermions. The cost per CG iteration, dominated by the cost of matrix-vector multiplication for the Dirac matrix, is known from previous work to be less by a factor of 2-3 for staggered Wilson compared to usual Wilson fermions. Thus we conclude that staggered Wilson fermions are 4-6 times cheaper for inverting the Dirac matrix on a source in the quenched backgrounds of our study. Comment: v2: Major correction and revisions: we had overlooked a factor 1/4 in the cost estimate for matrix-vector multiplication with the staggered Wilson Dirac matrix. This gives an increased speed-up by a factor 4 for the overall computation cost. 7 pages, 3 figures, presented at the 31st International Symposium on Lattice Field Theory (Lattice 2013), 29 July - 3 August 2013, Mainz, Germany.
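    The 4-6x figure at the end is simply the product of the two factors quoted earlier in the abstract. A back-of-the-envelope sketch restating that arithmetic (the numbers are taken from the abstract, not recomputed):

# Factors reported in the abstract (approximate).
fewer_cg_iterations = 2.0                            # ~2x fewer CG iterations
cheaper_matvec_low, cheaper_matvec_high = 2.0, 3.0   # 2-3x cheaper cost per iteration

print(fewer_cg_iterations * cheaper_matvec_low)      # ~4x overall
print(fewer_cg_iterations * cheaper_matvec_high)     # ~6x overall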

    A Broad-Spectrum Computational Approach for Market Efficiency

    The Efficient Market Hypothesis (EMH) is one of the most investigated questions in Finance. Nevertheless, it is still a puzzle, despite the enormous amount of research it has provoked. For instance, it is still debated whether the market can be outperformed in the long run (Detry and Gregoire, 2001), persistent market anomalies cannot be easily explained in this theoretical framework (Shiller, 2003), and some talented hedge-fund managers keep earning excess risk-adjusted rates of return regularly. We concentrate in this paper on the weak form of efficiency (Fama, 1970). We focus on the efficacy of simple technical trading rules, following a large research stream presented in Park and Irwin (2004). Nevertheless, we depart from previous works in many ways: we first use a large population of technical investment rules (more than 260,000) exploiting real-world data to manage a financial portfolio. Very few studies have used such a large amount of computation to examine the EMH. Our experimental design allows for strategy selection based on past absolute performance. We take into account the data-snooping risk, which is an unavoidable problem in such broad-spectrum research, using a rigorous Bootstrap Reality Check procedure. While market inefficiencies, after including transaction costs, cannot clearly be successfully exploited, our experiments present troubling outcomes inviting close reconsideration of the weak-form EMH. Keywords: efficient market hypothesis, large-scale simulations, bootstrap
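    To give a sense of what one member of such a rule population and a (much simplified) data-snooping check look like, here is a minimal sketch: a single moving-average crossover rule on a synthetic price series, compared against an i.i.d. bootstrap null. This is not the paper's 260,000-rule universe nor White's Reality Check itself; all parameters and names are illustrative:

import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily log-returns standing in for real market data (illustrative only).
returns = rng.normal(0.0, 0.01, size=2000)
prices = np.exp(np.cumsum(returns))

def ma_crossover_returns(prices, returns, short=10, long=50):
    """Long when the short moving average is above the long one, flat otherwise."""
    ma_s = np.convolve(prices, np.ones(short) / short, mode="valid")
    ma_l = np.convolve(prices, np.ones(long) / long, mode="valid")
    signal = (ma_s[-len(ma_l):] > ma_l).astype(float)
    return signal[:-1] * returns[long:]     # yesterday's signal applied to today's return

rule_mean = ma_crossover_returns(prices, returns).mean()

# Crude i.i.d. bootstrap null: reapply the same rule to resampled returns.
boot = []
for _ in range(500):
    r = rng.choice(returns, size=returns.size, replace=True)
    p = np.exp(np.cumsum(r))
    boot.append(ma_crossover_returns(p, r).mean())

p_value = np.mean(np.array(boot) >= rule_mean)
print(rule_mean, p_value)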

    Computational Efficiency in Bayesian Model and Variable Selection

    This paper is concerned with the efficient implementation of Bayesian model averaging (BMA) and Bayesian variable selection, when the number of candidate variables and models is large, and estimation of posterior model probabilities must be based on a subset of the models. Efficient implementation is concerned with two issues, the efficiency of the MCMC algorithm itself and efficient computation of the quantities needed to obtain a draw from the MCMC algorithm. For the first aspect, it is desirable that the chain moves well and quickly through the model space and takes draws from regions with high probabilities. In this context there is a natural trade-off between local moves, which make use of the current parameter values to propose plausible values for model parameters, and more global transitions, which potentially allow exploration of the distribution of interest in fewer steps, but where each step is more computationally intensive. We assess the convergence properties of simple samplers based on local moves and some recently proposed algorithms intended to improve on the basic samplers. For the second aspect, efficient computation within the sampler, we focus on the important case of linear models where the computations essentially reduce to least squares calculations. When the chain makes local moves, adding or dropping a variable, substantial gains in efficiency can be made by updating the previous least squares solution.
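    A minimal sketch of the kind of local add/drop sampler discussed above, for a linear model. It uses a BIC-style score as a stand-in for the exact posterior model probability, an assumption made only to keep the example self-contained; the data and parameters are synthetic:

import numpy as np

rng = np.random.default_rng(2)

# Synthetic data (illustrative): only the first 3 of p candidate variables matter.
n, p = 200, 15
X = rng.standard_normal((n, p))
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + rng.standard_normal(n)

def log_model_score(gamma):
    """BIC-style approximation to the log marginal likelihood of model gamma."""
    k = int(gamma.sum())
    if k == 0:
        rss = float(y @ y)
    else:
        Xg = X[:, gamma]
        beta, *_ = np.linalg.lstsq(Xg, y, rcond=None)
        resid = y - Xg @ beta
        rss = float(resid @ resid)
    return -0.5 * (n * np.log(rss / n) + k * np.log(n))

# Local-move Metropolis-Hastings: each step proposes adding or dropping one variable.
gamma = np.zeros(p, dtype=bool)
current = log_model_score(gamma)
inclusion = np.zeros(p)
iters = 5000
for _ in range(iters):
    j = rng.integers(p)
    proposal = gamma.copy()
    proposal[j] = ~proposal[j]
    new = log_model_score(proposal)
    if np.log(rng.random()) < new - current:   # symmetric proposal, simple MH ratio
        gamma, current = proposal, new
    inclusion += gamma

print(np.round(inclusion / iters, 2))   # high inclusion frequency for the first 3 variables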

    Computational Efficiency in Bayesian Model and Variable Selection

    Large scale Bayesian model averaging and variable selection exercises present, despite the great increase in desktop computing power, considerable computational challenges. Due to the large scale it is impossible to evaluate all possible models, and estimates of posterior probabilities are instead obtained from stochastic (MCMC) schemes designed to converge on the posterior distribution over the model space. While this frees us from the requirement of evaluating all possible models, the computational effort is still substantial and efficient implementation is vital. Efficient implementation is concerned with two issues: the efficiency of the MCMC algorithm itself and efficient computation of the quantities needed to obtain a draw from the MCMC algorithm. We evaluate several different MCMC algorithms and find that relatively simple algorithms with local moves perform competitively, except possibly when the data is highly collinear. For the second aspect, efficient computation within the sampler, we focus on the important case of linear models where the computations essentially reduce to least squares calculations. Least squares solvers that update a previous model estimate are appealing when the MCMC algorithm makes local moves, and we find that the Cholesky update is both fast and accurate. Keywords: Bayesian Model Averaging; Sweep operator; Cholesky decomposition; QR decomposition; Swendsen-Wang algorithm
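    A minimal sketch of the Cholesky-update idea mentioned at the end: when a local move adds one variable, the factor of X'X is extended by a single triangular solve instead of being recomputed from scratch. The data and shapes below are illustrative; dropping a variable would be handled analogously by a downdate:

import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(3)
n = 100
X = rng.standard_normal((n, 5))
x_new = rng.standard_normal(n)

# Current lower-triangular Cholesky factor of X'X.
L = np.linalg.cholesky(X.T @ X)

# Extend the factor for the augmented design [X, x_new] without refactorizing:
# one forward substitution and one square root.
w = solve_triangular(L, X.T @ x_new, lower=True)
d = np.sqrt(x_new @ x_new - w @ w)
L_new = np.block([[L, np.zeros((L.shape[0], 1))],
                  [w[None, :], np.array([[d]])]])

# Check against a full refactorization of the augmented Gram matrix.
X_aug = np.column_stack([X, x_new])
print(np.allclose(L_new, np.linalg.cholesky(X_aug.T @ X_aug)))   # True

    The update costs O(k^2) per added variable rather than the O(k^3) of a fresh factorization, which is what makes local add/drop moves cheap inside the sampler.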

    Forward Flux Sampling-type schemes for simulating rare events: Efficiency analysis

    We analyse the efficiency of several simulation methods which we have recently proposed for calculating rate constants for rare events in stochastic dynamical systems, in or out of equilibrium. We derive analytical expressions for the computational cost of using these methods, and for the statistical error in the final estimate of the rate constant, for a given computational cost. These expressions can be used to determine which method to use for a given problem, to optimize the choice of parameters, and to evaluate the significance of the results obtained. We apply the expressions to the two-dimensional non-equilibrium rare event problem proposed by Maier and Stein. For this problem, our analysis gives accurate quantitative predictions for the computational efficiency of the three methods. Comment: 19 pages, 13 figures
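    For orientation, here is a minimal sketch of the basic direct forward-flux-sampling rate estimate that such efficiency analyses apply to, on a toy one-dimensional overdamped double-well system. The dynamics, interfaces, and parameters are all illustrative assumptions, not the Maier-Stein problem studied in the paper:

import numpy as np

rng = np.random.default_rng(4)

# Toy overdamped dynamics in a double-well potential V(x) = (x^2 - 1)^2.
dt, temp = 1e-3, 0.3
def step(x):
    force = -4.0 * x * (x * x - 1.0)
    return x + force * dt + np.sqrt(2.0 * temp * dt) * rng.standard_normal()

lam_A = -0.8                                 # boundary of the initial basin A
interfaces = [-0.7, -0.3, 0.1, 0.5, 0.9]     # last interface defines basin B

# Stage 1: long run around A to estimate the flux through the first interface.
x, t_total, crossings, configs = -1.0, 0.0, 0, []
in_A = True
for _ in range(500_000):
    x_new = step(x)
    t_total += dt
    if x_new < lam_A:
        in_A = True
    if in_A and x < interfaces[0] <= x_new:  # upward crossing coming from A
        crossings += 1
        configs.append(x_new)
        in_A = False
    x = x_new
flux = crossings / t_total

# Stage 2: trial runs interface by interface; each trial either reaches the
# next interface (success) or falls back into A (failure).
probs = []
for i in range(len(interfaces) - 1):
    successes, new_configs, trials = 0, [], 200
    for _ in range(trials):
        x = configs[rng.integers(len(configs))]
        while lam_A < x < interfaces[i + 1]:
            x = step(x)
        if x >= interfaces[i + 1]:
            successes += 1
            new_configs.append(x)
    probs.append(successes / trials)
    if not new_configs:
        break
    configs = new_configs

rate = flux * np.prod(probs)                 # k_AB = flux * product of crossing probabilities
print(flux, probs, rate)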

    Some Incipient Techniques For Improving Efficiency in Computational Mechanics

    This contribution presents a review of different techniques available for alleviating the simulation cost in computational mechanics. The first is based on a separated representation of the unknown fields; the second uses model reduction based on the Karhunen-Loève decomposition within an adaptive scheme; and the last is a mixed technique specially adapted for reducing models involving local singularities. These techniques can be applied to a large variety of models.
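    The Karhunen-Loève (proper orthogonal decomposition) reduction mentioned above can be sketched in a few lines: snapshots of a full solution are compressed into a handful of modes that then serve as a reduced basis. The toy heat-equation model below is an illustrative assumption, not one of the paper's examples:

import numpy as np

# Full model: 1D heat equation u_t = u_xx, explicit finite differences (illustrative).
nx, nt, dx, dt = 100, 400, 1.0 / 99, 2e-5
x = np.linspace(0.0, 1.0, nx)
u = np.exp(-200.0 * (x - 0.3) ** 2)          # initial condition
A = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
     + np.diag(np.ones(nx - 1), -1)) / dx**2
A[0, :] = A[-1, :] = 0.0                     # crude fixed boundaries

snapshots = [u.copy()]
for _ in range(nt):
    u = u + dt * (A @ u)
    snapshots.append(u.copy())
S = np.column_stack(snapshots)

# Karhunen-Loeve / POD basis from the snapshot matrix.
U, s, _ = np.linalg.svd(S, full_matrices=False)
r = 5                                        # keep a handful of dominant modes
Phi = U[:, :r]

# Reduced-order model: project the operator and redo the time integration in r unknowns.
A_r = Phi.T @ A @ Phi
a = Phi.T @ snapshots[0]
for _ in range(nt):
    a = a + dt * (A_r @ a)

# Relative error of the reduced solution against the full one (should be small).
print(np.linalg.norm(Phi @ a - S[:, -1]) / np.linalg.norm(S[:, -1]))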

    Improving the Computational Efficiency in Symmetrical Numeric Constraint Satisfaction Problems

    Models are used in science and engineering for experimentation, analysis, diagnosis or design. In some cases, they can be considered as numeric constraint satisfaction problems (NCSP). Many models are symmetrical NCSP. Taking symmetries into account ensures that an NCSP solver will find solutions, if they exist, on a smaller search space. Our work proposes a strategy to do so: we transform the symmetrical NCSP into a new NCSP by adding symmetry-breaking constraints before the search begins. The specification of a library of possible symmetries for numeric constraints allows an easy choice of these new constraints. The summarized results of the studied cases show the suitability of symmetry-breaking constraints for improving the solving process of certain types of symmetrical NCSP. The resulting speedup facilitates modelling and solving larger and more realistic problems. Ministerio de Ciencia y Tecnología DIP2003-0666-02-
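    A minimal sketch of the symmetry-breaking idea on a toy symmetric numeric problem: adding an ordering constraint removes permutation-symmetric duplicates from the search space. The brute-force grid solver and the constraints are illustrative, not the paper's library or benchmark problems:

from itertools import product

# Toy symmetric NCSP: x + y = 10 and x * y = 21 over a discretized domain.
# The constraints are invariant under swapping x and y.
domain = range(0, 11)

def solve(break_symmetry):
    solutions, visited = [], 0
    for x, y in product(domain, repeat=2):
        if break_symmetry and not (x <= y):   # symmetry-breaking constraint
            continue
        visited += 1
        if x + y == 10 and x * y == 21:
            solutions.append((x, y))
    return solutions, visited

print(solve(break_symmetry=False))   # ([(3, 7), (7, 3)], 121) -- full search space
print(solve(break_symmetry=True))    # ([(3, 7)], 66)          -- roughly halved

    Each solution of the reduced problem represents a whole class of symmetric solutions of the original one, which is why the smaller search space loses no information.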

    Optimizing the remeshing procedure by computational cost estimation of adaptive fem technique

    The objective of adaptive techniques is to obtain a mesh which is optimal in the sense that the computational costs involved are minimal, under the constraint that the error in the finite element solution is acceptable within a certain limit. But the adaptive FEM procedure imposes an extra computational cost on the solution. If we repeat the adaptive process without any limit, it will reduce the efficiency of the remeshing procedure. Sometimes it is better to start from a very fine initial mesh instead of using multilevel mesh refinement. It is therefore necessary to estimate the computational cost of the adaptive finite element technique and compare it with the computational cost of a direct FEM solution. The remeshing procedure can be optimized by balancing these computational costs.
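    A minimal sketch of the cost comparison argued for above: a crude cost model for several adaptive levels (solve, error estimation and remeshing at each level) versus a single solve on an initially fine mesh. All constants, exponents and mesh sizes below are assumptions made for illustration, not measured values from the paper:

# Hypothetical cost model: a sparse solve scales like N^1.5, and error
# estimation plus remeshing add a linear overhead per adaptive level.
c_solve, c_adapt = 1.0, 0.5

def adaptive_cost(n_start, growth, levels):
    """Solve + estimate + remesh on a sequence of adaptively refined meshes."""
    sizes = [n_start * growth**i for i in range(levels)]
    return sum(c_solve * n**1.5 + c_adapt * n for n in sizes), sizes[-1]

def direct_cost(n_fine):
    """A single solve on an initially (uniformly) fine mesh."""
    return c_solve * n_fine**1.5

cost_adaptive, n_final = adaptive_cost(n_start=1_000, growth=2.0, levels=5)

# A uniform mesh typically needs more unknowns than the final adaptive mesh to
# reach the same accuracy; how many more decides which strategy is cheaper.
for factor in (1.2, 4.0):
    print(factor, cost_adaptive, direct_cost(factor * n_final))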