
    Obvious strategyproofness needs monitoring for good approximations (extended abstract)

    Obvious strategyproofness (OSP) is an appealing concept as it makes it possible to maintain incentive compatibility even in the presence of agents that are not fully rational, e.g., those who struggle with contingent reasoning [10]. However, it has been shown to impose some limitations, e.g., no OSP mechanism can return a stable matching [3]. We here deepen the study of the limitations of OSP mechanisms by looking at their approximation guarantees for basic optimization problems paradigmatic of the area, i.e., machine scheduling and facility location. We prove a number of bounds on the approximation guarantee of OSP mechanisms, which show that OSP can come at a significant cost. However, rather surprisingly, we prove that OSP mechanisms can return optimal solutions when they use monitoring, a mechanism design paradigm that introduces a mild level of scrutiny on agents' declarations [9].

    Obvious strategyproofness needs monitoring for good approximations

    Obvious strategyproofness (OSP) is an appealing concept as it makes it possible to maintain incentive compatibility even in the presence of agents that are not fully rational, e.g., those who struggle with contingent reasoning (Li 2015). However, it has been shown to impose some limitations, e.g., no OSP mechanism can return a stable matching (Ashlagi and Gonczarowski 2015). We here deepen the study of the limitations of OSP mechanisms by looking at their approximation guarantees for basic optimization problems paradigmatic of the area, i.e., machine scheduling and facility location. We prove a number of bounds on the approximation guarantee of OSP mechanisms, which show that OSP can come at a significant cost. However, rather surprisingly, we prove that OSP mechanisms can return optimal solutions when they use monitoring, a novel mechanism design paradigm that introduces a mild level of scrutiny on agents' declarations (Kovács, Meyer, and Ventre 2015).
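
    The abstract does not spell out a mechanism, but personal-clock auctions are the canonical examples of OSP mechanisms in Li's sense: at every price an active agent only compares staying with quitting, with no contingent reasoning about the others. A minimal sketch of a descending-clock procurement auction (illustrative only; this is not the paper's machine-scheduling or facility-location mechanism, and the price/decrement parameters are arbitrary):

```python
def descending_clock_auction(costs, start_price, decrement):
    """Descending personal-clock procurement auction, a textbook OSP mechanism.

    `costs` maps agent -> true cost and is used here only to simulate the
    obviously dominant truthful strategy: quit as soon as the offered price
    no longer covers your cost.
    """
    price = start_price
    active = set(costs)
    while len(active) > 1 and price > 0:
        price -= decrement
        for agent in list(active):
            # truthful agents drop out once the price falls below their cost
            if costs[agent] > price and len(active) > 1:
                active.remove(agent)
    winner = next(iter(active))
    return winner, price  # the survivor is paid the price at which the clock stopped

# toy run: the lowest-cost agent wins at (roughly) the second-lowest cost
print(descending_clock_auction({"a": 3.0, "b": 5.0, "c": 7.0}, start_price=10.0, decrement=0.5))
```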

    Social Pressure in Opinion Games

    Motivated by privacy and security concerns in online social networks, we study the role of social pressure in opinion games. These are games, important in economics and sociology, that model the formation of opinions in a social network. We enrich the definition of (noisy) best-response dynamics for opinion games by introducing a pressure, increasing with time, to reach an agreement. We prove that for clique social networks the dynamics always converges to consensus (no matter the level of noise) if the social pressure is high enough. Moreover, we provide (tight) bounds on the speed of convergence; these bounds are polynomial in the number of players provided that the pressure grows sufficiently fast. We finally look beyond cliques: we characterize the graphs for which consensus is guaranteed, and discuss the computational complexity of checking whether a graph satisfies such a condition.
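
    As a concrete illustration of the kind of dynamics studied here, the sketch below simulates noisy best-response (logit) updates on a clique with binary opinions and a social-pressure weight that grows with time; the specific cost function, pressure schedule, and noise level are illustrative assumptions, not the paper's model.

```python
import math
import random

def noisy_best_response_clique(internal, rounds=10_000, pressure=lambda t: 0.001 * t, noise=1.0):
    """Toy noisy best-response dynamics on a clique with rising social pressure.

    A random player updates its declared opinion; its cost is disagreement
    with its own internal belief plus a time-increasing pressure weight on
    disagreement with every other player.  Updates follow the logit rule.
    """
    n = len(internal)
    declared = list(internal)              # start by declaring the internal opinion
    for t in range(rounds):
        i = random.randrange(n)
        beta_t = pressure(t)
        costs = []
        for x in (0, 1):
            disagreement = sum(1 for j in range(n) if j != i and declared[j] != x)
            costs.append((internal[i] != x) + beta_t * disagreement)
        weights = [math.exp(-c / noise) for c in costs]   # lower cost -> more likely
        declared[i] = random.choices((0, 1), weights=weights)[0]
        if len(set(declared)) == 1:                       # consensus reached
            return declared, t
    return declared, rounds

print(noisy_best_response_clique([0, 0, 1, 1, 1]))
```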

    On Augmented Stochastic Submodular Optimization: Adaptivity, Multi-Rounds, Budgeted, and Robustness

    In this work we consider the problem of Stochastic Submodular Maximization, in which we would like to maximize the value of a monotone and submodular objective function, subject to the fact that the values of this function depend on the realization of stochastic events. This problem has applications in several areas; in particular, it provides a good model for basic problems such as influence maximization and stochastic probing. We advocate extending the study of this problem to include several additional features, such as a budget constraint on the number of observations, the ability to adaptively choose what we observe, or the presence of multiple rounds. We here speculate on the possible directions that this line of research can take. In particular, we discuss interesting open problems, mainly in the settings of robust optimization and online learning.
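
    The adaptive, budgeted setting the abstract advocates can be made concrete with a standard adaptive-greedy loop: pick the item with the largest expected marginal gain given what has been observed so far, observe its realization, and repeat until the observation budget is spent. The callbacks `expected_gain` and `observe` are hypothetical placeholders for a problem-specific model (e.g., edge activations in influence maximization), not an API from the paper.

```python
def adaptive_greedy(items, expected_gain, observe, budget):
    """Adaptive greedy sketch for budgeted stochastic submodular maximization.

    expected_gain(item, observations) -> expected marginal value of `item`
        conditioned on the realizations observed so far.
    observe(item) -> realized stochastic state of `item` (this is the adaptivity).
    """
    chosen, observations = [], {}
    remaining = set(items)
    for _ in range(budget):
        if not remaining:
            break
        best = max(remaining, key=lambda it: expected_gain(it, observations))
        chosen.append(best)
        observations[best] = observe(best)   # condition the next choice on this outcome
        remaining.remove(best)
    return chosen, observations
```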

    General Opinion Formation Games with Social Group Membership (Short Paper)

    Modeling how agents form their opinions is of paramount importance for designing marketing and electoral campaigns. In this work, we present a new framework for opinion formation which generalizes the well-known Friedkin-Johnsen model by incorporating three important features: (i) social group membership, which limits the amount of influence that people not belonging to the same group may exert on a given agent; (ii) both attraction among friends and repulsion among enemies; (iii) different strengths of influence exerted by different people on a given agent, even if the social relationships among them are the same. We show that, despite its generality, our model always admits a pure Nash equilibrium which, under suitable mild conditions, is even unique. Next, we analyze the performance of these equilibria with respect to a social objective function defined as a convex combination, parametrized by a value λ ∈ [0, 1], of the costs yielded by the untruthfulness of the declared opinions and the total cost of social pressure. We prove bounds on both the price of anarchy and the price of stability which show that, for not-too-extreme values of λ, performance at equilibrium is very close to optimal. For instance, in several interesting scenarios, the prices of anarchy and stability are both equal to (Equation presented), which never exceeds 2 for λ ∈ [1/5, 1/2].
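
    For reference, the baseline the paper generalizes is the Friedkin-Johnsen model, where agent i's cost is (z_i - s_i)^2 + sum_j w_ij (z_i - z_j)^2 and repeated best responses converge to the unique pure Nash equilibrium. The sketch below covers only this baseline (no groups, repulsion, or heterogeneous influence), with illustrative weights.

```python
import numpy as np

def fj_equilibrium(internal, weights, iters=1000, tol=1e-10):
    """Best-response iteration for the classical Friedkin-Johnsen model.

    Each agent's best response is a weighted average of its internal opinion
    and the neighbours' expressed opinions; iterating converges to the unique
    pure Nash equilibrium of this baseline model.
    """
    s = np.asarray(internal, dtype=float)
    w = np.asarray(weights, dtype=float)    # nonnegative influence weights, zero diagonal
    z = s.copy()
    for _ in range(iters):
        z_new = (s + w @ z) / (1.0 + w.sum(axis=1))
        if np.max(np.abs(z_new - z)) < tol:
            return z_new
        z = z_new
    return z

# toy usage: three agents on a path graph
print(fj_equilibrium([0.0, 0.5, 1.0], [[0, 1, 0], [1, 0, 1], [0, 1, 0]]))
```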

    Seismic retrofit of an existing reinforced concrete building with buckling-restrained braces

    Background: The seismic retrofitting of frame structures using hysteretic dampers is a very effective strategy to mitigate earthquake-induced risks. However, its application in current practice is rather limited, since simple and efficient design methods are still lacking, and the more accurate time-history analysis is time-consuming and computationally demanding. Aims: This paper develops and applies a seismic retrofit design method to a complex real case study: an eight-story reinforced concrete residential building equipped with buckling-restrained braces. Methods: The design method allows the peak seismic response to be predicted and the dampers to be added to the structure so as to obtain a uniform distribution of the ductility demand. For that purpose, a pushover analysis with the first-mode load pattern is carried out. The corresponding story pushover curves are first idealized using a degrading trilinear model and then used to define the SDOF (Single Degree-of-Freedom) system equivalent to the RC frame. The SDOF system equivalent to the damped braces is designed to meet performance criteria based on a target drift angle. An optimal damper distribution rule is used to distribute the damped braces along the elevation, so as to maximize the use of all dampers and obtain a uniform distribution of the ductility demand. Results: The effectiveness of the seismic retrofit is finally demonstrated by non-linear time-history analysis using a set of earthquake ground motions with various hazard levels. Conclusion: The results prove that the design procedure is feasible and effective, since it achieves the performance objectives of damage control in structural members and uniform ductility demand in dampers.
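
    One step of the Methods that can be shown compactly is the conversion of the building's pushover curve to an equivalent SDOF system via the first-mode participation factor. The sketch below is a generic, textbook (N2-style) transformation under the assumption that the mode shape is normalized at the roof; it is not the paper's degrading-trilinear idealization or its damper-distribution rule, and all names are illustrative.

```python
import numpy as np

def equivalent_sdof(base_shear, roof_disp, phi, masses):
    """Convert an MDOF pushover curve to an equivalent SDOF curve.

    phi    : first-mode shape, normalized so the roof component equals 1
    masses : story masses
    Returns the SDOF force/displacement arrays and the participation factor.
    """
    phi = np.asarray(phi, dtype=float)
    m = np.asarray(masses, dtype=float)
    gamma = float(m @ phi) / float(m @ phi**2)     # modal participation factor
    f_star = np.asarray(base_shear, dtype=float) / gamma
    d_star = np.asarray(roof_disp, dtype=float) / gamma
    return f_star, d_star, gamma
```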

    Bayesian parameter estimation in the second LISA Pathfinder Mock Data Challenge

    A main scientific output of the LISA Pathfinder mission is to provide a noise model that can be extended to the future gravitational wave observatory, LISA. The success of the mission thus depends upon a deep understanding of the instrument, especially the ability to correctly determine the parameters of the underlying noise model. In this work we estimate the parameters of a simplified model of the LISA Technology Package (LTP) instrument. We describe the LTP by means of a closed-loop model that is used to generate the data, both injected signals and noise. Then, parameters are estimated using a Bayesian framework and it is shown that this method reaches the optimal attainable error, the Cramér-Rao bound. We also address an important issue for the mission: how to efficiently combine the results of different experiments to obtain a unique set of parameters describing the instrument.
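
    The Cramér-Rao benchmark mentioned above can be computed from the Fisher information of the signal model. The sketch below does this numerically for a generic signal in white Gaussian noise, where the Fisher matrix is (1/sigma^2) J^T J with J the Jacobian of the model with respect to the parameters; it assumes white noise and a toy sinusoid model, not the LTP closed-loop model used in the paper.

```python
import numpy as np

def cramer_rao_bound(model, theta, sigma, eps=1e-6):
    """Numerical Cramér-Rao bound for a parametrized signal in white Gaussian noise.

    model(theta) returns the noise-free signal samples; the inverse Fisher
    matrix lower-bounds the covariance of any unbiased estimator, which is
    the benchmark the Bayesian estimates are compared against.
    """
    theta = np.asarray(theta, dtype=float)
    base = np.asarray(model(theta), dtype=float)
    jac = np.empty((base.size, theta.size))
    for k in range(theta.size):
        step = np.zeros_like(theta)
        step[k] = eps
        jac[:, k] = (np.asarray(model(theta + step)) - base) / eps   # finite differences
    fisher = jac.T @ jac / sigma**2
    return np.linalg.inv(fisher)

# toy usage: sinusoid with unknown amplitude and angular frequency
t = np.linspace(0.0, 10.0, 1000)
crb = cramer_rao_bound(lambda p: p[0] * np.sin(p[1] * t), theta=[1.0, 2.0], sigma=0.1)
print(np.sqrt(np.diag(crb)))   # lower bounds on the parameter standard deviations
```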

    Accelerating global parameter estimation of gravitational waves from Galactic binaries using a genetic algorithm and GPUs

    The Laser Interferometer Space Antenna (LISA) is a planned space-based gravitational wave telescope with the goal of measuring gravitational waves in the milli-Hertz frequency band, which is dominated by millions of Galactic binaries. While some of these binaries produce signals that are loud enough to stand out and be extracted, most of them blur into a confusion foreground. Current methods for analyzing the full frequency band recorded by LISA, so as to extract as many Galactic binaries as possible and obtain Bayesian posterior distributions for each of the signals, are computationally expensive. We introduce a new approach to accelerate the extraction of the best-fitting solutions for Galactic binaries across the entire frequency band from data with multiple overlapping signals. Furthermore, we use these best-fitting solutions to omit the burn-in stage of a Markov chain Monte Carlo method and to take full advantage of GPU-accelerated signal simulation, allowing us to compute posterior distributions in 2 seconds per signal on a laptop-grade GPU.
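
    The burn-in shortcut described here amounts to seeding the sampler at the best-fitting solution found by the global search. A minimal sketch of that idea with a plain Metropolis sampler (the genetic-algorithm search, waveform model, and proposal scale are all assumed to exist elsewhere and are placeholders):

```python
import numpy as np

def metropolis_from_best_fit(log_post, best_fit, n_samples, step=0.01, rng=None):
    """Metropolis sampler started at a precomputed best-fit point.

    Because the chain starts in the bulk of the posterior (at the best fit
    supplied by the global search), the usual burn-in phase is dropped.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(best_fit, dtype=float)
    lp = log_post(x)
    samples = []
    for _ in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)   # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:        # accept/reject
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)
```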

    Metastability of Asymptotically Well-Behaved Potential Games

    One of the main criticisms of game theory concerns the assumption of full rationality. Logit dynamics is a decentralized algorithm in which a level of irrationality (a.k.a. "noise") is introduced into players' behavior. In this context, the solution concept of interest becomes the logit equilibrium, as opposed to Nash equilibria. Logit equilibria are distributions over strategy profiles that possess several nice properties, including existence and uniqueness. However, there are games in which their computation may take time exponential in the number of players. We therefore look at an approximate version of logit equilibria, called metastable distributions, introduced by Auletta et al. [SODA 2012]. These are distributions that remain stable (i.e., players do not go too far from them) for a super-polynomial number of steps (rather than forever, as for logit equilibria). The hope is that these distributions exist and can be reached quickly by logit dynamics. We identify a class of potential games, called asymptotically well-behaved, for which the behavior of the logit dynamics is not chaotic as the number of players increases, so as to guarantee meaningful asymptotic results. We prove that any such game admits distributions which are metastable no matter the level of noise present in the system or the starting profile of the dynamics. These distributions can be quickly reached if the rationality level is not too large compared to the inverse of the maximum difference in potential. Our proofs build on results which may be of independent interest, including some spectral characterizations of the transition matrix defined by logit dynamics for generic games and the relationship between several convergence measures for Markov chains.
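
    For readers unfamiliar with logit dynamics, one update step works as follows: a player is chosen uniformly at random and resamples its strategy with probability proportional to exp(-beta * cost), where beta is the rationality (inverse noise) level; beta -> 0 gives uniform random play, and beta -> infinity recovers best response. A minimal sketch, where `cost(i, profile)` is a hypothetical placeholder for the game's cost function:

```python
import math
import random

def logit_dynamics_step(profile, strategies, cost, beta):
    """One step of logit dynamics for a game given by a cost function."""
    i = random.randrange(len(profile))                    # pick a player uniformly at random
    weights = []
    for s in strategies[i]:
        trial = list(profile)
        trial[i] = s
        weights.append(math.exp(-beta * cost(i, trial)))  # lower cost -> exponentially more likely
    new_profile = list(profile)
    new_profile[i] = random.choices(strategies[i], weights=weights)[0]
    return new_profile
```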

    Bayesian parameter-estimation of Galactic binaries in LISA data with Gaussian Process Regression

    The Laser Interferometer Space Antenna (LISA), which is currently under construction, is designed to measure gravitational wave signals in the milli-Hertz frequency band. It is expected that tens of millions of Galactic binaries will be the dominant sources of observed gravitational waves. The Galactic binaries producing signals in the mHz frequency range emit quasi-monochromatic gravitational waves, which will be constantly measured by LISA. Resolving as many Galactic binaries as possible is a central challenge of the upcoming LISA data analysis. Although it is estimated that tens of thousands of these overlapping gravitational wave signals are resolvable, while the rest blur into a Galactic foreground noise, extracting tens of thousands of signals using Bayesian approaches is still computationally expensive. We developed a new end-to-end pipeline that uses Gaussian Process Regression to model the log-likelihood function in order to rapidly compute Bayesian posterior distributions. Using the pipeline we are able to solve the LISA Data Challenge (LDC) 1-3, which consists of noisy data, as well as additional challenges with overlapping signals and particularly faint signals.
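
    A minimal sketch of the surrogate idea, using scikit-learn's Gaussian process regression to interpolate expensive log-likelihood evaluations so that an MCMC sampler can query them cheaply; the kernel, training design, and the quadratic toy likelihood are illustrative assumptions, not the pipeline's actual settings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_loglike_surrogate(log_like, samples):
    """Fit a GP surrogate to log-likelihood values evaluated at `samples`."""
    X = np.atleast_2d(samples)
    y = np.array([log_like(x) for x in X])               # one expensive evaluation per point
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(X, y)
    return lambda x: gp.predict(np.atleast_2d(x))[0]     # cheap approximate log-likelihood

# toy usage with a quadratic "log-likelihood" in two parameters
rng = np.random.default_rng(0)
train = rng.uniform(-1.0, 1.0, size=(50, 2))
surrogate = fit_loglike_surrogate(lambda p: -np.sum(p**2), train)
print(surrogate([0.1, -0.2]))
```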