
    Privacy and Truthful Equilibrium Selection for Aggregative Games

    We study a very general class of games, multi-dimensional aggregative games, which in particular generalize both anonymous games and weighted congestion games. For any such game that is also large, we solve the equilibrium selection problem in a strong sense. In particular, we give an efficient weak mediator: a mechanism which has only the power to listen to reported types and provide non-binding suggested actions, such that (a) it is an asymptotic Nash equilibrium for every player to truthfully report their type to the mediator and then follow its suggested action; and (b) when players do so, they end up coordinating on a particular asymptotic pure strategy Nash equilibrium of the induced complete information game. In fact, truthful reporting is an ex-post Nash equilibrium of the mediated game, so our solution applies even in settings of incomplete information, and even when player types are arbitrary or worst-case (i.e., not drawn from a common prior). We achieve this by giving an efficient differentially private algorithm for computing a Nash equilibrium in such games. The rates of convergence to equilibrium in all of our results are inverse polynomial in the number of players n. We also apply our main results to a multi-dimensional market game. Our results can be viewed as giving, for a rich class of games, a more robust version of the Revelation Principle, in that we work with weaker informational assumptions (no common prior), yet provide a stronger solution concept (ex-post Nash versus Bayes Nash equilibrium). In comparison to previous work, our main conceptual contribution is showing that weak mediators are a game-theoretic object that exists in a wide variety of games; previously, they were only known to exist in traffic routing games.
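    The paper's mechanism is not reproduced in the abstract, but its core device, computing an equilibrium while releasing only a differentially private aggregate, can be sketched in a toy one-dimensional aggregative game. Everything below (the quadratic cost, the Laplace noise scale, the best-response iteration) is an illustrative assumption, not the authors' algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def best_response(theta, aggregate):
        """Player of type theta facing aggregate A minimizes the toy
        aggregative cost (a - theta)^2 + a * A, giving a = theta - A/2."""
        return np.clip(theta - aggregate / 2.0, 0.0, 1.0)

    def noisy_br_dynamics(types, epsilon=1.0, rounds=50):
        """Best-response dynamics against a Laplace-perturbed aggregate.

        A single player's action moves the mean aggregate by at most 1/n,
        so Laplace(1/(n*epsilon)) noise on the released aggregate gives
        epsilon-differential privacy per round (toy accounting only).
        In a large game the noise vanishes and play stabilizes.
        """
        n = len(types)
        actions = types.copy()
        for _ in range(rounds):
            aggregate = actions.mean() + rng.laplace(scale=1.0 / (n * epsilon))
            actions = best_response(types, aggregate)
        return actions

    types = rng.uniform(0.0, 1.0, size=10_000)
    eq = noisy_br_dynamics(types)
    print("aggregate at (approximate) equilibrium:", eq.mean())
    ```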

    Will they take this offer? A machine learning price elasticity model for predicting upselling acceptance of premium airline seating

    Employing customer information from one of the world's largest airline companies, we develop a price elasticity model (PREM) using machine learning to identify customers likely to purchase an upgrade offer from economy to premium class and to predict a customer's acceptable price range. A simulation of 64.3 million flight bookings and 14.1 million email offers over three years mirroring actual data indicates that PREM implementation results in approximately 1.12 million (7.94%) fewer non-relevant customer email messages, a predicted increase of 72,200 (37.2%) offers accepted, and an estimated $72.2 million (37.2%) of increased revenue. Our results illustrate the potential of automated pricing information and targeted marketing messages for upselling acceptance. We also identified three customer segments: (1) Never Upgrades are those who never take the upgrade offer, (2) Upgrade Lovers are those who generally upgrade, and (3) Upgrade Lover Lookalikes have no historical record but fit the profile of those who tend to upgrade. We discuss the implications for airline companies and related travel and tourism industries. © 2023 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
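    The abstract does not specify PREM's internals; as a minimal sketch of the two components it describes, an acceptance classifier and a per-customer acceptable price range, one could proceed as below. The features, the gradient-boosting model, and the 0.5 acceptance threshold are all assumptions for illustration:

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(1)

    # Toy stand-ins for booking features: [past_upgrades, trip_length_days]
    # plus the offered upgrade price as the final column.
    n = 20_000
    X = np.column_stack([
        rng.poisson(0.5, n),          # past premium upgrades
        rng.integers(1, 15, n),       # trip length in days
        rng.uniform(50, 400, n),      # offered upgrade price (USD)
    ])
    # Synthetic ground truth: acceptance falls with price, rises with history.
    logit = 0.8 * X[:, 0] + 0.05 * X[:, 1] - 0.015 * X[:, 2] + 1.0
    y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

    model = GradientBoostingClassifier().fit(X, y)

    def acceptable_price_range(customer, prices=np.arange(50, 401, 10),
                               threshold=0.5):
        """Scan candidate prices and return the range over which the
        predicted acceptance probability stays above the threshold."""
        grid = np.column_stack([np.tile(customer, (len(prices), 1)), prices])
        probs = model.predict_proba(grid)[:, 1]
        ok = prices[probs >= threshold]
        return (ok.min(), ok.max()) if len(ok) else None

    print(acceptable_price_range(np.array([2, 7])))  # a frequent upgrader
    ```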

    Dynamic Pricing with Volume Discounts in Online Settings

    Challenges of Big Data Analysis

    Big Data bring new opportunities to modern society and challenges to data scientists. On the one hand, Big Data hold great promise for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require new computational and statistical paradigms. This article gives an overview of the salient features of Big Data and how these features drive a paradigm change in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in a high-confidence set and point out that exogenous assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and, consequently, wrong scientific conclusions.
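    The spurious-correlation point is easy to see numerically: even when the response is pure noise, the largest sample correlation against p independent predictors grows with p. A minimal simulation (the dimensions and sample size below are chosen arbitrarily):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 60  # sample size

    for p in (100, 1_000, 10_000):
        X = rng.standard_normal((n, p))   # predictors, independent of y
        y = rng.standard_normal(n)        # response: pure noise
        # Sample correlation of each column of X with y.
        Xc = (X - X.mean(0)) / X.std(0)
        yc = (y - y.mean()) / y.std()
        corr = Xc.T @ yc / n
        print(f"p={p:>6}: max |corr| with pure-noise y = {np.abs(corr).max():.2f}")
    ```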

    Realized volatility: a review

    This paper reviews the exciting and rapidly expanding literature on realized volatility. After presenting a general univariate framework for estimating realized volatilities, a simple discrete time model is presented in order to motivate the main results. A continuous time specification provides the theoretical foundation for the main results in this literature. Cases with and without microstructure noise are considered, and it is shown how microstructure noise can cause severe problems in terms of consistent estimation of the daily realized volatility. Independent and dependent noise processes are examined. The most important methods for providing consistent estimators are presented, and a critical exposition of different techniques is given. The finite sample properties are discussed in comparison with their asymptotic properties. A multivariate model is presented to discuss estimation of the realized covariances. Various issues relating to modelling and forecasting realized volatilities are considered. The main empirical findings using univariate and multivariate methods are summarized.
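    The basic estimator this literature builds on is the daily realized variance, RV = Σᵢ rᵢ², the sum of squared intraday log returns. The sketch below computes it on simulated prices and shows how additive microstructure noise biases it upward; the sparse-sampling variant is a deliberately crude stand-in for the noise-robust estimators the review discusses:

    ```python
    import numpy as np

    def realized_variance(prices):
        """Realized variance: sum of squared intraday log returns,
        RV = sum_i (log p_i - log p_{i-1})^2."""
        r = np.diff(np.log(prices))
        return np.sum(r ** 2)

    def sparse_sampled_rv(prices, k=5):
        """Sample every k-th price before summing, averaged over the k
        possible offsets (a simplified subsampling idea)."""
        return np.mean([realized_variance(prices[j::k]) for j in range(k)])

    # Simulated day: constant true variance plus additive microstructure noise.
    rng = np.random.default_rng(0)
    m = 23_400                       # one observation per second, 6.5h session
    true_iv = 1e-4                   # daily integrated variance
    log_p = np.cumsum(rng.normal(0, np.sqrt(true_iv / m), m))
    noisy = np.exp(log_p + rng.normal(0, 5e-4, m))  # noise biases RV upward

    print("RV (all ticks): ", realized_variance(noisy))
    print("RV (subsampled):", sparse_sampled_rv(noisy, k=300))
    print("true IV:        ", true_iv)
    ```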

    Do not Waste Money on Advertising Spend: Bid Recommendation via Concavity Changes

    In computational advertising, a challenging problem is how to recommend a bid that achieves the best return on investment (ROI) for advertisers under a budget constraint. This paper presents a bid recommendation scenario that discovers concavity changes in click prediction curves. The recommended bid is derived from the turning point from significant increase (i.e., concave downward) to slow increase (i.e., convex upward). A parametric-learning-based method is applied by solving the corresponding constrained optimization problem. Empirical studies on real-world advertising scenarios clearly demonstrate the performance gains on business metrics (including revenue increase, click increase, and advertiser ROI increase).
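    The paper's parametric family and constrained optimization are not spelled out in the abstract; as a hedged illustration of "recommend the bid at the concavity change", one can fit a simple parametric click curve and locate the sign change of its fitted second derivative. The logistic form and all data below are assumptions:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def click_curve(bid, a, b, c):
        # Logistic-style click-vs-bid curve; one simple parametric choice.
        return a / (1.0 + np.exp(-b * (bid - c)))

    # Toy observed data: clicks saturate as the bid grows.
    rng = np.random.default_rng(0)
    bids = np.linspace(0.1, 5.0, 60)
    clicks = click_curve(bids, 1000.0, 2.0, 1.8) + rng.normal(0, 15, bids.size)

    params, _ = curve_fit(click_curve, bids, clicks, p0=(900.0, 1.0, 2.0))

    # Numerically locate where the fitted curve's second derivative
    # changes sign: the candidate "turning point" bid.
    grid = np.linspace(bids.min(), bids.max(), 2_000)
    second = np.gradient(np.gradient(click_curve(grid, *params), grid), grid)
    change = np.where(np.diff(np.sign(second)) != 0)[0]
    print("recommended bid near the concavity change:",
          grid[change[0]] if change.size else None)
    ```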

    Real-time Tactical and Strategic Sales Management for Intelligent Agents Guided By Economic Regimes

    Many enterprises that participate in dynamic markets need to make product pricing and inventory resource utilization decisions in real time. We describe a family of statistical models that address these needs by combining characterization of the economic environment with the ability to predict future economic conditions to make tactical (short-term) decisions, such as product pricing, and strategic (long-term) decisions, such as the level of finished goods inventories. Our models characterize economic conditions, called economic regimes, in the form of recurrent statistical patterns that have clear qualitative interpretations. We show how these models can be used to predict prices, price trends, and the probability of receiving a customer order at a given price. These "regime" models are developed using statistical analysis of historical data and are used in real time to characterize observed market conditions and predict the evolution of market conditions over multiple time scales. We evaluate our models using a testbed derived from the Trading Agent Competition for Supply Chain Management (TAC SCM), a supply chain environment characterized by competitive procurement and sales markets and dynamic pricing. We show how regime models can be used to inform both short-term pricing decisions and long-term resource allocation decisions. Results show that our method outperforms more traditional short- and long-term predictive modeling approaches.
    Keywords: dynamic pricing; trading agent competition; agent-mediated electronic commerce; dynamic markets; economic regimes; enabling technologies; price forecasting; supply-chain
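    The regime models themselves were fit to TAC SCM market data; as a generic stand-in for "recurrent statistical patterns with clear qualitative interpretations", a mixture model over observed prices can recover regimes and yield a regime-weighted price forecast. The three-regime setup and the scikit-learn mixture below are illustrative assumptions:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # Toy price history drawn from three latent regimes
    # (oversupply / balanced / scarcity), standing in for market data.
    prices = np.concatenate([
        rng.normal(0.6, 0.05, 400),   # oversupply: low prices
        rng.normal(0.8, 0.04, 400),   # balanced
        rng.normal(1.1, 0.06, 200),   # scarcity: high prices
    ]).reshape(-1, 1)

    gmm = GaussianMixture(n_components=3, random_state=0).fit(prices)

    # Posterior regime probabilities for the latest observed price,
    # and the regime-weighted expected price.
    latest = np.array([[0.85]])
    posterior = gmm.predict_proba(latest)[0]
    forecast = float(posterior @ gmm.means_.ravel())
    print("regime probabilities:", np.round(posterior, 3))
    print("regime-weighted price forecast:", round(forecast, 3))
    ```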

    Exact and Asymptotic Tests on a Factor Model in Low and Large Dimensions with Applications

    In this paper, we suggest three tests of the validity of a factor model that can be applied to both small-dimensional and large-dimensional data. Both the exact and the asymptotic distributions of the resulting test statistics are derived under classical and high-dimensional asymptotic regimes. It is shown that the critical values of the proposed tests can be calibrated empirically by generating a sample from the inverse Wishart distribution with identity parameter matrix. The powers of the suggested tests are investigated by means of simulations. The results of the simulation study are consistent with the theoretical findings and provide general recommendations about the application of each of the three tests. Finally, the theoretical results are applied to two real data sets, consisting of returns on stocks from the DAX index and from the S&P 500 index. Our empirical results do not support the hypothesis that all linear dependencies between the returns can be entirely captured by the factors considered in the paper.
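    The empirical calibration the abstract mentions, simulating from an inverse Wishart with identity parameter matrix, can be sketched generically. The test statistic below is a placeholder for illustration, not one of the paper's three statistics:

    ```python
    import numpy as np
    from scipy.stats import invwishart

    def calibrate_critical_value(statistic, p, n, alpha=0.05,
                                 reps=5_000, seed=0):
        """Monte Carlo critical value: draw covariance matrices from an
        inverse Wishart with identity scale matrix, evaluate the statistic
        on each draw, and return the (1 - alpha) empirical quantile."""
        draws = invwishart.rvs(df=n, scale=np.eye(p), size=reps,
                               random_state=seed)
        stats = np.array([statistic(S) for S in draws])
        return np.quantile(stats, 1.0 - alpha)

    # Placeholder statistic (NOT one of the paper's tests): Frobenius
    # distance of the trace-normalized draw from the identity matrix.
    def toy_statistic(S):
        p = S.shape[0]
        return np.linalg.norm(S * p / np.trace(S) - np.eye(p))

    print(calibrate_critical_value(toy_statistic, p=10, n=100))
    ```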