
    An Agent Based Market Design Methodology for Combinatorial Auctions

    Auction mechanisms have attracted a great deal of interest and have been used in diverse e-marketplaces. In particular, combinatorial auctions have the potential to play an important role in electronic transactions, and diverse combinatorial auction market types have been proposed to satisfy market needs. These auction types have diverse market characteristics, which call for an effective market design approach. This study proposes a comprehensive and systematic market design methodology for combinatorial auctions based on three phases: market architecture design, auction rule design, and winner determination design. Market architecture design selects the market architecture type by Backward Chain Reasoning. Auction rule design specifies the transaction rules for the auction; the specific auction process type is likewise identified by Backward Chain Reasoning. Winner determination design determines the decision model for selecting the optimal bids and auctioneers; the optimization models are identified by Forward Chain Reasoning. We also propose an agent-based combinatorial auction market design system that uses Backward and Forward Chain Reasoning, and we illustrate the design process for the general n-bilateral combinatorial auction market. This study serves as a guideline for the practical implementation of combinatorial auction market design.
    Keywords: Combinatorial Auction, Market Design Methodology, Market Architecture Design, Auction Rule Design, Winner Determination Design, Agent-Based System
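    A minimal, hypothetical sketch of the winner determination step described above: selecting a revenue-maximizing set of non-overlapping package bids. This brute-force enumeration only illustrates the decision problem, not the optimization model or agent-based system proposed in the paper; the example bids are invented.

        # Brute-force winner determination for a small combinatorial auction.
        # Illustrative only; real markets use integer programming or search.
        from itertools import combinations

        def winner_determination(bids):
            """bids: list of (bidder, frozenset_of_items, price).
            Returns the revenue-maximizing set of item-disjoint package bids."""
            best_value, best_set = 0.0, []
            for r in range(1, len(bids) + 1):
                for combo in combinations(bids, r):
                    bundles = [b[1] for b in combo]
                    # feasible only if no item is sold twice
                    if sum(len(s) for s in bundles) == len(frozenset().union(*bundles)):
                        value = sum(b[2] for b in combo)
                        if value > best_value:
                            best_value, best_set = value, list(combo)
            return best_value, best_set

        bids = [("A", frozenset({"x", "y"}), 10.0),
                ("B", frozenset({"y", "z"}), 8.0),
                ("C", frozenset({"z"}), 5.0)]
        print(winner_determination(bids))  # accepts A and C for a total of 15.0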

    Yardstick Based Procurement Design In Natural Resource Management

    This paper discusses the design of a multidimensional yardstick-based procurement auction. The suggested design combines Data Envelopment Analysis (DEA) based yardstick schemes with the multidimensional score auction. The principal selects a single winner to perform a project characterized by a multidimensional vector. The design is especially useful when there is uncertainty about the underlying common cost structure as well as about the principal's valuation function. Potential applications in natural resource management are provided.
    Keywords: Resource/Energy Economics and Policy
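    A hypothetical sketch of the selection rule in a multidimensional score auction, where the principal ranks bids by its valuation of the quality vector minus the price. The linear weights and bids below are invented, and the paper's DEA-based yardstick component is not modelled here.

        # Score auction selection: highest (valuation of quality vector - price) wins.
        def score_auction(bids, weights):
            """bids: list of (bidder, quality_vector, price); weights define a
            simple linear valuation of the quality dimensions (an assumption)."""
            def score(bid):
                _, quality, price = bid
                return sum(w * q for w, q in zip(weights, quality)) - price
            return max(bids, key=score)

        bids = [("firm1", (0.8, 0.6), 100.0),
                ("firm2", (0.9, 0.7), 120.0)]
        print(score_auction(bids, weights=(100.0, 50.0)))  # firm1 wins with score 10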

    Model Selection in an Information Economy: Choosing What to Learn

    As online markets for the exchange of goods and services become more common, the study of markets composed at least in part of autonomous agents has taken on increasing importance. In contrast to traditional complete-information economic scenarios, agents operating in an electronic marketplace often do so under considerable uncertainty. In order to reduce their uncertainty, these agents must learn about the world around them. When an agent producer is engaged in a learning task in which data collection is costly, such as learning the preferences of a consumer population, it faces a classic decision problem: when to explore and when to exploit. If the agent has a limited number of chances to experiment, it must explicitly weigh the cost of learning (in terms of foregone profit) against the value of the information acquired. Information goods add an additional dimension to this problem; due to their flexibility, they can be bundled and priced according to a number of different price schedules. An optimizing producer should consider the profit each price schedule can extract, as well as the difficulty of learning that schedule. In this paper, we demonstrate the tradeoff between complexity and profitability for a number of common price schedules. We begin with a one-shot decision about which schedule to learn. Schedules with moderate complexity are preferred in the short and medium term, as they are learned quickly yet extract a significant fraction of the available profit. We then turn to the repeated version of this one-shot decision and show that moderate-complexity schedules, in particular the two-part tariff, perform well when the producer must adapt to nonstationarity in the consumer population. When a producer can dynamically change schedules as it learns, it can use an explicit decision-theoretic formulation to greedily select the schedule that appears to yield the greatest profit in the next period. By explicitly considering both the learnability of and the profit extracted by different price schedules, a producer can extract more profit as it learns than if it naively chose models that are accurate once learned.
    Keywords: Online learning; information economics; model selection; direct search
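    To make the complexity/profitability tradeoff concrete, the sketch below compares the profit extracted by a simple linear price against a two-part tariff (fixed fee plus per-unit price) for a single consumer with linear demand and zero marginal cost. The demand parameters are invented and this is not the paper's simulation; it only illustrates why a more complex schedule can extract more surplus once its extra parameters are learned.

        # Profit from a linear price versus a two-part tariff, for demand q(p) = a - b*p
        # with zero marginal cost. Parameters a, b are assumptions for illustration.
        def linear_price_profit(a, b):
            p = a / (2 * b)              # revenue p*(a - b*p) is maximized here
            return p * (a - b * p)

        def two_part_tariff_profit(a, b):
            # price per unit at marginal cost (0) and charge the whole consumer
            # surplus (area under the demand curve) as the fixed fee
            return 0.5 * a * (a / b)

        a, b = 10.0, 2.0
        print(linear_price_profit(a, b))     # 12.5
        print(two_part_tariff_profit(a, b))  # 25.0 -- more profit, more parameters to learn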

    Budget Feasible Mechanism Design: From Prior-Free to Bayesian

    Budget feasible mechanism design studies procurement combinatorial auctions where the sellers have private costs to produce items and the buyer (auctioneer) aims to maximize a social valuation function on subsets of items, under a budget constraint on the total payment. One of the most important questions in the field is: "which valuation domains admit truthful budget feasible mechanisms with `small' approximations (compared to the social optimum)?" Singer showed that additive and submodular functions have such constant approximations. Recently, Dobzinski, Papadimitriou, and Singer gave an O(log^2 n)-approximation mechanism for subadditive functions; they also remarked: "A fundamental question is whether, regardless of computational constraints, a constant-factor budget feasible mechanism exists for subadditive functions." We address this question from two viewpoints: prior-free worst-case analysis and Bayesian analysis. For the prior-free framework, we use an LP that describes the fractional cover of the valuation function; it is also connected to the concept of the approximate core in cooperative game theory. We provide an O(I)-approximation mechanism for subadditive functions, where I is the worst-case integrality gap of this LP. This implies an O(log n)-approximation for subadditive valuations, an O(1)-approximation for XOS valuations, and a constant approximation for any valuation with constant I. XOS valuations are an important class of functions that lie between the submodular and subadditive classes. We also give a polynomial-time, sub-logarithmic O(log n / log log n)-approximation mechanism for subadditive valuations. For the Bayesian framework, we provide a constant-approximation mechanism for all subadditive functions, using the above prior-free mechanism for XOS valuations as a subroutine. Our mechanism allows correlations in the distribution of private information and is universally truthful.
    Comment: to appear in STOC 201
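    The sketch below is a heavily simplified, greedy proportional-share style allocation for additive buyer valuations, in the spirit of the constant-approximation results mentioned above. It is an assumption-laden illustration only: threshold payments, truthfulness, and the approximation analysis are omitted, and the stopping condition is illustrative rather than the mechanism from the paper.

        # Greedy, budget-feasible-style allocation for additive buyer valuations.
        # Illustrative sketch; payments and guarantees are not implemented.
        def greedy_budget_allocation(sellers, budget):
            """sellers: list of (name, value_to_buyer, private_cost)."""
            # consider sellers in increasing order of cost per unit of value
            ordered = sorted(sellers, key=lambda s: s[2] / s[1])
            chosen, total_value = [], 0.0
            for name, value, cost in ordered:
                # accept only if the cost stays within the seller's proportional
                # share of the budget (an illustrative budget-feasibility check)
                if cost <= budget * value / (total_value + value):
                    chosen.append(name)
                    total_value += value
            return chosen

        print(greedy_budget_allocation([("s1", 10.0, 2.0),
                                        ("s2", 6.0, 3.0),
                                        ("s3", 4.0, 5.0)], budget=6.0))  # ['s1']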

    Building an End-To-End Security Infrastructure for Web-Based Aerospace Components E-Trading

    This research paper focuses on the development of a generic framework and architecture for building an integrated end-to-end security infrastructure and closed-loop solution to secure e-commerce and m-commerce. As an integral component, an intelligent decision support mechanism is developed to help systems designers and managers make architectural, design, implementation, and deployment decisions on employing particular security solutions for issues and requirements arising in various e-commerce and m-commerce scenarios. In addition, this research identifies the key features, options, and benefits of several security technologies and provides guidelines for managing the costs and complexities involved in deploying those security solutions. As important groundwork for building a prototype based on the proposed research work, a study has been conducted to investigate the current B2B e-commerce operations between Pratt & Whitney (P&W) [15] (a division of United Technologies Corporation (UTC) [17]) and its partnering e-business and supply chain players in the aviation industry.

    The SEEMP Approach to Semantic Interoperability for E-Employment

    SEEMP is a European project that promotes increased partnership between labour market actors and the development of closer relations between private and public employment services, making optimal use of the various actors’ specific characteristics and thus providing job-seekers and employers with better services. The need for flexible collaboration gives rise to the issue of interoperability in both data exchange and the sharing of services. SEEMP proposes a solution that relies on the concepts of services and semantics in order to provide meaningful service-based communication among labour market actors while requiring only a minimal shared commitment.

    Statistical Arbitrage Mining for Display Advertising

    We study and formulate arbitrage in display advertising. Real-Time Bidding (RTB) mimics stock spot exchanges and utilises computers to algorithmically buy display ads per impression via a real-time auction. Despite the new automation, the ad markets are still informationally inefficient due to the heavily fragmented marketplaces. Two display impressions with similar or identical effectiveness (e.g., measured by conversion or click-through rates for a targeted audience) may sell for quite different prices at different market segments or under different pricing schemes. In this paper, we propose a novel data mining paradigm called Statistical Arbitrage Mining (SAM), focusing on mining and exploiting price discrepancies between two pricing schemes. In essence, our SAMer is a meta-bidder that hedges advertisers' risk between CPA (cost per action)-based campaigns and CPM (cost per mille impressions)-based ad inventories; it statistically assesses the potential profit and cost of an incoming CPM bid request against a portfolio of CPA campaigns, based on the estimated conversion rate, bid landscape, and other statistics learned from historical data. In SAM, (i) functional optimisation is used to seek the optimal bidding that maximises the expected arbitrage net profit, and (ii) a portfolio-based risk management solution is leveraged to reallocate bid volume and budget across the set of campaigns to trade off risk and return. We propose to jointly optimise both components in an EM fashion with high efficiency, helping the meta-bidder catch the transient statistical arbitrage opportunities in RTB. Both offline experiments on a real-world large-scale dataset and online A/B tests on a commercial platform demonstrate the effectiveness of our proposed solution in exploiting arbitrage in various model settings and market environments.
    Comment: In the proceedings of the 21st ACM SIGKDD international conference on Knowledge discovery and data mining (KDD 2015
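    The core arbitrage quantity described above can be illustrated with a one-line expected-profit calculation: the CPA payout times the estimated conversion rate, minus the per-impression cost implied by the CPM price. The function and numbers below are assumptions for illustration, not the SAM optimisation or risk model from the paper.

        # Expected net profit of buying one CPM-priced impression and monetising it
        # against a CPA campaign; bid only when this margin is positive.
        def expected_arbitrage_profit(cpa_payout, est_conversion_rate, cpm_price):
            expected_revenue = cpa_payout * est_conversion_rate
            impression_cost = cpm_price / 1000.0  # CPM is priced per 1000 impressions
            return expected_revenue - impression_cost

        print(expected_arbitrage_profit(cpa_payout=5.0,
                                        est_conversion_rate=0.002,
                                        cpm_price=4.0))  # 0.006 per impression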