100 research outputs found

    Clash of the Titans: Regulating the Competition Between Established and Emerging Electronic Payment Systems

    This article equates the providers of traditional electronic payment services with the Titans of Greek mythology, and the providers of new electronic payment technologies with the Olympians. Professor Winn concludes, however, that unlike the Titans of myth, these modern Titans appear to be winning their battle with the upstart Olympians. The article describes the fundamental characteristics of payment systems, reviews the applicable law, and surveys the new technologies that were, until quite recently, expected to displace older electronic payment systems. Professor Winn finds that consumers and merchants are, by and large, happy with the existing regulatory structure, and that because the new technologies have so far failed to gain significant market share, regulators have not yet been obliged to revise existing regulations to take account of them.

    Adaptive-Aggressive Traders Don't Dominate

    For more than a decade, Vytelingum's Adaptive-Aggressive (AA) algorithm has been recognized as the best-performing automated auction-market trading-agent strategy known in the AI/Agents literature; in this paper we demonstrate that it is in fact routinely outperformed by another algorithm when exhaustively tested across a sufficiently wide range of market scenarios. The novel step taken here is to use large-scale compute facilities to brute-force exhaustively evaluate AA in a variety of market environments based on those used for testing it in the original publications. Our results show that even in these simple environments AA is consistently outperformed by IBM's GDX algorithm, first published in 2002. We summarize here results from more than one million market-simulation experiments, orders of magnitude more testing than was reported in the publications that first introduced AA. A 2019 ICAART paper by Cliff claimed that AA's failings were revealed by testing it in more realistic experiments, with conditions closer to those found in real financial markets, but here we demonstrate that even under the simple experimental conditions used in the original AA papers, exhaustive testing shows AA to be outperformed by GDX. We close with a discussion of the methodological implications of our work: results from previous papers in which any one trading algorithm is claimed to be superior to others on the basis of only a few thousand trials are probably best treated with some suspicion. The rise of cloud computing means that the compute power needed to subject trading algorithms to millions of trials over a wide range of conditions is readily available at reasonable cost; we should make use of it, and exhaustive testing such as that shown here should become the norm in future evaluations and comparisons of new trading algorithms.
    Comment: To be published as a chapter in "Agents and Artificial Intelligence", edited by Jaap van den Herik, Ana Paula Rocha, and Luc Steels; forthcoming 2019/2020. 24 pages, 1 figure, 7 tables.
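    A minimal sketch of the kind of brute-force pairwise evaluation the paper argues for is given below. The strategy functions, the scenario grid, and the trial counts are hypothetical placeholders (the real study uses full AA and GDX trader implementations inside an auction-market simulator); the sketch only shows the shape of an exhaustive comparison across many market environments.

    import itertools
    import random
    import statistics

    # Hypothetical placeholders: in the real study these would be full AA and GDX
    # trading agents running inside an auction-market simulator, not random stand-ins.
    def aa_session_profit(scenario, rng):
        """Profit of one (placeholder) AA trader over a single market session."""
        return rng.gauss(1.00, 0.3)

    def gdx_session_profit(scenario, rng):
        """Profit of one (placeholder) GDX trader over a single market session."""
        return rng.gauss(1.02, 0.3)

    # A made-up scenario grid standing in for the "wide range of market environments":
    # supply/demand schedule shape, trader-population mix, and session length.
    SCENARIOS = list(itertools.product(
        ["flat", "step", "shock"],
        [0.25, 0.50, 0.75],
        [100, 500],
    ))

    TRIALS_PER_SCENARIO = 1_000  # scaled down; the paper reports over a million sessions

    def compare(profit_a, profit_b, seed=0):
        """Record, per scenario, whether strategy A's mean profit beats strategy B's."""
        rng = random.Random(seed)
        wins = {}
        for scenario in SCENARIOS:
            a = [profit_a(scenario, rng) for _ in range(TRIALS_PER_SCENARIO)]
            b = [profit_b(scenario, rng) for _ in range(TRIALS_PER_SCENARIO)]
            wins[scenario] = statistics.mean(a) > statistics.mean(b)
        return wins

    if __name__ == "__main__":
        results = compare(aa_session_profit, gdx_session_profit)
        losses = sum(1 for a_won in results.values() if not a_won)
        print(f"placeholder 'AA' loses in {losses} of {len(results)} scenarios")

    Scaling TRIALS_PER_SCENARIO and the scenario grid up is what turns a few thousand trials into the million-plus sessions the paper reports.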

    Stopping Science: The Case of Cryptography

    Benchmarking purely functional data structures.

    When someone designs a new data structure, they want to know how well it performs. Previously, the only way to find out was to find, code, and test some applications to act as benchmarks. This can be tedious and time-consuming. Worse, how a benchmark uses a data structure may considerably affect the data structure's efficiency, so the choice of benchmarks may bias the results. For these reasons, new data structures developed for functional languages often pay little attention to empirical performance. We solve these problems by developing a benchmarking tool, Auburn, that can generate benchmarks across a fair distribution of uses. We precisely define "the use of a data structure", upon which we build the core algorithms of Auburn: how to generate a benchmark from a description of use, and how to extract a description of use from an application. We consider how best to use these algorithms to benchmark competing data structures. Finally, we test Auburn by benchmarking …
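    Auburn's core idea, generating a benchmark from a description of how a data structure is used rather than from hand-picked applications, can be illustrated with a small sketch. The usage profile, the two toy queue implementations, and the replay loop below are illustrative assumptions written in Python; they are not Auburn's actual Haskell tooling.

    import random
    import time

    # A "description of use": relative frequencies of the operations applied to the
    # data structure. Auburn derives such profiles from applications and generates
    # benchmarks from them; the weights here are a made-up example.
    USAGE_PROFILE = {"push": 0.55, "pop": 0.45}

    def generate_benchmark(profile, n_ops, seed=0):
        """Generate a random operation sequence whose mix matches the usage profile."""
        rng = random.Random(seed)
        ops, weights = zip(*profile.items())
        return rng.choices(ops, weights=weights, k=n_ops)

    # Two toy persistent queues to compare: a copy-on-write list and a two-list queue.
    def naive_push(q, x):
        return q + [x]

    def naive_pop(q):
        return q[1:]

    def banker_push(q, x):
        front, back = q
        return (front, [x] + back)

    def banker_pop(q):
        front, back = q
        if not front:
            front, back = list(reversed(back)), []
        return (front[1:], back)

    def replay(ops, push, pop, empty):
        """Replay one generated benchmark against one implementation and time it."""
        q, size = empty, 0
        start = time.perf_counter()
        for op in ops:
            if op == "push":
                q, size = push(q, size), size + 1
            elif size > 0:
                q, size = pop(q), size - 1
        return time.perf_counter() - start

    if __name__ == "__main__":
        ops = generate_benchmark(USAGE_PROFILE, n_ops=20_000)
        print("copy-on-write list:", replay(ops, naive_push, naive_pop, []))
        print("two-list queue    :", replay(ops, banker_push, banker_pop, ([], [])))

    Because the same generated operation sequence is replayed against both implementations, any timing difference reflects the data structures themselves rather than the choice of benchmark application.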