
    Studies on heterotrophic nitrification in a lake. [Translation from: Z. allg. Mikrobiol. 12, 567-574, 1973.]

    In a lake, nitrogen compounds undergo regular cycling in which nitrate is reduced and ammonium is oxidised. Since a nitrate maximum regularly forms in the upper part of the hypolimnion of a lake stratified in summer, the authors have focused on the oxidising side of the nitrogen cycle. Partial results on nitrification in the Plusssee are described here. The Plusssee was chosen because it is almost entirely without inflows and, lying in a wooded basin, is well protected from the wind and therefore stably stratified. To determine the number of autotrophic nitrifiers, the distribution of Nitrosomonas and Nitrobacter spores in the lake was analysed. From the estimates used to determine spore numbers of the heterotrophic nitrifiers, 14 species were isolated in pure culture and examined from morphological, biochemical and taxonomic viewpoints.

    Rational bidding using reinforcement learning: an application in automated resource allocation

    The application of autonomous agents to the provisioning and usage of computational resources is an attractive research field. Various methods and technologies from artificial intelligence, statistics and economics are combined to achieve (i) autonomic provisioning and usage of computational resources, (ii) competitive bidding strategies for widely used market mechanisms, and (iii) incentives for consumers and providers to use such market-based systems. The contributions of the paper are threefold. First, we present a framework for supporting consumers and providers in technical and economic preference elicitation and in the generation of bids. Second, we introduce a consumer-side reinforcement learning bidding strategy which enables rational behavior through the generation and selection of bids. Third, we evaluate and compare this bidding strategy against a truth-telling bidding strategy for two kinds of market mechanism: one centralized and one decentralized.
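    The abstract does not spell out the learning rule, so the sketch below is only a generic illustration of how a consumer-side reinforcement-learning bidding strategy can be framed: an epsilon-greedy learner over discrete bid levels that updates a payoff estimate for each bid from auction feedback. The class name, bid levels, and toy market are invented for the example and are not taken from the paper.

```python
import random

class BidLearner:
    """Hypothetical epsilon-greedy learner over discrete bid levels.

    A generic illustration of consumer-side RL bidding, not the strategy
    described in the paper.
    """

    def __init__(self, bid_levels, epsilon=0.1, alpha=0.2):
        self.bid_levels = list(bid_levels)               # candidate bid prices
        self.epsilon = epsilon                           # exploration probability
        self.alpha = alpha                               # learning rate
        self.value = {b: 0.0 for b in self.bid_levels}   # running payoff estimates

    def choose_bid(self):
        # Explore occasionally, otherwise exploit the best-valued bid so far.
        if random.random() < self.epsilon:
            return random.choice(self.bid_levels)
        return max(self.bid_levels, key=lambda b: self.value[b])

    def update(self, bid, payoff):
        # Move the estimate for this bid towards the observed payoff.
        self.value[bid] += self.alpha * (payoff - self.value[bid])

# Toy usage: the consumer values the resource at 10; a bid wins whenever it
# exceeds a fluctuating market price, and the payoff is valuation minus bid.
learner = BidLearner(bid_levels=range(1, 11))
for _ in range(1000):
    bid = learner.choose_bid()
    market_price = random.uniform(3, 8)
    payoff = (10 - bid) if bid >= market_price else 0.0
    learner.update(bid, payoff)
```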

    The Strategic Exploitation of Limited Information and Opportunity in Networked Markets

    This paper studies the effect of constraining interactions within a market. A model is analysed in which boundedly rational agents trade with and gather information from their neighbours within a trade network. It is demonstrated that a trader's ability to profit and to identify the equilibrium price is positively correlated with its degree of connectivity within the market. Where traders differ in their number of potential trading partners, well-connected traders are found to benefit from aggressive trading behaviour. Where information propagation is constrained by the topology of the trade network, connectedness affects the nature of the strategies employed.
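    The model itself is only summarised in the abstract; purely to illustrate how a trade network can constrain both who trades with whom and how price information spreads, the sketch below uses an invented ring-plus-shortcut network and a toy bargaining rule, and lets each agent revise its price estimate only through deals with directly connected neighbours.

```python
import random

# Hypothetical ring-plus-shortcut trade network: each agent may only trade
# with, and observe, the agents it is directly connected to.
num_agents = 10
neighbours = {i: {(i - 1) % num_agents, (i + 1) % num_agents} for i in range(num_agents)}
neighbours[0].add(5)   # one extra edge makes agents 0 and 5 better connected
neighbours[5].add(0)

# Each agent starts with a private guess of the market price.
estimate = {i: random.uniform(50, 150) for i in range(num_agents)}

for _ in range(20):
    i = random.randrange(num_agents)
    j = random.choice(sorted(neighbours[i]))          # partner must be a neighbour
    deal_price = (estimate[i] + estimate[j]) / 2      # toy bargaining outcome
    # Information travels only along network edges: just the two parties to
    # the deal see the price and adjust their estimates towards it.
    for k in (i, j):
        estimate[k] += 0.5 * (deal_price - estimate[k])
```

    In a setup like this, an agent's degree determines how many distinct trading partners and price signals it can draw on, which is the connectivity measure the paper relates to profitability and price discovery.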

    Q-Strategy: A Bidding Strategy for Market-Based Allocation of Grid Services

    The application of autonomous agents to the provisioning and usage of computational services is an attractive research field. Various methods and technologies from artificial intelligence, statistics and economics are combined to achieve (i) autonomic provisioning and usage of Grid services, (ii) competitive bidding strategies for widely used market mechanisms, and (iii) incentives for consumers and providers to use such market-based systems. The contributions of the paper are threefold. First, we present a bidding agent framework for implementing artificial bidding agents, supporting consumers and providers in technical and economic preference elicitation as well as automated bid generation when requesting and provisioning Grid services. Second, we introduce a novel consumer-side bidding strategy which enables goal-oriented and strategic behavior through the generation and submission of consumer service requests and the selection of provider offers. Third, we evaluate and compare the Q-strategy, implemented within the presented framework, against the Truth-Telling bidding strategy in three mechanisms: a centralized CDA, a decentralized on-line machine scheduling mechanism, and a FIFO-scheduling mechanism.
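    The abstract names the Q-strategy but not its update rule; as a hedged sketch of the general technique such a strategy could build on, the snippet below shows a standard tabular Q-learning update keyed by a coarse job state and a discrete bid. The state features, reward, and parameter values are assumptions for illustration, not details from the paper.

```python
from collections import defaultdict
import random

# Hypothetical tabular Q-learning over (state, bid) pairs, as one generic way
# a market bidding strategy can be made state-dependent.
ALPHA, GAMMA, EPSILON = 0.2, 0.9, 0.1
BIDS = [2, 4, 6, 8, 10]
Q = defaultdict(float)                      # Q[(state, bid)] -> value estimate

def choose_bid(state):
    # Epsilon-greedy selection among the discrete bid levels.
    if random.random() < EPSILON:
        return random.choice(BIDS)
    return max(BIDS, key=lambda b: Q[(state, b)])

def update(state, bid, reward, next_state):
    # One-step Q-learning backup towards reward plus discounted best next value.
    best_next = max(Q[(next_state, b)] for b in BIDS)
    Q[(state, bid)] += ALPHA * (reward + GAMMA * best_next - Q[(state, bid)])

# Toy episodes: the state is just "how urgent the job is" (0 = relaxed, 1 = urgent).
state = 0
for _ in range(500):
    bid = choose_bid(state)
    market_price = random.uniform(3, 9)
    won = bid >= market_price
    reward = (12 - bid) if won else (-1 if state == 1 else 0)
    next_state = random.choice([0, 1])
    update(state, bid, reward, next_state)
    state = next_state
```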

    Implied cost of capital investment strategies - evidence from international stock markets

    Investors can generate excess returns by implementing trading strategies based on publicly available equity analyst forecasts. This paper captures the information provided by analysts through the implied cost of capital (ICC), the internal rate of return that equates a firm's share price to the present value of analysts' earnings forecasts. We find that U.S. stocks with a high ICC outperform low-ICC stocks on average by 6.0% per year. This spread remains significant when controlling the investment returns for their risk exposure as proxied by standard pricing models. Further analysis across the world's largest equity markets validates these results.
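    The ICC is defined above as the discount rate that equates the share price to the present value of forecast earnings; the minimal sketch below just solves that definition numerically by bisection, using a finite forecast horizon with an assumed constant-growth terminal value. The horizon, growth rate, and valuation formula are illustrative simplifications rather than the paper's exact specification.

```python
def present_value(earnings, rate, terminal_growth=0.02):
    """PV of a finite stream of forecast earnings plus a growing-perpetuity
    terminal value (a simplified, Gordon-growth-style assumption)."""
    pv = sum(e / (1 + rate) ** t for t, e in enumerate(earnings, start=1))
    terminal = earnings[-1] * (1 + terminal_growth) / (rate - terminal_growth)
    return pv + terminal / (1 + rate) ** len(earnings)

def implied_cost_of_capital(price, earnings, lo=0.03, hi=0.50, tol=1e-6):
    """Bisection search for the rate that equates PV of forecasts to price."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if present_value(earnings, mid) > price:
            lo = mid           # PV still too high -> the discount rate must rise
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Toy example: share price 100, analyst earnings forecasts for three years.
print(implied_cost_of_capital(100.0, [6.0, 6.5, 7.0]))
```

    Bisection works here because, over the search interval, the present value is monotonically decreasing in the discount rate, so the rate that matches the observed price is unique.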

    Adaptive-Aggressive Traders Don't Dominate

    For more than a decade Vytelingum's Adaptive-Aggressive (AA) algorithm has been recognized as the best-performing automated auction-market trading-agent strategy known in the AI/agents literature. In this paper we demonstrate that it is in fact routinely outperformed by another algorithm when exhaustively tested across a sufficiently wide range of market scenarios. The novel step taken here is to use large-scale compute facilities to evaluate AA exhaustively, by brute force, in a variety of market environments based on those used for testing it in the original publications. Our results show that even in these simple environments AA is consistently outperformed by IBM's GDX algorithm, first published in 2002. We summarize here results from more than one million market simulation experiments, orders of magnitude more testing than was reported in the publications that first introduced AA. A 2019 ICAART paper by Cliff claimed that AA's failings were revealed by testing it in more realistic experiments, with conditions closer to those found in real financial markets; here we demonstrate that even under the simple experimental conditions used in the original AA papers, exhaustive testing shows AA to be outperformed by GDX. We close with a discussion of the methodological implications of our work: any result from a previous paper in which one trading algorithm is claimed to be superior to others on the basis of only a few thousand trials is probably best treated with some suspicion. The rise of cloud computing means that the compute power needed to subject trading algorithms to millions of trials over a wide range of conditions is readily available at reasonable cost; we should make use of it, and exhaustive testing such as is shown here should become the norm in future evaluations and comparisons of new trading algorithms.

    Comment: To be published as a chapter in "Agents and Artificial Intelligence", edited by Jaap van den Herik, Ana Paula Rocha, and Luc Steels; forthcoming 2019/2020. 24 pages, 1 figure, 7 tables.
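    The methodological point is that claims about relative trader performance need very large numbers of trials across many market configurations. The skeleton below shows the shape of such a brute-force sweep; the market session is left as a stub because the actual simulator, parameter ranges, and trading strategies are specific to the paper and not reproduced here.

```python
import itertools
import random
import statistics

def run_market_session(strategy_a, strategy_b, supply, demand, seed):
    """Stub for one market session; a real study would plug in a full
    auction-market simulator (e.g. a CDA) and return each side's profit."""
    rng = random.Random(seed)
    return rng.random(), rng.random()   # placeholder profits for A and B

# Exhaustive sweep: every combination of (illustrative) market parameters,
# each repeated many times with different random seeds.
supply_schedules = ["flat", "steep", "shuffled"]
demand_schedules = ["flat", "steep", "shuffled"]
trials_per_cell = 1000

results = {}
for supply, demand in itertools.product(supply_schedules, demand_schedules):
    diffs = []
    for seed in range(trials_per_cell):
        profit_a, profit_b = run_market_session("AA", "GDX", supply, demand, seed)
        diffs.append(profit_a - profit_b)
    # One summary number per condition; a real study would report distributions.
    results[(supply, demand)] = statistics.mean(diffs)

print(results)
```

    The point is not the stub itself but the outer structure: every parameter combination crossed with many seeded repetitions, so that performance claims rest on distributions of outcomes rather than a handful of runs.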