    Memory effects can make the transmission capability of a communication channel uncomputable

    Most communication channels are subject to noise. One of the goals of Information Theory is to add redundancy to the transmission of information so that the information is transmitted reliably and the amount of information transmitted through the channel is as large as possible. The maximum rate at which reliable transmission is possible is called the capacity. If the channel keeps no memory of its past, the capacity is given by a simple optimization problem and can be computed efficiently. The situation for channels with memory is less clear. Here we show that for channels with memory the capacity cannot be computed to within precision 1/5. Our result holds even if we consider one of the simplest families of such channels (information-stable finite state machine channels), restrict the input and output of the channel to 4 bits and 1 bit respectively, and allow 6 bits of memory.
    Comment: Improved presentation and clarified claim
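
    For the memoryless case mentioned above, the "simple optimization problem" is classically solved by the Blahut-Arimoto iteration. A minimal sketch follows (the function name and the example channel are illustrative, not from the paper):

    import numpy as np

    def blahut_arimoto(W, tol=1e-9, max_iter=10000):
        # Capacity (in bits) of a discrete memoryless channel.
        # W[x, y] = P(y | x); each row of W sums to 1.
        n_in = W.shape[0]
        p = np.full(n_in, 1.0 / n_in)  # start from the uniform input distribution
        for _ in range(max_iter):
            # posterior over inputs given each output: q[x, y] ∝ p[x] * W[x, y]
            q = p[:, None] * W
            q /= q.sum(axis=0, keepdims=True)
            # multiplicative update of the input distribution
            r = np.exp(np.sum(W * np.log(q + 1e-300), axis=1))
            p_new = r / r.sum()
            if np.max(np.abs(p_new - p)) < tol:
                p = p_new
                break
            p = p_new
        q = p[:, None] * W
        q /= q.sum(axis=0, keepdims=True)
        # the mutual information I(X;Y) at the fixed point is the capacity
        return np.sum(p[:, None] * W * np.log2(q / p[:, None] + 1e-300))

    # binary symmetric channel with crossover 0.1: C = 1 - H(0.1) ≈ 0.531 bits
    print(blahut_arimoto(np.array([[0.9, 0.1], [0.1, 0.9]])))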

    Automatic supervised information extraction of structured web data

    The overall purpose of this project is, in short, to create a system able to extract vital information from product web pages just as a human would: information like the name of the product, its description, price tag, the company that produces it, and so on. At first glance this may not seem extraordinary or technically difficult, since web scraping techniques have existed for a long time (like the Python library Beautiful Soup, an HTML parser released in 2004). But let us consider for a second what it actually means to be able to extract the desired information from any given web source: the way information is displayed can be extremely varied, not only visually but also semantically. For instance, some hotel booking web pages display all prices for the different room types at once, while websites like Amazon present medium-sized consumer products by offering the main product in detail and then smaller product recommendations further down the page, the latter being the way most retail companies prefer to display their assets. And each site comes with its own styling and search engines. With the above said, the task of mining valuable data from the web no longer sounds as easy as it first seemed. Hence the purpose of this project is to shine some light on the Automatic Supervised Information Extraction of Structured Web Data problem.
    It is important to ask whether developing such a solution is really valuable at all: an endeavour of this size, in both time and computing resources, should at least on paper lead to a useful end result to justify it. The opinion of this author is that it does lead to a potentially valuable result. The targeted extraction of publicly available, consumer-oriented content at large scale, in an accurate, reliable and future-proof manner, could provide an incredibly useful and large amount of data. This data, if kept updated, could create endless opportunities for Business Intelligence, although exactly which ones is beyond the scope of this work. A simple metaphor explains the potential value of this work: if an oil company were told where all the oil reserves on the planet are, it would still need to invest in machinery, workers and time to successfully exploit them, but half of the job would already have been done.
    As the reader will see in this work, the issue is tackled by building a somewhat complex architecture that ends in an Artificial Neural Network. A quick overview of that architecture is as follows: first, find the URLs that lead to the product pages containing the desired data inside a given site (like URLs that lead to "action figure" products inside the site ebay.com); second, for each URL, extract its HTML and take a screenshot of the page, and store this data in a suitable and scalable fashion; third, label the data that will be fed to the NN; fourth, prepare the aforementioned data to be input into the NN; fifth, train the NN; and sixth, deploy the NN to make [hopefully accurate] predictions.
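
    A minimal sketch of how the six steps above could be wired together (all names, data shapes and stub implementations are illustrative placeholders, not the project's actual code):

    from dataclasses import dataclass

    @dataclass
    class PageRecord:
        url: str
        html: str                  # raw HTML of the product page
        screenshot: bytes          # rendered screenshot of the page
        label: dict | None = None  # ground-truth fields (name, price, ...)

    def discover_product_urls(site: str) -> list[str]:
        # step 1: crawl the site for product-page URLs (stubbed here)
        return [f"https://{site}/item/{i}" for i in range(3)]

    def fetch(url: str) -> PageRecord:
        # step 2: in the real system this downloads the HTML and renders a
        # screenshot (e.g. with a headless browser); stubbed for brevity
        return PageRecord(url=url, html="<html>...</html>", screenshot=b"")

    def label(record: PageRecord) -> PageRecord:
        # step 3: attach human-provided labels for supervised training
        record.label = {"name": "?", "price": "?"}
        return record

    def to_features(record: PageRecord):
        # step 4: turn HTML + screenshot into model inputs (toy feature here)
        return [len(record.html)], record.label

    def train(dataset):
        # step 5: fit the neural network; a constant predictor stands in
        return lambda features: {"name": "predicted", "price": "0.00"}

    def main():
        records = [label(fetch(u)) for u in discover_product_urls("example.com")]
        model = train([to_features(r) for r in records])
        # step 6: deploy the model on an unseen page
        print(model(to_features(fetch("https://example.com/item/99"))[0]))

    main()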

    In whose backyard? A generalized bidding approach

    We analyze situations in which a group of agents (and possibly a designer) have to reach a decision that will affect all of them. Examples of such scenarios are the location of a nuclear reactor or the siting of a major sporting event. To address the problem of reaching a decision, we propose a one-stage multi-bidding mechanism in which agents compete for the project by submitting bids. All Nash equilibria of this mechanism are efficient. Moreover, the payoffs attained in equilibrium by the agents satisfy intuitively appealing lower bounds.
    Keywords: externalities, bidding, implementation
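
    A minimal sketch of a multi-bidding outcome rule of this flavour (the zero-sum bid constraint, the tie-breaking and the equal rebate of the aggregate bid are assumptions made for illustration; the paper defines the exact mechanism):

    import numpy as np

    def multibidding_outcome(bids):
        # bids[i, k] = bid of agent i for alternative k; each agent's
        # bids are assumed to sum to zero across alternatives
        assert np.allclose(bids.sum(axis=1), 0.0)
        aggregate = bids.sum(axis=0)
        winner = int(np.argmax(aggregate))  # ties broken by lowest index here
        n = bids.shape[0]
        # each agent pays its bid on the chosen alternative, and the
        # aggregate bid collected is rebated equally among the agents
        transfers = -bids[:, winner] + aggregate[winner] / n
        return winner, transfers

    # three agents deciding between two sites for the facility
    bids = np.array([[ 2.0, -2.0],
                     [-1.0,  1.0],
                     [ 3.0, -3.0]])
    print(multibidding_outcome(bids))  # site 0 is chosen; transfers sum to zero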

    Implementation of the Ordinal Shapley Value for a three-agent economy

    We propose a simple mechanism that implements the Ordinal Shapley Value (PĂŠrez-Castrillo and Wettstein, 2005) for economies with three or fewer agents.
    Keywords: Ordinal Shapley Value, implementation, mechanism design

    An Ordinal Shapley Value for Economic Environments

    We propose a new solution concept to address the problem of sharing a surplus among the agents generating it. The sharing problem is formulated in the preferences-endowments space. The solution is defined in a recursive manner, incorporating notions of consistency and fairness and relying on properties satisfied by the Shapley value for Transferable Utility (TU) games. We show that a solution exists, and refer to it as an Ordinal Shapley value (OSV). The OSV associates with each problem an allocation as well as a matrix of concessions "measuring" the gains each agent forgoes in favor of the other agents. We analyze the structure of the concessions, and show they are unique and symmetric. Next we characterize the OSV using the notion of coalitional dividends, and furthermore show it is monotone in an agent's initial endowments and satisfies anonymity. Finally, similarly to the weighted Shapley value for TU games, we construct a weighted OSV as well.
    Keywords: Non-Transferable Utility games, Shapley value, consistency, fairness
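
    Since the OSV relies on properties of the Shapley value for TU games, a minimal background sketch of the TU Shapley value itself may help (the three-player game below is a made-up example; the recursive OSV construction in the paper is more involved):

    from itertools import permutations
    from math import factorial

    def shapley_value(players, v):
        # Shapley value of a TU game; v maps a frozenset of players to its worth
        phi = {p: 0.0 for p in players}
        for order in permutations(players):
            coalition = frozenset()
            for p in order:
                # marginal contribution of p when joining under this order
                phi[p] += v(coalition | {p}) - v(coalition)
                coalition = coalition | {p}
        n_orders = factorial(len(players))  # average over all arrival orders
        return {p: x / n_orders for p, x in phi.items()}

    # symmetric surplus-sharing game: worth depends only on coalition size
    v = lambda S: {0: 0.0, 1: 1.0, 2: 3.0, 3: 6.0}[len(S)]
    print(shapley_value([1, 2, 3], v))  # by symmetry each agent gets 2.0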

    Chern-Simons theory encoded on a spin chain

    We construct a 1d spin chain Hamiltonian with generic interactions and prove that the thermal correlation functions of the model admit an explicit random matrix representation. As an application of the result, we show how the observables of U(N) Chern-Simons theory on S^3 can be reproduced with the thermal correlation functions of the 1d spin chain, which is of the XX type, with a suitable choice of exponentially decaying interactions between infinitely many neighbours. We show that for this model, the correlation functions of the spin chain at a finite temperature β = 1 give the Chern-Simons partition function, quantum dimensions and the full topological S-matrix.
    Comment: v2, 11 pages. Expanded, more detailed version. Misprints corrected
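
    Schematically, the class of Hamiltonians involved is of the long-range XX form (the coupling profile below only indicates the "exponentially decaying interactions between infinitely many neighbours"; the paper fixes the precise coefficients):

    H = \sum_{j<k} J_{|j-k|}\,\big(\sigma^x_j\,\sigma^x_k + \sigma^y_j\,\sigma^y_k\big), \qquad J_r \propto e^{-c\,r},

    with the thermal correlators \langle \mathcal{O} \rangle_\beta = \mathrm{Tr}\big(\mathcal{O}\, e^{-\beta H}\big) / \mathrm{Tr}\big(e^{-\beta H}\big) evaluated at \beta = 1.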