
    The Opium Wars, Opium Legalization, and Opium Consumption in China

    Get PDF
    The effect of drug prohibition on drug consumption is a critical issue in debates over drug policy. One episode that provides information on the consumption-reducing effect of drug prohibition is the Chinese legalization of opium in 1858. In this paper we examine the impact of China's opium legalization on the quantity and price of British opium exports from India to China during the 19th century. We find little evidence that legalization increased exports or decreased price. Thus, the evidence suggests China's opium prohibition had a minimal impact on opium consumption.

    From average case complexity to improper learning complexity

    Full text link
    The basic problem in the PAC model of computational learning theory is to determine which hypothesis classes are efficiently learnable. There is presently a dearth of results showing hardness of learning problems. Moreover, the existing lower bounds fall short of the best known algorithms. The biggest challenge in proving complexity results is to establish hardness of {\em improper learning} (a.k.a. representation-independent learning). The difficulty in proving lower bounds for improper learning is that the standard reductions from $\mathbf{NP}$-hard problems do not seem to apply in this context. There is essentially only one known approach to proving lower bounds on improper learning. It was initiated in (Kearns and Valiant 89) and relies on cryptographic assumptions. We introduce a new technique for proving hardness of improper learning, based on reductions from problems that are hard on average. We put forward a (fairly strong) generalization of Feige's assumption (Feige 02) about the complexity of refuting random constraint satisfaction problems. Combining this assumption with our new technique yields far-reaching implications. In particular: 1. Learning $\mathrm{DNF}$s is hard. 2. Agnostically learning halfspaces with a constant approximation ratio is hard. 3. Learning an intersection of $\omega(1)$ halfspaces is hard. Comment: 34 pages

    Quantum Interactive Proofs with Competing Provers

    Full text link
    This paper studies quantum refereed games, which are quantum interactive proof systems with two competing provers: one that tries to convince the verifier to accept and the other that tries to convince the verifier to reject. We prove that every language having an ordinary quantum interactive proof system also has a quantum refereed game in which the verifier exchanges just one round of messages with each prover. A key part of our proof is the fact that there exists a single quantum measurement that reliably distinguishes between mixed states chosen arbitrarily from disjoint convex sets having large minimal trace distance from one another. We also show how to reduce the probability of error for some classes of quantum refereed games. Comment: 13 pages, to appear in STACS 2005
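
    To make the trace-distance criterion concrete: for two states with trace distance D = (1/2)||rho - sigma||_1, the optimal single measurement distinguishes them with probability (1 + D)/2 (the Helstrom bound). The minimal numpy sketch below illustrates this standard bound on a toy pair of single-qubit states; it is not the paper's measurement construction.

        # Helstrom bound illustration (standard textbook bound, not the
        # paper's construction): trace distance D = 0.5 * ||rho - sigma||_1,
        # optimal single-measurement distinguishing probability = (1 + D) / 2.
        import numpy as np

        def trace_distance(rho, sigma):
            # For Hermitian matrices the singular values are |eigenvalues|.
            eigvals = np.linalg.eigvalsh(rho - sigma)
            return 0.5 * np.abs(eigvals).sum()

        # Toy single-qubit states: maximally mixed vs. a polarized mixed state.
        rho = np.array([[0.5, 0.0], [0.0, 0.5]])
        sigma = np.array([[0.9, 0.0], [0.0, 0.1]])

        D = trace_distance(rho, sigma)
        print(f"trace distance D = {D:.2f}")                           # 0.40
        print(f"best distinguishing probability = {(1 + D) / 2:.2f}")  # 0.70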

    A Thirty-Four Billion Solar Mass Black Hole in SMSS J2157-3602, the Most Luminous Known Quasar

    Get PDF
    From near-infrared spectroscopic measurements of the MgII emission line doublet, we estimate the black hole (BH) mass of the quasar SMSS J215728.21-360215.1 as (3.4 +/- 0.6) x 10^10 M_sun and refine the redshift of the quasar to z=4.692. SMSS J2157 is the most luminous known quasar, with a 3000A luminosity of (4.7 +/- 0.5) x 10^47 erg/s and an estimated bolometric luminosity of 1.6 x 10^48 erg/s, yet its Eddington ratio is only ~0.4. Thus, the high luminosity of this quasar is a consequence of its extremely large BH -- one of the most massive BHs at z > 4. Comment: 7 pages, 3 figures. Accepted for publication in MNRAS
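
    The quoted Eddington ratio can be checked with back-of-the-envelope arithmetic. Using the textbook Eddington luminosity for ionized hydrogen, L_Edd ~ 1.26 x 10^38 (M/M_sun) erg/s (a standard coefficient, not a value from the paper), the abstract's numbers are consistent:

        # Back-of-the-envelope check of the quoted Eddington ratio; the
        # coefficient 1.26e38 erg/s per solar mass is the standard Eddington
        # luminosity for ionized hydrogen (textbook value, not from the paper).
        M_BH = 3.4e10            # black hole mass in solar masses (abstract)
        L_bol = 1.6e48           # bolometric luminosity in erg/s (abstract)

        L_edd = 1.26e38 * M_BH   # Eddington luminosity in erg/s
        print(f"L_Edd   ~ {L_edd:.1e} erg/s")      # ~4.3e48 erg/s
        print(f"L/L_Edd ~ {L_bol / L_edd:.2f}")    # ~0.37, i.e. the quoted ~0.4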

    Algorithmic and Hardness Results for the Colorful Components Problems

    Full text link
    In this paper we investigate the colorful components framework, motivated by applications emerging from comparative genomics. The general goal is to remove a collection of edges from an undirected vertex-colored graph $G$ such that in the resulting graph $G'$ all the connected components are colorful (i.e., any two vertices of the same color belong to different connected components). We want $G'$ to optimize an objective function, the selection of this function being specific to each problem in the framework. We analyze three objective functions, and thus three different problems, which are believed to be relevant for the biological applications: minimizing the number of singleton vertices, maximizing the number of edges in the transitive closure, and minimizing the number of connected components. Our main result is a polynomial-time algorithm for the first problem. This result disproves the conjecture of Zheng et al. that the problem is $NP$-hard (assuming $P \neq NP$). Then, we show that the second problem is $APX$-hard, thus proving and strengthening the conjecture of Zheng et al. that the problem is $NP$-hard. Finally, we show that the third problem does not admit polynomial-time approximation within a factor of $|V|^{1/14 - \epsilon}$ for any $\epsilon > 0$, assuming $P \neq NP$ (or within a factor of $|V|^{1/2 - \epsilon}$, assuming $ZPP \neq NP$). Comment: 18 pages, 3 figures
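
    For intuition, the colorfulness condition is easy to test once an edge set has been removed. The sketch below (an illustrative helper whose names and graph encoding are mine, not the paper's algorithm) checks whether every connected component of a vertex-colored graph is colorful:

        # Illustrative check: a graph is "colorful" if no connected component
        # contains two vertices of the same color.
        from collections import defaultdict

        def is_colorful(vertices, edges, color):
            adj = defaultdict(list)
            for u, v in edges:
                adj[u].append(v)
                adj[v].append(u)
            seen = set()
            for s in vertices:
                if s in seen:
                    continue
                seen.add(s)
                component_colors = set()
                stack = [s]
                while stack:                       # traverse the component of s
                    u = stack.pop()
                    if color[u] in component_colors:
                        return False               # two same-colored vertices meet
                    component_colors.add(color[u])
                    for w in adj[u]:
                        if w not in seen:
                            seen.add(w)
                            stack.append(w)
            return True

        # Path 1-2-3 colored r, g, r is not colorful; deleting edge (2, 3) fixes it.
        print(is_colorful([1, 2, 3], [(1, 2), (2, 3)], {1: "r", 2: "g", 3: "r"}))  # False
        print(is_colorful([1, 2, 3], [(1, 2)], {1: "r", 2: "g", 3: "r"}))          # True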

    Finding Connected Dense $k$-Subgraphs

    Full text link
    Given a connected graph $G$ on $n$ vertices and a positive integer $k \le n$, a subgraph of $G$ on $k$ vertices is called a $k$-subgraph in $G$. We design combinatorial approximation algorithms for finding a connected $k$-subgraph in $G$ such that its density is at least a factor $\Omega(\max\{n^{-2/5}, k^2/n^2\})$ of the density of the densest $k$-subgraph in $G$ (which is not necessarily connected). These particularly provide the first non-trivial approximations for the densest connected $k$-subgraph problem on general graphs.
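
    On small instances the objective can be evaluated by brute force, which makes the problem statement concrete. The sketch below is exponential-time and for intuition only; it takes density to be edges per vertex of the induced subgraph, one common convention (an assumption, since the abstract does not fix the definition):

        # Brute-force densest connected k-subgraph on a toy graph.
        from itertools import combinations

        def induced_edges(S, edges):
            return [(u, v) for u, v in edges if u in S and v in S]

        def is_connected(S, edges):
            S = set(S)
            adj = {u: [] for u in S}
            for u, v in induced_edges(S, edges):
                adj[u].append(v)
                adj[v].append(u)
            start = next(iter(S))
            seen, stack = {start}, [start]
            while stack:
                for w in adj[stack.pop()]:
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            return seen == S

        def densest_connected_k_subgraph(vertices, edges, k):
            candidates = [S for S in combinations(vertices, k) if is_connected(S, edges)]
            best = max(candidates, key=lambda S: len(induced_edges(S, edges)))
            return best, len(induced_edges(best, edges)) / k

        # Triangle plus a pendant vertex: the densest connected 3-subgraph is the triangle.
        V = [1, 2, 3, 4]
        E = [(1, 2), (2, 3), (1, 3), (3, 4)]
        print(densest_connected_k_subgraph(V, E, 3))   # ((1, 2, 3), 1.0)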

    Approximating k-Forest with Resource Augmentation: A Primal-Dual Approach

    Full text link
    In this paper, we study the $k$-forest problem in the model of resource augmentation. In the $k$-forest problem, given an edge-weighted graph $G(V,E)$, a parameter $k$, and a set of $m$ demand pairs $\subseteq V \times V$, the objective is to construct a minimum-cost subgraph that connects at least $k$ demands. The problem is hard to approximate---the best-known approximation ratio is $O(\min\{\sqrt{n}, \sqrt{k}\})$. Furthermore, $k$-forest is as hard to approximate as the notoriously hard densest $k$-subgraph problem. While the $k$-forest problem is hard to approximate in the worst case, we show that with the use of resource augmentation, we can efficiently approximate it up to a constant factor. First, we restate the problem in terms of the number of demands that are {\em not} connected. In particular, the objective of the $k$-forest problem can be viewed as to remove at most $m-k$ demands and find a minimum-cost subgraph that connects the remaining demands. We use this perspective of the problem to explain the performance of our algorithm (in terms of the augmentation) in a more intuitive way. Specifically, we present a polynomial-time algorithm for the $k$-forest problem that, for every $\epsilon > 0$, removes at most $m-k$ demands and has cost no more than $O(1/\epsilon^{2})$ times the cost of an optimal algorithm that removes at most $(1-\epsilon)(m-k)$ demands.
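
    The restated objective is easy to evaluate for a candidate subgraph: merge the chosen edges with a union-find structure and count the served demands. The helper below is an illustrative sketch of this "removed demands" view, not the paper's primal-dual algorithm:

        # Feasibility check: a solution serving at least k of the m demand
        # pairs removes at most m - k demands.
        def connected_demands(vertices, chosen_edges, demands):
            parent = {v: v for v in vertices}
            def find(v):
                while parent[v] != v:
                    parent[v] = parent[parent[v]]   # path halving
                    v = parent[v]
                return v
            for u, v in chosen_edges:
                parent[find(u)] = find(v)
            return sum(1 for u, v in demands if find(u) == find(v))

        V = [1, 2, 3, 4]
        demands = [(1, 2), (3, 4), (1, 4)]   # m = 3 demand pairs
        chosen = [(1, 2), (2, 4)]            # a candidate subgraph
        k = 2
        served = connected_demands(V, chosen, demands)
        print(served, served >= k)           # 2 True: at most m - k = 1 demand removed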

    AMS measurements of cosmogenic and supernova-ejected radionuclides in deep-sea sediment cores

    Full text link
    Samples of two deep-sea sediment cores from the Indian Ocean are analyzed with accelerator mass spectrometry (AMS) to search for traces of recent supernova activity around 2 Myr ago. Here, long-lived radionuclides that are synthesized in massive stars and ejected in supernova explosions, namely 26Al, 53Mn and 60Fe, are extracted from the sediment samples. The cosmogenic isotope 10Be, which is mainly produced in the Earth's atmosphere, is analyzed for dating the marine sediment cores. The first AMS measurement results for 10Be and 26Al are presented, representing the first detailed study of the time period 1.7-3.1 Myr with high time resolution. Our first results do not support a significant extraterrestrial signal of 26Al above terrestrial background. However, there is evidence that, like 10Be, 26Al might be a valuable isotope for dating deep-sea sediment cores over the past few million years. Comment: 5 pages, 2 figures, Proceedings of the Heavy Ion Accelerator Symposium on Fundamental and Applied Science, 2013, to be published in EPJ Web of Conferences
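
    As an aside on why 10Be suits this time window: with the commonly quoted half-life of about 1.39 Myr, a measurable fraction of the initial 10Be survives over a few million years, so the standard decay law yields ages directly. The sketch below illustrates the decay arithmetic only, not the authors' sediment-dating procedure:

        # Decay-law arithmetic; the 10Be half-life of ~1.39 Myr is a commonly
        # quoted literature value, not a number taken from this paper.
        import math

        T_HALF_BE10 = 1.39                   # half-life in Myr (approximate)

        def decay_age(initial, measured, t_half=T_HALF_BE10):
            # Age in Myr from N = N0 * exp(-lambda * t).
            lam = math.log(2) / t_half
            return math.log(initial / measured) / lam

        # A sample retaining ~30% of its initial 10Be is roughly 2.4 Myr old,
        # squarely inside the 1.7-3.1 Myr window studied.
        print(f"{decay_age(1.0, 0.30):.1f} Myr")   # ~2.4 Myr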

    Combinatorial Assortment Optimization

    Full text link
    Assortment optimization refers to the problem of designing a slate of products to offer potential customers, such as stocking the shelves in a convenience store. The price of each product is fixed in advance, and a probabilistic choice function describes which product a customer will choose from any given subset. We introduce the combinatorial assortment problem, where each customer may select a bundle of products. We consider a model of consumer choice where the relative value of different bundles is described by a valuation function, while individual customers may differ in their absolute willingness to pay, and study the complexity of the resulting optimization problem. We show that any sub-polynomial approximation to the problem requires exponentially many demand queries when the valuation function is XOS, and that no FPTAS exists even for succinctly-representable submodular valuations. On the positive side, we show how to obtain constant approximations under a "well-priced" condition, where each product's price is sufficiently high. We also provide an exact algorithm for $k$-additive valuations, and show how to extend our results to a learning setting where the seller must infer the customers' preferences from their purchasing behavior.
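
    To make the model concrete, the toy sketch below evaluates seller revenue for one assortment under an assumed quasilinear utility, where a customer of scale theta buys the bundle maximizing theta * v(B) - price(B). The instance, the additive valuation, and the utility form are illustrative assumptions, not details from the paper:

        # Toy combinatorial-assortment revenue evaluation (hypothetical
        # instance; the quasilinear utility form is an assumption).
        from itertools import chain, combinations

        PRICES = {"a": 2.0, "b": 3.0}
        ITEM_VALUES = {"a": 1.0, "b": 2.0}   # an additive valuation, for simplicity

        def value(bundle):
            return sum(ITEM_VALUES[i] for i in bundle)

        def price(bundle):
            return sum(PRICES[i] for i in bundle)

        def bundles(assortment):
            s = list(assortment)
            return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

        def revenue(assortment, thetas):
            total = 0.0
            for theta in thetas:
                best = max(bundles(assortment), key=lambda B: theta * value(B) - price(B))
                total += price(best)         # the empty bundle means no purchase
            return total

        # A low type buys nothing; a high type buys the full bundle for 5.0.
        print(revenue({"a", "b"}, thetas=[1.0, 3.0]))   # 5.0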