
    A Candidate Access Structure for Super-polynomial Lower Bound on Information Ratio

    The contribution vector (convec) of a secret sharing scheme is the vector of all share sizes divided by the secret size. A measure on the convec (e.g., its maximum or average) serves as an efficiency criterion for secret sharing schemes, referred to as the information ratio. It is generally believed that there exists a family of access structures such that the information ratio of any secret sharing scheme realizing it is $2^{\Omega(n)}$, where the parameter $n$ stands for the number of participants. The best known lower bound, due to Csirmaz (1994), is $\Omega(n/\log n)$. Closing this gap is a long-standing open problem in cryptology. Using a technique called \emph{substitution}, we recursively construct a family of access structures, starting from that of Csirmaz, which might be a candidate for super-polynomial information ratio. We support this possibility by showing that our family has information ratio $n^{\Omega(\log n/\log\log n)}$, assuming the truth of a well-stated information-theoretic conjecture, called the \emph{substitution conjecture}. The substitution method is a technique for composing access structures, similar to the so-called block composition of Boolean functions, and the substitution conjecture is reminiscent of the Karchmer-Raz-Wigderson conjecture on the depth complexity of Boolean functions. It emerges after introducing the notion of the convec set of an access structure, a subset of $n$-dimensional real space that includes all achievable convecs. We prove some topological properties of convec sets and raise several open problems.
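
    For contrast with the super-polynomial regime conjectured above, the following sketch (function names are ours, not the paper's) computes the convec-based information ratio for Shamir's threshold scheme, where every share is a single field element and the ratio is exactly 1, the ideal case.

```python
# Minimal sketch (not the paper's construction): Shamir (t, n)-threshold
# sharing over GF(P). Each share is one field element, so the convec is
# the all-ones vector and the information ratio -- its maximum entry --
# equals 1. The paper's access structures are far from this ideal case.
import random

P = 2**127 - 1  # a Mersenne prime; secret and shares live in GF(P)

def shamir_share(secret, t, n, p=P):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p)
            for x in range(1, n + 1)]

def information_ratio(share_bits, secret_bits):
    """Max entry of the convec: share sizes divided by the secret size."""
    return max(s / secret_bits for s in share_bits)

shares = shamir_share(secret=42, t=3, n=5)
bits = [P.bit_length() for _ in shares]  # each share is one field element
print(information_ratio(bits, P.bit_length()))  # 1.0: the ideal case
```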

    Preference Elicitation in Matching Markets Via Interviews: A Study of Offline Benchmarks

    The stable marriage problem and its extensions have been extensively studied, with much of the work in the literature assuming that agents fully know their own preferences over alternatives. This assumption, however, is not always practical (especially in large markets), and agents usually need to go through some costly deliberation process in order to learn their preferences. In this paper we assume that such deliberations are carried out via interviews, where an interview involves a man and a woman, each of whom learns information about the other as a consequence. If everybody interviews everyone else, then clearly agents can fully learn their preferences. But interviews are costly, and we may wish to minimize their use. It is often the case, especially in practical settings, that due to correlation between agents' preferences, it is unnecessary for all potential interviews to be carried out in order to obtain a stable matching. Thus the problem is to find a good strategy for interviews to be carried out in order to minimize their use, whilst leading to a stable matching. One way to evaluate the performance of an interview strategy is to compare it against a naive algorithm that conducts all interviews. We argue, however, that a more meaningful comparison would be against an optimal offline algorithm that has access to agents' preference orderings under complete information. We show that, unless P=NP, no offline algorithm can compute the optimal interview strategy in polynomial time. If we are additionally aiming for a particular stable matching (perhaps one with certain desirable properties), we provide restricted settings under which efficient optimal offline algorithms exist.
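
    The complete-information baseline that the offline benchmark presupposes is the classical Gale-Shapley algorithm, which computes a stable matching once every preference order is fully known, i.e., as if all interviews had been conducted. The sketch below uses illustrative names, not the paper's notation.

```python
# Minimal sketch of the complete-information baseline: Gale-Shapley
# deferred acceptance, which yields a stable matching when all
# preferences are known (the "all interviews done" extreme).
def gale_shapley(men_prefs, women_prefs):
    """men_prefs/women_prefs: dicts mapping each agent to a ranked list."""
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)                  # men not yet matched
    next_proposal = {m: 0 for m in men_prefs}
    engaged = {}                            # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]  # m's best not-yet-tried woman
        next_proposal[m] += 1
        if w not in engaged:                # w accepts her first proposal
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:  # w prefers m; trade up
            free.append(engaged[w])
            engaged[w] = m
        else:                               # w rejects m; he stays free
            free.append(m)
    return {m: w for w, m in engaged.items()}

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(men, women))  # a-x and b-y: each man gets his top choice
```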

    Nilpotent Networks and 4D RG Flows

    Starting from a general $\mathcal{N}=2$ SCFT, we study the network of $\mathcal{N}=1$ SCFTs obtained from relevant deformations by nilpotent mass parameters. We also study the case of flipper field deformations, where the mass parameters are promoted to a chiral superfield with nilpotent vev. Nilpotent elements of semi-simple algebras admit a partial ordering connected by a corresponding directed graph. We find strong evidence that the resulting fixed points are connected by a similar network of 4D RG flows. To illustrate these general concepts, we also present a full list of nilpotent deformations in the case of explicit $\mathcal{N}=2$ SCFTs, including the case of a single D3-brane probing a $D$- or $E$-type F-theory 7-brane, and 6D $(G,G)$ conformal matter compactified on a $T^2$, as described by a single M5-brane probing a $D$- or $E$-type singularity. We also observe a number of numerical coincidences of independent interest, including a collection of theories with rational values for their conformal anomalies, as well as a surprisingly nearly constant value of the ratio $a_{\mathrm{IR}}/c_{\mathrm{IR}}$ across the entire network of flows associated with a given UV $\mathcal{N}=2$ SCFT. The arXiv submission also includes the full dataset of theories, which can be accessed with a companion Mathematica script.
    Comment: v2: 73 pages, 12 figures, clarifications and references added
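
    The partial order on nilpotent elements referred to above can be made concrete in the simplest (A-type) case, where nilpotent orbits of sl(N) are labeled by integer partitions of N ordered by dominance. The sketch below is our illustration of that comparability graph, not the paper's D- and E-type computations.

```python
# Minimal sketch of the partial order behind the flow network, in the
# simplest (A-type) case: nilpotent orbits of sl(N) correspond to
# partitions of N, ordered by dominance. The paper's D- and E-type
# cases use the analogous (richer) orbit posets.
from itertools import accumulate

def partitions(n, max_part=None):
    """All integer partitions of n as non-increasing tuples."""
    max_part = max_part or n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def dominates(p, q):
    """True if p dominates q: every partial sum of p >= that of q."""
    pp = list(accumulate(p)) + [sum(p)] * len(q)  # pad with the total
    qq = list(accumulate(q)) + [sum(q)] * len(p)
    return all(a >= b for a, b in zip(pp, qq))

N = 4
orbits = list(partitions(N))
edges = [(p, q) for p in orbits for q in orbits if p != q and dominates(p, q)]
print(orbits)      # [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]
print(len(edges))  # 10: a chain for N=4; incomparable orbits appear at N=6
```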

    On Compact Routing for the Internet

    While there exist compact routing schemes designed for grids, trees, and Internet-like topologies that offer routing tables whose sizes scale logarithmically with the network size, we demonstrate in this paper that, in view of recent results in compact routing research, such logarithmic scaling on Internet-like topologies is fundamentally impossible in the presence of topology dynamics or topology-independent (flat) addressing. We use analytic arguments to show that the number of routing control messages per topology change cannot scale better than linearly on Internet-like topologies. We also employ simulations to confirm that logarithmic routing table size scaling is broken by topology-independent addressing, a cornerstone of popular locator-identifier split proposals that aim to improve routing scaling in the presence of network topology dynamics or host mobility. These pessimistic findings lead us to the conclusion that a fundamental re-examination of the assumptions behind routing models and abstractions is needed in order to find a routing architecture that would be able to scale "indefinitely."
    Comment: This is a significantly revised, journal version of cs/050802
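
    The linear-in-n cost per topology change can be eyeballed with a small simulation. The sketch below (our illustration, assuming the networkx library; it is not the paper's simulation setup) counts how many nodes see a changed shortest-path distance after a single edge failure, a crude lower bound on the control messages that change must trigger.

```python
# Minimal sketch: after one topology change, count nodes whose distance
# vector changes -- a rough proxy for required routing control messages.
import random
import networkx as nx

def affected_nodes(g, edge):
    """Nodes whose shortest-path distances change when `edge` is removed."""
    before = dict(nx.all_pairs_shortest_path_length(g))
    h = g.copy()
    h.remove_edge(*edge)
    if not nx.is_connected(h):      # skip cut edges; keep the sketch simple
        return None
    after = dict(nx.all_pairs_shortest_path_length(h))
    return sum(1 for v in g if before[v] != after[v])

g = nx.barabasi_albert_graph(256, 2)    # rough Internet-like topology
e = random.choice(list(g.edges()))
hit = affected_nodes(g, e)
if hit is not None:
    # A failure near the core of a scale-free graph typically perturbs
    # a constant fraction of all nodes, not a polylogarithmic number.
    print(f"edge {e}: {hit} of {len(g)} nodes must update")
```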

    Constrained Monotone Function Maximization and the Supermodular Degree

    The problem of maximizing a constrained monotone set function has many practical applications and generalizes many combinatorial problems. Unfortunately, it is generally not possible to maximize a monotone set function up to an acceptable approximation ratio, even subject to simple constraints. One highly studied approach to coping with this hardness is to restrict the set function. An outstanding disadvantage of imposing such a restriction is that nothing is implied for set functions deviating from the restriction, even slightly. A more flexible approach, studied by Feige and Izsak, is to design an approximation algorithm whose approximation ratio depends on the complexity of the instance, as measured by some complexity measure. Specifically, they introduced a complexity measure called the supermodular degree, which measures deviation from submodularity, and designed an algorithm for the welfare maximization problem whose approximation ratio depends on this measure. In this work, we give the first (to the best of our knowledge) algorithm for maximizing an arbitrary monotone set function subject to a k-extendible system. This class of constraints captures, for example, the intersection of k matroids (note that a single matroid constraint is sufficient to capture the welfare maximization problem). Our approximation ratio deteriorates gracefully with the complexity of the set function and with k. Our work can be seen as generalizing both the classic result of Fisher, Nemhauser and Wolsey for maximizing a submodular set function subject to a k-extendible system, and the result of Feige and Izsak for the welfare maximization problem. Moreover, when our algorithm is applied to each of these simpler cases, it obtains the same approximation ratio as the respective original work.
    Comment: 23 pages
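
    As a concrete anchor for the greedy template that both the Fisher-Nemhauser-Wolsey result and this line of work build on, here is a minimal sketch (names are ours) of greedy maximization under a cardinality constraint, the simplest k-extendible system, applied to a toy submodular coverage function.

```python
# Minimal greedy sketch: at each step, pick the feasible element with
# the largest marginal gain. For monotone submodular f under a
# cardinality constraint this classically gives a (1 - 1/e) guarantee;
# the paper's ratio instead degrades gracefully with k and with the
# supermodular degree, without requiring submodularity.
def greedy_max(f, ground_set, k):
    """Greedily build a set of size <= k maximizing monotone f."""
    chosen = set()
    for _ in range(k):
        candidates = ground_set - chosen
        if not candidates:
            break
        gains = {e: f(chosen | {e}) - f(chosen) for e in candidates}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:        # defensive: monotone f keeps gains >= 0
            break
        chosen.add(best)
    return chosen

# Toy coverage function (submodular): f(S) = number of items covered.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}}
cover = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy_max(cover, set(sets), k=2))   # {1, 3}: covers all five items
```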