
    Methods and problems of wavelength-routing in all-optical networks

    We give a survey of recent theoretical results obtained for wavelength-routing in all-optical networks. The survey builds on the previous survey in [Beauquier, B., Bermond, J.-C., Gargano, L., Hell, P., Perennes, S., Vaccaro, U.: Graph problems arising from wavelength-routing in all-optical networks. In: Proc. of the 2nd Workshop on Optics and Computer Science, part of IPPS'97, 1997]. We focus on current research directions and on the methods used. We also state several open problems connected with this line of research and give an overview of several related research directions.

    Parameterized and approximation complexity of the detection pair problem in graphs

    We study the complexity of the problem DETECTION PAIR. A detection pair of a graph $G$ is a pair $(W,L)$ of sets of detectors with $W\subseteq V(G)$, the watchers, and $L\subseteq V(G)$, the listeners, such that for every pair $u,v$ of vertices that are not dominated by a watcher of $W$, there is a listener of $L$ whose distances to $u$ and to $v$ are different. The goal is to minimize $|W|+|L|$. This problem generalizes the two classic problems DOMINATING SET and METRIC DIMENSION, which correspond to the restrictions $L=\emptyset$ and $W=\emptyset$, respectively. DETECTION PAIR was recently introduced by Finbow, Hartnell and Young [A. S. Finbow, B. L. Hartnell and J. R. Young. The complexity of monitoring a network with both watchers and listeners. Manuscript, 2015], who proved it to be NP-complete on trees, a surprising result given that both DOMINATING SET and METRIC DIMENSION are known to be linear-time solvable on trees. It follows from an existing reduction by Hartung and Nichterlein for METRIC DIMENSION that, even on bipartite subcubic graphs of arbitrarily large girth, DETECTION PAIR is NP-hard to approximate within a sub-logarithmic factor and W[2]-hard (when parameterized by solution size). We show, using a reduction to SET COVER, that DETECTION PAIR is approximable within a factor logarithmic in the number of vertices of the input graph. Our two main results are a linear-time 2-approximation algorithm and an FPT algorithm for DETECTION PAIR on trees. Comment: 13 pages
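    To make the definition concrete, here is a minimal Python sketch (our own illustration, not code from the paper) that checks whether a candidate pair $(W,L)$ is a detection pair of a graph given as an adjacency-list dict; the names `bfs_distances` and `is_detection_pair` are ours.

```python
from collections import deque
from itertools import combinations

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from source via BFS."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def is_detection_pair(adj, watchers, listeners):
    """Check the definition: every pair of vertices not dominated by a
    watcher must receive different distances from some listener."""
    dominated = set()
    for w in watchers:
        dominated.add(w)             # a watcher dominates itself
        dominated.update(adj[w])     # ... and all its neighbors
    undominated = [v for v in adj if v not in dominated]
    dist = {l: bfs_distances(adj, l) for l in listeners}
    for u, v in combinations(undominated, 2):
        if all(dist[l].get(u) == dist[l].get(v) for l in listeners):
            return False             # u and v are indistinguishable
    return True

# Example: on the path 0-1-2-3-4, a watcher at 1 dominates {0,1,2},
# and the listener at 4 separates the remaining pair {3,4}.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(is_detection_pair(path, watchers={1}, listeners={4}))  # True
```

    With `listeners` empty the check reduces to DOMINATING SET, and with `watchers` empty to METRIC DIMENSION, matching the two restrictions named in the abstract.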

    Gathering an even number of robots in an odd ring without global multiplicity detection

    We propose a gathering protocol for an even number of robots in a ring-shaped network that allows symmetric but not periodic configurations as initial configurations, yet uses only local weak multiplicity detection. Robots are assumed to be anonymous and oblivious, and the execution model is the non-atomic CORDA model with asynchronous fair scheduling. In our scheme, the number of robots $k$ must be greater than 8, and the number of nodes $n$ of the ring must be odd and greater than $k+3$. The running time of our protocol is $O(n^2)$ asynchronous rounds. Comment: arXiv admin note: text overlap with arXiv:1104.566
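    As an illustration only (this is not the protocol itself), the following Python sketch checks the stated preconditions on an initial configuration, modeled as a tuple of 0/1 occupancy flags around the ring; the periodicity test captures the "not periodic" requirement, and all names are ours.

```python
def is_periodic(config):
    """True if some nontrivial rotation maps the ring configuration
    onto itself."""
    n = len(config)
    return any(all(config[i] == config[(i + s) % n] for i in range(n))
               for s in range(1, n))

def valid_initial_configuration(config):
    """Stated preconditions: k even and greater than 8, n odd and
    greater than k + 3, and the configuration must not be periodic
    (symmetric configurations are allowed)."""
    n, k = len(config), sum(config)
    return (k % 2 == 0 and k > 8 and n % 2 == 1 and n > k + 3
            and not is_periodic(config))

# Example: 10 robots packed on one arc of a 15-node ring.
# k = 10 (even, > 8), n = 15 (odd, > 13), and the single run of
# occupied nodes is not invariant under any nontrivial rotation.
print(valid_initial_configuration((1,) * 10 + (0,) * 5))  # True
```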

    Centroidal bases in graphs

    We introduce the notion of a centroidal locating set of a graph $G$, that is, a set $L$ of vertices such that all vertices in $G$ are uniquely determined by their relative distances to the vertices of $L$. A centroidal locating set of $G$ of minimum size is called a centroidal basis, and its size is the centroidal dimension $CD(G)$. This notion, which is related to previous concepts, gives a new way of identifying the vertices of a graph. The centroidal dimension of a graph $G$ is lower- and upper-bounded by the metric dimension and twice the location-domination number of $G$, respectively. The latter two parameters are standard and well-studied notions in the field of graph identification. We show that for any graph $G$ with $n$ vertices and maximum degree at least 2, $(1+o(1))\frac{\ln n}{\ln\ln n}\leq CD(G)\leq n-1$. We discuss the tightness of these bounds and, in particular, characterize the set of graphs reaching the upper bound. We then show that for graphs in which every pair of vertices is connected via a bounded number of paths, $CD(G)=\Omega\left(\sqrt{|E(G)|}\right)$, the bound being tight for paths and cycles. We finally investigate the computational complexity of determining $CD(G)$ for an input graph $G$, showing that the problem is hard and cannot even be approximated efficiently up to a factor of $o(\log n)$. We also give an $O\left(\sqrt{n\ln n}\right)$-approximation algorithm.
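    The following Python sketch illustrates one natural reading of "relative distances" (our own illustration, not the paper's code): each vertex gets a signature recording, for every pair of landmarks, which of the two is closer, and $L$ is centroidal locating when all signatures are distinct.

```python
from collections import deque
from itertools import combinations

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from source via BFS."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def is_centroidal_locating_set(adj, landmarks):
    """Distinct vertices must have distinct relative-distance
    signatures: for each pair of landmarks, record which of the two
    is closer to the vertex (-1, 0, or +1)."""
    landmarks = sorted(landmarks)
    dist = {l: bfs_distances(adj, l) for l in landmarks}
    seen = set()
    for v in adj:
        sig = tuple((dist[a][v] > dist[b][v]) - (dist[a][v] < dist[b][v])
                    for a, b in combinations(landmarks, 2))
        if sig in seen:
            return False            # two vertices share a signature
        seen.add(sig)
    return True

# Example: on the 5-cycle, the three consecutive vertices 0, 1, 2
# give every vertex a distinct signature.
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(is_centroidal_locating_set(cycle, [0, 1, 2]))  # True
```

    Note that, unlike a metric-dimension resolving set, only the comparisons between landmark distances are available, not the distances themselves, which is why more landmarks may be needed.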

    THE TWO MODELS BEHIND LOW COST PRODUCTS

    Low cost products and services are nowadays present in most sectors. However, a clear definition of what makes a product "low cost" seems to be missing. This article proposes a state of the art on low cost products (through the study of a sample of 42 products recognized as "low cost") and aims to develop a framework to classify them through their design principles, to identify their main characteristics, how they emerge, how they are managed, as well as the impact they have on markets. One of the main conclusions of this work is that two main low cost models should be distinguished. They are labeled i) 'low cost adaptation', where classical products are stripped of their non-essential functions to reduce costs, following a functionalist design approach; and ii) 'smart low cost design', which develops a less costly new product from scratch in answer to consumer needs, and which can be linked to innovative design theories. These two models should not be confused with cost-efficiency models, which also aim at reducing costs but are not a company's main strategy. The studied products show that 'smart low cost design' products are more innovative than 'low cost adaptation' products. The second model is richer and uses elements of the first one. Furthermore, similar effects on the market are observed for both low cost product models, such as the creation of demand and overall price reduction, but the second model seems to have a stronger impact. This work illustrates that a low cost approach can be used as a design tool.

    THE DESIGN AND CHARACTERISTICS OF LOW COST PRODUCTS

    Low cost products and services are nowadays present in most sectors. However, a clear definition of what makes a product "low cost" seems to be missing. This article proposes a state of the art on low cost products (through the study of a sample of 50 products recognized as "low cost") and aims to develop a framework to classify them through their design principles, to identify their main characteristics, how they emerge, how they are managed, as well as the impact they have on markets. One of the main conclusions of this work is that two main low cost models should be distinguished. They are labeled i) 'low cost adaptation', where classical products are stripped of their non-essential functions to reduce costs, following a functionalist design approach; and ii) 'smart low cost design', which develops a less costly new product from scratch in answer to consumer needs, and which can be linked to innovative design theories. These two models should not be confused with cost-efficiency models, which also aim at reducing costs but are not a company's main strategy. The studied products show that 'smart low cost design' products are more innovative than 'low cost adaptation' products. The second model is richer and uses elements of the first one. Furthermore, similar effects on the market are observed for both low cost product models, such as the creation of demand and overall price reduction, but the second model seems to have a stronger impact. This work illustrates that a low cost approach can be used as a design tool.

    Improved Analysis of Deterministic Load-Balancing Schemes

    We consider the problem of deterministic load balancing of tokens in the discrete model. A set of $n$ processors is connected into a $d$-regular undirected network. In every time step, each processor exchanges some of its tokens with each of its neighbors in the network. The goal is to minimize the discrepancy between the number of tokens on the most-loaded and the least-loaded processor as quickly as possible. Rabani et al. (1998) present a general technique for the analysis of a wide class of discrete load balancing algorithms. Their approach is to characterize the deviation between the actual loads of a discrete balancing algorithm and the distribution generated by a related Markov chain. The Markov chain can also be regarded as the underlying model of a continuous diffusion algorithm. Rabani et al. showed that after time $T = O(\log(Kn)/\mu)$, any algorithm of their class achieves a discrepancy of $O(d\log n/\mu)$, where $\mu$ is the spectral gap of the transition matrix of the graph and $K$ is the initial load discrepancy in the system. In this work we identify some natural additional conditions on deterministic balancing algorithms, resulting in a class of algorithms reaching a smaller discrepancy. This class contains well-known algorithms, e.g., the Rotor-Router. Specifically, we introduce the notion of cumulatively fair load-balancing algorithms, where in any interval of consecutive time steps the total number of tokens sent out over an edge by a node is the same (up to constants) for all adjacent edges. We prove that algorithms which are cumulatively fair, and where every node retains a sufficient part of its load in each step, achieve a discrepancy of $O(\min\{d\sqrt{\log n/\mu},\,d\sqrt{n}\})$ in time $O(T)$. We also show that in general neither of these assumptions may be omitted without increasing discrepancy. We then show by a combinatorial potential reduction argument that any cumulatively fair scheme satisfying some additional assumptions achieves a discrepancy of $O(d)$ almost as quickly as the continuous diffusion process. This positive result applies to some of the simplest and most natural discrete load balancing schemes. Comment: minor corrections; updated literature overview
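    As a toy illustration of the "cumulatively fair plus lazy" idea (our own sketch, not one of the algorithms analyzed in the paper), each node below retains a fixed fraction of its tokens and deals the rest to its neighbors one token at a time in round-robin rotor order, so over any window the counts sent along adjacent edges differ by at most a constant.

```python
def balancing_round(adj, load, rotor, keep_fraction=0.5):
    """One synchronous round: every node keeps part of its load (the
    'retains a sufficient part' condition) and distributes the rest
    round-robin over its incident edges (cumulative fairness)."""
    new_load = {v: 0 for v in adj}
    for v in adj:
        keep = int(keep_fraction * load[v])
        new_load[v] += keep                  # lazy step: retain tokens
        for _ in range(load[v] - keep):      # deal out remaining tokens
            new_load[adj[v][rotor[v]]] += 1  # one token to rotor target
            rotor[v] = (rotor[v] + 1) % len(adj[v])
    return new_load

# Example: 3-regular graph K4 with all 32 tokens initially at node 0.
k4 = {v: [u for u in range(4) if u != v] for v in range(4)}
load = {0: 32, 1: 0, 2: 0, 3: 0}
rotor = {v: 0 for v in k4}
for _ in range(10):
    load = balancing_round(k4, load, rotor)
print(load)  # loads converge toward 8 tokens per node
```

    The rotor state persists across rounds, which is what makes the per-edge send counts nearly equal over every interval; a random neighbor choice would only balance them in expectation.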

    Translating Practical Knowledge: Three Theories of Portraiture from the Mid-Qing Dynasty

    This essay discusses three Chinese treatises on portraiture techniques written during the 18th century and how the authors codified practical knowledge.