
    On Ranges and Partitions in Optimal TCAMs

    Traffic splitting is a required functionality in networks, for example for load balancing over paths or servers, or for enforcing the source's access restrictions. The capacities of the servers (or the number of users with particular access restrictions) determine the sizes of the parts into which traffic should be split. A recent approach implements traffic splitting within the ternary content addressable memory (TCAM), which is often available in switches. It is important to reduce the amount of memory allocated for this task, since TCAMs are power consuming and are often also required for other tasks such as classification and routing. In the longest-prefix model (LPM), Draves et al. (INFOCOM 1999) find a minimal representation of a function, and Sadeh et al. (INFOCOM 2019) find a minimal representation of a partition. In certain situations, range functions are of special interest, that is, functions in which all the addresses with the same target, or action, are consecutive. In this paper we show that minimizing the number of TCAM entries needed to represent a partition comes at the cost of fragmentation: for some partitions, some actions must be assigned multiple ranges. We then also study the case where each target must have a single segment of addresses.
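The tension between ranges and prefix entries that this abstract refers to can be illustrated with the standard prefix-expansion argument (not taken from the paper): a single range in a w-bit address space decomposes into up to 2w − 2 prefixes, so covering each action with one range can force many TCAM entries. A minimal sketch, with the function name ours:

```python
def range_to_prefixes(lo, hi, width):
    """Decompose the inclusive address range [lo, hi] in a width-bit
    address space into a minimal list of disjoint prefix entries,
    each given as (value, prefix_len). This is the classic
    prefix-expansion decomposition used in LPM/TCAM representations."""
    prefixes = []
    while lo <= hi:
        # Largest aligned block starting at lo, limited both by the
        # alignment of lo and by the number of remaining addresses.
        max_align = (lo & -lo).bit_length() - 1 if lo else width
        while (1 << max_align) > hi - lo + 1:
            max_align -= 1
        prefixes.append((lo, width - max_align))
        lo += 1 << max_align
    return prefixes
```

For example, the full space [0, 15] with width 4 needs one entry, while the range [1, 14] needs six, which is the worst case 2w − 2 for w = 4.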

    Dynamic Binary Search Trees: Improved Lower Bounds for the Greedy-Future Algorithm

    Binary search trees (BSTs) are one of the most basic and widely used data structures. The best static tree for serving a sequence of queries (searches) can be computed by dynamic programming. In contrast, when the BSTs are allowed to be dynamic (i.e., change by rotations between searches), we still do not know how to compute the optimal algorithm (OPT) for a given sequence. One of the candidate algorithms whose serving cost is suspected to be optimal up to a (multiplicative) constant factor is known by the name Greedy Future (GF). In an equivalent geometric way of representing queries on BSTs, GF is in fact equivalent to another algorithm called Geometric Greedy (GG). Most of the results on GF are obtained using the geometric model and the study of GG. Despite this intensive recent fruitful research, the best lower bound we have on the competitive ratio of GF is 4/3. Furthermore, it has been conjectured that the additive gap between the cost of GF and OPT is only linear in the number of queries. In this paper we prove a lower bound of 2 on the competitive ratio of GF, and we prove that the additive gap between the cost of GF and OPT can be Ω(m · log log n), where n is the number of items in the tree and m is the number of queries.
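The static optimum mentioned above is computable by a classic O(n^3) dynamic program (Knuth's optimization reduces it to O(n^2)). A minimal sketch, with access frequencies standing in for the query sequence; the function name is ours:

```python
def optimal_static_bst_cost(freq):
    """Minimal total cost of serving all queries with the best static
    BST, where freq[i] is how often key i is queried and the root is
    at depth 1. cost[i][j] covers keys i..j-1 (half-open interval)."""
    n = len(freq)
    # Prefix sums give O(1) range-frequency queries.
    pref = [0]
    for f in freq:
        pref.append(pref[-1] + f)
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(1, n + 1):
        for i in range(n - length + 1):
            j = i + length
            # Try every root r; subtrees are cost[i][r] and cost[r+1][j].
            best = min(cost[i][r] + cost[r + 1][j] for r in range(i, j))
            # Every key in i..j-1 gains one level under the chosen root.
            cost[i][j] = best + (pref[j] - pref[i])
    return cost[0][n]
```

For frequencies [1, 1, 1] the balanced tree is optimal with cost 1 + 2 + 2 = 5.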

    Caching Connections in Matchings

    Motivated by recent advances in Software Defined Networks (SDNs) and the desire to utilize a limited number of configurable optical switches, we define an online problem which we call the Caching in Matchings problem. This problem has a natural combinatorial structure and therefore may find additional applications in theory and practice. In the Caching in Matchings problem our cache consists of k matchings of connections between servers that form a bipartite graph. To cache a connection we insert it into one of the k matchings, possibly evicting at most two other connections from this matching. This problem resembles the problem known as Connection Caching, where we also cache connections, but the only restriction is that they form a graph with bounded degree k. Our results show a somewhat surprising qualitative separation between the problems: the competitive ratio of any online algorithm for caching in matchings must depend on the size of the graph. Specifically, we give a deterministic O(nk)-competitive and a randomized O(n log k)-competitive algorithm for caching in matchings, where n is the number of servers and k is the number of matchings. We also show that the competitive ratio of any deterministic algorithm is Ω(max(n/k, k)) and of any randomized algorithm is Ω(log(n/(k^2 log k)) · log k). In particular, the lower bound for randomized algorithms is Ω(log n) regardless of k, and can be as high as Ω(log^2 n) if k = n^{1/3}, for example. We also show that if we allow the algorithm to use at least 2k − 1 matchings, compared to the k used by the optimum, then we match the competitive ratios of Connection Caching, which are independent of n. Interestingly, we also show that even a single extra matching for the algorithm allows it to obtain substantially better bounds.
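The cache model itself can be sketched in a few lines. The class name and the naive first-fit placement policy below are ours; the paper's competitive online policies are not implemented here. Left and right vertex name spaces are assumed disjoint:

```python
class MatchingCache:
    """Toy model of a cache of k matchings over a bipartite server set.
    Caching edge (u, v) in a matching evicts any edge currently using
    u or v in that matching, i.e. at most two evictions."""

    def __init__(self, k):
        self.k = k
        # One endpoint -> partner map per matching.
        self.matchings = [dict() for _ in range(k)]

    def contains(self, u, v):
        return any(m.get(u) == v for m in self.matchings)

    def cache(self, u, v):
        """Insert edge (u, v); return the list of evicted edges."""
        if self.contains(u, v):
            return []
        # First fit: prefer a matching where no eviction is needed.
        for m in self.matchings:
            if u not in m and v not in m:
                m[u], m[v] = v, u
                return []
        # Otherwise evict conflicting edges from matching 0 (arbitrary).
        m = self.matchings[0]
        evicted = []
        for x in (u, v):
            if x in m:
                y = m.pop(x)
                m.pop(y, None)
                evicted.append((x, y))
        m[u], m[v] = v, u
        return evicted
```

With k = 1 the structure is a single matching, so caching an edge that shares both endpoints with cached edges evicts two connections, as in the problem statement.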

    Codes for Load Balancing in TCAMs: Size Analysis

    Traffic splitting is a required functionality in networks, for example for load balancing over paths or servers, or for enforcing the source's access restrictions. The capacities of the servers (or the number of users with particular access restrictions) determine the sizes of the parts into which traffic should be split. A recent approach implements traffic splitting within the ternary content addressable memory (TCAM), which is often available in switches. It is important to reduce the amount of memory allocated for this task, since TCAMs are power consuming and are often also required for other tasks such as classification and routing. Recent works suggested algorithms to compute a smallest implementation of a given partition in the longest prefix match (LPM) model. In this paper we analyze properties of such minimal representations and prove lower and upper bounds on their size. The upper bounds hold for general TCAMs, and we also prove an additional lower bound for general TCAMs. We also analyze the expected size of a representation for uniformly random ordered partitions. We show that the expected representation size of a random partition is at least half the size for the worst-case partition, and is linear in the number of parts and in the logarithm of the size of the address space.

    The emergence of synaesthesia in a Neuronal Network Model via changes in perceptual sensitivity and plasticity

    Synaesthesia is an unusual perceptual experience in which an inducer stimulus triggers a percept in a different domain in addition to its own. To explore the conditions under which synaesthesia evolves, we studied a neuronal network model that represents two recurrently connected neural systems. The interactions in the network evolve according to learning rules that optimize sensory sensitivity. We demonstrate several scenarios, such as sensory deprivation or heightened plasticity, under which synaesthesia can evolve even though the inputs to the two systems are statistically independent and the initial cross-talk interactions are zero. Sensory deprivation is the known causal mechanism for acquired synaesthesia, and increased plasticity is implicated in developmental synaesthesia. The model unifies different causes of synaesthesia within a single theoretical framework and repositions synaesthesia not as a quirk of aberrant connectivity, but rather as a functional brain state that can emerge as a consequence of optimising sensory information processing.

    Optimal Weighted Load Balancing in TCAMs

    Traffic splitting is a required functionality in networks, for example for load balancing over multiple paths or among different servers. The capacities of the servers determine the partition by which traffic should be split. A recent approach implements traffic splitting within the ternary content addressable memory (TCAM), which is often available in switches. It is important to reduce the amount of memory allocated for this task, since TCAMs are power consuming and are often also required for other tasks such as classification and routing. Previous work showed how to compute the smallest prefix-matching TCAM necessary to implement a given partition exactly. In this paper we solve the more practical case, where at most n prefix-matching TCAM rules are available, restricting the ability to implement the desired partition exactly. We give simple and efficient algorithms to find n rules that generate a partition closest in L_∞ to the desired one. We do the same for a one-sided version of L_∞, which equals the maximum overload on a server, and for a relative version of it. We use our algorithms to evaluate how the expected error changes as a function of the number of rules, the number of servers, and the width of the TCAM.
    Comment: This is an extended version of a paper presented at ACM CoNEXT 202

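The three error measures named above are easy to state concretely. A small sketch (function names are ours, and the exact definition of the relative version is our assumption; the abstract only names it):

```python
def linf_error(desired, actual):
    """L_inf distance between two partitions, given as vectors of
    traffic fractions per server."""
    return max(abs(d - a) for d, a in zip(desired, actual))

def max_overload(desired, actual):
    """One-sided L_inf: the maximum overload on any server, i.e. how
    much traffic a server receives beyond its desired share."""
    return max(max(a - d, 0.0) for d, a in zip(desired, actual))

def relative_overload(desired, actual):
    """Relative version (our assumed definition): maximum overload on
    a server as a fraction of that server's desired share."""
    return max(max(a - d, 0.0) / d for d, a in zip(desired, actual))
```

Note that L_∞ penalizes underload and overload symmetrically, while the one-sided versions only charge for servers that receive too much traffic.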
    A simple network architecture to study the conditions for evolution of cross-talk interactions.

    <p>(A) The network contains four neurons, one input neuron and one output neuron in each modality. The feedforward connections are denoted by W<sub>11</sub> and W<sub>22</sub> and the recurrent cross-talk connections are denoted by K<sub>12</sub> (from neuron 2 to 1) and K<sub>21</sub> (from neuron 1 to 2). In this simple case, there are no internal recurrent connections within each modality, only between them. (B) The inputs were drawn from two independent Gaussian distributions with zero mean. We analysed the effect of the variances of the two Gaussian distributions on the evolution of cross-talk connections.</p>
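Assuming linear units (an assumption of this sketch, not stated in the caption), the steady state of the four-neuron network in panel A solves y1 = W11·x1 + K12·y2 and y2 = W22·x2 + K21·y1, a 2×2 linear system. The paper's learning rules are not implemented here:

```python
import numpy as np

def network_response(x1, x2, W11, W22, K12, K21):
    """Steady-state output of the two-modality network with linear
    units: solve (I - K) y = W x for y = (y1, y2), where K holds the
    cross-talk weights K12 (neuron 2 -> 1) and K21 (neuron 1 -> 2)."""
    A = np.array([[1.0, -K12],
                  [-K21, 1.0]])
    b = np.array([W11 * x1, W22 * x2])
    return np.linalg.solve(A, b)
```

With zero cross-talk the modalities respond independently; a nonzero K12 lets the second modality's activity leak into the first, the signature of synaesthetic coupling in this model.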

    A network model for studying the evolution of synaesthetic mappings.

    <p>A. Network architecture. The network is composed of two interacting modalities. Each modality receives a two-dimensional input characterized by an angle and a distance from the origin. This input is mapped into a high dimensional representation. There are recurrent connections among all the neurons in the output layer, namely within and between modalities. For clarity, only a few connections are shown. B. Feedforward connections and input distribution. The feedforward connections (red radial lines) are unit vectors with angles equally spaced from 0° to 360°. They are fixed throughout the learning. The input to each neuron is proportional to the projection of the input on the corresponding unit vector and has a cosine tuning around the corresponding angle, which represents its preferred feature. For clarity, the figure shows only a few lines, but in the numerical simulations we used 71 output neurons in each modality. The blue dots depict the input distribution to a single modality. The angles are uniformly distributed and the distance from the origin has a Gaussian distribution around a characteristic distance (0.1 in this example), which represents stimulus intensity.</p>
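The cosine-tuned feedforward drive described in panel B can be sketched directly: projecting a 2-D stimulus onto equally spaced unit vectors yields a response that falls off as the cosine of the angle from each neuron's preferred direction. The function name is ours; the default of 71 neurons follows the caption:

```python
import numpy as np

def feedforward_input(stimulus, n_neurons=71):
    """Feedforward drive to one modality: each output neuron's input
    is the projection of the 2-D stimulus onto its preferred unit
    vector, with preferred angles equally spaced over 360 degrees."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_neurons, endpoint=False)
    prefs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # unit vectors
    return prefs @ np.asarray(stimulus, dtype=float)
```

A stimulus of intensity 0.1 along angle 0° drives the 0°-preferring neuron maximally, at exactly the stimulus intensity.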