5 research outputs found

    Codes for Load Balancing in TCAMs: Size Analysis

    Traffic splitting is a required functionality in networks, for example for load balancing over paths or servers, or for enforcing access restrictions at the source. The capacities of the servers (or the numbers of users with particular access restrictions) determine the sizes of the parts into which traffic should be split. A recent approach implements traffic splitting within the ternary content addressable memory (TCAM), which is often available in switches. It is important to reduce the amount of memory allocated for this task, since TCAMs are power-consuming and are often also required for other tasks such as classification and routing. Recent works have suggested algorithms to compute a smallest implementation of a given partition in the longest prefix match (LPM) model. In this paper we analyze properties of such minimal representations and prove lower and upper bounds on their size. The upper bounds hold for general TCAMs, and we also prove an additional lower bound for general TCAMs. We also analyze the expected size of a representation for uniformly random ordered partitions. We show that the expected representation size of a random partition is at least half the size for the worst-case partition, and is linear in the number of parts and in the logarithm of the size of the address space.
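    As a concrete illustration of representing a partition with prefix rules, the following sketch greedily decomposes an ordered partition of a 2^w address space into disjoint binary prefixes, one TCAM rule per prefix. This is a hedged illustrative decomposition, not the optimal LPM algorithm analyzed in the paper, and all names are hypothetical.

    ```python
    # Illustrative sketch: cover each part of an ordered partition of the
    # address space {0, ..., 2**w - 1} with disjoint binary prefixes.
    # One rule per prefix; fewer/shorter rules mean less TCAM memory.

    def split_to_prefixes(sizes, w):
        """sizes: part sizes summing to 2**w, in order.
        Returns a list of (prefix_bitstring, part_index) rules."""
        assert sum(sizes) == 2 ** w
        rules = []
        addr = 0  # next uncovered address
        for part, size in enumerate(sizes):
            remaining = size
            while remaining:
                # largest power-of-two block that is aligned at addr
                # and fits in the remaining size of this part
                align = (addr & -addr) if addr else 2 ** w
                block = min(align, 1 << (remaining.bit_length() - 1))
                plen = w - block.bit_length() + 1  # prefix length
                if plen:
                    prefix = format(addr >> (w - plen), f"0{plen}b")
                else:
                    prefix = ""  # whole address space in one rule
                rules.append((prefix, part))
                addr += block
                remaining -= block
        return rules
    ```

    For example, splitting an 8-address space as [3, 5] yields four rules ("00", "010" for part 0; "011", "1" for part 1), illustrating why part sizes that are not powers of two inflate the rule count.
    
    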

    Baire and automata

    In his thesis, Baire defined functions of Baire class 1: a function f is of Baire class 1 if it is the pointwise limit of a sequence of continuous functions. Baire proved the following theorem: a function f is not of class 1 if and only if there exists a closed nonempty set F such that the restriction of f to F has no point of continuity. We prove the automaton version of this theorem: an ω-rational function is not of class 1 if and only if there exists a closed nonempty set F, recognized by a Büchi automaton, such that the restriction of f to F has no point of continuity. This gives us the opportunity for a discussion of Hausdorff's analysis of Δ⁰₂, ordinals, transfinite induction, and some applications in computer science.
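    The classical theorem and its automaton version from the abstract can be stated side by side; a hedged transcription in standard notation:

    ```latex
    % Baire's characterization (classical):
    f \notin \mathcal{B}_1
    \iff
    \exists F \subseteq X,\; F \text{ closed, nonempty, such that }
    f\restriction_F \text{ has no point of continuity.}

    % Automaton version (this paper), for an $\omega$-rational $f$:
    % the same equivalence, with $F$ additionally required to be
    % recognized by a B\"uchi automaton.
    ```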

    Development of Pixel Modules for the Belle II Detector

    The future Belle II experiment at the SuperKEKB accelerator will improve the current understanding of flavor physics by increasing the recorded data set fiftyfold, allowing high-precision measurements of CKM parameters and measurements of rare decays. The experiment will feature two layers of ultra-thin DEPFET pixel modules close to the interaction point to increase the vertex resolution while coping with an instantaneous luminosity 40 times higher than that of the previous Belle experiment. A DEPFET is a DEpleted P-channel Field Effect Transistor. Charge collected in the depleted bulk drifts into a potential minimum below the transistor gate and modulates the drain current, which is processed by two ASICs, the DCD and the DHP. In this thesis the prototype ASICs for Belle II were characterized, with an emphasis on the Drain Current Digitizer (DCD). To ensure data quality during the experiment runtime of Belle II, the radiation hardness and temperature stability of the DCD were investigated. With characterized ASICs, small and large Belle II prototype sensors were evaluated and used to investigate system performance. Sensor measurements were performed using a laser system with in-pixel resolution, radioactive sources, and high-energy electron beams at the DESY beam test facility.

    Development, Characterization and Operation of the DCDB, the Front-End Readout Chip for the Pixel Vertex Detector of the Future BELLE-II Experiment

    The BELLE-II detector is the upgrade of its predecessor BELLE at the KEK research centre in Tsukuba, Japan, which was successfully used in the past to find evidence for CP-violating decays. The upgraded SuperKEKB accelerator is specified to produce a luminosity of 8*10^35 cm^-2 s^-1. Consequently, the BELLE-II detector, and particularly the innermost pixel vertex detector (PXD), suffers from enormous occupancy due to background events. Coping with this harsh environment while providing the required physics performance results in tough specifications for the front-end readout electronics. The PXD is based on the DEPFET technology. DEPFET transistors combine particle detection and signal amplification within one device. The DCDB chip was developed to sample and digitize signals from these transistors while complying with the specifications of BELLE-II. The presented work illustrates the chip's features and describes its implementation process. The device is comprehensively characterized using an individually developed test environment, and the obtained results are presented. The DCDB's ability to serve as a readout device for particle physics applications is demonstrated by its successful operation within a DEPFET detector prototype system. Highlights are a decay-spectrum measurement using Cd-109 and the successful operation in a beam test experiment at CERN.

    On-the-Fly Algorithms and Sequential Machines

    Frougny has presented a method that generalizes various "on-the-fly" operations that have been proposed, mainly in connection with computer arithmetic. First, we shall trace the origin of this method to its source, which is the celebrated paper of Rabin and Scott that introduced the notion of nondeterminism and the power-set construction. Second, we shall show that an understanding of this origin may lead to great quantitative improvements in applications of the method. Finally, we shall show by a pathological example that the method as originally presented by Frougny may result in circuits that are larger, in terms of gates per step, by two exponentiations than those that are constructed as described in the present paper.
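    The Rabin–Scott power-set construction the abstract traces the method to can be sketched in a few lines; this is a generic textbook determinization with illustrative names, not the specific on-the-fly circuits discussed in the paper.

    ```python
    # Sketch of the Rabin-Scott power-set (subset) construction:
    # determinize an NFA by tracking the SET of states reachable
    # after each input symbol. The exponential blow-up of this
    # state set is the source of the size issues the paper studies.

    def determinize(alphabet, delta, start, accepting):
        """delta: dict (state, symbol) -> set of next states.
        Returns (dfa_states, dfa_delta, dfa_start, dfa_accepting),
        where DFA states are frozensets of NFA states."""
        dfa_start = frozenset([start])
        dfa_states = {dfa_start}
        dfa_delta = {}
        work = [dfa_start]
        while work:
            S = work.pop()
            for a in alphabet:
                # union of NFA moves from every state in S
                T = frozenset(q for s in S for q in delta.get((s, a), ()))
                dfa_delta[(S, a)] = T
                if T not in dfa_states:
                    dfa_states.add(T)
                    work.append(T)
        dfa_accepting = {S for S in dfa_states if S & accepting}
        return dfa_states, dfa_delta, dfa_start, dfa_accepting
    ```

    On an NFA for "words over {0,1} ending in 1" (two states), the construction reaches only two subsets, illustrating that the worst-case 2^n blow-up is not always attained, which is the kind of quantitative gap the paper exploits.
    
    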