
    Telemetry for Next-Generation Networks

    Software-defined networking enables tight integration between packet-processing hardware and centralized controllers, making deep network insight essential for informed decision-making. Modern network telemetry aims to provide per-packet insight into networks, enabling significant optimizations and security enhancements. However, the widening gap between network speeds and the stagnating performance of CPUs poses significant challenges to these efforts. Attempts to sidestep this slowdown by deploying monitoring functionality directly into the data plane, which is capable of line-rate processing, are hindered by the hardware's resource limitations and by the data-collection capacity of analysis servers. This dissertation introduces a dual strategy to enhance centralized network insight. First, it improves probabilistic network-monitoring data structures, achieving fault-tolerant monitoring in heterogeneous environments with significantly higher accuracy and lower resource demands. Second, it redesigns the interface between networking hardware and analysis servers to substantially lower telemetry collection and aggregation costs, enabling insights at unprecedented granularities. Together, these advances mark a significant stride towards realizing the full potential of fine-grained network monitoring, offering a scalable and efficient answer to the challenges brought by the rapid evolution of network technologies.
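A classic example of the probabilistic monitoring data structures the abstract refers to is the count-min sketch, which estimates per-flow packet counts in a small fixed memory footprint. The sketch below is purely illustrative of that family of structures, assuming nothing about the dissertation's actual designs; all names are hypothetical.

```python
import hashlib

class CountMinSketch:
    """Minimal count-min sketch: depth independent hash rows over a
    fixed-width counter table. Estimates never undercount; collisions
    can only inflate them."""

    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, row, key):
        # Derive a per-row hash by prefixing the row number.
        h = hashlib.blake2b((str(row) + key).encode(), digest_size=8).digest()
        return int.from_bytes(h, "big") % self.width

    def add(self, key, count=1):
        for row in range(self.depth):
            self.table[row][self._index(row, key)] += count

    def estimate(self, key):
        # Take the minimum across rows: the least-inflated counter.
        return min(self.table[row][self._index(row, key)]
                   for row in range(self.depth))

sketch = CountMinSketch()
for _ in range(42):
    sketch.add("10.0.0.1->10.0.0.2")
print(sketch.estimate("10.0.0.1->10.0.0.2"))  # 42 (exact here: no colliding keys)
```

The fixed memory footprint (width x depth counters, regardless of how many distinct flows appear) is what makes such structures attractive for resource-constrained data planes.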

    Lightweight Acquisition and Ranging of Flows in the Data Plane

    As networks grow more complex, the ability to track almost all flows becomes paramount, since only then can we detect transient events that affect just a subset of the traffic. Solutions for flow monitoring exist, but producing accurate estimates for every tuple is increasingly difficult given the memory constraints of commodity programmable switches. Indeed, as networks grow in size, more flows must be tracked, increasing the number of tuples to record. At the same time, end-host virtualization requires more specific flowIDs, enlarging the memory cost of every single entry. Finally, the available memory has to be shared with other important functions as well (e.g., load balancing, forwarding, ACLs). To address these issues, we present FlowLiDAR (Flow Lightweight Detection and Ranging), a new solution that can track almost all flows in the network while requiring only a modest amount of data-plane memory that is independent of the size of flowIDs. We implemented the scheme in P4, tested it on real ISP traffic, and compared it against four state-of-the-art solutions: FlowRadar, NZE, PR-Sketch, and Elastic Sketch. While those can reconstruct at most 60% of the tuples, FlowLiDAR tracks 98.7% of them with the same amount of memory.
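One general way to decouple per-entry memory from flowID length, as the abstract highlights, is to store a fixed-size fingerprint of the (possibly long, virtualization-specific) flowID rather than the ID itself. The sketch below illustrates only that general idea; it is not FlowLiDAR's actual encoding, which the abstract does not detail, and all names are hypothetical.

```python
import zlib

SLOTS = 4096
counters = [0] * SLOTS        # per-slot packet counters
fingerprints = [0] * SLOTS    # fixed 32-bit fingerprint per slot,
                              # regardless of how long the flowID is

def record(flow_id: str) -> None:
    fp = zlib.crc32(flow_id.encode())  # compress any-length ID to 32 bits
    slot = fp % SLOTS
    fingerprints[slot] = fp            # constant memory cost per entry
    counters[slot] += 1

# A long virtualized flowID: tenant + VM + 5-tuple (illustrative).
long_virtual_id = "vm-1234/tenant-9/10.0.0.1:443->10.0.0.2:51515/tcp"
for _ in range(10):
    record(long_virtual_id)

print(counters[zlib.crc32(long_virtual_id.encode()) % SLOTS])  # 10
```

The table costs 4096 x (32-bit fingerprint + counter) no matter how large the flowIDs grow; the trade-off is that colliding flows share a slot, which real schemes mitigate with multiple hashes or decodable encodings.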

    Direct Telemetry Access

    Fine-grained network telemetry is becoming a modern datacenter standard and is the basis of essential applications such as congestion control, load balancing, and advanced troubleshooting. As network size increases and telemetry gets more fine-grained, the amount of data that must be reported from switches to collectors to enable a network-wide view grows tremendously. As a consequence, it is increasingly hard to scale data-collection systems. We introduce Direct Telemetry Access (DTA), a solution optimized for aggregating and moving hundreds of millions of reports per second from switches into queryable data structures in collectors' memory. DTA is lightweight and greatly reduces overheads at collectors. It is built on top of RDMA, and we propose novel, expressive reporting primitives that allow easy integration with existing state-of-the-art telemetry mechanisms such as INT or Marple. We show that DTA significantly improves telemetry collection rates. For example, when used with INT, it can collect and aggregate over 400M reports per second on a single server, improving over the Atomic MultiLog by up to 16x.
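The collection model the abstract describes can be pictured as switches emitting many small keyed reports that land directly in a queryable in-memory structure at the collector. In real DTA this happens via RDMA writes from the switch; in the sketch below an in-process dict stands in for collector memory, and the primitive name is illustrative, not DTA's actual API.

```python
from collections import defaultdict

# Stand-in for the collector's queryable memory region.
collector_memory = defaultdict(list)

def report_append(key: str, value) -> None:
    """Append-style reporting primitive (hypothetical name): keep a
    bounded history of the most recent reports per key, so memory per
    key stays constant under a firehose of reports."""
    log = collector_memory[key]
    log.append(value)
    if len(log) > 8:          # bounded per-key history
        log.pop(0)

# e.g. INT-style per-hop latency reports for one flow, in nanoseconds.
for hop_latency_ns in (1200, 950, 1310):
    report_append("flow:10.0.0.1->10.0.0.2", hop_latency_ns)

print(collector_memory["flow:10.0.0.1->10.0.0.2"])  # [1200, 950, 1310]
```

The point of doing this aggregation on the write path, rather than parsing raw report packets at the CPU, is exactly the collector-overhead reduction the abstract claims.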

    Survival, Reproduction and Calcification of Three Benthic Foraminiferal Species in Response to Experimentally Induced Hypoxia

    An experiment was conducted to test the survival rates, growth (calcification), and reproductive capacities of three benthic foraminiferal species (Ammonia tepida, Melonis barleeanus and Bulimina marginata) under strongly oxygen-depleted conditions alternating with short periods of anoxia. Protocols were established to (1) follow oxygen concentrations in the aquaria (continuously recorded with microsensors), (2) distinguish live foraminifera (fluorogenic probe), and (3) determine foraminiferal growth (calcein-marked shells and automated measurement of shell size). Our results show very high survival rates and growth for A. tepida and M. barleeanus in all experimental conditions, suggesting that their survival and growth are not negatively impacted by hypoxia. Unfortunately, no reproduction was observed for these species, so we cannot draw firm conclusions about their ability to reproduce under hypoxic/anoxic conditions. The survival rates of Bulimina marginata were much lower than for the other two species. In the oxic treatments, the presence of juveniles is indicative of reproductive events, which can explain an important part of the mortality. The absence of juveniles in the hypoxic/anoxic treatments could indicate that these conditions inhibit reproduction. Alternatively, the perceived absence of juveniles could be due to juveniles resulting from reproduction (causing mortality rates similar to those in the oxic treatments) being unable to calcify, remaining at a propagule stage. Additional experiments are needed to distinguish between these two options.

    How simple can a model of an empty viral capsid be? Charge distributions in viral capsids

    We investigate and quantify salient features of the charge distributions on viral capsids. Our analysis combines the experimentally determined capsid geometry with simple models for the ionization of amino acids, yielding a detailed description of the spatial distribution of positive and negative charge across the capsid wall. The obtained data is processed to extract the mean radii of the distributions, surface charge densities, and dipole moment densities. The results are evaluated and examined in light of previously proposed models of capsid charge distributions, which are shown to be of somewhat limited value when applied to real viruses. Comment: 10 pages, 10 figures; accepted for publication in Journal of Biological Physics.
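The quantities the abstract extracts reduce, at their simplest, to sums over point charges: total charge and the dipole moment p = sum_i q_i r_i. The toy computation below uses a handful of made-up charges standing in for ionized residues on the inner and outer capsid wall; the actual analysis uses experimentally determined capsid structures, and these numbers are purely illustrative.

```python
import math

# Hypothetical point charges (q_i, r_i) in elementary charges and nanometers:
# two positive charges near an inner radius of 10, two negative near an outer
# radius of 12, mimicking the inner/outer wall separation discussed in the text.
charges = [
    (+1.0, (10.0, 0.0, 0.0)),   # e.g. an ionized lysine, inner wall
    (+1.0, (0.0, 10.0, 0.0)),
    (-1.0, (12.0, 0.0, 0.0)),   # e.g. an ionized glutamate, outer wall
    (-1.0, (0.0, 12.0, 0.0)),
]

total_charge = sum(q for q, _ in charges)
# Dipole moment: component-wise sum of q_i * r_i.
dipole = [sum(q * r[k] for q, r in charges) for k in range(3)]
dipole_magnitude = math.hypot(*dipole)

print(total_charge)   # 0.0 (net-neutral, yet a nonzero dipole remains)
print(dipole)         # [-2.0, -2.0, 0.0]
```

Note that a net-neutral shell can still carry a substantial dipole moment when positive and negative charge sit at different radii, which is why the paper tracks mean radii and dipole moment densities separately rather than net charge alone.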

    Design and tests of high sensitivity NTD Ge thermometers for the Planck-High Frequency Instrument

    The High Frequency Instrument of Planck requires high-sensitivity semiconductor thermometers operating at low temperature to monitor the temperature of the bolometer plate. We have modeled such thermometers using a semi-analytical approach for Anderson insulators, taking into account both the electric-field and the electron/phonon decoupling effects. The optimized design uses convenient NTD Ge material and has larger dimensions than the initial design. The first measurements of these optimized thermometers showed a significant thermal decoupling effect, due to the Kapitza resistance between the thermometer and its mechanical support. Nevertheless, a sensitivity of about 8 nK·Hz^(−0.5), not far from the predicted value, was obtained. The noise spectrum of the thermometer was flat down to 1 Hz and dominated at lower frequencies by thermal fluctuations.