
    Carbon-Intelligent Global Routing in Path-Aware Networks

    The growing energy consumption of Information and Communication Technology (ICT) has raised concerns about its environmental impact. However, the carbon efficiency of data transmission over the Internet has so far received little attention. This carbon efficiency can be enhanced effectively by sending traffic over carbon-efficient inter-domain paths, but challenges in estimating and disseminating the carbon intensity of inter-domain paths have prevented carbon-aware path selection from becoming a reality. In this paper, we take advantage of path-aware network architectures to overcome these challenges. In particular, we design CIRo, a system for forecasting the carbon intensity of inter-domain paths and disseminating these forecasts across the Internet. We implement a proof of concept for CIRo on the codebase of the SCION path-aware Internet architecture and test it on the SCIONLab global research testbed. Further, we demonstrate the potential of CIRo for reducing the carbon footprint of endpoints and end domains through large-scale simulations. We show that CIRo can reduce the carbon intensity of communications by at least 47% for half of the domain pairs and the carbon footprint of Internet usage by at least 50% for 87% of end domains.
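
    The core selection step can be pictured with a small sketch. The code below is a hypothetical illustration only, not CIRo's actual implementation (which runs on the SCION codebase): it assumes per-hop carbon-intensity forecasts, e.g. in gCO2e per GB, are already available at the endpoint, and simply ranks candidate inter-domain paths by their summed forecast intensity.

        # Hypothetical sketch of carbon-aware path selection at an endpoint.
        # All names and numbers are illustrative; they are not taken from CIRo.
        from typing import Dict, List

        def path_carbon_intensity(per_hop_intensity: List[float]) -> float:
            """Total forecast carbon intensity of a path, summed over its hops."""
            return sum(per_hop_intensity)

        def select_greenest_path(candidates: Dict[str, List[float]]) -> str:
            """Return the candidate path with the lowest total forecast intensity."""
            return min(candidates, key=lambda name: path_carbon_intensity(candidates[name]))

        if __name__ == "__main__":
            candidates = {
                "path_A": [12.0, 30.5, 8.2],   # hops traversing mostly low-carbon grids
                "path_B": [45.0, 60.1],        # fewer hops, but carbon-intensive ones
                "path_C": [20.0, 25.0, 22.0],
            }
            print(select_greenest_path(candidates))  # -> path_A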

    Incentivizing Stable Path Selection in Future Internet Architectures

    By delegating path control to end-hosts, future Internet architectures offer flexibility for path selection. However, there is a concern that the distributed routing decisions by end-hosts, in particular load-adaptive routing, can lead to oscillations if path selection is performed without coordination or accurate load information. Prior research has addressed this problem by devising path-selection policies that lead to stability. However, little is known about the viability of these policies in the Internet context, where selfish end-hosts can deviate from a prescribed policy if such a deviation is beneficial from their individual perspective. In order to achieve network stability in future Internet architectures, it is essential that end-hosts have an incentive to adopt a stability-oriented path-selection policy. In this work, we perform the first incentive analysis of the stability-inducing path-selection policies proposed in the literature. Building on a game-theoretic model of end-host path selection, we show that these policies are in fact incompatible with the self-interest of end-hosts, as it can be worthwhile for an end-host to pursue an oscillatory path-selection strategy instead. Therefore, stability in networks with selfish end-hosts must be enforced by incentive-compatible mechanisms. We present two such mechanisms and formally prove their incentive compatibility. Comment: 38th International Symposium on Computer Performance, Modeling, Measurements and Evaluation (PERFORMANCE 2020).
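
    The oscillation concern can be made concrete with a toy example that is not taken from the paper: if every end-host greedily switches to the path that appeared less loaded in the previous round, the load flips between paths instead of settling at an even split.

        # Toy model of uncoordinated load-adaptive path selection (illustrative only).
        def simulate(rounds: int = 6, hosts: int = 100) -> None:
            load = {"path_1": hosts, "path_2": 0}      # all hosts start on path_1
            for r in range(rounds):
                target = min(load, key=load.get)       # path that looked less loaded
                load = {p: 0 for p in load}
                load[target] = hosts                   # every host switches at once
                print(f"round {r}: {load}")

        if __name__ == "__main__":
            simulate()   # the load alternates 100/0 -> 0/100 -> 100/0, i.e. it oscillates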

    Mapping dominant runoff processes: an evaluation of different approaches using similarity measures and synthetic runoff simulations

    The identification of landscapes with similar hydrological behaviour is useful for runoff and flood predictions in small ungauged catchments. An established method for landscape classification is based on the concept of dominant runoff process (DRP). The various DRP-mapping approaches differ with respect to the time and data required for mapping. Manual approaches based on expert knowledge are reliable but time-consuming, whereas automatic GIS-based approaches are easier to implement but rely on simplifications which restrict their application range. To what extent these simplifications are applicable in other catchments is unclear. More information is also needed on how the different complexities of automatic DRP-mapping approaches affect hydrological simulations. In this paper, three automatic approaches were used to map two catchments on the Swiss Plateau. The resulting maps were compared to reference maps obtained with manual mapping: measures of agreement and association, a class comparison, and a deviation map were derived. The automatically derived DRP maps were used in synthetic runoff simulations with an adapted version of the PREVAH hydrological model, and the simulation results were compared with those from simulations using the reference maps. The DRP maps derived with the automatic approach with the highest complexity and data requirement were the most similar to the reference maps, while those derived with simplified approaches without original soil information differed significantly in terms of both the extent and the distribution of the DRPs. The runoff simulations derived from the simpler DRP maps were more uncertain due to inaccuracies in the input data and their coarse resolution, but problems were also linked with the use of topography as a proxy for the storage capacity of soils. The perception of the intensity of the DRP classes also seems to vary among the different authors, and a standardised definition of DRPs is still lacking. Furthermore, we argue that expert knowledge should be used not only for model building and constraining, but also in the landscape-classification phase.
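
    As a hypothetical illustration of the kind of agreement measures mentioned above (not the study's actual evaluation code), the overall agreement and Cohen's kappa between an automatically derived DRP map and a manually mapped reference can be computed cell by cell over the two categorical rasters:

        # Cell-by-cell agreement between two categorical DRP rasters (illustrative).
        import numpy as np

        def agreement_and_kappa(map_a: np.ndarray, map_b: np.ndarray):
            """Overall agreement and Cohen's kappa for two equally shaped class maps."""
            a, b = map_a.ravel(), map_b.ravel()
            p_o = np.mean(a == b)                      # observed agreement
            classes = np.union1d(a, b)
            p_e = sum(np.mean(a == c) * np.mean(b == c) for c in classes)  # chance agreement
            kappa = (p_o - p_e) / (1.0 - p_e) if p_e < 1.0 else 1.0
            return float(p_o), float(kappa)

        if __name__ == "__main__":
            reference = np.array([[1, 1, 2], [2, 3, 3]])   # manual (reference) DRP classes
            automatic = np.array([[1, 2, 2], [2, 3, 1]])   # automatically derived classes
            print(agreement_and_kappa(reference, automatic))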

    Towards an understanding of third-order galaxy-galaxy lensing

    Third-order galaxy-galaxy lensing (G3L) is a next-generation galaxy-galaxy lensing technique that either measures the excess shear about lens pairs or the excess shear-shear correlations about lenses. These statistics assess the three-point correlations between galaxy positions and the projected matter density. For future applications of these novel statistics, we aim at a more intuitive understanding of G3L to isolate the main features that can possibly be measured. We construct a toy model ("isolated lens model"; ILM) for the distribution of galaxies and associated matter to determine the quantities measured by the two G3L correlation functions and by traditional galaxy-galaxy lensing (GGL) in a simplified context. The ILM presumes single lens galaxies to be embedded inside arbitrary matter haloes that, however, are statistically independent ("isolated") of any other halo or lens position. In the ILM, the average mass-to-galaxy-number ratio of clusters of any size cannot change. GGL and galaxy clustering alone cannot distinguish an ILM from any more complex scenario. The lens-lens-shear correlator in combination with second-order statistics, however, enables us to detect deviations from an ILM. This can be quantified by a difference signal defined in the paper. We demonstrate with the ILM that this correlator picks up the excess matter distribution about galaxy pairs inside clusters. The lens-shear-shear correlator is sensitive to variations among matter haloes. In principle, it could be used to constrain the ellipticities of haloes, without the need for luminous tracers, or maybe even random halo substructure. [Abridged] Comment: 14 pages, 3 figures, 1 table, accepted by A&A; some "lens-shear-shear" were incorrectly written as "lens-lens-shear".
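
    Schematically, and with notation that is not taken from this paper, the two G3L correlators combine the projected galaxy number-density contrast with the gravitational shear as three-point functions:

        % Schematic forms only; the precise estimators and binning follow the G3L literature.
        \[
          \mathcal{G}_{\mathrm{lls}} \;\sim\; \bigl\langle \kappa_g(\boldsymbol{\theta}_1)\,\kappa_g(\boldsymbol{\theta}_2)\,\gamma(\boldsymbol{\theta}_3) \bigr\rangle ,
          \qquad
          \mathcal{G}_{\mathrm{lss}} \;\sim\; \bigl\langle \kappa_g(\boldsymbol{\theta}_1)\,\gamma(\boldsymbol{\theta}_2)\,\gamma(\boldsymbol{\theta}_3) \bigr\rangle ,
        \]

    where $\kappa_g$ denotes the galaxy number-density contrast (lens positions) and $\gamma$ the shear, so the first correlator measures the excess shear about lens pairs and the second the excess shear-shear correlation about single lenses.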

    Non-central chi estimation of multi-compartment models improves model selection by reducing overfitting

    Diffusion images are known to be corrupted by non-central chi (NCC)-distributed noise [1]. A number of image-denoising methods have been proposed that account for this particular noise distribution [1,2,3]. However, to the best of our knowledge, no study has assessed the influence of the noise model in the context of diffusion model estimation, as was suggested in [4]. In particular, multi-compartment models (MCMs) [5] are an appealing class of models for describing the white matter microstructure, but they require the optimal number of compartments to be known a priori. Estimating this number is no easy task, since more complex models will always fit the data better, a phenomenon known as over-fitting. However, MCM estimation in the literature is performed assuming Gaussian-distributed noise [5,6]. In this preliminary study, we aim to show that using the appropriate NCC distribution to model the noise significantly reduces over-fitting, which could help unravel model-selection issues and yield better model parameter estimates.
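
    For reference, the non-central chi likelihood commonly used for magnitude MR data acquired with N receiver channels has the following standard form (taken from the general MRI noise literature, not from this abstract):

        \[
          p(m \mid \eta, \sigma, N) \;=\;
          \frac{\eta}{\sigma^{2}} \left(\frac{m}{\eta}\right)^{N}
          \exp\!\left(-\frac{m^{2}+\eta^{2}}{2\sigma^{2}}\right)
          I_{N-1}\!\left(\frac{m\,\eta}{\sigma^{2}}\right),
        \]

    where $m$ is the measured magnitude, $\eta$ the noise-free signal, $\sigma$ the per-channel noise level and $I_{N-1}$ a modified Bessel function of the first kind; for $N = 1$ it reduces to the Rician distribution. Fitting multi-compartment models under this likelihood rather than a Gaussian one is essentially the comparison the study performs.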

    ALBUS: a Probabilistic Monitoring Algorithm to Counter Burst-Flood Attacks

    Modern DDoS defense systems rely on probabilistic monitoring algorithms to identify flows that exceed a volume threshold and should thus be penalized. Commonly, classic sketch algorithms are considered sufficiently accurate for use in DDoS defense. However, as we show in this paper, these algorithms achieve poor detection accuracy under burst-flood attacks, i.e., volumetric DDoS attacks composed of a swarm of medium-rate, sub-second traffic bursts. Under this challenging attack pattern, traditional sketch algorithms can only detect a high share of the attack bursts by incurring a large number of false positives. In this paper, we present ALBUS, a probabilistic monitoring algorithm that overcomes the inherent limitations of previous schemes: ALBUS is highly effective at detecting large bursts while reporting no legitimate flows, and therefore improves on prior work in both recall and precision. Besides improving accuracy, ALBUS scales to high traffic rates, which we demonstrate with an FPGA implementation, and is suitable for programmable switches, which we showcase with a P4 implementation. Comment: Accepted at the 42nd International Symposium on Reliable Distributed Systems (SRDS 2023).
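
    To make the detection target concrete, the toy code below (which is not the ALBUS algorithm and keeps exact per-flow state) flags a flow when its volume within any sub-second window exceeds a threshold; it is precisely this exact-state approach that does not scale to high traffic rates, which is the gap probabilistic monitors such as ALBUS address.

        # Illustrative sliding-window burst detector with exact per-flow state.
        # Window and threshold values are made up for the example.
        from collections import defaultdict, deque

        class SlidingWindowDetector:
            def __init__(self, window: float = 0.5, threshold: int = 100_000):
                self.window = window                  # seconds
                self.threshold = threshold            # bytes per window
                self.history = defaultdict(deque)     # flow_id -> deque of (timestamp, size)
                self.volume = defaultdict(int)        # flow_id -> bytes currently in window

            def observe(self, flow_id: str, timestamp: float, size: int) -> bool:
                """Record a packet; return True if the flow exceeds the threshold."""
                q = self.history[flow_id]
                q.append((timestamp, size))
                self.volume[flow_id] += size
                while q and q[0][0] < timestamp - self.window:   # expire old packets
                    _, old_size = q.popleft()
                    self.volume[flow_id] -= old_size
                return self.volume[flow_id] > self.threshold

        if __name__ == "__main__":
            det = SlidingWindowDetector()
            # A 0.2-second burst of 1500-byte packets from one flow trips the detector.
            flagged = any(det.observe("flow-42", t / 1000.0, 1500) for t in range(0, 200, 2))
            print(flagged)   # True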

    Brittleness index of machinable dental materials and its relation to the marginal chipping factor

    OBJECTIVES: The machinability of a material can be assessed by calculating its brittleness index (BI). Different materials with different BI values could produce restorations with varying marginal integrity. The degree of marginal chipping of a milled restoration can be estimated by calculating the marginal chipping factor (CF). The aim of this study was to investigate any possible correlation between the BI of machinable dental materials and the CF of the final restorations. METHODS: The CEREC™ system was used to mill a wide range of materials used with that system, namely Paradigm MZ100™ (3M/ESPE), Vita Mark II (VITA), ProCAD (Ivoclar-Vivadent) and IPS e.max CAD (Ivoclar-Vivadent). A Vickers hardness tester was used for the calculation of the BI, while the CF was calculated as the percentage of marginal chipping of crowns prepared with bevelled marginal angulations. RESULTS: Paradigm MZ100 had the lowest BI and CF, while IPS e.max CAD demonstrated the highest BI and CF. Vita Mark II and ProCAD had similar BI and CF values, lying between those of the other two materials. Statistical analysis showed a perfect positive correlation between BI and CF across all the materials. CONCLUSIONS: Both the BI and the CF can be regarded as indicators of a material's machinability. Within the limitations of this study, it was shown that as the BI increases so does the potential for marginal chipping, indicating that the BI of a material can be used as a predictor of the CF.
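
    For context, the brittleness index is commonly computed from the Vickers hardness and the fracture toughness, and the chipping factor used here is, as described above, a percentage of marginal chipping; the expressions below are the usual textbook forms, and the exact definitions used in the study are given in the paper:

        \[
          \mathrm{BI} \;=\; \frac{H_{V}}{K_{IC}} \quad [\mu\mathrm{m}^{-1/2}],
          \qquad
          \mathrm{CF} \;=\; \frac{\text{chipped margin length}}{\text{total margin length}} \times 100\%,
        \]

    where $H_{V}$ is the Vickers hardness and $K_{IC}$ the fracture toughness; a higher BI therefore indicates a harder but less tough, i.e. more brittle, material, consistent with the observed increase in marginal chipping.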

    The thawing dark energy dynamics: Can we detect it?

    We consider different classes of scalar field models, including quintessence and tachyon scalar fields, with a variety of generic potentials belonging to the thawing type. Assuming the scalar field is initially frozen at w = -1, we evolve the system until the present time. We focus on observational quantities such as the Hubble parameter and the luminosity distance, as well as quantities related to Baryon Acoustic Oscillation measurements. Our study shows that, with the present state of observations, one cannot distinguish among the various models, which in turn cannot be distinguished from a cosmological constant. This leads us to the conclusion that there is little chance of observing the dark energy metamorphosis in the near future. Comment: 7 pages, RevTeX style, 6 eps figures, replaced with revised version, some figures modified, minor changes, conclusions remain the same, accepted for publication in Physics Letters.
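
    For orientation, the quantities being evolved and confronted with data in the quintessence case are the standard ones (the tachyon field has an analogous equation of state):

        \[
          w_{\phi} \;=\; \frac{\dot{\phi}^{2}/2 - V(\phi)}{\dot{\phi}^{2}/2 + V(\phi)},
          \qquad
          d_{L}(z) \;=\; (1+z)\int_{0}^{z}\frac{c\,dz'}{H(z')} \quad \text{(spatially flat case)},
        \]

    so a field initially frozen at w = -1 by Hubble friction ("thawing") departs from the cosmological-constant value only once it starts to roll, which is why the predicted H(z) and d_L(z) remain close to those of a cosmological constant.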

    Thawing versus Tracker Behaviour: Observational Evidence

    Currently there is a variety of scalar field models to explain the late-time acceleration of the Universe. This includes the standard canonical and non-canonical scalar field models, together with the recently proposed Galileon scalar field models. All these scalar field models can be divided into two broad categories, namely the thawing and the tracker class. In this work we investigate the evidence for these models with the presently available observational data using a Bayesian approach. We use the Generalized Chaplygin Gas (GCG) parametrization for the dark energy equation of state (EoS), as it gives rise to both thawing and tracking behaviours for different values of its parameters. Analysis of the observational data does not give any clear evidence for either thawing or tracking behaviour within the context of background cosmology. However, if we consider the evolution of inhomogeneities and analyse the data in this context, then there is significant evidence in favour of thawing behaviour. Comment: 6 pages, three eps figures, new material added, new references added, conclusion changed, accepted for publication in MNRAS.
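
    The GCG parametrization referred to here follows from the Chaplygin-gas equation of state; in a common form (parameter names may differ from those in the paper),

        \[
          p \;=\; -\frac{A}{\rho^{\alpha}}
          \;\;\Longrightarrow\;\;
          w(a) \;=\; -\frac{A_{s}}{A_{s} + (1 - A_{s})\,a^{-3(1+\alpha)}},
        \]

    where $A_{s}$ and $\alpha$ are constants and $a$ is the scale factor. Depending on the parameter range, $w$ either starts frozen near $-1$ and later departs from it (thawing-like) or approaches $-1$ from above (tracker-like), which is what makes a single parametrization useful for comparing the two classes against the data.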

    The Weak Energy Condition and the Expansion History of the Universe

    We examine flat models containing a dark matter component and an arbitrary dark energy component, subject only to the constraint that the dark energy satisfies the weak energy condition. We determine the constraints that these conditions place on the evolution of the Hubble parameter with redshift, H(z), and on the scaling of the coordinate distance with redshift, r(z). Observational constraints on H(z) are used to derive an upper bound on the current matter density. We demonstrate how the weak energy condition constrains fitting functions for r(z). Comment: 5 pages, 3 figures, references and discussion added.
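
    The logic behind the bound can be sketched as follows (a standard argument; the detailed constraints are derived in the paper). The weak energy condition on the dark energy component,

        \[
          \rho_{\mathrm{DE}} \ge 0, \qquad \rho_{\mathrm{DE}} + p_{\mathrm{DE}} \ge 0
          \;\;\Longrightarrow\;\;
          \dot{\rho}_{\mathrm{DE}} \;=\; -3H\,(\rho_{\mathrm{DE}} + p_{\mathrm{DE}}) \;\le\; 0,
        \]

    implies that the dark energy density can only decrease as the Universe expands, i.e. $\rho_{\mathrm{DE}}(z) \ge \rho_{\mathrm{DE}}(0)$ for $z > 0$. In a flat universe this gives

        \[
          \frac{H^{2}(z)}{H_{0}^{2}} \;\ge\; \Omega_{m}(1+z)^{3} + (1 - \Omega_{m}),
        \]

    so an observational upper limit on H(z) at some redshift translates into an upper bound on the present matter density.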