    An inventory of undiscovered Canadian mineral resources

    Unit regional value (URV) and unit regional weight are area-standardized measures of the expected value and quantity, respectively, of the mineral resources of a region. Estimation and manipulation of the URV statistic form the basis of an approach to mineral resource evaluation. Estimates of the kind and value of exploitable mineral resources yet to be discovered in the provinces of Canada are used as an illustration of the procedure. The URV statistic is set within a previously developed model wherein geology, as measured by point counting geologic maps, is related to the historical record of mineral resource production of well-developed regions of the world, such as the 50 states of the U.S.A.; these may be considered the training set. The Canadian provinces are related to this training set using geological information obtained in the same way from geologic maps of the provinces. The desired predictions of yet-to-be-discovered mineral resources in the Canadian provinces follow as a consequence. The implicit assumption is that regions of similar geology, if equally well developed, will produce similar weights and values of mineral resources.
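
    The URV arithmetic itself is straightforward. A minimal sketch, with made-up figures and hypothetical function names (the paper's regression of URV on point-counted map-unit proportions is not reproduced here), of how an analog region's URV turns into a prediction for a target region:

    # Minimal URV sketch: hypothetical figures and function names; the paper's
    # actual geology-based regression model is not reproduced here.

    def unit_regional_value(total_value, area_km2):
        """URV: area-standardized expected value of a region's mineral resources."""
        return total_value / area_km2

    def predict_undiscovered_value(analog_urv, target_area_km2, produced_value):
        """Expected undiscovered value = URV of a geologically similar,
        well-developed analog region * target area - value already produced."""
        return analog_urv * target_area_km2 - produced_value

    # Made-up example figures (USD, km^2):
    urv = unit_regional_value(total_value=5.0e10, area_km2=2.5e5)
    left = predict_undiscovered_value(urv, target_area_km2=6.0e5, produced_value=4.0e10)
    print(f"analog URV = {urv:,.0f} USD/km^2; predicted undiscovered = {left:,.0f} USD")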

    Critical load and congestion instabilities in scale-free networks

    We study the tolerance to congestion failures in communication networks with scale-free topology. The traffic load carried by each damaged element in the network must be partly or totally redistributed among the remaining elements. Overloaded elements might fail in turn, triggering failure cascades able to isolate large parts of the network. We find a critical traffic load above which the probability of massive traffic congestions destroying the network's communication capabilities is finite.
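
    A minimal sketch of the redistribution mechanism described above, on a synthetic scale-free graph; the load, capacity, and redistribution rule below are illustrative choices, not the paper's exact traffic model:

    # Load-redistribution cascade on a scale-free graph. Parameters are
    # illustrative: load grows with degree, capacity is load plus 20% headroom.
    import networkx as nx

    def cascade(G, load, capacity, seed):
        """Fail `seed`; split each failed node's load among its surviving
        neighbors; any neighbor pushed over capacity fails in turn."""
        failed, frontier = {seed}, [seed]
        while frontier:
            nxt = []
            for u in frontier:
                alive = [v for v in G[u] if v not in failed]
                for v in alive:
                    load[v] += load[u] / len(alive)
                    if load[v] > capacity[v]:
                        failed.add(v)
                        nxt.append(v)
            frontier = nxt
        return failed

    G = nx.barabasi_albert_graph(1000, 2, seed=42)
    load = {v: float(G.degree(v)) for v in G}
    capacity = {v: 1.2 * load[v] for v in G}
    hub = max(G, key=G.degree)  # trigger: fail the most connected node
    print(f"cascade from the hub kills {len(cascade(G, load, capacity, hub))} nodes")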

    Understanding the internet topology evolution dynamics

    The internet structure is extremely complex. The Positive-Feedback Preference (PFP) model is a recently introduced internet topology generator. The model uses two generic algorithms to replicate the evolution dynamics observed in historical internet data. The phenomenological model was originally designed to match only two topology properties of the internet, i.e. the rich-club connectivity and the exact form of the degree distribution. However, numerical evaluation has shown that the PFP model accurately reproduces a large set of other nontrivial characteristics as well. This paper aims to investigate why and how this generative model captures so many diverse properties of the internet. Based on comprehensive simulation results, the paper presents a detailed analysis of the exact origin of each of the topology properties produced by the model. This work reveals how network evolution mechanisms control the obtained topology properties, and it also provides insights into correlations between various structural characteristics of complex networks.
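
    A simplified sketch of the positive-feedback kernel that gives the model its name: new nodes attach with probability proportional to k^(1 + delta*log10(k)), with delta = 0.048 as reported for the model. The full model's interactive-growth rules (existing hosts also acquiring new peer links) are omitted here, so this is an illustration of the kernel only:

    # Simplified PFP-style growth using only the positive-feedback preference
    # kernel p(i) ~ k_i^(1 + delta*log10(k_i)); interactive growth is omitted.
    import math, random

    def pfp_graph(n, m=2, delta=0.048, seed=0):
        rng = random.Random(seed)
        deg = {0: 1, 1: 1}
        edges = [(0, 1)]
        for new in range(2, n):
            w = {i: k ** (1 + delta * math.log10(k)) for i, k in deg.items()}
            targets = set()
            while len(targets) < min(m, len(deg)):
                targets.add(rng.choices(list(w), weights=list(w.values()))[0])
            deg[new] = 0
            for t in targets:
                edges.append((new, t))
                deg[new] += 1
                deg[t] += 1
        return edges, deg

    edges, deg = pfp_graph(2000)
    print("max degree:", max(deg.values()))  # heavy-tailed: a few rich hubs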

    Revisiting Old Friends: Is CoDel Really Achieving What RED Cannot?

    We use ns-2 simulations to compare RED's gentle mode to CoDel in terms of their ability to reduce latency for various TCP variants. We use a common dumbbell topology with Pareto background traffic, and measure the packet delays and the transmission time of a 10 MB FTP transfer. In our scenarios, we find that CoDel reduces the latency by 87%, but RED still manages to reduce it by 75%. However, the use of CoDel results in a transmission time 42% longer than when using RED. In light of its maturity, we therefore argue that RED could be considered a good candidate to tackle Bufferbloat.
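
    For reference, a sketch of what gentle mode changes in RED's drop curve (the thresholds and max_p below are illustrative, not the paper's simulation settings): instead of jumping to certain drop once the average queue exceeds max_th, the drop probability ramps linearly from max_p to 1 between max_th and 2*max_th:

    # RED drop probability as a function of average queue size, with gentle
    # mode enabled. Thresholds and max_p are illustrative values.

    def red_gentle_drop_prob(avg, min_th=5.0, max_th=15.0, max_p=0.1):
        if avg < min_th:
            return 0.0
        if avg < max_th:                     # classic RED ramp
            return max_p * (avg - min_th) / (max_th - min_th)
        if avg < 2 * max_th:                 # gentle mode ramp
            return max_p + (1 - max_p) * (avg - max_th) / max_th
        return 1.0

    for avg in (3, 10, 20, 35):
        print(f"avg queue {avg:>2} pkts -> drop prob {red_gentle_drop_prob(avg):.3f}")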

    Steering hyper-giants' traffic at scale

    Large content providers, known as hyper-giants, are responsible for sending the majority of content traffic to consumers. These hyper-giants operate highly distributed infrastructures to cope with the ever-increasing demand for online content. To achieve commercial-grade performance of Web applications, enhanced end-user experience, improved reliability, and scaled network capacity, hyper-giants are increasingly interconnecting with eyeball networks at multiple locations. This poses new challenges for both (1) the eyeball networks, which have to perform complex inbound traffic engineering, and (2) the hyper-giants, which have to map end-user requests to appropriate servers. We report on our multi-year experience in designing, building, rolling out, and operating the first-ever large-scale system, the Flow Director, which enables automated cooperation between one of the largest eyeball networks and a leading hyper-giant. We use empirical data collected at the eyeball network to evaluate its impact over two years of operation. We find very high compliance of the hyper-giant with the Flow Director’s recommendations, resulting in (1) close-to-optimal user-server mapping, and (2) a 15% reduction of the hyper-giant’s traffic overhead on the ISP’s long-haul links, i.e., benefits for both parties and end-users alike.
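
    The underlying mapping task can be illustrated with a deliberately simplified sketch: for each user prefix, pick the interconnection point that minimizes long-haul cost subject to capacity. Everything below (names, figures, the greedy rule) is hypothetical and is not the Flow Director's actual algorithm or API:

    # Hypothetical ingress-selection sketch: map each user prefix to the
    # interconnect with the lowest long-haul cost that still has capacity.
    # All names and numbers are illustrative.

    def recommend_ingress(demands, costs, capacity):
        """demands: {prefix: Mbps}; costs: {(prefix, ingress): cost};
        capacity: {ingress: Mbps}. Greedy: place largest demands first."""
        used = {i: 0.0 for i in capacity}
        plan = {}
        for prefix, mbps in sorted(demands.items(), key=lambda kv: -kv[1]):
            options = [i for i in capacity if used[i] + mbps <= capacity[i]]
            best = min(options, key=lambda i: costs[(prefix, i)])
            plan[prefix] = best
            used[best] += mbps
        return plan

    demands = {"192.0.2.0/24": 800.0, "198.51.100.0/24": 300.0}
    capacity = {"ixp-east": 1000.0, "ixp-west": 1000.0}
    costs = {("192.0.2.0/24", "ixp-east"): 1, ("192.0.2.0/24", "ixp-west"): 4,
             ("198.51.100.0/24", "ixp-east"): 3, ("198.51.100.0/24", "ixp-west"): 1}
    print(recommend_ingress(demands, costs, capacity))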

    Antiinflammatory Therapy with Canakinumab for Atherosclerotic Disease

    Background: Experimental and clinical data suggest that reducing inflammation without affecting lipid levels may reduce the risk of cardiovascular disease. Yet, the inflammatory hypothesis of atherothrombosis has remained unproved. Methods: We conducted a randomized, double-blind trial of canakinumab, a therapeutic monoclonal antibody targeting interleukin-1β, involving 10,061 patients with previous myocardial infarction and a high-sensitivity C-reactive protein level of 2 mg or more per liter. The trial compared three doses of canakinumab (50 mg, 150 mg, and 300 mg, administered subcutaneously every 3 months) with placebo. The primary efficacy end point was nonfatal myocardial infarction, nonfatal stroke, or cardiovascular death. Results: At 48 months, the median reduction from baseline in the high-sensitivity C-reactive protein level was 26 percentage points greater in the group that received the 50-mg dose of canakinumab, 37 percentage points greater in the 150-mg group, and 41 percentage points greater in the 300-mg group than in the placebo group. Canakinumab did not reduce lipid levels from baseline. At a median follow-up of 3.7 years, the incidence rate for the primary end point was 4.50 events per 100 person-years in the placebo group, 4.11 events per 100 person-years in the 50-mg group, 3.86 events per 100 person-years in the 150-mg group, and 3.90 events per 100 person-years in the 300-mg group. The hazard ratios as compared with placebo were as follows: in the 50-mg group, 0.93 (95% confidence interval [CI], 0.80 to 1.07; P = 0.30); in the 150-mg group, 0.85 (95% CI, 0.74 to 0.98; P = 0.021); and in the 300-mg group, 0.86 (95% CI, 0.75 to 0.99; P = 0.031). The 150-mg dose, but not the other doses, met the prespecified multiplicity-adjusted threshold for statistical significance for the primary end point and the secondary end point that additionally included hospitalization for unstable angina that led to urgent revascularization (hazard ratio vs. placebo, 0.83; 95% CI, 0.73 to 0.95; P = 0.005). Canakinumab was associated with a higher incidence of fatal infection than was placebo. There was no significant difference in all-cause mortality (hazard ratio for all canakinumab doses vs. placebo, 0.94; 95% CI, 0.83 to 1.06; P = 0.31). Conclusions: Antiinflammatory therapy targeting the interleukin-1β innate immunity pathway with canakinumab at a dose of 150 mg every 3 months led to a significantly lower rate of recurrent cardiovascular events than placebo, independent of lipid-level lowering. (Funded by Novartis; CANTOS ClinicalTrials.gov number, NCT01327846.)
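
    As a quick plausibility check on the numbers above, the crude rate ratios implied by the quoted incidence rates can be compared with the reported hazard ratios; the two need not coincide exactly, since hazard ratios come from a time-to-event model, but they should be close:

    # Crude rate ratios from the quoted incidence rates (events per 100
    # person-years) vs. the reported hazard ratios.
    placebo = 4.50
    for dose, rate, hr in ((50, 4.11, 0.93), (150, 3.86, 0.85), (300, 3.90, 0.86)):
        print(f"{dose} mg: crude rate ratio {rate / placebo:.2f} vs reported HR {hr:.2f}")
    # -> 0.91 vs 0.93, 0.86 vs 0.85, 0.87 vs 0.86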

    High Availability in the Future Internet

    With the evolution of the Internet, a huge number of real-time applications, like Voice over IP, have started to use IP as their primary transmission medium. These services require high availability, which is not amongst the main features of today’s heterogeneous Internet, where failures occur frequently. Unfortunately, the primary fast-resilience scheme implemented in IP routers, Loop-Free Alternates (LFA), usually does not provide full protection against failures. Consequently, there has been a growing interest in LFA-based network optimization methods, aimed at tuning some aspect of the underlying IP topology to maximize the ratio of failure cases covered by LFA. The main goal of this chapter is to give a comprehensive overview of LFA and survey the related LFA network optimization methods, pointing out that these optimization tools can turn LFA into an easy-to-deploy yet highly effective IP fast-resilience scheme.
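
    The basic LFA condition from RFC 5286 can be checked mechanically: a neighbor N of source S is a loop-free alternate toward destination D if dist(N, D) < dist(N, S) + dist(S, D), i.e. N's shortest path to D does not loop back through S. A small sketch (the ring topology and the primary next-hop rule are illustrative) computes the fraction of protected source-destination pairs, which is the coverage ratio the optimization methods above try to maximize:

    # LFA coverage: fraction of (source, destination) pairs that have a
    # loop-free alternate neighbor besides the primary next hop (RFC 5286).
    import networkx as nx

    def lfa_coverage(G):
        dist = dict(nx.all_pairs_dijkstra_path_length(G))
        protected = total = 0
        for s in G:
            for d in G:
                if s == d:
                    continue
                total += 1
                # primary next hop: neighbor on a shortest path from s to d
                primary = min(G[s], key=lambda n: G[s][n].get("weight", 1) + dist[n][d])
                if any(dist[n][d] < dist[n][s] + dist[s][d]
                       for n in G[s] if n != primary):
                    protected += 1
        return protected / total

    G = nx.cycle_graph(6)  # even ring: only antipodal destinations have an LFA
    print(f"LFA coverage: {lfa_coverage(G):.0%}")  # -> 20%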