
    Hosting Industry Centralization and Consolidation

    There have been growing concerns about the concentration and centralization of Internet infrastructure. In this work, we scrutinize the hosting industry on the Internet using active measurements covering 19 Top-Level Domains (TLDs). We show that the market is heavily concentrated: one third of the domains are hosted by only 5 hosting providers, all US-based companies. For the country-code TLDs (ccTLDs), however, hosting is primarily done by local, national hosting providers rather than by the large American cloud and content providers. We show how shared languages (and borders) shape the hosting market -- German hosting companies have a notable presence in the Austrian and Swiss markets, given that all three countries share German as an official language. While hosting concentration has been relatively high and stable over the past four years, we see that American hosting companies have been continuously increasing their presence in the market for high-traffic, popular domains within ccTLDs -- with the notable exception of Russia.

    Comment: to appear in IEEE/IFIP Network Operations and Management Symposium, https://noms2022.ieee-noms.org
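
    The paper does not reproduce its analysis code; as a minimal sketch of how such concentration figures could be computed from measurement results, the snippet below derives the top-5 provider share and the Herfindahl-Hirschman Index from a domain-to-provider mapping. All names and data are hypothetical, not taken from the study.

```python
from collections import Counter

# Hypothetical domain -> hosting provider mapping, as might be derived
# from active DNS/HTTP measurements like those described in the paper.
domain_to_provider = {
    "example.nl": "TransIP",
    "example.de": "Hetzner",
    "shop.at": "Hetzner",
    "news.ch": "Infomaniak",
    "cdn-site.com": "Cloudflare",
    "bigstore.com": "Amazon",
}

def top_k_share(mapping, k=5):
    """Fraction of domains hosted by the k largest providers."""
    counts = Counter(mapping.values())
    top = counts.most_common(k)
    return sum(n for _, n in top) / len(mapping)

def hhi(mapping):
    """Herfindahl-Hirschman Index: sum of squared market shares, in (0, 1]."""
    counts = Counter(mapping.values())
    total = len(mapping)
    return sum((n / total) ** 2 for n in counts.values())

print(f"top-5 share: {top_k_share(domain_to_provider):.2f}")
print(f"HHI:         {hhi(domain_to_provider):.3f}")
```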

    An SLA-driven framework for dynamic multimedia content delivery federations

    Recently, the Internet has become a popular platform for the delivery of multimedia content. However, its best-effort delivery approach is ill-suited to guarantee the stringent Quality of Service (QoS) requirements of many existing multimedia services, which results in a significant reduction of the Quality of Experience. This paper presents a solution to these problems in the form of a framework for dynamically setting up federations between the stakeholders involved in the content delivery chain. More specifically, the framework provides an automated mechanism to set up end-to-end delivery paths from the content provider to the access Internet Service Providers (ISPs), which act as its direct customers and represent a group of end users. Driven by Service Level Agreements (SLAs), QoS contracts are automatically negotiated between the content provider, the access ISPs, and the intermediary network domains along the delivery paths. These contracts capture the delivered QoS and resource reservation costs, which are subsequently used in the price negotiations between the content provider and the access ISPs. Additionally, the framework supports the inclusion of cloud providers within the federations, allowing on-the-fly allocation of computational and storage resources. This enables the automatic deployment and configuration of proxy caches along the delivery paths, which can reduce delivery costs and increase the delivered quality.
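
    As an illustration of the SLA-driven path setup the abstract describes, the sketch below selects the cheapest end-to-end delivery path whose accumulated delay still meets the SLA's delay budget. The types, names, and numbers are invented for illustration and are not the paper's actual framework interfaces.

```python
from dataclasses import dataclass

# Hypothetical contract type; the paper's framework is not specified
# at this level of detail.
@dataclass
class QoSContract:
    domain: str          # intermediary network domain offering the contract
    max_delay_ms: float  # delivered QoS captured by the contract
    cost: float          # resource reservation cost

def select_path(candidate_paths, delay_budget_ms):
    """Pick the cheapest end-to-end path whose summed delay meets the SLA."""
    feasible = [
        path for path in candidate_paths
        if sum(c.max_delay_ms for c in path) <= delay_budget_ms
    ]
    if not feasible:
        return None  # no federation can satisfy this SLA
    return min(feasible, key=lambda path: sum(c.cost for c in path))

paths = [
    [QoSContract("ISP-A", 20, 5.0), QoSContract("Transit-X", 30, 2.0)],
    [QoSContract("ISP-B", 15, 8.0), QoSContract("Transit-Y", 10, 6.0)],
]
best = select_path(paths, delay_budget_ms=60)
print([c.domain for c in best], sum(c.cost for c in best))
```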

    Using resource-level information into nonadditive negotiation models for cloud market environments

    Markets arise as an efficient way of organising resources in Cloud Computing scenarios. In Cloud Computing Markets, Brokers that represent both Clients and Service Providers meet in a Market and negotiate the sale of resources or services. This paper defends the idea that efficient negotiations require the use of resource-level information to increase the accuracy of negotiated Service Level Agreements and to facilitate the achievement of both performance and business goals. A negotiation model based on the maximisation of nonadditive utility functions that considers multiple objectives is defined, and its validity is demonstrated in the experiments.

    Postprint (published version)
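
    The abstract does not spell out the utility functions; the sketch below assumes a Cobb-Douglas (multiplicative, hence nonadditive) form over several normalized objectives. All scores and weights are invented for illustration.

```python
# A minimal sketch of a nonadditive multi-objective utility, assuming a
# Cobb-Douglas form; the paper's exact utility functions are not given here.
def utility(offer, weights):
    """Multiplicative (nonadditive) utility over normalized objectives."""
    u = 1.0
    for objective, weight in weights.items():
        u *= offer[objective] ** weight
    return u

# Normalized scores in (0, 1], higher is better. Resource-level information
# (e.g., expected CPU share) enters as an objective alongside price.
offers = [
    {"price_score": 0.9, "cpu_score": 0.4, "availability": 0.99},
    {"price_score": 0.6, "cpu_score": 0.8, "availability": 0.95},
]
weights = {"price_score": 0.5, "cpu_score": 0.3, "availability": 0.2}

best = max(offers, key=lambda o: utility(o, weights))
print(best, utility(best, weights))
```

    Under a multiplicative utility, a very poor score on one objective cannot be fully compensated by the others, which is one reason a nonadditive model can reflect joint performance and business goals better than a weighted sum.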

    Checkpoint-based Fault-tolerant Infrastructure for Virtualized Service Providers

    Crash and omission failures are common in service providers: a disk can break down or a link can fail at any time. In addition, the probability of a node failure increases with the number of nodes. Apart from reducing the provider's computation power and jeopardizing the fulfillment of its contracts, this can also lead to wasted computation time when the crash occurs before the task execution has finished. In order to avoid this problem, efficient checkpoint infrastructures are required, especially in virtualized environments where these infrastructures must deal with huge virtual machine images. This paper proposes a smart checkpoint infrastructure for virtualized service providers. It uses Another Union File System (AUFS) to differentiate read-only from read-write parts in the virtual machine image. In this way, read-only parts can be checkpointed only once, while subsequent checkpoints only need to save the modifications in the read-write parts, thus reducing the time needed to make a checkpoint. The checkpoints are stored in a Hadoop Distributed File System (HDFS). This allows a task execution to be resumed faster after a node crash and increases the fault tolerance of the system, since checkpoints are distributed and replicated across all the nodes of the provider. This paper presents a running implementation of this infrastructure and its evaluation, demonstrating that it is an effective way to make faster checkpoints with low interference on task execution and efficient task recovery after a node failure.

    Peer Reviewed. Postprint (published version)
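
    A minimal sketch of the layered checkpoint idea follows, assuming a union mount that splits the VM image into a read-only base layer and a read-write delta layer. The paths, the VM identifier, and the availability of the `hdfs` command-line client are all assumptions, not details from the paper.

```python
import subprocess

# Hypothetical layer paths under a union mount (e.g., AUFS); the base
# layer is checkpointed once, the delta layer at every checkpoint.
BASE_LAYER = "/vm/images/base"     # read-only part of the VM image
DELTA_LAYER = "/vm/images/delta"   # read-write part of the VM image
HDFS_DIR = "/checkpoints/vm42"     # hypothetical destination directory

def hdfs_put(local_path, hdfs_path):
    """Store a layer in HDFS, where it is replicated across provider nodes."""
    subprocess.run(["hdfs", "dfs", "-put", "-f", local_path, hdfs_path],
                   check=True)

def checkpoint(first=False):
    if first:
        hdfs_put(BASE_LAYER, f"{HDFS_DIR}/base")  # full image, one time only
    hdfs_put(DELTA_LAYER, f"{HDFS_DIR}/delta")    # only the modifications

checkpoint(first=True)   # initial checkpoint: base + delta
checkpoint()             # subsequent checkpoints: delta only
```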

    IT Intrusion Detection Using Statistical Learning and Testbed Measurements

    We study automated intrusion detection in an IT infrastructure, specifically the problem of identifying the start of an attack, the type of attack, and the sequence of actions an attacker takes, based on continuous measurements from the infrastructure. We apply statistical learning methods, including Hidden Markov Models (HMMs), Long Short-Term Memory (LSTM) networks, and Random Forest Classifiers (RFCs), to map sequences of observations to sequences of predicted attack actions. In contrast to most related research, we have abundant data to train the models and evaluate their predictive power. The data comes from traces we generate on an in-house testbed where we run attacks against an emulated IT infrastructure. Central to our work is a machine-learning pipeline that maps measurements from a high-dimensional observation space to a space of low dimensionality or to a small set of observation symbols. Investigating intrusions in offline as well as online scenarios, we find that both HMM and LSTM can be effective in predicting attack start time, attack type, and attack actions. If sufficient training data is available, LSTM achieves higher prediction accuracy than HMM. HMM, on the other hand, requires fewer computational resources and less training data for effective prediction. We also find that the methods we study benefit from data produced by traditional intrusion detection systems like SNORT.

    Comment: a shortened version of this paper will appear in the conference proceedings of NOMS 2024 (IEEE/IFIP Network Operations and Management Symposium).
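
    As a toy illustration of the classification step, the sketch below trains a Random Forest on synthetic observation symbols and scores it on held-out data. The features and labels are invented stand-ins for the testbed traces, not the paper's dataset or pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the traces: each row is a low-dimensional
# observation (e.g., symbols from the dimensionality-reduction stage),
# each label an attacker action; the real feature set is not reproduced here.
X = rng.integers(0, 8, size=(2000, 4))   # 4 observation symbols per step
y = (X.sum(axis=1) // 8).astype(int)     # toy "attack action" labels

split = 1500  # simple train/test split on the synthetic data
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:split], y[:split])

print(f"held-out accuracy: {clf.score(X[split:], y[split:]):.2f}")
```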