
    CERN openlab Whitepaper on Future IT Challenges in Scientific Research

    This whitepaper describes the major IT challenges in scientific research at CERN and several other European and international research laboratories and projects. Each challenge is exemplified through a set of concrete use cases drawn from the requirements of large-scale scientific programs. The paper is based on contributions from many researchers and IT experts of the participating laboratories, as well as input from the existing CERN openlab industrial sponsors. The views expressed in this document are those of the individual contributors and do not necessarily reflect the views of their organisations and/or affiliates.

    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain Virtual Network environment. The scope of this deliverable is mainly focused on the virtualisation of the resources within a network and at processing nodes. The virtualisation of the FEDERICA infrastructure allows the provisioning of its available resources to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his/her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...). Currently, there are no standard definitions available for network virtualisation or its associated architectures. Therefore, this deliverable proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualisation of the FEDERICA network resources. This evaluation has been performed taking into account an initial set of FEDERICA requirements; a possible extension of the selected tools will be evaluated in future deliverables.

    The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need has been recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (which is virtual with regard to the abstracted view of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project. Another important aspect when defining a new architecture is the user requirements: it is crucial that the resulting architecture fits the demands that users may have. Since this deliverable was produced in parallel with the user consultation carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases to be considered as a starting point for its internal studies.

    When researchers want to experiment with their developments, they need not only network resources on their slices, but also a slice of the processing resources. These processing resources are understood as virtual machine instances that users can configure as software routers or end nodes, onto which they can deploy the software protocols or applications they have developed and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
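    As an informal illustration of the slice concept described above, the following Python sketch models a slice as a logical partition of physical substrate resources that the user perceives as a network under his/her own domain. All class names, attributes and numbers are hypothetical and are not part of any FEDERICA interface; this is a reading aid, not the deliverable's architecture.

```python
# Hypothetical sketch of a "slice": a logical partition (virtual instance)
# of physical substrate resources that the user sees as a real network.
# Names and figures are illustrative only.
from dataclasses import dataclass, field
from typing import Optional, List


@dataclass
class PhysicalNode:
    name: str
    cpu_cores: int
    free_cores: int


@dataclass
class VirtualNode:
    name: str
    cores: int
    host: Optional[PhysicalNode] = None


@dataclass
class Slice:
    """A user-visible virtual network mapped onto physical resources."""
    owner: str
    nodes: List[VirtualNode] = field(default_factory=list)

    def map_node(self, vnode: VirtualNode, pnode: PhysicalNode) -> None:
        # Isolation: a virtual node only consumes resources explicitly
        # reserved for it on the physical node it is mapped to.
        if pnode.free_cores < vnode.cores:
            raise ValueError(f"{pnode.name} cannot host {vnode.name}")
        pnode.free_cores -= vnode.cores
        vnode.host = pnode
        self.nodes.append(vnode)


substrate = PhysicalNode("substrate-router-1", cpu_cores=8, free_cores=8)
my_slice = Slice(owner="researcher-a")
my_slice.map_node(VirtualNode("sw-router-1", cores=2), substrate)
print(substrate.free_cores)  # 6 cores remain available for other slices
```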

    Management and Service-aware Networking Architectures (MANA) for Future Internet Position Paper: System Functions, Capabilities and Requirements

    Future Internet (FI) research and development threads have recently been gaining momentum all over the world, and the international race to create a new-generation Internet is in full swing: GENI, Asia Future Internet, Future Internet Forum Korea, European Union Future Internet Assembly (FIA). This is a position paper identifying the research orientation, with a time horizon of 10 years, together with the key challenges for the capabilities in the Management and Service-aware Networking Architectures (MANA) part of the Future Internet (FI), allowing for parallel and federated Internet(s).

    A prototype and demonstrator of Akogrimo’s architecture: An approach of merging grids, SOA, and the mobile Internet

    The trend of merging telecommunication infrastructures with traditional Information Technology (IT) infrastructures is ongoing and important for commercial service providers. The driver behind this development is, on the one hand, the strong need for enhanced services and, on the other hand, the aim of telecommunication operators to provide value-added services to a wide variety of customers. In the telecommunications sector, the IP Multimedia Subsystem (IMS) is a promising service platform, which may become a "standard" for supporting added-value services on top of the next-generation network infrastructure. However, since its range of applicability is bound to SIP-enabled services, IMS extensions are being proposed by "SIPifying" applications. In parallel to these developments, within the traditional IT sector the notion of Virtual Organizations (VO), enabling collaborative businesses across organizational boundaries, is addressed in the framework of Web Services (WS) standards implementing a Service-oriented Architecture (SOA). Here, concepts for controlled resource and service sharing based on WS and Semantic Technologies have been consolidated. Since the telecommunications sector has, in the meantime, become "mobile", all concepts brought into this infrastructure must cope with the dynamics that mobility brings. Therefore, within the Akogrimo project the VO concept has been extended towards a Mobile Dynamic Virtual Organization (MDVO) concept that additionally considers key requirements of mobile users and resources. Special attention is given to merging the SOA and IMS approaches in a way that holistically supports SOA-enabled mobile added-value services and their users. This work describes the major results of the Akogrimo project, paying special attention to the overall Akogrimo architecture, the prototype implemented, and the key scenario in which the instantiated Akogrimo architecture gives a very clear picture of its applicability and use, together with an additional functional evaluation.

    Herding Vulnerable Cats: A Statistical Approach to Disentangle Joint Responsibility for Web Security in Shared Hosting

    Hosting providers play a key role in fighting web compromise, but their ability to prevent abuse is constrained by the security practices of their own customers. Shared hosting offers a unique perspective, since customers operate under restricted privileges and providers retain more control over configurations. We present the first empirical analysis of the distribution of web security features and software patching practices in shared hosting providers, the influence of providers on these security practices, and their impact on web compromise rates. We construct provider-level features on the global market for shared hosting (1,259 providers) by gathering indicators from 442,684 domains. Exploratory factor analysis of 15 indicators identifies four main latent factors that capture security efforts: content security, webmaster security, web infrastructure security and web application security. We confirm, via a fixed-effect regression model, that providers exert significant influence over the latter two factors, which are both related to the software stack in their hosting environment. Finally, by means of GLM regression analysis of these factors on phishing and malware abuse, we show that the four security and software patching factors explain between 10% and 19% of the variance in abuse at providers, after controlling for size. For web application security, for instance, we found that when a provider moves from the bottom 10% to the best-performing 10%, it experiences four times fewer phishing incidents. We show that providers have influence over patch levels, even higher in the stack where CMSes can run as client-side software, and that this influence is tied to a substantial reduction in abuse levels.
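    The analysis pipeline summarised above (exploratory factor analysis of provider-level indicators, followed by a GLM of abuse counts on the extracted factors while controlling for size) could look roughly like the Python sketch below. The data here is random placeholder data and the Poisson family is an assumption for illustration; the study's actual variables and estimators may differ.

```python
# Illustrative sketch: factor analysis of security indicators, then a GLM of
# abuse counts on the factors, controlling for provider size. Synthetic data.
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_providers, n_indicators = 1259, 15

# Placeholder indicator matrix (in the study these are measured per provider).
indicators = rng.normal(size=(n_providers, n_indicators))
size = rng.lognormal(mean=5.0, sigma=1.0, size=n_providers)  # e.g. domains hosted
abuse = rng.poisson(lam=3.0, size=n_providers)               # e.g. phishing counts

# Step 1: extract four latent security factors from the 15 indicators.
fa = FactorAnalysis(n_components=4, random_state=0)
factors = fa.fit_transform(indicators)

# Step 2: GLM of abuse counts on the factors, controlling for provider size.
X = sm.add_constant(np.column_stack([factors, np.log(size)]))
model = sm.GLM(abuse, X, family=sm.families.Poisson()).fit()
print(model.summary())
```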

    The simplicity project: easing the burden of using complex and heterogeneous ICT devices and services

    As of today, to exploit the variety of different "services", users need to configure each of their devices using different procedures and need to explicitly select among heterogeneous access technologies and protocols. In addition, users are authenticated and charged by different means. The lack of implicit human-computer interaction, context awareness and standardisation places an enormous burden of complexity on the shoulders of the final users. The IST-Simplicity project aims at addressing these problems by: i) automatically creating and customizing a user communication space; ii) adapting services to user terminal characteristics and to users' preferences; iii) orchestrating network capabilities. The aim of this paper is to present the technical framework of the IST-Simplicity project. It provides a thorough analysis and qualitative evaluation of the different technologies, standards and works presented in the literature related to the Simplicity system to be developed.

    Federated Access Management for Collaborative Environments

    Access control has been historically recognized as an effective technique for ensuring that computer systems preserve important security properties. Recently, attribute-based access control (ABAC) has emerged as a new paradigm to provide access mediation by leveraging the concept of attributes: observable properties that become relevant under a certain security context and are exhibited by the entities normally involved in the mediation process, namely, end-users and protected resources. Also recently, independently run organizations from the private and public sectors have recognized the benefits of engaging in multi-disciplinary research collaborations that involve sharing sensitive proprietary resources such as scientific data, networking capabilities and computation time, and have recognized ABAC as the paradigm that suits their needs for restricting the way such resources are to be shared with each other. In such a setting, a robust yet flexible access mediation scheme is crucial to guarantee that participants are granted access to such resources in a safe and secure manner. However, no consensus exists in the literature on a formal model that clearly defines the way the components of ABAC should interact with each other, so that the rigorous study of security properties can be effectively pursued. This dissertation proposes an approach tailored to provide a well-defined and formal definition of ABAC, including a description of how attributes exhibited by different independent organizations are to be leveraged for mediating access to shared resources, by allowing collaborating parties to engage in federations for the specification, discovery, evaluation and communication of attributes, policies, and access mediation decisions. In addition, a software assurance framework is introduced to support the correct construction of enforcement mechanisms implementing our approach by leveraging validation and verification techniques based on software assertions, namely, design by contract (DBC) and behavioral interface specification languages (BISL). Finally, this dissertation also proposes a distributed trust framework that allows for exchanging recommendations on the perceived reputations of members of our proposed federations, in such a way that the level of trust of previously unknown participants can be properly assessed for the purposes of access mediation.
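    To make the ABAC idea concrete, a minimal sketch is given below: access is mediated by evaluating attributes exhibited by a user and a resource against a rule, with design-by-contract style assertions guarding the enforcement point. The attribute names, the rule and the contracts are invented for illustration and do not reflect the dissertation's formal model or its BISL-based framework.

```python
# Minimal, hypothetical ABAC sketch: a rule over user and resource attributes
# decides access; assert statements act as lightweight DBC-style contracts.
from typing import Callable, Dict

Attributes = Dict[str, str]
Rule = Callable[[Attributes, Attributes], bool]


def shared_dataset_rule(user: Attributes, resource: Attributes) -> bool:
    # Example rule: researchers of the same federation may access shared data.
    return (
        user.get("federation") == resource.get("federation")
        and user.get("role") in {"researcher", "admin"}
        and resource.get("classification") == "shared"
    )


def mediate(user: Attributes, resource: Attributes, rule: Rule) -> bool:
    # Precondition (contract): both parties must declare a federation.
    assert "federation" in user, "user attributes must name a federation"
    assert "federation" in resource, "resource attributes must name a federation"
    decision = rule(user, resource)
    # Postcondition (contract): the decision is a definite boolean.
    assert isinstance(decision, bool)
    return decision


alice = {"federation": "org-a+org-b", "role": "researcher"}
dataset = {"federation": "org-a+org-b", "classification": "shared"}
print(mediate(alice, dataset, shared_dataset_rule))  # True
```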

    Service management for multi-domain Active Networks

    Get PDF
    The Internet is an example of a multi-agent system. In our context, an agent is synonymous with a network operator, Internet service provider (ISP) or content provider. ISPs mutually interact for connectivity's sake, but the fact remains that two peering agents are inevitably self-interested. Egoistic behaviour manifests itself in two ways. Firstly, ISPs act in an environment where different ISPs have different spheres of influence, in the sense that they have control and management responsibilities over different parts of the environment. Secondly, contention occurs when an ISP intends to sell resources to another, which gives rise to at least two of its customers sharing (hence contending for) a common transport medium. The multi-agent interaction was analysed by simulating a game-theoretic approach, and the alignment of dominant strategies adopted by agents with evolving traits was abstracted. In particular, the contention for network resources is arbitrated such that a self-policing environment may emerge from a congested bottleneck. Over the past 5 years, larger ISPs have simply pedalled as fast as they could to meet the growing demand for bandwidth by throwing bandwidth at congestion problems. Today, the dire financial positions of Worldcom and Global Crossing illustrate, to a certain degree, the fallacies of over-provisioning network resources. The framework proposed in this thesis enables subscribers of an ISP to monitor and police each other's traffic in order to establish a well-behaved norm in utilising limited resources. This framework can be expanded to other inter-domain bottlenecks within the Internet.

    One of the main objectives of this thesis is also to investigate the impact on multi-domain service management in the future Internet, where active nodes could potentially be located amongst traditional passive routers. The advent of Active Networking technology necessitates node-level computational resource allocations, in addition to prevailing resource reservation approaches for communication bandwidth. Our motivation is to ensure that a service negotiation protocol takes account of these resources so that the response to a specific service deployment request from the end-user is consistent and predictable. To promote the acceleration of service deployment by means of Active Networking technology, a pricing model is also evaluated for computational resources (e.g., CPU time and memory); previous work in these areas of research concentrates only on bandwidth-related (i.e., communication) resources. Our pricing approach takes account of both guaranteed and best-effort service by adapting the arbitrage theorem from financial theory. The central tenet of our approach is to synthesise insights from different disciplines to address problems in data networks.

    The greater part of the research experience was obtained through direct and indirect participation in the IST-10561 project FAIN (Future Active IP Networks) and the ACTS-AC338 project MIAMI (Mobile Intelligent Agent for Managing the Information Infrastructure). The Inter-domain Manager (IDM) component was integrated as an integral part of the FAIN policy-based network management system (PBNM). Its monitoring component (developed during the MIAMI project) learns about routing changes that occur within a domain so that the management system and the managed nodes have the same topological view of the network. This enabled our reservation mechanism to reserve resources along the existing route set up by whichever underlying routing protocol is in place.
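    As a toy illustration of the self-policing idea, the sketch below lets subscribers sharing a bottleneck back off in proportion to how far they exceed an equal share of the capacity, so aggregate demand converges towards the bottleneck capacity. The update rule, the parameters and the notion of an "equal share" are invented for illustration and are not the game-theoretic model or the arbitrage-based pricing scheme evaluated in the thesis.

```python
# Toy self-policing sketch: subscribers above an equal share of a congested
# bottleneck back off; demand converges towards capacity without a central
# enforcer. Parameters and update rule are purely illustrative.
CAPACITY = 100.0                    # bottleneck capacity (arbitrary units)
subscribers = [60.0, 50.0, 40.0]    # initial demanded rates (congested)

for round_ in range(50):
    if sum(subscribers) <= CAPACITY:
        break
    fair_share = CAPACITY / len(subscribers)
    # Each subscriber "policed" by its peers halves its excess over the share.
    subscribers = [rate - 0.5 * max(rate - fair_share, 0.0) for rate in subscribers]

print([round(r, 1) for r in subscribers], "total =", round(sum(subscribers), 1))
```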
