
    Detecting and Refactoring Operational Smells within the Domain Name System

    The Domain Name System (DNS) is one of the most important components of the Internet infrastructure. DNS relies on a delegation-based architecture, where resolution of names to their IP addresses requires resolving the names of the servers responsible for those names. The recursive structures of the interdependencies that exist between the name servers associated with each zone are called dependency graphs. System administrators' operational decisions have far-reaching effects on the DNS's qualities, and they must be made soundly to balance the availability, security, and resilience of the system. We utilize dependency graphs to identify, detect, and catalogue operational bad smells. Our method deals with smells at a high level of abstraction, using a consistent taxonomy and reusable vocabulary defined by a DNS Operational Model. The method will be used to build a diagnostic advisory tool that detects configuration changes that might decrease the robustness or security posture of domain names before they go into production.
    Comment: In Proceedings GaM 2015, arXiv:1504.0244
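
    The abstract's central structure, the dependency graph, can be made concrete with a short sketch. The following is a minimal illustration, not the paper's tool: the zone data, the parent_zone helper, and the cycle check are all invented for demonstration, with a circular delegation standing in for one plausible operational smell.

        def parent_zone(name):
            """Enclosing zone of a host name, e.g. ns1.example.net -> example.net."""
            return name.split(".", 1)[1]

        # zone -> names of its authoritative name servers (hypothetical data)
        delegations = {
            "example.com": ["ns1.example.net"],
            "example.net": ["ns.example.org"],
            "example.org": ["ns2.example.com"],  # closes a circular dependency
        }

        # Dependency graph: an edge from each zone to the zones hosting its servers.
        graph = {z: {parent_zone(ns) for ns in servers}
                 for z, servers in delegations.items()}

        def find_cycle(graph, start, path=()):
            """Depth-first search for a cyclic dependency reachable from start."""
            if start in path:
                return path[path.index(start):] + (start,)
            for nxt in graph.get(start, ()):
                cycle = find_cycle(graph, nxt, path + (start,))
                if cycle:
                    return cycle
            return None

        print(find_cycle(graph, "example.com"))
        # -> ('example.com', 'example.net', 'example.org', 'example.com')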

    A contrasting look at self-organization in the Internet and next-generation communication networks

    This article examines contrasting notions of self-organization in the Internet and next-generation communication networks, by reviewing in some detail recent evidence regarding several of the more popular attempts to explain prominent features of Internet structure and behavior as "emergent phenomena." In these examples, what might appear to the nonexpert as "emergent self-organization" in the Internet actually results from well-conceived (albeit perhaps ad hoc) design, with explanations that are mathematically rigorous, in agreement with engineering reality, and fully consistent with network measurements. These examples serve as concrete starting points from which networking researchers can assess whether or not explanations involving self-organization are relevant or appropriate in the context of next-generation communication networks, while also highlighting the main differences between approaches to self-organization that are rooted in engineering design vs. those inspired by statistical physics.

    Personal area technologies for internetworked services


    Inter-organizational fault management: Functional and organizational core aspects of management architectures

    Outsourcing -- successful, and sometimes painful -- has become one of the hottest topics in IT service management discussions over the past decade. IT services are outsourced to external service providers in order to reduce the effort and overhead of delivering these services within one's own organization. More recently, IT service providers themselves have also started either to outsource service parts or to deliver those services in a non-hierarchical cooperation with other providers. Splitting a service into several parts is a non-trivial task, as the parts have to be implemented, operated, and maintained by different providers. One key aspect of such inter-organizational cooperation is fault management, because it is crucial to locate and solve problems that reduce the quality of service quickly and reliably. In this article we present the results of a thorough use-case-based requirements analysis for an architecture for inter-organizational fault management (ioFMA). Furthermore, a concept of the organizational and functional model of the ioFMA is given.
    Comment: International Journal of Computer Networks & Communications (IJCNC)
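
    As a rough illustration of why inter-organizational fault management is hard, consider which providers must be involved when a composite service degrades. The sketch below is a toy model, not the ioFMA architecture, and all names and data are invented: it only shows that a symptom in one provider's service part implicates every provider upstream of it.

        from dataclasses import dataclass

        @dataclass
        class ServicePart:
            name: str
            provider: str           # organization operating this part
            depends_on: tuple = ()  # upstream service parts

        # A composite service split across three providers (invented data).
        parts = {
            "frontend":   ServicePart("frontend",   "Provider-A", ("mail-store",)),
            "mail-store": ServicePart("mail-store", "Provider-B", ("storage",)),
            "storage":    ServicePart("storage",    "Provider-C"),
        }

        def providers_to_notify(part_name):
            """The faulty part's operator plus all upstream operators, since
            the root cause may sit in any part the symptom depends on."""
            part = parts[part_name]
            found = {part.provider}
            for dep in part.depends_on:
                found |= providers_to_notify(dep)
            return found

        print(sorted(providers_to_notify("frontend")))
        # -> ['Provider-A', 'Provider-B', 'Provider-C']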

    TCG based approach for secure management of virtualized platforms: state-of-the-art

    There is a strong trend in favor of adopting virtualization to obtain business benefits; the provisioning of virtualized enterprise resources is one of many possible scenarios. While virtualization promises clear advantages, it also poses new security challenges that need to be addressed to gain stakeholders' confidence in the dynamics of the new environment. One important facet of these challenges is establishing 'Trust', which is a basic primitive for any viable business model. The Trusted Computing Group (TCG) offers the technologies and mechanisms required to establish this trust in the target platforms. Moreover, TCG technologies enable the protection of sensitive data at rest and in transit. This report explores the applicability of the relevant TCG concepts to securely virtualize enterprise resources for provisioning, establish trust in the target platforms, and securely manage these virtualized Trusted Platforms.
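
    One TCG mechanism behind the trust establishment mentioned above can be shown compactly: a TPM Platform Configuration Register (PCR) is only ever updated by extending it, i.e. hashing the new measurement into the old value, which yields a tamper-evident chain over the boot sequence. The sketch below mimics the TPM 1.2 SHA-1 extend operation; the measured components are invented for illustration.

        import hashlib

        def pcr_extend(pcr, measurement):
            """PCR_new = SHA1(PCR_old || SHA1(measurement))."""
            digest = hashlib.sha1(measurement).digest()
            return hashlib.sha1(pcr + digest).digest()

        pcr = b"\x00" * 20  # PCRs are zeroed at platform reset
        for component in [b"bootloader", b"hypervisor", b"vm-image"]:
            pcr = pcr_extend(pcr, component)

        # A verifier that recomputes this chain over the expected components
        # can compare it with a quoted PCR value before trusting the platform.
        print(pcr.hex())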

    On the tailoring of CAST-32A certification guidance to real COTS multicore architectures

    The use of Commercial Off-The-Shelf (COTS) multicores in the real-time industry is on the rise due to their potential for increased performance and reduced energy consumption. Yet the unpredictable impact on timing of contention in shared hardware resources challenges certification. Furthermore, most safety certification standards target single-core architectures and do not provide explicit guidance for multicore processors. Recently, however, CAST-32A has been presented, providing guidance for software planning, development, and verification in multicores. In this paper, at the theoretical level, we provide a detailed review of the CAST-32A objectives and the difficulty of reaching them under current COTS multicore design trends; at the experimental level, we assess the difficulties of applying CAST-32A to a real multicore processor, the NXP P4080.
    This work has been partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Jaume Abella has been partially supported by the MINECO under Ramon y Cajal grant RYC-2013-14717.
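
    The kind of contention evidence such an assessment gathers can be illustrated with back-of-the-envelope numbers: run a task alone, then against co-runners stressing one shared resource at a time, and report the slowdown per interference channel. The figures and channel names below are invented for illustration, not measurements from the paper.

        baseline_us = 1000  # task running alone (hypothetical timing)

        # Execution time with co-runners stressing one shared resource each
        # (all numbers invented for illustration).
        with_contention_us = {
            "shared L3 cache": 1850,
            "DDR controller":  2400,
            "CoreNet fabric":  1300,
        }

        for channel, t in sorted(with_contention_us.items(), key=lambda kv: -kv[1]):
            print(f"{channel:16s} slowdown x{t / baseline_us:.2f}")
        # Per-channel slowdowns like these feed the interference analysis
        # that CAST-32A objectives ask applicants to perform and bound.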

    Designing Reusable Systems that Can Handle Change - Description-Driven Systems: Revisiting Object-Oriented Principles

    In the age of the Cloud and so-called Big Data, systems must be increasingly flexible, reconfigurable, and adaptable to change, in addition to being developed rapidly. As a consequence, designing systems to cater for evolution is becoming critical to their success. To be able to cope with change, systems must have the capability of reuse and the ability to adapt as and when necessary to changes in requirements. Allowing systems to be self-describing is one way to facilitate this. To address the issues of reuse in designing evolvable systems, this paper proposes a so-called description-driven approach to systems design. This approach enables new versions of data structures and processes to be created alongside the old, thereby providing a history of changes to the underlying data models and enabling the capture of provenance data. The efficacy of the description-driven approach is exemplified by the CRISTAL project. CRISTAL is based on description-driven design principles; it uses versions of stored descriptions to define various versions of data, which can be stored in diverse forms. This paper discusses the need for capturing a holistic system description when modelling large-scale distributed systems.
    Comment: 8 pages, 1 figure and 1 table. Accepted by the 9th Int Conf on the Evaluation of Novel Approaches to Software Engineering (ENASE'14). Lisbon, Portugal. April 201
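
    The description-driven idea, new versions of descriptions living alongside old ones with each item remembering which version it was created under, can be sketched briefly. This is an illustrative toy, not CRISTAL's implementation; every class and field name below is invented.

        class DescriptionRegistry:
            def __init__(self):
                self._versions = {}  # name -> list of schemas; index is the version

            def define(self, name, schema):
                """Register a new description version alongside the old ones."""
                self._versions.setdefault(name, []).append(schema)
                return len(self._versions[name]) - 1

            def schema(self, name, version):
                return self._versions[name][version]

        registry = DescriptionRegistry()
        v0 = registry.define("DetectorPart", {"fields": ["id", "location"]})
        v1 = registry.define("DetectorPart", {"fields": ["id", "location", "calibration"]})

        # Each stored item records the description version it conforms to, so it
        # stays interpretable (and its provenance reconstructible) after v1 appears.
        item = {"described_by": ("DetectorPart", v0), "id": 42, "location": "cavern"}
        print(registry.schema(*item["described_by"]))
        # -> {'fields': ['id', 'location']}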