21 research outputs found

    Context Driven Access Control to SNMP MIB objects in multi-homed environments

    The advent of multi-technology networks offering a service continuum over multiple network infrastructures poses new challenges to integrated management. One of these challenges is the auto-configuration of the management plane needed to allow dynamic relationships among several managers and one management agent. This paper proposes the use of provisional policies to dynamically auto-configure the access-control plane of a management agent. This enables simple management based on agent location and time, as well as the cooperative behavior of several managers.
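
    A minimal sketch of the idea of context-driven access control over MIB objects is given below. The class names, attribute set, and default-deny rule are illustrative assumptions, not the paper's actual provisional-policy model.

```python
# Minimal sketch of a context-driven access-control check for SNMP MIB objects.
# Policy, Context, is_allowed and all field names are illustrative, not the
# paper's actual provisional-policy model.
from dataclasses import dataclass
from datetime import time
from typing import List

@dataclass
class Context:
    manager_id: str        # identity of the requesting manager
    agent_location: str    # e.g. "home-network", "visited-network"
    request_time: time     # local time at the agent

@dataclass
class Policy:
    oid_prefix: str        # MIB subtree the rule covers, e.g. "1.3.6.1.2.1.2"
    managers: List[str]    # managers the rule applies to
    locations: List[str]   # agent locations where the rule is valid
    start: time
    end: time
    allow: bool

def is_allowed(oid: str, ctx: Context, policies: List[Policy]) -> bool:
    """Return True if some matching policy grants access to the OID."""
    for p in policies:
        if (oid.startswith(p.oid_prefix)
                and ctx.manager_id in p.managers
                and ctx.agent_location in p.locations
                and p.start <= ctx.request_time <= p.end):
            return p.allow
    return False   # default-deny when no provisional policy matches

# Example: manager "m1" may read the interfaces subtree while roaming, office hours only.
policies = [Policy("1.3.6.1.2.1.2", ["m1"], ["visited-network"], time(8), time(18), True)]
ctx = Context("m1", "visited-network", time(10, 30))
print(is_allowed("1.3.6.1.2.1.2.2.1.10", ctx, policies))  # True
```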

    Monitoring Scheduling for Home Gateways

    In simple and monolithic systems such as our current home gateways, monitoring is often overlooked: the home user can only reboot the gateway when there is a problem. In next-generation home gateways, more services will be available (pay-per-view TV, games, etc.) and different actors will provide them. When one service fails, it will be impossible to reboot the gateway without disturbing the other services. We propose a management framework that monitors remote gateways. The framework tests response times for various management activities on the gateway and provides reference time/performance ratios. These values can be used to establish a management schedule that balances the rate at which queries can be performed against the load that each query induces locally on the gateway. This allows the manager to tune the ratio between the reactivity of monitoring and its intrusiveness on performance.
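
    The sketch below illustrates the kind of schedule derivation the abstract describes: given per-query costs measured on the gateway and a load budget, compute polling intervals that trade reactivity against intrusiveness. The cost model, weights, and proportional-share rule are assumptions, not the framework's actual algorithm.

```python
# Illustrative sketch of deriving a polling schedule from measured
# time/performance ratios; values and the cost model are hypothetical.

# Measured cost of one query, in CPU-seconds on the gateway (hypothetical values).
query_cost = {"memory_usage": 0.02, "service_status": 0.15, "full_inventory": 1.2}

# Relative importance: how reactive monitoring of each metric should be.
weight = {"memory_usage": 1.0, "service_status": 3.0, "full_inventory": 0.2}

def schedule(load_budget: float) -> dict:
    """Split a CPU-load budget (fraction of one core) across metrics.

    Each metric gets a share of the budget proportional to its weight;
    the polling interval is then cost / share, so heavier queries are
    polled less often for the same induced load.
    """
    total_w = sum(weight.values())
    intervals = {}
    for metric, cost in query_cost.items():
        share = load_budget * weight[metric] / total_w   # CPU fraction for this metric
        intervals[metric] = cost / share                  # seconds between queries
    return intervals

# Allow monitoring to consume at most 1% of the gateway CPU.
for metric, interval in schedule(0.01).items():
    print(f"{metric}: poll every {interval:.0f} s")
```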

    Managing the Transition from SNMP to NETCONF: Comparing Dual-Stack and Protocol Gateway Hybrid Approaches

    As industries become increasingly automated and pressured to seek business advantages, they often face operational constraints that make modernization and security more challenging. Constraints include low operating budgets, long operational lifetimes, and infeasible network/device upgrade or modification paths. To bypass these constraints with minimal risk of disruption and to "do no harm", network administrators have come to rely on dual-stack approaches, which allow legacy protocols to coexist with modern ones. For example, if SNMP is required for managing legacy devices and a newer protocol (NETCONF) is required for modern devices, then administrators simply modify firewall Access Control Lists (ACLs) to allow passage of both protocols. In today's networks, firewalls are ubiquitous, relatively inexpensive, and able to support multiple protocols (hence dual-stack) while providing network security. While investigating how to secure legacy devices in heterogeneous networks, it was determined that dual-stack firewall approaches do not provide adequate protection beyond layer-three filtering of the IP stack. Therefore, the NETCONF/SNMP Protocol Gateway hybrid (NSPG) was developed as an alternative in environments where security is necessary but legacy devices are infeasible to upgrade, replace, or modify. The NSPG allows network administrators to use only a single modern protocol (NETCONF) instead of both NETCONF and SNMP, and to enforce additional security controls without modifying existing deployments. It has been demonstrated that legacy devices can be securely managed in a protocol-agnostic manner using low-cost commodity hardware (e.g., the Raspberry Pi platform) with administrator-derived XML-based configuration policies.
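
    As a rough illustration of the translation step such a gateway performs, the sketch below maps an incoming XML configuration fragment onto SNMP set operations via an administrator-derived mapping. The XML layout, the OID map, and translate_config() are invented for illustration; they are not the NSPG's actual policy format.

```python
# Sketch of a NETCONF-to-SNMP translation step: an administrator-supplied map
# ties XML config paths to legacy-device OIDs, and an incoming <config>
# fragment becomes a list of SNMP SET operations. Everything here is
# illustrative, not the NSPG's real configuration policy.
import xml.etree.ElementTree as ET

# Administrator-derived mapping from config paths to legacy-device OIDs (hypothetical).
OID_MAP = {
    "system/hostname": "1.3.6.1.2.1.1.5.0",   # sysName
    "system/location": "1.3.6.1.2.1.1.6.0",   # sysLocation
}

def translate_config(xml_config: str):
    """Yield (oid, value) pairs to be sent as SNMP SET requests."""
    root = ET.fromstring(xml_config)
    for path, oid in OID_MAP.items():
        node = root.find(path)
        if node is not None and node.text:
            yield oid, node.text.strip()

request = """
<config>
  <system>
    <hostname>legacy-switch-7</hostname>
    <location>rack 12</location>
  </system>
</config>
"""

for oid, value in translate_config(request):
    # A real gateway would now issue an SNMP SET toward the legacy device.
    print(f"SET {oid} = {value!r}")
```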

    Application of overlay techniques to network monitoring

    Measurement and monitoring are important for correct and efficient operation of a network, since these activities provide reliable information and accurate analysis for characterizing and troubleshooting a network's performance. The focus of network measurement is to measure the volume and types of traffic on a particular network and to record the raw measurement results. The focus of network monitoring is to initiate measurement tasks, collect raw measurement results, and report aggregated outcomes. Network systems are continuously evolving: besides incremental change to accommodate new devices, more drastic changes occur to accommodate new applications, such as overlay-based content delivery networks. As a consequence, a network can experience significant increases in size and significant levels of long-range, coordinated, distributed activity; furthermore, heterogeneous network technologies, services and applications coexist and interact. Reliance upon traditional, point-to-point, ad hoc measurements to manage such networks is becoming increasingly tenuous. In particular, correlated, simultaneous 1-way measurements are needed, as is the ability to access measurement information stored throughout the network of interest. To address these new challenges, this dissertation proposes OverMon, a new paradigm for edge-to-edge network monitoring systems through the application of overlay techniques. Of particular interest, the problem of significant network overhead caused by conventional overlay techniques has been addressed by constructing overlay networks with topology awareness: the network topology information is derived from interior gateway protocol (IGP) traffic, i.e. OSPF traffic, thus eliminating all overlay maintenance network overhead. Through a prototype that uses overlays to initiate measurement tasks and to retrieve measurement results, systematic evaluation has been conducted to demonstrate the feasibility and functionality of OverMon. The measurement results show that OverMon achieves good performance in scalability, flexibility and extensibility, which are important in addressing the new challenges arising from network system evolution. This work therefore contributes an innovative approach of applying overlay techniques to solve realistic network monitoring problems, and provides valuable first-hand experience in building and evaluating such a distributed system.
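
    A toy sketch of topology-aware overlay construction follows: the underlay graph is assumed to have been learned passively from OSPF, and each monitoring node peers only with the monitors closest to it in IGP hops. The graph contents and the nearest-k peer-selection rule are illustrative assumptions, not OverMon's actual construction.

```python
# Minimal sketch: build an overlay among monitoring nodes using an
# OSPF-derived underlay topology, so overlay edges follow short IGP paths.
from collections import deque

# Router-level topology derived from OSPF (hypothetical adjacency list).
underlay = {
    "r1": ["r2", "r3"], "r2": ["r1", "r4"],
    "r3": ["r1", "r4"], "r4": ["r2", "r3", "r5"], "r5": ["r4"],
}
monitors = ["r1", "r4", "r5"]          # routers hosting monitoring agents

def hops(src: str, dst: str) -> float:
    """Breadth-first-search distance in the IGP topology."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nxt in underlay[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return float("inf")

def overlay_peers(k: int = 1) -> dict:
    """Each monitor peers with its k nearest monitors in underlay hops."""
    peers = {}
    for m in monitors:
        others = sorted((hops(m, o), o) for o in monitors if o != m)
        peers[m] = [o for _, o in others[:k]]
    return peers

print(overlay_peers())   # e.g. {'r1': ['r4'], 'r4': ['r5'], 'r5': ['r4']}
```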

    Managed access dependability for critical services in wireless inter domain environment

    Over the last decades, the Information and Communications Technology (ICT) industry has changed, and continues to affect, the way people interact with each other and how they access and share information, services and applications in a global market characterized by constant change and evolution. For a networked and highly dynamic society, with consumers and market actors providing infrastructure, networks, services and applications, the mutual dependencies required for failure-free operation are becoming more and more complex. Service Level Agreements (SLAs) between the various actors and users may be used to describe the offerings along with price schemes and promises regarding the delivered quality. However, there is no guarantee of failure-free operation, whatever efforts and means are deployed. A system fails for a number of reasons, but automatic fault-handling mechanisms and operational procedures may be used to decrease the probability of service interruptions. The global number of mobile broadband Internet subscriptions surpassed the number of broadband subscriptions over fixed technologies in 2010. The User Equipment (UE) has become a powerful device supporting a number of wireless access technologies, and always-best-connected operation has become a reality. Some services, e.g. health care, smart power grid control, and surveillance/monitoring, called critical services in this thesis, place high requirements on service dependability. One definition of dependability is the ability to deliver services that can justifiably be trusted. For critical services, the access networks become crucial factors for achieving high dependability. A major challenge in a multi-operator, multi-technology wireless environment is the mobility of the user, which necessitates handovers according to the user's physical movement. This thesis proposes an approach for optimizing the dependability of critical services in a multi-operator, multi-technology wireless environment. The approach allows the service availability and continuity to be predicted in real time. Predictions of the optimal service availability and continuity are considered crucial for critical services. To increase the dependability of critical services, dual homing is proposed, where combinations of access points, possibly owned by different operators and using different technologies, are optimized for the specific location and movement of the user. A central part of the thesis is how to ensure the disjointness of physical and logical resources, which is essential for realizing the dependability gain that dual homing can provide. To address the interdependency issues between physical and logical resources, a study of Operations, Administration, and Maintenance (OA&M) processes related to the access network of a commercial Global System for Mobile Communications (GSM)/Universal Mobile Telecommunications System (UMTS) operator was performed. The insight obtained from the study provided valuable information about the interwoven dependencies between the different actors in the service delivery chain. Based on this insight, a technology-neutral information model of physical and logical resources in the access networks is proposed. The model is used for service availability and continuity prediction and to unveil interdependencies between infrastructure resources. It is proposed as an extension of the Media Independent Handover (MIH) framework.
    A field trial in a commercial network was conducted to verify the feasibility of retrieving the model-related information from the operator's Operational Support Systems (OSSs) and to emulate the extension and usage of the MIH framework. The thesis also proposes how measurement reports from the UE and network signaling can be used to define virtual cells as part of the proposed MIH extension. Virtual cells are limited geographical areas with homogeneous radio conditions and radio coverage from a number of access points. A Markovian model is proposed for predicting the service continuity of a dual-homed critical service, considering both the infrastructure and the radio links. A dependability gain is obtained by choosing a globally optimal sequence of access points. Great emphasis has been placed on developing computationally efficient techniques and near-optimal solutions, which are important for predicting service continuity in real time for critical services. The proposed techniques for obtaining the globally optimal sequence of access points may be used by handover and multi-homing mechanisms/protocols for timely handover decisions and access point selection. With the proposed extension of the MIH framework, a globally optimal sequence of access points providing the highest reliability may be predicted in real time.
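
    A simplified sketch of why dual homing raises availability, and of choosing an access-point pair within a virtual cell, is given below. Failure independence is assumed here for brevity; the thesis's Markov model additionally captures shared infrastructure and radio-link state, which this sketch does not reproduce, and all availability values are hypothetical.

```python
# Simplified sketch: dual-homed availability under an independence assumption,
# plus selection of the best access-point pair within one virtual cell.
from itertools import combinations

# Per-access-point availability within one virtual cell (hypothetical values).
ap_availability = {"ap_gsm_1": 0.995, "ap_umts_1": 0.990, "ap_wlan_1": 0.97}

def dual_homed_availability(a1: float, a2: float) -> float:
    """Service is up unless both access points are down (independence assumed)."""
    return 1.0 - (1.0 - a1) * (1.0 - a2)

def best_pair(aps: dict) -> tuple:
    """Choose the access-point pair with the highest combined availability."""
    return max(combinations(aps, 2),
               key=lambda pair: dual_homed_availability(aps[pair[0]], aps[pair[1]]))

pair = best_pair(ap_availability)
print(pair, f"{dual_homed_availability(*(ap_availability[p] for p in pair)):.5f}")
# ('ap_gsm_1', 'ap_umts_1') 0.99995
```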

    Network monitoring in public clouds: issues, methodologies, and applications

    Cloud computing adoption is growing rapidly thanks to the large technical and economic advantages it brings. Its effects can also be observed in the fast increase of cloud traffic: according to recent forecasts, more than 75% of overall datacenter traffic will be cloud traffic by 2018. Accordingly, providers have made huge investments in network infrastructures. Networks of geographically distributed datacenters have been built, whose operation requires efficient and accurate monitoring activities. However, providers rarely expose information about the state of cloud networks or their design, and seldom make promises about their performance. In this scenario, cloud customers have to cope with performance unpredictability in spite of the primary role played by the network. Indeed, depending on the deployment practices adopted and the functional separation of application layers often implemented, the network heavily influences the performance of cloud services, also impacting costs and revenues. In this thesis, cloud networks are investigated using non-cooperative approaches, i.e., approaches that do not require access to any information restricted to the entities involved in cloud service provision. A platform to monitor cloud networks from the point of view of the customer is presented. The platform enables general customers (even those with limited expertise in the configuration and management of cloud resources) to obtain valuable information about the state of the cloud network, according to a set of factors under their control. A detailed characterization of the cloud network and of its performance is provided, based on extensive experiments performed over the last years on the infrastructures of the two leading cloud providers (Amazon Web Services and Microsoft Azure). The information base gathered through the proposed approaches allows customers to better understand the characteristics of these complex network infrastructures. Moreover, the experimental results are also useful to the provider for understanding the quality of service perceived by customers. By properly interpreting the obtained results, usage guidelines can be devised that make it possible to enhance the achievable performance and reduce costs. As a particular case study, the thesis also shows how monitoring information can be leveraged by the customer to implement convenient mechanisms to scale cloud resources without any a priori knowledge. More generally, we believe that this thesis provides a better-defined picture of the characteristics of these complex cloud network infrastructures, also providing the scientific community with useful tools for characterizing them in the future.
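
    The sketch below conveys the non-cooperative flavor of such monitoring: a customer-side probe that needs nothing from the provider beyond two endpoints the customer already controls. Using TCP connection set-up time as a latency proxy, the peer address, and the port are all assumptions made for the example; the thesis's platform covers many more factors and metrics.

```python
# Customer-side, non-cooperative latency probe between two cloud instances.
# The peer address/port are hypothetical; TCP connect time is used here as a
# coarse latency proxy.
import socket
import statistics
import time

def connect_latency(host: str, port: int, samples: int = 10) -> dict:
    """Measure TCP connection set-up time between this VM and a peer VM."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2.0):
            rtts.append((time.perf_counter() - start) * 1000.0)  # milliseconds
        time.sleep(0.1)   # spread probes slightly to avoid bursts
    return {"min_ms": min(rtts),
            "median_ms": statistics.median(rtts),
            "max_ms": max(rtts)}

if __name__ == "__main__":
    # Hypothetical peer instance in another zone/region, listening on port 8080.
    print(connect_latency("10.0.1.25", 8080))
```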

    A study of the applicability of software-defined networking in industrial networks

    Industrial networks interconnect sensors and actuators to carry out monitoring, control and protection functions in different environments, such as transportation systems or industrial automation systems. These cyber-physical systems are generally supported by multiple data networks, wired or wireless, on which they place new demands, so the control and management of such networks must be coupled to the conditions of the industrial system itself. Requirements thus arise concerning flexibility, maintainability and adaptability, while quality-of-service constraints must not be compromised. However, traditional network control strategies generally do not adapt efficiently to increasingly dynamic and heterogeneous environments. After defining a set of network requirements and analyzing the limitations of current solutions, it follows that control provided independently of the network devices themselves would add flexibility to these networks. Consequently, this thesis explores the applicability of Software-Defined Networking (SDN) to industrial automation systems. As a case study, it considers automation networks based on the IEC 61850 standard, which is widely used in the design of communication networks for power distribution systems such as electrical substations. The IEC 61850 standard defines different services and protocols with strong requirements in terms of network latency and availability, which must be satisfied through traffic-engineering techniques. Exploiting the flexibility and programmability offered by software-defined networks, this thesis therefore proposes a control architecture based on the OpenFlow protocol which, incorporating network management and monitoring technologies, makes it possible to establish traffic policies according to traffic priority and network state. Furthermore, electrical substations are a representative example of critical infrastructure, where a failure can result in serious economic losses and physical and material damage. Such systems must therefore be extremely secure and robust, which makes it advisable to deploy redundant topologies that offer minimal reaction time in the event of failure. To this end, the IEC 62439-3 standard defines the Parallel Redundancy Protocol (PRP) and High-availability Seamless Redundancy (HSR) protocols, which guarantee zero recovery time in case of failure through active data redundancy in Ethernet networks. However, the management of PRP- and HSR-based networks is static and inflexible, which, together with the bandwidth reduction caused by data duplication, makes efficient control of the available resources difficult. In this regard, this thesis proposes redundancy control based on the SDN paradigm for efficient exploitation of meshed topologies, while guaranteeing the availability of control and monitoring applications. In particular, it discusses how the OpenFlow protocol allows an external controller to configure multiple redundant paths between devices with several network interfaces, as well as in wireless environments.
    In this way, critical services can be protected in situations of interference and mobility. The suitability of the proposed solutions has been evaluated mainly by emulating different topologies and traffic types. It has also been studied, analytically and experimentally, how latency is affected by reducing the number of hops compared to using a spanning tree, and by balancing the load in a layer-2 network. In addition, an analysis has been carried out of the improvement in network-resource efficiency and robustness achieved by combining the PRP and HSR protocols with OpenFlow-based control. These results show that the SDN model could significantly improve the performance of a mission-critical industrial network.
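
    As a rough illustration of the controller-side decision described above, the sketch below assigns each IEC 61850 traffic class a path according to its priority and the current link load. The topology, traffic classes and rule format are hypothetical; a real deployment would install the resulting rules as OpenFlow flow entries through an SDN controller, which this sketch does not attempt.

```python
# Illustrative sketch: priority- and load-aware path selection for IEC 61850
# traffic classes. All values and the rule format are hypothetical.

# Candidate paths between an IED and the substation controller (hypothetical).
paths = {
    "short":  ["sw1", "sw2"],
    "backup": ["sw1", "sw3", "sw2"],
}
link_load = {("sw1", "sw2"): 0.80, ("sw1", "sw3"): 0.10, ("sw3", "sw2"): 0.15}

# IEC 61850 traffic classes, most critical first (lower number = higher priority).
classes = [("GOOSE", 0), ("SV", 1), ("MMS", 2)]

def path_cost(path):
    """Worst-case link utilisation along a path."""
    return max(link_load[edge] for edge in zip(path, path[1:]))

def choose_paths():
    """Give the least loaded path to the most critical class; others share the rest."""
    ranked = sorted(paths, key=lambda name: path_cost(paths[name]))
    rules = []
    for traffic_class, prio in classes:
        name = ranked[min(prio, len(ranked) - 1)]
        rules.append({"match": traffic_class, "path": paths[name], "priority": prio})
    return rules

for rule in choose_paths():
    print(rule)
```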

    Trustworthy Knowledge Planes For Federated Distributed Systems

    In federated distributed systems, such as the Internet and the public cloud, the constituent systems can differ in their configuration and provisioning, resulting in significant impacts on the performance, robustness, and security of applications. Yet these systems lack support for distinguishing such characteristics, resulting in uninformed service selection and poor inter-operator coordination. This thesis presents the design and implementation of a trustworthy knowledge plane that can determine such characteristics about autonomous networks on the Internet. A knowledge plane collects the state of network devices and participants. Using this state, applications infer whether a network possesses some characteristic of interest. The knowledge plane uses attestation to attribute state descriptions to the principals that generated them, thereby making the results of inference more trustworthy. Trustworthy knowledge planes enable applications to establish stronger assumptions about their network operating environment, resulting in improved robustness and reduced deployment barriers. We have prototyped the knowledge plane and associated devices. Experience with deploying analyses over production networks demonstrates that knowledge planes impose low cost and can scale to support Internet-scale networks.
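
    A minimal sketch of the attestation idea follows: a state description is used for inference only if it can be attributed to the principal that produced it. An HMAC over a shared key stands in here for the real attestation mechanism (e.g., hardware-backed signatures), and the device names, key store, and toy inference are all illustrative assumptions.

```python
# Minimal sketch: attribute a state description to its producer before using
# it for inference. HMAC with a pre-shared key stands in for real attestation.
import hashlib
import hmac
import json

DEVICE_KEYS = {"router-as65001-1": b"demo-shared-key"}   # hypothetical key store

def publish_state(device_id: str, state: dict) -> dict:
    """A device publishes its state together with an attribution tag."""
    payload = json.dumps(state, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEYS[device_id], payload, hashlib.sha256).hexdigest()
    return {"device": device_id, "state": state, "tag": tag}

def verify_and_infer(report: dict) -> bool:
    """The knowledge plane checks attribution before using the state."""
    payload = json.dumps(report["state"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEYS[report["device"]], payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, report["tag"]):
        raise ValueError("state description cannot be attributed to its claimed principal")
    # Toy inference: does this network filter spoofed source addresses?
    return report["state"].get("urpf_enabled", False)

report = publish_state("router-as65001-1", {"urpf_enabled": True, "as": 65001})
print(verify_and_infer(report))   # True
```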

    Estabelecimento de redes de comunidades sobreponíveis

    One of the research areas of growing interest in telecommunications concerns future mobile communication systems of the 4th generation and beyond. In recent years, the concept of community networks has been developed, in which users cluster according to common interests. These concepts have been explored horizontally, at different layers of the communication stack, ranging from community communication networks (e.g. Seattle Wireless or Personal Telco) to peer-to-peer interest networks. However, these networks are usually presented either as overlay networks or simply as free-association networks. In practice, the notion of a self-organized, fully service- and community-oriented network, with these principles embedded in its architecture, does not yet exist. This work presents a novel contribution in the area of community networks, with an underlying service-oriented architecture that fully supports multiple community networks on the same device, together with the security, trust and service-availability properties required in such scenarios (a node may belong to more than one community network simultaneously). Given their importance for community network systems, particular attention was paid to resource management and access control, both realized in a decentralized manner and with highly scalable mechanisms. To this end, a policy language is presented that supports the creation and management of virtual communities.
    The language is used not only to map the social structure of the community members, but also to manage devices, resources and services owned by the members, in a controlled and distributed manner.
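
    The sketch below shows, in miniature, how a community policy of this kind could be expressed and evaluated. The rule format, roles, services and membership data are invented for illustration; the thesis defines its own policy language and a decentralized evaluation, neither of which is reproduced here.

```python
# Toy community policy: who may use which member-provided service.
policy = {
    "community": "neighbourhood-net",
    "rules": [
        {"role": "member",  "service": "file-share", "action": "read"},
        {"role": "member",  "service": "printer",    "action": "use"},
        {"role": "founder", "service": "file-share", "action": "admin"},
    ],
}

membership = {"alice": ["founder", "member"], "bob": ["member"], "eve": []}

def allowed(user: str, service: str, action: str) -> bool:
    """A request succeeds if some rule matches one of the user's roles."""
    roles = membership.get(user, [])
    return any(r["role"] in roles and r["service"] == service and r["action"] == action
               for r in policy["rules"])

print(allowed("bob", "printer", "use"))        # True: bob is a member
print(allowed("eve", "file-share", "read"))    # False: eve holds no community role
```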