
    MonALISA: A Distributed Monitoring Service Architecture

    The MonALISA (Monitoring Agents in A Large Integrated Services Architecture) system provides a distributed monitoring service. MonALISA is based on a scalable Dynamic Distributed Services Architecture, designed to meet the needs of physics collaborations for monitoring global Grid systems, and is implemented using JINI/JAVA and WSDL/SOAP technologies. The scalability of the system derives from the use of multithreaded Station Servers that host a variety of loosely coupled, self-describing dynamic services; from the ability of each service to register itself and then be discovered and used by any other service or client that requires such information; and from the ability of all services and clients to subscribe to a set of events (state changes) in the system and be notified automatically. The framework integrates several existing monitoring tools and procedures to collect parameters describing computational nodes, applications and network performance. It has built-in SNMP support and network-performance monitoring algorithms that enable it to monitor end-to-end network performance as well as the performance and state of site facilities in a Grid. MonALISA is currently running around the clock on the US CMS test Grid as well as at an increasing number of other sites. It is also being used to monitor the performance and optimize the interconnections among the reflectors in the VRVS system. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003; 8 pages, PDF. PSN MOET00.
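    The register/discover/notify pattern attributed above to the Dynamic Distributed Services Architecture can be pictured with a minimal sketch. The Python snippet below is a hypothetical analogue only (MonALISA itself is built on JINI/Java); the ServiceRegistry class and its method names are invented for illustration.

```python
# Illustrative sketch, not MonALISA code: a registry where dynamic services
# register themselves and clients subscribe to state-change events, mirroring
# the register/discover/notify behaviour described in the abstract.
from collections import defaultdict
from typing import Callable, Dict


class ServiceRegistry:
    def __init__(self) -> None:
        self._services: Dict[str, dict] = {}   # service name -> self-description
        self._subscribers = defaultdict(list)  # event type -> callbacks

    def register(self, name: str, description: dict) -> None:
        """A service registers itself so that others can discover it."""
        self._services[name] = description
        self._notify("service_registered", {"name": name, **description})

    def discover(self, capability: str) -> list:
        """Clients look up services advertising a given capability."""
        return [n for n, d in self._services.items()
                if capability in d.get("capabilities", [])]

    def subscribe(self, event_type: str, callback: Callable[[dict], None]) -> None:
        """Clients and services subscribe to state changes and are notified automatically."""
        self._subscribers[event_type].append(callback)

    def _notify(self, event_type: str, event: dict) -> None:
        for cb in self._subscribers[event_type]:
            cb(event)


# Usage: a monitoring client learns about new farm monitors as they register.
registry = ServiceRegistry()
registry.subscribe("service_registered", lambda e: print("discovered:", e["name"]))
registry.register("US-CMS-farm-monitor", {"capabilities": ["snmp", "network"]})
```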

    A Unified Monitoring Framework for Energy Consumption and Network Traffic

    Providing experimenters with deep insight into the effects of their experiments is a central feature of testbeds. In this paper, we describe Kwapi, a framework designed in the context of the Grid'5000 testbed that unifies measurements for both energy consumption and network traffic. Because all measurements are taken at the infrastructure level (using sensors in power and network equipment), using this framework has no dependencies on the experiments themselves. Initially designed for OpenStack infrastructures, the Kwapi framework allows monitoring and reporting of the energy consumption of distributed platforms. In this article, we present the extension of Kwapi to network monitoring and outline how we overcame several challenges: scaling to a testbed the size of Grid'5000 while still providing high-frequency measurements; providing long-term, loss-less storage of measurements; and handling operational issues when deploying such a tool on a real infrastructure.
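    As a rough illustration of the infrastructure-level collection described above, the sketch below polls power and network probes at a fixed frequency and appends timestamped samples to simple append-only storage. It is a minimal sketch under assumed names: read_sensor() and the CSV layout are placeholders, not Kwapi's actual drivers or API.

```python
# Hypothetical collection loop in the spirit of Kwapi-style drivers: poll
# infrastructure sensors periodically and append loss-less, timestamped rows.
import csv
import random
import time


def read_sensor(probe_id: str) -> float:
    """Placeholder for a real wattmeter or switch-counter read."""
    return random.uniform(80.0, 120.0)  # e.g. watts currently drawn by a node


def collect(probes, period_s=1.0, out_path="measurements.csv", samples=3):
    with open(out_path, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(samples):
            ts = time.time()
            for probe in probes:
                # One row per probe per tick keeps storage loss-less and easy to replay.
                writer.writerow([ts, probe, read_sensor(probe)])
            time.sleep(period_s)


collect(["cluster1-pdu-3", "switch-gw-port12"])
```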

    On Performance and Scalability of Cost-Effective SNMP Managers for Large-Scale Polling

    As networks grow in size and complexity, monitoring them becomes an increasing challenge because of the required polling performance and the heterogeneity of devices. SNMP (Simple Network Management Protocol) is by far the most popular monitoring protocol. However, due to the increase in the number of network devices, it becomes necessary to employ multiple SNMP managers, which is not cost-effective due to the hardware requirements. Additionally, the different proprietary SNMP implementations often require custom configuration as new devices are incorporated into the network. Therefore, current SNMP managers require not only capabilities for large-scale monitoring but also a high degree of flexibility and programmability. In response, we propose an SNMP manager with a flexible multi-threaded architecture, which effectively reduces the hardware resources necessary to poll the increasing number of SNMP agents. In addition, it features a scripting component to deal with the different data representations caused by proprietary implementations. Our experience has shown that SNMP agents can exhibit high variability in their response times; in fact, our findings show a strong correlation between high response times and CPU load. As a solution, we propose and analyze novel adaptive polling algorithms that decrease the load on agents' CPUs while keeping the desired polling rate for fast agents. Finally, we present several real-world use cases in which we show the benefits of the polling algorithms and the scripting component by means of extensive measurement campaigns. This work was supported by Ayudas para la formación de doctores en empresas, Doctorados Industriales, under Grant DI-16-0897.
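    The adaptive polling idea summarised above lends itself to a short sketch: agents whose response times indicate high CPU load are polled less often, while fast agents keep the desired polling rate. The code below is an illustrative back-off heuristic, not the exact algorithms proposed in the paper, and snmp_get() is a stand-in for a real SNMP GET (e.g. via a library such as pysnmp).

```python
# Adaptive per-agent polling sketch: lengthen the polling interval for slow
# (loaded) agents, restore the desired rate for fast ones.
import random
import time


def snmp_get(agent: str, oid: str):
    """Placeholder SNMP GET returning (response_time_seconds, value)."""
    rt = random.uniform(0.01, 1.5)
    return rt, "42"


def poll_agent(agent: str, oid: str, base_interval=5.0, slow_threshold=0.5,
               backoff=2.0, max_interval=60.0, rounds=5):
    interval = base_interval
    for _ in range(rounds):
        rt, value = snmp_get(agent, oid)
        if rt > slow_threshold:
            interval = min(interval * backoff, max_interval)   # relieve a loaded agent
        else:
            interval = max(interval / backoff, base_interval)  # keep the desired rate
        print(f"{agent}: value={value} rt={rt:.2f}s next poll in {interval:.0f}s")
        time.sleep(0)  # a real poller would sleep for `interval` here


poll_agent("10.0.0.7", "1.3.6.1.2.1.2.2.1.10.1")  # ifInOctets of interface 1
```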

    Software Defined Networks based Smart Grid Communication: A Comprehensive Survey

    The current power grid is no longer a feasible solution due to the ever-increasing demand for electricity, aging infrastructure, and reliability issues, and thus requires transformation into a better grid, a.k.a. the smart grid (SG). The key features that distinguish the SG from the conventional electrical power grid are its capability for two-way communication, demand-side management, and real-time pricing. Despite all the advantages that the SG will bring, certain issues are specific to the SG communication system. For instance, network management of current SG systems is complex, time-consuming, and done manually. Moreover, the SG communication (SGC) system is built on different vendor-specific devices and protocols; therefore, current SG systems are not protocol-independent, leading to interoperability issues. Software-defined networking (SDN) has been proposed to monitor and manage communication networks globally. This article serves as a comprehensive survey on SDN-based SGC. We first discuss a taxonomy of the advantages of SDN-based SGC. We then discuss SDN-based SGC architectures, along with case studies. The article provides an in-depth discussion of routing schemes for SDN-based SGC, as well as a detailed survey of security and privacy schemes applied to SDN-based SGC. We furthermore present challenges, open issues, and future research directions related to SDN-based SGC.

    Cross-layer multi-cloud real-time application QoS monitoring and benchmarking as-a-service framework

    Cloud computing provides on-demand access to affordable hardware (e.g., multi-core CPUs, GPUs, disks, and networking equipment) and software (e.g., databases, application servers and data processing frameworks) platforms, with features such as elasticity, pay-per-use, low upfront investment and low time to market. This has led to the proliferation of business-critical applications that leverage various cloud platforms. Such applications hosted on single or multiple cloud platforms have diverse characteristics requiring extensive monitoring and benchmarking mechanisms to ensure run-time Quality of Service (QoS) (e.g., latency and throughput). The process of monitoring and benchmarking cloud applications is still a critical issue to be further studied and addressed. Current monitoring and benchmarking approaches do not provide a holistic view of performance QoS for distributed applications across cloud layers in multi-cloud environments. Furthermore, current monitoring frameworks are limited to monitoring tasks and do not incorporate benchmarking abilities; in other words, there is no unified framework that combines monitoring and benchmarking functionalities. Combining monitoring and benchmarking under one framework empowers the cloud user to gain more in-depth control and awareness of cloud services. The thesis identifies and discusses the major research dimensions and design issues related to developing techniques that can monitor and benchmark an application's components across layers on multiple clouds. Furthermore, the thesis discusses to what extent such research dimensions and design issues are handled by current academic research papers as well as by existing commercial monitoring tools. Moreover, the thesis addresses an important research challenge: how to undertake cross-layer cloud monitoring and benchmarking in multi-cloud environments to provide essential information for effective management of cloud application QoS. It proposes, develops, implements and validates CLAMBS: a Cross-Layer Multi-Cloud Application Monitoring and Benchmarking as-a-Service Framework. The core contributions of this thesis are the development of the CLAMBS framework and the underlying monitoring and benchmarking techniques, which are capable of: i) performing QoS monitoring of application components (e.g. database, web server, application server) that may be deployed across multiple cloud platforms (e.g. Amazon EC2 and Microsoft Azure); and ii) giving visibility into the QoS of individual application components, which is not supported by current monitoring and benchmarking frameworks. Experiments are conducted on real-world multi-cloud platforms to empirically evaluate the framework, and the results validate that CLAMBS can effectively monitor and benchmark applications running across layers on multiple clouds. The thesis presents implementation and evaluation details of the proposed CLAMBS framework. It demonstrates the feasibility and scalability of the proposed framework in real-world environments by implementing a proof-of-concept prototype on multi-cloud platforms. Finally, it presents a model for analysing the communication overheads introduced by various components (e.g. agents and the manager) of CLAMBS in multi-cloud environments.
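    The agent/manager split described for CLAMBS can be pictured with a minimal sketch: an agent co-located with an application component samples QoS metrics and reports them to a central manager that keeps a per-cloud, per-component view. The class names, metrics, and in-process transport below are assumptions for illustration, not the CLAMBS implementation.

```python
# Hypothetical agent/manager sketch for cross-layer, multi-cloud QoS monitoring.
import time
from collections import defaultdict


class Manager:
    def __init__(self):
        self.metrics = defaultdict(list)  # (cloud, component) -> list of samples

    def report(self, cloud: str, component: str, sample: dict) -> None:
        self.metrics[(cloud, component)].append(sample)


class Agent:
    def __init__(self, cloud: str, component: str, manager: Manager):
        self.cloud, self.component, self.manager = cloud, component, manager

    def sample_qos(self) -> dict:
        # Placeholder probe; a real agent would time a request or query the component.
        return {"ts": time.time(), "latency_ms": 12.3, "throughput_rps": 450.0}

    def run_once(self) -> None:
        self.manager.report(self.cloud, self.component, self.sample_qos())


manager = Manager()
for cloud, component in [("amazon-ec2", "database"), ("microsoft-azure", "web-server")]:
    Agent(cloud, component, manager).run_once()
print({key: len(samples) for key, samples in manager.metrics.items()})
```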

    Intelligent Routing for Software-Defined Media Networks

    The multimedia market is an industry with ever-growing demand coupled with strict requirements. Be it in live streaming services or file content broadcast, multimedia providers need to deliver the best possible quality in order to meet their customers' requirements and gain or keep their trust. Multimedia traffic has a high impact on networks and, due to its nature, is sensitive to congestion or hardware failure. Thus, multimedia providers frequently resort to third-party software to monitor quality parameters. Skyline Communications' DataMiner® offers network monitoring, orchestration and automation capabilities across a broad range of applications and environments. These features are enabled by the emergence of Software-Defined Networking (SDN), which provides a global view of networks and the ability to change network properties through software applications. This contrasts with traditional networks, which are rigid, static and difficult to scale up. An application that greatly benefits from the global network view of SDN is routing optimization. Through routing optimization, a network can effectively deliver more traffic by efficiently balancing load across the different links and paths between the endpoints of a service, achieving increased data-transport performance. This dissertation arises from the goal of optimizing DataMiner's routing mechanism by exploring the routing-optimization possibilities enabled by its SDN-like architecture. Both link-cost-optimization and Machine Learning (ML) approaches are evaluated as possible solutions to Skyline's problem, and several experiments were conducted to compare them and understand their impact on network performance while transporting multimedia streams.
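    As a rough illustration of the link-cost-optimization approach evaluated in the dissertation, the sketch below weights each link by its current utilization so that a new media stream is steered away from loaded links. It uses the networkx library for illustration; the topology and numbers are invented, and this is not DataMiner's actual routing mechanism.

```python
# Utilization-aware link costs: a nearly saturated link becomes expensive,
# so shortest-path routing balances load across the alternatives.
import networkx as nx

G = nx.Graph()
# (node, node, capacity in Mbps, current load in Mbps) -- invented example values
links = [("A", "B", 1000, 800), ("B", "D", 1000, 100),
         ("A", "C", 1000, 100), ("C", "D", 1000, 150)]
for u, v, capacity, load in links:
    utilization = load / capacity
    # Cost grows sharply as a link approaches saturation.
    G.add_edge(u, v, weight=1.0 / max(1e-3, 1.0 - utilization))

path = nx.shortest_path(G, "A", "D", weight="weight")
print("selected path for the new stream:", path)  # expected: ['A', 'C', 'D']
```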

    Hybrid SDN Evolution: A Comprehensive Survey of the State-of-the-Art

    Software-Defined Networking (SDN) is an evolutionary networking paradigm that has been adopted by large network and cloud providers, among them the tech giants. However, embracing a new and futuristic paradigm as an alternative to the well-established and mature legacy networking paradigm requires considerable time, financial resources, and technical expertise. Consequently, many enterprises cannot afford it. A compromise solution is therefore a hybrid networking environment (a.k.a. Hybrid SDN (hSDN)) in which SDN functionalities are leveraged while existing traditional network infrastructures are retained. Recently, hSDN has been seen as a viable networking solution for a diverse range of businesses and organizations, and accordingly the body of literature on hSDN research has grown remarkably. On this account, we present this paper as a comprehensive state-of-the-art survey that examines hSDN from many different perspectives.

    Advances in Grid Computing

    This book approaches grid computing with a perspective on the latest achievements in the field, providing an insight into current research trends and advances, and presenting a broad range of innovative research papers. The topics covered in this book include resource and data management, grid architectures and development, and grid-enabled applications. New ideas employing heuristic methods from swarm intelligence or genetic algorithms, as well as quantum encryption, are considered in order to address two main aspects of grid computing: resource management and data management. The book also addresses aspects of grid computing that concern architecture and development, and includes a diverse range of applications for grid computing, including a possible human grid computing system, simulation of the fusion reaction, ubiquitous healthcare service provisioning, and complex water systems.

    A study of the applicability of software-defined networking in industrial networks

    Industrial networks interconnect sensors and actuators to carry out monitoring, control and protection functions in different environments, such as transportation systems or industrial automation systems. These cyber-physical systems are generally supported by multiple data networks, whether wired or wireless, on which they place new demands, so that the control and management of such networks must be coupled to the conditions of the industrial system itself. Requirements therefore arise concerning flexibility, maintainability and adaptability, while quality-of-service constraints must remain unaffected. However, traditional network control strategies generally do not adapt efficiently to increasingly dynamic and heterogeneous environments.

    After defining a set of network requirements and analysing the limitations of current solutions, it follows that control provided independently of the network devices themselves would add flexibility to these networks. Consequently, this thesis explores the applicability of Software-Defined Networking (SDN) to industrial automation systems. To carry out this approach, automation networks based on the IEC 61850 standard are taken as a case study; this standard is widely used in the design of communication networks for power distribution systems such as electrical substations. IEC 61850 defines various services and protocols with strict requirements in terms of network latency and availability, which must be satisfied by means of traffic engineering techniques. As a result, taking advantage of the flexibility and programmability offered by software-defined networks, this thesis proposes a control architecture based on the OpenFlow protocol that, together with network management and monitoring technologies, makes it possible to establish traffic policies according to traffic priority and network state.

    Furthermore, electrical substations are a representative example of critical infrastructure, in which a failure can result in severe economic losses and physical and material damage. Such systems must therefore be extremely secure and robust, which makes it advisable to deploy redundant topologies that offer a minimal reaction time to failures. To this end, the IEC 62439-3 standard defines the Parallel Redundancy Protocol (PRP) and High-availability Seamless Redundancy (HSR) protocols, which guarantee zero recovery time in case of failure through active data redundancy in Ethernet networks. However, the management of PRP- and HSR-based networks is static and inflexible, which, added to the bandwidth reduction caused by data duplication, makes efficient control of the available resources difficult. In this regard, this thesis proposes SDN-based redundancy control for the efficient use of meshed topologies, while guaranteeing the availability of control and monitoring applications. In particular, it discusses how the OpenFlow protocol allows an external controller to configure multiple redundant paths between devices with several network interfaces, as well as in wireless environments, so that critical services can be protected under interference and mobility.

    The suitability of the proposed solutions has been evaluated mainly by emulating different topologies and traffic types. The impact on latency of reducing the number of hops in communications, compared with using a spanning tree, and of balancing load in a layer-2 network has also been studied analytically and experimentally. In addition, an analysis has been carried out of the improvement in the efficiency of network resource usage and of the robustness achieved by combining the PRP and HSR protocols with OpenFlow-based control. These results show that the SDN model could significantly improve the performance of a mission-critical industrial network.
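    The redundancy control discussed above can be sketched briefly: an external controller computes disjoint paths through a meshed substation network and installs them as prioritized flow entries so that critical IEC 61850 traffic survives a single failure. The topology, the flow-entry dictionaries, and install_flow() below are assumptions for illustration (the disjoint-path computation uses networkx); this is not the thesis's actual OpenFlow controller.

```python
# Compute two edge-disjoint paths for a dual-attached device and "install"
# them as primary/backup flow entries matching GOOSE frames (EtherType 0x88B8).
import networkx as nx

G = nx.Graph()
G.add_edges_from([("ied1", "sw1"), ("ied1", "sw2"),
                  ("sw1", "sw3"), ("sw2", "sw4"),
                  ("sw3", "ied2"), ("sw4", "ied2"),
                  ("sw1", "sw2")])


def install_flow(switch: str, entry: dict) -> None:
    print(f"install on {switch}: {entry}")  # stand-in for sending an OpenFlow FLOW_MOD


paths = list(nx.edge_disjoint_paths(G, "ied1", "ied2"))[:2]
for priority, path in zip((200, 100), paths):    # primary path, then backup path
    for here, nxt in zip(path[1:-1], path[2:]):  # program each switch on the path
        install_flow(here, {"match": {"eth_type": 0x88B8},  # IEC 61850 GOOSE frames
                            "priority": priority,
                            "next_hop": nxt})
```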

    Energy Management

    Forecasts point to a huge increase in energy demand over the next 25 years, with a direct and immediate impact on the exhaustion of fossil fuels, the increase in pollution levels, and the global warming that will have significant consequences for all sectors of society. Irrespective of the likelihood of these predictions, or of what researchers in different scientific disciplines may believe or publicly say about how critical the energy situation may be at a world level, it is without doubt one of the great debates that has stirred up public interest in modern times. We should probably already be thinking about the design of a worldwide strategic plan for energy management across the planet. It would include measures to raise awareness, educate the different actors involved, develop policies, provide resources, prioritise actions and establish contingency plans. This process is complex and depends on political, social, economic and technological factors that are hard to take into account simultaneously. Before such a plan is formulated, studies such as those described in this book can serve to illustrate what Information and Communication Technologies have to offer in this sphere and, with luck, to create a reference to encourage investigators in the pursuit of new and better solutions.