
    Observing and Improving the Reliability of Internet Last-mile Links

    People rely on having persistent Internet connectivity from their homes and mobile devices. However, unlike links in the core of the Internet, the links that connect people's homes and mobile devices, known as "last-mile" links, are not redundant. As a result, the reliability of any given link is of paramount concern: when last-mile links fail, people can be completely disconnected from the Internet. In addition to lacking redundancy, Internet last-mile links are vulnerable to failure. Such links can fail because the cables and equipment that make them up are exposed to the elements; for example, weather can cause tree limbs to fall on overhead cables, and flooding can destroy underground equipment. Cellular last-mile links can also fail indirectly: if an application tries to communicate when signal strength is weak, it can drain a smartphone's battery. In this dissertation, I defend the following thesis: By building on existing infrastructure, it is possible to (1) observe the reliability of Internet last-mile links across different weather conditions and link types; (2) improve the energy efficiency of cellular Internet last-mile links; and (3) provide an incrementally deployable, energy-efficient Internet last-mile downlink that is highly resilient to weather-related failures. I defend this thesis by designing, implementing, and evaluating systems.

    Analyzing Internet reliability remotely with probing-based techniques

    Internet reliability for home users is increasingly important as a variety of services that we use migrate to the Internet. Yet, we lack authoritative measures of residential Internet reliability. Measuring reliability requires the detection of Internet outage events experienced by home users. But residential Internet outages are rare events, and they can affect relatively few users. Thus, detecting residential Internet outages requires broad and longitudinal measurements of individual users' Internet connections. However, such measurements of Internet reliability are challenging to obtain accurately and at scale. Probing-based remote outage detection techniques can scale, but their accuracy is questionable. These techniques detect Internet outages across time as well as across the IPv4 address space by sending active probes, such as pings and traceroutes, to users' IP addresses and using probe responses to infer Internet connectivity. However, they can infer false outages because their foundational assumption, that the lack of response to an active probe is indicative of failure, can sometimes be invalid. In this dissertation, I show how to use probing-based techniques to measure residential Internet reliability by defending the following thesis: It is possible to remotely and accurately detect substantial outages experienced by any device with a stable public IP address that typically responds to active probes, and to use these outages to compare reliability across ISPs, media types, geographical areas, and weather conditions. In the first part of the dissertation, I address the inaccuracy of probing-based techniques' detected outages and show how to use probe responses to correctly detect outages. I illustrate two scenarios where the lack of response to an active probe is not indicative of failure. In the first scenario, responses are delayed beyond the prober's timeout, leading these techniques to infer packet loss instead of delay.
In the second scenario, these techniques can falsely infer packet loss when the address they are probing gets dynamically reassigned. I examine how often delayed responses and dynamic reassignment occur across ISPs to quantify the inaccuracy of these techniques. I show how outages can be inferred correctly even in networks with dynamic reassignment, using complementary datasets that can reveal whether an address was dynamically reassigned before, during, and after a detected outage for that address. In the second part of the dissertation, I motivate why the detection of individual addresses' outages is necessary for analyzing residential reliability. An individual address typically represents one residential customer; therefore, detecting outages for individual addresses makes it possible to capture even small outages. Prior probing-based techniques focus on the detection of edge network outages affecting a substantial set of addresses belonging to a BGP prefix or to a /24 address block. Here, I quantitatively demonstrate the extent to which prior techniques can miss residential outages. I show that even individual address outages occur rarely in most networks. When multiple simultaneous outages of related individual addresses occur, there is likely a common underlying cause. With this insight, I develop and evaluate an approach to find outage events that are statistically unlikely to have occurred independently. I show that the majority of such events do not affect entire /24 address blocks or BGP prefixes, and are therefore not likely to be detected by existing techniques that look for outages at these granularities. In the final part of the dissertation, I show how to use individual addresses' outages detected by probing-based techniques to assess Internet reliability across media types, geographical areas, and weather conditions.
Individual outages are not direct measures of reliability: they can occur independently because users disable equipment, or can be observed falsely due to dynamic address renumbering. I use the insight that the statistical change in outage rate in different challenging environments (e.g., a thunderstorm) can quantitatively expose actual outage “inflation”. I show how to study the effect of a challenging environment upon the reliability of a group of addresses by analyzing the inflation in outage rate for that group while the environment is present. This dissertation's contributions will help achieve comprehensive measurements of Internet reliability that can be used to identify vulnerable networks and their challenges, inform which enhancements can help networks improve reliability, and evaluate the efficacy of deployed enhancements over time.
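The outage-rate inflation comparison described above can be sketched in a few lines. The function name and the numbers below are illustrative assumptions for this sketch, not values from the dissertation:

```python
# Hypothetical sketch: compare a group's outage rate during a challenging
# condition (e.g., thunderstorm address-hours) against its baseline rate.
# A ratio well above 1.0 exposes outage "inflation" under that condition.

def outage_inflation(outages_cond, hours_cond, outages_base, hours_base):
    """Return the ratio of outage rates (condition vs. baseline)."""
    rate_cond = outages_cond / hours_cond
    rate_base = outages_base / hours_base
    return rate_cond / rate_base

# Illustrative numbers: 30 outages in 1,000 thunderstorm address-hours vs.
# 200 outages in 100,000 fair-weather address-hours.
inflation = outage_inflation(30, 1_000, 200, 100_000)
print(f"outage rate inflated {inflation:.1f}x during thunderstorms")
```

Comparing rates rather than raw counts is what makes groups of different sizes, observed for different durations, comparable.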

    Use of locator/identifier separation to improve the future Internet routing system

    The Internet evolved from its early days of being a small research network to become a critical infrastructure many organizations and individuals rely on. One dimension of this evolution is the continuous growth of the number of participants in the network, far beyond what the initial designers had in mind. While it does work today, it is widely believed that the current design of the global routing system cannot scale to accommodate future challenges. In 2006, an Internet Architecture Board (IAB) workshop was held to develop a shared understanding of the Internet routing system scalability issues faced by the large backbone operators. The participants documented in RFC 4984 their belief that "routing scalability is the most important problem facing the Internet today and must be solved." A potential solution to the routing scalability problem is ending the semantic overloading of Internet addresses by separating node location from identity. Several proposals exist to apply this idea to current Internet addressing, among which the Locator/Identifier Separation Protocol (LISP) is the only one already shipped in production routers. Separating locators from identifiers results in another level of indirection, and introduces a new problem: how to determine location when the identity is known. The first part of our work analyzes existing proposals for systems that map identifiers to locators and proposes an alternative system within the LISP ecosystem. We created a large-scale Internet topology simulator and used it to compare the performance of three mapping systems: LISP-DHT, LISP+ALT, and the proposed LISP-TREE. We analyzed and contrasted their architectural properties as well. The monitoring projects that supplied Internet routing table growth data over a large timespan inspired us to create LISPmon, a monitoring platform aimed at collecting, storing, and presenting data gathered from the LISP pilot network, early in the deployment of the LISP protocol.
The project web site and collected data are publicly available and will assist researchers in studying the evolution of the LISP mapping system. We also document how the newly introduced LISP network elements fit into the current Internet, the advantages and disadvantages of different deployment options, and how the proposed transition mechanism scenarios could affect the evolution of the global routing system. This work is currently available as an active Internet Engineering Task Force (IETF) Internet Draft. The second part looks at the problem of efficient one-to-many communications, assuming a routing system that implements the above-mentioned locator/identifier split paradigm. We propose a network layer protocol for efficient live streaming. It is incrementally deployable, with changes required only in the same border routers that would be upgraded to support locator/identifier separation. Our proof-of-concept Linux kernel implementation shows the feasibility of the protocol, and our comparison to popular peer-to-peer live streaming systems indicates important savings in inter-domain traffic. We believe LISP has considerable potential to be adopted, and an important aspect of this work is how it might contribute toward a better mapping system design, by showing the weaknesses of current favorites and proposing alternatives. The presented results are an important step forward in addressing the routing scalability problem described in RFC 4984 and improving the delivery of live streaming video over the Internet.
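As a hedged illustration of the extra level of indirection that locator/identifier separation introduces (this is a toy sketch, not the LISP-DHT, LISP+ALT, or LISP-TREE designs themselves), resolving an endpoint identifier (EID) to its routing locators (RLOCs) through a mapping system, with a local map-cache, might look like:

```python
# Toy mapping system: EID prefixes map to lists of RLOCs. All addresses
# below are illustrative documentation-range values, not real deployments.
MAPPING_SYSTEM = {
    "198.51.100.0/24": ["203.0.113.1", "203.0.113.2"],
}

CACHE = {}  # local map-cache, consulted before querying the mapping system

def resolve(eid_prefix):
    """Resolve an EID prefix to its RLOCs, caching the answer locally."""
    if eid_prefix not in CACHE:
        CACHE[eid_prefix] = MAPPING_SYSTEM.get(eid_prefix, [])
    return CACHE[eid_prefix]

print(resolve("198.51.100.0/24"))  # ['203.0.113.1', '203.0.113.2']
```

The cache is where the mapping-system designs compared in the thesis differ most in practice: lookup latency and table size depend on how misses are resolved.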

    Actas da 10ª Conferência sobre Redes de Computadores

    Universidade do Minho; CCTC; Centro Algoritmi; Cisco Systems; IEEE Portugal Section

    A pragmatic approach toward securing inter-domain routing

    Internet security poses complex challenges at different levels, where even the basic requirement of available Internet connectivity can sometimes become a conundrum. Recent Internet service disruption events have made the vulnerability of the Internet apparent, and exposed the current limitations of Internet security measures as well. Usually, the main cause of such incidents, even in the presence of the security measures proposed so far, is the unintended or intended exploitation of loopholes in the protocols that govern the Internet. In this thesis, we focus on the security of two protocols that were conceived with little or no security mechanisms but play a key role in both the present and the future of the Internet, namely the Border Gateway Protocol (BGP) and the Locator/Identifier Separation Protocol (LISP). The BGP protocol, being the de facto inter-domain routing protocol of the Internet, plays a crucial role in current communications. Due to its lack of any intrinsic security mechanism, it is prone to a number of vulnerabilities that can result in partial paralysis of the Internet. In light of this, numerous security strategies were proposed, but none of them were pragmatic enough to be widely accepted, and only minor security tweaks have been adopted. Even the recent IETF Secure Inter-Domain Routing (SIDR) Working Group (WG) efforts, including the Resource Public Key Infrastructure (RPKI), Route Origin Authorizations (ROAs), and BGP Security (BGPSEC), do not address policy-related security issues such as Route Leaks (RL). Route leaks occur due to violation of the export routing policies among Autonomous Systems (ASes). They not only have the potential to cause large-scale Internet service disruptions but can result in traffic hijacking as well.
In this part of the thesis, we examine the route leak problem and propose pragmatic security methodologies that a) require no changes to the BGP protocol, b) depend neither on third-party information nor on third-party security infrastructure, and c) are self-beneficial regardless of their adoption by other players. Our main contributions in this part of the thesis include a) a theoretical framework which, under realistic assumptions, enables a domain to autonomously determine whether a particular received route advertisement corresponds to a route leak, and b) three incremental detection techniques, namely Cross-Path (CP), Benign Fool Back (BFB), and Reverse Benign Fool Back (R-BFB). Our strength resides in the fact that these detection techniques solely require the analytical usage of in-house control-plane, data-plane, and direct neighbor relationship information. We evaluate the performance of the three proposed route leak detection techniques both through real-time experiments and through large-scale simulations. Our results show that the proposed detection techniques achieve high success rates in countering route leaks in different scenarios. The motivation behind the LISP protocol has shifted over time from solving routing scalability issues in the core Internet to a set of vital use cases for which LISP stands as a technology enabler. The IETF's LISP WG has recently started to work toward securing LISP, but the protocol still lacks end-to-end mechanisms for securing the overall registration process on the mapping system, ensuring RLOC and EID authorization. As a result, LISP is unprotected against different attacks, such as RLOC spoofing, which can cripple even its basic functionality. For that purpose, in this part of the thesis we address the above-mentioned issues and propose practical solutions that counter them. Our solutions take advantage of the low technological inertia of the LISP protocol.
The changes proposed for the LISP protocol and the utilization of existing security infrastructure in our solutions enable resource authorizations and lay the foundation for the needed end-to-end security.
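The export-policy violation that defines a route leak can be illustrated with the textbook valley-free (Gao-Rexford) rule. This sketch is a deliberate simplification and is not the CP, BFB, or R-BFB techniques proposed in the thesis:

```python
# Valley-free export rule: a route learned from a provider or a peer must
# only be re-advertised to customers; exporting it to another provider or
# peer is a route leak. Routes learned from customers may go anywhere.

def is_route_leak(learned_from, advertised_to):
    """Return True if this export violates the valley-free policy."""
    if learned_from == "customer":
        return False  # customer routes are exportable to anyone
    return advertised_to != "customer"

assert is_route_leak("provider", "peer")          # leak
assert not is_route_leak("customer", "provider")  # legitimate export
```

The difficulty the thesis addresses is that an AS cannot directly observe its neighbors' business relationships, so this check cannot be applied naively from third-party data.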

    Proceedings of the Third Edition of the Annual Conference on Wireless On-demand Network Systems and Services (WONS 2006)

    This file gathers in a single document all of the papers accepted for the WONS 2006 conference (http://citi.insa-lyon.fr/wons2006/index.html). This year, 56 papers were submitted. From the Open Call submissions we accepted 16 papers as full papers (up to 12 pages) and 8 papers as short papers (up to 6 pages). All the accepted papers will be presented orally in the Workshop sessions. More precisely, the selected papers have been organized into 7 sessions: Channel Access and Scheduling, Energy-aware Protocols, QoS in Mobile Ad-Hoc Networks, Multihop Performance Issues, Wireless Internet, Applications, and finally Security Issues. The papers (and authors) come from all parts of the world, confirming the international stature of this Workshop. The majority of the contributions are from Europe (France, Germany, Greece, Italy, Netherlands, Norway, Switzerland, UK); however, a significant number are from Australia, Brazil, Canada, Iran, Korea, and the USA. The proceedings also include two invited papers. We take this opportunity to thank all the authors who submitted their papers to WONS 2006. You helped make this event a success once again.

    SoS: self-organizing substrates

    Large-scale networked systems often, whether by design or by chance, exhibit self-organizing properties. Understanding self-organization using tools from cybernetics, particularly modeling such systems as Markov processes, is a first step towards a formal framework which can be used in (decentralized) systems research and design. Interesting aspects to look for include the time evolution of a system, whether and when a system converges to some absorbing state or stabilizes into a dynamic (and stable) equilibrium, and how it performs in such an equilibrium state. Such a formal framework brings objectivity into systems research, helping discern facts from artefacts as well as providing tools for the quantitative evaluation of such systems. This thesis introduces such formalism for analyzing and evaluating peer-to-peer (P2P) systems in order to better understand their dynamics, which in turn helps produce better designs. In particular, this thesis develops and studies the fundamental building blocks for a P2P storage system. In the process, the design and evaluation methodology we pursue illustrates the typical methodological approaches in studying and designing self-organizing systems, and how the analysis methodology influences the design of the algorithms themselves to meet system design goals (preferably with quantifiable guarantees). These goals include efficiency, availability and durability, load balance, high fault tolerance, and self-maintenance even in adversarial conditions such as arbitrarily skewed and dynamic load and high membership dynamics (churn), apart, of course, from the specific functionalities that the system is supposed to provide. The functionalities we study here are some of the fundamental building blocks for various P2P applications and systems, including P2P storage systems, and hence we call them substrates or base infrastructure.
These elemental functionalities include: (i) reliable and efficient discovery of resources distributed over the network in a decentralized manner; (ii) communication among participants in an address-independent manner, i.e., even when peers change their physical addresses; (iii) availability and persistence of stored objects in the network, irrespective of the availability or departure of individual participants from the system at any time; and (iv) freshness of the stored objects/resources (up-to-date replicas). Internet-scale distributed index structures (often termed structured overlays) are used for discovery and access of resources in a decentralized setting. We propose rapid construction from scratch and maintenance of the P-Grid overlay network in a self-organized manner, so as to provide efficient search of both individual keys and whole ranges of keys, while providing good load-balancing characteristics for diverse kinds of arbitrarily skewed loads: storage and replication, query forwarding, and query answering loads. For fast overlay construction we employ recursive partitioning of the key space so that the resulting partitions are balanced with respect to storage load and replication. The proper algorithmic parameters for such partitioning are derived from a transient analysis of the partitioning process, which has the Markov property. The preservation of ordering information in P-Grid, such that queries other than exact queries (like range queries) can be handled efficiently and rather trivially, makes P-Grid suitable for data-oriented applications. Fast overlay construction is analogous to building an index on a new set of keys, making P-Grid suitable as the underlying indexing mechanism for peer-to-peer information retrieval applications, among other potential applications which may require frequent indexing of new attributes apart from regular updates to an existing index.
In order to deal with membership dynamics, in particular peers' changing physical addresses across sessions, the overlay itself is used as a (self-referential) directory service for maintaining the participating peers' physical addresses across sessions. Exploiting this self-referential directory, a family of overlay maintenance schemes has been designed with lower communication overhead than other overlay maintenance strategies. The notion of a dynamic equilibrium study for overlays under continuous churn and repairs, modeled as a Markov process, was introduced in order to evaluate and compare the overlay maintenance schemes. While the self-referential directory was originally invented to realize overlay maintenance schemes with lower overheads than existing ones, it is generic in nature and can be used for various other purposes, e.g., as a decentralized public key infrastructure. Persistence of peer identity across sessions, in spite of changes in physical address, provides a logical independence of the overlay network from the underlying physical network. This has many other potential usages, for example, efficient maintenance mechanisms for P2P storage systems and P2P trust and reputation management. We specifically look into the dynamics of maintaining redundancy for storage systems and design a novel lazy maintenance strategy. This strategy is algorithmically a simple variant of existing maintenance strategies which adapts to the system dynamics. This randomized lazy maintenance strategy thus explores the cost-performance trade-offs of the storage maintenance operations in a self-organizing manner. We model the storage system (redundancy), under churn and maintenance, as a Markov process. We perform an equilibrium study to show that the system operates in a more stable dynamic equilibrium with our strategy than with the existing maintenance scheme, for comparable overheads.
In particular, we show that our maintenance scheme provides substantial performance gains in terms of maintenance overhead and the system's resilience in the presence of churn and correlated failures. Finally, we propose a gossip mechanism which works with lower communication overhead than existing approaches for communication among a relatively large set of unreliable peers, without assuming any specific structure for their mutual connectivity. We use such a communication primitive for propagating replica updates in P2P systems, facilitating the management of mutable content. The peer population affected by a gossip can be modeled as a Markov process. Studying the transient spread of gossips helps in choosing proper algorithm parameters to reduce communication overhead while guaranteeing coverage of online peers. Each of these substrates was developed to find practical solutions for real problems. Put together, they can be used in other applications, including a P2P storage system with support for efficient lookups and inserts, membership dynamics, content mutation and updates, persistence, and availability. Many of the ideas have already been implemented in real systems, and several others are on the way to being integrated into the implementations. There are two principal contributions of this dissertation. First, it provides designs of P2P systems which are useful for end users as well as for other application developers, who can build upon these existing systems. Secondly, it adapts and introduces a methodology for analyzing a system's time evolution (tools typically used in diverse domains including physics and cybernetics) to study the long-run behavior of P2P systems, and uses this methodology to (re-)design appropriate algorithms and evaluate them. We observed that studying P2P systems from the perspective of complex systems reveals their inner dynamics, and hence ways to exploit those dynamics for suitable or better algorithms.
In other words, the analysis methodology in itself strongly influences and inspires the way we design such systems. We believe that such an approach of orchestrating self-organization in Internet-scale systems, where the algorithms and the analysis methodology have strong mutual influence, will significantly change the way such future systems are developed and evaluated. We envision that this approach will serve as a particularly important tool for the nascent but fast-moving P2P systems research and development community.
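As a toy illustration of the gossip dynamics analyzed above, the following simulation assumes a uniform random peer-sampling model (an assumption of this sketch, not the thesis's exact protocol) and counts the rounds until every peer is informed:

```python
import random

# Push gossip among n peers: each round, every informed peer forwards the
# update to one peer chosen uniformly at random. The size of the informed
# population over rounds is the Markov process mentioned in the text.

def gossip_rounds(n, seed=0):
    """Simulate push gossip; return rounds until all n peers are informed."""
    rng = random.Random(seed)
    informed = {0}  # peer 0 originates the update
    rounds = 0
    while len(informed) < n:
        newly = {rng.randrange(n) for _ in informed}
        informed |= newly
        rounds += 1
    return rounds

print(gossip_rounds(1000))  # typically O(log n) rounds
```

The informed set roughly doubles per round early on, then a coupon-collector tail covers the last stragglers; studying that transient is what guides the parameter choices the abstract refers to.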

    Exploiting Host Availability in Distributed Systems.

    As distributed systems become more decentralized, fluctuating host availability is an increasingly disruptive phenomenon. Older systems such as AFS used a small number of well-maintained, highly available machines to coordinate access to shared client state; server uptime (and thus service availability) was expected to be high. Newer services scale to larger numbers of clients by increasing the number of servers. In these systems, the responsibility for maintaining the service abstraction is spread amongst thousands of machines. In the extreme, each client is also a server that must respond to requests from its peers, and each host can opt in or out of the system at any time. In these operating environments, a non-trivial fraction of servers will be unavailable at any given time. This diffusion of responsibility from a few dedicated hosts to many unreliable ones has a dramatic impact on distributed system design, since it is difficult to build robust applications atop a partially available, potentially untrusted substrate. This dissertation explores one aspect of this challenge: how can a distributed system measure the fluctuating availability of its constituent hosts, and how can it use an understanding of this churn to improve performance and security? This dissertation extends the previous literature in three ways. First, it introduces new analytical techniques for characterizing availability data, applying these techniques to several real networks and explaining the distinct uptime patterns found within. Second, it introduces new methods for predicting future availability, both at the granularity of individual hosts and of clusters of hosts. Third, it describes how to use these new techniques to improve the performance and security of distributed systems. Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/58445/1/jmickens_1.pd
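A minimal sketch of the kind of per-host availability prediction described above; the class name, the hour-of-day granularity, and the 0.5 no-history fallback are assumptions of this illustration, not the dissertation's methods:

```python
from collections import defaultdict

# Estimate the probability that a host is online at a given hour of day
# from its observed uptime history (one observation per hour slot).

class HourlyPredictor:
    def __init__(self):
        self.up = defaultdict(int)     # online observations per hour
        self.total = defaultdict(int)  # total observations per hour

    def observe(self, hour, online):
        self.total[hour] += 1
        self.up[hour] += int(online)

    def predict(self, hour):
        """Estimated probability the host is online at this hour."""
        if self.total[hour] == 0:
            return 0.5  # no history: fall back to a coin flip
        return self.up[hour] / self.total[hour]

p = HourlyPredictor()
for day in range(10):
    p.observe(hour=3, online=(day % 2 == 0))  # up every other day at 3am
print(p.predict(3))  # 0.5
```

Predicting at the cluster granularity mentioned in the abstract would pool observations across hosts with similar uptime patterns instead of keeping per-host counters.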

    An ontology-based approach toward the configuration of heterogeneous network devices

    Despite numerous standardization efforts, semantic issues persist in many subfields of networking. The inability to exchange data unambiguously between information systems and human resources is an issue that hinders technology implementation, semantic interoperability, service deployment, network management, and technology migration, among many others. In this thesis, we approach the semantic issues in two critical subfields of networking, namely network configuration management and network addressing architectures. What makes the study of these areas rather appealing is that in both scenarios semantic issues have been around from the very early days of networking; however, as networks continue to grow in size and complexity, current practices are becoming neither scalable nor practical. One of the most complex and essential tasks in network management is the configuration of network devices. The lack of comprehensive and standard means for modifying and controlling the configuration of network elements has led to the continuous and extended use of proprietary Command Line Interfaces (CLIs). Unfortunately, CLIs are generally both device- and vendor-specific. In the context of heterogeneous network infrastructures, i.e., networks typically composed of multiple devices from different vendors, the use of several CLIs raises serious Operation, Administration and Management (OAM) issues. Accordingly, network administrators are forced to gain specialized expertise and to continuously keep their knowledge and skills up to date as new features, system upgrades, or technologies appear. Overall, the utilization of proprietary mechanisms allows neither sharing knowledge consistently across vendors' domains nor reusing configurations to achieve full automation of network configuration tasks, which is typically required in autonomic management.
Given this heterogeneity, the help feature that CLIs typically provide is a useful source of knowledge for enabling semantic interpretation of a vendor's configuration space. The large amount of information a network administrator must learn and manage makes Information Extraction (IE) and other forms of natural language analysis from the Artificial Intelligence (AI) field key enablers for the network device configuration space. This thesis presents the design and implementation specification of the first Ontology-Based Information Extraction (OBIE) system operating on the CLI of network devices, for the automation and abstraction of device configurations. Moreover, the so-called semantic overload of IP addresses, wherein addresses are both identifiers and locators of a node at the same time, is one of the main constraints on the mobility of network hosts, multi-homing, and the scalability of the routing system. In light of this, numerous approaches have emerged in an effort to decouple the semantics of the network addressing scheme. In this thesis, we approach this issue from two perspectives: a non-disruptive (i.e., evolutionary) solution for the current Internet, and a clean-slate approach for the Future Internet. In the first scenario, we analyze the Locator/Identifier Separation Protocol (LISP), as it is currently one of the strongest solutions to the semantic overload issue. However, its adoption is hindered by existing problems in the proposed mapping systems. Herein, we propose the LISP Redundancy Protocol (LRP), aimed at complementing the LISP framework and strengthening the feasibility of deployment, while at the same time minimizing mapping-table size and latency and maximizing reachability in the network.
In the second scenario, we explore TARIFA, a Next Generation Internet architecture, and introduce a novel service-centric addressing scheme which aims to overcome the issues related to routing and the semantic overload of IP addresses.