
    On the scalability of LISP and advanced overlaid services

    In just four decades the Internet has gone from a lab experiment to a worldwide, business-critical infrastructure that caters to the communication needs of almost half of the Earth's population. With these figures on its side, arguing against the Internet's scalability would seem rather unwise. However, the Internet's organic growth is far from finished and, as billions of new devices are expected to join in the not so distant future, scalability, or the lack thereof, is commonly believed to be the Internet's biggest problem. While consensus on the exact form of the solution is yet to be found, the need for a semantic decoupling of a node's location and identity, often called a location/identity separation, is generally accepted as a promising way forward. Typically, this requires the introduction of new network elements that provide the binding of the two namespaces, and of caches that keep mapping lookups from hampering router packet-forwarding speeds. But due to this increased complexity, the solution's scalability is itself questioned. This dissertation evaluates the suitability of the Locator/ID Separation Protocol (LISP), one of the most successful proposals to follow the location/identity separation guideline, as a solution to the Internet's scalability problem. However, because the deployment of any new architecture depends not only on solving the incumbent's technical problems but also on the added value that it brings, our approach follows two lines. In the first part of the thesis we develop the analytical tools to evaluate LISP's control-plane scalability, while in the second we show that the required control/data plane separation provides important benefits that could drive LISP's adoption. As a first step to evaluating LISP's scalability, we propose a methodology for an analytical study of cache performance that relies on working-set theory to estimate the locality of reference of traffic. One of our main contributions is identifying the conditions network traffic must satisfy for the theory to be applicable; we then use this result to develop a model that predicts average cache miss rates. Furthermore, we study the model's suitability for long-term cache provisioning and assess the cache's vulnerability against malicious users through an extension that accounts for cache-polluting traffic. As a last step, we investigate the main sources of locality and their impact on the asymptotic scalability of the LISP cache. An important finding here is that the destination popularity distribution can accurately describe cache performance, independently of the much harder-to-model short-term correlations. Under a small set of assumptions, this result finally enables us to characterize asymptotic scalability with respect to the number of prefixes (Internet growth) and of users (growth of the LISP site). We validate the models and discuss the accuracy of our assumptions using several one-day-long packet traces collected at the egress points of a campus network and an academic network. To show the added benefits that could drive LISP's adoption, in the second part of the thesis we investigate the possibilities of performing inter-domain multicast and of improving intra-domain routing. Although the idea of using overlaid services to improve underlay performance is not new, this dissertation argues that LISP offers the right tools to implement such services reliably and easily, owing to its reliance on network-layer instead of application-layer support.
In particular, we present and extensively evaluate Lcast, a network-layer single-source multicast framework designed to merge the robustness and efficiency of IP multicast with the configurability and low deployment cost of application-layer overlays. Additionally, we describe and evaluate LISP-MPS, an architecture capable of exploiting LISP to minimize intra-domain routing tables and to ensure, among others, support for multi-protocol switching and virtual networks.
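
    To make the working-set approach concrete, the sketch below estimates the miss rate of a working-set cache over a synthetic stream of destination prefixes. It is a minimal illustration of the underlying theory, not the thesis's model; the Zipf popularity parameter, the stream length, and the window sizes are assumptions chosen only for the example.

        # Minimal sketch of a working-set miss-rate estimate, assuming a
        # synthetic Zipf-popularity stream of destination prefixes; an
        # illustration of working-set theory, not the thesis's model.
        import numpy as np

        rng = np.random.default_rng(7)
        stream = rng.zipf(1.25, size=200_000)   # hypothetical prefix IDs (Zipf ranks)

        # Reuse time of each reference: number of references since the same
        # prefix was last seen (infinite for first occurrences).
        last_seen, reuse = {}, np.empty(len(stream))
        for t, p in enumerate(stream):
            reuse[t] = t - last_seen.get(p, -np.inf)
            last_seen[p] = t

        # A working-set cache with window T misses exactly on references whose
        # reuse time exceeds T; Denning's identity m(T) = s(T+1) - s(T) then
        # ties this miss rate to the average working-set (cache) size s(T).
        for T in (100, 1_000, 10_000):
            print(f"window T={T:>6}: estimated miss rate {np.mean(reuse > T):.3f}")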

    Internet of Things: From Hype to Reality

    The Internet of Things (IoT) has gained significant mindshare and attention in academia and industry, especially over the past few years. The reasons behind this interest are the potential capabilities that IoT promises to offer. On the personal level, it paints a picture of a future world where all the things in our ambient environment are connected to the Internet and seamlessly communicate with each other to operate intelligently. The ultimate goal is to enable objects around us to efficiently sense our surroundings, inexpensively communicate, and ultimately create a better environment for us: one where everyday objects act based on what we need and like, without explicit instructions.

    GMPLS-OBS interoperability and routing scalability in the Internet

    The popularization of the Internet has turned the telecom world upside down over the last two decades. Network operators, vendors and service providers are being challenged to adapt to Internet requirements so as to properly serve the huge number of demanding users (residential and business). The Internet (a data-oriented network) is supported by an IP packet-switched architecture running on top of a circuit-switched, optical-based architecture (a voice-oriented network), which results in a complex and rather costly infrastructure for the transport of IP traffic (the dominant traffic nowadays). A simpler, IP-adapted network architecture is therefore desired. From the transport-network perspective, both Generalized Multi-Protocol Label Switching (GMPLS) and Optical Burst Switching (OBS) are part of the set of technologies for progressing towards an IP-over-WDM architecture, providing intelligence in the control and management of resources (GMPLS) as well as good network-resource access and usage (OBS). The GMPLS framework is the key enabler for orchestrating a unified optical network control, and thus for reducing network operational expenses (OPEX) while increasing operators' revenues. Simultaneously, OBS is one of the switching technologies best positioned to realize the envisioned IP-over-WDM network architecture, leveraging the statistical multiplexing of data-plane resources to enable sub-wavelength switching in optical networks. Despite the GMPLS principle of unified control, little effort has been put into extending it to incorporate the OBS technology, and many open questions remain. From the IP-network perspective, the Internet is facing scalability issues as enormous quantities of service instances and devices must be managed. It is now widely believed that the current Internet's features and mechanisms cannot cope with the size and dynamics of the Future Internet. Compact routing is one of the main breakthrough paradigms in the design of a routing system that scales with the Future Internet's requirements. It intends to address the fundamental limits of current stretch-1 shortest-path routing in terms of routing table (RT) scalability, aiming at sub-linear growth. Although "static" compact routing works fine, scaling logarithmically in the number of nodes even on scale-free graphs such as the Internet, it does not handle dynamic graphs. Moreover, as multimedia content and services proliferate, multicast is again under the spotlight, since bandwidth efficiency and small RTs are desired; this makes the problem even worse, as more routing entries must be maintained. In a nutshell, the main objective of this thesis is to contribute fully detailed solutions dealing with i) GMPLS-OBS control interoperability (Part I), fostering unified control over multiple switching domains and reducing redundancy in IP transport. The proposed solution overcomes the technology-specific interoperability issues and offers (absolute) QoS guarantees, addressing OBS performance issues by making use of the GMPLS traffic-engineering (TE) features. Key extensions to the GMPLS protocol standards are equally addressed; and ii) a new compact routing scheme for multicast scenarios, in order to overcome the scalability problem of the Future Internet's inter-domain routing system (Part II). To this end, the first known name-independent (i.e. topology-unaware) compact multicast routing algorithm is proposed.
In addition, the AnyTraffic Labeled concept is introduced, which saves forwarding entries by sharing a single forwarding entry between unicast and multicast traffic. Exhaustive simulation campaigns are run in both cases in order to assess the reliability and feasibility of the proposals.
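
    As a rough illustration of the stretch/state trade-off that compact routing targets, the following sketch routes via each destination's nearest landmark on a scale-free graph and measures the resulting stretch. It is a toy landmark scheme, not the name-independent algorithm proposed in the thesis; the graph size and landmark count are arbitrary assumptions.

        # Toy landmark-based compact routing, for illustration only: each node
        # keeps routes to a few landmarks instead of to all n nodes, and a
        # packet for v is forwarded via v's nearest landmark L(v).
        import random
        import networkx as nx

        random.seed(1)
        G = nx.barabasi_albert_graph(400, 2, seed=1)        # scale-free topology
        landmarks = random.sample(list(G.nodes), 8)

        dist = dict(nx.all_pairs_shortest_path_length(G))   # oracle for evaluation
        nearest = {v: min(landmarks, key=lambda l: dist[v][l]) for v in G.nodes}

        samples = random.sample([(u, v) for u in G for v in G if u != v], 2_000)
        stretch = [(dist[u][nearest[v]] + dist[nearest[v]][v]) / dist[u][v]
                   for u, v in samples]
        print(f"mean stretch {sum(stretch) / len(stretch):.2f} with "
              f"{len(landmarks)} routing entries per node, vs stretch 1 at O(n) state")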

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach, consisting of four distinct steps, as well as specific action items to be performed at every step. The discussion also covers language and tool support and the challenges arising from the transformation.

    A metamodel to annotate knowledge based engineering codes as enterprise knowledge resources

    The encoding of Knowledge Based Engineering (KBE) software applications is becoming a prominent tool for the automation of knowledge-intensive tasks carried out using Computer Aided Design (CAD) technology. However, limitations exist in the ability to manage the engineering knowledge models embedded in these executable KBE applications. This research proposes a metamodel to annotate encoded KBE applications. Resulting from the annotation, the executable knowledge models (XKMs) become explicit knowledge resources whose content can be better accessed and managed. The attachment of metadata to data sets in enterprise repositories is a necessary step to identify and index them so that they can be queried, browsed and changed. The sophistication of metadata models for these data items ranges from simple numeric indexing to richer representations describing their context (e.g. author, creation date), their internal structure and their content. Current engineering data repositories, such as Product Data Management (PDM) and Product Lifecycle Management (PLM) systems, offer predefined metamodels to annotate a range of engineering data items, including CAD files and special types of documents. At the moment there is no metadata model specifically designed to annotate KBE codes, so an undifferentiated metadata model has to be used for XKMs; in that case, the only information the system retains about them is context metadata. Once an instance of the metadata is attached to an XKM, it can be used as its identifier within an enterprise data repository. The proposed metamodel contains abstract entities to annotate XKMs. The resulting descriptive model of an XKM captures its internal structure and its operation at different levels of granularity. The particular design of the proposed metamodel positions it at a level of abstraction between non-executable domain knowledge models and executable KBE applications. This design choice is made to support the use of the metadata not only as an informative model but also as an executable one. Achieving this target is becoming possible through the emergence of semantic modelling standards that allow data models to be described independently of the language of implementation. Using this approach, code and metadata are generated automatically using mapping rules that result from the semantic agreement between models and specific syntax rules. The immediate application of the developed metamodel is to annotate XKMs within PLM systems. The approach contributes not only to systematically storing instances of XKMs but also to managing the lifecycle of the engineering knowledge encoded within them. The proposed representation provides a more comprehensive way for non-experts in KBE languages to understand the code. On this basis, changes to the metamodel can be automatically traced back to the code and vice versa. During the research, evidence has been gathered from the community of KBE technology users and vendors on the need to support this research effort. In the long term, the research contributes to the use of PLM systems as a platform for engineering knowledge management.
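
    As a purely hypothetical sketch of what an annotation conforming to such a metamodel might carry, consider the record below; all class and field names are illustrative assumptions, not the entities of the proposed metamodel.

        # Hypothetical annotation record for an XKM; class and field names are
        # illustrative assumptions, not the metamodel's actual schema.
        from dataclasses import dataclass, field
        from datetime import date

        @dataclass
        class ContextMetadata:            # context layer: who/when, for indexing
            author: str
            created: date
            cad_system: str

        @dataclass
        class KnowledgeUnit:              # structural layer: one encoded rule
            name: str
            granularity: str              # e.g. "rule", "module", "application"
            depends_on: list[str] = field(default_factory=list)

        @dataclass
        class XKMAnnotation:              # the metadata item attached in a PLM system
            identifier: str
            context: ContextMetadata
            units: list[KnowledgeUnit]

        annotation = XKMAnnotation(
            identifier="XKM-0042",
            context=ContextMetadata("j.doe", date(2015, 3, 1), "CATIA"),
            units=[KnowledgeUnit("wing_rib_spacing", "rule")],
        )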

    Strategies for Managing Linked Enterprise Data

    Data, information and knowledge have become key assets of our 21st-century economy. As a result, data and knowledge management have become key tasks for sustainable development and business success. Often, knowledge is not explicitly represented, residing in the minds of people or scattered among a variety of data sources. Knowledge is inherently associated with semantics that convey its meaning to a human or machine agent. The Linked Data concept facilitates the semantic integration of heterogeneous data sources. However, we still lack an effective knowledge-integration strategy applicable to enterprise scenarios, one that balances large amounts of data stored in legacy information systems and data lakes with tailored domain-specific ontologies that formally describe real-world concepts. In this thesis we investigate strategies for managing linked enterprise data, analyzing how actionable knowledge can be derived from enterprise data by leveraging knowledge graphs. Actionable knowledge provides valuable insights, supports decision makers with clear, interpretable arguments, and keeps its inference processes explainable. The benefits of employing actionable knowledge and a coherent strategy for managing it span from a holistic semantic representation layer over enterprise data, i.e., representing numerous data sources as one consistent and integrated knowledge source, to unified interaction mechanisms that let other systems effectively and efficiently leverage such actionable knowledge. Several challenges have to be addressed at different conceptual levels in pursuing this goal, i.e., means for representing knowledge, semantic integration of raw data sources and subsequent knowledge extraction, communication interfaces, and implementation. To tackle these challenges we present the concept of Enterprise Knowledge Graphs (EKGs) and describe their characteristics and advantages compared to existing approaches. We study each challenge with regard to using EKGs and demonstrate their efficiency. In particular, EKGs are able to reduce the semantic data-integration effort when processing large-scale heterogeneous datasets. Then, having built a consistent logical integration layer with the heterogeneity hidden behind the scenes, EKGs unify query processing and enable effective communication interfaces for other enterprise systems. The achieved results allow us to conclude that strategies for managing linked enterprise data based on EKGs exhibit reasonable performance, comply with enterprise requirements, and ensure integrated data and knowledge management throughout its life cycle.
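
    A minimal sketch of the idea, using rdflib: facts from two hypothetical source systems land in one graph that can then be queried uniformly with SPARQL. The namespace, resources and properties are invented for illustration and are not the thesis's vocabulary.

        # Minimal EKG-style integration sketch with rdflib; all resource and
        # property names are invented for illustration.
        from rdflib import Graph, Namespace, RDF

        EX = Namespace("http://example.org/ekg/")
        g = Graph()

        # Facts as they might arrive from an HR system and a CRM system.
        g.add((EX.alice, RDF.type, EX.Employee))
        g.add((EX.alice, EX.worksOn, EX.projectX))
        g.add((EX.projectX, EX.hasCustomer, EX.acme))

        # One SPARQL query over the integrated graph answers a question that
        # would otherwise span two systems.
        query = """
        PREFIX ex: <http://example.org/ekg/>
        SELECT ?person ?customer WHERE {
            ?person a ex:Employee ;
                    ex:worksOn ?project .
            ?project ex:hasCustomer ?customer .
        }"""
        for row in g.query(query):
            print(row.person, "->", row.customer)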

    Actas da 10ª Conferência sobre Redes de Computadores

    Universidade do Minho; CCTC; Centro Algoritmi; Cisco Systems; IEEE Portugal Section

    SDN-based Traffic Engineering in Data Centers, Interconnects, and Carrier Networks

    Server virtualization and cloud computing have escalated the bandwidth and performance demands on the data center network (DCN). The main challenges in the DCN are maximizing network utilization and ensuring fault tolerance in the face of multiple node and link failures. A multi-tenant and highly dynamic virtualized environment consists of a large number of end-stations, leading to a very large number of flows that challenge the scalability of any solution for network-throughput maximization. The challenges are scalability, in terms of address learning, forwarding-decision convergence, and forwarding-state size, as well as flexibility for offloading with VM migration. Geographically distributed data centers are interconnected through the service providers' carrier networks. Service providers offer wide-area network (WAN) connections, such as private lines and MPLS circuits, between the edges of data centers. On the data center side, network operators try to maximize the utilization of this overlay WAN connectivity, i.e. the data center interconnection (DCI), which applies to the edges of DC networks; on the service-provider side, they try to optimize the core of the carrier network. With the increasing adoption of ROADM, OTN, and packet-switching technologies, the traditional two-layer IP/MPLS-over-WDM network has evolved into a three-layer IP/MPLS-over-OTN-over-DWDM network, and the once statically defined overlay topology is now transitioning to dynamic topologies driven by on-demand traffic. Network operations are thus divided across three physical sub-networks: the DCN, the overlay DCI, and the multi-layer carrier network. Server virtualization, cloud computing and the evolving multi-layer carrier network challenge traffic engineering to maximize utilization on all three physical sub-networks. The emerging software-defined networking (SDN) architecture moves path computation to a centralized controller with global visibility. Carriers indicate a strong preference for SDN to be interoperable between multiple vendors in heterogeneous transport networks, and SDN is a natural way to create a unified control plane across multiple administrative divisions. This thesis contributes SDN-based traffic-engineering techniques for maximizing the network utilization of the DCN, the DCI, and the carrier network. The first part of the thesis focuses on DCN traffic engineering. Traditional forwarding mechanisms use a single path and so cannot take advantage of the multiple physical paths available. The state-of-the-art MPTCP (Multipath Transmission Control Protocol) solution uses multiple randomly selected paths, but cannot deliver the total aggregated capacity; moreover, it operates as a TCP process and so does not support other protocols such as UDP. To address these issues, this thesis presents a solution using adaptive multipath routing in a Layer-2 network with static (capacity and latency) metrics, which adapts to link and path failures. This solution provides in-network aggregated path capacity to individual flows, as well as scalability and multi-tenancy, by separating end-station services from the provider's network. The results demonstrate an improvement of 14% in the worst-case bisection bandwidth utilization compared to MPTCP with 5 sub-flows. The second part of the thesis focuses on DCI traffic engineering. Existing approaches to reservation services provide limited reservation capabilities, e.g. limited connections over the links returned by traceroute in traditional IP-based networks.
Moreover, most existing approaches do not address fault tolerance in the event of node or link failures. To address these issues, this thesis presents an ECMP-like multipath routing algorithm and a forwarding-assignment scheme that increase the reservation acceptance rate on the WAN links between data centers compared to state-of-the-art reservation frameworks; such reservations can be configured with a limited number of static forwarding rules on switches. Our prototype provides a RESTful web-service interface for link-failure event management and re-routes the paths of all affected reservations. In the final part of the thesis, we focus on traffic engineering for the multi-layer carrier network. New dynamic traffic trends in the upper layers (e.g. IP routing) require dynamic configuration of the optical transport to redirect traffic, and this in turn requires an integration of multiple administrative control layers. When multiple bandwidth-path requests come from different nodes in different layers, a distributed sequential computation cannot optimize the entire network. Most prior research has focused on the two-layer problem, and recent three-layer studies are limited to the capacity-dimensioning problem. In this thesis, we present an optimization model with a MILP formulation for dynamic traffic in a three-layer network, taking into account in particular the unique technological constraints of the distinct OTN layer. Our experimental results show how the unit-cost values of the different layers affect network cost and parameters in the presence of multiple sets of traffic loads. We also demonstrate the effectiveness of our proposed heuristic approach.
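
    As a simplified sketch of the multipath idea in the first part (illustrative only: the topology, capacities and proportional split below are assumptions, not the thesis's algorithm), a controller with global visibility can hand one flow the aggregated capacity of several edge-disjoint paths:

        # Simplified sketch: split one flow over edge-disjoint paths in
        # proportion to each path's bottleneck capacity. Topology and
        # capacities are made up; not the thesis's actual algorithm.
        import networkx as nx

        G = nx.Graph()
        G.add_weighted_edges_from(
            [("h1", "s1", 10), ("h1", "s2", 10), ("s1", "s3", 4), ("s2", "s3", 6),
             ("s1", "s4", 5), ("s2", "s4", 3), ("s3", "h2", 10), ("s4", "h2", 10)],
            weight="cap")

        paths = list(nx.edge_disjoint_paths(G, "h1", "h2"))

        def bottleneck(path):
            # Capacity of a path is that of its slowest link.
            return min(G[u][v]["cap"] for u, v in zip(path, path[1:]))

        caps = [bottleneck(p) for p in paths]
        for p, c in zip(paths, caps):
            print(" -> ".join(p), f"| bottleneck {c}, share {c / sum(caps):.0%}")
        print("aggregated capacity seen by the flow:", sum(caps))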

    Myriad: a distributed machine vision application framework

    This thesis examines the potential for applying distributed computing frameworks to industrial and also lightweight consumer-level Machine Vision (MV) applications. Traditional, stand-alone MV systems have many benefits in well-defined, tightly-controlled industrial settings, but expose limitations in interactive, de-localised and small-task applications that seek to utilise vision techniques. In these situations, single-computer solutions fail to suffice and greater flexibility in terms of system construction, interactivity and localisation is required. Network-connected and distributed vision systems are proposed as a remedy to these problems, providing dynamic, componentised systems that may optionally be independent of location, or take advantage of networked computing tools and techniques, such as web servers, databases, proxies, wireless networking, secure connectivity, distributed computing clusters, web services and load balancing. The thesis discusses a system named Myriad, a distributed computing framework for Machine Vision applications. Myriad is composed of components, such as image processing engines and equipment controllers, which behave as enhanced web servers and communicate using simple HTTP requests. The roles of HTTP-based distributed computing servers in simplifying the rapid development of networked applications and in integrating those applications with existing networked tools and business processes are explored. Prototypes of Myriad components, written in Java, along with supporting PHP, Perl and Prolog scripts and user interfaces in C#, Java, VB and C++/Qt, are examined. Each component includes a scripting language named MCS, enabling remote clients (or other Myriad components) to issue single commands or execute sequences of commands locally to the component in a sustained session. The advantages of server-side scripting in this manner for distributed computing tasks are outlined, with emphasis on Machine Vision applications, as a means to overcome network connection issues and address problems where consistent processing is required. Furthermore, the opportunities to utilise scripting to form complex distributed computing network topologies and fully-autonomous federated networked applications are described, with examples of how to achieve functionality such as clusters of image processing nodes. Through experiments involving the remote control of a model train set, cameras and lights, the ability of Myriad to perform the traditional roles of fixed, stand-alone Machine Vision systems is supported, along with discussion of opportunities to incorporate these elements into network-based dynamic collaborative inspection applications. In an example of 2D packing of remotely-acquired shapes, distributed computing extensions to Machine Vision tasks are explored, along with integration into larger business processes. Finally, the thesis examines the use of Machine Vision techniques and Myriad components to construct distributed computing applications with the addition of vision capabilities, leading to a new class of image-data-driven applications that exploit mobile computing and Pervasive Computing trends.
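
    As a toy sketch of the component model described above (a plain HTTP server accepting an MCS-like script of one command per line; the command handling is an invented stand-in, not Myriad's actual interface):

        # Toy Myriad-style component: an "enhanced web server" that accepts a
        # POSTed MCS-like script (one command per line) and replies with one
        # result per command. The command semantics are invented placeholders.
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class VisionComponent(BaseHTTPRequestHandler):
            def do_POST(self):
                length = int(self.headers.get("Content-Length", 0))
                script = self.rfile.read(length).decode()
                # A real component would dispatch each command to an image-
                # processing routine; here every command is just acknowledged.
                reply = "\n".join(f"ok: {line}" for line in script.splitlines() if line)
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(reply.encode())

        if __name__ == "__main__":
            HTTPServer(("localhost", 8080), VisionComponent).serve_forever()

    A client, or another component, could then drive it with, for example, curl --data-binary @script.mcs http://localhost:8080/.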