338 research outputs found

    The future of networking is the future of Big Data

    2019 Summer. Includes bibliographical references. Scientific domains such as Climate Science, High Energy Particle Physics (HEP), Genomics, Biology, and many others are increasingly moving towards data-oriented workflows in which each community generates, stores, and uses massive datasets that reach into terabytes and petabytes, and are projected soon to reach exabytes. These communities are also increasingly moving towards a global collaborative model in which scientists routinely exchange significant amounts of data. The sheer volume of data, and the complexities of maintaining, transferring, and using it, continue to push the limits of current technologies in multiple dimensions: storage, analysis, networking, and security. This thesis tackles the networking aspect of big-data science. Networking is the glue that binds all the components of modern scientific workflows, and these communities are becoming increasingly dependent on high-speed, highly reliable networks. The network, as the common layer across big-science communities, provides an ideal place for implementing common services. Big-science applications also need to work closely with the network to ensure optimal usage of resources and intelligent routing of requests and data. Finally, as more communities move towards data-intensive, connected workflows, adopting a service model in which the network provides some of the common services reduces not only application complexity but also the need for duplicate implementations. Named Data Networking (NDN) is a new network architecture whose service model aligns better with the needs of these data-oriented applications. NDN's name-based paradigm makes it easier to provide intelligent features at the network layer rather than at the application layer. This thesis shows that NDN can push several standard features into the network.
This work is the first attempt to apply NDN in the context of large scientific data; in the process, this thesis touches upon scientific data naming, name discovery, real-world deployment of NDN for scientific data, feasibility studies, and the designs of in-network protocols for big-data science
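The name-based paradigm the abstract describes can be illustrated with a toy longest-prefix-match lookup over hierarchical dataset names. This is a minimal sketch: the names and forwarding table below are invented for illustration and are not the thesis's actual naming scheme.

```python
# Sketch of NDN-style hierarchical naming for scientific data.
# The name components and next hops are hypothetical illustrations.

def longest_prefix_match(fib, name):
    """Return the forwarding entry with the longest matching name prefix."""
    components = name.strip("/").split("/")
    for i in range(len(components), 0, -1):
        prefix = "/" + "/".join(components[:i])
        if prefix in fib:
            return fib[prefix]
    return None

# A toy Forwarding Information Base mapping name prefixes to next hops.
fib = {
    "/climate": "gateway-a",
    "/climate/cmip5/tasmax": "archive-b",
}

print(longest_prefix_match(fib, "/climate/cmip5/tasmax/2019/chunk0"))  # archive-b
print(longest_prefix_match(fib, "/climate/era5/t2m"))                  # gateway-a
```

Because forwarding is driven by names rather than host addresses, any router holding a matching prefix can satisfy or redirect the request, which is what lets features move from the application into the network layer.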

    Process control and configuration of a reconfigurable production system using a multi-agent software system

    Thesis (M. Tech. (Information Technology)) -- Central University of Technology, Free State, 2011. Traditional designs for component-handling platforms are rigidly linked to the product being produced. Control and monitoring methods for these platforms consist of various proprietary hardware controllers containing the control logic for the production process. Should the configuration of the component-handling platform change, the controllers need to be taken offline and reprogrammed to take the changes into account. The current thinking in component-handling system design is the notion of reconfigurability: with minimal or no downtime, the system can be adapted to produce another product type or to overcome a device failure. The reconfigurable component-handling platform is built up from groups of independent devices. These groups, or cells, are each responsible for some aspect of the overall production process. By moving or swapping different versions of these cells within the component-handling platform, reconfigurability is achieved. Such a dynamic system requires a flexible communications platform and a high-level software control architecture to accommodate the reconfigurable nature of the system. This work presents the design and testing of the core of a reconfigurable production control software platform. Multiple software components work together to control and monitor a reconfigurable component-handling platform. The design and implementation of a production database, production ontology, communications architecture, and the core multi-agent control application linking all these components together are presented.
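The cell-swapping idea above can be sketched with a small capability registry: cells advertise what they can do at runtime, so replacing a cell never requires reprogramming the controller. This is a hypothetical illustration, not the thesis's agent implementation; all names are invented.

```python
# Minimal sketch of runtime cell registration in a reconfigurable
# component-handling platform. Cell and capability names are illustrative.

class CellRegistry:
    def __init__(self):
        self._cells = {}  # capability -> cell name

    def register(self, cell_name, capabilities):
        for cap in capabilities:
            self._cells[cap] = cell_name

    def unregister(self, cell_name):
        self._cells = {c: n for c, n in self._cells.items() if n != cell_name}

    def cell_for(self, capability):
        return self._cells.get(capability)

registry = CellRegistry()
registry.register("feeder-v1", ["feed"])
registry.register("gripper-v1", ["pick", "place"])

# Reconfiguration: swap the feeder cell without touching control logic.
registry.unregister("feeder-v1")
registry.register("feeder-v2", ["feed"])
print(registry.cell_for("feed"))  # feeder-v2
```

The controller only ever asks for a capability, never a specific device, which is what decouples the control logic from the physical configuration.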

    Enhancing data privacy and security in Internet of Things through decentralized models and services

    exploits a Byzantine Fault Tolerant (BFT) blockchain in order to perform collaborative and dynamic botnet detection by collecting and auditing IoT devices’ network traffic flows as blockchain transactions. Secondly, we take on the challenge of decentralizing IoT and design a hybrid blockchain architecture for IoT by proposing Hybrid-IoT. In Hybrid-IoT, subgroups of IoT devices form Proof-of-Work (PoW) blockchains, referred to as PoW sub-blockchains. Connection among the PoW sub-blockchains employs a BFT inter-connector framework. We focus on the formation of the PoW sub-blockchains, guided by a set of guidelines based on a set of dimensions, metrics, and bounds
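The idea of auditing traffic flows as chained transactions can be sketched with a toy hash-linked log. This is only a simplification of the tamper-evidence property; it is not the BFT consensus protocol the abstract refers to, and the flow records are invented.

```python
# Toy hash-linked log of IoT traffic flows: each block commits to its
# predecessor, so any tampering with a recorded flow is detectable.
import hashlib
import json

def append_block(chain, flow):
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev, "flow": flow}, sort_keys=True)
    chain.append({"prev": prev, "flow": flow,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        body = json.dumps({"prev": prev, "flow": block["flow"]}, sort_keys=True)
        if block["prev"] != prev or \
           block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
    return True

chain = []
append_block(chain, {"src": "10.0.0.5", "dst": "203.0.113.9", "bytes": 512})
append_block(chain, {"src": "10.0.0.7", "dst": "203.0.113.9", "bytes": 2048})
print(verify(chain))  # True
```

In the actual architecture, agreement on which blocks enter the log is what the BFT (and PoW) machinery provides; the hash chaining shown here only makes after-the-fact tampering evident.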

    Service-oriented models for audiovisual content storage

    What are the important topics to understand when involved with storage services that hold digital audiovisual content? This report looks at how content is created and moves into and out of storage; the storage service value networks and architectures found today and expected in the future; the kinds of data transfer expected to and from an audiovisual archive; which transfer protocols to use; and a summary of security and interface issues

    Enhancing information-centric networks towards a name-based Internet of Things

    The way we use the Internet has been evolving since its origins. Nowadays, users are more interested in accessing contents and services with high demands in terms of bandwidth, security, and mobility. This evolution has triggered the emergence of novel networking architectures targeting current, as well as future, utilisation demands. Information-Centric Networking (ICN) is a prominent example of these novel architectures: it moves away from the current host-centric communications and centres its networking functions around content. In parallel, new utilisation scenarios in which smart devices interact with one another, as well as with other networked elements, have emerged to constitute what we know as the Internet of Things (IoT). IoT is expected to have a significant impact on both the economy and society. However, fostering the widespread adoption of IoT requires many challenges to be overcome. Despite recent developments, several issues concerning the large-scale deployment of IP-based IoT solutions are still open. The fact that IoT is focused on data and information rather than on point-to-point communications suggests the adoption of solutions relying on ICN architectures. In this context, this work explores the ground concepts of ICN to develop a comprehensive vision of the principal requirements that should be met by an IoT-oriented ICN architecture. This vision is complemented with solutions to two fundamental issues for the adoption of an ICN-based IoT: first, ensuring the freshness of the information while retaining the advantages of ICN's in-network caching mechanisms; and second, enabling discovery functionalities in both local and large-scale domains. The proposed mechanisms are evaluated through both simulation and prototyping, with results showcasing the feasibility of their adoption.
Moreover, the outcomes of this work contribute to the development of new compelling concepts towards a full-fledged Named Network of Things.
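The freshness requirement mentioned above can be sketched with a toy content store that honours a freshness window, in the spirit of NDN's FreshnessPeriod. This is a simplification for illustration, not the mechanism the thesis actually proposes; names and values are invented.

```python
# Toy ICN content store with a freshness window: stale entries are
# ignored when the consumer insists on fresh data, forcing the Interest
# to be forwarded towards the producer instead.
import time

class ContentStore:
    def __init__(self):
        self._store = {}  # name -> (data, expiry time)

    def insert(self, name, data, freshness_s):
        self._store[name] = (data, time.monotonic() + freshness_s)

    def lookup(self, name, must_be_fresh=True):
        entry = self._store.get(name)
        if entry is None:
            return None
        data, expiry = entry
        if must_be_fresh and time.monotonic() > expiry:
            return None  # stale: do not serve from cache
        return data

cs = ContentStore()
cs.insert("/home/sensor1/temp", b"21.5", freshness_s=60.0)
print(cs.lookup("/home/sensor1/temp"))  # b'21.5' while fresh
```

The tension the thesis addresses is visible even here: a longer freshness window improves cache hit rates, while a shorter one keeps sensor readings current.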

    Self-adaptive Grid Resource Monitoring and discovery

    The Grid provides a novel platform where the scientific and engineering communities can share data and computation across multiple administrative domains. There are several key services that must be offered by Grid middleware; one of them is the Grid Information Service (GIS). A GIS is a Grid middleware component which maintains information about the hardware, software, services, and people participating in a virtual organisation (VO). There is an inherent need in these systems for the delivery of reliable performance. This thesis describes a number of approaches detailing the development and application of a suite of benchmarks for predicting the performance of resource discovery and monitoring on the Grid. A series of experimental studies characterising performance through benchmarking is carried out. Several novel predictive algorithms are presented and evaluated in terms of their predictive error. Furthermore, predictive methods are developed which describe the behaviour of MDS2 for a variable number of user requests. The MDS is also extended to include job information from a local scheduler; this information is queried using requests of greatly varying complexity. The response of the MDS to these queries is then assessed in terms of several performance metrics. The benchmarking of the dynamic nature of information within MDS3, which is based on the Open Grid Services Architecture (OGSA) and is the successor to MDS2, is also carried out. The performance of both the pull and push query mechanisms is analysed. GridAdapt (Self-adaptive Grid Resource Monitoring) is a new system proposed and built upon the Globus MDS3 benchmarking. It offers self-adaptation, autonomy, and admission control at the Index Service, whilst ensuring that the MDS is not overloaded and can meet its quality of service, for example in terms of its average response time for servicing synchronous queries and the total number of queries returned per unit time
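The admission-control idea can be sketched by predicting the information service's response time and rejecting queries once the prediction exceeds a quality-of-service target. The exponential moving average below is a generic illustration, not necessarily GridAdapt's actual predictor, and the smoothing factor and threshold are made-up values.

```python
# Illustrative admission control for a monitoring service: predict
# response time with an exponentially weighted moving average (EWMA)
# and stop admitting queries when the prediction breaches the target.

class AdmissionController:
    def __init__(self, qos_target_s, alpha=0.3):
        self.qos_target_s = qos_target_s  # QoS bound on response time
        self.alpha = alpha                # EWMA smoothing factor
        self.predicted_s = 0.0

    def observe(self, response_time_s):
        # Update the prediction after each serviced query.
        self.predicted_s = (self.alpha * response_time_s
                            + (1 - self.alpha) * self.predicted_s)

    def admit(self):
        return self.predicted_s <= self.qos_target_s

ctrl = AdmissionController(qos_target_s=0.5)
for rt in [0.2, 0.3, 1.5, 2.0]:  # the service slows under load
    ctrl.observe(rt)
print(ctrl.admit())  # False once predicted time exceeds the target
```

A real deployment would also shed load gradually rather than with a hard cut-off, but the feedback loop from observed response times to admission decisions is the essential self-adaptive element.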


    The AdCIM framework : extraction, integration and persistence of the configuration of distributed systems

    [Abstract] This summary consists of an introduction explaining the focus and context of the thesis, followed by a section on its organisation into parts and chapters. An enumeration of its contributions then follows, ending with the conclusions and future work. Introduction: system administrators have to work with the great diversity of hardware and software found in today's organisations. From the administrator's point of view, homogeneous infrastructures are much simpler to administer and therefore more desirable. But, apart from the intrinsic difficulty of maintaining that homogeneity as technology advances, and the consequences of being tied to a single vendor, homogeneity itself carries risks; for example, monoculture installations are more vulnerable to viruses and trojans, and securing them requires introducing random differences in system calls to create artificial diversity, a measure that can cause instability (see Birman and Schneider). This makes heterogeneity itself almost inevitable, and a characteristic of real systems that is hard to ignore, although it does bring more complexity. In many installations a mix of Windows and Unix derivatives is usual, whether combined or clearly divided into clients and servers. Administration tasks on the two systems differ because of differences in ecosystem and in the way computing systems are conceptualised, the result of years of divergence in interfaces, configuration systems, commands, and abstractions. Over time there have been many attempts to close that gap, some of them by emulating or porting the Unix tools, proven over many years.
For example, Microsoft's solution, Windows Services for Unix, allows the use of NIS, the Network File System (NFS), Perl, and the Korn shell on Windows, but does not really integrate them into Windows, as it is oriented more towards application migration. Cygwin supports more tools, such as Bash and the GNU Autotools, but focuses on the direct translation of POSIX-based Unix programs to Windows using gcc. Outwit is a very interesting port of the Unix toolset that integrates Unix pipelines into Windows and gives access to the Registry, ODBC drivers, and the clipboard from Unix shells, but scripts developed for it are not directly usable on Unix systems. The separation therefore persists despite these attempts. In this thesis we present a framework, called AdCIM, for the configuration management of heterogeneous systems. Its objective is to integrate and unify the administration of these systems by abstracting their differences, while at the same time being flexible and easy to adapt in order to support new systems quickly. To achieve these goals, the architecture of AdCIM follows the model-driven paradigm, which proposes designing applications from an initial model that is transformed into various "artifacts", such as code, documentation, database schemas, etc., which together form the application. In the case of AdCIM, the model is CIM, and the transformations are performed using the declarative language XSLT, which can express transformations over XML data. AdCIM performs all its transformations with XSLT, except the initial conversion of plain-text files to XML, which is done with a special text-to-XML parser. XSLT programs, also called stylesheets, match and transform specific parts of the input XML tree and support recursive execution, forming a declarative-functional programming model with great expressive power.
The model chosen to represent the administration domains covered by the framework is CIM (Common Information Model), a standard, extensible, object-oriented model created by the Distributed Management Task Force (DMTF). Using CIM schemas, the many different configuration formats and administration data are translated by the AdCIM infrastructure into CIM instances. The CIM schemas also serve as the basis for generating web forms and other specific schemas for data validation and persistence. The development of AdCIM as a model-driven framework evolved from our previous work, which extracted configuration data and stored it in an LDAP repository using Perl scripts. Subsequent work adopted the model-driven approach and demonstrated the adaptive nature of the framework through adaptations to Grid environments and to Wireless Mesh Networks. The approach and implementation of this framework are novel, and it uses technologies defined as standards by international organisations such as the IETF, the DMTF, and the W3C. We see the use of these technologies as an advantage rather than a limitation on the framework's possibilities: it adds generality and applicability, especially compared with ad-hoc or very narrowly scoped solutions. Despite this flexibility, we have tried as far as possible to define and specify all implementation aspects, to define good usage practices, and to evaluate the impact of the choice of the different standard technologies on the framework's performance and scalability
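The first step of the pipeline described above, converting plain-text configuration files into XML so that XSLT stylesheets can transform them into CIM instances, can be sketched as follows. This is a hypothetical illustration in Python rather than AdCIM's actual parser; the element names and the sample file are invented.

```python
# Sketch of a text-to-XML step: turn a key=value configuration file
# into XML suitable as input for XSLT transformations.
import xml.etree.ElementTree as ET

def config_to_xml(text, root_name="config"):
    root = ET.Element(root_name)
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        setting = ET.SubElement(root, "setting", name=key.strip())
        setting.text = value.strip()
    return ET.tostring(root, encoding="unicode")

sample = """
# sshd_config-style excerpt (illustrative)
Port=22
PermitRootLogin=no
"""
print(config_to_xml(sample))
```

Once the configuration is XML, a stylesheet can match each `setting` element and emit the corresponding CIM property, which is the declarative transformation model the abstract describes.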

    Hierarchical network topographical routing

    Within the last 10 years the content consumption model that underlies many of the assumptions about traffic aggregation within the Internet has changed: the previous short burst transfer followed by longer periods of inactivity, which allowed for statistical aggregation of traffic, has been increasingly replaced by continuous data transfer models. Approaching this issue from a clean-slate perspective, this work looks at the design of a network routing structure and supporting protocols for assisting in the delivery of large-scale content services. Rather than approaching a content support model through existing IP models, the work takes a fresh look at Internet routing through a hierarchical model in order to highlight the benefits that can be gained with a new structural Internet or through similar modifications to the existing IP model. The work is divided into three major sections: investigating the existing UK-based Internet structure as compared to the traditional Autonomous System (AS) Internet structural model; a localised hierarchical network topographical routing model; and intelligent distributed localised service models. The work begins by looking at the United Kingdom (UK) Internet structure as an example of a current-generation technical and economic model with shared access to last-mile connectivity and a large-scale wholesale network between Internet Service Providers (ISPs) and the end user. This model, combined with the Internet Protocol (IP) address allocation and the transparency of the wholesale network, results in an enforced inefficiency within the overall network, restricting the ability of ISPs to collaborate. From this model a core/edge separation hierarchical virtual tree-based routing protocol based on the physical network topography (layers 2 and 3) is developed to remove this enforced inefficiency by allowing direct management and control at the lowest levels of the network.
This model acts as the base layer for further distributed intelligent services, such as management and content delivery, to enable both ISPs and third parties to actively collaborate and provide content from the most efficient source
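The hierarchical virtual-tree routing idea can be sketched as path computation through the lowest common ancestor of two nodes: traffic between endpoints that share a low-level node never climbs to the core. The topology below is invented for illustration and does not reflect the thesis's actual UK network model.

```python
# Toy routing over a hierarchical tree mapped onto physical topography.
# A path climbs from the source to the lowest common ancestor (LCA)
# shared with the destination, then descends.

def path_to_root(parents, node):
    path = [node]
    while node in parents:
        node = parents[node]
        path.append(node)
    return path

def tree_route(parents, src, dst):
    up = path_to_root(parents, src)
    down = path_to_root(parents, dst)
    ancestors = set(up)
    # Climb from dst until we meet src's branch: that node is the LCA.
    lca_index = next(i for i, n in enumerate(down) if n in ancestors)
    lca = down[lca_index]
    return up[:up.index(lca) + 1] + down[:lca_index][::-1]

# home1 and home2 share a street cabinet, so their traffic stays local.
parents = {"home1": "cabinet1", "home2": "cabinet1",
           "cabinet1": "exchange1", "home3": "cabinet2",
           "cabinet2": "exchange1", "exchange1": "core"}
print(tree_route(parents, "home1", "home2"))  # ['home1', 'cabinet1', 'home2']
print(tree_route(parents, "home1", "home3"))  # turns at exchange1, not core
```

Keeping traffic at the lowest shared level is precisely the efficiency the core/edge separation aims to recover from the opaque wholesale model.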