16 research outputs found
Context Aware Service Oriented Computing in Mobile Ad Hoc Networks
These days we witness a major shift towards small, mobile devices capable of wireless communication. Their communication capabilities enable them to form mobile ad hoc networks and share resources and capabilities. Service Oriented Computing (SOC) is an emerging paradigm for distributed computing that has evolved from object-oriented and component-oriented computing to enable applications distributed within and across organizational boundaries. Services are autonomous computational elements that can be described, published, discovered, and orchestrated for the purpose of developing applications. Applying the SOC model to mobile devices provides a loosely coupled model for distributed processing in a resource-poor and highly dynamic environment. Cooperation in a mobile ad hoc environment depends on the fundamental capability of hosts to communicate with each other. Peer-to-peer interactions among hosts within communication range enable such cooperation but limit its scope to a local region. Routing algorithms for mobile ad hoc networks extend the scope of interactions to cover all hosts transitively connected over multi-hop routes. Additional contextual information, e.g., knowledge about the movement of hosts in physical space, can help extend the boundaries of interactions beyond the limits of an island of connectivity. To help separate concerns specific to different layers, a coordination model between the routing layer and the SOC layer provides abstractions that mask the details characteristic of the network layer from the distributed computing semantics above. This thesis explores some of the opportunities and challenges raised by applying the SOC paradigm to mobile computing in ad hoc networks. It investigates the implications of disconnections on service advertising and discovery mechanisms. It addresses issues related to code migration in addition to physical host movement.
It also investigates some of the security concerns in service provision over ad hoc networks. It presents a novel routing algorithm for mobile ad hoc networks and a novel coordination model that addresses space and time explicitly.
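One practical consequence of disconnections for service advertising is that advertisements must not outlive the hosts that made them. The sketch below is a minimal lease-based registry, a common technique for this problem; the class and method names are illustrative assumptions, not the mechanism proposed in the thesis:

```python
import time

# Hypothetical sketch: service advertisements carry a lease and expire unless
# refreshed, so services on hosts that disconnect disappear from the registry
# instead of lingering as stale entries.
class ServiceRegistry:
    def __init__(self, lease=5.0):
        self.lease = lease          # seconds an advertisement stays valid
        self.entries = {}           # service name -> (host, expiry time)

    def advertise(self, name, host, now=None):
        now = time.monotonic() if now is None else now
        self.entries[name] = (host, now + self.lease)

    def discover(self, name, now=None):
        now = time.monotonic() if now is None else now
        entry = self.entries.get(name)
        if entry and entry[1] > now:
            return entry[0]
        self.entries.pop(name, None)   # prune the expired advertisement
        return None

reg = ServiceRegistry(lease=5.0)
reg.advertise("printer", "host-a", now=0.0)
print(reg.discover("printer", now=3.0))    # "host-a": lease still valid
print(reg.discover("printer", now=10.0))   # None: host presumed disconnected
```

A disconnected host simply stops refreshing its lease; no explicit de-registration message is needed, which matters in a network where hosts can vanish without warning.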
Network Service Availability and Continuity Management in the Context of Network Function Virtualization
In legacy computer systems, network functions (e.g., routers, firewalls, etc.) have been provided by specialized hardware appliances to realize Network Services (NSs). In recent years, the rise of Network Function Virtualization (NFV) has changed how we realize NSs. With NFV, commercial off-the-shelf hardware and virtualization technologies are used to create Virtual Network Functions (VNFs). In the context of NFV, an NS is realized by interconnecting VNFs using Virtual Links (VLs).
Service availability and continuity are among the important non-functional characteristics of NSs. Availability is defined as the fraction of time the NS functionality is provided in a given period. Current work on NS availability in the NFV context focuses on determining the appropriate number of redundant VNFs and their deployment in the virtualized environment, and on the redundancy of network paths. Such solutions are necessary but insufficient, because redundancy does not guarantee that the overall service outage time for an NS functionality remains below a certain threshold. Moreover, service disruption, which impacts service continuity, is not addressed quantitatively in the current work. In addition, NS and VNF elasticity and the dynamicity of virtualized infrastructures, which can impact the availability of NS functionalities, are not considered in the current state of the art.
In this thesis, we propose a framework for NS availability and continuity management, which consists of two approaches: one for design time and another for runtime adaptation. For this, we define the service disruption time for an NS functionality as the amount of time for which service data is lost due to service outages in a given period. We also define the service data disruption for an NS functionality as the maximum amount of data lost due to a service outage. The design-time approach includes analytical methods which take the tenant's acceptable service disruption and availability requirements, a designed NS, and a given infrastructure as inputs to adjust the NS design and map these requirements to constraints on low-level configuration parameters. The design-time approach guarantees that the service availability and continuity requirements will be met as long as the availability characteristics of the infrastructure resources used by the NS constituents do not change at runtime. However, changes in the supporting infrastructure may happen at runtime for multiple reasons, such as failovers, upgrades, and aging. Therefore, we propose a runtime adaptation approach that reacts to changes at runtime and adjusts the configuration parameters accordingly to satisfy the same service availability and continuity requirements. The runtime approach uses machine learning models, created at design time, to determine the required adjustments at runtime.
To demonstrate the feasibility of the proposed solutions and to experiment with them, we present a proof of concept, including prototypes of our approaches and their application in a small NFV cloud environment created for validation purposes. We conduct multiple experiments for two case studies with different service availability and continuity requirements. The results from the conducted experiments show that our approaches can guarantee the fulfillment of the service availability and continuity requirements.
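The availability and disruption definitions used in this abstract can be made concrete with a small example. The Python fragment below is an illustrative sketch only (the constant `data_rate` assumption and all names are mine, not the thesis's analytical methods); it computes the three quantities from a log of outage intervals:

```python
# Illustrative sketch: availability and disruption metrics computed from a
# log of outage intervals (start, end), in seconds within one observation period.

def availability(outages, period):
    """Fraction of the period during which the NS functionality was provided."""
    downtime = sum(end - start for start, end in outages)
    return (period - downtime) / period

def service_disruption_time(outages):
    """Total time for which service data is lost due to outages in the period."""
    return sum(end - start for start, end in outages)

def service_data_disruption(outages, data_rate):
    """Maximum data lost due to a single outage, assuming a constant data
    rate (an assumption of this sketch, not of the thesis)."""
    return max((end - start) * data_rate for start, end in outages)

outages = [(100, 130), (400, 410)]           # two outages: 30 s and 10 s
print(availability(outages, 3600))           # (3600 - 40) / 3600 ≈ 0.9889
print(service_disruption_time(outages))      # 40
print(service_data_disruption(outages, 50))  # 30 s * 50 units/s = 1500
```

A requirement such as "availability of at least 0.999 per month" can then be checked directly against such a log, which is the kind of threshold that redundancy alone cannot guarantee.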
Automatic Generation of Distributed Runtime Infrastructure for Internet of Things
Ph.D. Thesis
The Internet of Things (IoT) represents a network of connected devices that are able to
cooperate and interact with each other in order to reach a particular goal. To attain this,
the devices are equipped with identifying, sensing, networking and processing capabilities.
Cloud computing, on the other hand, is the delivery of on-demand computing services –
from applications, to storage, to processing power – typically over the internet. Clouds
bring a number of advantages to distributed computing because of their highly available
pool of virtualized computing resources. Due to the large number of connected devices, real-world
IoT use cases may generate overwhelmingly large amounts of data. This prompts the use
of cloud resources for processing, storage and analysis of the data. Therefore, a typical IoT
system comprises a front-end (devices that collect and transmit data) and a back-end –
typically distributed Data Stream Management Systems (DSMSs) deployed on the cloud
infrastructure, for data processing and analysis.
Increasingly, new IoT devices are being manufactured to provide a limited execution
environment on top of their data sensing and transmitting capabilities. This consequently
demands a change in the way data is being processed in a typical IoT-cloud setup. The
traditional, centralised cloud-based data processing model – where IoT devices are used
only for data collection – does not provide an efficient utilisation of all available resources.
In addition, the fundamental requirements of real-time data processing such as short
response time may not always be met. This prompts a new processing model which is
based on decentralising the data processing tasks. The new decentralised architectural
pattern allows some parts of data streaming computation to be executed directly on edge
devices – closer to where the data is collected. Extending the processing capabilities to the
IoT devices increases the robustness of applications as well as reduces the communication
overhead between different components of an IoT system. However, this new pattern poses new challenges in the development, deployment and management of IoT applications.
Firstly, there exists a large resource gap between the two parts of a typical IoT system (i.e.
clouds and IoT devices); hence, prompting a new approach for IoT applications deployment
and management. Secondly, the new decentralised approach necessitates the deployment
of DSMS on distributed clusters of heterogeneous nodes resulting in unpredictable runtime
performance and complex fault characteristics. Lastly, the environment where DSMSs are
deployed is very dynamic due to user or device mobility, workload variation, and resource
availability.
In this thesis we present solutions to address the aforementioned challenges. We
investigate how a high-level description of a data streaming computation can be used
to automatically generate a distributed runtime infrastructure for the Internet of Things.
Subsequently, we develop a deployment and management system capable of distributing
different operators of a data streaming computation onto different IoT gateway devices
and cloud infrastructure.
To address the other challenges, we propose a non-intrusive approach for performance
evaluation of DSMSs and present a protocol and a set of algorithms for dynamic migration
of stateful data stream operators. To improve our migration approach, we introduce an
optimisation technique which minimises application downtime and improves the
accuracy of a data stream computation.
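The core idea of migrating a stateful operator can be sketched briefly: pause the operator, snapshot its state, buffer in-flight tuples, then restore the state on the target node and replay the buffer. The Python below is a hedged illustration under those assumptions; the class and function names are hypothetical and do not reproduce the protocol or algorithms of the thesis:

```python
import pickle

# Hypothetical sketch of stateful operator migration. A simple counting
# operator stands in for a real DSMS operator.
class CountOperator:
    def __init__(self):
        self.counts = {}   # operator state: key -> running count

    def process(self, key):
        self.counts[key] = self.counts.get(key, 0) + 1

    def snapshot(self):
        """Serialize the operator state for transfer to another node."""
        return pickle.dumps(self.counts)

    @classmethod
    def restore(cls, blob):
        op = cls()
        op.counts = pickle.loads(blob)
        return op

def migrate(source_op, buffered):
    """Recreate the operator on a 'target node' and replay tuples that
    arrived while the migration was in progress, so no data is lost."""
    target_op = CountOperator.restore(source_op.snapshot())
    for item in buffered:
        target_op.process(item)
    return target_op

op = CountOperator()
for k in ["a", "b", "a"]:
    op.process(k)
moved = migrate(op, buffered=["b"])   # "b" arrived mid-migration
print(moved.counts)                   # {'a': 2, 'b': 2}
```

Buffering and replaying in-flight tuples is what preserves the accuracy of the computation across the move; downtime is roughly the snapshot-transfer-restore window, which is what a migration optimisation would aim to shrink.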
An Embryonics Inspired Architecture for Resilient Decentralised Cloud Service Delivery
Data-driven artificial intelligence applications arising from Internet of Things technologies can have
profound, wide-reaching societal benefits at the cross-section of the cyber and physical domains. Use-cases are expanding rapidly. For example, smart-homes and smart-buildings provide intelligent monitoring, resource optimisation, safety, and security for their inhabitants. Smart cities can manage
transport, waste, energy, and crime on large scales, whilst smart-manufacturing can autonomously
produce goods through the self-management of factories and logistics. As these use-cases expand further, the requirement to ensure data is processed accurately and in a timely manner becomes ever more crucial, as many of these
applications are safety critical: loss of life and economic damage are likely possibilities in the
event of system failure. While the typical service delivery paradigm, cloud computing, is strong because
it operates upon economies of scale, its physical distance from these applications creates network
latency which is incompatible with safety critical applications. To complicate matters further,
the environments they operate in are becoming increasingly hostile, with resource-constrained,
mobile wireless networking commonplace. These issues drive the need for new service delivery architectures which operate closer to, or even upon, the network devices, sensors and actuators which
compose these IoT applications at the network edge. These hostile and resource-constrained environments require adaptation of traditional cloud service delivery models to these decentralised, mobile,
and wireless environments. Such architectures need to provide persistent service delivery in the
face of a variety of internal and external changes: that is, resilient decentralised cloud service delivery.
While the current state of the art proposes numerous techniques to enhance the resilience of services
in this manner, none provides an architecture capable of delivering cloud-style data processing services
that is inherently resilient. Adopting techniques from autonomic computing, whose
characteristics are resilient by nature, this thesis presents a biologically-inspired platform modelled
on embryonics. Embryonic systems have an ability to self-heal and self-organise whilst showing the capacity to support decentralised data processing. An initial model for embryonics-inspired resilient
decentralised cloud service delivery is derived according to both the decentralised cloud and resilience
requirements given for this work. Next, this model is simulated using cellular automata, which illustrates the embryonic concept's ability to provide self-healing service delivery under varying degrees of system
component loss. This highlights optimisation techniques, including application complexity bounds,
differentiation optimisation, self-healing aggression, and varying system starting conditions, all of
which can be adjusted to vary the resilience performance of the system depending upon
different resource capabilities and environmental hostilities.
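The embryonic self-healing idea can be illustrated with a toy model: every cell carries the full "genome" (the set of roles the application needs), healthy cells differentiate by position, and spare cells re-differentiate when a neighbour dies. The Python below is a deliberately minimal sketch of that concept, not the cellular automata simulation from the thesis; all names are assumptions:

```python
# Toy embryonics sketch: every cell holds the whole genome and expresses a
# role based on its position among the *healthy* cells, so spares take over
# automatically when cells fail.
GENOME = ["ingest", "filter", "aggregate"]   # roles the application needs

class Cell:
    def __init__(self):
        self.alive = True
        self.role = None   # expressed "gene", or None if acting as a spare

def differentiate(cells):
    """Re-run differentiation: healthy cell i expresses gene i; the rest
    stay undifferentiated as spares."""
    healthy = [c for c in cells if c.alive]
    for i, c in enumerate(healthy):
        c.role = GENOME[i] if i < len(GENOME) else None

def expressed_roles(cells):
    return {c.role for c in cells if c.alive and c.role}

cells = [Cell() for _ in range(6)]        # 3 working cells + 3 spares
differentiate(cells)
cells[0].alive = cells[2].alive = False   # two component failures
differentiate(cells)                      # self-healing pass
print(expressed_roles(cells))             # all three roles still expressed
```

The application survives as long as the number of healthy cells is at least the genome length, which is the intuition behind tuning attributes such as application complexity bounds and the number of spare cells.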
Next, a proof-of-concept implementation is developed and validated, illustrating the efficacy
of the solution. This proof-of-concept is evaluated at a larger scale, where batches of tests highlighted
the different performance criteria and constraints of the system. One key finding was the considerable
quantity of redundant messages produced under successful scenarios, which were helpful in
enabling resilience yet could increase network contention; balancing these attributes is therefore
important according to the use-case. Finally, graph-based resilience algorithms were executed across
all tests to understand the structural resilience of the system and whether this enabled suitable
measurement or prediction of the application's resilience. Interestingly, this study highlighted that
although the system was not considered to be structurally resilient, applications were still being
executed in the face of many continued component failures, showing that the autonomic
embryonic functionality developed was succeeding in executing applications resiliently and that
structural and application resilience do not necessarily coincide. Additionally, one graph metric,
assortativity, was highlighted as being predictive of application resilience, although not of structural
resilience.
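Degree assortativity, the metric singled out above, is simply the Pearson correlation of the degrees at either end of each edge (Newman's definition). The sketch below computes it in plain Python for illustration; the example graph is mine, not data from the thesis:

```python
# Degree assortativity: correlation between the degrees of the two endpoints
# of each edge (each undirected edge counted in both directions).
def degree_assortativity(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs.extend([deg[u], deg[v]])
        ys.extend([deg[v], deg[u]])
    n = len(xs)
    mean = sum(xs) / n                 # xs and ys are permutations of each
    var = sum((x - mean) ** 2 for x in xs) / n   # other: same mean/variance
    # Note: var == 0 for regular graphs, where the metric is undefined.
    cov = sum((x - mean) * (y - mean) for x, y in zip(xs, ys)) / n
    return cov / var

# A star graph: every edge joins the hub (degree 3) to a leaf (degree 1),
# the extreme disassortative case.
star = [("hub", "a"), ("hub", "b"), ("hub", "c")]
print(degree_assortativity(star))   # -1.0
```

Negative values mean high-degree nodes tend to attach to low-degree ones; a correlation between this value and observed application survival is the kind of predictive relationship the study reports.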
Advances in Grid Computing
This book approaches grid computing with a perspective on the latest achievements in the field, providing insight into current research trends and advances, and presenting a large range of innovative research papers. The topics covered in this book include resource and data management, grid architectures and development, and grid-enabled applications. New ideas employing heuristic methods from swarm intelligence or genetic algorithms and quantum encryption are considered in order to explain two main aspects of grid computing: resource management and data management. The book also addresses aspects of grid computing concerning architecture and development, and includes a diverse range of applications for grid computing, including a possible human grid computing system, simulation of the fusion reaction, ubiquitous healthcare service provisioning, and complex water systems.
Teaching and Research Project, Original Research Work, and Defence Presentation, prepared by Germán Moltó to compete for the position of Catedrático de Universidad (Full Professor), competition 082/22, position 6708, area of Computer Science and Artificial Intelligence
This document contains the teaching and research project of the candidate Germán Moltó Martínez, submitted as a requirement for the competitive examination for access to positions in the University Teaching Bodies. Specifically, the document concerns the competition for position 6708, Catedrático de Universidad in the area of Computer Science at the Departamento de Sistemas Informáticos y Computación of the Universitat Politècnica de València. The position is attached to the Escola Tècnica Superior d'Enginyeria Informàtica, and its teaching profile covers the courses "Infraestructuras de Cloud Público" (Public Cloud Infrastructures) and "Estructuras de Datos y Algoritmos" (Data Structures and Algorithms). The document also includes the candidate's academic, teaching, and research record, as well as the presentation used during the defence. Germán Moltó Martínez (2022). Proyecto Docente e Investigador, Trabajo Original de Investigación y Presentación de la Defensa, preparado por Germán Moltó para concursar a la plaza de Catedrático de Universidad, concurso 082/22, plaza 6708, área de Ciencia de la Computación e Inteligencia Artificial. http://hdl.handle.net/10251/18903
The Future of Networking is the Future of Big Data
Summer 2019. Includes bibliographical references. Scientific domains such as Climate Science, High Energy Particle Physics (HEP), Genomics, Biology, and many others are increasingly moving towards data-oriented workflows, where each of these communities generates, stores and uses massive datasets that reach into terabytes and petabytes, and are projected soon to reach exabytes. These communities are also increasingly moving towards a global collaborative model where scientists routinely exchange significant amounts of data. The sheer volume of data, and the complexities of maintaining, transferring, and using it, continue to push the limits of current technologies in multiple dimensions - storage, analysis, networking, and security. This thesis tackles the networking aspect of big-data science. Networking is the glue that binds all the components of modern scientific workflows, and these communities are becoming increasingly dependent on high-speed, highly reliable networks. The network, as the common layer across big-science communities, provides an ideal place for implementing common services. Big-science applications also need to work closely with the network to ensure optimal usage of resources and intelligent routing of requests and data. Finally, as more communities move towards data-intensive, connected workflows, adopting a service model where the network provides some of the common services reduces not only application complexity but also the necessity of duplicate implementations. Named Data Networking (NDN) is a new network architecture whose service model aligns better with the needs of these data-oriented applications. NDN's name-based paradigm makes it easier to provide intelligent features at the network layer rather than at the application layer. This thesis shows that NDN can push several standard features to the network.
This work is the first attempt to apply NDN in the context of large scientific data; in the process, this thesis touches upon scientific data naming, name discovery, real-world deployment of NDN for scientific data, feasibility studies, and the designs of in-network protocols for big-data science.
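The appeal of name-based networking for scientific data is that dataset attributes map naturally onto hierarchical names, so the network (or a catalog) can answer prefix queries directly. The sketch below illustrates that idea in Python; the name components and dataset names are invented for illustration and are not the naming scheme designed in the thesis:

```python
# Illustrative sketch of NDN-style hierarchical naming for scientific data:
# publish datasets under hierarchical names, retrieve by exact name, and
# discover by name prefix, as a catalog query might.
class ContentStore:
    def __init__(self):
        self.store = {}   # hierarchical name -> data

    def publish(self, name, data):
        self.store[name] = data

    def get(self, name):
        return self.store.get(name)

    def discover(self, prefix):
        """List all published names under a prefix."""
        return sorted(n for n in self.store if n.startswith(prefix))

cs = ContentStore()
cs.publish("/climate/CMIP5/temperature/2019/06", b"<dataset bytes>")
cs.publish("/climate/CMIP5/pressure/2019/06", b"<dataset bytes>")
print(cs.discover("/climate/CMIP5/"))   # both datasets under the prefix
```

Because consumers ask for names rather than hosts, features such as caching, deduplication, and intelligent forwarding can live in the network layer instead of being re-implemented by every application, which is the argument the thesis develops.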