Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure
This deliverable compiles and evaluates the available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain Virtual Network environment. Its scope is mainly the virtualisation of resources within the network and at processing nodes. Virtualising the FEDERICA infrastructure allows its available resources to be provisioned to users by means of FEDERICA slices. A slice appears to the user as a real physical network under his or her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree, all the principles applicable to a physical network (isolation, reproducibility, manageability, and so on). Currently there are no standard definitions for network virtualisation or its associated architectures. This deliverable therefore proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for partitioning and virtualising the FEDERICA network resources. This evaluation takes into account an initial set of FEDERICA requirements; possible extensions of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need was recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (virtual with regard to the abstracted view of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project. Another important aspect when defining a new architecture is the set of user requirements.
It is crucial that the resulting architecture fits the demands that users may have. Since this deliverable was produced at the same time as the user contact process carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can make behave as software routers or end nodes, onto which they can download the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
On distributed mobile edge computing
Mobile Cloud Computing (MCC) has been proposed to offload the workloads of mobile applications from mobile devices to the cloud in order to not only reduce energy consumption of mobile devices but also accelerate the execution of mobile applications. Owing to the long End-to-End (E2E) delay between mobile devices and the cloud, offloading the workloads of many interactive mobile applications to the cloud may not be suitable. That is, these mobile applications require a huge amount of computing resources to process their workloads as well as a low E2E delay between mobile devices and computing resources, which cannot be satisfied by the current MCC technology.
In order to reduce the E2E delay, a novel cloudlet network architecture is proposed to bring the computing and storage resources from the remote cloud to the mobile edge. In the cloudlet network, each mobile user is associated with a specific Avatar (i.e., a dedicated Virtual Machine (VM) providing computing and storage resources to its mobile user) in the nearby cloudlet via its associated Base Station (BS). Thus, mobile users can offload their workloads to their Avatars with low E2E delay (i.e., one wireless hop). However, mobile users may roam among BSs in the mobile network, and so the E2E delay between mobile users and their Avatars may worsen if the Avatars remain in their original cloudlets. Thus, Avatar handoff is proposed to migrate an Avatar from one cloudlet to another to reduce the E2E delay between the Avatar and its mobile user. The LatEncy aware Avatar handDoff (LEAD) algorithm is designed to determine the location of each mobile user's Avatar in each time slot in order to minimize the average E2E delay among all the mobile users and their Avatars. The performance of LEAD is demonstrated via extensive simulations.
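The abstract does not give LEAD's internals; as a rough illustration only, a per-time-slot handoff decision can be sketched as a greedy trade-off between the E2E delay to each cloudlet and a one-off migration penalty. All names, inputs, and the weight `gamma` below are hypothetical, not taken from the dissertation:

```python
def choose_cloudlet(user_bs, avatar_loc, delay, migration_cost, gamma=1.0):
    """Greedily pick the cloudlet hosting a user's Avatar for one time slot.

    delay[bs][c]   -- E2E delay from base station bs to cloudlet c (assumed input)
    migration_cost -- penalty for moving the Avatar out of its current cloudlet
    gamma          -- weight trading delay against migration cost (assumed)
    """
    best_c = avatar_loc
    best_cost = delay[user_bs][avatar_loc]  # staying put costs only the delay
    for c in delay[user_bs]:
        cost = delay[user_bs][c] + (gamma * migration_cost if c != avatar_loc else 0.0)
        if cost < best_cost:
            best_c, best_cost = c, cost
    return best_c
```

Note that LEAD minimizes the *average* delay across all users, which a per-user greedy rule like this does not capture; the sketch only illustrates the delay-versus-migration trade-off the abstract describes.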
The cloudlet network architecture not only facilitates mobile users in offloading their computational tasks but also empowers Internet of Things (IoT). Popular IoT resources are proposed to be cached in nearby brokers, which are considered as application layer middleware nodes hosted by cloudlets in the cloudlet network, to reduce the energy consumption of servers. In addition, an Energy Aware and latency guaranteed dynamic reSourcE caching (EASE) strategy is proposed to enable each broker to cache suitable popular resources such that the energy consumption from the servers is minimized and the average delay of delivering the contents of the resources to the corresponding clients is guaranteed. The performance of EASE is demonstrated via extensive simulations.
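EASE's actual formulation is not given in the abstract; a toy greedy sketch of the caching decision, ranking resources by energy saving per unit of cache space, can illustrate the idea. All field names and the ranking heuristic are hypothetical:

```python
def select_cache(resources, capacity):
    """Greedy sketch: cache the resources with the highest server-energy
    saving per unit size until the broker's cache capacity is used up.

    resources -- list of dicts with 'id', 'size', and 'energy_saving'
                 (popularity x per-request server energy), all assumed fields
    capacity  -- broker cache capacity (same units as 'size')
    """
    ranked = sorted(resources, key=lambda r: r["energy_saving"] / r["size"], reverse=True)
    cached, used = [], 0
    for r in ranked:
        if used + r["size"] <= capacity:
            cached.append(r["id"])
            used += r["size"]
    return cached
```

The real EASE strategy additionally guarantees an average content-delivery delay, a constraint this sketch omits.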
The future work comprises two parts. First, caching popular IoT resources in nearby brokers may incur unbalanced traffic loads among brokers, thus increasing the average delay of delivering the contents of the resources. How to balance the traffic loads among brokers to speed up the IoT content delivery process therefore requires further investigation. Second, a drone-assisted mobile access network architecture will be briefly investigated to accelerate communications between mobile users and their Avatars.
Extending an open source enterprise service bus for PostgreSQL statement transformation to enable cloud data access
Cloud computing has enabled a new era in the IT industry and many organizations are interested in moving their business operations to the Cloud. This can be realized by designing new applications that follow the prerequisites of the Cloud provider or just by migrating the existing applications to the Cloud. Each application follows a multi-layered architecture defined by its design approach. Application data is of utmost importance and it is managed by the data layer, which is further divided into two sublayers, the Data Access Layer (DAL) and the Database Layer (DBL). The former abstracts the data access functionality and the latter ensures data persistence and data manipulation.
Application migration to the Cloud can be achieved by migrating all the layers it consists of, or only some of them. In many situations it is chosen to move only the DBL to the Cloud and keep the other layers on-premise. Most preferably, the migration of the DBL should be transparent to the upper layers of the application, so that the effort and cost of the migration, especially concerning application refactoring, become minimal. In this thesis, an open source Enterprise Service Bus (ESB), able to provide multi-tenant and transparent data access to the Cloud, is extended with PostgreSQL transformation functionality. Previously the ESB could only support MySQL source databases. After the integration of two new components, a PostgreSQL proxy and a PostgreSQL transformer, we provide support for PostgreSQL source databases and dialects. Furthermore, we validate and evaluate our approach based on the TPC-H benchmark, in order to ensure results based on realistic SQL statements and appropriate example data. We show linear time complexity, O(n), of the developed PostgreSQL transformer.
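The thesis's actual transformation rules are not listed in the abstract; as an illustration of why such a transformer can stay linear in the statement length, two example PostgreSQL-dialect rewrites can each be expressed as a single-pass substitution. The specific rules below are hypothetical examples, not the thesis's rule set:

```python
import re

def transform_pg(stmt):
    """Illustrative single-pass rewrite of PostgreSQL-specific syntax.

    - 'expr::type' casts -> standard CAST(expr AS type)
    - ILIKE              -> LIKE (case-insensitivity handling dropped for brevity)
    Each rule is one regex substitution over the statement, so the whole
    pass is O(n) in the statement length.
    """
    stmt = re.sub(r"(\w+)::(\w+)", r"CAST(\1 AS \2)", stmt)
    stmt = re.sub(r"\bILIKE\b", "LIKE", stmt, flags=re.IGNORECASE)
    return stmt
```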
Designing and Implementing a Distributed Database for a Small Multi-Outlet Business
Data is a fundamental and necessary element for businesses. During their operations they generate a certain amount of data that they need to capture, store, and later retrieve when required. Databases provide the means to store and effectively retrieve data. Such a database can help a business improve its services, be more competitive, and ultimately increase its profits. In this paper, the system requirements of a distributed database are researched for a movie rental and sale store that has at least two outlets in different locations besides the main one. This project investigates the different stages of such a database, namely planning, analysis, decision, implementation, and testing.
Demystifying Dependency Bugs in Deep Learning Stack
Deep learning (DL) applications, built upon a heterogeneous and complex DL stack (e.g., Nvidia GPU, Linux, CUDA driver, Python runtime, and TensorFlow), are subject to software and hardware dependencies across the DL stack. One challenge in dependency management across the entire engineering lifecycle is posed by the asynchronous and radical evolution of, and the complex version constraints among, dependencies. Developers may introduce dependency bugs (DBs) when selecting, using, and maintaining dependencies. However, the characteristics of DBs in the DL stack are still under-investigated, hindering practical solutions to dependency management in the DL stack. To bridge this gap, this paper presents the first comprehensive study characterizing the symptoms, root causes, and fix patterns of DBs across the whole DL stack, based on 446 DBs collected from StackOverflow posts and GitHub issues. For each DB, we first investigate the symptom as well as the lifecycle stage and dependency where the symptom is exposed. Then, we analyze the root cause as well as the lifecycle stage and dependency where the root cause is introduced. Finally, we explore the fix pattern and the knowledge sources used to fix it. Our findings shed light on practical implications for dependency management.
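The complex version constraints the paper identifies as a source of DBs can be made concrete with a toy checker for range constraints of the kind package managers evaluate. This sketch is purely illustrative and is not drawn from the paper:

```python
def parse(v):
    """Turn a dotted version string into a comparable tuple, e.g. '2.4.1' -> (2, 4, 1)."""
    return tuple(int(x) for x in v.split("."))

def satisfies(version, spec):
    """Toy checker for comma-separated constraints like '>=2.0,<3.0'.

    Real resolvers (pip, conda) handle far richer specifiers; this only
    illustrates how a version can fall outside a dependency's allowed range.
    """
    ops = {">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b,
           "==": lambda a, b: a == b, ">": lambda a, b: a > b,
           "<": lambda a, b: a < b}
    v = parse(version)
    for clause in spec.split(","):
        clause = clause.strip()
        for op in (">=", "<=", "==", ">", "<"):  # two-char ops checked first
            if clause.startswith(op):
                if not ops[op](v, parse(clause[len(op):].strip())):
                    return False
                break
    return True
```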
Knowledge based approach to flexible workflow management systems
This thesis was submitted for the degree of Doctor of Philosophy and awarded by the Korea Advanced Institute of Science and Technology (KAIST). Today's business environments are dynamic and uncertain. In order to effectively support business processes in such contexts, workflow management systems must be able to adapt. In this dissertation, the workflow concept is redefined and represented with a set of business rules. Business rules play a central role in organizational workflows in the context of cooperation among actors. To achieve business goals, they constrain the flow of work, the use of resources, and the responsibility mapping between tasks and actors using the role concept. Business rules are explicitly modeled in the Knowledge-based Workflow Model (KWM) using frames.
To increase the adaptability of workflow management systems, KWM has several distinctive features. First, it increases the expressiveness of the workflow model, so that exception handling rules and responsibility mapping rules between tasks and actors, as well as task scheduling rules, are explicitly modeled. Second, the formal definition of KWM enables one to define and analyze the correctness of a workflow schema. The knowledge-based approach enables more powerful analysis of workflow schemas, including checking the consistency and compactness of routing rules as well as the terminality of a workflow. Third, a change propagation mechanism that assures the correctness of a workflow after the modification of its schema increases adaptability. Change propagation rules for the modification primitives are provided to manage workflow evolution. In addition, metarules that control the rules in KWM are used to handle exceptions that occur in a running workflow instance. Workflow participants can easily change the workflow schema of a workflow instance with the support of extra rules and a metarule.
Based on KWM, K-WFMS (Knowledge-based WorkFlow Management System) has been implemented in a client/server architecture. The inference shell of a knowledge-based system is employed for the enactment of business rules and is integrated with database systems. A real application based on the KWM architecture has shown that system performance can increase notably by reducing the number of rules and facts used in the course of workflow enactment.
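The interplay of routing rules and metarules described above can be sketched in a few lines. The structures below (rule and metarule dictionaries, a first-match firing policy) are hypothetical simplifications, not KWM's frame representation:

```python
def enact(task, facts, rules, metarules):
    """Toy rule-based routing step in the spirit of KWM.

    Metarules are evaluated first and may suppress ordinary routing rules,
    mirroring how metarules handle exceptions in a running workflow instance.
    Each rule is a dict with a 'when' predicate and a 'next' task (assumed shape).
    """
    active = list(rules)
    for meta in metarules:
        if meta["when"](task, facts):
            active = [r for r in active if r["name"] not in meta["suppress"]]
    for rule in active:
        if rule["when"](task, facts):
            return rule["next"]  # first firing rule decides the next task
    return None                  # no rule fired: the workflow waits or escalates
```

For example, a metarule triggered by a suspected-fraud fact could suppress the normal approval rule so that a fallback review rule fires instead, without editing the schema itself.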
Extending an open source enterprise service bus for SQL statement transformation to enable cloud data access
Cloud computing has gained tremendous popularity in the IT industry over the past decade for its resource-sharing and cost-reducing nature. To move existing applications to the Cloud, they can be redesigned to fit the Cloud paradigm, or their existing components can be migrated partially or totally to the Cloud. In application design, a three-tier architecture is often used, consisting of a presentation layer, a business logic layer, and a data layer. The presentation layer handles the interaction between the application and the user; the business layer provides the business logic; and the data layer deals with data storage. The data layer is further divided into the Data Access Layer, which abstracts the data access functionality, and the Database Layer, which ensures data persistence and data manipulation.
On various occasions, corporations decide to move their application's database layer to the Cloud due to its high resource consumption and maintenance cost. However, there is currently little support or guidance on how to enable appropriate data access to the Cloud. Moreover, the diversity and heterogeneity of database systems increase the difficulty of adapting the existing presentation and business layers to the migrated database layer. In this thesis, we focus on the heterogeneity of the SQL language across different database systems. We extend an existing open source Enterprise Service Bus with Cloud data access capability for the transformation of SQL statements used in the presentation and business layers into the SQL dialect used by the Cloud database system back end. With the prototype we develop, we validate our approach against real-world scenarios with Cloud services such as FlexiScale and Amazon RDS. In addition, we analyze the complexity of the algorithm we implemented for parsing and transforming the SQL statements and confirm the complexity through performance measurements.
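The kind of dialect heterogeneity this thesis targets can be illustrated with two well-known MySQL-to-ANSI differences. The rules below are examples chosen for illustration, not the thesis's actual transformation set:

```python
import re

def to_cloud_dialect(stmt):
    """Illustrative rewrite of MySQL-flavoured SQL toward a more ANSI-style
    cloud back end.

    - `identifier` backquotes   -> "identifier" double quotes
    - LIMIT offset, count       -> LIMIT count OFFSET offset
    """
    stmt = re.sub(r"`([^`]+)`", r'"\1"', stmt)
    stmt = re.sub(r"\bLIMIT\s+(\d+)\s*,\s*(\d+)", r"LIMIT \2 OFFSET \1", stmt)
    return stmt
```

Even these two small rules show why transparent migration needs a transformation layer: the upper layers keep emitting the source dialect while the back end only accepts its own.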