    Optimized traffic scheduling and routing in smart home networks

    Home networks are evolving rapidly to include heterogeneous physical access technologies and a large number of smart devices that generate different types of traffic with different distributions and different Quality of Service (QoS) requirements. Because of their particular architectures, which are very dense and very dynamic, the traditional one-pair-node shortest-path solution is no longer efficient for handling inter-smart-home-network (inter-SHN) routing constraints such as delay, packet loss, and bandwidth over all-pair-node heterogeneous links. In addition, current QoS-aware scheduling methods consider only conventional priority metrics based on the IP Type of Service (ToS) field when making bandwidth-allocation decisions. Such priority-based scheduling methods are not optimal for providing both QoS and Quality of Experience (QoE), especially for smart home applications, since higher-priority traffic does not necessarily require a more stringent delay than lower-priority traffic. Moreover, current QoS-aware scheduling methods in the intra-smart-home network (intra-SHN) do not consider concurrent traffic caused by fluctuations in intra-SHN traffic distributions.

    The goal of this dissertation is therefore to build an efficient heterogeneous multi-constrained routing mechanism and an optimized traffic scheduling tool, in order to maintain cost-effective communication between all wired and wireless connected devices in inter-SHNs and to process concurrent and non-concurrent traffic effectively in the intra-SHN. This will help Internet service providers (ISPs) and home users enhance the overall QoS and QoE of their applications while maintaining relevant communication in both inter-SHNs and the intra-SHN. To meet this goal, three key issues must be addressed in our framework: i) how to build a cost-effective routing mechanism in heterogeneous inter-SHNs; ii) how to efficiently schedule multi-sourced intra-SHN traffic based on both QoS and QoE; and iii) how to design an optimized queuing model for concurrent intra-SHN traffic while considering its QoS requirements.

    To address the first problem, we present an analytical framework for dynamically optimizing data flows in inter-SHNs using software-defined networking (SDN). We formulate QoS-based routing as a constrained shortest-path problem and propose an optimized solution (QASDN) that determines the minimal cost between all pairs of nodes in the network, taking into account the different types of physical access and the network utilization patterns. To address the second issue and close the gap between QoS and QoE, we propose a new queuing model for QoS-level pair traffic with mixed arrival distributions in the smart home network (QP-SH) that makes dynamic QoS-aware scheduling decisions meeting the delay requirements of all traffic while preserving its degree of criticality. A new metric is defined that combines the ToS field with the maximum number of packets the system's service can process within the maximum required delay. Finally, to address the third issue, we present an analytic model for QoS-aware scheduling optimization of concurrent intra-SHN traffic with mixed arrival distributions, using probabilistic queuing disciplines. We formulate a hybrid QoS-aware scheduling problem for concurrent traffic in the intra-SHN, propose an innovative queuing model (QC-SH) based on the auction model from game theory to provide fair multiple access over different communication channels/ports, and design an applicable model that implements the auction game on both sides, traffic sources and the home gateway, without changing the structure of the IEEE 802.11 standard. The results of our work offer SHNs more effective data transfer between all heterogeneous connected devices with optimal resource utilization, dynamic QoS/QoE-aware traffic processing, and an innovative model for optimizing concurrent SHN traffic scheduling with an enhanced fairness strategy. Numerical results show improvements of up to 90% in network resource utilization, 77% in bandwidth, 40% in scheduling with QoS and QoE, and 57% in concurrent traffic scheduling delay using our proposed solutions compared with traditional methods.
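
    The constrained shortest-path formulation is concrete enough to sketch. The following is a minimal illustration (not the QASDN algorithm itself) of a label-setting search that minimizes path cost subject to a delay bound, the kind of multi-constrained routing problem the abstract describes; the graph encoding and all names are assumptions made for the example.

```python
import heapq

def constrained_shortest_path(graph, src, dst, max_delay):
    """Minimal-cost path from src to dst subject to a total-delay bound.

    graph: {u: [(v, cost, delay), ...]} -- each heterogeneous link carries
    both a routing cost and a delay, mirroring the multi-constrained
    formulation in the abstract. Illustrative only, not QASDN itself.
    """
    labels = {}                       # node -> non-dominated (cost, delay) labels
    pq = [(0, 0, src)]                # explore states in increasing cost order
    while pq:
        cost, delay, u = heapq.heappop(pq)
        if u == dst:
            return cost, delay        # first pop of dst is the cheapest feasible
        if any(c <= cost and d <= delay for c, d in labels.get(u, [])):
            continue                  # dominated in both metrics: prune
        labels.setdefault(u, []).append((cost, delay))
        for v, c, d in graph.get(u, []):
            if delay + d <= max_delay:
                heapq.heappush(pq, (cost + c, delay + d, v))
    return None                       # no path satisfies the delay constraint

# An all-pairs variant, as in the QoS routing problem, simply repeats this
# search for every (source, destination) pair.
g = {"a": [("b", 1, 5), ("c", 4, 1)], "b": [("c", 1, 5)], "c": []}
print(constrained_shortest_path(g, "a", "c", 6))  # -> (4, 1): cheaper path is too slow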

    CSP and Real-Time: Reality or Illusion?


    WS-Pro: a Petri net based performance-driven service composition framework

    As an emerging area gaining prevalence in industry, Web Services were established to satisfy the need for better flexibility and higher reliability in web applications. However, due to the lack of reliable frameworks and the difficulty of constructing a versatile service composition platform, web developers have encountered major obstacles in the large-scale deployment of web services. Meanwhile, performance has been one of the major concerns and a largely unexplored area in Web Services research. There is high demand for feasible solutions to design, monitor, and deploy web service systems that can adapt to failures, especially performance failures. Though many techniques have been proposed to solve this problem, none of them offers a comprehensive solution that overcomes the difficulties challenging practitioners. Central to performance-engineering studies, performance analysis and performance adaptation are of paramount importance to the success of a software project. The industry has learned through many hard lessons the significance of well-founded and well-executed performance engineering plans. An important fact is that it is too expensive to tackle performance evaluation, mostly through performance testing, after the software is developed; this is especially true in recent decades, as software complexity has risen sharply. After a system is deployed, performance adaptation is essential to maintaining and improving its reliability: it provides techniques to mitigate the consequences of performance failures and is therefore an important research issue, particularly for mission-critical software systems and systems with inevitably frequent performance failures, such as Web Services.

    This dissertation focuses on the Web Services framework and proposes a performance-driven service composition scheme, called WS-Pro, that supports both performance analysis and performance adaptation. A formal transformation from WS-BPEL to Petri nets is first defined to enable the analysis of system properties and facilitate quality prediction. A state-transition-based proof shows that the transformed Petri net model correctly simulates the behavior of the WS-BPEL process. The generated Petri net model is augmented with performance data supplied by both historical and runtime data. Results of executing the Petri nets suggest that optimal composition plans can be achieved with the proposed method. The performance of the service composition procedure itself is an important research issue that has not been sufficiently treated, yet it is critical for dynamic service composition, where re-planning must be done in a timely manner. To improve the performance of the composition procedure and enhance performance adaptation, this dissertation presents an algorithm that removes loops in the reachability graphs, so that a large portion of the computation can be moved to a pre-processing stage and the response time at runtime is shortened. We also extended WS-Pro to the ubiquitous computing area to improve fault tolerance.
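
    To make the Petri net idea concrete, here is a toy sketch (not the dissertation's formal WS-BPEL-to-Petri-net transformation) of how a BPEL-style sequence of two service invocations can be simulated as transition firings; the class, place, and transition names are illustrative assumptions.

```python
class PetriNet:
    """Toy Petri net: places hold tokens; a transition fires when all of
    its input places are marked, moving tokens to its output places."""
    def __init__(self):
        self.marking = {}              # place -> token count
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        ins, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in ins)

    def fire(self, name):
        ins, outs = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        for p in ins:
            self.marking[p] -= 1
        for p in outs:
            self.marking[p] = self.marking.get(p, 0) + 1

# A BPEL <sequence> of two invokes maps naturally to a chain of transitions:
net = PetriNet()
net.marking = {"start": 1}
net.add_transition("invoke_A", ["start"], ["afterA"])
net.add_transition("invoke_B", ["afterA"], ["done"])
net.fire("invoke_A")
net.fire("invoke_B")
assert net.marking["done"] == 1        # the process ran to completion
```

    Executing such a model (with timing data attached to transitions) is what allows properties and performance of the composed process to be analyzed before deployment.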

    Functional Programming for Embedded Systems

    Embedded systems application development has traditionally been carried out in low-level, machine-oriented programming languages like C or assembler, which can result in unsafe, error-prone, and difficult-to-maintain code. Functional programming, with features such as higher-order functions, algebraic data types, polymorphism, strong static typing, and automatic memory management, appears to be an ideal candidate to address the issues with low-level languages plaguing embedded systems. However, embedded systems usually run on heavily memory-constrained devices, with memory on the order of hundreds of kilobytes, and applications running on such devices embody the general characteristics of being (i) I/O-bound, (ii) concurrent, and (iii) timing-aware. Popular functional language compilers and runtimes either do not fare well with such scarce memory resources or do not provide high-level abstractions that address all three listed characteristics. This work attempts to close this gap by investigating and proposing high-level abstractions specialised for I/O-bound, concurrent, and timing-aware embedded-systems programs. We implement the proposed abstractions on eagerly evaluated, statically typed functional languages running natively on microcontrollers.

    Our contributions are divided into two parts. Part 1 presents a functional reactive programming language, Hailstorm, that tracks side effects like I/O in its type system using a feature called resource types; Hailstorm's programming model is illustrated on the GRiSP microcontroller board. Part 2 comprises two papers that describe the design and implementation of Synchron, a runtime API that provides a uniform message-passing framework for handling software messages as well as hardware interrupts. Additionally, the Synchron API supports a novel timing operator to capture the notion of time common in embedded applications. The Synchron API is implemented as a virtual machine, SynchronVM, which runs on the NRF52 and STM32 microcontroller boards. We present programming examples that illustrate the concurrency, I/O, and timing capabilities of the VM, and provide benchmarks on the response time, memory usage, and power usage of SynchronVM.
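
    As a rough illustration of the idea behind a uniform message-passing framework with a timing operator, the sketch below lets a simulated "hardware interrupt" and ordinary software senders post to the same channel, whose receive is deadline-bounded. The names (`Channel`, `recv_within`) are hypothetical and are not the Synchron API, and this is Python rather than the natively compiled functional languages the work targets.

```python
import queue, threading

class Channel:
    """One mailbox for both software messages and (simulated) interrupts."""
    def __init__(self):
        self._q = queue.Queue()

    def send(self, msg):
        self._q.put(msg)

    def recv_within(self, seconds):
        """Block for the next message, but never longer than `seconds`."""
        try:
            return self._q.get(timeout=seconds)
        except queue.Empty:
            return None                # deadline passed with no message

ch = Channel()
# A timer standing in for a hardware interrupt source:
threading.Timer(0.05, lambda: ch.send("button-press")).start()
print(ch.recv_within(0.5))             # -> "button-press"
print(ch.recv_within(0.1))             # -> None (deadline expired)
```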

    Reliable and efficient webserver management for task scheduling in edge-cloud platform

    Managing cloud web servers to execute workflows while meeting quality-of-service (QoS) requirements in a distributed cloud environment is a challenging task. A body of work exists on scheduling workflows in heterogeneous cloud environments; moreover, rapid developments in cloud computing, such as edge-cloud computing, enable new methods for scheduling workflows in a heterogeneous cloud environment to process tasks from the internet of things (IoT), event-driven applications, and other network applications. Current workflow-scheduling methods fail to provide a good trade-off between reliable performance and minimal delay. In this paper, a novel web server resource management framework, the reliable and efficient webserver management (REWM) framework, is presented for the edge-cloud environment. Experiments conducted on complex bioinformatics workflows show that the proposed REWM significantly reduces cost and energy in comparison with a standard webserver management methodology.

    Distributed multimedia systems

    A distributed multimedia system (DMS) is an integrated communication, computing, and information system that enables the processing, management, delivery, and presentation of synchronized multimedia information with quality-of-service guarantees. Multimedia information may include discrete media data, such as text, data, and images, and continuous media data, such as video and audio. Such a system enhances human communication by exploiting both the visual and aural senses, and provides the ultimate flexibility in work and entertainment, allowing one to collaborate with remote participants, view movies on demand, access online digital libraries from the desktop, and so forth. In this paper, we present a technical survey of DMSs: we give an overview of distributed multimedia systems, examine the fundamental concept of digital media, identify the applications, and survey the important enabling technologies.

    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services that achieves reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of the users consuming their services, so load coordination must happen automatically, and the distribution of services must change in response to changes in load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and databases) for handling sudden variations in service demand. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a rigorous performance evaluation study using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, offering significant performance gains in response time and cost savings under dynamic workload scenarios.

    Comment: 20 pages, 4 figures, 3 tables, conference paper
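
    A minimal sketch of the kind of load coordination the paper motivates, under invented names and numbers (this is not CloudSim and not InterCloud's actual policy): greedily place a request on the least relatively loaded data center that can still meet a response-time budget, and fall back to federation when none can.

```python
# Hypothetical federated data centers; all fields and figures are made up.
datacenters = [
    {"name": "us-east",  "capacity": 100, "load": 70, "rtt_ms": 40},
    {"name": "eu-west",  "capacity": 80,  "load": 20, "rtt_ms": 90},
    {"name": "ap-south", "capacity": 60,  "load": 55, "rtt_ms": 160},
]

def place(request_size, rtt_budget_ms):
    """Pick the least relatively loaded site that fits the request and budget."""
    candidates = [dc for dc in datacenters
                  if dc["load"] + request_size <= dc["capacity"]
                  and dc["rtt_ms"] <= rtt_budget_ms]
    if not candidates:
        return None                    # trigger federation: lease capacity elsewhere
    dc = min(candidates, key=lambda d: d["load"] / d["capacity"])
    dc["load"] += request_size
    return dc["name"]

print(place(10, 100))                  # -> "eu-west" (lowest relative load in budget)
```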

    BeeFlow: Behavior Tree-based Serverless Workflow Modeling and Scheduling for Resource-Constrained Edge Clusters

    Serverless computing has gained popularity in edge computing due to its flexible features, including the pay-per-use pricing model, auto-scaling capabilities, and multi-tenancy support. Complex Serverless-based applications typically rely on Serverless workflows (also known as Serverless function orchestration) to express task execution logic, and numerous application- and system-level optimization techniques have been developed for Serverless workflow scheduling. However, there has been limited exploration of optimizing Serverless workflow scheduling in edge computing systems, particularly in high-density, resource-constrained environments such as system-on-chip clusters and single-board-computer clusters. In this work, we find that existing Serverless workflow scheduling techniques typically assume models with limited expressiveness and cause significant resource contention. To address these issues, we propose modeling Serverless workflows using behavior trees, a novel approach fundamentally different from existing directed-acyclic-graph- and state-machine-based models. Behavior-tree-based modeling allows for easy analysis without compromising workflow expressiveness. We further present observations, derived from the inherent tree structure of behavior trees, that enable contention-free function collections and awareness of exact and empirical concurrent function invocations. Based on these observations, we introduce BeeFlow, a behavior-tree-based Serverless workflow system tailored for resource-constrained edge clusters. Experimental results demonstrate that BeeFlow achieves up to 3.2X speedup in a high-density, resource-constrained edge testbed and 2.5X speedup in a high-profile cloud testbed, compared with the state of the art.

    Comment: Accepted by Journal of Systems Architecture
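
    A toy behavior-tree interpreter helps show why the model is easy to analyze: control flow is fixed by the tree's node types. This sketch is illustrative only and is not BeeFlow's implementation; all names are made up.

```python
class Task:
    """Leaf node wrapping a function; returns True (success) or False (failure)."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return self.fn()

class Sequence:
    """Runs children in order; fails (and stops) at the first failure."""
    def __init__(self, *children): self.children = children
    def tick(self):
        return all(c.tick() for c in self.children)

class Selector:
    """Tries children in order; succeeds (and stops) at the first success."""
    def __init__(self, *children): self.children = children
    def tick(self):
        return any(c.tick() for c in self.children)

# A Sequence never runs two children concurrently, so functions under the
# same Sequence can share resources contention-free -- the kind of structural
# observation the abstract alludes to.
wf = Sequence(Task("fetch", lambda: True),
              Selector(Task("fast_path", lambda: False),
                       Task("fallback", lambda: True)),
              Task("store", lambda: True))
print(wf.tick())                       # -> True (fallback covered the failure)
```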

    Performance assessment of real-time data management on wireless sensor networks

    Technological advances in recent years have allowed the maturing of Wireless Sensor Networks (WSNs), which aim at performing environmental monitoring and data collection. This sort of network is composed of hundreds, thousands, or perhaps even millions of tiny smart computers known as wireless sensor nodes, which may be battery-powered and equipped with sensors, a radio transceiver, a Central Processing Unit (CPU), and some memory. However, due to the small size and the requirement of low-cost nodes, sensor node resources such as processing power, storage, and especially energy are very limited. Once the sensors take their measurements of the environment, the problem of storing and querying the data arises: the sensors have restricted storage capacity, and the ongoing interaction between sensors and environment produces huge amounts of data. Techniques for data storage and querying in WSNs can be based on either external or local storage. External storage, called the warehousing approach, is a centralized scheme in which the data gathered by the sensors are periodically sent to a central database server, where user queries are processed. Local storage, on the other hand, called the distributed approach, exploits the computational capabilities of the sensors, which act as local databases. The data is stored both in a central database server and in the devices themselves, enabling one to query both.

    WSNs are used in a wide variety of applications, which may perform various operations on the collected sensor data. For certain applications, however, the sensor data must closely reflect the current state of the targeted environment. The environment changes constantly, while the data is collected at discrete moments in time; the collected data therefore has a temporal validity and, as time advances, becomes less accurate, until it no longer reflects the state of the environment at all. Thus, applications such as industrial automation and aviation must query and analyze the data within a bounded time in order to make decisions and react efficiently. In this context, the design of efficient real-time data management solutions is necessary to deal with both time constraints and energy consumption.

    This thesis studies real-time data management techniques for WSNs. In particular, it focuses on the challenges of handling real-time data storage and querying for WSNs and on efficient real-time data management solutions. First, the main requirements of real-time data management are identified, and the real-time data management solutions for WSNs available in the literature are presented. Second, in order to provide an energy-efficient real-time data management solution, the techniques used to manage data and queries in WSNs based on the distributed paradigm are studied in depth. Indeed, many research works argue that the distributed approach is the most energy-efficient way of managing data and queries in WSNs, instead of warehousing; in addition, this approach can provide quasi-real-time query processing, because the most current data is retrieved from the network. Third, based on these two studies, and considering the complexity of developing, testing, and debugging this kind of complex system, a model for a simulation framework for real-time database management on WSNs that uses the distributed approach is proposed, together with its implementation. This will help explore various real-time database techniques for WSNs before deployment, saving money and time; moreover, one may improve the proposed model by adding the simulation of protocols or by integrating part of this simulator into another available simulator. To validate the model, a case study considering real-time constraints as well as energy constraints is discussed. Fourth, a new architecture that combines statistical modeling techniques with the distributed approach, together with a query processing algorithm that optimizes real-time user query processing, is proposed. This combination enables a query processing algorithm based on admission control that uses the error tolerance and the probabilistic confidence interval as admission parameters. Experiments on real-world as well as synthetic data sets demonstrate that the proposed solution optimizes real-time query processing to save more energy while maintaining low latency.

    Fundação para a Ciência e Tecnologia
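
    The admission-control idea in the fourth contribution can be sketched simply: if a statistical model of the sensed value is already accurate enough for the query's error tolerance, answer from the model and spare the network. The sketch below assumes a normal confidence interval; the function names and numbers are illustrative, not the thesis's actual algorithm.

```python
import math

def admit_to_network(model_std, n_samples, error_tolerance):
    """True if the model's ~95% confidence half-width exceeds the tolerance,
    meaning the live sensors must be queried (1.96 = normal 95% quantile)."""
    half_width = 1.96 * model_std / math.sqrt(n_samples)
    return half_width > error_tolerance

def answer_query(model, error_tolerance):
    if admit_to_network(model["std"], model["n"], error_tolerance):
        return "query sensors"         # costly: radio traffic and latency
    return model["mean"]               # cheap: the model is good enough

# Hypothetical temperature model built from historical readings:
temp_model = {"mean": 21.4, "std": 0.8, "n": 64}
print(answer_query(temp_model, 0.5))   # -> 21.4 (half-width ~0.196 fits tolerance)
print(answer_query(temp_model, 0.1))   # -> "query sensors"
```

    Skipping network round-trips whenever the confidence interval fits the tolerance is what lets such a scheme trade a bounded loss of accuracy for energy savings and lower latency.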