201 research outputs found

    Cloud Service Selection System Approach based on QoS Model: A Systematic Review

    The Internet of Things (IoT) has recently received considerable interest from researchers. IoT is envisioned as a component of the future Internet, which will comprise billions of intelligent, communicating "things" in the coming decades. IoT is a diverse, multi-layer, wide-area network composed of a number of network links. Service discovery and on-demand provisioning are difficult in such networks, which comprise a variety of resource-limited devices. The development of new IoT services will drive growth in service-computing-related fields; cloud service composition therefore provides significant value by integrating single services. Because of the rapid spread of cloud services and their differing Quality of Service (QoS), identifying the necessary tasks and assembling a service model with specific performance assurances has become a major technological problem and a widespread concern. Various strategies are used in service composition, e.g., clustering, fuzzy logic, deep learning, particle swarm optimisation, and the cuckoo search algorithm. Researchers have made significant efforts in this field, and computational intelligence approaches are considered useful in tackling such challenges. Even so, no systematic research on this topic has been done with specific attention to computational intelligence. This publication therefore provides a thorough overview of QoS-aware web service composition, covering QoS models and approaches, and identifies directions for future work.
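    The selection side of QoS-aware composition is often illustrated with simple additive weighting over normalised QoS attributes. The sketch below is a hypothetical minimal example, not a method from the review; the attribute names, weights, and candidate values are all illustrative.

    ```python
    # Hypothetical additive-weighting QoS selection sketch; attributes,
    # weights, and candidate services are illustrative, not from the review.

    def normalise(values, benefit=True):
        """Min-max normalise QoS values.

        benefit=True  -> higher is better (e.g. availability)
        benefit=False -> lower is better  (e.g. latency, cost)
        """
        lo, hi = min(values), max(values)
        if hi == lo:
            return [1.0] * len(values)
        if benefit:
            return [(v - lo) / (hi - lo) for v in values]
        return [(hi - v) / (hi - lo) for v in values]

    def select_service(candidates, weights):
        """Rank candidate services by a weighted sum of normalised QoS scores."""
        names = list(candidates)
        scores = {n: 0.0 for n in names}
        for attr, w in weights.items():
            benefit = attr == "availability"  # latency and cost: lower is better
            norm = normalise([candidates[n][attr] for n in names], benefit)
            for n, s in zip(names, norm):
                scores[n] += w * s
        return max(names, key=scores.get), scores

    best, scores = select_service(
        {"s1": {"latency_ms": 120, "cost": 0.8, "availability": 0.99},
         "s2": {"latency_ms": 40,  "cost": 1.5, "availability": 0.97},
         "s3": {"latency_ms": 200, "cost": 0.3, "availability": 0.90}},
        {"latency_ms": 0.5, "cost": 0.2, "availability": 0.3},
    )  # best == "s2": low latency dominates under these weights
    ```

    Metaheuristics such as particle swarm optimisation or cuckoo search come into play when composing many such selections under end-to-end QoS constraints, where exhaustive scoring becomes infeasible.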

    QoS-aware Resource-utilisation Self-adaptive (QRS) Framework for Distributed Data Stream Management Systems

    The last decade witnessed a vast number of Big Data applications in science and industry alike. Such applications generate large amounts of streaming data and real-time event-based information, which must be analysed under specific quality-of-service (QoS) constraints and within extremely low latencies. Many distributed data stream processing approaches follow a best-effort QoS principle and lack the capability to adapt dynamically to fluctuations in data input rates. Most proposed solutions either drop some of the input data (load shedding) or degrade the level of QoS the system provides. Another approach is to limit the data ingestion rate using techniques such as backpressure heartbeats, which can affect the worker nodes and cause output delay. Such approaches are not suitable for certain mission-critical applications such as critical infrastructure surveillance, monitoring and signalling, vital health-care monitoring, and military command-and-control streaming applications. This research presents a novel QoS-aware, Resource-utilisation Self-adaptive (QRS) Framework for managing data stream processing systems. The framework proposes a comprehensive usage model that combines proactive operations with simultaneous prompt actions. The prompt actions instantly collect and analyse performance and QoS metrics alongside the running data streams, ensuring that data does not lose its current value, whereas the proactive operations construct a prediction model that anticipates QoS violations and performance degradation in the system. The model triggers the decision process for dynamically tuning resources or adopting a new scheduling strategy. A proof-of-concept model was built that accurately represents the working conditions of a distributed data stream management ecosystem, and the proposed framework was validated and verified.
The framework’s components were fully implemented over the emerging and prevalent distributed data stream processing system, Apache Storm. The framework predicts the system’s capacity to handle the data load and input rate with up to 81% accuracy, rising to 100% when anomaly detection techniques are incorporated. Moreover, the framework performs well compared with Storm’s default round-robin and resource-aware schedulers: it handles high data rates better by re-balancing the topology and re-scheduling resources, based on the prediction models, well ahead of any congestion or QoS degradation.
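The proactive side of such a framework can be illustrated with a toy trend predictor: fit a linear trend to recent input-rate samples and raise an alarm if the trend is projected to exceed the system's capacity. This is a hypothetical sketch of the general idea, not the QRS framework's actual model; the window, capacity, and horizon values are invented.

```python
# Illustrative linear-trend predictor for proactive QoS management.
# Window size, capacity, and horizon are hypothetical parameters,
# not values from the QRS framework.

from statistics import mean

def predict_violation(samples, capacity, horizon):
    """Return True if the linear trend of `samples` (e.g. tuples/s per
    monitoring tick) is projected to exceed `capacity` within `horizon`
    future ticks."""
    n = len(samples)
    xs = list(range(n))
    x_bar, y_bar = mean(xs), mean(samples)
    denom = sum((x - x_bar) ** 2 for x in xs)
    slope = sum((x - x_bar) * (y - y_bar)
                for x, y in zip(xs, samples)) / denom
    intercept = y_bar - slope * x_bar
    predicted = intercept + slope * (n - 1 + horizon)
    return predicted > capacity

# Input rate rising from 100 to 190 tuples/s over 10 ticks, capacity 250:
rates = [100, 110, 120, 130, 140, 150, 160, 170, 180, 190]
alarm = predict_violation(rates, capacity=250, horizon=8)
```

In a real deployment the alarm would trigger the decision process (re-balancing the topology or re-scheduling resources) before the violation occurs, rather than reacting after congestion is observed.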

    A Game-Theoretic Approach to Strategic Resource Allocation Mechanisms in Edge and Fog Computing

    With the rapid growth of the Internet of Things (IoT), cloud-centric application management raises questions about quality of service for real-time applications. Fog and edge computing (FEC) complement the cloud by filling the gap between the cloud and IoT. Managing multiple resources across distributed and administratively independent FEC nodes is a key challenge in ensuring the quality of the end-user’s experience. To improve resource utilisation and system performance, researchers have proposed many fair allocation mechanisms for resource management. Dominant Resource Fairness (DRF), a resource allocation policy for multiple resource types, meets most of the required fair-allocation characteristics. However, DRF suits centralised resource allocation and does not consider the effects (or feedback) of large-scale distributed environments such as multi-controller software-defined networking (SDN). Nash bargaining from micro-economic theory and competitive equilibrium from equal incomes (CEEI) are well suited to dynamic optimisation problems that ‘proportionately’ share resources among distributed participants. Although CEEI’s decentralised policy guarantees load balancing for performance isolation, it is not fault-proof for computation offloading. This thesis proposes a hybrid and fair allocation mechanism for the rejuvenation of decentralised SDN controller deployment. We apply multi-agent reinforcement learning (MARL), robust against adversarial controllers, to enable efficient priority scheduling for FEC. Motivated by software cybernetics and homeostasis, weighted DRF is generalised by applying the principles of feedback (positive and/or negative network effects) in reverse game theory (GT) to design hybrid scheduling schemes for joint multi-resource and multi-task offloading/forwarding in FEC environments.
In the first study, monotonic scheduling for joint offloading at the federated edge is addressed by proposing a truthful (algorithmic) mechanism to neutralise harmful negative and positive distributive bargaining externalities, respectively. The IP-DRF scheme is a MARL approach applying a partition form game (PFG) to guarantee second-best Pareto optimality (SBPO) in the allocation of multiple resources from a deterministic policy, in both population and resource non-monotonicity settings. In the second study, we propose the DFog-DRF scheme to address truthful fog scheduling with bottleneck fairness in fault-prone wireless hierarchical networks, applying constrained coalition formation (CCF) games to implement MARL. The multi-objective optimisation problem of fog throughput maximisation is solved via a constraint dimensionality-reduction methodology that uses fairness constraints for efficient placement of gateways and low-level controllers. For evaluation, we develop an agent-based framework to implement fair allocation policies in distributed data-centre environments. Empirically, the deterministic policy of the IP-DRF scheme provides SBPO and reduces the average execution and turnaround times by 19% and 11.52% compared with the Nash bargaining or CEEI deterministic policy for 57,445 cloudlets in population non-monotonic settings. The processing cost of tasks shows significant improvement (6.89% and 9.03% for fixed and variable pricing) in the resource non-monotonic setting, using 38,000 cloudlets. The DFog-DRF scheme, when benchmarked against the asset-fair (MIP) policy, shows superior performance (less than 1% in time complexity) for up to 30 FEC nodes. Furthermore, empirical results using 210 mobiles and 420 applications demonstrate the efficacy of our hybrid scheduling scheme for hierarchical clustering, considering latency and network usage for throughput maximisation. Abubakar Tafawa Balewa University, Bauchi (TETFund, Nigeria).
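The DRF baseline that the thesis generalises can be sketched with the classic progressive-filling formulation (Ghodsi et al.): repeatedly grant one task to the user with the smallest dominant share. The capacities and demands below are the standard textbook example, not data from the thesis; the IP-DRF and DFog-DRF extensions themselves are not reproduced.

```python
# Minimal Dominant Resource Fairness (DRF) sketch via progressive filling.
# Capacities/demands follow the classic example; the thesis's weighted,
# feedback-driven extensions are not modelled here.

def drf(capacities, demands, epsilon=1e-9):
    """Allocate whole tasks to users, always favouring the user with the
    smallest dominant share (max fraction of any resource they hold).
    Simplification: stop as soon as the favoured user's next task no
    longer fits. Returns {user: task_count}."""
    used = {r: 0.0 for r in capacities}
    tasks = {u: 0 for u in demands}
    while True:
        def dom_share(u):
            # dominant share = largest fraction of any resource held by u
            return max(tasks[u] * demands[u][r] / capacities[r]
                       for r in capacities)
        u = min(demands, key=dom_share)
        if any(used[r] + demands[u][r] > capacities[r] + epsilon
               for r in capacities):
            break
        for r in capacities:
            used[r] += demands[u][r]
        tasks[u] += 1
    return tasks

# User A is memory-dominant, user B is CPU-dominant.
alloc = drf({"cpu": 9, "mem": 18},
            {"A": {"cpu": 1, "mem": 4}, "B": {"cpu": 3, "mem": 1}})
# alloc == {"A": 3, "B": 2}: equalised dominant shares of 2/3 each
```

DRF's appeal, as the abstract notes, is that this simple rule is strategy-proof and envy-free in the centralised setting; the thesis's contribution lies in making it robust in decentralised, non-monotonic FEC environments.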

    Mobile Ad-Hoc Networks

    Being infrastructure-less and without central administrative control, wireless ad-hoc networking plays an increasingly important role in extending the coverage of traditional wireless infrastructure (cellular networks, wireless LANs, etc.). This book presents state-of-the-art techniques and solutions for wireless ad-hoc networks, focusing on the following topics: vehicular ad-hoc networks, security and caching, TCP in ad-hoc networks, and emerging applications. It aims to provide network engineers and researchers with design guidelines for large-scale wireless ad-hoc networks.

    A comprehensive survey on Fog Computing: State-of-the-art and research challenges

    Cloud computing, with its three key facets (i.e., Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service) and its inherent advantages (e.g., elasticity and scalability), still faces several challenges. The distance between the cloud and the end devices can be an issue for latency-sensitive applications such as disaster management and content delivery. Service level agreements (SLAs) may also impose processing at locations where the cloud provider has no data centers. Fog computing is a novel paradigm that addresses such issues. It enables provisioning of resources and services outside the cloud, at the edge of the network, closer to end devices, or, eventually, at locations stipulated by SLAs. Fog computing is not a substitute for cloud computing but a powerful complement: it enables processing at the edge while still offering the possibility of interacting with the cloud. This paper presents a comprehensive survey on fog computing. It critically reviews the state of the art in the light of a concise set of evaluation criteria, covering both the architectures and the algorithms that make up fog systems. Challenges and research directions are also introduced, the lessons learned are reviewed, and the prospects are discussed in terms of the key role fog is likely to play in emerging technologies such as the tactile Internet.

    The Internet of Everything

    In the era before IoT, the World Wide Web, the Internet, Web 2.0, and social media made people’s lives more comfortable by providing web services and enabling access to personal data irrespective of location. Further, to save time and improve efficiency, there was a need for machine-to-machine communication, automation, smart computing, and ubiquitous access to personal devices. This need gave birth to the phenomenon of the Internet of Things (IoT) and, further, to the concept of the Internet of Everything (IoE).

    Application of cognitive radio based sensor network in smart grids for efficient, holistic monitoring and control.

    Doctoral Degree. University of KwaZulu-Natal, Durban. This thesis is directed towards the application of a cognitive radio based sensor network (CRSN) in the smart grid (SG) for efficient, holistic monitoring and control. The work enables sensor-network and wireless communication devices to utilise spectrum via the Dynamic Spectrum Access (DSA) capability of a cognitive radio (CR), and provides end-to-end communication access technology for unified monitoring and control in smart grids. The smart grid is a new power grid paradigm that can provide predictive information and recommendations to utilities, their suppliers, and their customers on how best to manage power delivery and consumption. The SG can greatly reduce air pollution in our surroundings through renewable power sources such as wind energy, solar plants, and large hydro stations; it also reduces electricity blackouts and surges. A communication network is the foundation of the modern SG, and implementing an improved communication solution will help address the problems of the existing grid. Hence, this study proposed and implemented an improved CRSN model to evade the inherent problems of the communication network in the SG: energy inefficiency, interference, spectrum inefficiency, poor quality of service (QoS), latency, and throughput. The predominant existing approach to the communication needs of the SG is the use of wireless sensor networks (WSNs). However, WSNs have low battery power, low computational capability, low bandwidth support, and high latency or delay due to multi-hop transmission in existing WSN topologies. Consequently, solving these problems by addressing energy efficiency, bandwidth or throughput, and latency has not been fully realised due to the limitations of WSNs and the existing network topology. The existing approach has therefore not fully addressed the communication needs of the SG.
The SG can be fully realised by integrating communication network technologies and infrastructure into the power grid. A CRSN is considered a feasible solution for enhancing various aspects of the electric power grid, such as real-time communication with end and remote devices for efficient monitoring, and for realising the maximum benefits of a smart grid system. CRSN in the SG aims to address the spectrum inefficiency and interference problems that WSNs could not. However, CRSNs face numerous challenges due to the harsh wireless environment of a smart grid system; as a result, latency, throughput, and reliability become critical issues. To overcome these challenges, many approaches can be adopted, ranging from the integration of CRSNs into SGs, proper implementation design models for the SG, reliable communication access devices, and key immunity requirements for the communication infrastructure, up to communication network protocol optimisation. To this end, this study used the National Institute of Standards and Technology (NIST) framework for SG interoperability in the design of a unified communication network architecture, including an implementation model for guaranteed QoS of smart grid applications. This involves a virtualised network in the form of multi-homing, comprising low-power wide-area network (LPWAN) devices such as LTE Cat 1/LTE-M and TV white space band devices (TVBDs). Simulation and analysis show that the developed architecture outperforms legacy wireless systems in terms of latency, blocking probability, and throughput under harsh SG environmental conditions.
In addition, the problem of correlated fading across the multiple antenna channels of the sensor nodes in a CRSN-based SG has been addressed through the performance analysis of a moment generating function (MGF) based M-QAM error probability over Nakagami-q dual correlated fading channels with a maximal ratio combining (MRC) receiver, including a derivation and a novel algorithmic approach. The MATLAB simulation results are provided as a guide for sensor-node deployment to avoid the problem of channel correlation in CRSN-based SGs. SG applications require reliable and efficient communication with low latency in a timely manner, as well as an adequate topology of sensor-node deployment for guaranteed QoS. Another important requirement is an optimised protocol and algorithms for energy efficiency and cross-layer, spectrum-aware operation, enabling opportunistic spectrum access in the CRSN nodes. Consequently, an optimised cross-layer interaction of the physical and MAC layer protocols was developed using various novel algorithms and techniques. This includes a novel energy-efficient distributed heterogeneous clustered spectrum-aware (EDHC-SA) multichannel sensing signal model with a novel Equilateral Triangulation algorithm for guaranteed network connectivity in a CRSN-based SG. The simulation results confirm that the EDHC-SA CRSN model outperforms a conventional ZigBee WSN in terms of bit error rate (BER), end-to-end delay (latency), and energy consumption, validating the suitability of the developed model in the SG.
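The MGF-based analysis mentioned above follows a standard pattern (cf. Simon and Alouini). As a hedged illustration rather than the thesis's exact derivation, the average symbol error probability of square M-QAM with L-branch MRC over independent fading branches can be written as:

```latex
% Average SEP of square M-QAM with L-branch MRC (independent branches).
% For correlated branches, as in the Nakagami-q dual-correlated case
% analysed in the thesis, the product of per-branch MGFs is replaced by
% the joint MGF of the combined SNR.
\[
  \bar{P}_s
  = \frac{4a}{\pi}\int_{0}^{\pi/2}
      \prod_{l=1}^{L} M_{\gamma_l}\!\left(-\frac{g}{\sin^{2}\theta}\right)
      d\theta
  \;-\;
    \frac{4a^{2}}{\pi}\int_{0}^{\pi/4}
      \prod_{l=1}^{L} M_{\gamma_l}\!\left(-\frac{g}{\sin^{2}\theta}\right)
      d\theta,
  \qquad
  a = 1 - \frac{1}{\sqrt{M}}, \quad
  g = \frac{3}{2(M-1)},
\]
% where M_{gamma_l}(s) = E[exp(s * gamma_l)] is the MGF of the
% instantaneous SNR on branch l.
```

The appeal of the MGF approach is that the fading statistics enter only through finite-range integrals of the MGF, which makes closed-form or easily computable expressions possible for many fading models, including correlated Nakagami channels.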

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the final publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, modelling and simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering; as their level of abstraction rises to give a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. High Performance Computing, on the other hand, typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.