415 research outputs found

    A CBR Approach to Allocate Computational Resources Within a Cloud Platform

    Get PDF
    The Cloud Computing paradigm continues to grow very quickly. The underlying computational infrastructure has to cope with this increase in demand and with the high number of end-users. To do so, platforms usually use mathematical models to allocate computational resources among the services offered to end-users. Although these mathematical models are valid and widely used, they can be improved by means of intelligent techniques. Thus, this study proposes an innovative approach based on an agent-based system that integrates a case-based reasoning system. This system is able to dynamically allocate resources over a Cloud Computing platform.
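    The abstract does not give implementation details, but the core retrieve-and-reuse cycle of a case-based reasoning allocator can be illustrated with a minimal sketch; the class, its features and the adaptation rule below are assumptions for illustration, not taken from the paper.

```python
# Illustrative CBR retrieve/reuse cycle for cloud resource allocation.
# All names and the distance/adaptation rules are assumptions, not the
# paper's method.
from dataclasses import dataclass
import math

@dataclass
class Case:
    cpu_demand: float      # normalised CPU demand of a past request
    mem_demand: float      # normalised memory demand of a past request
    allocated_vms: int     # allocation that worked well for that case

def distance(c: Case, cpu: float, mem: float) -> float:
    """Euclidean distance between a stored case and a new request."""
    return math.hypot(c.cpu_demand - cpu, c.mem_demand - mem)

def allocate(case_base: list[Case], cpu: float, mem: float) -> int:
    """Retrieve the most similar past case and reuse its allocation,
    scaled by the ratio of the new demand to the retrieved one."""
    best = min(case_base, key=lambda c: distance(c, cpu, mem))
    scale = (cpu + mem) / max(best.cpu_demand + best.mem_demand, 1e-9)
    return max(1, round(best.allocated_vms * scale))

cases = [Case(0.2, 0.3, 2), Case(0.8, 0.7, 8), Case(0.5, 0.5, 4)]
print(allocate(cases, cpu=0.6, mem=0.6))   # -> 5 (reuse of nearest case)
```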

    Performance Analysis of OpenAirInterface System Emulation

    Get PDF
    With the rapid growth of mobile networks, the radio access network becomes more and more costly to deploy, operate, maintain and upgrade. The most effective answer to this problem lies in the centralization and virtualization of the eNodeBs. This solution is known as Cloud RAN and is one of the key topics in the development of fifth-generation networks. Within this context, OpenAirInterface is a promising emulation tool that can be used for prototyping innovative scheduling algorithms, making the most of the new architecture. In this work we first describe the emulation environment of OpenAirInterface and its scheduling framework, and we use it to implement two MAC schedulers. Moreover, we validate the above schedulers and perform a thorough profiling of OpenAirInterface in terms of both memory occupancy and execution time. Our results show that OpenAirInterface can be effectively used for prototyping scheduling algorithms in emulated LTE networks.
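    The abstract does not name the two MAC schedulers that were implemented; as a hedged illustration of the kind of per-TTI scheduler one might prototype in such a framework, a simple round-robin resource-block allocator is sketched below (the interface is invented for the example and is not the OpenAirInterface scheduling API).

```python
# Minimal round-robin MAC scheduler sketch: each TTI, physical resource
# blocks (PRBs) are handed out cyclically to UEs that have data queued.
from collections import deque

def round_robin_tti(ue_queues: dict[int, int], n_prb: int,
                    rr_order: deque) -> dict[int, int]:
    """Allocate n_prb resource blocks for one TTI.
    ue_queues maps UE id -> backlog in PRB-sized units."""
    alloc = {ue: 0 for ue in ue_queues}
    backlogged = deque(ue for ue in rr_order if ue_queues[ue] > 0)
    while n_prb > 0 and backlogged:
        ue = backlogged.popleft()
        alloc[ue] += 1
        ue_queues[ue] -= 1
        n_prb -= 1
        if ue_queues[ue] > 0:
            backlogged.append(ue)
    rr_order.rotate(-1)          # start with a different UE next TTI
    return alloc

queues = {1: 3, 2: 1, 3: 5}
print(round_robin_tti(queues, n_prb=6, rr_order=deque([1, 2, 3])))
# -> {1: 3, 2: 1, 3: 2}; UE 3 still has 3 units backlogged
```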

    Flexible distributed computing with volunteered resources

    Get PDF
    PhD thesis. Nowadays, computational grids have evolved to a stage where they can comprise many volunteered resources owned by different individual users and/or institutions, such as desktop grids and volunteer computing grids. This brings benefits for large-scale computing, as more resources are available to exploit. On the other hand, the inherent characteristics of volunteered resources bring some challenges in exploiting them efficiently. For example, some resources may not be able to execute certain jobs, as the computing resources can be heterogeneous. Furthermore, the resources can be volatile, as resource owners usually have the right to decide when and how to donate the idle Central Processing Unit (CPU) cycles of their computers. Therefore, in order to utilise volunteered resources efficiently, this research investigated solutions from different aspects. Firstly, it proposes a new computational Grid architecture based on Java and Java application migration technologies to provide fundamental support for coping with these challenges. The proposed architecture supports heterogeneous resources, ensures that local activities are not affected by Grid jobs, and enables resources to carry out live and automatic Java application migration. Secondly, this research proposes job-scheduling and migration algorithms based on resource availability prediction and/or artificial intelligence techniques. To examine the proposed algorithms, this work includes a series of experiments in both synthetic and practical scenarios and compares the performance of the proposed algorithms with existing ones across a variety of scenarios. According to the critical assessment, each algorithm has its own distinct advantages and performs well when certain conditions are met. In addition, this research analyses the characteristics of resources in terms of the availability patterns of practical volunteer-based grids. The analysis shows that each environment has its own characteristics and that each volunteered resource's availability tends to exhibit weak correlations across different days and times of day. British Telco
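    The thesis's scheduling and migration algorithms are not spelled out in the abstract; a toy sketch of availability-prediction-based job placement, under the assumption that each volunteered resource has an hourly availability trace, might look like this (the prediction rule is illustrative, not the one proposed in the thesis).

```python
# Toy availability-aware scheduler: pick the volunteered resource whose
# historical trace suggests the best chance of staying available for the
# job's expected runtime. The prediction rule (empirical frequency of
# availability windows at the same hour-of-day) is an assumption.

def predicted_availability(trace: list[int], hour: int, runtime_h: int) -> float:
    """trace[h] == 1 if the resource was idle in hour h of a one-week
    history (len(trace) == 168). Returns the fraction of past days on
    which it stayed available for runtime_h hours starting at `hour`."""
    hits = total = 0
    for day_start in range(0, len(trace) - runtime_h + 1, 24):
        window = trace[day_start + hour : day_start + hour + runtime_h]
        if len(window) == runtime_h:
            total += 1
            hits += all(window)
    return hits / total if total else 0.0

def schedule(job_runtime_h: int, hour: int, traces: dict[str, list[int]]) -> str:
    """Place the job on the resource with the highest predicted availability."""
    return max(traces, key=lambda r: predicted_availability(traces[r], hour, job_runtime_h))

traces = {"hostA": [1] * 168, "hostB": ([1] * 8 + [0] * 16) * 7}
print(schedule(job_runtime_h=4, hour=6, traces=traces))
# -> "hostA" (hostB's owner reclaims it after hour 8 each day)
```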

    Formulating and managing viable SLAs in cloud computing from a small to medium service provider's viewpoint: A state-of-the-art review

    Full text link
    © 2017 Elsevier Ltd. In today's competitive world, service providers need to be customer-focused and proactive in their marketing strategies to create consumer awareness of their services. Cloud computing provides an open and ubiquitous computing feature in which a large number of consumers can interact with providers and request services. In such an environment, there is a need for intelligent and efficient methods that increase confidence in the successful achievement of business requirements. One such method is the Service Level Agreement (SLA), which comprises service objectives, business terms, service relations, obligations and the possible actions to be taken in the case of an SLA violation. Most of the emphasis in the literature has, until now, been on the formation of meaningful SLAs by service consumers, through which their requirements will be met. However, in an increasingly competitive market based on the cloud environment, service providers too need a framework that will form a viable SLA, predict possible SLA violations before they occur, and generate early warning alarms that flag a potential lack of resources. This is because when a provider and a consumer commit to an SLA, the service provider is bound to reserve the agreed amount of resources for the entire period of that agreement, whether the consumer uses them or not. It is therefore very important for cloud providers to accurately predict the likely resource usage for a particular consumer and to formulate an appropriate SLA before finalizing an agreement. This problem is more important for a small to medium cloud service provider, which has limited resources that must be utilized in the best possible way to generate maximum revenue. A viable SLA in cloud computing is one that intelligently helps the service provider to determine the amount of resources to offer to a requesting consumer, and there are a number of studies on SLA management in the literature. The aim of this paper is two-fold. First, it presents a comprehensive overview of existing state-of-the-art SLA management approaches in cloud computing, and their features and shortcomings in creating viable SLAs from the service provider's viewpoint. From a thorough analysis, we observe that the lack of a viable SLA management framework renders a service provider unable to make wise decisions in forming an SLA, which could lead to service violations and violation penalties. To fill this gap, our second contribution is the proposal of the Optimized Personalized Viable SLA (OPV-SLA) framework, which assists a service provider to form a viable SLA and to start managing SLA violations before an SLA is formed and executed. The framework also assists a service provider to make an optimal decision in service formation and to allocate the appropriate amount of marginal resources. We demonstrate the applicability of our framework in forming viable SLAs through experiments. From the evaluative results, we observe that our framework helps a service provider to form viable SLAs and later to manage them so as to effectively minimize possible service violations and penalties.
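    The OPV-SLA framework itself is not detailed in the abstract; the two provider-side decisions it describes (predicting a consumer's likely resource usage before committing, and raising early warnings about possible violations) can be illustrated with the following minimal sketch, in which the moving-average forecast, the 10% safety margin and the 90% alarm threshold are all assumptions.

```python
# Simplified illustration only: forecast a consumer's likely resource
# usage before committing to an SLA, and raise an early-warning alarm
# when forecast usage approaches the offered capacity. Not the OPV-SLA
# framework itself.

def forecast_usage(history: list[float], window: int = 3) -> float:
    """Moving-average forecast of next-period resource usage (e.g. vCPUs)."""
    recent = history[-window:] if len(history) >= window else history
    return sum(recent) / len(recent)

def sla_decision(history: list[float], requested: float, margin: float = 0.1):
    """Offer an allocation based on forecast demand plus a safety margin,
    and flag a potential violation risk if demand nears the offer."""
    expected = forecast_usage(history)
    offer = min(requested, expected * (1 + margin))
    warn = expected > 0.9 * offer            # early-warning alarm
    return offer, warn

usage = [4.0, 5.0, 6.0, 7.0]                 # past vCPU usage of a consumer
print(sla_decision(usage, requested=10.0))
# -> offer of roughly 6.6 vCPUs instead of the requested 10, warning raised
```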

    Artificial Intelligence in the development of modern infrastructures

    Get PDF
    Artificial intelligence (AI) makes it possible for machines to learn from experience, adjust to new inputs and perform tasks like human beings. Most of the examples of AI you hear about today, from computers playing chess to self-driving cars, rely heavily on deep learning and natural language processing.

    Role of Interference and Computational Complexity in Modern Wireless Networks: Analysis, Optimization, and Design

    Get PDF
    Owing to the popularity of smartphones, the recent widespread adoption of wireless broadband has resulted in a tremendous growth in the volume of mobile data traffic, and this growth is projected to continue unabated. In order to meet the needs of future systems, several novel technologies have been proposed, including cooperative communications, cloud radio access networks (RANs) and very densely deployed small-cell networks. For these novel networks, both interference and the limited availability of computational resources play a very important role. Therefore, the accurate modeling and analysis of interference and computation is essential to the understanding of these networks, and an enabler for more efficient design.
    This dissertation focuses on four aspects of modern wireless networks: (1) modeling and analysis of interference in single-hop wireless networks, (2) characterizing the tradeoffs between the communication performance of wireless transmission and the computational load on the systems used to process such transmissions, (3) the optimization of wireless multiple-access networks when using cost functions that are based on the analytical findings in this dissertation, and (4) the analysis and optimization of multi-hop networks, which may optionally employ forms of cooperative communication.
    The study of interference in single-hop wireless networks proceeds by assuming that the random locations of the interferers are drawn from a point process and possibly constrained to a finite area. Both the information-bearing and interfering signals propagate over channels that are subject to path loss, shadowing, and fading. A flexible model for fading, based on the Nakagami distribution, is used, though specific examples are provided for Rayleigh fading. The analysis is broken down into multiple steps, involving subsequent averaging of the performance metrics over the fading, the shadowing, and the locations of the interferers, with the aim of distinguishing the effects of these mechanisms, which operate over different time scales. The analysis is extended to accommodate diversity reception, which is important for the understanding of cooperative systems that combine transmissions originating from different locations. Furthermore, the role of spatial correlation is considered, which provides insight into how the performance in one location is related to the performance in another.
    While it is now generally understood how to communicate close to the fundamental limits implied by information theory, operating close to these bounds is costly in terms of the computational complexity required to receive the signal. This dissertation provides a framework for understanding the tradeoff between communication performance and the imposed complexity, based on how close a system operates to the performance bounds, and it allows the data processing resources required by a network to be accurately estimated under a given performance constraint. The framework is applied to Cloud-RAN, a new cellular architecture that moves the bulk of the signal processing away from the base stations (BSs) and towards a centralized computing cloud. The analysis developed in this part of the dissertation helps to illuminate the benefits of pooling computing assets when decoding multiple uplink signals in the cloud. Building upon these results, new approaches for wireless resource allocation are proposed which, unlike previous approaches, are aware of the computing limitations of the network.
    By leveraging the accurate expressions that characterize performance in the presence of interference and fading, a methodology is described for optimizing wireless multiple-access networks. The focus is on frequency-hopping (FH) systems, which are already widely used in military systems and are becoming more common in commercial systems. The optimization determines the best combination of modulation parameters (such as the modulation index for continuous-phase frequency-shift keying), number of hopping channels, and code rate. In addition, it accounts for adjacent-channel interference (ACI) and determines how much of the signal spectrum should lie within the operating band of each channel and how much can be allowed to splatter into adjacent channels.
    The last part of this dissertation contemplates networks that involve multi-hop communications. Building on the analytical framework developed in earlier parts of this dissertation, the performance of such networks is analyzed in the presence of interference and fading, and a novel paradigm for the rapid performance assessment of routing protocols is introduced. Such networks may involve cooperative communications, and the particular cooperative protocol studied here allows the same packet to be transmitted simultaneously by multiple transmitters and diversity-combined at the receiver. The dynamics of how the cooperative protocol evolves over time are described through an absorbing Markov chain, and the analysis efficiently captures the interference that arises as packets are periodically injected into the network by a common source, the temporal correlation among these packets, and their interdependence.
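    As a hedged illustration of the kind of closed-form expression this analysis builds on, consider the Rayleigh-fading special case mentioned above: conditioning on the interferer locations drawn from the point process, the complementary outage probability factors into a noise term and a per-interferer product (the notation below is generic and not necessarily the dissertation's).

```latex
% Illustrative special case: Rayleigh fading (unit-mean exponential power
% gains g_i), noise power N_0, SINR threshold \beta. Notation is generic.
\[
  \mathrm{SINR} \;=\; \frac{g_0\,\Omega_0}{N_0 + \sum_{i\in\Phi} g_i\,\Omega_i},
  \qquad
  \Pr\!\left[\mathrm{SINR}\ge\beta \,\middle|\, \Phi\right]
  \;=\; e^{-\beta N_0/\Omega_0}
        \prod_{i\in\Phi}\frac{1}{1+\beta\,\Omega_i/\Omega_0},
\]
where $\Omega_i$ collects the path loss and shadowing of interferer $i$; the
unconditional outage probability follows by averaging this expression over
the shadowing and over the point process $\Phi$.
```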

    Software Defined Networking: Applicability and Service Possibilities

    Get PDF

    Quality of service optimization in IoT driven intelligent transportation system

    Get PDF
    High mobility in intelligent transportation systems (ITS), especially in V2V communication networks, enables increased coverage and quick assistance to users and neighboring networks, but it also degrades the performance of the entire system due to fluctuations in the wireless channel. Obtaining better QoS during multimedia transmission in V2V over future-generation networks (i.e., edge computing platforms) is very challenging due to the high mobility of vehicles and the heterogeneity of future IoT-based edge computing networks. In this context, this article contributes in three distinct ways: it develops a QoS-aware, green, sustainable, reliable, and available (QGSRA) algorithm to support multimedia transmission in V2V over future IoT-driven edge computing networks; it implements a novel QoS optimization strategy in V2V during multimedia transmission over IoT-based edge computing platforms; and it proposes QoS metrics such as greenness (i.e., energy efficiency), sustainability (i.e., less battery charge consumption), reliability (i.e., lower packet loss ratio), and availability (i.e., more coverage) to analyze the performance of V2V networks. Finally, the proposed QGSRA algorithm has been validated on extensive real-time vehicle datasets to demonstrate how it outperforms conventional techniques, making it a potential candidate for multimedia transmission in V2V over self-adaptive edge computing platforms.
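    The article's exact definitions of the four QoS metrics are not given in the abstract; a minimal sketch of plausible per-trial computations of greenness, sustainability, reliability and availability (the formula choices are assumptions) is shown below.

```python
# Minimal computation of the four QoS metrics named above. The exact
# definitions used by QGSRA are not stated in the abstract; these are
# illustrative readings of the metric names.

def qos_metrics(bits_delivered: float, energy_j: float,
                battery_drain_pct: float, pkts_sent: int, pkts_lost: int,
                time_covered_s: float, total_time_s: float) -> dict:
    return {
        "greenness_bits_per_joule": bits_delivered / energy_j,
        "sustainability_pct_battery_left": 100.0 - battery_drain_pct,
        "reliability_packet_delivery_ratio": 1.0 - pkts_lost / pkts_sent,
        "availability_coverage_ratio": time_covered_s / total_time_s,
    }

print(qos_metrics(bits_delivered=8e6, energy_j=2.0, battery_drain_pct=4.5,
                  pkts_sent=1000, pkts_lost=12,
                  time_covered_s=570, total_time_s=600))
# greenness = 4e6 bit/J, reliability = 0.988, availability = 0.95
```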
    • …