
    Workload patterns for quality-driven dynamic cloud service configuration and auto-scaling

    Cloud service providers negotiate SLAs for the customer services they offer based on the reliability of performance and availability of their lower-level platform infrastructure. While availability management is relatively mature, performance management is less reliable. To support an iterative approach that covers both the initial static infrastructure configuration and dynamic reconfiguration and auto-scaling, an accurate and efficient solution is required. We propose a prediction-based technique that combines a pattern matching approach with a traditional collaborative filtering solution to meet the accuracy and efficiency requirements. Service workload patterns abstract common infrastructure workloads from monitoring logs and act as part of a first-stage, high-performance configuration mechanism before more complex traditional methods are considered. This enhances current reactive rule-based scalability approaches and basic prediction techniques based on, for example, exponential smoothing.
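    As a rough illustration of the two-stage idea above, the sketch below tries a cheap pattern match against a library of service workload patterns first and falls back to a (stubbed) collaborative-filtering predictor only on a miss. All names, data shapes, and the distance threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical library of service workload patterns (normalised load
# signatures) mapped to known-good infrastructure configurations.
PATTERN_LIBRARY = {
    "diurnal-web": (np.array([0.2, 0.3, 0.7, 0.9, 0.8, 0.4]), {"vms": 4}),
    "batch-night": (np.array([0.9, 0.8, 0.2, 0.1, 0.1, 0.7]), {"vms": 2}),
}

def match_pattern(signature, threshold=0.15):
    """Return the configuration of the closest stored pattern, or None."""
    best_name, best_dist = None, float("inf")
    for name, (pattern, _cfg) in PATTERN_LIBRARY.items():
        dist = np.linalg.norm(signature - pattern) / len(pattern)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return PATTERN_LIBRARY[best_name][1] if best_dist <= threshold else None

def collaborative_filtering(signature):
    """Stub for the heavier second-stage CF prediction over monitoring logs."""
    return {"vms": 3}

def predict_config(signature):
    cfg = match_pattern(signature)          # first stage: fast pattern hit
    return cfg if cfg is not None else collaborative_filtering(signature)

print(predict_config(np.array([0.25, 0.3, 0.65, 0.85, 0.8, 0.45])))
```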

    Service workload patterns for QoS-driven cloud resource management

    Cloud service providers negotiate SLAs for the customer services they offer based on the reliability of performance and availability of their lower-level platform infrastructure. While availability management is relatively mature, performance management is less reliable. To support a continuous approach that covers both the initial static infrastructure configuration and dynamic reconfiguration and auto-scaling, an accurate and efficient solution is required. We propose a prediction technique that combines a workload pattern mining approach with a traditional collaborative filtering solution to meet the accuracy and efficiency requirements. Service workload patterns abstract common infrastructure workloads from monitoring logs and act as part of a first-stage, high-performance configuration mechanism before more complex traditional methods are considered. This enhances current reactive rule-based scalability approaches and basic prediction techniques with a hybrid prediction solution. Uncertainty and noise are additional challenges that emerge in multi-layered, often federated cloud architectures. We specifically add log smoothing combined with a fuzzy logic approach to make the prediction solution more robust in the context of these challenges.
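    The robustness additions described above can be illustrated in the same hedged spirit: the sketch below applies simple exponential smoothing to a noisy monitoring log and then maps the smoothed utilisation to fuzzy load classes via triangular membership functions. The smoothing factor and membership breakpoints are invented for illustration.

```python
def exp_smooth(series, alpha=0.3):
    """Simple exponential smoothing of a workload time series."""
    out, s = [], series[0]
    for x in series:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out

def triangular(x, a, b, c):
    """Triangular fuzzy membership with peak at b and feet at a and c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_load(x):
    """Degrees of membership of a utilisation value in three load classes."""
    return {
        "low": triangular(x, -0.1, 0.0, 0.5),
        "medium": triangular(x, 0.2, 0.5, 0.8),
        "high": triangular(x, 0.5, 1.0, 1.1),
    }

noisy = [0.42, 0.95, 0.40, 0.44, 0.47, 0.90, 0.49]  # spiky log samples
smoothed = exp_smooth(noisy)
print(fuzzy_load(smoothed[-1]))  # robust load class despite the spikes
```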

    Computation in Complex Networks

    Complex networks are one of the most challenging research foci across disciplines including physics, mathematics, biology, medicine, engineering, and computer science. Interest in complex networks continues to grow, due to their ability to model many everyday systems, such as technology networks, the Internet, and communication, chemical, neural, social, political, and financial networks. The Special Issue “Computation in Complex Networks” of Entropy offers a multidisciplinary view on how some complex systems behave, providing a collection of original and high-quality papers within the research fields of:
    • Community detection
    • Complex network modelling
    • Complex network analysis
    • Node classification
    • Information spreading and control
    • Network robustness
    • Social networks
    • Network medicine

    Self-adaptive mobile web service discovery framework for dynamic mobile environment

    The advancement of mobile technologies has undoubtedly turned the mobile web service (MWS) into a significant computing resource in the dynamic mobile environment (DME). Discovery is one of the critical stages in the MWS life cycle, identifying the most relevant MWS for a particular task according to the request's context needs. While traditional service discovery frameworks that assume a static world with a predetermined context are constrained in the DME, adaptive solutions show potential. Unfortunately, the effectiveness of these frameworks is plagued by three problems. First, coarse-grained MWS categorization fails to deal with the proliferation of functionally similar MWS. Second, context models constrained by insufficient expressiveness and inadequate extensibility make it difficult to describe the DME, the MWS, and the user's MWS needs. Third, matchmaking requires manual adjustment and disregards the context information that triggers self-adaptation, leading to ineffective and inaccurate discovery of relevant MWS. Therefore, to address these challenges, a self-adaptive MWS discovery framework for the DME is proposed, comprising an enhanced MWS categorization approach, an extensible meta-context ontology model, and a self-adaptive MWS matchmaker. In this research, MWS categorization is achieved by extracting goals and tags from the functional description of each MWS and then subsuming k-means in the modified negative selection algorithm (M-NSA) to create categories of similar MWS. The meta-context ontology is designed using the lightweight unified process for ontology building (UPON-Lite) in collaboration with feature-oriented domain analysis (FODA). Self-adaptive MWS matchmaking is achieved by enabling the matchmaker to learn MWS relevance using the M-NSA and retrieve the most relevant MWS based on the current context of the discovery. The MWS categorization approach was evaluated, and its impact on the effectiveness of the framework assessed. The meta-context ontology was evaluated using case studies, and its impact on service relevance learning assessed. The proposed framework was evaluated using a case study and the ProgrammableWeb dataset; it exhibits significant improvements in binary relevance, graded relevance, and statistical significance, with a highest average precision of 0.9167. This study demonstrates that the proposed framework is accurate and effective for service-based application designers and other MWS clients.
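    To suggest the flavour of the categorization step only, the sketch below groups functionally similar service descriptions by clustering their goal/tag text with plain k-means. The thesis embeds k-means inside a modified negative selection algorithm (M-NSA), which is not reproduced here, and the service descriptions are invented.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented functional descriptions standing in for MWS goal/tag text.
services = [
    "get current weather forecast by city",
    "hourly weather and temperature lookup",
    "send sms text message to phone number",
    "bulk sms messaging gateway",
]

# Vectorise the descriptions and cluster them into candidate categories.
vectors = TfidfVectorizer().fit_transform(services)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for text, label in zip(services, labels):
    print(label, text)   # functionally similar services share a category
```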

    End-to-End Trust Fulfillment of Big Data Workflow Provisioning over Competing Clouds

    Cloud Computing has emerged as a promising and powerful paradigm for delivering data-intensive, high-performance computation, applications, and services over the Internet. Cloud Computing has enabled the implementation and success of Big Data, a relatively recent phenomenon consisting of the generation and analysis of abundant data from various sources. Accordingly, to satisfy the growing demands of Big Data storage, processing, and analytics, a large market has emerged for Cloud Service Providers, offering a myriad of resources, platforms, and infrastructures. The proliferation of these services often makes it difficult for consumers to select the most suitable and trustworthy provider to fulfill the requirements of building complex workflows and applications in a relatively short time. In this thesis, we first propose a quality specification model to support dual pre- and post-cloud workflow provisioning, consisting of service provider selection and workflow quality enforcement and adaptation. This model captures key quality properties at different stages of the Big Data value chain, enabling standardized quality specification, monitoring, and adaptation. Subsequently, we propose a two-dimensional trust-enabled framework to facilitate end-to-end Quality of Service (QoS) enforcement that: 1) automates cloud service provider selection for Big Data workflow processing, and 2) maintains the required QoS levels of Big Data workflows during runtime through dynamic orchestration using multi-model architecture-driven workflow monitoring, prediction, and adaptation. The trust-based automatic service provider selection scheme we propose is comprehensive and adaptive, as it relies on a dynamic trust model to evaluate the QoS of a cloud provider before taking any selection decisions. It is a multi-dimensional trust model for Big Data workflows over competing clouds that assesses the trustworthiness of cloud providers on three trust levels: (1) the presence of the most up-to-date, verified cloud resource capabilities, (2) reputational evidence measured by neighboring users, and (3) a recorded personal history of experiences with the cloud provider. The trust-based workflow orchestration scheme we propose aims to avoid performance degradation or cloud service interruption. Our orchestration approach is based not only on automatic adaptation and reconfiguration supported by monitoring, but also on predicting cloud resource shortages, thus preventing performance degradation. We formalize the cloud resource orchestration process using a state machine that efficiently captures the dynamic properties of the cloud execution environment, and we use a model checker to validate our monitoring model in terms of reachability, liveness, and safety properties. We evaluate both the automated service provider selection scheme and the cloud workflow orchestration, monitoring, and adaptation schemes on a workflow-enabled Big Data application, using a set of carefully chosen scenarios. The results demonstrate that our service selection outperforms other selection strategies and ensures trustworthy service provider selection. The results of evaluating automated workflow orchestration further show that our model is self-adapting and self-configuring, reacting efficiently to changes while enforcing workflow QoS.
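    As a hedged illustration of the three trust levels listed above, the sketch below combines verified capabilities, neighbor reputation, and personal history into a single provider score with a weighted sum. The weights and provider values are invented; the thesis's trust model is richer than this.

```python
def trust_score(capabilities, reputation, history, weights=(0.4, 0.3, 0.3)):
    """Combine three trust levels (each normalised to [0, 1]) into one value.

    capabilities -- presence of up-to-date verified cloud resource capabilities
    reputation   -- reputational evidence measured by neighboring users
    history      -- recorded personal history of experiences with the provider
    """
    w_cap, w_rep, w_his = weights
    return w_cap * capabilities + w_rep * reputation + w_his * history

# Invented provider assessments; select the most trustworthy one.
providers = {
    "cloud-a": trust_score(capabilities=0.9, reputation=0.7, history=0.8),
    "cloud-b": trust_score(capabilities=0.6, reputation=0.9, history=0.5),
}
best = max(providers, key=providers.get)
print(best, round(providers[best], 3))
```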

    A Probabilistic Machine Learning Framework for Cloud Resource Selection on the Cloud

    The execution of scientific applications on the Cloud comes with great flexibility, scalability, cost-effectiveness, and substantial computing power. Market-leading Cloud service providers such as Amazon Web Services (AWS), Azure, and Google Cloud Platform (GCP) offer various general-purpose, memory-intensive, and compute-intensive Cloud instances for the execution of scientific applications. The scientific community, especially small research institutions and undergraduate universities, faces many hurdles while conducting high-performance computing research in the absence of large dedicated clusters. The Cloud provides a lucrative alternative to dedicated clusters; however, the wide range of Cloud computing choices makes instance selection difficult for end-users. This thesis aims to simplify Cloud instance selection by proposing a probabilistic machine learning framework that allows users to select a suitable Cloud instance for their scientific applications. This research builds on the previously proposed A2Cloud-RF framework, which recommends high-performing Cloud instances by profiling the application and the selected Cloud instances. The framework produces a set of objective scores called the A2Cloud scores, which denote the compatibility level between the application and the selected Cloud instances. When used alone, the A2Cloud scores become increasingly unwieldy as the number of tested Cloud instances grows. Additionally, the framework only examines raw application performance and does not consider execution cost to guide resource selection. To improve the usability of the framework and assist with economical instance selection, this research adds two Naïve Bayes (NB) classifiers that consider both the application's performance and execution cost: 1) NB with a Random Forest Classifier (RFC) and 2) a standalone NB module. NB with the RFC augments the A2Cloud-RF framework's final instance ratings with the execution cost metric; in the training phase, the classifier builds the frequency and probability tables and recommends a Cloud instance based on the highest posterior probability for the selected application. The standalone NB classifier uses the generated A2Cloud score (an intermediate result from the A2Cloud-RF framework) and the execution cost metric to construct frequency and probability (prior and likelihood) tables. To recommend a Cloud instance for a test application, the classifier computes the posterior probability for each Cloud instance and recommends the one with the highest posterior. This study executes eight real-world applications on 20 Cloud instances from AWS, Azure, GCP, and Linode. We train the NB classifiers using 80% of this dataset and employ the remaining 20% for testing. The testing yields more than 90% recommendation accuracy for the chosen applications and Cloud instances. Because the dataset is imbalanced and the classification is multi-class, we report the confusion matrix (true positives, false positives, true negatives, and false negatives) and F1 scores, which exceed 0.9, to describe model performance. The final goal of this research is to make Cloud computing an accessible resource for conducting high-performance scientific executions by enabling users to select an effective Cloud instance across multiple providers.
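    The standalone NB step can be sketched as follows: a Gaussian Naïve Bayes classifier is trained on a performance score and an execution-cost feature and recommends the instance class with the highest posterior. The training rows, labels, and the use of scikit-learn's GaussianNB are assumptions for illustration; the thesis builds its own frequency and probability tables over A2Cloud scores.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Columns: [performance_score, hourly_cost]; labels: best instance family.
# All values are fabricated for illustration.
X = np.array([[0.90, 0.30], [0.85, 0.28], [0.40, 0.10],
              [0.45, 0.12], [0.70, 0.55], [0.75, 0.60]])
y = ["compute", "compute", "general", "general", "memory", "memory"]

model = GaussianNB().fit(X, y)

# Posterior probability per instance family for a new application profile.
probs = model.predict_proba([[0.8, 0.25]])[0]
for cls, p in zip(model.classes_, probs):
    print(cls, round(p, 3))

# Recommend the family with the highest posterior.
print("recommend:", model.predict([[0.8, 0.25]])[0])
```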

    Design Models for Trusted Communications in Vehicle-to-Everything (V2X) Networks

    The intelligent transportation system is one of the main systems developed to achieve safe traffic and efficient transportation. It enables road entities to establish connections with other road entities and infrastructure units using Vehicle-to-Everything (V2X) communications. To improve the driving experience, various applications are implemented to allow road entities to share information with each other. Based on the received information, a road entity can make its own decisions regarding road safety and guide the driver. However, when packets are dropped for any reason, the resulting lack of information can lead to inaccurate decisions. Therefore, packets should be sent over trusted communications, which involve both a trusted link and a trusted road entity. Before sending packets, a road entity should assess link quality and choose a trusted link to ensure packet delivery. Evaluating neighboring node behavior is also essential, because misbehaving nodes may drop received packets. Consequently, two main models are designed to achieve trusted V2X communications. First, a multi-metric Quality of Service (QoS)-balancing relay selection algorithm is proposed to elect the trusted link; the Analytic Hierarchy Process (AHP) is applied to evaluate each link based on three metrics: channel capacity, link stability, and end-to-end delay. Second, a recommendation-based trust model is designed for V2X communication to exclude misbehaving nodes; based on a comparison of trust-based methods, a weighted sum is chosen for the proposed model. The proposed methods ensure trusted communications by reducing the Packet Dropping Rate (PDR) and increasing the end-to-end packet delivery ratio. In addition, the proposed trust model achieves a very low False Negative Rate (FNR) in comparison with an existing model.
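    A minimal sketch of the AHP weighting step is given below: a pairwise comparison matrix over the three link metrics is reduced to a weight vector via its principal eigenvector, and candidate relays are ranked by the weighted score. The pairwise judgements and link values are invented, not taken from the thesis.

```python
import numpy as np

# Pairwise comparison matrix on Saaty's 1-9 scale: entry [i][j] says how
# much more important metric i is than metric j (order: channel capacity,
# link stability, end-to-end delay). Judgements are illustrative only.
A = np.array([[1.0, 2.0, 3.0],
              [1/2, 1.0, 2.0],
              [1/3, 1/2, 1.0]])

# AHP weights: principal eigenvector of the comparison matrix, normalised.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

# Candidate relay links with metric values scaled to [0, 1]; delay is
# already inverted so that higher is better for every metric.
links = {
    "relay-1": [0.8, 0.6, 0.9],
    "relay-2": [0.7, 0.9, 0.6],
}
scores = {name: float(np.dot(w, vals)) for name, vals in links.items()}
print(max(scores, key=scores.get), scores)   # elect the trusted link
```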

    An Approach of SLA Violation Prediction and QoS Optimization using Regression Machine Learning Techniques

    Along with the acceptance of Service-Oriented Architecture (SOA) as a promising style of software design, the role that Quality of Service (QoS) plays in the success of SOA-based software systems has become more significant than ever before. When QoS is documented as a Service-Level Agreement (SLA), it specifies the commitment between a service provider and a client, as well as monetary penalties in case of any SLA violations. To avoid and reduce the situations that may cause SLA violations, service providers need tools to intuitively analyze whether their service design provokes SLA violations and to automatically guide them in preventing such violations. Due to the dynamic nature of service interaction during the operation of SOA-based software systems, avoiding SLA violations requires prompt detection of potential violations before prevention can take place in real time. To meet this low-latency requirement in practice, this thesis develops an approach that uses Machine Learning techniques not only to predict SLA violations but also to prevent them by means of optimization. This research discusses the algorithm and framework, along with experimental results, which help examine their usefulness for service providers working on the construction and refinement of services.
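    As a hedged sketch of the prediction side, the example below fits a regression model on monitoring features and flags a potential SLA violation whenever the predicted response time exceeds the agreed bound. The features, threshold, and choice of linear regression are illustrative assumptions, not the thesis's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: [request_rate, payload_kb]; target: response time in ms.
# Training data is fabricated for illustration.
X = np.array([[10, 5], [50, 8], [90, 6], [120, 10], [160, 12]])
y = np.array([80, 140, 220, 310, 420])

model = LinearRegression().fit(X, y)

SLA_MS = 300                              # agreed response-time bound
upcoming = np.array([[140, 11]])          # forecast workload sample
pred = model.predict(upcoming)[0]

# Flag a potential violation early so prevention (optimization) can run.
if pred > SLA_MS:
    print(f"predicted {pred:.0f} ms > {SLA_MS} ms: trigger optimization")
else:
    print(f"predicted {pred:.0f} ms within SLA")
```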