197,130 research outputs found
Execution Time Prediction for a Web Service Instance
The availability of services on the Internet has provided a unique opportunity for both customers and providers to conduct e-business. This new business paradigm can succeed only if services are selected to the customer's satisfaction in terms of both service delivery time and service quality. Instead of leaving it to the service provider to declare, we propose a strategy for forecasting the execution time of the web service being called. The paper presents the model details and proposes a framework for implementing it. We also demonstrate its usability in scenarios such as checkpointing web services.
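The abstract does not reproduce the paper's forecasting model, but the idea of predicting per-service execution time can be illustrated with a simple sketch. The class below is an assumption, not the authors' method: it smooths observed response times per service endpoint with an exponentially weighted moving average.

```python
# Toy per-service execution-time forecaster (illustrative only, not the
# paper's model): an exponentially weighted moving average over the
# response times observed for each service endpoint.
class ExecutionTimePredictor:
    def __init__(self, alpha=0.3):
        self.alpha = alpha      # weight given to the most recent sample
        self.estimates = {}     # service name -> predicted seconds

    def observe(self, service, seconds):
        prev = self.estimates.get(service)
        if prev is None:
            self.estimates[service] = seconds
        else:
            self.estimates[service] = self.alpha * seconds + (1 - self.alpha) * prev

    def predict(self, service):
        """Forecast for the next call, or None if never observed."""
        return self.estimates.get(service)

p = ExecutionTimePredictor()
for t in [1.0, 1.2, 0.8]:
    p.observe("checkout", t)    # "checkout" is a hypothetical service
print(round(p.predict("checkout"), 3))
```

A client could compare such a forecast against the provider's declared delivery time before committing to a call, which is the selection use case the abstract describes.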
Indirect Copyright Infringement Liability for ISPs and The Economics of Contracts under Asymmetric Information
Under current copyright law, Internet Service Providers (ISPs) can be found liable for the traffic on the websites that they host. While the ISPs themselves are not undertaking acts that infringe copyright, indirect liability asserts that they either contribute to, or encourage in some way, infringing activities, and thus they are liable to claims of indirect involvement by the affected copyright holders. The present paper explores indirect liability in a standard principal-agent setting, where both moral hazard (the act of monitoring) and adverse selection (differential costs of monitoring over ISPs) are present. The model considers the kinds of contracts that could be signed between the copyright holders (acting through a collective) and the ISPs (acting individually). The self-selecting, incentive compatible equilibrium is found for the feasible scenarios that may present themselves.
Optimisation of server selection for maximising utility in Erlang-loss systems
This paper addresses the server selection problem in Erlang-loss systems (ELS). We propose a novel approach that incorporates probabilistic modelling to reflect the practical scenario in which user arrivals vary over time. The proposed framework comprises three stages: i) developing a new method for server selection based on the M/M/n/n queuing model with probabilistic arrivals; ii) combining the server allocation results with further work on utility-maximising server selection to optimise system performance; and iii) designing a heuristic approach to solve the resulting optimisation problem efficiently. Simulation results show that, using this framework, Internet Service Providers (ISPs) can significantly improve QoS and revenue through optimal server allocation in their data centre networks.
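The M/M/n/n model mentioned in stage i) has a well-known blocking probability given by the Erlang B formula. The sketch below is a minimal illustration of that standard result, not the paper's framework: it computes blocking via the numerically stable recurrence and finds the smallest server count meeting a QoS target.

```python
# Standard Erlang B blocking probability for an M/M/n/n loss system,
# via the recurrence B(0) = 1, B(k) = a*B(k-1) / (k + a*B(k-1)),
# where a is the offered load in erlangs.
def erlang_b(offered_load, n):
    b = 1.0
    for k in range(1, n + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

def servers_needed(offered_load, target_blocking):
    """Smallest n whose blocking probability meets the QoS target."""
    n = 1
    while erlang_b(offered_load, n) > target_blocking:
        n += 1
    return n

# With 10 erlangs of offered load and a 1% blocking target:
print(servers_needed(10.0, 0.01))
```

A utility-maximising allocation, as in stage ii), would weigh the revenue of admitted calls against the cost of the `n` servers this calculation produces.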
Machine Learning Based Classifier for Service Function Chains
Using service function chains, Internet Service Providers can customise which service functions process the network flows belonging to their customers. Each network flow is injected into a service chain according to its flow features. Since most malicious applications try to evade proper analysis by imitating valid, well-known applications, classification based on simple flow features may waste processing power by routing evasive flows through inappropriate service chains. In this paper, we explore an application-aware classification approach using machine learning. Using CatBoost, a model is trained and used for traffic classification. We provide statistical reports on how this approach compares with simple flow-feature-based approaches in malicious environments and on how feature selection affects classification correctness. Choosing the most suitable number of features at the right time can beat traditional approaches in classification quality and yields better results in the service function chaining environment.
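To make the flow-classification setting concrete, here is a deliberately tiny stand-in for the paper's CatBoost model: a nearest-centroid classifier mapping flow feature vectors to a service-chain label. The feature names, labels, and the classifier itself are illustrative assumptions, not the paper's method.

```python
# Toy nearest-centroid stand-in for a gradient-boosted flow classifier.
# Features here are (mean packet size in bytes, mean inter-arrival time
# in seconds); both the features and the labels are hypothetical.
def train_centroids(samples):
    """samples: list of (feature_tuple, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for feats, label in samples:
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, v in enumerate(feats):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(centroids, feats):
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, feats))
    return min(centroids, key=lambda lab: sq_dist(centroids[lab]))

flows = [((1500, 0.1), "video"), ((1400, 0.2), "video"),
         ((80, 5.0), "ssh"), ((100, 4.0), "ssh")]
model = train_centroids(flows)
print(classify(model, (1450, 0.15)))
```

The predicted label would then select the service chain the flow is injected into; the paper's point is that richer, application-aware features make this routing decision harder to evade.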
An adaptive trust based service quality monitoring mechanism for cloud computing
Cloud computing is the newest paradigm in distributed computing, delivering computing resources over the Internet as services. Owing to its attractiveness, the market is currently flooded with service providers, which makes it necessary for customers to identify the one that meets their requirements in terms of service quality. Existing monitoring of service quality in cloud computing has been limited to quantification. The continuous improvement and distribution of service quality scores, on the other hand, have been implemented in other distributed computing paradigms but not specifically for cloud computing. This research investigates existing methods and proposes mechanisms for quantifying and ranking the service quality of service providers. The solution proposed in this thesis consists of three mechanisms: a service quality modeling mechanism, an adaptive trust computing mechanism and a trust distribution mechanism for cloud computing. The Design Research Methodology (DRM) has been modified by adding phases, means and methods, and probable outcomes, and this modified DRM is used throughout the study. The mechanisms were developed and tested incrementally until the expected outcome was achieved. A comprehensive set of experiments was carried out in a simulated environment to validate their effectiveness. The evaluation compared their performance against the combined trust model and the QoS trust model for cloud computing, along with an adapted fuzzy-theory-based trust computing mechanism and a super-agent-based trust distribution mechanism developed for other distributed systems. The results show that the proposed mechanisms are faster and more stable than the existing solutions in reaching final trust scores on all three parameters tested. The results presented in this thesis are significant in making cloud computing acceptable to users by allowing them to verify the performance of service providers before making a selection.
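The thesis's adaptive trust computing mechanism is not specified in the abstract, but the general shape of such a mechanism can be sketched. The update rule below is a simplified assumption, not the author's algorithm: new QoS observations are blended into a running trust score, with a learning rate that adapts to how far the observation deviates from the current score.

```python
# Illustrative adaptive trust update (a simplified stand-in for the
# thesis's mechanism). trust and observation are both in [0, 1]; the
# blend rate grows when an observation deviates sharply from the
# current score, so the mechanism reacts faster to surprises.
def update_trust(trust, observation, base_rate=0.2):
    deviation = abs(observation - trust)
    rate = min(1.0, base_rate + deviation)
    return (1 - rate) * trust + rate * observation

score = 0.5                      # neutral prior for a new provider
for obs in [0.9, 0.9, 0.9]:      # three consistently good QoS reports
    score = update_trust(score, obs)
print(round(score, 3))
```

The "faster and more stable" property evaluated in the thesis corresponds to how quickly a rule like this converges to a final score and how little it oscillates under noisy observations.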
QoS Based Service Selection and Provisioning in Cloud Computing
Cloud computing has become a disruptive technology that has seen significant growth among consumers of all sizes. Consumers can now access seemingly unlimited computing resources over the Internet without making significant investments in computing infrastructure. Consequently, this trend has led to a rise in the number of cloud computing providers. Most of these providers offer various services to consumers, commonly classified into three main service provisioning models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). Cloud computing providers offer services to consumers using one or more of these models.
However, the large number of services offered by cloud providers introduces a new set of problems for consumers, who have to choose from a wide range of services and providers that meet their requirements. The problem is further complicated because many consumers do not have adequate knowledge of cloud services' concepts and terminology. To add to the complexity, there is no standard benchmark for cloud services, so cloud providers can package the same cloud services in different ways. Furthermore, there are no standard service level agreements defined for cloud service selection; each cloud provider has different service level agreements, which makes the consumer's selection process more cumbersome. This research aims to bridge this gap by proposing and developing new methods and techniques that take into account different cloud services and providers, as well as quality of service attributes, making it easier for consumers to rank and select cloud services tailored to their requirements.
This thesis makes several contributions to the current state of knowledge in cloud service selection and provisioning. It proposes a new model to systematically represent the quality of service (QoS) attributes of cloud services, covering both technical and non-technical aspects of cloud computing. The new model succinctly represents QoS attributes that cloud consumers can easily use and understand when selecting cloud services. The thesis also proposes a new framework for cloud service selection that improves and simplifies the selection process by taking into account the user's level of knowledge of cloud computing technologies. The major benefit is a simplified selection process for 'Beginner' cloud service consumers (who have little knowledge of cloud computing), who are presented with the main QoS attributes along with brief explanations; Intermediate/Expert consumers are given the opportunity to examine the QoS sub-parameters in more detail. Unlike existing approaches, the framework developed in this thesis also ensures the credibility of the service selection by utilizing information from three different sources: cloud service providers' websites, online monitoring tools and users' reviews of cloud services. Furthermore, the framework integrates the Service Level Agreement (SLA), an integral part of cloud services, as it is important for consumers to be able to view it as part of their decision-making process. The framework is validated through a prototype tool developed using Python, MongoDB and an Amazon AWS EC2 server. The tool is then evaluated on various real-life scenarios for ranking cloud service providers and by comparison against existing tools. The results show that the proposed tool outperforms existing tools on a set of criteria including operability, mode of data selection and number of supported cloud providers for ranking and selecting cloud services.
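Since the thesis's prototype was itself built in Python, a small Python sketch of multi-attribute QoS ranking fits here. The attribute names, weights, and the min-max weighted-sum scheme are illustrative assumptions, not the thesis's actual model; they only show how benefit attributes (higher is better) and cost attributes (lower is better) can be normalised and combined into a provider ranking.

```python
# Illustrative multi-attribute QoS ranking (not the thesis's model).
# Benefit attributes are min-max normalised so higher is better; cost
# attributes such as price are inverted before weighting.
def rank_providers(providers, weights, cost_attrs=("price",)):
    def norm(attr):
        vals = [p[attr] for p in providers.values()]
        lo, hi = min(vals), max(vals)
        span = (hi - lo) or 1.0
        if attr in cost_attrs:
            return {n: (hi - p[attr]) / span for n, p in providers.items()}
        return {n: (p[attr] - lo) / span for n, p in providers.items()}

    normed = {attr: norm(attr) for attr in weights}
    scores = {n: sum(w * normed[a][n] for a, w in weights.items())
              for n in providers}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical providers: availability in %, price in $ per hour.
providers = {
    "A": {"availability": 99.9, "price": 0.12},
    "B": {"availability": 99.5, "price": 0.08},
}
print(rank_providers(providers, {"availability": 0.6, "price": 0.4}))
```

In the thesis's framework, the weights would come from the consumer's stated priorities, and 'Beginner' users would only ever see the top-level attributes being weighted here.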
The performance and locality tradeoff in BitTorrent-like P2P file-sharing systems
The recent surge of large-scale peer-to-peer (P2P) applications has brought huge amounts of P2P traffic, which significantly changes the Internet traffic pattern and increases the traffic-relay cost at Internet Service Providers (ISPs). To alleviate the stress on networks, localized peer selection has been proposed, advocating neighbor selection within the same network (AS or ISP) to reduce cross-ISP traffic. Nevertheless, localized peer selection may degrade downloading speed at the peers, creating a non-negligible tradeoff between downloading performance and traffic localization in the P2P system. Aiming at effective peer selection strategies that achieve any desired Pareto optimum in the face of this tradeoff, in this paper we characterize the performance and locality tradeoff as a multi-objective b-matching optimization problem. In particular, we first present a generic maximum weight b-matching model that characterizes the tit-for-tat in BitTorrent-like peer selection. We then introduce multiple optimization objectives into the model, which effectively capture the performance and locality tradeoff as simultaneous objectives to optimize. We also design fully distributed peer selection algorithms that can effectively achieve any desired Pareto optimum of the global multi-objective optimization, representing a desired tradeoff point between performance and locality in the entire system. Our models and algorithms are supported by rigorous analysis and extensive simulations. ©2010 IEEE. The IEEE International Conference on Communications (ICC 2010), Cape Town, South Africa, 23-27 May 2010. In Proceedings of the IEEE International Conference on Communications, 2010, p. 1-
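The performance-versus-locality tradeoff can be made concrete with a toy sketch. The greedy selection below is a simplified stand-in for the paper's multi-objective b-matching, not its distributed algorithm: each candidate's weight mixes normalised upload bandwidth (performance) with a bonus for sharing the local AS (locality), and a parameter `lam` sets the tradeoff point.

```python
# Toy locality-aware peer selection (a greedy stand-in for the paper's
# b-matching model). lam = 0 optimises pure download performance;
# lam = 1 optimises pure traffic locality.
def select_peers(candidates, local_as, b, lam=0.5):
    """candidates: list of (peer_id, upload_kbps, as_number).
    Returns the b peers maximising (1-lam)*bandwidth + lam*locality."""
    max_bw = max(c[1] for c in candidates) or 1
    def weight(c):
        perf = c[1] / max_bw                      # normalised bandwidth
        loc = 1.0 if c[2] == local_as else 0.0    # same-AS bonus
        return (1 - lam) * perf + lam * loc
    return [c[0] for c in sorted(candidates, key=weight, reverse=True)[:b]]

# Hypothetical candidates: (id, upload capacity, AS number).
peers = [("p1", 800, 100), ("p2", 1000, 200), ("p3", 600, 100)]
print(select_peers(peers, local_as=100, b=2))
```

Sweeping `lam` from 0 to 1 traces out different tradeoff points; the paper's contribution is reaching any desired Pareto-optimal point with fully distributed algorithms while preserving tit-for-tat.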
Generative AI-aided Optimization for AI-Generated Content (AIGC) Services in Edge Networks
As Metaverse emerges as the next-generation Internet paradigm, the ability to
efficiently generate content is paramount. AI-Generated Content (AIGC) offers a
promising solution to this challenge. However, the training and deployment of
large AI models necessitate significant resources. To address this issue, we
introduce an AIGC-as-a-Service (AaaS) architecture, which deploys AIGC models
in wireless edge networks, ensuring ubiquitous access to AIGC services for
Metaverse users. Nonetheless, providing personalized user
experiences requires the careful selection of AIGC service providers
(ASPs) capable of effectively executing user tasks. This selection process is
complicated by environmental uncertainty and variability, a challenge not yet
addressed well in existing literature. Therefore, we first propose a diffusion
model-based AI-generated optimal decision (AGOD) algorithm, which can generate
the optimal ASP selection decisions. We then apply AGOD to deep reinforcement
learning (DRL), resulting in the Deep Diffusion Soft Actor-Critic (D2SAC)
algorithm, which achieves efficient and effective ASP selection. Our
comprehensive experiments demonstrate that D2SAC outperforms seven leading DRL
algorithms. Furthermore, the proposed AGOD algorithm has the potential to be
extended to various optimization problems in wireless networks, positioning it
as a promising approach for future research on AIGC-driven services in the
Metaverse. The implementation of our proposed method is available at:
https://github.com/Lizonghang/AGOD
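The core decision problem, choosing an ASP under uncertain task outcomes, can be illustrated without the paper's diffusion-based machinery. The epsilon-greedy sketch below is only a toy stand-in for AGOD/D2SAC: it keeps a running utility estimate per ASP and balances exploiting the best-known provider against exploring others.

```python
# Toy epsilon-greedy ASP selection under uncertainty (a stand-in for
# the paper's diffusion-based AGOD/D2SAC pipeline, not that method).
import random

def choose_asp(estimates, eps=0.1, rng=random):
    if rng.random() < eps:
        return rng.choice(list(estimates))       # explore a random ASP
    return max(estimates, key=estimates.get)     # exploit best estimate

def update(estimates, counts, asp, utility):
    """Incremental mean of the observed task utilities for one ASP."""
    counts[asp] += 1
    estimates[asp] += (utility - estimates[asp]) / counts[asp]

# Hypothetical ASPs and observed task utilities in [0, 1].
estimates = {"asp1": 0.0, "asp2": 0.0}
counts = {"asp1": 0, "asp2": 0}
update(estimates, counts, "asp1", 0.9)
update(estimates, counts, "asp2", 0.4)
print(choose_asp(estimates, eps=0.0))
```

The paper's argument is that in high-dimensional, non-stationary edge environments, a generative diffusion model can produce better selection decisions than value-estimate heuristics like this one.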
Extending an open source enterprise service bus for dynamic discovery and selection of cloud data hosting solutions based on WS-policy
As part of Cloud computing, the service model Platform-as-a-Service (PaaS) has emerged, where customers can develop and host internet-scale applications on Cloud infrastructure. The Enterprise Service Bus (ESB) is one possible building block of a PaaS offering, providing integration capabilities for Service-Oriented Architectures. Dynamic service discovery and selection support for an ESB increases the flexibility of applications composed of reusable services in the Cloud and gives providers the possibility to react faster to changes in the market.
In this master's thesis we specify, design and implement dynamic discovery and selection of Cloud Data Hosting Solutions for an open-source ESB. The provided dynamic discovery and selection endpoint allows users of tenants to send requests with attached policies, while tenants register Cloud Data Hosting Solutions with policies that describe their capabilities. To provide a uniform policy language, a new WS-Policy assertion language is created and specified, which is used to express the functional and non-functional properties of Cloud Data Hosting Solutions. By matching the policy in a request against the policies of the registered Cloud Data Hosting Solutions, a suitable Cloud data store service is discovered. Moreover, we ensure data isolation between tenants while providing dynamic service discovery and selection.
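The matching step can be sketched in miniature. The dictionaries below stand in for policy assertions; the attribute names and the exact-match rule are illustrative assumptions, since the thesis's WS-Policy assertion language defines its own intersection semantics.

```python
# Toy policy matching between a tenant request and registered Cloud
# Data Hosting Solutions (attribute names are hypothetical; WS-Policy
# intersection semantics are simplified to exact-match on assertions).
def matches(required, offered):
    """A solution matches if it satisfies every asserted requirement."""
    return all(offered.get(k) == v for k, v in required.items())

def discover(request_policy, solutions):
    """Return the names of all registered solutions matching the request."""
    return [name for name, policy in solutions.items()
            if matches(request_policy, policy)]

# Hypothetical registered hosting solutions and their capability policies.
solutions = {
    "eu-sql": {"model": "relational", "location": "EU", "encrypted": True},
    "us-kv": {"model": "key-value", "location": "US", "encrypted": False},
}
print(discover({"model": "relational", "location": "EU"}, solutions))
```

In the ESB setting, the discovered solution would then be bound to the tenant's message flow, with per-tenant registries providing the data isolation the thesis describes.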
- ā¦