Trust Management Model for Cloud Computing Environment
Software as a Service (SaaS) is a software development and deployment
paradigm over the cloud that offers Information Technology services
dynamically, on an "on-demand" basis, over the internet. Trust is one of the
fundamental security concepts in storing and delivering such services. In
general, trust factors are integrated into existing security frameworks to
add a layer of security to collaborations between entities through the trust
relationship. However, deploying trust factors in a secured cloud environment
is a more complex engineering task because of the heterogeneous types of
service providers and consumers involved. In this paper, a formal trust
management model is introduced to manage trust and its properties for SaaS in
a cloud computing environment. The model can formally represent direct trust,
recommended trust, reputation, and related notions. To analyse the trust
properties in the cloud environment, the proposed approach estimates the
trust value and uncertainty of each peer by computing a decay function, the
number of positive interactions, a reputation factor, and a satisfaction
level for the collected information.
Comment: 5 pages, 2 figures, conference
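The abstract names the ingredients of the estimate (a decay function, the number of positive interactions, a reputation factor, and a satisfaction level) but not how they are combined. The sketch below is one hypothetical way to combine them; the function name, the exponential half-life decay, the fixed blending weight, and the uncertainty formula are all illustrative assumptions, not the paper's actual model.

```python
import math

def trust_value(interactions, reputation, now, half_life=30.0, w_direct=0.7):
    """Hypothetical sketch of a peer-trust estimate.

    interactions: list of (timestamp, satisfaction) pairs for positive
                  interactions, satisfaction in [0, 1]
    reputation:   aggregate reputation score in [0, 1] reported by other peers
    now:          current time, in the same units as the timestamps
    Returns (trust, uncertainty), both illustrative quantities.
    """
    if not interactions:
        # No direct evidence yet: fall back entirely on reputation,
        # with maximal uncertainty about direct trust.
        return reputation, 1.0

    # Decay function: older interactions contribute exponentially less.
    decayed = [s * math.exp(-math.log(2) * (now - t) / half_life)
               for t, s in interactions]
    direct = sum(decayed) / len(decayed)

    # Uncertainty shrinks as the number of positive interactions grows.
    uncertainty = 1.0 / (1.0 + len(interactions))

    # Blend direct (first-hand) trust with reputation (second-hand) trust.
    return w_direct * direct + (1 - w_direct) * reputation, uncertainty
```

A recommended-trust value from a third party could be folded in the same way, as another weighted term in the final blend.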
Using machine learning for intelligent shard sizing on the cloud
Sharding implementations use conservative approximations for determining the number of cloud instances required and the size of the shards to be stored on each of them. Conservative approximations are often inaccurate and result in overloaded deployments, which need reactive refinement. Reactive refinement results in demand for additional resources from an already overloaded system and is counterproductive.
This paper proposes an algorithm that eliminates the need for conservative approximations and reduces the need for reactive refinement. A machine learning algorithm based on multiple linear regression is used to predict the latency of requests for a given application deployed on a cloud machine. The predicted latency makes it possible to decide, accurately and with certainty, whether the capacity of the cloud machine will satisfy the service level agreement for effective operation of the application. Applying the proposed method to a popular database schema on the cloud yielded highly accurate predictions. The deployment results and the tests performed to establish this accuracy are presented in detail to substantiate these claims.
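The core idea can be sketched as follows: fit a multiple linear regression on observed (features, latency) pairs, then compare the predicted latency for a candidate workload against the SLA threshold. This is a minimal pure-Python illustration of the general technique, not the paper's implementation; the feature choices and the SLA check are assumptions for the example.

```python
def fit_mlr(X, y):
    """Least-squares fit of y ~ b0 + b1*x1 + ... via the normal equations.
    X: list of feature rows, y: list of targets. Suitable for small inputs."""
    n, k = len(X), len(X[0])
    A = [[1.0] + list(row) for row in X]  # prepend intercept column
    m = k + 1
    # Normal equations: (A^T A) beta = A^T y
    ata = [[sum(A[r][i] * A[r][j] for r in range(n)) for j in range(m)]
           for i in range(m)]
    aty = [sum(A[r][i] * y[r] for r in range(n)) for i in range(m)]
    # Gaussian elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, m):
            f = ata[r][col] / ata[col][col]
            for c in range(col, m):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    # Back substitution.
    beta = [0.0] * m
    for r in range(m - 1, -1, -1):
        beta[r] = (aty[r] - sum(ata[r][c] * beta[c]
                                for c in range(r + 1, m))) / ata[r][r]
    return beta

def predict_latency(beta, row):
    """Predicted latency for a feature row (e.g. request rate, shard size)."""
    return beta[0] + sum(b * x for b, x in zip(beta[1:], row))

def within_sla(beta, row, sla_ms):
    """Decide whether the predicted latency satisfies the SLA threshold."""
    return predict_latency(beta, row) <= sla_ms
```

With such a check, a shard can be sized up to the largest configuration whose predicted latency still stays under the SLA, instead of relying on a conservative approximation.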
Efficient read monotonic data aggregation across shards on the cloud
Client-centric consistency models define the view of the data storage that a client expects in relation to the operations performed by that client within a session. Monotonic reads is a client-centric consistency model which ensures that once a process has seen a particular value for an object, subsequent accesses will never return any earlier value. Monotonic reads are used in applications such as news feeds and social networks to ensure that the user always has a forward-moving view of the data.
The idea of Monotonic reads over multiple copies of the data and for lightly loaded systems is intuitive and easy to implement. For example, ensuring that a client session always fetches data from the same server automatically ensures that the user will never view old data.
However, such a simplistic setup will not work for large deployments on the cloud, where the data is sharded across multiple high availability setups and there are several million clients accessing data at the same time. In such a setup it becomes necessary to ensure that the data fetched from multiple shards are logically consistent with each other. The use of trivial implementations, like sticky sessions, causes severe performance degradation during peak loads.
This paper explores the challenges surrounding consistent monotonic reads over a sharded setup on the cloud and proposes an efficient architecture for the same. The performance of the proposed architecture is measured by implementing it on a cloud setup and recording the response times for different shard counts. We show that the proposed solution scales with almost no change in performance as the number of shards increases.
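The monotonic-reads guarantee described above is commonly enforced with per-shard version tracking on the client side: the session remembers the highest version it has seen from each shard and rejects (and retries) any read that would move backwards. The sketch below illustrates that general mechanism; the class name, the integer version scheme, and the retry-by-returning-None convention are assumptions for the example, not the paper's architecture.

```python
class MonotonicReadSession:
    """Hypothetical sketch: client-side session state enforcing monotonic
    reads across shards by tracking the highest version seen per shard."""

    def __init__(self):
        self.seen = {}  # shard_id -> highest version observed in this session

    def accept(self, shard_id, version, value):
        """Accept a read only if it is at least as new as what this session
        has already seen from the shard; otherwise signal a retry."""
        if version < self.seen.get(shard_id, 0):
            return None  # stale replica: caller should retry another replica
        self.seen[shard_id] = version
        return value
```

This keeps each shard's reads individually monotonic; ensuring that values fetched from *different* shards are logically consistent with each other, at scale and without sticky sessions, is the harder problem the paper's architecture addresses.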