636 research outputs found

    Enforcing reputation constraints on business process workflows

    The problem of trust in determining the flow of execution of business processes has been at the centre of research interest over the last decade, as business processes have become a de facto model of Internet-based commerce, particularly with the increasing popularity of Cloud computing. One of the main measures of trust is reputation, where the quality of services as delivered to their clients is the main factor in calculating service and service-provider reputation values. The work presented here contributes to solving this problem by defining a model for calculating service reputation levels in a BPEL-based business workflow. These reputation levels are then used to control the execution of the workflow based on service-level agreement constraints provided by the users of the workflow. The main contribution of the paper is first to give a formal meaning to BPEL processes constrained by users' reputation requirements, and then to demonstrate that these requirements can be enforced using a reference architecture, with a case scenario from the domain of distributed map processing. Finally, the paper discusses the possible threats that can be launched against such an architecture.
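    As a rough, hypothetical illustration of the idea (not the paper's actual model), the Python sketch below computes a service's reputation as the average of recent client QoS ratings and refuses to execute a workflow step when that reputation falls below an SLA threshold; all names and the 0.8 threshold are assumptions.

        # Hypothetical sketch: reputation-gated service invocation.
        # Reputation is the mean of recent client QoS ratings in [0, 1];
        # a workflow step runs only if reputation meets the SLA threshold.
        from collections import deque

        class ReputationTracker:
            def __init__(self, window=50):
                self.ratings = deque(maxlen=window)  # most recent client ratings

            def record(self, rating: float) -> None:
                self.ratings.append(max(0.0, min(1.0, rating)))

            def reputation(self) -> float:
                # Neutral prior of 0.5 until any feedback arrives.
                return sum(self.ratings) / len(self.ratings) if self.ratings else 0.5

        def invoke_if_trusted(tracker, sla_threshold, call):
            """Run a workflow step only if the service's reputation satisfies the SLA."""
            if tracker.reputation() < sla_threshold:
                raise RuntimeError("SLA reputation constraint violated; step skipped")
            return call()

        # Usage: gate a map-processing step behind a 0.8 reputation requirement.
        tracker = ReputationTracker()
        for r in (0.9, 0.85, 0.95):
            tracker.record(r)
        print(invoke_if_trusted(tracker, 0.8, lambda: "map tile rendered"))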

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems in order to better understand their goals and methodology, which helps in evaluating their applicability to similar problems. The taxonomy also provides a "gap analysis" of the area, through which researchers can identify new issues for investigation. We also hope that the proposed taxonomy and mapping give new practitioners an easy way into this complex area of research. (Comment: 46 pages, 16 figures, Technical Report)
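    To make the idea of mapping systems onto a taxonomy and deriving a "gap analysis" concrete, here is a minimal, hypothetical Python sketch; the dimensions, categories, and system names are invented for illustration and are not the paper's.

        # Hypothetical sketch: tag each system with one category per taxonomy
        # dimension, then list the category combinations no surveyed system
        # occupies -- a crude "gap analysis".
        from itertools import product

        DIMENSIONS = {
            "organisation": ("hierarchical", "federated", "hybrid"),
            "replication": ("synchronous", "asynchronous"),
        }

        systems = {
            "GridA": {"organisation": "hierarchical", "replication": "asynchronous"},
            "GridB": {"organisation": "federated", "replication": "asynchronous"},
        }

        occupied = {tuple(s[d] for d in DIMENSIONS) for s in systems.values()}
        gaps = [c for c in product(*DIMENSIONS.values()) if c not in occupied]
        print(gaps)  # category combinations not covered by any surveyed system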

    Considering Human Aspects on Strategies for Designing and Managing Distributed Human Computation

    A human computation system can be viewed as a distributed system in which the processors are humans, called workers. Such systems harness the cognitive power of a group of workers connected to the Internet to execute relatively simple tasks whose solutions, once grouped, solve a problem that systems equipped with only machines could not solve satisfactorily. Examples of such systems are Amazon Mechanical Turk and the Zooniverse platform. A human computation application comprises a group of tasks, each of which can be performed by one worker; tasks may have dependencies among each other. In this study, we propose a theoretical framework to analyze this type of application from a distributed-systems point of view. Our framework is established on three dimensions that represent different perspectives from which human computation applications can be approached: quality-of-service requirements, design and management strategies, and human aspects. Using this framework, we review human computation from the perspective of programmers seeking to improve the design of human computation applications and of managers seeking to increase the effectiveness of human computation infrastructures in running such applications. In doing so, besides integrating and organizing what has been done in this direction, we also put into perspective the fact that the human aspects of the workers in such systems introduce new challenges in terms of, for example, task assignment, dependency management, and fault prevention and tolerance. We discuss how these challenges relate to distributed systems and other areas of knowledge. (Comment: 3 figures, 1 table)
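    As a toy illustration of the dependency management and task assignment mentioned above (not the authors' framework), the following Python sketch assigns tasks to workers in rounds, releasing a task only once all of its prerequisites are done; every name here is hypothetical.

        # Hypothetical sketch: assign human-computation tasks to workers while
        # respecting inter-task dependencies. A task is "ready" once every
        # task it depends on is done; workers are used round-robin.
        from collections import deque

        def schedule(tasks, workers):
            """tasks maps task -> set of prerequisites; returns (worker, task) pairs."""
            done, pending, assignments = set(), dict(tasks), []
            queue = deque(workers)
            while pending:
                ready = [t for t, deps in pending.items() if deps <= done]
                if not ready:
                    raise ValueError("cyclic dependencies: no task is ready")
                for t in ready:
                    worker = queue[0]
                    queue.rotate(-1)
                    assignments.append((worker, t))
                    done.add(t)      # simplification: tasks complete instantly
                    del pending[t]
            return assignments

        print(schedule({"label": set(), "verify": {"label"}, "merge": {"verify"}},
                       ["alice", "bob"]))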

    Survey on Mobile Social Cloud Computing (MSCC)

    Due to advances in technology, the use of mobile devices increases with time, and mobile devices (phones, PDAs, laptops, etc.) have become an essential part of everyday life. With easy access to the Internet, the popularity of Social Networking Services (SNS) has grown, and with sharp drops in prices, the use of mobile devices, including smartphones and laptops, is rising steadily. As a result, mobile devices are now used as providers of computing resources and services rather than only as requesters. To this end, the concept of Cloud Computing (CC) is merged with mobile computing and SNS, a combination known as Mobile Social Cloud Computing (MSCC). MSCC is a technology of the future that enables users and consumers to access services quickly and efficiently. MSCC is the integration of three different technologies: 1) mobile computing, 2) SNS, and 3) Cloud Computing. Here, mobile devices use SNS (as either provider or requester) in a Cloud Computing environment. In such an environment, a user can participate in a social network through mobile devices via relationships based on trust. Members of the same social network can share cloud services or data with other users of that network from their mobile devices without further authentication, since they belong to the same social network. Various techniques have been revised and improved to achieve good performance in a cloud-computing network environment. This work presents a detailed survey of existing social cloud and mobile cloud techniques and their application areas; the comparative survey tables can be used as a guideline for selecting a technique suitable for the application at hand. The paper reports the results of a survey of Mobile Social Cloud Computing (MSCC) with regard to the importance of security in MSCC, comparing the work of different researchers in the field on the basis of essential features such as the security algorithm used, the QoS and fault-tolerance strategy used, the simplicity of the proposed algorithm, and space complexity. Considering the limitations of the existing social cloud and mobile cloud techniques, an adaptive fault-tolerant MSCC framework is proposed for future research.
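    The sharing rule described above can be made concrete with a small, hypothetical Python sketch: a requester may access a resource shared by another user only when both belong to the social network through which it was shared; the networks, users, and resource names are invented.

        # Hypothetical sketch of the MSCC sharing rule: access to a shared
        # cloud resource is granted when requester and owner belong to the
        # same social network, with no further authentication step.
        networks = {
            "photo_club": {"asha", "bilal", "chen"},
            "hiking_group": {"chen", "dana"},
        }
        # (owner, resource) -> social network through which it is shared
        shared = {("asha", "vacation_album"): "photo_club"}

        def can_access(requester, owner, resource):
            network = shared.get((owner, resource))
            members = networks.get(network, set())
            return requester in members and owner in members

        print(can_access("bilal", "asha", "vacation_album"))  # True: same network
        print(can_access("dana", "asha", "vacation_album"))   # False: not a member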

    A Review on Framework and Quality of Service Based Web Services Discovery

    Selection of Web services (WSs) is one of the most important steps in the application of different types of WSs, such as WS composition systems and Universal Description, Discovery, and Integration (UDDI) registries. The more such WSs become available on the Internet, the larger the number of services whose functionality matches a given service request. Selecting the WSs with the highest quality largely depends on quality of service (QoS), since it plays a significant role in ranking such services: to select the best WSs, the candidate services are ranked according to the user's requirements on service quality. In many cases, the value of a QoS ontology lies in its support for the non-functional features of WSs; such an ontology can also provide solutions to the interoperability of QoS descriptions. Moreover, based on the QoS ontology, a framework for semantic WS discovery can be developed. The framework enhances the automatic discovery of WSs and can improve users' efficiency in finding the best web services. In short, Web Services are software functionalities published and made accessible through the Internet, and different protocols and web mechanisms have been defined to access these services.
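    The QoS-based ranking step lends itself to a short example. The hypothetical Python sketch below ranks candidate services by a weighted sum of normalised QoS attributes, inverting "cost-type" attributes such as latency and price so that higher scores are always better; the attribute values and weights are invented, and real frameworks (including the one surveyed here) are considerably richer.

        # Hypothetical sketch: rank candidate web services by a weighted sum
        # of min-max-normalised QoS attributes, per the user's weights.
        candidates = {
            "WS-A": {"availability": 0.99, "latency_ms": 120, "price": 0.05},
            "WS-B": {"availability": 0.95, "latency_ms": 60,  "price": 0.02},
        }
        weights = {"availability": 0.6, "latency_ms": 0.25, "price": 0.15}
        cost_type = {"latency_ms", "price"}  # lower is better; invert after scaling

        def normalise(attr, value):
            values = [qos[attr] for qos in candidates.values()]
            lo, hi = min(values), max(values)
            score = 1.0 if hi == lo else (value - lo) / (hi - lo)
            return 1.0 - score if attr in cost_type else score

        def rank():
            return sorted(candidates,
                          key=lambda ws: sum(w * normalise(a, candidates[ws][a])
                                             for a, w in weights.items()),
                          reverse=True)

        print(rank())  # ['WS-A', 'WS-B']: availability outweighs latency and price here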

    Planning and Optimization During the Life-Cycle of Service Level Agreements for Cloud Computing

    A Service Level Agreement (SLA) is an electronic contract between the customer and the provider of a service. The partners involved clarify their expectations and obligations with respect to the service and its quality. SLAs are already being used to describe Cloud computing services. The service provider ensures that the service quality is met and remains consistent with the customer's requirements until the end of the agreed term. Carrying out SLAs requires considerable effort in order to achieve autonomy, economic viability and efficiency. The current state of the art in SLA management faces challenges such as SLA representation for Cloud services, business-driven SLA optimisation, service outsourcing and resource management; these areas constitute central and timely research topics. Managing SLAs through the different phases of their lifetime requires a methodology developed for this purpose, which simplifies the realisation of Cloud SLA management. I present a broad model of SLA life-cycle management that addresses the challenges listed above. This approach enables automatic service modelling as well as negotiation, provisioning and monitoring of SLAs. For the creation phase, I outline how the modelling structures can be improved and simplified. A further goal of my approach is to minimise implementation and outsourcing costs in favour of competitiveness. In the SLA monitoring phase, I develop strategies for the selection and allocation of virtual Cloud resources during migration phases; subsequently, I use monitoring to check, across a larger collection of SLAs, whether the agreed fault tolerances are being met. The present work contributes to a design effort of the GWDG and its scientific communities. The research leading to this doctoral thesis was carried out as part of the SLA@SOI EU/FP7 integrated project (contract No. 216556).
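    As a minimal illustration of the monitoring phase described above (a sketch under assumed SLA terms, not the thesis's actual machinery), the following Python snippet checks measured metrics against the fault tolerances agreed in a collection of SLAs.

        # Hypothetical sketch: compare measured service metrics against the
        # tolerances agreed in each SLA and report any violations.
        slas = {
            "customer-1": {"availability_min": 0.999, "response_ms_max": 200},
            "customer-2": {"availability_min": 0.99,  "response_ms_max": 500},
        }
        measured = {"availability": 0.995, "response_ms": 180}

        def violations(sla):
            found = []
            if measured["availability"] < sla["availability_min"]:
                found.append("availability below agreed minimum")
            if measured["response_ms"] > sla["response_ms_max"]:
                found.append("response time above agreed maximum")
            return found

        for customer, sla in slas.items():
            broken = violations(sla)
            print(customer, "OK" if not broken else broken)
        # customer-1 breaches its 0.999 availability floor; customer-2 is satisfied.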

    Business-driven resource allocation and management for data centres in cloud computing markets

    Cloud Computing markets arise as an efficient way to allocate resources for the execution of tasks and services within a set of geographically dispersed providers from different organisations. Client applications and service providers meet in a market and negotiate the sale of services by signing a Service Level Agreement that contains the Quality of Service terms that the Cloud provider has to guarantee by properly managing its resources. Current implementations of Cloud markets suffer from a lack of information flow between the negotiating agents, which sell the resources, and the resource managers that allocate the resources to fulfil the agreed Quality of Service. This thesis establishes an intermediate layer between the market agents and the resource managers. In consequence, agents can perform accurate negotiations by considering the status of the resources in their negotiation models, and providers can manage their resources considering both performance and business objectives. This thesis defines a set of policies for the negotiation and enforcement of Service Level Agreements. Such policies deal with different Business-Level Objectives: maximisation of revenue, classification of clients, maximisation of trust and reputation, and minimisation of risk. The thesis demonstrates the effectiveness of these policies by means of fine-grained simulations. A pricing model may be influenced by many parameters; the weight of each parameter within the final model is not always known, and it can change as the market environment evolves. This thesis models and evaluates how providers can self-adapt to changing environments by means of genetic algorithms: providers that rapidly adapt to changes in the environment achieve higher revenues than providers that do not. Policies are usually conceived for the short term: they model the behaviour of the system by considering the current status and the expected state immediately after their application. This thesis defines and evaluates a trust and reputation system that pushes providers to consider the long-term impact of their decisions. The trust and reputation system expels providers and clients with dishonest behaviour, and providers that consider the impact on their reputation in their actions improve the achievement of their Business-Level Objectives. Finally, this thesis studies risk as the effect of uncertainty on the expected outcomes of cloud providers. The particularities of cloud appliances as sets of interconnected resources are studied, as well as how risk propagates through the linked nodes. Incorporating risk models helps providers differentiate Service Level Agreements according to their risk, take preventive actions where the risk is concentrated, and price accordingly. Applying risk management raises the fulfilment rate of the Service Level Agreements and increases the profit of the provider.
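    The self-adaptation by genetic algorithms can be illustrated with a toy Python sketch: a population of pricing-weight vectors evolves against a stubbed revenue estimate (in the thesis this role is played by market simulations); the pricing model, demand curve, and all parameters below are invented.

        # Hypothetical sketch: a toy genetic algorithm evolves the weights of
        # a linear pricing model to maximise a stand-in revenue estimate.
        import random

        random.seed(0)

        def revenue(w):
            # Invented demand curve: demand falls as the price weights rise.
            demand = max(0.0, 10.0 - 4.0 * w[0] - 2.0 * w[1])
            price = 1.0 * w[0] + 0.5 * w[1]
            return demand * price

        def mutate(w):
            return [max(0.0, x + random.gauss(0, 0.1)) for x in w]

        population = [[random.random(), random.random()] for _ in range(20)]
        for _ in range(50):                       # generations
            population.sort(key=revenue, reverse=True)
            survivors = population[:10]           # selection
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in range(10)]
        best = max(population, key=revenue)
        print(best, revenue(best))                # converges towards the optimum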

    Developing a Trustworthy Cloud Service Framework for Cloud Computing Security

    Cloud computing is quickly becoming an essential platform for sharing infrastructure, software, applications, and corporate resources. Cloud computing has many advantages, but users still have many questions about the dependability and safety of cloud services. Concerns about the hazards associated with the possible exploitation of this technology to carry out criminal operations could threaten the otherwise undeniable success of cloud computing. To keep customers satisfied, the cloud model must prioritize safety, openness, and dependability. Its main purpose is data security, which concerns everyone contemplating cloud services; a cloud-based attack-protection system should safeguard data, communications, and information. According to our studies, the recommended technique is effective, although updating tags and blocks when data is amended incurs computation and communication costs. Scalability, data secrecy, and decentralized double encryption improve security. The proposed method employs cloud servers for computation-intensive tasks and protects data content by depriving data owners and users of privilege information; it also ensures accountability. Sharing health data in the cloud thus becomes feasible, cost-effective, efficient, and adaptive, and better serves individuals. The proposed "Advanced Encryption Standard with Lightweight Ciphertext-Identity and Attribute-based Encryption" (AES-lightweight CP-ABE) aims to protect sensitive data.
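    The abstract's hybrid construction can be sketched structurally: bulk data is encrypted with AES, and the AES key is released only to users whose attributes satisfy an access policy. In real CP-ABE the policy is enforced cryptographically inside the ciphertext; the hypothetical Python sketch below replaces that with a plain attribute check and uses the third-party cryptography package for AES-GCM, so it illustrates the shape of the scheme, not the scheme itself.

        # Hypothetical, simplified sketch: AES-GCM for the bulk data, with a
        # plain attribute-policy check standing in for CP-ABE key release.
        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=128)
        nonce = os.urandom(12)
        ciphertext = AESGCM(key).encrypt(nonce, b"patient record 42", None)

        def policy(attrs):
            # Example policy: both "doctor" and "cardiology" are required.
            return {"doctor", "cardiology"} <= attrs

        def decrypt_for(attrs):
            if not policy(attrs):
                raise PermissionError("attributes do not satisfy the policy")
            return AESGCM(key).decrypt(nonce, ciphertext, None)

        print(decrypt_for({"doctor", "cardiology"}))  # b'patient record 42'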