
    A Dynamic Data-Driven Simulation Approach for Preventing Service Level Agreement Violations in Cloud Federation

    The possibility of accessing a virtually unlimited pool of computational resources at a drastically reduced price has made cloud computing popular. With increasing adoption and unpredictable workloads, cloud providers face the problem of meeting their service level agreement (SLA) claims, as demonstrated by large vendors such as Amazon and Google. Therefore, users of cloud resources are embracing the more promising cloud federation model to ensure service guarantees. Here, users have the option of selecting between multiple cloud providers and subsequently switching to a more reliable one in the event of a provider's inability to meet its SLA. In this paper, we propose a novel dynamic data-driven architecture capable of realising resource provisioning in a cloud federation with minimal SLA violations. We exemplify the approach with the aid of case studies to demonstrate its feasibility.
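    A minimal sketch of the core idea described above, switching providers when monitored data predicts an SLA violation, is given below. The provider names, the naive violation-probability estimate, and the switch_threshold parameter are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: dynamic, data-driven provider selection in a federation.
# A workload moves to another provider when monitoring data predicts an SLA
# violation at the current one. All names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    response_times_ms: list  # recent monitoring samples

    def predicted_violation_prob(self, sla_limit_ms: float) -> float:
        """Fraction of recent samples exceeding the SLA limit (a naive predictor)."""
        if not self.response_times_ms:
            return 0.0
        breaches = sum(1 for t in self.response_times_ms if t > sla_limit_ms)
        return breaches / len(self.response_times_ms)

def select_provider(providers, sla_limit_ms, current, switch_threshold=0.2):
    """Stay with the current provider unless its predicted violation risk
    exceeds the threshold; then switch to the most reliable alternative."""
    risk = current.predicted_violation_prob(sla_limit_ms)
    if risk <= switch_threshold:
        return current
    return min(providers, key=lambda p: p.predicted_violation_prob(sla_limit_ms))

if __name__ == "__main__":
    a = Provider("cloud-A", [120, 340, 410, 390])   # degrading
    b = Provider("cloud-B", [150, 160, 170, 155])   # stable
    chosen = select_provider([a, b], sla_limit_ms=300, current=a)
    print("run workload on:", chosen.name)          # -> cloud-B
```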

    High Quality P2P Service Provisioning via Decentralized Trust Management

    Trust management is essential to fostering cooperation and high quality service provisioning in several peer-to-peer (P2P) applications. Among those applications are customer-to-customer (C2C) trading sites and markets of services implemented on top of centralized infrastructures, P2P systems, or online social networks. Under these application contexts, existing work does not adequately address the heterogeneity of the problem settings in practice. This heterogeneity includes the different approaches employed by participants to evaluate the trustworthiness of their partners, the diversity in contextual factors that influence service provisioning quality, and the variety of possible behavioral patterns of the participants. This thesis presents the design and usage of appropriate computational trust models to enforce cooperation and ensure high quality P2P service provisioning, considering the above heterogeneity issues. First, I propose a graphical probabilistic framework for peers to model and evaluate the trustworthiness of others in a highly heterogeneous setting. The framework targets several important issues in the trust research literature: the multi-dimensionality of trust, the reliability of different rating sources, and the personalized modeling and computation of trust in a participant based on the quality of the services it provides. Next, I present an analysis of the effective use of computational trust models in environments where participants exhibit various behaviors, e.g., honest, rational, and malicious. I provide theoretical results showing the conditions under which cooperation emerges when using trust learning models with a given detection accuracy, and how cooperation can still be sustained while reducing the cost and accuracy of those models. As another contribution, I also design and implement a general prototyping and simulation framework for reputation-based trust systems. The developed simulator can be used for many purposes, such as discovering new trust-related phenomena or evaluating the performance of a trust learning algorithm in complex settings. Two potential applications of computational trust models are then discussed: (1) the selection and ranking of (Web) services based on quality ratings from reputable users, and (2) the use of a trust model to choose reliable delegates in a key recovery scenario in a distributed online social network. Finally, I identify a number of open issues in building next-generation, open reputation-based trust management systems and propose several future research directions building on the work in this thesis.
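    As a rough illustration of one ingredient named above, weighting ratings by the estimated reliability of their sources, the sketch below aggregates third-party ratings into a single trust estimate. The reliability scores and the simple weighted average are illustrative assumptions; the thesis itself uses a graphical probabilistic model rather than this stand-in.

```python
# Illustrative sketch only: aggregate ratings about a provider, weighting each
# rating source by an estimated reliability in [0, 1]. The thesis uses a
# graphical probabilistic framework; this weighted average is a stand-in.
def aggregate_trust(ratings, reliability):
    """ratings: {advisor: rating in [0, 1]}, reliability: {advisor: weight in [0, 1]}."""
    weighted, total = 0.0, 0.0
    for advisor, rating in ratings.items():
        w = reliability.get(advisor, 0.0)
        weighted += w * rating
        total += w
    return weighted / total if total > 0 else 0.5  # fall back to ignorance

ratings = {"alice": 0.9, "bob": 0.2, "carol": 0.8}
reliability = {"alice": 0.9, "bob": 0.1, "carol": 0.7}   # assumed estimates
print(round(aggregate_trust(ratings, reliability), 3))    # ~0.818
```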

    Enhancing trustability in MMOGs environments

    Massively Multiplayer Online Games (MMOGs; e.g., World of Warcraft), virtual worlds (VWs; e.g., Second Life), and social networks (e.g., Facebook) strongly demand more autonomous security and trust mechanisms, similar to those humans use in real life. As is well known, this is a difficult matter, because trust in humans and organizations depends on the perception and experience of each individual, which is difficult to quantify or measure. In fact, these societal environments lack trust mechanisms similar to those involved in human-to-human interactions. Besides, interactions mediated by computing devices are constantly evolving, requiring trust mechanisms that keep pace with these developments and assess risk situations. In VWs/MMOGs, it is widely recognized that users develop trust relationships from their in-world interactions with others. However, these trust relationships end up not being represented in the data structures (or databases) of such virtual worlds, though they sometimes appear associated with reputation and recommendation systems. In addition, as far as we know, the user is not provided with a personal trust tool to sustain his/her decision making while he/she interacts with other users in the virtual or game world. In order to solve this problem, as well as those mentioned above, we propose herein a formal representation of these personal trust relationships, which are based on avatar-avatar interactions. The leading idea is to provide each avatar-impersonated player with a personal trust tool that follows a distributed trust model, i.e., the trust data is distributed over the societal network of a given VW/MMOG. Representing, manipulating, and inferring trust from the user/player point of view is certainly a grand challenge. When someone meets an unknown individual, the question is "Can I trust him/her or not?". It is clear that this requires the user to have access to a representation of trust about others, but, unless we are using an open-source VW/MMOG, it is difficult, not to say unfeasible, to get access to such data. Even in an open-source system, a number of users may refuse to share information about their friends, acquaintances, or others. By putting together its own data and data gathered from others, the avatar-impersonated player should be able to arrive at a trust result about its current trustee. For the trust assessment method used in this thesis, we use subjective logic operators and graph search algorithms to undertake such trust inference about the trustee. The proposed trust inference system has been validated using a number of OpenSimulator (opensimulator.org) scenarios, which showed an increase in accuracy when evaluating the trustability of avatars. Summing up, our proposal aims to introduce a trust theory for virtual worlds, with its trust assessment metrics (e.g., subjective logic) and trust discovery methods (e.g., graph search methods), on an individual basis, rather than relying on the usual centralized reputation systems. In particular, and unlike other trust discovery methods, our methods run at interactive rates.
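    Since the abstract names subjective logic operators as the trust-assessment machinery, the sketch below shows the two standard operators typically used along trust paths: discounting (transitivity) and cumulative fusion (consensus), applied to opinions of the form (belief, disbelief, uncertainty). The specific opinion values and the two-hop paths are illustrative; the thesis combines such operators with graph search over the social network.

```python
# Sketch of standard subjective-logic operators (Josang) that the abstract
# mentions; an opinion is (belief, disbelief, uncertainty) with b + d + u = 1.
def discount(op_ab, op_bx):
    """A's derived opinion about X via B (transitivity along a trust path)."""
    b_ab, d_ab, u_ab = op_ab
    b_bx, d_bx, u_bx = op_bx
    return (b_ab * b_bx,
            b_ab * d_bx,
            d_ab + u_ab + b_ab * u_bx)

def fuse(op1, op2):
    """Cumulative fusion (consensus) of two independent opinions about X."""
    b1, d1, u1 = op1
    b2, d2, u2 = op2
    k = u1 + u2 - u1 * u2
    if k == 0:                      # both opinions fully certain (simplified handling)
        return ((b1 + b2) / 2, (d1 + d2) / 2, 0.0)
    return ((b1 * u2 + b2 * u1) / k,
            (d1 * u2 + d2 * u1) / k,
            (u1 * u2) / k)

# Two hypothetical trust paths from avatar A to trustee X, then fused.
path1 = discount((0.8, 0.1, 0.1), (0.7, 0.2, 0.1))   # A -> B -> X
path2 = discount((0.6, 0.2, 0.2), (0.9, 0.0, 0.1))   # A -> C -> X
print(fuse(path1, path2))
```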

    Promoting Honesty in Electronic Marketplaces: Combining Trust Modeling and Incentive Mechanism Design

    This thesis work is in the area of modeling trust in multi-agent systems: systems of software agents designed to act on behalf of users (buyers and sellers) in applications such as e-commerce. The focus is on developing an approach for buyers to model the trustworthiness of sellers in order to make effective decisions about which sellers to select for business. One challenge is the problem of unfair ratings, which arises when modeling the trust of sellers relies on ratings provided by other buyers (called advisors). Existing approaches for coping with this problem fail in scenarios where the majority of advisors are dishonest, buyers do not have much personal experience with sellers, advisors try to flood the trust modeling system with unfair ratings, and sellers vary their behavior widely. We propose a novel personalized approach for effectively modeling the trustworthiness of advisors, allowing a buyer to 1) model the private reputation of an advisor based on their ratings for commonly rated sellers; 2) model the public reputation of the advisor based on all ratings for the sellers ever rated by that agent; and 3) flexibly weight the private and public reputations into one combined measure of the trustworthiness of the advisor. Our approach tracks ratings according to their time windows and limits the ratings accepted, in order to cope with advisors flooding the system and to deal with changes in agents' behavior. Experimental evidence demonstrates that our model outperforms other models in detecting dishonest advisors and is able to help buyers gain the largest profit when doing business with sellers. Equipped with this richer method for modeling the trustworthiness of advisors, we then embed this reasoning into a novel trust-based incentive mechanism to encourage agents to be honest. In this mechanism, buyers select the most trustworthy advisors as their neighbors, from which they can ask advice about sellers, forming a social network. In contrast with other researchers, we also have sellers model the reputation of buyers. Sellers offer better rewards to satisfy buyers that are well respected in the social network, in order to build their own reputation. We provide precise formulae used by sellers when reasoning about immediate and future profit to determine their bidding behavior and the rewards offered to buyers, and emphasize the importance for buyers of adopting a strategy that limits the number of sellers considered for each good to be purchased. We theoretically prove that our mechanism promotes honesty from buyers in reporting seller ratings, and honesty from sellers in delivering products as promised. We also provide a series of experimental results in a simulated dynamic environment where agents may be arriving and departing. This provides a stronger defense of the mechanism as one that is robust to important conditions in the marketplace. Our experiments clearly show the gains in profit enjoyed by both honest sellers and honest buyers when our mechanism is introduced and our proposed strategies are followed. In general, our research will serve to promote honesty amongst buyers and sellers in e-marketplaces. Our particular proposal of allowing sellers to model buyers opens a new direction in trust modeling research. The novel direction of designing an incentive mechanism based on trust modeling, and using this mechanism to further help trust modeling by diminishing the problem of unfair ratings, will, we hope, bridge researchers in the areas of trust modeling and mechanism design.
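    A minimal sketch of the private/public reputation combination described above is given below. The beta-style counting of positive and negative rating agreements and the particular weighting schedule are plausible assumptions chosen for illustration, not the thesis's exact formulae.

```python
# Illustrative sketch: combine an advisor's private and public reputation into
# one trustworthiness score. The evidence counting and the weight w are
# assumptions for illustration, not the thesis's exact formulae.
def beta_expectation(pos: int, neg: int) -> float:
    """Expected value of a Beta(pos + 1, neg + 1) reputation estimate."""
    return (pos + 1) / (pos + neg + 2)

def advisor_trust(private_pos, private_neg, public_pos, public_neg):
    """Weight private evidence more heavily as the buyer accumulates more of it."""
    private = beta_expectation(private_pos, private_neg)
    public = beta_expectation(public_pos, public_neg)
    n_private = private_pos + private_neg
    w = n_private / (n_private + 10.0)      # assumed confidence schedule
    return w * private + (1.0 - w) * public

# A buyer with little direct evidence leans mostly on the public reputation.
print(round(advisor_trust(private_pos=2, private_neg=0,
                          public_pos=30, public_neg=20), 3))   # ~0.62
```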

    Expert Stock Picker: The Wisdom of (Experts in) Crowds

    The phrase "the wisdom of crowds" suggests that good verdicts can be achieved by averaging the opinions and insights of large, diverse groups of people who possess varied types of information. Online user-generated content enables researchers to view the opinions of large numbers of users publicly. These opinions, in the form of reviews and votes, can be used to automatically generate remarkably accurate verdicts (collective estimations of future performance) about companies, products, and people on the Web, and so help resolve very tough problems. The wealth and richness of user-generated content may enable firms and individuals to aggregate consumer thinking for better business understanding. Our main contribution, here applied to user-generated stock-pick votes from a widely used online financial newsletter, is a genetic algorithm approach that identifies appropriate vote weights for users based on their prior individual voting success. Our method allows us to identify and rank experts within the crowd, enabling better stock-pick decisions than the S&P 500. We show that the online crowd performs better, on average, than the S&P 500 for two test time periods, 2008 and 2009, in terms of both overall returns and risk-adjusted returns, as measured by the Sharpe ratio. Furthermore, we show that giving more weight to the votes of the experts in the crowd increases the accuracy of the verdicts, yielding an even greater return in the same time periods. We test our approach using more than three years of publicly available stock-pick data. We compare our method to approaches derived from both the computer science and finance literature. We believe that our approach can be generalized to other domains where user opinions are publicly available early and where those opinions can be evaluated. For example, YouTube video ratings may be used to predict downloads, or online reviewer ratings on Digg may be used to predict the success or popularity of a story.
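    The genetic-algorithm idea, evolving per-user vote weights against historical voting success, could look roughly like the sketch below. The fitness function (sign agreement between the weighted vote and realised outcomes), the synthetic data, and all population and mutation parameters are illustrative assumptions, not the paper's actual setup.

```python
# Illustrative GA sketch: evolve per-user vote weights so that the weighted
# crowd vote matches historical outcomes. Data, fitness, and rates are assumed.
import random

random.seed(1)

N_USERS = 5
# Each record: (votes per user in {-1, +1}, realised outcome in {-1, +1}).
HISTORY = [([random.choice((-1, 1)) for _ in range(N_USERS)],
            random.choice((-1, 1))) for _ in range(50)]

def fitness(weights):
    """Fraction of historical picks where the weighted vote had the right sign."""
    hits = 0
    for votes, outcome in HISTORY:
        score = sum(w * v for w, v in zip(weights, votes))
        if (score >= 0 and outcome > 0) or (score < 0 and outcome < 0):
            hits += 1
    return hits / len(HISTORY)

def mutate(weights, rate=0.2):
    return [min(1.0, max(0.0, w + random.gauss(0, 0.1))) if random.random() < rate else w
            for w in weights]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

population = [[random.random() for _ in range(N_USERS)] for _ in range(30)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                       # elitist selection
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(20)]

best = max(population, key=fitness)
print("best weights:", [round(w, 2) for w in best], "accuracy:", fitness(best))
```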

    Context-aware task scheduling in distributed computing systems

    These days, the popularity of technologies such as machine learning, augmented reality, and big data analytics is growing dramatically. This leads to a higher demand for computational power, not only for IT professionals but also for ordinary device users who benefit from new applications. At the same time, the computational performance of end-user devices increases to meet the demands of these resource-hungry applications. As a result, there is a coexistence of a huge demand for computational power on one side and a large pool of computational resources on the other. Bringing these two sides together is the idea of computational resource sharing systems, which allow applications to forward computationally intensive workload to remote resources. This technique is often used in cloud computing, where customers can rent computational power. However, we argue that not only cloud resources can be used as offloading targets; rather, idle CPU cycles from end-user-administered devices at the edge of the network can be spontaneously leveraged as well. Edge devices, however, are not only heterogeneous in their hardware and software capabilities, they also do not provide any guarantees in terms of reliability or performance. Does this mean that either the applications that require such guarantees or the unpredictable resources need to be excluded from such a sharing system? In this thesis, we propose a solution to this problem by introducing the Tasklet system, our approach to a computational resource sharing system. The Tasklet system supports computation offloading to arbitrary types of devices, including stable cloud instances as well as unpredictable end-user-owned edge resources. To this end, the Tasklet system is structured into multiple layers. The lowest layer is a best-effort resource sharing system which provides lightweight task scheduling and execution. Here, best-effort means that in case of a failure the task execution is dropped, and that tasks are allocated to resources randomly. To provide execution guarantees such as reliable or timely execution, we add a Quality of Computation (QoC) layer on top of the best-effort execution layer. The QoC layer enforces the guarantees for applications by using a context-aware task scheduler which monitors the available resources in the computing environment and performs the matchmaking between resources and tasks based on the current state of the system. As edge resources are controlled by individuals, we consider the fact that these users need to be able to decide with whom they want to share their resources and at which price. Thus, we add a social layer on top of the system that allows users to establish friendship connections, which can then be leveraged for social-aware task allocation and accounting of shared computation.
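    The layering described above, a best-effort random allocation with a QoC-aware matchmaking step on top, could look roughly like the sketch below. The resource attributes (reliability, speed) and the requirement fields are assumptions made for illustration; they are not the Tasklet system's actual interfaces.

```python
# Rough sketch of the two scheduling layers the abstract describes: a QoC-aware
# matchmaker on top of a best-effort (random) allocator. Attribute names and
# thresholds are illustrative assumptions, not the Tasklet system's API.
import random
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    reliability: float    # observed success rate in [0, 1]
    speed_factor: float   # relative execution speed (1.0 = baseline)

def best_effort_schedule(task, resources):
    """Lowest layer: pick any resource at random, no guarantees."""
    return random.choice(resources)

def qoc_schedule(task, resources):
    """QoC layer: filter by the task's requirements, then pick the fastest
    matching resource; fall back to best effort if nothing qualifies."""
    candidates = [r for r in resources
                  if r.reliability >= task.get("min_reliability", 0.0)]
    if not candidates:
        return best_effort_schedule(task, resources)
    return max(candidates, key=lambda r: r.speed_factor)

resources = [Resource("edge-laptop", 0.6, 1.4),
             Resource("cloud-vm", 0.99, 1.0),
             Resource("edge-desktop", 0.85, 1.8)]
task = {"min_reliability": 0.8}            # a task that needs reliable execution
print(qoc_schedule(task, resources).name)  # -> edge-desktop
```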

    Flexible provisioning of Web service workflows

    Web services promise to revolutionise the way computational resources and business processes are offered and invoked in open, distributed systems such as the Internet. These services are described using machine-readable meta-data, which enables consumer applications to automatically discover and provision suitable services for their workflows at run-time. However, current approaches have typically assumed that service descriptions are accurate and deterministic, and so have neglected to account for the fact that services in these open systems are inherently unreliable and uncertain. Specifically, network failures, software bugs, and competition for services may regularly lead to execution delays or even service failures. To address this problem, the process of provisioning services needs to be performed in a more flexible manner than has so far been considered, in order to proactively deal with failures and to recover workflows that have partially failed. To this end, we devise and present a heuristic strategy that varies the provisioning of services according to their predicted performance. Using simulation, we then benchmark our algorithm and show that it leads to a 700% improvement in average utility, while successfully completing up to eight times as many workflows as approaches that do not consider service failures.
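    One way to picture "varying the provisioning of services according to their predicted performance" is redundant provisioning: invoke enough candidate services in parallel that a workflow step succeeds with a target probability. The sketch below computes that redundancy level; the target value and the independence assumption are illustrative, not the paper's actual heuristic.

```python
# Illustrative sketch: provision enough redundant providers for a workflow task
# that the probability of at least one success meets a target. Assumes provider
# failures are independent; this is not the paper's exact heuristic.
import math

def redundancy_needed(predicted_success: float, target: float = 0.99) -> int:
    """Smallest n with 1 - (1 - p)^n >= target, i.e. n >= log(1-target)/log(1-p)."""
    if predicted_success >= 1.0:
        return 1
    if predicted_success <= 0.0:
        raise ValueError("service predicted to always fail")
    return max(1, math.ceil(math.log(1.0 - target) / math.log(1.0 - predicted_success)))

# A flaky service (70% predicted success) needs more parallel invocations
# than a dependable one (99%).
print(redundancy_needed(0.70))   # -> 4
print(redundancy_needed(0.99))   # -> 1
```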
