732 research outputs found

    Collusion in Peer-to-Peer Systems

    Peer-to-peer systems have reached widespread use, ranging from academic and industrial applications to home entertainment. The key advantage of this paradigm lies in its scalability and flexibility, consequences of the participants sharing their resources for the common welfare. Security in such systems is a desirable goal. For example, when mission-critical operations or bank transactions are involved, their effectiveness strongly depends on users' perception of the system's dependability and trustworthiness. A major threat to the security of these systems is the phenomenon of collusion. Colluding peers can be selfish, when they try to fool the system to gain unfair advantages over other peers, or malicious, when their purpose is to subvert the system or disturb other users. The problem, however, has so far received only marginal attention from the research community. While several solutions exist to counter attacks in peer-to-peer systems, very few of them are designed to directly counter colluders and their attacks. Reputation, micro-payments, and concepts from game theory are currently the main means used to obtain fairness in the usage of resources. Our goal is to provide an overview of the topic by examining the key issues involved. We measure the relevance of the problem in the current literature and the effectiveness of existing approaches against it, in order to suggest fruitful directions for the further development of the field.
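
    To make one collusion signature concrete, the hedged sketch below (function name, rating format, and threshold are illustrative assumptions, not drawn from any surveyed system) shows how a reputation log can expose selfish colluders: pairs of peers that direct nearly all of their positive ratings at each other.

```python
from collections import defaultdict

def mutual_boost_suspects(ratings, threshold=0.8):
    """Flag peer pairs that direct most of their positive ratings at each
    other -- a common signature of selfish, mutual-boost collusion.

    ratings: iterable of (rater, ratee, score) with score in [0, 1].
    threshold: fraction of a rater's positive ratings that must target one
    peer for the pair to count as suspicious (assumed value).
    """
    given = defaultdict(lambda: defaultdict(int))  # rater -> ratee -> count
    totals = defaultdict(int)                      # rater -> positive count
    for rater, ratee, score in ratings:
        if score > 0.5:                            # treat > 0.5 as positive
            given[rater][ratee] += 1
            totals[rater] += 1

    suspects = set()
    for a in given:
        for b, n in given[a].items():
            # Suspicious only if the bias is mutual: a boosts b AND b boosts a.
            if (totals[a] and n / totals[a] >= threshold
                    and totals.get(b) and given[b].get(a, 0) / totals[b] >= threshold):
                suspects.add(frozenset((a, b)))
    return suspects

# Example: p1 and p2 rate only each other; p3 rates a broad set of peers.
log = [("p1", "p2", 1.0), ("p1", "p2", 1.0), ("p2", "p1", 1.0),
       ("p3", "p1", 1.0), ("p3", "p2", 0.9), ("p3", "p4", 0.8)]
print(mutual_boost_suspects(log))  # {frozenset({'p1', 'p2'})}
```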

    A framework for the dynamic management of Peer-to-Peer overlays

    Peer-to-Peer (P2P) applications have been associated with inefficient operation, interference with other network services and large operational costs for network providers. This thesis presents a framework which can help ISPs address these issues by means of intelligent management of peer behaviour. The proposed approach involves limited control of P2P overlays without interfering with the fundamental characteristics of peer autonomy and decentralised operation. At the core of the management framework lies the Active Virtual Peer (AVP). Essentially intelligent peers operated by the network providers, AVPs interact with the overlay from within, minimising redundant or inefficient traffic, enhancing overlay stability and facilitating the efficient and balanced use of available peer and network resources. They offer an "insider's" view of the overlay and permit the management of P2P functions in a compatible and non-intrusive manner. AVPs can support multiple P2P protocols and coordinate to perform functions collectively. To account for the multi-faceted nature of P2P applications and to allow the incorporation of modern techniques and protocols as they appear, the framework is based on a modular architecture. Core modules for overlay control and transit traffic minimisation are presented. Towards the latter, a number of suitable P2P content caching strategies are proposed. Using a purpose-built P2P network simulator and small-scale experiments, it is demonstrated that the introduction of AVPs inside the network can significantly reduce inter-AS traffic, minimise costly multi-hop flows, increase overlay stability and load-balancing, and offer improved peer transfer performance.
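
    As a hedged illustration of the transit-traffic minimisation an AVP performs, the sketch below (a minimal stand-in with invented names, not the thesis's actual modules) resolves a content request by preferring the AVP's own cache, then intra-AS peers, and only then crossing an AS boundary.

```python
def route_request(content_id, local_as, holders, avp_cache):
    """Resolve a content request the way an AVP might, minimising inter-AS
    (transit) traffic. Illustrative sketch, not the thesis's API.

    holders: list of (peer_id, as_number) known to hold content_id.
    avp_cache: set of content ids cached at the AVP itself.
    """
    if content_id in avp_cache:                 # 1) serve from the AVP cache
        return ("cache", None)
    local = [p for p, asn in holders if asn == local_as]
    if local:                                   # 2) keep the flow inside the AS
        return ("intra-as", local[0])
    # 3) Fall back to a remote peer; a real AVP could additionally decide to
    #    cache the object on the way in, absorbing future transit traffic.
    return ("inter-as", holders[0][0]) if holders else ("miss", None)

print(route_request("movie.iso", 100,
                    [("peerA", 200), ("peerB", 100)], set()))
# -> ('intra-as', 'peerB')
```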

    Incentives and Two-Sided Matching - Engineering Coordination Mechanisms for Social Clouds

    The Social Cloud framework leverages existing relationships between members of a social network for the exchange of resources. This thesis focuses on the design of coordination mechanisms to address two challenges in this scenario. In the first part, user participation incentives are studied. In the second part, heuristics for two-sided matching-based resource allocation are designed and evaluated.
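
    The thesis's own heuristics are not reproduced here, but the classic Gale-Shapley deferred-acceptance algorithm below illustrates the two-sided matching primitive that such resource allocation builds on: requesters propose to providers in preference order, and each provider tentatively holds the best proposal seen so far.

```python
def deferred_acceptance(req_prefs, prov_prefs):
    """Gale-Shapley deferred acceptance: the two-sided matching primitive
    underlying allocation of requests to providers. Assumes complete
    preference lists on both sides.

    req_prefs:  {requester: [providers in preference order]}
    prov_prefs: {provider: [requesters in preference order]}
    Returns a stable matching {provider: requester}.
    """
    rank = {p: {r: i for i, r in enumerate(prefs)}
            for p, prefs in prov_prefs.items()}
    free = list(req_prefs)               # requesters still unmatched
    next_choice = {r: 0 for r in req_prefs}
    match = {}                           # provider -> requester
    while free:
        r = free.pop()
        if next_choice[r] >= len(req_prefs[r]):
            continue                     # r has exhausted its list
        p = req_prefs[r][next_choice[r]]
        next_choice[r] += 1
        if p not in match:
            match[p] = r                 # provider tentatively accepts
        elif rank[p][r] < rank[p][match[p]]:
            free.append(match[p])        # provider trades up; old one freed
            match[p] = r
        else:
            free.append(r)               # rejected; r proposes again later
    return match

print(deferred_acceptance(
    {"r1": ["p1", "p2"], "r2": ["p1", "p2"]},
    {"p1": ["r2", "r1"], "p2": ["r1", "r2"]}))
# -> {'p1': 'r2', 'p2': 'r1'}
```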

    Self-management for large-scale distributed systems

    Autonomic computing aims at making computing systems self-managing by using autonomic managers, in order to reduce the obstacles caused by management complexity. This thesis presents results of research on self-management for large-scale distributed systems, motivated by the increasing complexity of computing systems and their management. In the first part, we present our platform, called Niche, for programming self-managing component-based distributed applications. In our work on Niche, we have faced and addressed four challenges in achieving self-management in a dynamic environment characterized by volatile resources and high churn: resource discovery, robust and efficient sensing and actuation, management bottlenecks, and scale. Niche implements the autonomic computing architecture proposed by IBM in a fully decentralized way. It supports a network-transparent view of the system architecture, simplifying the design of distributed self-management, and provides a concise and expressive API for self-management. The implementation of the platform relies on the scalability and robustness of structured overlay networks. We proceed by presenting a methodology for designing the management part of a distributed self-managing application, with design steps that include the partitioning of management functions and the orchestration of multiple autonomic managers.

    In the second part, we discuss robustness of management and data consistency, which are necessary in a distributed system. Dealing with the effect of churn on management increases the complexity of the management logic and thus makes its development time-consuming and error-prone. We propose the abstraction of Robust Management Elements, which are able to heal themselves under continuous churn. Our approach is based on replicating a management element using finite state machine replication with a reconfigurable replica set; our algorithm automates the reconfiguration (migration) of the replica set in order to tolerate continuous churn. For data consistency, we propose a majority-based distributed key-value store, built on a peer-to-peer network, that supports multiple consistency levels. The store enables a trade-off between high availability and data consistency. Using majorities avoids the potential drawbacks of master-based consistency control, namely a single point of failure and a potential performance bottleneck.

    In the third part, we investigate self-management for Cloud-based storage systems, focusing on elasticity control using elements of control theory and machine learning. We have studied a number of different designs of an elasticity controller, including a state-space feedback controller and a controller that combines feedback and feedforward control. We describe our experience in designing an elasticity controller for a Cloud-based key-value store using a state-space model that enables trading off performance for cost, and outline the steps in designing such a controller. We conclude by presenting the design and evaluation of ElastMan, an elasticity controller for Cloud-based elastic key-value stores that combines feedforward and feedback control.
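
    As a minimal sketch of the majority-based consistency idea (an illustrative in-memory stand-in, not Niche's implementation), the class below shows the quorum rule at work: with N replicas, choosing read and write quorum sizes R and W such that R + W > N forces every read quorum to overlap the latest write, while smaller quorums would trade consistency for availability.

```python
class QuorumStore:
    """Majority-quorum replication sketch: R + W > N makes every read
    quorum overlap every write quorum, so reads see the newest write.
    In-memory stand-in for real replicas; purely illustrative.
    """
    def __init__(self, n=5, r=3, w=3):
        assert r + w > n, "quorums must overlap for strong consistency"
        self.n, self.r, self.w = n, r, w
        self.replicas = [dict() for _ in range(n)]   # key -> (version, value)

    def put(self, key, value):
        # Write to W replicas with a bumped version number.
        version = self.get(key, meta=True)[0] + 1
        for rep in self.replicas[:self.w]:
            rep[key] = (version, value)

    def get(self, key, meta=False):
        # Read R replicas and return the highest-versioned value seen.
        best = (0, None)
        for rep in self.replicas[:self.r]:
            best = max(best, rep.get(key, (0, None)))
        return best if meta else best[1]

store = QuorumStore()
store.put("x", "a"); store.put("x", "b")
print(store.get("x"))   # 'b' -- the read quorum overlaps the latest write
```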

    User-Centric Quality of Service Provisioning in IP Networks

    The Internet has become the preferred transport medium for almost every type of communication, and it continues to grow both in the number of users and in the services delivered. Efforts have been made to ensure that time-sensitive applications receive sufficient resources and subsequently an acceptable Quality of Service (QoS). However, typical Internet users no longer use a single service at a given point in time; they are instead engaged in a multimedia-rich experience comprising many different concurrent services. Given the scalability problems raised by the diversity of users and traffic, in conjunction with their increasing expectations, QoS provisioning can no longer be approached by giving priority to specific traffic types over coexisting services, whether through explicit resource reservation or traffic classification using static policies, as in the current approach to QoS provisioning, Differentiated Services (Diffserv). This reliance on static resource allocation and traffic shaping reveals a distinct lack of synergy between current QoS practices and user activities, highlighting the need for a QoS solution that reflects the user's services. The aim of this thesis is to investigate and propose a novel QoS architecture which considers the activities of the user and manages resources from a user-centric perspective. The research begins with a comprehensive examination of existing QoS technologies and mechanisms, arguing that current QoS practices are too static in their configuration and typically give priority to specific individual services rather than considering the user experience. The analysis also reveals the potential threat that unresponsive application traffic poses to coexisting Internet services and QoS efforts, and introduces the requirement for a balance between application QoS and fairness. This thesis proposes a novel architecture, the Congestion Aware Packet Scheduler (CAPS), which manages and controls traffic at the point of service aggregation in order to optimise the overall QoS of the user experience. The CAPS architecture, in contrast to traditional QoS alternatives, places no predetermined precedence on specific traffic; instead, it adapts QoS policies to each individual's Internet traffic profile and dynamically controls the ratio of user services to maintain an optimised QoS experience. The rationale behind this approach is to offer a QoS-optimised experience to every Internet user, not just those using preferred services. Furthermore, unresponsive bandwidth-intensive applications, such as Peer-to-Peer, are managed fairly while minimising their impact on coexisting services. The CAPS architecture has been validated through extensive simulations, with topologies replicating the complexity and scale of real ISP network infrastructures. The results show that, for a number of different user-traffic profiles, the proposed approach achieves an improved aggregate QoS for each user when compared with best-effort Internet, traditional Diffserv, and Weighted-RED configurations. Furthermore, the results demonstrate that the proposed architecture not only provides an optimised QoS to the user, irrespective of their traffic profile, but, by avoiding static resource allocation, can adapt with the Internet user as their use of services changes.
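
    A minimal sketch of the CAPS idea follows, assuming illustrative flow attributes and an arbitrary 0.3 aggregate cap (neither is from the thesis): instead of fixed class priorities, per-flow weights are derived from the user's current traffic mix, and unresponsive bulk flows are capped under congestion, with the reclaimed share redistributed to latency-sensitive services.

```python
def adapt_weights(flows, congested):
    """Sketch of CAPS-style per-user weighting: no traffic class gets a
    fixed precedence; weights follow the user's current mix, and
    unresponsive bulk flows (e.g. P2P) are capped under congestion.

    flows: list of dicts with 'name', 'sensitive' (latency-sensitive?),
           'responsive' (backs off under loss?), 'demand' (bit/s).
    Returns {name: share of the user's access link}.
    """
    total = sum(f["demand"] for f in flows) or 1.0
    weights = {f["name"]: f["demand"] / total for f in flows}
    if congested:
        # Cap the aggregate share of unresponsive flows and redistribute
        # the reclaimed capacity to latency-sensitive services.
        unresp = [f["name"] for f in flows if not f["responsive"]]
        cap = 0.3 / max(len(unresp), 1)
        reclaimed = sum(max(0.0, weights[n] - cap) for n in unresp)
        for n in unresp:
            weights[n] = min(weights[n], cap)
        sens = [f["name"] for f in flows if f["sensitive"]]
        for n in sens:
            weights[n] += reclaimed / len(sens)
    return weights

print(adapt_weights(
    [{"name": "voip", "sensitive": True,  "responsive": True,  "demand": 0.1e6},
     {"name": "web",  "sensitive": True,  "responsive": True,  "demand": 1.0e6},
     {"name": "p2p",  "sensitive": False, "responsive": False, "demand": 8.0e6}],
    congested=True))
# p2p is capped at 0.3; voip and web absorb the reclaimed capacity.
```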

    A Social Network Approach to Provisioning and Management of Cloud Computing Services for Enterprises


    Business-driven resource allocation and management for data centres in cloud computing markets

    Cloud Computing markets arise as an efficient way to allocate resources for the execution of tasks and services within a set of geographically dispersed providers from different organisations. Client applications and service providers meet in a market and negotiate the sale of services by signing a Service Level Agreement that contains the Quality of Service terms the Cloud provider has to guarantee by properly managing its resources. Current implementations of Cloud markets suffer from a lack of information flow between the negotiating agents, which sell the resources, and the resource managers that allocate the resources to fulfil the agreed Quality of Service. This thesis establishes an intermediate layer between the market agents and the resource managers. As a consequence, agents can perform accurate negotiations by considering the status of the resources in their negotiation models, and providers can manage their resources considering both performance and business objectives. This thesis defines a set of policies for the negotiation and enforcement of Service Level Agreements. These policies address different Business-Level Objectives: maximisation of revenue, classification of clients, maximisation of trust and reputation, and minimisation of risk. This thesis demonstrates the effectiveness of these policies by means of fine-grained simulations. A pricing model may be influenced by many parameters; the weight of each parameter within the final model is not always known, and it can change as the market environment evolves. This thesis models and evaluates how providers can self-adapt to changing environments by means of genetic algorithms, showing that providers that rapidly adapt to changes in the environment achieve higher revenues than providers that do not. Policies are usually conceived for the short term: they model the behaviour of the system by considering the current status and the expected immediate effects of their application. This thesis defines and evaluates a trust and reputation system that compels providers to consider the long-term impact of their decisions. The trust and reputation system expels providers and clients with dishonest behaviour, and providers that consider the impact of their reputation in their actions improve the achievement of their Business-Level Objectives. Finally, this thesis studies risk as the effect of uncertainty on the expected outcomes of cloud providers. The particularities of cloud appliances as sets of interconnected resources are studied, as well as how risk propagates through the linked nodes. Incorporating risk models helps providers differentiate Service Level Agreements according to their risk, take preventive actions where risk concentrates, and price accordingly. Applying risk management raises the fulfilment rate of Service Level Agreements and increases the profit of the provider.
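
    As a hedged sketch of the genetic-algorithm adaptation described above (population size, mutation rate, genome layout, and the toy fitness function are all illustrative assumptions, not the thesis's models), a provider's pricing model can be encoded as a weight vector and evolved against a revenue score:

```python
import random

def evolve_pricing(revenue, pop_size=20, gens=50, mut=0.1):
    """Minimal genetic-algorithm sketch for self-adapting a pricing model.
    A genome is a weight vector over pricing parameters (e.g. base price,
    load surcharge, reputation discount); `revenue` scores a genome against
    the current market. All constants here are illustrative assumptions.
    """
    pop = [[random.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=revenue, reverse=True)       # fittest pricing first
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, 3)
            child = a[:cut] + b[cut:]             # one-point crossover
            if random.random() < mut:             # occasional mutation
                child[random.randrange(3)] = random.random()
            children.append(child)
        pop = survivors + children
    return max(pop, key=revenue)

# Toy market: revenue peaks when the weights approach a known optimum.
target = [0.7, 0.2, 0.1]
fitness = lambda w: -sum((wi - ti) ** 2 for wi, ti in zip(w, target))
print(evolve_pricing(fitness))   # converges near [0.7, 0.2, 0.1]
```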

    A Cognitive Architecture for Ambient Intelligence

    Ambient Intelligence (AmI) systems are characterized by the use of pervasive equipment for monitoring and modifying the environment according to users' needs and to globally defined constraints. Furthermore, such systems cannot ignore requirements of ubiquity, scalability, and transparency to the user. An enabling technology for accomplishing these goals is represented by Wireless Sensor Networks (WSNs), characterized by low cost and unintrusiveness. However, although provided with in-network processing capabilities, WSNs do not exhibit processing features able to support comprehensive intelligent systems; on the other hand, without this pre-processing activity, the wealth of sensory data may easily overwhelm a centralized AmI system, clogging it with superfluous details. This work proposes a cognitive architecture able to perceive, decide upon, and control the environment of which the system is part, based on a new approach to knowledge extraction from raw data that addresses this issue at increasing levels of abstraction. WSNs are used as the pervasive sensory tool, and their computational capabilities are exploited to perform preliminary processing of the sensed data remotely. A central intelligent unit subsequently extracts higher-level concepts in order to carry out symbolic reasoning. The aim of the reasoning is to plan a sequence of actions that will lead the environment to a state as close as possible to the users' desires, taking into account both implicit and explicit feedback from the users, while considering global system-driven goals, such as energy saving. The proposed architecture was exploited to develop a testbed providing the hardware and software tools for the development and management of WSN-based AmI applications whose main goal is energy saving. In order to make the AmI system able to communicate with the external world in a reliable way when services are requested from external agents, the architecture was enriched with a distributed reputation management protocol. A sample application exploiting the testbed features was implemented to address temperature control in a work environment. Knowledge about the user's presence is obtained through a multi-sensor data fusion module based on Bayesian networks, and this information is exploited by a multi-objective fuzzy controller that operates the actuators taking into account users' preferences and energy consumption constraints.
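
    A minimal sketch of the multi-sensor presence fusion follows, using a naive-Bayes combination of binary sensor readings; the sensor names and likelihood values are illustrative assumptions, not values from the testbed.

```python
def fuse_presence(readings, prior=0.5):
    """Naive-Bayes fusion of binary sensor readings into a presence belief,
    in the spirit of the testbed's multi-sensor data fusion module.
    Likelihoods below are illustrative assumptions, not measured values.

    readings: {sensor: True/False observation}
    """
    # (P(observation=True | present), P(observation=True | absent))
    model = {"pir_motion": (0.90, 0.10),
             "co2_high":   (0.70, 0.20),
             "desk_power": (0.80, 0.15)}
    p_yes, p_no = prior, 1.0 - prior
    for sensor, seen in readings.items():
        like_yes, like_no = model[sensor]
        p_yes *= like_yes if seen else 1.0 - like_yes
        p_no  *= like_no  if seen else 1.0 - like_no
    return p_yes / (p_yes + p_no)   # posterior P(user present | readings)

belief = fuse_presence({"pir_motion": True, "co2_high": True,
                        "desk_power": False})
print(round(belief, 3))   # ~0.881: high presence belief despite one negative
```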