33 research outputs found

    An Analysis of incentives mechanisms and evaluation on BitTorrent

    Get PDF
Since the first peer-to-peer communities appeared, their number of users has grown considerably owing to the benefits they offer over alternative architectures for sharing and distributing multimedia content. However, due to their distributed nature, they can suffer from a serious form of misuse: free-riding. Free-riding consists of users consuming resources without contributing to the system. Such behaviour is not only unfair to the rest of the users, but also threatens the success of this type of network. Incentive mechanisms were born out of the motivation to avoid free-riding. They provide the system with a method to motivate nodes to share their resources with other users. In a word, they give the network the fairness needed to achieve good performance for all users. This thesis is organised in two main parts. The first part is a comprehensive study of the state of the art in incentive mechanisms, resulting in a classification based on the characteristics of the studied algorithms. That study gives the reader a first view of the strengths and weaknesses of each algorithm. The second part presents a test scenario based on machine virtualization that was used to evaluate some of the studied algorithms empirically. Finally, a series of experiments were carried out to compare characteristics of these algorithms and thus confirm or refute the conclusions drawn from the study of the state of the art.
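BitTorrent's best-known incentive mechanism is its tit-for-tat choking policy: a peer uploads to the neighbours that reciprocate best, plus one randomly chosen "optimistic" unchoke so newcomers can bootstrap. The following is a minimal sketch of that idea only; the function name, the `slots` parameter, and the simplified selection rules are illustrative, not the client's actual algorithm.

```python
import random

def choose_unchoked(received, slots=3, rng=random):
    """Tit-for-tat sketch: unchoke the peers that recently uploaded the
    most to us, plus one random 'optimistic' unchoke so that peers with
    nothing to reciprocate yet still get a chance to bootstrap.
    `received` maps peer id -> bytes downloaded from that peer recently."""
    # Regular unchokes: reward the best reciprocators.
    best = sorted(received, key=received.get, reverse=True)[:slots]
    # Optimistic unchoke: one uniformly random peer from the rest.
    rest = [p for p in received if p not in best]
    if rest:
        best.append(rng.choice(rest))
    return set(best)
```

Free-riders rarely appear among the top reciprocators, so under this policy they only receive data through the occasional optimistic slot, which is exactly the pressure that makes contributing rational.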

    ID5.2 Roadmap for KRSM RTD

    Get PDF
    Roadmap for KRSM RTD activities. The work on this publication has been sponsored by the TENCompetence Integrated Project that is funded by the European Commission's 6th Framework Programme, priority IST/Technology Enhanced Learning. Contract 027087 [http://www.tencompetence.org]

    Scalable Performance Analysis of Massively Parallel Stochastic Systems

    No full text
    The accurate performance analysis of large-scale computer and communication systems is directly inhibited by an exponential growth in the state-space of the underlying Markovian performance model. This is particularly true when considering massively-parallel architectures such as cloud or grid computing infrastructures. Nevertheless, an ability to extract quantitative performance measures such as passage-time distributions from performance models of these systems is critical for providers of these services. Indeed, without such an ability, they remain unable to offer realistic end-to-end service level agreements (SLAs) which they can have any confidence of honouring. Additionally, this must be possible in a short enough period of time to allow many different parameter combinations in a complex system to be tested. If we can achieve this rapid performance analysis goal, it will enable service providers and engineers to determine the cost-optimal behaviour which satisfies the SLAs. In this thesis, we develop a scalable performance analysis framework for the grouped PEPA stochastic process algebra. Our approach is based on the approximation of key model quantities such as means and variances by tractable systems of ordinary differential equations (ODEs). Crucially, the size of these systems of ODEs is independent of the number of interacting entities within the model, making these analysis techniques extremely scalable. The reliability of our approach is directly supported by convergence results and, in some cases, explicit error bounds. We focus on extracting passage-time measures from performance models since these are very commonly the language in which a service level agreement is phrased. We design scalable analysis techniques which can handle passages defined both in terms of entire component populations as well as individual or tagged members of a large population. 
A precise and straightforward specification of a passage-time service level agreement is as important to the performance engineering process as its evaluation. This is especially true of large and complex models of industrial-scale systems. To address this, we introduce the unified stochastic probe framework. Unified stochastic probes are used to generate a model augmentation which exposes explicitly the SLA measure of interest to the analysis toolkit. In this thesis, we deploy these probes to define many detailed and derived performance measures that can be automatically and directly analysed using rapid ODE techniques. In this way, we tackle applicable problems at many levels of the performance engineering process: from specification and model representation to efficient and scalable analysis.
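The key scalability claim above is that the mean behaviour of a large population of identical components can be approximated by a fixed-size ODE system whose dimension does not grow with the population. A minimal sketch of that idea, with an entirely hypothetical two-state client population (Idle/Busy, rates `r_req` and `r_srv`) and a plain Euler integrator rather than the PEPA toolchain:

```python
def euler(f, y0, t_end, dt=0.001):
    """Fixed-step Euler integration of the ODE system y' = f(y)."""
    y, t = list(y0), 0.0
    while t < t_end:
        y = [yi + dt * di for yi, di in zip(y, f(y))]
        t += dt
    return y

# Hypothetical population model: each of N clients flips between Idle
# and Busy with per-client rates r_req and r_srv.  The ODE system for
# the *mean* counts has two equations regardless of how large N is,
# whereas the underlying Markov chain has N + 1 states.
def drift(y, r_req=2.0, r_srv=1.0):
    idle, busy = y
    return [r_srv * busy - r_req * idle,
            r_req * idle - r_srv * busy]

N = 10_000
means = euler(drift, [float(N), 0.0], t_end=10.0)
# At the fixed point, mean idle count = N * r_srv / (r_req + r_srv)
```

The same two ODEs serve for N = 100 or N = 10 million, which is the sense in which the analysis is "independent of the number of interacting entities"; the thesis's contribution is doing this rigorously for grouped PEPA, with convergence results and passage-time measures rather than this toy mean-count example.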

    Framework for privacy-aware content distribution in peer-to-peer networks with copyright protection

    Get PDF
    The use of peer-to-peer (P2P) networks for multimedia distribution has spread out globally in recent years. This mass popularity is primarily driven by the efficient distribution of content, also giving rise to piracy and copyright infringement as well as privacy concerns. An end user (buyer) of a P2P content distribution system does not want to reveal his/her identity during a transaction with a content owner (merchant), whereas the merchant does not want the buyer to further redistribute the content illegally. Therefore, there is a strong need for content distribution mechanisms over P2P networks that do not pose security and privacy threats to copyright holders and end users, respectively. However, the current systems being developed to provide copyright and privacy protection to merchants and end users employ cryptographic mechanisms, which incur high computational and communication costs, making these systems impractical for the distribution of big files, such as music albums or movies.

    Tenth Workshop and Tutorial on Practical Use of Coloured Petri Nets and the CPN Tools Aarhus, Denmark, October 19-21, 2009

    Get PDF
    This booklet contains the proceedings of the Tenth Workshop on Practical Use of Coloured Petri Nets and the CPN Tools, October 19-21, 2009. The workshop is organised by the CPN group at the Department of Computer Science, University of Aarhus, Denmark. The papers are also available in electronic form via the web pages: http://www.cs.au.dk/CPnets/events/workshop0

    Viiteraamistik turvariskide haldamiseks plokiahela abil (A reference framework for security risk management using blockchain)

    Get PDF
    Various programs (e.g., OWASP), threat models (e.g., STRIDE), security risk management (SRM) models (e.g., ISSRM), and regulations (e.g., GDPR) exist to communicate and reduce the security threats involved in building secure software. However, security threats continuously evolve because the traditional technology infrastructure does not implement security measures by design. Blockchain appears to mitigate traditional applications' security threats. Although blockchain-based applications are considered less vulnerable, they have not become the silver bullet for securing against different security threats. Moreover, the blockchain domain is constantly evolving, providing new techniques and often interchangeable design concepts, resulting in conceptual ambiguity and confusion in treating security threats effectively. Overall, we address the problem of the SRM of traditional applications using blockchain as a countermeasure, and the SRM of blockchain-based applications. We start by surveying how blockchain mitigates the security threats of traditional applications; the outcome is a blockchain-based reference model (BbRM) that adheres to the SRM domain model. Next, we present an upper-level reference ontology (ULRO) as a foundation ontology and provide two instantiations of the ULRO. The first instantiation includes Corda as a permissioned blockchain and the financial case. The second instantiation includes permissionless blockchain components and the healthcare case. Both ontology representations help in the SRM of traditional and blockchain-based applications. Furthermore, we built a web-based ontology parsing tool, OwlParser. These contributions resulted in an ontology-based security reference framework for managing security risks using blockchain. The framework is dynamic, supports the iterative process of SRM, and potentially lessens the security threats of traditional and blockchain-based applications. https://www.ester.ee/record=b551352

    Networks, complexity and internet regulation scale-free law

    Get PDF
    This book, then, starts with a general statement: that regulators should try, wherever possible, to use the physical methodological tools presently available in order to draft better legislation. While such an assertion may be applied to the law in general, this work will concentrate on the much narrower area of Internet regulation and the science of complex networks. The Internet is the subject of this book not only because it is my main area of research, but also because (without over-emphasising the importance of the Internet to everyday life) one cannot deny that the growth and popularisation of the global communications network has had a tremendous impact on the way in which we interact with one another. The Internet is, however, just one of many interactive networks. One way of looking at the complex and chaotic nature of society is to see it as a collection of different nodes of interaction. Humans are constantly surrounded by networks: the social network, the financial network, the transport network, the telecommunications network and even the network of our own bodies. Understanding how these systems operate and interact with one another has been the realm of physicists, economists, biologists and mathematicians. Until recently, the study of networks has been mainly theoretical and academic, because it is difficult to gather data about large and complex systems that is sufficiently reliable to support proper empirical application. In recent years, though, the Internet has given researchers the opportunity to study and test the mathematical descriptions of these vast complex systems. The growth rate and structure of cyberspace has allowed researchers to map and test several previously unproven theories about how links and hubs within networks interact with one another.
The Web now provides the means with which to test the organisational structures, architecture and growth of networks, and even permits some limited prediction about their behaviour, strengths and vulnerabilities. The main objective of this book is first and foremost to serve as an introduction for the wider legal audience to some of the theories of complexity and networks. The second objective is more ambitious. By looking at the application of complexity theory and network science in various areas of Internet regulation, it is hoped that there will be enough evidence to postulate a theory of Internet regulation based on network science. To achieve these two goals, Chapter 2 will look in detail at the science of complex networks to set the stage for the legal and regulatory arguments to follow. With the increase in reliability of the descriptive (and sometimes predictive) nature of network science, a logical next step for legal scholars is to look at the legal implications of the characteristics of networks. Chapter 3 highlights the efforts of academics and practitioners who have started to find potential uses for network science tools. Chapter 4 takes this idea further, and explores how network theory can shape Internet regulation. The following chapters will analyse the potential for application of the tools described in the previous chapters, applying complexity theory to specific areas of study related to Internet Law. Chapter 5 deals with the subject of copyright in the digital world. Chapter 6 explores the issue of peer-production and user-generated content using network science as an analytical framework. Chapter 7 finishes the evidence section of the work by studying the impact of network architecture in the field of cybercrime, and asks whether the existing architecture hinders or assists efforts to tackle those problems. It is clear that these are very disparate areas of study.
    It is not the intention of this book to be overreaching in its scope, although I am mindful that it covers a lot of ground and attempts to study and describe some disciplines that fall outside of my intellectual comfort zone. While the focus of the work is the Internet, its applications may extend beyond mere electronic bits. Without trying to be over-ambitious, it is my strong belief that legal scholarship has been neglectful in that it has been slow to respond to the wealth of research into complexity. That is not to say that there has been no legal research on the topic, but it would seem that lawyers, legislators and policy-makers are reluctant to consider technical solutions to legal problems. It is hoped then that this work will serve as a stepping stone that will lead to new interest in some of the theories that I describe.
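The "scale-free" networks in the book's title arise from preferential attachment: new nodes link preferentially to already well-connected nodes, producing the hubs that dominate the Internet's topology. A minimal sketch of that growth process (a simplified BarabĂĄsi-Albert-style model; the function name and parameters are illustrative):

```python
import random

def barabasi_albert(n, m=2, seed=42):
    """Grow a network by preferential attachment: each new node links to
    m existing nodes chosen with probability proportional to their
    current degree, so well-connected nodes keep attracting links."""
    rng = random.Random(seed)
    # Start from a small fully connected core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # `targets` repeats each endpoint once per incident edge, so a
    # uniform draw from it is a degree-proportional draw.
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets += [new, t]
    return edges
```

Counting degrees in the resulting edge list shows a few hubs with degree far above the average, the heavy-tailed pattern the book argues regulators must account for, since removing random nodes barely affects such a network while targeting hubs can fragment it.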