319 research outputs found

    ANALYSIS OF BOTNET CLASSIFICATION AND DETECTION BASED ON C&C CHANNEL

    Botnets are a serious threat to cyber-security. A botnet is a network of compromised machines (bots) that can infiltrate computers and carry out DDoS attacks on an attacker's command. Botnets are designed to extract confidential information from network channels such as LANs, peer networks, or the Internet. They act on the attacker's behalf through a Command & Control (C&C) channel, through which the attacker can control the whole network and carry out illegal activities such as identity theft, unauthorized logins, and fraudulent money transactions. For these reasons it is important to understand botnet behavior and its countermeasures. This thesis draws together the main ideas of network anomalies, botnet behavior, botnet taxonomy, well-known botnet attacks, and detection processes. Based on the network protocol of the C&C channel, botnets fall into three main types: IRC, HTTP, and P2P botnets. The behavior, vulnerabilities, and detection processes of all three types are explained individually, with examples, in the following chapters. In short, IRC botnets are early botnets built on chat and messaging protocols, HTTP botnets use web browsing and domains, and P2P botnets use peer networks, i.e. decentralized servers. Each botnet type can differ in design, targets, and infection and spreading mechanisms. For instance, IRC botnets are suited to small-scale attacks, whereas HTTP and P2P botnets handle large volumes of network traffic. Their detection techniques and filtering algorithms also differ. Based on the behavior of each type, many research papers have analyzed botnet detection techniques such as graph-based structures and clustering algorithms. This thesis therefore also analyzes popular detection mechanisms, C&C channels, botnet working patterns, recorded datasets, results, and false-positive rates of bots prominently found in IRC, HTTP, and P2P botnets. The research area covers C&C channels, botnet behavior, domain browsing, IRC, algorithms, intrusion detection, networks and peers, security, and test results. The research material consists of scientific books and articles obtained from online sources and the University of Turku library.
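
    The clustering-based detection mentioned above can be illustrated with a small, purely hypothetical sketch (not the thesis's own method): per-flow features are grouped with DBSCAN, and clusters whose flows beacon at near-constant intervals are flagged as possible C&C traffic. The feature choice, sample values, and thresholds are assumptions made for illustration, and the sketch assumes NumPy and scikit-learn are available.

```python
# Hypothetical sketch: clustering network flows to surface C&C-like beaconing.
# Feature values and thresholds are illustrative, not taken from the thesis.
import numpy as np
from sklearn.cluster import DBSCAN

# Each flow: [mean inter-arrival time (s), std of inter-arrival (s), mean payload bytes]
# (in practice, features would be standardized before clustering)
flows = np.array([
    [60.1,  0.2,  310],   # periodic, small payload -> suspicious
    [59.8,  0.3,  308],
    [60.3,  0.1,  306],
    [12.0,  9.5, 4200],   # bursty web browsing
    [ 3.2,  2.8, 1500],
    [45.0, 20.0,  800],
])

labels = DBSCAN(eps=6.0, min_samples=3).fit_predict(flows)

# Flag clusters whose flows are highly periodic (low inter-arrival variance),
# a common heuristic for automated C&C check-ins.
for label in set(labels) - {-1}:
    members = flows[labels == label]
    if members[:, 1].mean() < 1.0:  # near-constant beacon interval
        print(f"cluster {label}: {len(members)} flows look like C&C beaconing")
```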

    An architectural framework for self-configuration and self-improvement at runtime

    [no abstract]

    Joinder is Coming: Why Denying Swarm Joinder in BitTorrent Cases May Do More Harm than Good


    The state of peer-to-peer network simulators

    Networking research often relies on simulation in order to test and evaluate new ideas. An important requirement of this process is that results must be reproducible so that other researchers can replicate, validate and extend existing work. We look at the landscape of simulators for research in peer-to-peer (P2P) networks by conducting a survey of a combined total of over 280 papers from before and after 2007 (the year of the last survey in this area), and comment on the large quantity of research using bespoke, closed-source simulators. We propose a set of criteria that P2P simulators should meet, and poll the P2P research community for their agreement. We aim to drive the community towards performing their experiments on simulators that allow for others to validate their results

    Cognitive networking techniques on content distribution networks

    First, we want to design a strategy based on Artificial Intelligence (AI) techniques with the aim of increasing peers' download performance. Some AI algorithms can find patterns in the information locally available to a peer and use them to predict values that cannot be computed by closed-form mathematical formulas. An important aspect of these techniques is that they can be trained to improve their interpretation of the locally available information; with this process they make more accurate predictions and perform better. We will use this prediction system to increase our knowledge about the swarm and the peers that are part of it. This increase in global knowledge can be used to optimize the BitTorrent algorithms and can represent a great improvement in peers' download capacity. Our second challenge is to create a reduced group of peers (a Crowd) that focuses its efforts on improving the condition of the swarm through collaborative techniques. The basic idea of this approach is to organize a group of peers to act as a single node and to focus them on obtaining all pieces of the content they are interested in. This involves avoiding, as far as possible, downloading pieces that any of the members already have. The main goal of this technique is to obtain, as quickly as possible, a copy of the content distributed among all members of the Crowd. Having a distributed copy of the content is expected to increase the availability of pieces and reduce dependence on the seeds (users who have the complete content), which would benefit the whole swarm. Another aspect we want to investigate is the use of a priority system among members of the Crowd. We consider that, in certain situations, prioritizing Crowd peers at the expense of regular peers can result in a significant increase of the download ratio.
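
    The Crowd described above amounts to a piece-selection policy: prefer pieces that no member of the group holds yet, so that a complete copy of the content is spread across the Crowd as quickly as possible. The sketch below is a minimal, hypothetical version of such a policy; the function name, inputs, and the rarest-first tie-break are our assumptions, not the authors' algorithm.

```python
# Hypothetical sketch of a Crowd-aware piece-selection policy for a
# BitTorrent-like client. Not the authors' implementation.
from collections import Counter

def choose_piece(available, my_pieces, crowd_pieces, swarm_counts):
    """Pick the next piece to request.

    available     - set of piece indices the remote peer offers
    my_pieces     - set of piece indices this peer already has
    crowd_pieces  - set of piece indices held by *any* Crowd member
    swarm_counts  - Counter mapping piece index -> copies seen in the swarm
    """
    candidates = available - my_pieces
    if not candidates:
        return None
    # Prefer pieces the Crowd does not have at all, so the group
    # completes a distributed copy of the content quickly.
    missing_from_crowd = candidates - crowd_pieces
    pool = missing_from_crowd or candidates
    # Tie-break with classic rarest-first across the swarm.
    return min(pool, key=lambda p: swarm_counts[p])

# Example: the remote peer offers pieces 0-4; the Crowd already holds 0 and 2.
picked = choose_piece(
    available={0, 1, 2, 3, 4},
    my_pieces={0},
    crowd_pieces={0, 2},
    swarm_counts=Counter({0: 9, 1: 4, 2: 7, 3: 2, 4: 2}),
)
print(picked)  # 3 or 4: a rarest piece the Crowd is still missing
```

    A real client would also have to fold in the priority system mentioned above, for example by serving Crowd members' requests before those of regular peers.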

    Impact of Location on Content Delivery

    The increasing number of users as well as their demand for more and richer content has led to an exponential growth of Internet traffic for more than 15 years. In addition, new applications and use cases have changed the type of traffic. For example, social networking enables users to publish their own content. This user-generated content is often published on popular sites such as YouTube, Twitter, and Facebook. Other examples are the offerings of interactive and multimedia content by content providers, e.g., Google Maps or IPTV services. With the introduction of peer-to-peer (P2P) protocols in 1998 an even more radical change emerged, because P2P protocols allow users to exchange large amounts of content directly: the peers transfer data without the need for an intermediary and often centralized server. However, as shown by recent studies, Internet traffic is again dominated by HTTP, mostly at the expense of P2P. This traffic growth increases the demands on the infrastructure components that form the Internet, e.g., servers and routers. Moreover, most of the traffic is generated by a few very popular services. The enormous demand for such popular content cannot be satisfied by the traditional hosting model, in which content is located on a single server. Instead, content providers need to scale up their delivery infrastructure, e.g., by using replication in large data centers or by buying service from content delivery infrastructures such as Akamai or Limelight. Moreover, not only content providers have to cope with the demand: the network infrastructure also needs to be constantly upgraded to keep up with the growing demand for content. In this thesis we characterize the impact of content delivery on the network. We utilize data sets from both active and passive measurements. This allows us to cover a wide range of abstraction levels, from a detailed protocol-level view of several content delivery mechanisms to the high-level picture of identifying and mapping the content infrastructures that host the most popular content. We find that caching content is still hard and that the user's choice of DNS resolver has a profound impact on the server selection mechanism of content distribution infrastructures. We propose Web content cartography to infer how content distribution infrastructures are deployed and what role different organizations play in the Internet. We conclude by putting our findings in the context of contemporary work and give recommendations on how to improve content delivery to all parties involved: users, Internet service providers, and content distribution infrastructures.
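
    One of the findings summarized above, that the user's choice of DNS resolver strongly influences which content-delivery server is selected, can be observed directly by querying the same hostname against several public resolvers and comparing the returned A records. The sketch below assumes the third-party dnspython package (version 2.x) and network access; the resolver addresses and the hostname are only examples.

```python
# Hypothetical sketch: compare A records returned by different DNS resolvers
# for a CDN-hosted name. Requires the dnspython package and network access.
import dns.resolver

RESOLVERS = {
    "Google": "8.8.8.8",
    "Cloudflare": "1.1.1.1",
    "Quad9": "9.9.9.9",
}
HOSTNAME = "www.example.com"  # substitute a CDN-hosted name of interest

for name, ip in RESOLVERS.items():
    resolver = dns.resolver.Resolver(configure=False)  # ignore the system config
    resolver.nameservers = [ip]
    try:
        answers = resolver.resolve(HOSTNAME, "A")  # dnspython >= 2.0
        addresses = sorted(rr.address for rr in answers)
        print(f"{name:10s} -> {', '.join(addresses)}")
    except Exception as exc:
        print(f"{name:10s} -> query failed: {exc}")
```

    If the name is served by a content distribution infrastructure, the answers often differ per resolver, because the infrastructure maps clients to servers based on the resolver's location rather than the client's.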

    The Future of Free Expression in a Digital Age

    In the twenty-first century, at the very moment that our economic and social lives are increasingly dominated by information technology and information flows, the judge-made doctrines of the First Amendment seem increasingly irrelevant to the key free speech battles of the future. The most important decisions affecting the future of freedom of speech will not occur in constitutional law; they will be decisions about technological design, legislative and administrative regulations, the formation of new business models, and the collective activities of end-users. Moreover, the values of freedom of expression will become subsumed within a larger set of concerns that I call knowledge and information policy. The essay uses debates over network neutrality and intermediary liability as examples of these trends. Freedom of speech depends not only on the mere absence of state censorship, but also on an infrastructure of free expression. Properly designed, it gives people opportunities to create and build technologies and institutions that other people can use for communication and association. Hence policies that promote innovation and protect the freedom to create new technologies and applications are increasingly central to free speech values. The great tension in twentieth century free speech theory was the increasing protection of the formal freedom to speak against the background of mass broadcast technologies that reserved practical freedom to a relative few. The tension in twenty-first century free speech theory is somewhat different: New technologies offer ordinary citizens a vast range of new opportunities to speak, create and publish; they decentralize control over culture, over information production and over access to mass audiences. But these same technologies also make information and culture increasingly valuable commodities that can be bought and sold and exported to markets around the world. These two conflicting effects - toward greater participation and propertization - are produced by the same set of technological advances. Technologies that create new possibilities for democratic cultural participation often threaten business models that seek to commodify knowledge and control its access and distribution. Intellectual property and telecommunications law may be the terrain on which this struggle occurs, but what is at stake is the practical structure of freedom of speech in the new century

    Defense Against the Dark Arts of Copyright Trolling

    In this Article, we offer both a legal and a pragmatic framework for defending against copyright trolls. Lawsuits alleging online copyright infringement by John Doe defendants have accounted for roughly half of all copyright cases filed in the United States over the past three years. In the typical case, the plaintiff's claims of infringement rely on a poorly substantiated form pleading and are targeted indiscriminately at noninfringers as well as infringers. This practice is a subset of the broader problem of opportunistic litigation, but it persists due to certain unique features of copyright law and the technical complexity of Internet technology. The plaintiffs bringing these cases target hundreds or thousands of defendants nationwide and seek quick settlements priced just low enough that it is less expensive for the defendant to pay rather than to defend the claim, regardless of the claim's merits. We report new empirical data on the continued growth of this form of copyright trolling in the United States. We also undertake a detailed analysis of the legal and factual underpinnings of these cases. Despite their underlying weakness, plaintiffs have exploited information asymmetries, the high cost of federal court litigation, and the extravagant threat of statutory damages for copyright infringement to leverage settlements from the guilty and the innocent alike. We analyze the weaknesses of the typical plaintiff's case and integrate that analysis into a strategy roadmap for both defense lawyers and pro se defendants. In short, as our title suggests, we provide a useful guide to the defense against the dark arts of copyright trolling.