Library and Tools for Server-Side DNSSEC Implementation
This thesis analyzes currently available open-source solutions for securing DNS zones using the DNSSEC mechanism. Based on the findings, a new DNSSEC library for authoritative name servers is designed and implemented. The aim of the library is to keep the benefits of existing solutions while eliminating their drawbacks. A set of utilities for managing keys and signing policy is also proposed. The functionality of the library is demonstrated by its use in the Knot DNS server.
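The abstract does not show the library's internals, but one building block any server-side DNSSEC implementation needs is deriving a DS digest from a zone's DNSKEY. A minimal sketch of that RFC 4034/4509 computation follows; the key bytes are placeholders, not a real public key, and this is illustrative rather than the thesis's actual code:

```python
import hashlib
import struct

def canonical_name(name: str) -> bytes:
    """Encode a domain name in canonical (lowercase) DNS wire format."""
    out = b""
    for label in name.rstrip(".").lower().split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def ds_digest(owner: str, flags: int, protocol: int, algorithm: int,
              pubkey: bytes) -> str:
    """SHA-256 DS digest per RFC 4034/4509: hash(owner name | DNSKEY RDATA)."""
    rdata = struct.pack("!HBB", flags, protocol, algorithm) + pubkey
    return hashlib.sha256(canonical_name(owner) + rdata).hexdigest().upper()

# Placeholder key material, KSK flags (257), protocol 3, algorithm 13 (ECDSAP256)
print(ds_digest("example.com.", 257, 3, 13, b"\x01\x02\x03"))
```

Because the owner name is lowercased into canonical form first, the digest is identical regardless of how the zone name is capitalized.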
Mobile Application for Capturing and Monitoring of DNS Traffic
The subject of this thesis is the design and implementation of an application for the Android system that captures and monitors DNS network traffic and can also load PCAP files. Regardless of the input source, the application can clearly display the data of individual network packets. Captured data can also be saved to a PCAP file, which can later be reopened with the application.
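Reading and writing PCAP files, as the application does, starts with the classic 24-byte PCAP global header. A small sketch of parsing it (in Python for illustration; the Android app itself would presumably be written in Java or Kotlin):

```python
import struct

def parse_pcap_header(data: bytes) -> dict:
    """Parse the 24-byte classic PCAP global header, detecting byte order
    from the magic number."""
    if len(data) < 24:
        raise ValueError("truncated PCAP header")
    magic = data[:4]
    if magic == b"\xa1\xb2\xc3\xd4":
        endian = ">"   # file written big-endian
    elif magic == b"\xd4\xc3\xb2\xa1":
        endian = "<"   # file written little-endian
    else:
        raise ValueError("not a classic PCAP file")
    vmaj, vmin, _tz, _sig, snaplen, linktype = struct.unpack(
        endian + "HHiIII", data[4:24])
    return {"version": (vmaj, vmin), "snaplen": snaplen, "linktype": linktype}

# A synthetic little-endian header: version 2.4, snaplen 65535, Ethernet (1)
hdr = b"\xd4\xc3\xb2\xa1" + struct.pack("<HHiIII", 2, 4, 0, 0, 65535, 1)
print(parse_pcap_header(hdr))  # {'version': (2, 4), 'snaplen': 65535, 'linktype': 1}
```

Packet records then follow the global header, each with its own 16-byte record header giving timestamp and captured length.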
High Availability for Carrier-Grade SIP Infrastructure on Cloud Platforms
SIP infrastructure on cloud platforms has the potential to be both scalable and highly available. In our previous project, we focused on the scalability aspect of SIP services on cloud platforms; the focus of this project is the high availability aspect. We investigated the effects of component faults on service availability, with the goal of understanding how high availability can be guaranteed even in the face of component faults. The experiments were conducted empirically on a real system running on Amazon EC2. Our analysis shows that most component faults are masked by a simple automatic failover technique. However, we have also identified fundamental problems that cannot be addressed by simple failover techniques: one involving DNS caches in resolvers, and another involving static failover configurations. Recommendations on how to solve these problems are included in the report.
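The DNS cache problem the report identifies can be illustrated with a toy model: a resolver keeps serving a cached answer for its full TTL, so clients continue to be directed at a failed primary even after the authoritative record has been switched to a standby. All names and addresses below are hypothetical:

```python
class TtlCache:
    """Toy DNS resolver cache: an answer stays valid until its TTL expires."""
    def __init__(self):
        self._store = {}

    def put(self, name, ip, ttl, now):
        self._store[name] = (ip, now + ttl)

    def get(self, name, now):
        ip, expires = self._store.get(name, (None, 0))
        return ip if now < expires else None

cache = TtlCache()
cache.put("sip.example.com", "10.0.0.1", ttl=300, now=0)  # primary's answer

# Failover happens at t=10: DNS now points at 10.0.0.2, but the resolver's
# cached entry keeps directing clients to the dead primary until t=300.
print(cache.get("sip.example.com", now=10))    # 10.0.0.1 (stale primary)
print(cache.get("sip.example.com", now=301))   # None: expired, re-query needed
```

This is why low TTLs (or client-side retry across multiple records) matter for failover speed, at the cost of more resolver traffic.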
Techniques of data prefetching, replication, and consistency in the Internet
The Internet has become a major infrastructure for information sharing in our daily lives, and is indispensable to critical, large-scale applications in industry, government, business, and education. Internet bandwidth (the speed at which the network transfers data) has increased dramatically; latency (the delay in physically accessing data), however, has been reduced at a much slower pace. Internet systems can cope with this combination of rich bandwidth and lagging latency through three data management techniques: caching, replication, and prefetching. The focus of this dissertation is to address the Internet latency problem by utilizing the rich bandwidth and large storage capacity to prefetch data efficiently, significantly improving Web content caching performance; by proposing and implementing scalable data consistency maintenance methods for Web address caching in the distributed Domain Name System (DNS); and by handling massive data replication in peer-to-peer systems. While the DNS service is critical to the Internet, peer-to-peer data sharing is becoming accepted as an important Internet activity.

We have made three contributions in developing prefetching techniques. First, we have proposed an efficient data structure for maintaining Web access information, called popularity-based Prediction by Partial Matching (PB-PPM), in which data are placed and replaced according to the popularity of Web accesses, so that only important and useful information is stored. PB-PPM greatly reduces the required storage space and improves prediction accuracy. Second, a major weakness of existing Web servers is that prefetching activities are scheduled independently of dynamically changing server workloads. Without proper control and coordination between the two kinds of activities, prefetching can negatively affect Web services and degrade Web access performance. To address this problem, we have developed a queuing model to characterize the interactions. Guided by the model, we have designed a coordination scheme that dynamically adjusts prefetching aggressiveness in Web servers. This scheme not only prevents Web servers from being overloaded, but also minimizes average server response time. Finally, we have proposed a scheme that effectively coordinates the sharing of access information between proxy and Web servers. With the support of this scheme, the accuracy of prefetching decisions is significantly improved.

Regarding data consistency support for Internet caching and data replication, we have conducted three significant studies. First, we have developed a consistency support technique to maintain data consistency among replicas in structured P2P networks. We have implemented this scheme on Pastry, an existing and popular P2P system, and show that it can effectively maintain consistency while preventing hot-spot and node-failure problems. Second, we have designed and implemented a DNS cache update protocol, called DNScup, to provide strong consistency for domain/IP mappings. Finally, we have developed a dynamic lease scheme to update replicas on the Internet in a timely manner.
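The full PB-PPM structure matches against contexts of varying length; the idea behind popularity-guided prediction can be conveyed with a much simpler first-order sketch that counts which page follows which and prefetches the most popular successor. This is an illustrative simplification, not the dissertation's data structure:

```python
from collections import defaultdict, Counter

class PopularityPredictor:
    """First-order sketch of popularity-guided prefetching: count observed
    successors of each page and predict the most popular one."""
    def __init__(self):
        self.successors = defaultdict(Counter)
        self.prev = None

    def record(self, page):
        """Feed one access from the request stream."""
        if self.prev is not None:
            self.successors[self.prev][page] += 1
        self.prev = page

    def predict(self, page):
        """Return the most frequently observed successor, or None."""
        nexts = self.successors.get(page)
        return nexts.most_common(1)[0][0] if nexts else None

p = PopularityPredictor()
for page in ["index", "news", "index", "news", "index", "about"]:
    p.record(page)
print(p.predict("index"))  # "news": observed twice after "index", vs "about" once
```

A PPM-style predictor extends this by keeping counters for longer contexts (the last k pages) and falling back to shorter contexts when a long one has not been seen; the popularity-based variant additionally evicts low-popularity entries to bound storage.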
A Framework for Benevolent Computer Worms
The objective of this research was to discover and define the characteristics a benevolent computer worm would need in order to reduce the risks of using such a tool to combat computer security threats. Prominent malicious and benevolent computer worms were studied, as were the ethical and legal aspects of benevolent worms. A set of desired characteristics for a benevolent worm framework, along with how those characteristics help to mitigate risk, was developed. A benevolent worm was created and tested in an environment with exploitable systems to demonstrate the feasibility of using a benevolent worm to patch and protect systems without causing excessive consumption of network resources, while providing accountability through logs. The conclusion reached was that it is feasible to construct a benevolent worm such that the benefits to the community (or network) as a whole in securing it outweigh the risks.
Framework for DNS Server Testing
This thesis deals with modifications to a framework designed for testing DNS servers. The framework is developed by the NIC.CZ association and is used primarily for testing their DNS server, Knot DNS. The aim of this work is a set of framework modifications that allow simpler testing, such as support for multiple DNS server implementations, parallel test execution, dummy-server and box-in-the-middle components, division into multiple components, and an overall overhaul of the existing framework. The introduction of the thesis is devoted to authoritative DNS servers and the foundations of testing. The remaining part of the thesis deals with the state of the existing framework and with the state and testing of the modified framework.
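A dummy server or box-in-the-middle component in such a framework has to speak DNS wire format. As a hedged illustration of the minimal first step, here is how a test harness could assemble a raw DNS query per RFC 1035 (query ID and name are arbitrary examples, and this is not the framework's actual code):

```python
import struct

def build_query(qname: str, qtype: int = 1, qid: int = 0x1234) -> bytes:
    """Assemble a minimal DNS query (RFC 1035): 12-byte header + one question.
    qtype 1 = A record; the 0x0100 flags word sets only the RD bit."""
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)  # QDCOUNT=1
    question = b""
    for label in qname.rstrip(".").split("."):
        question += bytes([len(label)]) + label.encode("ascii")
    question += b"\x00" + struct.pack("!HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

q = build_query("example.com")
print(len(q))  # 12-byte header + 13-byte encoded name + 4 bytes type/class = 29
```

Such a harness would send the same query to two server implementations (or to the real server and a dummy server) and diff the responses field by field.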
Campus Communications Systems: Converging Technologies
This book is a rewrite of Campus Telecommunications Systems: Managing Change, a book written by ACUTA in 1995. In the past decade, our industry has experienced a thousand-fold increase in data rates as we migrated from 10 megabit links (10 million bits per second) to 10 gigabit links (10 billion bits per second); we have seen the National Telecommunications Policy completely revamped; we have seen voice, data, and video combined onto one network; and we have seen many of our service providers merge into larger corporations able to offer more diverse services. When this book was last written, ACUTA meant telecommunications, convergence was a mathematical term, triple play was a baseball term, and terms such as iPod, DoS, and QoS did not exist. This book is designed to be a communications primer for new entrants into the field of communications in higher education and for veteran communications professionals who want additional information in areas outside their field of expertise. Reference books and textbooks are available on every topic discussed in this book if a more in-depth explanation is desired. Individual chapters were authored by communications professionals from various member campuses, allowing the authors to share their years of experience (more years than many of us would care to admit) with the community at large.
Foreword Walt Magnussen, Ph.D.
Preface Ron Kovac, Ph.D.
1 The Technology Landscape: Historical Overview. Walt Magnussen, Ph.D.
2 Emerging Trends and Technologies. Joanne Kossuth
3 Network Security. Beth Chancellor
4 Security and Disaster Planning and Management. Marjorie Windelberg, Ph.D.
5 Student Services in a University Setting. Walt Magnussen, Ph.D.
6 Administrative Services. David E. O'Neill
7 The Business Side of Information Technology. George Denbow
8 The Role of Consultants. David C. Metz
Glossary Michelle Narcavag
Correlating IPv6 addresses for network situational awareness
The advent of the IPv6 protocol on enterprise networks provides fresh challenges to network incident investigators. Unlike the conventional behavior and implementation of its predecessor, the typical deployment of IPv6 presents issues with address generation (host-based autoconfiguration rather than centralized distribution), address multiplicity (multiple addresses per host simultaneously), and address volatility (randomization and frequent rotation of host identifiers). These factors make it difficult for an investigator, when reviewing a log file or packet capture ex post facto, both to identify the origin of a particular log entry or packet and to identify all log entries or packets related to a specific network entity (since multiple addresses may have been used). I have demonstrated a system, titled IPv6 Address Correlator (IPAC), that allows incident investigators to match both a specific IPv6 address to a network entity (identified by its MAC address and the physical switch port to which it is attached) and a specific entity to the set of IPv6 addresses in use within an organization's networks at any given point in time. This system relies on the normal operation of the Neighbor Discovery Protocol for IPv6 (NDP) and bridge forwarding table notifications from Ethernet switches to keep a record of IPv6 and MAC address usage over time. With this information, it is possible to pair each IPv6 address to a MAC address and each MAC address to a physical switch port. When the IPAC system is deployed throughout an organization's networks, aggregated IPv6 and MAC addressing timeline information can be used to identify which host caused an entry in a log file or sent/received a captured packet, as well as to correlate all packets or log entries related to a given host.
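IPAC correlates addresses by observing NDP, because observation works for all address types. For the narrower case of SLAAC addresses built from a modified EUI-64 interface identifier, the MAC is actually embedded in the address and can be recovered directly, which also shows why the randomized addresses mentioned above defeat this shortcut. A sketch of that recovery (RFC 4291, Appendix A; the sample address is made up):

```python
import ipaddress

def mac_from_eui64(iid: bytes):
    """Recover a MAC address from an EUI-64 interface identifier, if the
    address was SLAAC-derived (middle bytes ff:fe). Returns None for
    privacy/randomized addresses, which embed no MAC."""
    if len(iid) != 8 or iid[3:5] != b"\xff\xfe":
        return None
    # Drop the inserted ff:fe and flip the universal/local bit back.
    mac = bytes([iid[0] ^ 0x02]) + iid[1:3] + iid[5:8]
    return ":".join(f"{b:02x}" for b in mac)

addr = ipaddress.IPv6Address("fe80::214:22ff:fe01:2345")
print(mac_from_eui64(addr.packed[8:]))  # 00:14:22:01:23:45
```

Since RFC 4941 privacy extensions and stable-privacy addresses (RFC 7217) carry no such marker, a correlator like IPAC cannot rely on this trick and must instead record observed IPv6-to-MAC bindings over time, as the abstract describes.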
Internet of Things From Hype to Reality
The Internet of Things (IoT) has gained significant attention, and indeed mindshare, in academia and industry, especially over the past few years. The reasons behind this interest are the capabilities that IoT promises to offer. On a personal level, it paints a picture of a future world in which all the things in our ambient environment are connected to the Internet and seamlessly communicate with each other to operate intelligently. The ultimate goal is to enable the objects around us to efficiently sense our surroundings, communicate inexpensively, and ultimately create a better environment for us: one in which everyday objects act on what we need and like without explicit instructions.