Entropy/IP: Uncovering Structure in IPv6 Addresses
In this paper, we introduce Entropy/IP: a system that discovers Internet
address structure based on analyses of a subset of IPv6 addresses known to be
active, i.e., training data, gleaned by readily available passive and active
means. The system is completely automated and employs a combination of
information-theoretic and machine learning techniques to probabilistically
model IPv6 addresses. We present results showing that our system is effective
in exposing structural characteristics of portions of the IPv6 Internet address
space populated by active client, service, and router addresses.
In addition to visualizing the address structure for exploration, the system
uses its models to generate candidate target addresses for scanning. For each
of 15 evaluated datasets, we train on 1K addresses and generate 1M candidates
for scanning. We achieve some success in 14 datasets, finding up to 40% of the
generated addresses to be active. In 11 of these datasets, we find active
network identifiers (e.g., /64 prefixes or `subnets') not seen in training.
Thus, we provide the first evidence that it is practical to discover subnets
and hosts by scanning probabilistically selected areas of the IPv6 address
space not known to contain active hosts a priori.
Comment: Paper presented at the ACM IMC 2016 in Santa Monica, USA
(https://dl.acm.org/citation.cfm?id=2987445). Live Demo site available at
http://www.entropy-ip.com
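To make the entropy analysis concrete, here is a minimal Python sketch (not the authors' code) of the first step such a system might take: computing the Shannon entropy of each of the 32 nibble positions across a set of known-active IPv6 addresses, so that low-entropy positions stand out as structural.

# Minimal sketch (not the Entropy/IP implementation): per-nibble Shannon
# entropy over a set of known-active IPv6 addresses. Addresses are expanded
# to 32 hex nibbles; low-entropy positions hint at fixed structure.
import ipaddress
import math
from collections import Counter

def nibble_entropies(addresses):
    """Return the entropy (in bits, 0..4) of each of the 32 nibble positions."""
    expanded = [ipaddress.IPv6Address(a).exploded.replace(":", "") for a in addresses]
    entropies = []
    for pos in range(32):
        counts = Counter(addr[pos] for addr in expanded)
        total = sum(counts.values())
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        entropies.append(h)
    return entropies

if __name__ == "__main__":
    sample = [
        "2001:db8:0:1::10",
        "2001:db8:0:1::20",
        "2001:db8:0:2::10",
    ]
    for pos, h in enumerate(nibble_entropies(sample)):
        print(f"nibble {pos:2d}: H = {h:.2f} bits")

The full system goes further, segmenting nibble positions and fitting a probabilistic model to generate candidate scan targets; the sketch above covers only the entropy-profiling idea.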
Temporal and Spatial Classification of Active IPv6 Addresses
There is a striking volume of World-Wide Web activity on IPv6 today. In early
2015, one large Content Distribution Network handles 50 billion IPv6 requests
per day from hundreds of millions of IPv6 client addresses; billions of unique
client addresses are observed per month. Address counts, however, obscure the
number of hosts with IPv6 connectivity to the global Internet. There are
numerous address assignment and subnetting options in use; privacy addresses
and dynamic subnet pools significantly inflate the number of active IPv6
addresses. As the IPv6 address space is vast, it is infeasible to
comprehensively probe every possible unicast IPv6 address. Thus, to survey the
characteristics of IPv6 addressing, we perform a year-long passive measurement
study, analyzing the IPv6 addresses gleaned from activity logs for all clients
accessing a global CDN.
The goal of our work is to develop flexible classification and measurement
methods for IPv6, motivated by the fact that its addresses are not merely more
numerous; they are different in kind. We introduce the notion of classifying
addresses and prefixes in two ways: (1) temporally, according to their
instances of activity to discern which addresses can be considered stable; (2)
spatially, according to the density or sparsity of aggregates in which active
addresses reside. We present measurement and classification results numerically
and visually that: provide details on IPv6 address use and structure in global
operation across the past year; establish the efficacy of our classification
methods; and demonstrate that such classification can clarify dimensions of the
Internet that otherwise appear quite blurred by current IPv6 addressing
practices.
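As a rough illustration of the two classification axes (not the paper's method), the Python sketch below buckets logged client addresses by /64 prefix, labels prefixes as dense or sparse, and labels addresses as stable or transient; the thresholds are invented placeholders, not values from the study.

# Illustrative sketch only: spatial classification by /64 density and a toy
# temporal stability check by number of distinct active days.
import ipaddress
from collections import defaultdict

DENSE_THRESHOLD = 100      # assumed: distinct addresses per /64
STABLE_DAYS = 7            # assumed: distinct active days per address

def classify(observations):
    """observations: iterable of (address_string, date) pairs from activity logs."""
    per_prefix = defaultdict(set)
    per_address_days = defaultdict(set)
    for addr, day in observations:
        ip = ipaddress.IPv6Address(addr)
        prefix = ipaddress.IPv6Network((int(ip) >> 64 << 64, 64))
        per_prefix[prefix].add(ip)
        per_address_days[ip].add(day)

    spatial = {p: ("dense" if len(a) >= DENSE_THRESHOLD else "sparse")
               for p, a in per_prefix.items()}
    temporal = {a: ("stable" if len(d) >= STABLE_DAYS else "transient")
                for a, d in per_address_days.items()}
    return spatial, temporal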
Understanding Flaws in the Deployment and Implementation of Web Encryption
In recent years, the web has switched from using the unencrypted HTTP protocol to using encrypted communications. Primarily, this resulted in increasing deployment of TLS to mitigate information leakage over the network. This development has led many web service operators to mistakenly think that migrating from HTTP to HTTPS will magically protect them from information leakage without any additional effort on their end to guarantee the desired security properties. In reality, despite the fact that there exists enough infrastructure in place and the protocols have been “tested” (by virtue of being in wide, but not ubiquitous, use for many years), deploying HTTPS is a highly challenging task due to the technical complexity of its underlying protocols (i.e., HTTP, TLS) as well as the complexity of the TLS certificate ecosystem and that of popular client applications such as web browsers. For example, we found that many websites still avoid ubiquitous encryption and force only critical functionality and sensitive data access over encrypted connections, while allowing more innocuous functionality to be accessed over HTTP. In practice, this approach is prone to flaws that can expose sensitive information or functionality to third parties. Thus, it is crucial for developers to verify the correctness of their deployments and implementations.
In this dissertation, in an effort to improve users’ privacy, we highlight semantic flaws in the implementations of both web servers and clients, caused by the improper deployment of web encryption protocols. First, we conduct an in-depth assessment of major websites and explore what functionality and information is exposed to attackers that have hijacked a user’s HTTP cookies. We identify a recurring pattern across websites with partially deployed HTTPS, namely, that service personalization inadvertently results in the exposure of private information. The separation of functionality across multiple cookies with different scopes and inter-dependencies further complicates matters, as imprecise access control renders restricted account functionality accessible to non-secure cookies. Our cookie hijacking study reveals a number of severe flaws; for example, attackers can obtain the user’s saved address and visited websites from Google and Bing, while Yahoo allows attackers to extract the contact list and send emails from the user’s account. To estimate the extent of the threat, we run measurements on a university public wireless network for a period of 30 days and detect over 282K accounts exposing the cookies required for our hijacking attacks.
Next, we explore and study security mechanisms designed to eliminate this problem by enforcing encryption, such as HSTS and HTTPS Everywhere. We evaluate each mechanism in terms of its adoption and effectiveness. We find that all mechanisms suffer from implementation flaws or deployment issues and argue that, as long as servers continue not to support ubiquitous encryption across their entire domain, no mechanism can effectively protect users from cookie hijacking and information leakage.
Finally, as the security guarantees of TLS (and, in turn, HTTPS) are critically dependent on the correct validation of X.509 server certificates, we study hostname verification, a critical component of the certificate validation process. We develop HVLearn, a novel testing framework to verify the correctness of hostname verification implementations, and use HVLearn to analyze a number of popular TLS libraries and applications. In doing so, we found 8 unique violations of the RFC specifications. Several of these violations are critical and can render the affected implementations vulnerable to man-in-the-middle attacks.
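For context on the client-side property being tested, the hedged Python sketch below uses the standard ssl module to open a connection with certificate and hostname verification enabled; it is not HVLearn itself, and the host name is a placeholder.

# Sketch: a correctly configured TLS client that enforces certificate and
# hostname verification, using only Python's standard library.
import socket
import ssl

def connect_with_verification(host, port=443):
    # create_default_context() enables certificate validation and hostname
    # checking (check_hostname=True, verify_mode=CERT_REQUIRED).
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        # server_hostname is needed both for SNI and for comparing the
        # requested name against the names in the server certificate.
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

if __name__ == "__main__":
    print(connect_with_verification("example.org"))

HVLearn instead tests the name-matching logic inside TLS libraries and applications; the sketch only shows what a correctly configured client should enforce.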
Evaluating the Impacts of Detecting X.509 Covert Channels
This quasi-experimental before-and-after study examined the performance impacts of detecting X.509 covert channels in the Suricata intrusion detection system. Relevant literature and previous studies surrounding covert channels and covert channel detection, X.509 certificates, and intrusion detection system performance were evaluated. This study used Jason Reaves’ (2018) X.509 covert channel proof-of-concept code to generate malicious network traffic for detection. Various detection rules for intrusion detection systems were created to aid in the detection of the X.509 covert channel. The central processing unit (CPU) and memory utilization impacts that each rule had on the intrusion detection system were studied and analyzed. Statistically significant results showed that the rules do have an impact on the performance of the system, some more than others. Finally, pathways toward future related research in creating efficient covert channel detection mechanisms were identified.
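As a loose illustration of what such detection logic inspects (not one of the study's Suricata rules, and written in Python rather than rule syntax), the sketch below flags certificates whose DER encoding or unrecognized extensions are unusually large; the thresholds and the use of the third-party cryptography package are assumptions for the example.

# Rough heuristic sketch: flag X.509 certificates that could be carrying
# covert data, e.g. oversized encodings or large unrecognized extensions.
# Thresholds are arbitrary placeholders. Requires the 'cryptography' package.
from cryptography import x509
from cryptography.hazmat.primitives import serialization

MAX_CERT_BYTES = 4096   # assumed ceiling for a "normal" leaf certificate
MAX_EXT_BYTES = 512     # assumed ceiling for an unrecognized extension

def suspicious(cert_pem):
    """Return a list of human-readable findings for a PEM-encoded certificate."""
    cert = x509.load_pem_x509_certificate(cert_pem)
    findings = []
    der = cert.public_bytes(serialization.Encoding.DER)
    if len(der) > MAX_CERT_BYTES:
        findings.append(f"certificate is {len(der)} bytes (> {MAX_CERT_BYTES})")
    for ext in cert.extensions:
        if isinstance(ext.value, x509.UnrecognizedExtension) and len(ext.value.value) > MAX_EXT_BYTES:
            findings.append(f"large unrecognized extension {ext.oid.dotted_string}")
    return findings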
Data Minimisation in Communication Protocols: A Formal Analysis Framework and Application to Identity Management
With the growing amount of personal information exchanged over the Internet,
privacy is becoming more and more a concern for users. One of the key
principles in protecting privacy is data minimisation. This principle requires
that only the minimum amount of information necessary to accomplish a certain
goal is collected and processed. "Privacy-enhancing" communication protocols
have been proposed to guarantee data minimisation in a wide range of
applications. However, currently there is no satisfactory way to assess and
compare the privacy they offer in a precise way: existing analyses are either
too informal and high-level, or specific for one particular system. In this
work, we propose a general formal framework to analyse and compare
communication protocols with respect to privacy by data minimisation. Privacy
requirements are formalised independent of a particular protocol in terms of
the knowledge of (coalitions of) actors in a three-layer model of personal
information. These requirements are then verified automatically for particular
protocols by computing this knowledge from a description of their
communication. We validate our framework in an identity management (IdM) case
study. As IdM systems are used more and more to satisfy the increasing need for
reliable on-line identification and authentication, privacy is becoming an
increasingly critical issue. We use our framework to analyse and compare four
identity management systems. Finally, we discuss the completeness and
(re)usability of the proposed framework.
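A toy rendering of the knowledge-based idea (far simpler than the paper's three-layer model, with invented actors and data items) is sketched below: each actor's knowledge is a set of personal-information items, a coalition's knowledge is their union, and a minimisation requirement forbids certain combinations from co-occurring in any coalition.

# Toy illustration, not the paper's formal framework.
from itertools import combinations

actor_knowledge = {
    "identity_provider": {("user", "legal_name"), ("user", "birth_date")},
    "service_provider":  {("user", "pseudonym"), ("user", "purchase_history")},
    "broker":            {("user", "pseudonym"), ("user", "legal_name")},
}

def coalition_knowledge(actors):
    """Knowledge of a coalition is the union of its members' knowledge."""
    return set().union(*(actor_knowledge[a] for a in actors))

def violates_minimisation(actors, forbidden=frozenset({("user", "legal_name"),
                                                       ("user", "purchase_history")})):
    """Requirement: no coalition may hold both forbidden items at once."""
    return forbidden <= coalition_knowledge(actors)

for size in range(1, len(actor_knowledge) + 1):
    for coalition in combinations(actor_knowledge, size):
        if violates_minimisation(coalition):
            print("violated by coalition:", coalition)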
Library and Tools for Server-Side DNSSEC Implementation
This thesis deals with currently available open-source solutions for securing DNS zones using the DNSSEC mechanism. Based on the findings, a new DNSSEC library for authoritative name servers is designed and implemented. The aim of the library is to keep the benefits of existing solutions and to eliminate their drawbacks. A set of utilities to manage keys and signing policy is also proposed. The functionality of the library is demonstrated by its use in the Knot DNS server.
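As a purely conceptual sketch of the key- and policy-management side (not the library's actual API; names and lifetimes are invented), one might model a signing policy and a rollover check like this:

# Conceptual sketch only: a signing policy deciding which zone-signing keys
# have outlived their lifetime and should be rolled.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class SigningPolicy:
    zsk_lifetime: timedelta = timedelta(days=30)    # how long a ZSK stays active
    rrsig_validity: timedelta = timedelta(days=14)  # validity window for signatures

@dataclass
class Key:
    tag: int
    created: datetime
    is_ksk: bool = False

def keys_to_roll(keys, policy, now=None):
    """Return the zone-signing keys that have outlived the policy's lifetime."""
    now = now or datetime.utcnow()
    return [k for k in keys if not k.is_ksk and now - k.created > policy.zsk_lifetime]

policy = SigningPolicy()
keys = [Key(tag=12345, created=datetime.utcnow() - timedelta(days=45)),
        Key(tag=54321, created=datetime.utcnow() - timedelta(days=5)),
        Key(tag=11111, created=datetime.utcnow() - timedelta(days=400), is_ksk=True)]
print([k.tag for k in keys_to_roll(keys, policy)])  # -> [12345]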
Basic Properties of the Persona Model
This document proposes a terminology and a model for representing user data in information systems in the form of "persona" objects. It provides mechanisms for evaluating how personae relate to real-world subjects or to each other. A mechanism for evaluating certain anonymity and identity properties is proposed. This paper also describes the linking of personae through shared identifiers; local, global, persistent, and transient linking types are considered. The model is then used to describe persona linking in several practical technologies as examples of its applicability.
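A small illustrative Python sketch of persona linking via shared identifiers (field names and values are invented, not part of the document's formal model):

# Sketch: two personae are linkable when they share an identifier, and a
# persistent shared identifier makes the link durable.
from dataclasses import dataclass

@dataclass(frozen=True)
class Identifier:
    value: str
    scope: str        # "local" or "global"
    persistent: bool  # False for transient identifiers such as session IDs

@dataclass
class Persona:
    name: str
    identifiers: frozenset

def shared_identifiers(a, b):
    return a.identifiers & b.identifiers

def persistently_linked(a, b):
    """True when two personae share at least one persistent identifier."""
    return any(i.persistent for i in shared_identifiers(a, b))

email = Identifier("alice@example.org", scope="global", persistent=True)
session = Identifier("sess-42", scope="local", persistent=False)
shop = Persona("shop_account", frozenset({email, session}))
forum = Persona("forum_account", frozenset({email}))
print(persistently_linked(shop, forum))  # True: linked via a shared, persistent identifier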