Detecting Byzantine Attacks Without Clean Reference
We consider an amplify-and-forward relay network composed of a source, two
relays, and a destination. In this network, the two relays are untrusted in the
sense that they may perform Byzantine attacks by forwarding altered symbols to
the destination. Note that every symbol received by the destination may be
altered, and hence no clean reference observation is available to the
destination. For this network, we identify a large family of Byzantine attacks
that can be detected in the physical layer. We further investigate how the
channel conditions impact the detection of this family of attacks. In
particular, we prove that all Byzantine attacks in this family can be detected
with asymptotically small miss detection and false alarm probabilities by using
a sufficiently large number of channel observations \emph{if and only if} the
network satisfies a non-manipulability condition. No pre-shared secret or
secret transmission is needed for the detection of these attacks, demonstrating
the value of this physical-layer security technique for counteracting Byzantine
attacks.
Comment: 16 pages, 7 figures, accepted to appear in IEEE Transactions on Information Forensics and Security, July 201
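The abstract does not spell out the detector, so the following toy sketch (all parameters and the flipping attack are hypothetical, not taken from the paper) only illustrates the flavor of detection without a clean reference: when two relays forward the same BPSK symbol, the empirical cross-correlation of the destination's two observations is pinned to the product of the relay gains under honest behavior, so a symbol-altering attack shifts it even though neither stream alone is trustworthy.

```python
import random

random.seed(1)

def simulate(n, attack, g1=1.0, g2=0.8, sigma=0.1):
    """Destination observes both relays' forwardings of the same BPSK symbols.
    If `attack` is set, relay 2 flips 30% of its symbols (a Byzantine attack)."""
    y1, y2 = [], []
    for _ in range(n):
        x = random.choice([-1.0, 1.0])
        s2 = -x if (attack and random.random() < 0.3) else x
        y1.append(g1 * x + random.gauss(0, sigma))
        y2.append(g2 * s2 + random.gauss(0, sigma))
    return y1, y2

def alarm(y1, y2, g1=1.0, g2=0.8, thresh=0.2):
    """Honest relays force E[y1*y2] = g1*g2; a large deviation raises an alarm.
    More observations shrink both miss and false-alarm probabilities."""
    corr = sum(a * b for a, b in zip(y1, y2)) / len(y1)
    return abs(corr - g1 * g2) > thresh
```

With enough observations the empirical correlation concentrates, which mirrors the asymptotically small error probabilities claimed in the abstract.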
Physical detection of misbehavior in relay systems with unreliable channel state information
We study the detection of misbehavior in a Gaussian relay system, where the source transmits information to the destination with the assistance of an amplify-and-forward relay node subject to unreliable channel state information (CSI). The relay node may be potentially malicious and corrupt the network by forwarding garbled information. In this situation, misleading feedback may take place, since reliable CSI is unavailable at the source and/or the destination. By classifying the actions of the relay as detectable or undetectable, we propose a novel approach that is capable of coping with any detected malicious attack and of continuing to work effectively in the presence of unreliable CSI. We demonstrate that the detectable class of attacks can be successfully detected with high probability. Meanwhile, the undetectable class of attacks does not affect the performance improvements that are achievable by cooperative diversity, even though such an attack may fool the proposed detection approach. We also extend the method to deal with the case in which there is no direct link between the source and the destination. The effectiveness of the proposed approach is validated by numerical results.
Malicious relay node detection with unsupervised learning in amplify-forward cooperative networks
This paper presents malicious relay node detection in a cooperative network using unsupervised learning based on the received signal samples over the source-to-destination (S-D) link at the destination node. We consider situations in which the possible maliciousness of the relay takes the form of regenerative, injection, or garbling attacks on the source signal, following attack models in the communication literature. Our approach to this attack detection problem is to apply unsupervised machine learning with one-class classifier (OCC) algorithms. Among the algorithms compared, the One-Class Support Vector Machine (OSVM) with a radial basis function (RBF) kernel achieves the highest accuracy in detecting malicious node attacks of certain types, and it can also identify a trustworthy relay by using specific features of the symbol constellation of the received signal. Results show that we can achieve detection accuracy of about 99% with SVM-RBF and k-NN learning algorithms for garbling-type relay attacks. The results also suggest that the OCC algorithms considered in this study, with different feature selections, could be effective in detecting other types of relay attacks.
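As a rough illustration of the one-class idea (a much simpler stand-in for OSVM, not the paper's pipeline; all thresholds and signal parameters here are made up), one can extract a constellation feature, namely each sample's distance to the nearest ideal QPSK point, fit a threshold on trusted-relay data only, and flag a relay whose samples exceed that threshold too often:

```python
import math
import random

random.seed(7)

QPSK = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def rx_samples(n, garbled, sigma=0.15):
    """Received I/Q samples; a garbling relay replaces symbols with junk."""
    out = []
    for _ in range(n):
        i, q = random.choice(QPSK)
        if garbled:
            i, q = random.uniform(-1.5, 1.5), random.uniform(-1.5, 1.5)
        out.append((i + random.gauss(0, sigma), q + random.gauss(0, sigma)))
    return out

def feature(s):
    # distance to the nearest ideal constellation point
    return min(math.dist(s, c) for c in QPSK)

def fit_threshold(trusted, q=0.99):
    # one-class training: only trusted-relay samples are needed
    d = sorted(feature(s) for s in trusted)
    return d[int(q * len(d)) - 1]

def is_malicious(samples, thresh, frac=0.1):
    flags = sum(feature(s) > thresh for s in samples)
    return flags / len(samples) > frac
```

A real OSVM would learn a nonlinear boundary over several such features, but the workflow of training on trusted data alone is the same.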
Neyman-Pearson Decision in Traffic Analysis
The increase of encrypted traffic on the Internet may become a problem for network-security applications such as intrusion-detection systems or interfere with forensic investigations. This fact has increased the awareness for traffic analysis, i.e., inferring information from communication patterns instead of their content. Deciding correctly whether a known network flow is the same as, or part of, an observed one can be extremely useful for several network-security applications such as intrusion detection and tracing anonymous connections. In many cases, the flows of interest are relayed through many nodes that re-encrypt the flow, making traffic analysis the only possible solution. There exist two well-known techniques to solve this problem: passive traffic analysis and flow watermarking. The former is undetectable but in general has much worse performance than watermarking, whereas the latter can be detected and modified in such a way that the watermark is destroyed. In the first part of this dissertation we design techniques where the traffic analyst (TA) is one end of an anonymous communication and wants to deanonymize the other host, under the premise that the arrival times of the TA's packets/requests can be predicted with high confidence. This, together with the use of an optimal detector based on the Neyman-Pearson lemma, allows the TA to deanonymize the other host with high confidence even with short flows. We start by studying the forensic problem of leaving identifiable traces in the log of a Tor hidden service; in this case the predictor comes from the HTTP header. Afterwards, we propose two different methods for locating Tor hidden services: the first one is based on the arrival time of the request cell, and the second one uses the number of cells in certain time intervals.
In both of these methods, the predictor is based on the round-trip time and, in some cases, on the position inside its burst; hence the TA does not need access to the decrypted flow. The second part of this dissertation deals with scenarios where an accurate predictor is not feasible for the TA. Here the traffic analysis technique is based on correlating the inter-packet delays (IPDs) using a Neyman-Pearson detector. Our method can be used as a passive analysis or as a watermarking technique. This algorithm is first made robust against adversary models that add chaff traffic, split the flows, or add random delays. Afterwards, we study this scenario from a game-theoretic point of view, analyzing two different games: the first deals with the identification of independent flows, while the second decides whether a flow has been watermarked/fingerprinted or not.
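A minimal sketch of a Neyman-Pearson test on inter-packet delays, under assumed models rather than the dissertation's (exponential IPDs for independent flows, so the IPD difference is Laplace under H0, and Gaussian jitter under H1): the detector sums the per-packet log-likelihood ratio and compares it to a threshold chosen for the desired false-alarm rate.

```python
import math
import random

random.seed(3)

def flows(n, correlated, rate=1.0, jitter=0.05):
    """Two IPD sequences: either the same flow plus Gaussian jitter (H1)
    or two independent Poisson flows (H0)."""
    ipd1 = [random.expovariate(rate) for _ in range(n)]
    if correlated:
        ipd2 = [d + random.gauss(0, jitter) for d in ipd1]
    else:
        ipd2 = [random.expovariate(rate) for _ in range(n)]
    return ipd1, ipd2

def llr(ipd1, ipd2, rate=1.0, jitter=0.05):
    """Log-likelihood ratio of the IPD differences: N(0, jitter^2) under H1
    versus Laplace(0, 1/rate) under H0 (difference of two exponentials)."""
    total = 0.0
    for a, b in zip(ipd1, ipd2):
        d = b - a
        log_h1 = -0.5 * math.log(2 * math.pi * jitter ** 2) - d * d / (2 * jitter ** 2)
        log_h0 = math.log(rate / 2) - rate * abs(d)
        total += log_h1 - log_h0
    return total

def correlated_decision(ipd1, ipd2, tau=0.0):
    # Neyman-Pearson: threshold the LLR; tau would normally be set from
    # the target false-alarm probability (tau = 0 shown for simplicity)
    return llr(ipd1, ipd2) > tau
```

By the Neyman-Pearson lemma this LLR threshold test maximizes detection probability at any fixed false-alarm level, which is why even short flows can suffice when the jitter model is accurate.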
The Embedding Capacity of Information Flows Under Renewal Traffic
Given two independent point processes and a certain rule for matching points
between them, what is the fraction of matched points over infinitely long
streams? In many application contexts, e.g., secure networking, a meaningful
matching rule is that of a maximum causal delay, and the problem is related to
embedding a flow of packets in cover traffic such that no traffic analysis can
detect it. We study the best undetectable embedding policy and the
corresponding maximum flow rate, which we call the embedding capacity, under
the assumption that the cover traffic can be modeled as arbitrary renewal
processes. We find that computing the embedding capacity requires the inversion
of very structured linear systems that, for a broad range of renewal models
encountered in practice, admits a fully analytical expression in terms of the
renewal function of the processes. Our main theoretical contribution is a
simple closed form of such relationship. This result enables us to explore
properties of the embedding capacity, obtaining closed-form solutions for
selected distribution families and a suite of sufficient conditions on the
capacity ordering. We evaluate our solution on real network traces, which shows
a noticeable match for tight delay constraints. A gap between the predicted and
the actual embedding capacities appears for looser constraints, and further
investigation reveals that it is caused by inaccuracy of the renewal traffic
model rather than of the solution itself.
Comment: Submitted to IEEE Transactions on Information Theory, March 10, 201
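The maximum-causal-delay matching rule can be simulated directly. The sketch below (illustrative only, with made-up rates; not the paper's analytical solution) greedily embeds each flow packet on the first available cover transmission within its delay budget, and the matched fraction gives an empirical feel for how the capacity depends on the delay constraint:

```python
import random

random.seed(5)

def poisson_process(n, rate):
    """n arrival times of a Poisson process with the given rate."""
    t, pts = 0.0, []
    for _ in range(n):
        t += random.expovariate(rate)
        pts.append(t)
    return pts

def matched_fraction(flow, cover, delay):
    """Greedy first-in-first-out matching: each flow packet rides the first
    unused cover point inside [arrival, arrival + delay]."""
    matched, j = 0, 0
    for t in flow:
        while j < len(cover) and cover[j] < t:
            j += 1  # cover points in the past cannot be used causally
        if j < len(cover) and cover[j] <= t + delay:
            matched += 1
            j += 1
    return matched / len(flow)
```

With a generous delay budget nearly every packet embeds, while a tight budget leaves most packets unmatched, mirroring the tight-constraint regime where the paper reports the best match with real traces.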
Wireless Device Authentication Techniques Using Physical-Layer Device Fingerprint
Due to the open nature of the radio signal propagation medium, wireless communication is inherently more vulnerable to various attacks than wired communication. Consequently, communication security is always one of the critical concerns in wireless networks. Given that sophisticated adversaries may cover up their malicious behaviors through impersonation of legitimate devices, reliable wireless authentication is becoming indispensable to prevent such impersonation-based attacks through verification of the claimed identities of wireless devices.
Conventional wireless authentication is achieved above the physical layer using upper-layer identities and key-based cryptography. As a result, even a malicious attacker using a compromised security key can be validated as authentic. Recently, many studies have shown that wireless devices can be authenticated by exploiting unique physical-layer characteristics. Compared to the key-based approach, the possession of such physical-layer characteristics is directly associated with the transceiver's unique radio-frequency hardware and corresponding communication environment, which are extremely difficult to forge in practice. However, the reliability of physical-layer authentication is not always high enough. Due to the popularity of cooperative communications, effective implementation of physical-layer authentication in wireless relay systems is urgently needed. On the other hand, the integration with existing upper-layer authentication protocols still faces many challenges, e.g., end-to-end authentication. This dissertation is motivated to develop novel physical-layer authentication techniques that address the aforementioned challenges.
In achieving enhanced wireless authentication, we first identify the specific technical challenges in authenticating a cooperative amplify-and-forward (AF) relay. Since an AF relay only works at the physical layer, all of the existing upper-layer authentication protocols are ineffective in identifying AF relay nodes. To solve this problem, a novel device fingerprint of the AF relay, consisting of wireless channel gains and in-phase and quadrature imbalances (IQI), is proposed. Using this device fingerprint, satisfactory authentication accuracy is achieved when the signal-to-noise ratio is high enough. In addition, the optimal AF relay identification system is studied to maximize the performance of identifying multiple AF relays in the regime of low signal-to-noise ratio and small IQI. The optimal signals for quadrature amplitude modulation and phase-shift keying are derived to defend against repeated access attempts made by attackers with specific IQIs.
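To give a feel for such a fingerprint, here is a sketch under a standard IQ-imbalance model with hypothetical gain/phase values (not the dissertation's estimator): a front end with gain mismatch g and phase mismatch phi maps a baseband symbol x to a*x + b*conj(x), and the pair (a, b) can be recovered by least squares from known pilots and then used to tell devices apart.

```python
import cmath
import random

random.seed(2)

def apply_iqi(x, g, phi):
    """Standard IQ-imbalance model: y = a*x + b*conj(x) with
    a = (1 + g*exp(-j*phi))/2 and b = (1 - g*exp(+j*phi))/2."""
    a = (1 + g * cmath.exp(-1j * phi)) / 2
    b = (1 - g * cmath.exp(1j * phi)) / 2
    return [a * s + b * s.conjugate() for s in x]

def estimate_iqi(pilots, y):
    """Least-squares fit of y = a*x + b*conj(x) over known pilot symbols,
    solving the 2x2 complex normal equations in closed form."""
    s11 = sum(abs(v) ** 2 for v in pilots)
    s12 = sum(v.conjugate() ** 2 for v in pilots)
    s21 = sum(v ** 2 for v in pilots)
    r1 = sum(v.conjugate() * w for v, w in zip(pilots, y))
    r2 = sum(v * w for v, w in zip(pilots, y))
    det = s11 * s11 - s12 * s21
    a = (s11 * r1 - s12 * r2) / det
    b = (s11 * r2 - s21 * r1) / det
    return a, b
```

Complex (not purely real) pilots are needed so that the normal equations stay well conditioned; two devices with different (g, phi) then yield clearly separated (a, b) fingerprints.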
Exploring effective authentication enhancement techniques is another key objective of this dissertation. Due to the fast variation of channel-based fingerprints as well as the limited range of device-specific fingerprints, the performance of physical-layer authentication is not always reliable. In light of this, physical-layer authentication is enhanced in two aspects. On the one hand, device fingerprinting can be strengthened by considering multiple characteristics. The proper characteristic-selection strategy, measurement method, and optimal weighted combination of the selected characteristics are investigated. On the other hand, the accuracy of fingerprint estimation and differentiation can be improved by exploiting diversity techniques. To be specific, cooperative diversity, in the form of involving multiple collaborative receivers, is used in differentiating both frequency-dependent and frequency-independent device fingerprints. As a typical space-diversity combining method, maximal-ratio combining is also applied at the receiver side to combat channel degeneration and increase the fingerprint-to-noise ratio.
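The maximal-ratio-combining step can be sketched as follows (textbook MRC with illustrative channel values, not the dissertation's specific setup): each branch is weighted by the conjugate of its channel gain, so branch SNRs add and the combined fingerprint estimate is less noisy than any single receiver's.

```python
import random

random.seed(4)

def mrc_estimate(received, channels):
    """Maximal-ratio combining: conjugate-weight each branch, then normalize.
    Output noise variance scales as 1 / sum(|h|^2), i.e. branch SNRs add."""
    num = sum(h.conjugate() * r for r, h in zip(received, channels))
    return num / sum(abs(h) ** 2 for h in channels)

def trial(x, channels, sigma=0.3):
    """One noisy observation of x through each branch's channel."""
    noise = lambda: complex(random.gauss(0, sigma), random.gauss(0, sigma))
    return [h * x + noise() for h in channels]

# compare MRC against using receiver 0 alone
channels = [1.0 + 0j, 0.5j, 0.8 + 0j]
x = 1.0 + 0.5j  # e.g. a fingerprint feature being estimated
mse_mrc = mse_single = 0.0
for _ in range(500):
    rx = trial(x, channels)
    mse_mrc += abs(mrc_estimate(rx, channels) - x) ** 2
    mse_single += abs(rx[0] / channels[0] - x) ** 2
```

Averaged over trials, the MRC estimate's squared error is smaller by the ratio of the summed channel gains to the single branch's gain, which is exactly the fingerprint-to-noise-ratio gain the paragraph refers to.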
Given the inherent weaknesses of the widely utilized upper-layer authentication protocols, it is natural to consider physical-layer authentication as an effective complement that reinforces existing authentication schemes. To this end, a cross-layer authentication scheme is designed to seamlessly integrate physical-layer authentication with existing infrastructures and protocols. Specific problems such as physical-layer key generation as well as end-to-end authentication in networks are investigated. In addition, reducing the authentication complexity is also studied: through prediction, pre-sharing, and reuse of the physical-layer information, the authentication processing time can be significantly shortened.
Towards Reliable Circumvention of Internet Censorship
The Internet plays a crucial role in today's social and political movements by facilitating the free circulation of speech, information, and ideas; democracy and human rights throughout the world critically depend on preserving and bolstering the Internet's openness. Consequently, repressive regimes, totalitarian governments, and corrupt corporations regulate, monitor, and restrict access to the Internet, a practice broadly known as Internet \emph{censorship}. Most countries are improving their Internet infrastructure and, as a result, can implement more advanced censoring techniques. Advances in the application of machine learning to network traffic analysis have also enabled more sophisticated Internet censorship. In this thesis, we take a close look at the main pillars of Internet censorship and introduce new defenses and attacks in the Internet censorship literature.
Internet censorship techniques inspect users' communications and can interrupt a connection to prevent a user from communicating with a specific entity. Traffic analysis is one of the main techniques used to infer information from Internet communications. One of the major challenges to traffic analysis mechanisms is scaling the techniques to today's exploding volumes of network traffic, i.e., they impose high storage, communication, and computation overheads. We address this scalability issue by introducing a new direction for traffic analysis, which we call \emph{compressive traffic analysis}. Moreover, we show that, unfortunately, traffic analysis attacks can be conducted on anonymity systems with drastically higher accuracies than before by leveraging emerging learning mechanisms. In particular, we design a system, called \deepcorr, that outperforms the state-of-the-art by significant margins in correlating network connections. \deepcorr leverages an advanced deep learning architecture to \emph{learn} a flow correlation function tailored to complex networks. To analyze the weaknesses of such approaches, we also show that an adversary can defeat deep-neural-network-based traffic analysis techniques by applying statistically undetectable \emph{adversarial perturbations} to the patterns of live network traffic.
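As a point of reference for the kind of conventional statistical correlator that learned systems improve upon, a simple flow correlator can be sketched in a few lines (hypothetical parameters; this is not the system evaluated in the thesis): bin each flow's packet timestamps into fixed intervals and compare the count vectors, so a jittered copy of a flow scores higher than an independent flow.

```python
import math
import random

random.seed(9)

def poisson_times(rate, horizon):
    """Packet timestamps of a Poisson flow observed up to `horizon` seconds."""
    t, out = 0.0, []
    while True:
        t += random.expovariate(rate)
        if t >= horizon:
            return out
        out.append(t)

def bin_counts(times, width, nbins):
    counts = [0] * nbins
    for t in times:
        i = int(t / width)
        if 0 <= i < nbins:
            counts[i] += 1
    return counts

def similarity(t1, t2, width=1.0, nbins=120):
    """Cosine similarity of binned packet counts, a basic correlation score."""
    u, v = bin_counts(t1, width, nbins), bin_counts(t2, width, nbins)
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

entry = poisson_times(2.0, 120.0)
exit_flow = [t + abs(random.gauss(0, 0.2)) for t in entry]  # jittered copy
other = poisson_times(2.0, 120.0)
```

Such hand-crafted statistics degrade quickly under heavier network noise, which is the gap a learned correlation function is designed to close.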
We also design techniques to circumvent Internet censorship. Decoy routing is an emerging approach to censorship circumvention in which circumvention is implemented with help from a number of volunteer Internet autonomous systems, called decoy ASes. We propose a new architecture for decoy routing that, by design, is significantly more resistant to rerouting attacks than \emph{all} previous designs. Unlike previous designs, our new architecture operates decoy routers only on the downstream traffic of the censored users; therefore we call it \emph{downstream-only} decoy routing. As we demonstrate through Internet-scale BGP simulations, downstream-only decoy routing offers significantly stronger resistance to rerouting attacks, intuitively because a (censoring) ISP has much less control over the downstream BGP routes of its traffic. We then propose game-theoretic approaches to model the arms race between censors and censorship circumvention tools. This allows us to analyze the effect of different parameters or censoring behaviors on the performance of circumvention tools. We apply our methods to two fundamental problems in Internet censorship.
Finally, to bring our ideas to practice, we design a new censorship circumvention tool called \name. \name aims at increasing the collateral damage of censorship by employing a ``mass'' of normal Internet users, from both censored and uncensored areas, to serve as circumvention proxies.