Protecting web services with service oriented traceback architecture
Service oriented architecture (SOA) is a way of reorganizing software infrastructure into a set of service abstractions. Although applying SOA to Web service security has produced some well defined security dimensions, current Web security systems such as WS-Security are not efficient enough to handle distributed denial of service (DDoS) attacks. Our new approach, service oriented traceback architecture (SOTA), provides a framework for identifying the source of an attack. This is accomplished by deploying our defence system at distributed routers, which examine incoming SOAP messages and insert our own SOAP header. The information in this new SOAP header can then be used to trace the attack back through the network to its source. According to our experimental performance evaluations, SOTA is scalable, simple and quite effective at identifying the source.
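The header-stamping idea can be sketched as follows. This is an illustrative assumption, not SOTA's actual schema: the `SOTAMark` element name and the router identifier format are hypothetical.

```python
# Hypothetical sketch: a router-side filter stamps its identity into a SOAP
# header so the victim can later reconstruct the message's path.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)

def mark_soap_message(envelope_xml: str, router_id: str) -> str:
    """Append this router's identifier to a (hypothetical) SOTA header entry."""
    root = ET.fromstring(envelope_xml)
    header = root.find(f"{{{SOAP_NS}}}Header")
    if header is None:
        # Create the Header block if the message does not carry one yet.
        header = ET.Element(f"{{{SOAP_NS}}}Header")
        root.insert(0, header)
    mark = ET.SubElement(header, "SOTAMark")  # element name is illustrative
    mark.text = router_id
    return ET.tostring(root, encoding="unicode")

envelope = (
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    '<soap:Body><Ping/></soap:Body></soap:Envelope>'
)
marked = mark_soap_message(envelope, "router-7")
```

Since each traversed router appends its own mark, the victim can read the collected `SOTAMark` entries in order to recover the route the message took.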
Unified Defense against DDoS Attacks
With DoS/DDoS attacks emerging as one of the primary security threats in today's Internet, the search is on for an efficient DDoS defense mechanism that would provide attack prevention, mitigation and traceback features, in as few packets as possible and with no collateral damage. Although several techniques have been proposed to tackle this growing menace, no effective solution exists to date, due to the growing sophistication of the attacks and the increasingly complex Internet architecture. In this paper, we propose a unified framework that integrates traceback and mitigation capabilities for an effective attack defense. Some significant aspects of our approach include: (1) a novel data cube model to represent the traceback information, and its slicing along the lines of path signatures rather than router signatures; (2) characterizing traceback as a transmission scheduling problem on the data cube representation, and achieving scheduling optimality using a novel metric called utility; and (3) an information delivery architecture employing both packet marking and data logging in a distributed manner to achieve faster response times. The proposed scheme can thus provide both per-packet mitigation and multi-packet traceback capabilities due to effective data slicing of the cube, and can attain higher detection speeds due to novel utility rate analysis. We also contrast this unified scheme with other well-known schemes in the literature to understand the performance tradeoffs, while providing an experimental evaluation of the proposed scheme on real data sets.
Affecting IP traceback with recent Internet topology maps
Computer network attacks are on the increase and are more sophisticated in today's network environment than ever before. One step in tackling the increasing spate of attacks is the availability of a system that can trace attack packets back to their original sources irrespective of invalid or manipulated source addresses. IP traceback is one such method, and several schemes have already been proposed in this area. Notably though, no traceback scheme is in wide use today, for reasons including a lack of compatibility with existing network protocols and infrastructure, as well as the high costs of deployment. Recently, remarkable progress has been made in the area of Internet topology mapping, and more detailed and useful maps and metrics of the Internet are being made available to the corporate and academic research communities. This thesis introduces a novel use of these maps to influence IP traceback in general, and packet marking schemes in particular. We note that while other schemes have previously taken advantage of such maps, most of these have viewed the maps from the available router node level. We take a novel router-aggregation node view of the Internet and explore ways to use it to improve packet marking schemes and to solve the problem of the limited space available in the current IP header for marking purposes. We evaluate our proposed schemes using real network paths traversed by several traceroute packets from diverse sources to various destinations, and compare our results to other packet marking schemes. Finally, we explore the possibility of partial deployment of one of our schemes and estimate the probability of success at different stages of deployment.
Tradeoffs in Probabilistic Packet Marking for IP Traceback
There has been considerable recent interest in probabilistic packet marking schemes for the problem of tracing a sequence of network packets back to an anonymous source. An important consideration for such schemes is the number of packet header bits that need to be allocated to the marking protocol. Let b denote this value. All previous schemes belong to a class of protocols for which b must be at least log n, where n is the number of bits used to represent the path of the packets. In this paper, we introduce a new marking technique for tracing a sequence of packets sent along the same path. This new technique is effective even when b = 1; in other words, the sequence of packets can be traced back to its source using only a single bit in the packet header. With this scheme, the number of packets required to reconstruct the path is O(2^{2n}), but we also show that Ω(2^n) packets are required for any protocol where b = 1. We also study the tradeoff between b and the number of packets required. We provide a protocol and a lower bound that together demonstrate that for the optimal protocol, the number of packets required (roughly) increases exponentially with n, but decreases doubly exponentially with b.
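For intuition about how probabilistic marking lets a victim recover a path, here is a simulation of the classic *node-sampling* scheme that this line of work generalizes. It is not the single-bit protocol of the paper above; the topology and parameters are invented for illustration.

```python
# Classic probabilistic node sampling: each router overwrites the mark field
# with its own ID with probability p. With p > 0.5, routers farther from the
# victim survive in the mark less often, so sorting the observed marks by
# frequency recovers the path order.
import random
from collections import Counter

def send_packet(path, p):
    """Return the mark carried by one packet traversing `path` (victim-side last)."""
    mark = None
    for router in path:
        if random.random() < p:
            mark = router  # later routers overwrite earlier marks
    return mark

def reconstruct(path, p=0.6, n_packets=20000):
    marks = Counter(send_packet(path, p) for _ in range(n_packets))
    # Most frequent mark belongs to the router nearest the victim
    # (it is the last one with a chance to overwrite).
    return [r for r, _ in marks.most_common() if r is not None]

random.seed(1)
true_path = ["A", "B", "C", "D"]       # attacker behind A, victim after D
print(reconstruct(true_path))           # expect ['D', 'C', 'B', 'A']
```

The expected frequency of router i's mark falls off as p(1-p)^d with distance d from the victim, which is why many packets are needed; the paper's contribution is showing how far the header cost b can be squeezed below this scheme's log n bits.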
A composable approach to design of newer techniques for large-scale denial-of-service attack attribution
Since its early days, the Internet has witnessed not only phenomenal growth, but also a large number of security attacks, and in recent years denial-of-service (DoS) attacks have emerged as one of the top threats. The stateless, destination-oriented nature of Internet routing, combined with the ability to harness a large number of compromised machines and the relative ease and low cost of launching such attacks, has made this a hard problem to address. Additionally, the myriad requirements of scalability, incremental deployment, adequate user privacy protections, and appropriate economic incentives have further complicated the design of DDoS defense mechanisms. While the many research proposals to date have focused variously on prevention, mitigation, or traceback of DDoS attacks, a comprehensive approach satisfying the different design criteria for successful attack attribution has so far been lacking.
Our first contribution here has been the design of a composable data model that represents the various dimensions of the attack attribution problem, particularly the performance attributes of accuracy, effectiveness, speed and overhead, as orthogonal and mutually independent design considerations. We have then designed custom optimizations along each of these dimensions, and have further integrated them into a single composite model to provide strong performance guarantees. The proposed model thus gives us a single framework that can not only address the individual shortcomings of the various known attack attribution techniques, but also provide a more comprehensive countermeasure against DDoS attacks.
Our second contribution here has been a concrete implementation based on the proposed composable data model, having adopted a graph-theoretic approach to identify and subsequently stitch together individual edge fragments in the Internet graph to reveal the true routing path of any network data packet. The proposed approach has been analyzed through theoretical and experimental evaluation across multiple metrics, including scalability, incremental deployment, speed and efficiency of the distributed algorithm, and finally the total overhead associated with its deployment. We have thereby shown that it is realistically feasible to provide strong performance and scalability guarantees for Internet-wide attack attribution.
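The edge-stitching idea can be illustrated in a few lines. This is a deliberately simplified toy, not the dissertation's actual encoding: each mark is assumed to reveal one (upstream, downstream) edge fragment, and the victim chains fragments whose endpoints match.

```python
# Toy illustration of stitching edge fragments into a routing path.
def stitch_path(fragments, victim):
    """Chain (upstream, downstream) edge fragments into a path ending at `victim`."""
    prev_hop = {down: up for up, down in fragments}
    # Walk upstream from the victim until no further fragment matches.
    node, path = victim, [victim]
    while node in prev_hop:
        node = prev_hop[node]
        path.append(node)
    return list(reversed(path))

# Fragments arrive in arbitrary order, one per observed mark.
edges = [("R2", "R3"), ("R1", "R2"), ("R3", "victim")]
print(stitch_path(edges, "victim"))   # ['R1', 'R2', 'R3', 'victim']
```

The real problem is harder because fragments from many concurrent flows interleave and marks are lossy, which is where the graph-theoretic machinery and the performance analysis above come in.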
Our third contribution here has further advanced the state of the art by directly identifying individual path fragments in the Internet graph, having adopted a distributed divide-and-conquer approach employing simple recurrence relations as individual building blocks. A detailed analysis of the proposed approach on real-life Internet topologies, with respect to network storage and traffic overhead, has provided a more realistic characterization. Thus, not only does the proposed approach lend itself well to simplified operations at scale, but it can also provide robust network-wide performance and security guarantees for Internet-wide attack attribution.
Our final contribution here has introduced the notion of anonymity into the overall attack attribution process, significantly broadening its scope. The highly invasive nature of widespread data gathering for network traceback continues to violate one of the key principles of Internet use today: the ability to stay anonymous and operate freely without retribution. In this regard, we have successfully reconciled these mutually divergent requirements to make attack attribution not only economically feasible and politically viable but also socially acceptable.
This work opens up several directions for future research: analysis of existing attack attribution techniques to identify further scope for improvement; incorporation of newer attributes into the design framework of the composable data model abstraction; and finally the design of newer attack attribution techniques that comprehensively integrate the various attack prevention, mitigation and traceback techniques in an efficient manner.
Towards IP traceback based defense against DDoS attacks.
Lau Nga Sin. Thesis (M.Phil.)--Chinese University of Hong Kong, 2004. Includes bibliographical references (leaves 101-110). Abstracts in English and Chinese.
Chapter 1: Introduction (research motivation; problem statement; research objectives; structure of the thesis)
Chapter 2: Background Study on DDoS Attacks (DDoS attack architecture, taxonomy, tools and detection; attack source traceback: link testing, logging, ICMP-based traceback, packet marking, comparison of various IP traceback schemes; packet filtering: ingress, egress, route-based, IP traceback-based, router-based pushback)
Chapter 3: Domain-based IP Traceback Scheme (overview; assumptions; proposed packet marking scheme: IP markings with edge sampling, domain-based design motivation, mathematical principle, marking mechanism, storage space of the marking fields, packet marking integrity, path reconstruction)
Chapter 4: Route-based Packet Filtering Scheme (placement of filters at sources' and victim's networks; classification of packets; filtering mechanism)
Chapter 5: Performance Evaluation (simulation setup; experiments on the IP traceback scheme and the packet filtering scheme: performance metrics, choice of marking and filtering probabilities, experimental results; deployment issues: backward compatibility, processing overheads to the routers and network; evaluations)
Chapter 6: Conclusion (contributions; discussions and future work)
Design Optimization and Security For Communication Networks
In this work we introduce a new mathematical tool for optimization
of routes, topology design, and energy efficiency in wireless
sensor networks. We propose a vector field formulation that
models communication in the network, and routing is performed in
the direction of this vector field at every location of the
network. The magnitude of the vector field at every location
represents the density of the data being transmitted
through that location. We define the total communication cost in
the network as the integral of a quadratic form of the vector
field over the network area.
With the above formulation, we introduce a mathematical machinery
based on partial differential equations, very similar to
Maxwell's equations in electrostatics. We show that in order
to minimize the cost, the routes should be found based on the
solution of these partial differential equations. In our
formulation, the sensors are sources of information, and they are
similar to the positive charges in electrostatics, the
destinations are sinks of information and they are similar to
negative charges, and the network is similar to a non-homogeneous
dielectric medium with a variable dielectric constant (or
permittivity coefficient).
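In notation assumed here for illustration (the text defines these objects only verbally), the formulation can be summarized as:

```latex
% D: information-flow vector field, rho: source/sink density (sensors
% positive, destinations negative), w: permittivity-like coefficient.
\begin{align*}
  \text{cost} &= \int_{A} \frac{\lVert D(x)\rVert^{2}}{w(x)}\, dA
    && \text{(quadratic cost over the network area } A\text{)}\\
  \nabla \cdot D &= \rho
    && \text{(conservation of information flow)}\\
  D &= -\, w\, \nabla \phi
    && \text{(the cost-minimizing flow follows a potential } \phi\text{)}
\end{align*}
```

Minimizing the quadratic cost subject to the conservation constraint forces the flow into the potential form, exactly as the electrostatic displacement field satisfies D = εE with E = -∇φ.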
In one of the applications of our mathematical model based on the
vector fields, we offer a scheme for energy efficient routing. Our
routing scheme raises the permittivity coefficient in regions of
the network where nodes have high residual energy, and lowers it
in regions where nodes have little energy left. Our
simulations show that our method yields a significant increase in
network lifetime compared to the shortest-path and weighted
shortest-path schemes.
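A discrete analogue of the permittivity idea can be sketched as a graph-routing rule; the cost function and the tiny topology below are assumptions for illustration, not the thesis's continuous model.

```python
# Energy-aware routing sketch: edge cost grows as the entered node's residual
# energy shrinks, so traffic is steered around nearly-depleted nodes, which is
# the discrete counterpart of lowering permittivity where energy is scarce.
import heapq

def energy_aware_path(adj, energy, src, dst):
    """Dijkstra with edge cost = 1 / residual energy of the node being entered."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v in adj[u]:
            nd = d + 1.0 / energy[v]   # low residual energy -> high cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

adj = {"s": ["a", "b"], "a": ["t"], "b": ["t"], "t": []}
energy = {"s": 1.0, "a": 0.2, "b": 0.9, "t": 1.0}
print(energy_aware_path(adj, energy, "s", "t"))   # ['s', 'b', 't'], avoiding depleted a
```

Here the route through b wins even though both routes have the same hop count, because node a's low residual energy (0.2) makes its edge five times as expensive.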
Our initial focus is on the case where there is only one
destination in the network, and later we extend our approach to
the case where there are multiple destinations in the network. In
the case of having multiple destinations, we need to partition the
network into several areas known as regions of attraction of the
destinations. Each destination is responsible for collecting all
messages being generated in its region of attraction. The
difficulty of the optimization problem in this case lies in how to
define regions of attraction for the destinations and how much
communication load to assign to each destination to optimize the
performance of the network. We use our vector field model to solve
the optimization problem for this case. We define a vector field
that is conservative and hence can be written as the gradient
of a scalar field (also known as a potential field). Then we show
that in the optimal assignment of the communication load of the
network to the destinations, the value of that potential field
should be equal
at the locations of all the destinations.
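Writing d_1, …, d_k for the destination locations and φ for the potential field (notation assumed here for illustration), the optimality condition reads:

```latex
\[
  \phi(d_1) \;=\; \phi(d_2) \;=\; \cdots \;=\; \phi(d_k).
\]
```

Intuitively, if the potential differed between two destinations, shifting a small amount of communication load from the higher-potential destination to the lower one would reduce the total cost, so equality must hold at the optimum.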
Another application of our vector field model is to find the
optimal locations of the destinations in the network. We show that
the vector field gives the gradient of the cost function with
respect to the locations of the destinations. Based on this fact,
we suggest an algorithm to be applied during the design phase of a
network to relocate the destinations for reducing the
communication cost function. The performance of our proposed
schemes is confirmed by several examples and simulation
experiments.
In another part of this work we focus on the notions of
responsiveness and conformance of TCP traffic in communication
networks. We introduce the notion of responsiveness for TCP
aggregates and define it as the degree to which a TCP aggregate
reduces its sending rate to the network in response to packet
drops. We define metrics that describe the responsiveness of TCP
aggregates, and suggest two methods for determining the values of
these quantities. The first method is based on a test in which we
drop a few packets from the aggregate intentionally and measure
the resulting rate decrease of that aggregate. This kind of test
is not robust to multiple simultaneous tests performed at
different routers. We make the test robust to such simultaneous
tests by using ideas from the CDMA approach to
multiple-access channels in communication theory. Based on this
approach, we introduce a robust responsiveness test for
aggregates, which we call the CDMA-based Aggregate Perturbation Method (CAPM). We use
CAPM to perform congestion control. A distinguishing feature of
our congestion control scheme is that it maintains a
degree of fairness among different aggregates.
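The CDMA idea behind CAPM can be shown with a toy model; the Walsh codes are standard, but the response magnitudes and the noiseless superposition below are illustrative assumptions, not measured behavior.

```python
# Two routers perturb their aggregates following orthogonal +/-1 signatures.
# Correlating the observed aggregate response against each signature separates
# the two simultaneous tests, just as CDMA separates co-channel transmitters.
def correlate(signal, chips):
    """Normalized inner product of an observed signal with a chip sequence."""
    return sum(s * c for s, c in zip(signal, chips)) / len(chips)

# Orthogonal Walsh codes of length 4: their inner product is zero.
sig_a = [+1, +1, -1, -1]
sig_b = [+1, -1, +1, -1]

resp_a, resp_b = 3.0, 1.0   # per-chip rate change each test alone would cause
# The observed aggregate response is the superposition of both tests
# (noiseless toy model for clarity).
observed = [resp_a * a + resp_b * b for a, b in zip(sig_a, sig_b)]

print(correlate(observed, sig_a))   # recovers 3.0 despite the overlapping test
print(correlate(observed, sig_b))   # recovers 1.0
```

Because the signatures are orthogonal, each router's correlation is blind to the other router's perturbation, which is exactly the robustness property claimed above.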
In the next step we modify CAPM to offer methods for estimating
the proportion of an aggregate of TCP traffic that does not
conform to protocol specifications, and hence may belong to a DDoS
attack. Our methods work by intentionally perturbing the aggregate
by dropping a very small number of packets from it and observing
the response of the aggregate. We offer two methods for
conformance testing. In the first method, we apply the
perturbation tests to SYN packets being sent at the start of the
TCP 3-way handshake, and we use the fact that the rate of ACK
packets being exchanged in the handshake should follow the rate of
perturbations. In the second method, we apply the perturbation
tests to the TCP data packets and use the fact that the rate of
retransmitted data packets should follow the rate of
perturbations. In both methods, we use signature-based
perturbations, meaning that packet drops are performed at a rate
given by a function of time. We use the analogy between our problem
and multiple-access communication to find signatures. Specifically, we
assign orthogonal CDMA-based signatures to different routers in a
distributed implementation of our methods. As a result of this
orthogonality, performance does not degrade due to cross
interference between simultaneously testing routers. We have shown
the efficacy of our methods through mathematical analysis and
extensive simulation experiments.
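The conformance estimate can be sketched in the same correlation framework; all quantities below are synthetic, and the noiseless linear response is an assumption made only to keep the illustration short.

```python
# Sketch of perturbation-based conformance testing: data packets are dropped
# following a +/-1 signature; conformant TCP retransmits the dropped packets,
# so correlating the retransmission rate with the signature estimates the
# conformant fraction of the aggregate. Attack traffic ignores the drops.
def correlate(signal, chips):
    return sum(s * c for s, c in zip(signal, chips)) / len(chips)

signature = [+1, -1, -1, +1, +1, -1]   # per-interval perturbation pattern
drops_per_chip = 10.0                  # packets dropped in a "+1" interval
conformant_fraction = 0.7              # ground truth this toy model recovers

# Retransmissions from the conformant portion track the drops chip by chip.
retransmits = [conformant_fraction * drops_per_chip * c for c in signature]

estimate = correlate(retransmits, signature) / drops_per_chip
print(round(estimate, 3))
```

Since the signature chips square to one, the correlation collapses to the conformant fraction times the drop amplitude, and dividing by the amplitude recovers the fraction; real traffic would add noise around this value.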