FAIR: Forwarding Accountability for Internet Reputability
This paper presents FAIR, a forwarding accountability mechanism that
incentivizes ISPs to apply stricter security policies to their customers. The
Autonomous System (AS) of the receiver specifies a traffic profile that the
sender AS must adhere to. Transit ASes on the path mark packets. In case of
traffic profile violations, the marked packets are used as a proof of
misbehavior.
FAIR introduces low bandwidth overhead and requires no per-packet and no
per-flow state for forwarding. We describe integration with IP and demonstrate
a software switch running on commodity hardware that can switch packets at a
line rate of 120 Gbps, and can forward 140M minimum-sized packets per second,
limited by the hardware I/O subsystem.
Moreover, this paper proposes a "suspicious bit" for packet headers - an
application that builds on top of FAIR's proofs of misbehavior and flags
packets to warn other entities in the network.
Comment: 16 pages, 12 figures
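As an illustrative sketch of the accountability idea described above (the class names and the profile format are invented here, not FAIR's actual wire format), a receiver-side check might group marked packets into time windows and flag windows that exceed the agreed profile:

```python
# Hypothetical sketch of FAIR-style accountability: the receiver AS checks
# marked packets against the traffic profile agreed with the sender AS.
# All names and the profile format are illustrative, not FAIR's wire format.
from dataclasses import dataclass, field

@dataclass
class TrafficProfile:
    max_pkts_per_sec: int       # rate the sender AS agreed to
    max_bytes_per_sec: int

@dataclass
class MarkedPacket:
    sender_as: int
    size: int
    timestamp: float
    transit_marks: list = field(default_factory=list)  # marks from transit ASes

def find_violations(packets, profile, window=1.0):
    """Group marked packets into fixed windows; any window exceeding the
    profile becomes a 'proof of misbehavior' (the marked packets themselves)."""
    packets = sorted(packets, key=lambda p: p.timestamp)
    proofs = []
    i = 0
    while i < len(packets):
        start = packets[i].timestamp
        win = [p for p in packets[i:] if p.timestamp < start + window]
        pkts, byts = len(win), sum(p.size for p in win)
        if pkts > profile.max_pkts_per_sec or byts > profile.max_bytes_per_sec:
            proofs.append(win)          # the marked packets serve as evidence
        i += len(win)
    return proofs
```

Because the evidence is the marked packets themselves, no per-flow state is needed at forwarding time; the check runs offline at the receiver.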
A composable approach to design of newer techniques for large-scale denial-of-service attack attribution
Since its early days, the Internet has witnessed not only phenomenal growth, but also a large number of security attacks, and in recent years denial-of-service (DoS) attacks have emerged as one of the top threats. Stateless, destination-oriented Internet routing, combined with the ability to harness a large number of compromised machines and the relative ease and low cost of launching such attacks, has made this a hard problem to address. Additionally, the myriad requirements of scalability, incremental deployment, adequate user privacy protections, and appropriate economic incentives have further complicated the design of DDoS defense mechanisms. While the many research proposals to date have focused differently on prevention, mitigation, or traceback of DDoS attacks, the lack of a comprehensive approach satisfying the different design criteria for successful attack attribution is a serious concern.
Our first contribution here has been the design of a composable data model that has helped us represent the various dimensions of the attack attribution problem, particularly the performance attributes of accuracy, effectiveness, speed, and overhead, as orthogonal and mutually independent design considerations. We then designed custom optimizations along each of these dimensions and integrated them into a single composite model to provide strong performance guarantees. The proposed model thus gives us a single framework that can not only address the individual shortcomings of the various known attack attribution techniques, but also provide a more comprehensive countermeasure against DDoS attacks.
Our second contribution here has been a concrete implementation based on the proposed composable data model, having adopted a graph-theoretic approach to identify and subsequently stitch together individual edge fragments in the Internet graph to reveal the true routing path of any network data packet. The proposed approach has been analyzed through theoretical and experimental evaluation across multiple metrics, including scalability, incremental deployment, speed and efficiency of the distributed algorithm, and finally the total overhead associated with its deployment. We have thereby shown that it is realistically feasible to provide strong performance and scalability guarantees for Internet-wide attack attribution.
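The edge-stitching idea in this second contribution can be illustrated with a toy reconstruction (a simplified sketch, not the thesis's exact algorithm): given edge fragments annotated with their hop distance from the victim, chain them in distance order to recover the routing path:

```python
# Illustrative sketch of stitching edge fragments into a path: each fragment
# is (near_router, far_router, distance), where distance counts hops from the
# victim to near_router. Fragment format and chaining rule are invented here.
def stitch_path(fragments, victim):
    """Reconstruct the path from the victim outward by chaining fragments
    whose near end matches the far end of the previous fragment."""
    by_dist = {d: (near, far) for near, far, d in fragments}
    path = [victim]
    d = 0
    while d in by_dist:
        near, far = by_dist[d]
        if d > 0 and path[-1] != near:
            break                       # fragments do not chain: stop here
        if d == 0:
            path.append(near)           # router adjacent to the victim
        path.append(far)
        d += 1
    return path
```

The appeal of this shape is that each fragment is tiny and collected independently, yet the full path falls out of a linear pass over the distance index.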
Our third contribution here has further advanced the state of the art by directly identifying individual path fragments in the Internet graph, having adopted a distributed divide-and-conquer approach employing simple recurrence relations as individual building blocks. A detailed analysis of the proposed approach on real-life Internet topologies, with respect to network storage and traffic overhead, provides a more realistic characterization. Thus, not only does the proposed approach lend itself well to simplified operations at scale, but it can also provide robust network-wide performance and security guarantees for Internet-wide attack attribution.
Our final contribution here has introduced the notion of anonymity into the overall attack attribution process to significantly broaden its scope. The highly invasive nature of widespread data gathering for network traceback continues to violate one of the key principles of Internet use today - the ability to stay anonymous and operate freely without retribution. In this regard, we have reconciled these mutually divergent requirements to make attack attribution not only economically feasible and politically viable but also socially acceptable.
This work opens up several directions for future research - analysis of existing attack attribution techniques to identify further scope for improvement, incorporation of newer attributes into the design framework of the composable data model abstraction, and finally the design of newer attack attribution techniques that comprehensively integrate the various attack prevention, mitigation, and traceback techniques in an efficient manner.
Deployable filtering architectures against large denial-of-service attacks
Denial-of-Service attacks continue to grow in size and frequency despite serious underreporting.
While several research solutions have been proposed over the years, they have faced
important deployment hurdles that have prevented them from seeing any significant
deployment on the Internet. Commercial solutions exist, but they are costly and
generally not meant to scale to Internet-wide levels.
In this thesis we present three filtering architectures against large Denial-of-Service attacks.
Their emphasis is on providing an effective solution against such attacks while using
simple mechanisms to overcome the deployment hurdles faced by other solutions.
While these are well-suited to being implemented in fast routing hardware, in the early stages
of deployment this is unlikely to be the case. Because of this, we implemented them on low-cost
off-the-shelf hardware and evaluated their performance on a network testbed. The results are
very encouraging: this setup allows us to forward traffic on a single PC at rates of millions of
packets per second even for minimum-sized packets, while at the same time processing as many
as one million filters; this gives us confidence that the architecture as a whole could combat even
the large botnets currently being reported. Better yet, we show that this single-PC performance
scales well with the number of CPU cores and network interfaces, which is promising for our
solutions if we consider the current trend in processor design.
In addition to using simple mechanisms, we discuss how the architectures provide clear
incentives for ISPs that adopt them early, both at the destination as well as at the sources of
attacks. The hope is that these will be sufficient to achieve some level of initial deployment.
The larger goal is to have an architectural solution against large DoS attacks deployed in place
before even more harmful attacks take place; this thesis is hopefully a step in that direction.
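The thesis reports forwarding millions of packets per second on a single PC while holding as many as one million filters; the key enabler for that is a constant-time per-packet filter lookup. A toy sketch of that shape (the real filter semantics and data structures in the thesis may well differ) is:

```python
# Toy sketch of an O(1) per-packet filter lookup: a hash set of blocked
# source addresses. Illustrative only; the thesis's filters may match on
# other fields and use different data structures.
import ipaddress

class FilterTable:
    def __init__(self):
        self._blocked = set()           # exact-match /32 sources to drop

    def install(self, src: str):
        """Install a filter for a single source address."""
        self._blocked.add(ipaddress.ip_address(src))

    def should_drop(self, src: str) -> bool:
        """Per-packet check: constant time regardless of filter count."""
        return ipaddress.ip_address(src) in self._blocked
```

Because lookup cost does not grow with the number of installed filters, the table can absorb botnet-scale filter sets without slowing the forwarding path.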
Avoiditals: Enhanced Cyber-Attack Taxonomy In Securing Information Technology Infrastructure
Organizations today operate in digital environments that expose them to potential cyber-attacks. The problem is made worse by a rapidly changing cyber-attack landscape. The impact of cyber-attacks varies depending on the scope of the organization and the value of the assets that need to be protected. It is difficult to assess the damage to an organization from cyber-attacks due to a lack of understanding of tools, metrics, and knowledge of the types of attacks and their impacts. Hence, this paper aims to identify domains and sub-domains of a cyber-attack taxonomy to facilitate the understanding of cyber-attacks. Four phases are carried out in this research: identifying existing cyber-attack taxonomies, determining and classifying domains and sub-domains of cyber-attacks, and constructing the enhanced cyber-attack taxonomy. The existing cyber-attack taxonomies are analyzed, domains and sub-domains are selected based on the focus and objectives of the research, and the proposed taxonomy, named the AVOIDITALS Cyber-attack Taxonomy, is constructed. AVOIDITALS consists of 8 domains, 105 sub-domains, 142 sub-sub-domains, and 90 other sub-sub-domains that act as a guideline to assist administrators in determining cyber-attacks through identification of attack patterns that commonly occur on digital infrastructure, and it provides the best prevention methods to minimize impact. This research can be further developed in line with the emergence of new types and categories of cyber-attacks now and in the future.
Scalable schemes against Distributed Denial of Service attacks
Defense against Distributed Denial of Service (DDoS) attacks is one of the primary concerns on the Internet today. DDoS attacks are difficult to prevent because of the open, interconnected nature of the Internet and its underlying protocols, which can be used in several ways to deny service. Attackers hide their identity by using third parties such as private chat channels on IRC (Internet Relay Chat). They also insert a false return IP address into packets (spoofing), which makes it difficult for the victim to determine a packet's origin. We propose three novel and realistic traceback mechanisms which offer many advantages over existing schemes. All three schemes take advantage of the Autonomous System (AS) topology and consider the fact that the attacker's packets may traverse a number of domains under different administrative control. Most traceback mechanisms make the incorrect assumption that the network details of a company under one administrative control are disclosed to the public. For security reasons, this is usually not the case. The proposed schemes overcome this drawback by considering reconstruction at the inter- and intra-AS levels. Hierarchical Internet Traceback (HIT) and the Simple Traceback Mechanism (STM) trace back to an attacker in two phases. In the first phase the attack-originating Autonomous System is identified, while in the second phase the attacker within that AS is identified. Both schemes, HIT and STM, allow the victim to trace back to the attackers in a few seconds. Their computational overhead is very low and they scale to large distributed attacks with thousands of attackers. Fast Autonomous System Traceback allows complete attack-path reconstruction with few packets. We use traceroute maps of real Internet topologies from CAIDA's Skitter to simulate DDoS attacks and validate our design.
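The two-phase idea behind HIT and STM can be sketched as follows (the mark formats and data structures here are invented for illustration; the actual schemes differ). The point is that phase 2 data stays private to the origin AS, so internal topology is never disclosed:

```python
# Toy illustration of two-phase traceback. Phase 1 uses public AS-level
# marks to find the attack-originating AS; phase 2 uses a map held
# privately by that AS to resolve the ingress router. Mark formats are
# invented for illustration.
def phase1_origin_as(as_marks):
    """as_marks: list of (as_number, distance) marks recovered from attack
    packets. The AS farthest from the victim is the origin AS."""
    return max(as_marks, key=lambda m: m[1])[0]

def phase2_attacker(origin_as, intra_marks, packet_mark):
    """intra_marks: per-AS private map {as_number: {packet_mark: router}}.
    Only the origin AS can resolve its own marks, so its internal network
    details remain undisclosed to the public."""
    return intra_marks.get(origin_as, {}).get(packet_mark)
```

Splitting the reconstruction this way is also what keeps the schemes fast: the victim only reconstructs a short AS-level path, not a router-level one.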
Tracking Normalized Network Traffic Entropy to Detect DDoS Attacks in P4
Distributed Denial-of-Service (DDoS) attacks represent a persistent threat to
modern telecommunications networks: detecting and counteracting them is still a
crucial unresolved challenge for network operators. DDoS attack detection is
usually carried out in one or more central nodes that collect significant
amounts of monitoring data from networking devices, potentially creating issues
related to network overload or delay in detection. The dawn of programmable
data planes in Software-Defined Networks can help mitigate this issue, opening
the door to the detection of DDoS attacks directly in the data plane of the
switches. However, the most widely adopted data plane programming language,
namely P4, lacks support for many arithmetic operations; therefore, some of the
advanced network monitoring functionalities needed for DDoS detection cannot be
straightforwardly implemented in P4. This work overcomes this limitation and
presents two novel strategies for flow cardinality and for normalized network
traffic entropy estimation that only use P4-supported operations and guarantee
a low relative error. Additionally, based on these contributions, we propose a
DDoS detection strategy relying on variations of the normalized network traffic
entropy. Results show that it achieves comparable or higher detection accuracy than
state-of-the-art solutions while being simpler and entirely executed in the data plane.
Comment: Accepted by TDSC on 24/09/202
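The quantity the paper approximates with only P4-supported operations can be stated exactly in a few lines of ordinary code; this reference sketch shows what is being estimated (the alarm threshold here is illustrative, not the paper's):

```python
# Exact (non-P4) reference for the detection signal: normalized Shannon
# entropy of the per-flow packet counts, and an alarm on a sharp drop.
# Threshold value is illustrative.
import math

def normalized_entropy(flow_counts):
    """Shannon entropy of the flow-size distribution, divided by log2 of the
    flow cardinality so the result lies in [0, 1] for any number of flows."""
    total = sum(flow_counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in flow_counts.values())
    n = len(flow_counts)
    return h / math.log2(n) if n > 1 else 0.0

def ddos_alarm(prev_h, cur_h, threshold=0.25):
    """Flag a large drop in normalized entropy between measurement windows:
    traffic concentrating on few flows is a typical volumetric-DDoS sign."""
    return (prev_h - cur_h) > threshold
```

Normalizing by the flow cardinality is the subtle part: raw entropy grows with the number of flows, so only the normalized value is comparable across windows, which is why the paper needs both a cardinality estimator and an entropy estimator in P4.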
APCN: A Scalable Architecture for Balancing Accountability and Privacy in Large-scale Content-based Networks
This is the author accepted manuscript. The final version is available from Elsevier via the DOI in this record.
Balancing accountability and privacy has become extremely important in cyberspace, and the Internet has evolved to be dominated by content transmission. Several research efforts have been devoted to either accountability or privacy protection, but none of them has managed to consider both factors in content-based networks. An efficient solution is therefore urgently demanded by service and content providers. However, proposing such a solution is very challenging, because the following questions need to be considered simultaneously: (1) How can the conflict between privacy and accountability be avoided? (2) How is content identified and accountability performed based on packets belonging to that content? (3) How can the scalability issue of massive content accountability in large-scale networks be alleviated? To address these questions, we propose the first scalable architecture for balancing Accountability and Privacy in large-scale Content-based Networks (APCN). In particular, an innovative method for identifying content is proposed to effectively distinguish content issued by different senders and from different flows, enabling the accountability of content based on any of its packets. Furthermore, a new double-delegate design (i.e., source and local delegates) is proposed to improve performance and alleviate the scalability issue of content accountability in large-scale networks. Extensive NS-3 experiments with real traces are conducted to validate the efficiency of the proposed APCN. The results demonstrate that APCN outperforms existing related solutions in terms of lower round-trip time and higher cache hit rate under different network configurations.
Funding: National Key R&D Program of China; National Science and Technology Major Project of the Ministry of Science and Technology of China; National Natural Science Foundation of China.
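A speculative sketch of the packet-to-content binding described in the APCN abstract (the tag construction below is invented for illustration; APCN's actual method differs): if every packet carries a tag derived from the sender and content identifiers, then any single packet suffices to hold the content accountable at a delegate:

```python
# Invented illustration of per-packet content identification: a tag bound
# to (sender, content) lets a delegate resolve accountability from any one
# packet of that content. Not APCN's actual construction.
import hashlib

def content_tag(sender_id: str, content_id: str) -> str:
    """Deterministic tag that distinguishes the same content sent by
    different senders, and different contents from one sender."""
    return hashlib.sha256(f"{sender_id}|{content_id}".encode()).hexdigest()[:16]

def accountable_content(packet_tag, registry):
    """registry: delegate-side map tag -> (sender, content). Any packet's
    tag resolves to the responsible sender and content."""
    return registry.get(packet_tag)
```

The double-delegate idea then splits the registry: a source delegate registers tags once per content, while local delegates cache popular entries near receivers, which is where the reported cache-hit-rate gains would come from.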
Minimal deployable endpoint-driven network forwarding: principle, designs and applications
Networked systems now have a significant impact on human lives: the Internet, connecting the world globally, is the foundation of our information age; data centers, running hundreds of thousands of servers, drive the era of cloud computing; and even the Tor project, a networked system providing online anonymity, now serves millions of daily users.
Guided by the end-to-end principle, many computer networks have been designed with a simple and flexible core offering a general data transfer service, whereas the bulk of application-level functionality has been implemented on endpoints attached to the edge of the network. Although the end-to-end design principle has given these networked systems tremendous success, a number of new requirements have emerged for computer networks and their applications, including the untrustworthiness of endpoints, endpoint privacy requirements, more demanding applications, the rise of third-party intermediaries, and the asymmetric capabilities of endpoints. These emerging requirements have created various challenges in different networked systems.
To address these challenges, there is no obvious solution that does not add in-network functions to the network core. However, no design principle has ever been proposed to guide the implementation of in-network functions. In this thesis, we propose the first such principle and apply it in four designs across three different networked systems to address four separate challenges. We demonstrate through detailed implementation and extensive evaluation that the proposed principle can live in harmony with the end-to-end principle, and that a combination of the two principles offers a more complete, effective and accurate guide for innovating modern computer networks and their applications.