Evaluation of Windows Servers Security Under ICMP and TCP Denial of Service Attacks
Securing servers from distributed denial-of-service (DDoS) attacks is a challenging task for network operators. DDoS attacks are known to degrade the performance of web-based applications and reduce the number of legitimate client connections. In this thesis, we evaluate the performance of a Windows Server 2003 system under these attacks. We also evaluate and compare the effectiveness of three protection mechanisms against TCP SYN DDoS attacks: SYN cache, SYN cookie, and SYN proxy. We find that SYN attack protection at the server is more effective at lower loads of SYN attack traffic, whereas SYN cookie protection is more effective at higher loads.
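The trade-off above hinges on how SYN cookies avoid keeping per-connection state: the server encodes a time counter and a keyed hash of the connection 4-tuple into its initial sequence number, and only allocates connection state when a valid cookie is echoed back in the ACK. A minimal sketch, in which the field widths, epoch length, and secret handling are illustrative assumptions rather than the thesis's configuration:

```python
import hashlib
import time

COUNTER_PERIOD = 64      # seconds per cookie epoch (illustrative value)
SECRET = b"server-secret"  # per-server secret key (assumption)

def _mac(src, dst, sport, dport, counter):
    """24-bit keyed hash over the connection 4-tuple and time counter."""
    data = f"{src}:{sport}->{dst}:{dport}|{counter}".encode()
    digest = hashlib.sha256(SECRET + data).digest()
    return int.from_bytes(digest[:3], "big")

def make_cookie(src, dst, sport, dport, now=None):
    """Encode a SYN cookie into a 32-bit initial sequence number:
    top 8 bits = time counter, low 24 bits = MAC over the 4-tuple."""
    counter = int((now or time.time()) // COUNTER_PERIOD) & 0xFF
    return (counter << 24) | _mac(src, dst, sport, dport, counter)

def check_cookie(cookie, src, dst, sport, dport, now=None, max_age=2):
    """Validate an echoed cookie; accept cookies up to max_age epochs old."""
    current = int((now or time.time()) // COUNTER_PERIOD)
    counter = (cookie >> 24) & 0xFF
    if (current - counter) % 256 > max_age:
        return False  # cookie too old (or from the future)
    return (cookie & 0xFFFFFF) == _mac(src, dst, sport, dport, counter)
```

Because the cookie is self-validating, a flood of SYNs consumes no backlog entries; the cost is recomputing the hash on each returning ACK, which is consistent with the finding that cookies pay off mainly at higher attack loads.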
Mitigating Botnet-based DDoS Attacks against Web Servers
Distributed denial-of-service (DDoS) attacks have become widespread on the Internet. They continuously target retail merchants, financial companies and government institutions, disrupting the availability of their online resources and causing millions of dollars of financial losses. Software vulnerabilities and the proliferation of malware have helped create a class of application-level DDoS attacks using networks of compromised hosts (botnets). In a botnet-based DDoS attack, an attacker orders large numbers of bots to send seemingly regular HTTP and HTTPS requests to a web server, so as to deplete the server's CPU, disk, or memory capacity.
Researchers have proposed client authentication mechanisms, such as CAPTCHA puzzles, to distinguish bot traffic from legitimate client activity and discard bot-originated packets. However, CAPTCHA authentication is vulnerable to denial-of-service and artificial intelligence attacks. This dissertation proposes that clients instead use hardware tokens to authenticate in a federated authentication environment. The federated authentication solution must resist both man-in-the-middle and denial-of-service attacks. The proposed system architecture uses the Kerberos protocol to satisfy both requirements. This work proposes novel extensions to Kerberos to make it more suitable for generic web authentication.
A server could verify client credentials and blacklist repeated offenders. Traffic from blacklisted clients, however, still traverses the server's network stack and consumes server resources. This work proposes Sentinel, a dedicated front-end network device that intercepts server-bound traffic, verifies authentication credentials and filters blacklisted traffic before it reaches the server. Using a front-end device also allows transparently deploying hardware acceleration using network co-processors. Network co-processors can discard blacklisted traffic at the hardware level before it wastes front-end host resources.
We implement the proposed system architecture by integrating existing software applications and libraries. We validate the system implementation by evaluating its performance under DDoS attacks consisting of floods of HTTP and HTTPS requests.
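The filtering step Sentinel performs can be sketched as a small admission check sitting in front of the server. The token scheme, offense threshold, and names below are hypothetical simplifications of the architecture described above (a real deployment would verify Kerberos tickets, not bare strings):

```python
class SentinelFilter:
    """Sketch of a Sentinel-style front end: verify a client credential,
    count failures, and drop traffic from repeat offenders before it
    reaches the server. The threshold and token model are illustrative."""

    def __init__(self, valid_tokens, max_offenses=3):
        self.valid_tokens = set(valid_tokens)
        self.offenses = {}       # client_ip -> failed-auth count
        self.blacklist = set()   # clients whose traffic is dropped outright
        self.max_offenses = max_offenses

    def admit(self, client_ip, token):
        """Return True if the request may be forwarded to the server."""
        if client_ip in self.blacklist:
            return False  # dropped before consuming back-end resources
        if token in self.valid_tokens:
            return True
        # invalid credential: record the offense, blacklist repeat offenders
        self.offenses[client_ip] = self.offenses.get(client_ip, 0) + 1
        if self.offenses[client_ip] >= self.max_offenses:
            self.blacklist.add(client_ip)
        return False
```

The key design point carried over from the abstract is that the blacklist check comes first: once a client is blacklisted, even syntactically valid requests are discarded at the front end, which is also the check that a network co-processor could perform in hardware.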
OnionBots: Subverting Privacy Infrastructure for Cyber Attacks
Over the last decade, botnets have survived by adopting a sequence of increasingly sophisticated strategies to evade detection and takeover, and to monetize their infrastructure. At the same time, the success of privacy infrastructures such as Tor has opened the door to illegal activities, including botnets, ransomware, and marketplaces for drugs and contraband. We contend that the next waves of botnets will extensively subvert privacy infrastructure and cryptographic mechanisms. In this work we propose to preemptively investigate the design and mitigation of such botnets. We first introduce OnionBots, what we believe will be the next generation of resilient, stealthy botnets. OnionBots use privacy infrastructures for cyber attacks by completely decoupling their operation from the infected host's IP address and by carrying traffic that does not leak information about its source, destination, or nature. Such bots live symbiotically within the privacy infrastructures to evade detection, measurement, scale estimation, observation, and, in general, all current IP-based mitigation techniques. Furthermore, we show that with an adequate self-healing network maintenance scheme, one that is simple to implement, OnionBots achieve a low diameter and a low degree and are robust to partitioning under node deletions. We developed a mitigation technique, called SOAP, that neutralizes the nodes of the basic OnionBots. We also outline and discuss a set of techniques that could enable subsequent waves of Super OnionBots. In light of the potential of such botnets, we believe that the research community should proactively develop detection and mitigation methods to thwart OnionBots, potentially making adjustments to privacy infrastructure.
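One simple flavor of the self-healing maintenance described above can be sketched as a degree-bounded overlay that re-links a deleted node's neighbors. This toy only illustrates the connectivity and low-degree properties the abstract claims; it is not the paper's actual repair protocol, and all names are illustrative:

```python
import random

class SelfHealingOverlay:
    """Toy degree-bounded, self-healing overlay (an illustration of the
    general idea, not the paper's maintenance scheme). New nodes attach
    to a few random peers; when a node is deleted, its former neighbors
    are chained together so every path through it can be rerouted."""

    def __init__(self, max_degree=4, seed=0):
        self.adj = {}
        self.max_degree = max_degree
        self.attach = max_degree // 2  # links created at join time
        self.rng = random.Random(seed)

    def add_node(self, n):
        self.adj.setdefault(n, set())
        peers = [p for p in self.adj if p != n]
        self.rng.shuffle(peers)
        for p in peers:
            if len(self.adj[n]) >= self.attach:
                break
            if len(self.adj[p]) < self.max_degree:
                self.adj[n].add(p)
                self.adj[p].add(n)

    def delete_node(self, n):
        orphans = sorted(self.adj.pop(n))
        for p in orphans:
            self.adj[p].discard(n)
        # self-healing: chain the lost node's neighbors so the overlay
        # stays connected (interior chain nodes may exceed the degree
        # bound by one; a fuller scheme would rebalance afterwards)
        for a, b in zip(orphans, orphans[1:]):
            self.adj[a].add(b)
            self.adj[b].add(a)

    def is_connected(self):
        if not self.adj:
            return True
        seen, stack = set(), [next(iter(self.adj))]
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(self.adj[v] - seen)
        return len(seen) == len(self.adj)
```

Chaining a deleted node's neighbors guarantees that any path that ran through the deleted node can be rerouted, which is the essence of robustness to partitioning under node deletions while keeping per-node degree small.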
Survey on Security Issues in Cloud Computing and Associated Mitigation Techniques
Cloud computing holds the potential to eliminate the requirement of setting up high-cost computing infrastructure for the IT-based solutions and services that industry uses. It promises to provide a flexible IT architecture, accessible through the Internet from lightweight portable devices. This would allow a multi-fold increase in the capacity and capabilities of existing and new software. In a cloud computing environment, all data reside over a set of networked resources, enabling the data to be accessed through virtual machines. Since these data centers may lie in any corner of the world, beyond the reach and control of users, there are multifarious security and privacy challenges that need to be understood and addressed. One can also never rule out the possibility of a server breakdown, which has been witnessed rather often in recent times. Various issues therefore need to be dealt with in respect to security and privacy in a cloud computing scenario. This extensive survey aims to elaborate on and analyze the numerous unresolved issues threatening cloud computing adoption and diffusion and affecting the various stakeholders linked to it.
Adaptive Response System for Distributed Denial-of-Service Attacks
The continued prevalence and severely damaging effects of Distributed Denial of Service (DDoS) attacks in today's Internet raise growing security concerns and call for better solutions to tackle such attacks. Current DDoS prevention mechanisms are usually inflexible, and determined attackers with knowledge of these mechanisms can work around them. Most existing detection and response mechanisms are standalone systems that do not rely on adaptive updates to mitigate attacks. As different responses vary in their "leniency" in treating detected attack traffic, there is a need for an adaptive response system.
We designed and implemented the DDoS Adaptive ResponsE (DARE) system, a distributed DDoS mitigation system capable of executing appropriate detection and mitigation responses automatically and adaptively according to the attacks. It supports easy integration of both signature-based and anomaly-based detection modules. Additionally, the design of DARE's individual components takes into consideration the strengths and weaknesses of existing defence mechanisms, as well as the characteristics and possible future mutations of DDoS attacks. These components consist of an Enhanced TCP SYN Attack Detector and Bloom-based Filter, a DDoS Flooding Attack Detector and Flow Identifier, and a Non-Intrusive IP Traceback mechanism. The components work together interactively to adapt detections and responses in accordance with the attack types. Experiments conducted on DARE show that attack detection and mitigation are successfully completed within seconds, with about 60% to 86% of the attack traffic being dropped, while availability for existing and new legitimate requests is maintained. DARE is able to detect attacks and trigger appropriate responses with high accuracy, effectiveness and efficiency.
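The Bloom-based Filter component relies on the standard Bloom filter idea: a fixed bit array and k hash functions give a constant-space membership test, with a tunable false-positive rate and no false negatives, over flagged attack sources. A generic sketch with illustrative sizes (not DARE's actual parameters):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter of the kind a SYN-flood defence might use to
    remember flagged source addresses in constant space. Sizes are
    illustrative; real deployments tune num_bits/num_hashes to the
    expected number of entries and acceptable false-positive rate."""

    def __init__(self, num_bits=1024, num_hashes=4):
        self.bits = bytearray(num_bits // 8)
        self.num_bits = num_bits
        self.num_hashes = num_hashes

    def _positions(self, item):
        # derive num_hashes independent positions from a salted hash
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all((self.bits[pos // 8] >> (pos % 8)) & 1
                   for pos in self._positions(item))
```

The attraction for in-line filtering is that both `add` and the membership test touch only a few bits regardless of how many sources have been flagged, so the filter keeps up at line rate even under flooding.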
We also designed and implemented the Traffic Redirection Attack Protection System (TRAPS), a stand-alone DDoS attack detection and mitigation system for IPv6 networks. In TRAPS, the victim under attack verifies the authenticity of the source by performing virtual relocations to differentiate legitimate traffic from attack traffic. TRAPS requires minimal deployment effort and no modifications to the Internet infrastructure, owing to its use of the Mobile IPv6 protocol. Experiments to test the feasibility of TRAPS were carried out in a testbed environment to verify that it works with the existing Mobile IPv6 implementation. The operations of each module functioned correctly, and TRAPS was able to successfully mitigate an attack launched with spoofed source IP addresses.
Cyber Security
This open access book constitutes the refereed proceedings of the 16th International Annual Conference on Cyber Security, CNCERT 2020, held in Beijing, China, in August 2020. The 17 papers presented were carefully reviewed and selected from 58 submissions. The papers are organized according to the following topical sections: access control; cryptography; denial-of-service attacks; hardware security implementation; intrusion/anomaly detection and malware mitigation; social network security and privacy; systems security.
Herding Vulnerable Cats: A Statistical Approach to Disentangle Joint Responsibility for Web Security in Shared Hosting
Hosting providers play a key role in fighting web compromise, but their ability to prevent abuse is constrained by the security practices of their own customers. Shared hosting offers a unique perspective, since customers operate under restricted privileges and providers retain more control over configurations. We present the first empirical analysis of the distribution of web security features and software patching practices in shared hosting providers, the influence of providers on these security practices, and their impact on web compromise rates. We construct provider-level features on the global market for shared hosting, containing 1,259 providers, by gathering indicators from 442,684 domains. Exploratory factor analysis of 15 indicators identifies four main latent factors that capture security efforts: content security, webmaster security, web infrastructure security, and web application security. We confirm, via a fixed-effects regression model, that providers exert significant influence over the latter two factors, which are both related to the software stack in their hosting environment. Finally, by means of GLM regression analysis of these factors on phishing and malware abuse, we show that the four security and software patching factors explain between 10% and 19% of the variance in abuse at providers, after controlling for size. For web application security, for instance, we found that when a provider moves from the bottom 10% to the best-performing 10%, it would experience four times fewer phishing incidents. We show that providers have influence over patch levels, even higher in the stack where CMSes can run as client-side software, and that this influence is tied to a substantial reduction in abuse levels.
Performance Analysis of an Application-Level Mechanism for Preventing Service Flooding in the Internet
One of the most impactful technological developments of the last few years has been the emergence of the Internet. With the rapid growth of the Internet, it is becoming increasingly difficult to provide the necessary services to all users within a designated time period. As the gap between network-line and application-server rates grows, it is becoming easier to launch Distributed Denial of Service (DDoS) attacks against services on the Internet and remain undetected within the network. Gligor's rate control scheme is a novel mechanism for providing strong access guarantees to clients accessing public services, by generating and enforcing simple user-level agreements on dedicated special-purpose servers.
This thesis studies the results obtained from simulations in which this rate control scheme is applied to two kinds of networks: Content Distribution Networks and Domain Name Server-based networks. In particular, server utilization and client waiting times were studied, with the aim of finding bounds on parameters that improve server performance and of providing clients with reasonable maximum waiting times to service.
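As a rough illustration of the per-client rate enforcement that such user-level agreements imply, a token bucket caps each client's sustained request rate while permitting short bursts. This is a generic stand-in for enforcement at a dedicated server, not Gligor's actual mechanism:

```python
class TokenBucket:
    """Per-client token-bucket limiter: each client may issue `rate`
    requests per second on average, with bursts up to `capacity`.
    A generic sketch of rate enforcement, not the thesis's scheme."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # bucket starts full
        self.last = 0.0           # timestamp of the previous call

    def allow(self, now):
        """Admit one request at time `now` if a token is available."""
        # refill proportionally to elapsed time, capped at capacity
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Under such a policy, server utilization is bounded by the sum of admitted client rates, and a client that respects its agreement sees a bounded worst-case wait, which are exactly the two quantities the simulations above measure.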