Command & Control: Understanding, Denying and Detecting - A review of malware C2 techniques, detection and defences
In this survey, we first briefly review the current state of cyber attacks,
highlighting significant recent changes in how and why such attacks are
performed. We then investigate the mechanics of malware command and control
(C2) establishment: we provide a comprehensive review of the techniques used by
attackers to set up such a channel and to hide its presence from the attacked
parties and the security tools they use. We then switch to the defensive side
of the problem, and review approaches that have been proposed for the detection
and disruption of C2 channels. We also map such techniques to widely-adopted
security controls, emphasizing gaps or limitations (and success stories) in
current best practices.
Comment: Work commissioned by CPNI, available at c2report.org. 38 pages. Listing abstract compressed from the version appearing in the report.
SoK: An Analysis of Protocol Design: Avoiding Traps for Implementation and Deployment
Today's Internet utilizes a multitude of different protocols. While some of these protocols were first implemented and used and only later documented, others were first specified and then implemented. Regardless of how protocols came to be, their definitions can contain traps that lead to insecure implementations or deployments. A classical example is insufficiently strict authentication requirements in a protocol specification. The resulting misconfigurations, i.e., not enabling strong authentication, are common root causes of Internet security incidents. Indeed, Internet protocols have commonly been designed without security in mind, which leads to a multitude of misconfiguration traps. While this is slowly changing, overly strict security considerations can have a similarly bad effect: due to complex implementations and insufficient documentation, security features may remain unused, leaving deployments vulnerable. In this paper we provide a systematization of the security traps found in common Internet protocols. By separating protocols into four classes, we identify major factors that lead to common security traps. These insights, together with observations about end-user-centric usability and security by default, are then used to derive recommendations for improving existing protocols and designing new ones without such security-sensitive traps for operators, implementors, and users.
Flow Data Collection in Large Scale Networks
In this chapter, we present flow-based network traffic monitoring of large-scale networks. The continuous increase in Internet traffic requires the deployment of advanced monitoring techniques that provide near real-time and long-term network visibility. Collected flow data can further be used for network behavior analysis to distinguish legitimate from malicious traffic, to prove cyber threats, etc. An early warning system should integrate flow-based monitoring to ensure network situational awareness.
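The core idea of flow export can be sketched in a few lines: packets sharing the same 5-tuple are grouped, and only per-flow counters are kept, which is what makes the approach scale. The minimal Python sketch below is illustrative only (the packet records and field layout are invented for the example); real NetFlow/IPFIX exporters also handle flow timeouts and active/inactive expiration.

```python
from collections import defaultdict

# Invented packet records: (src_ip, dst_ip, src_port, dst_port, proto, bytes)
packets = [
    ("10.0.0.1", "192.0.2.7", 51514, 443, "TCP", 1500),
    ("10.0.0.1", "192.0.2.7", 51514, 443, "TCP", 400),
    ("10.0.0.2", "192.0.2.9", 40000, 53, "UDP", 80),
]

def aggregate_flows(packets):
    """Group packets by their 5-tuple key and accumulate per-flow counters,
    mimicking what a flow exporter does before exporting flow records."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for src, dst, sport, dport, proto, size in packets:
        key = (src, dst, sport, dport, proto)
        flows[key]["packets"] += 1
        flows[key]["bytes"] += size
    return dict(flows)

flows = aggregate_flows(packets)
for key, stats in flows.items():
    print(key, stats)
```

Note how three packets collapse into two flow records; this compression is why flow monitoring remains feasible at large-network scale where full packet capture is not.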
Detection of Early-Stage Enterprise Infection by Mining Large-Scale Log Data
Recent years have seen the rise of more sophisticated attacks including
advanced persistent threats (APTs) which pose severe risks to organizations and
governments by targeting confidential proprietary information. Additionally,
new malware strains are appearing at a higher rate than ever before. Since many
of these malware strains are designed to evade existing security products, the
traditional defenses deployed by most enterprises today (e.g., anti-virus,
firewalls, intrusion detection systems) often fail to detect infections at an
early stage.
We address the problem of detecting early-stage infection in an enterprise
setting by proposing a new framework based on belief propagation inspired from
graph theory. Belief propagation can be used either with "seeds" of compromised
hosts or malicious domains (provided by the enterprise security operation
center -- SOC) or without any seeds. In the latter case we develop a detector
of C&C communication particularly tailored to enterprises which can detect a
stealthy compromise of only a single host communicating with the C&C server.
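As a rough illustration of the seeded case, suspicion scores can be propagated back and forth over the bipartite host-domain graph until hosts that share rare domains with known-bad seeds stand out. The Python sketch below is a simplified score-propagation loop, not the paper's actual belief propagation algorithm; the graph, damping factor, and scores are all invented for the example.

```python
# Hypothetical bipartite graph: (host, domain) edges meaning "host queried
# domain", with seed domains flagged as malicious by the SOC.
edges = [
    ("hostA", "evil.example"), ("hostA", "cdn.example"),
    ("hostB", "evil.example"), ("hostB", "rare.example"),
    ("hostC", "cdn.example"),
]
seeds = {"evil.example"}

def propagate(edges, seeds, rounds=2, damping=0.5):
    """Alternately push suspicion from domains to the hosts querying them
    and back again, keeping seed domains pinned at score 1.0."""
    dom_score = {d: 1.0 if d in seeds else 0.0 for _, d in edges}
    host_score = {}
    for _ in range(rounds):
        # each host scores as the average suspicion of its queried domains
        host_score = {
            h: sum(dom_score[d] for hh, d in edges if hh == h)
               / sum(1 for hh, _ in edges if hh == h)
            for h in {h for h, _ in edges}
        }
        # non-seed domains inherit damped suspicion from querying hosts
        for d in dom_score:
            if d in seeds:
                continue
            scores = [host_score[h] for h, dd in edges if dd == d]
            dom_score[d] = damping * sum(scores) / len(scores)
    return host_score, dom_score

host_score, dom_score = propagate(edges, seeds)
```

After two rounds, hostA and hostB (which contacted the seed) score well above hostC, and rare.example scores above the widely shared cdn.example, matching the intuition that rare domains contacted by suspicious hosts deserve scrutiny.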
We demonstrate that our techniques perform well on detecting enterprise
infections. We achieve high accuracy with low false detection and false
negative rates on two months of anonymized DNS logs released by Los Alamos
National Lab (LANL), which include APT infection attacks simulated by LANL
domain experts. We also apply our algorithms to 38TB of real-world web proxy
logs collected at the border of a large enterprise. Through careful manual
investigation in collaboration with the enterprise SOC, we show that our
techniques identified hundreds of malicious domains overlooked by
state-of-the-art security products.
Unveiling SSHCure 3.0: Flow-based SSH Compromise Detection
Network-based intrusion detection systems have always been designed to report on the presence of attacks. Due to the sheer and ever-increasing number of attacks on the Internet, Computer Security Incident Response Teams (CSIRTs) are overwhelmed with attack reports. For that reason, there is a need for the detection of compromises rather than compromise attempts, since those incidents are the ones that have to be taken care of. In previous works, we have demonstrated and validated our state-of-the-art compromise detection algorithm that works on exported flow data, i.e., data exported using NetFlow or IPFIX. The detection algorithm has been implemented as part of our open-source intrusion detection system SSHCure.
In this demonstration, we showcase the latest release of SSHCure, which includes many new features, such as an overhauled user interface design based on user surveys, integration with incident reporting tools, blacklist integration, and IPv6 support. Attendees will be able to explore SSHCure in a semi-live fashion by means of practical examples of situations that CSIRT members encounter in their daily activities.
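To give a flavour of flow-based SSH compromise detection, the sketch below labels flows toward port 22 by their packets-per-flow and flags sources that show brute-force attempts followed by a much larger flow, suggesting an interactive, i.e., successful, session. The thresholds and flow records are illustrative only and are not SSHCure's actual parameters or algorithm.

```python
from collections import defaultdict

def classify_ssh_flow(packets_per_flow):
    """Label an SSH flow by its size. Thresholds are illustrative: tiny
    flows look like scans, mid-sized ones like failed login attempts, and
    much larger ones like an interactive session."""
    if packets_per_flow <= 2:
        return "scan"
    if packets_per_flow <= 50:
        return "brute-force"
    return "session"

def detect_compromises(flows):
    """Flag sources that made brute-force attempts and later produced a
    session-sized flow, i.e. the brute-force likely succeeded."""
    phases = defaultdict(set)
    for src, packets in flows:
        phases[src].add(classify_ssh_flow(packets))
    return [src for src, seen in phases.items()
            if {"brute-force", "session"} <= seen]

# Invented flow records (src_ip, packets) toward port 22 of some server:
flows = [
    ("198.51.100.7", 1), ("198.51.100.7", 2),                       # scanner
    ("203.0.113.5", 30), ("203.0.113.5", 28), ("203.0.113.5", 400), # attacker
]
print(detect_compromises(flows))  # only 203.0.113.5 is flagged
```

The point of reporting only the last category is exactly the one made above: CSIRTs care about compromises, not the vastly more numerous scans and failed attempts.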
Community-Based Intrusion Detection
Today, virtually every company world-wide is connected to the Internet. This wide-spread connectivity has given rise to sophisticated, targeted, Internet-based attacks. For example, between 2012 and 2013 security researchers counted an average of about 74 targeted attacks per day. These attacks are motivated by economical, financial, or political interests and commonly referred to as “Advanced Persistent Threat (APT)” attacks. Unfortunately, many of these attacks are successful and the adversaries manage to steal important data or disrupt vital services. Victims are preferably companies from vital industries, such as banks, defense contractors, or power plants. Given that these industries are well-protected, often employing a team of security specialists, the question is: How can these attacks be so successful?
Researchers have identified several properties of APT attacks which make them so effective. First, they are adaptable. This means that they can change the way they attack and the tools they use for this purpose at any given moment in time. Second, they conceal their actions and communication, for example by using encryption. This renders many defense systems useless, as they assume complete access to the actual communication content. Third, their
actions are stealthy — either by keeping communication to the bare minimum or by mimicking legitimate users. This makes them “fly below the radar” of defense systems which check for anomalous communication. And finally, with the goal of increasing their impact or monetisation prospects, their attacks are targeted against several companies from the same industry. Since months can pass between the first attack, its detection, and comprehensive analysis, it is often too late to deploy appropriate counter-measures at their business peers. Instead, it is much more likely that those peers have already been attacked successfully.
This thesis tries to answer the question whether the last property (industry-wide attacks) can be used to detect such attacks. It presents the design, implementation and evaluation of a community-based intrusion detection system, capable of protecting businesses at industry-scale. The contributions of this thesis are as follows. First, it presents a novel algorithm for community detection which can detect an industry (e.g., energy, financial, or defense industries) in Internet communication. Second, it demonstrates the design, implementation, and evaluation of a distributed graph mining engine that is able to scale with the throughput of the input data while maintaining an end-to-end latency for updates in the range of a few milliseconds. Third, it illustrates the usage of this engine to detect APT attacks against industries by analyzing IP flow information from an Internet service provider.
Finally, it introduces an algorithm- and input-agnostic intrusion detection engine which supports not only intrusion detection on IP flows but also any other intrusion detection algorithm and data source.
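A common building block for the first contribution, finding densely connected groups (industries) in a communication graph, is label propagation: each node repeatedly adopts the most frequent label among its neighbours until labels stabilize into communities. The sketch below is a deterministic toy variant on an invented graph, not the thesis's actual community detection algorithm.

```python
from collections import Counter, defaultdict

# Invented communication graph: two dense clusters joined by a weak bridge.
edges = [
    ("bankA", "bankB"), ("bankB", "bankC"), ("bankA", "bankC"),
    ("utilW", "utilX"), ("utilW", "utilY"), ("utilW", "utilZ"),
    ("utilX", "utilY"), ("utilX", "utilZ"), ("utilY", "utilZ"),
    ("bankC", "utilX"),  # the bridge between the two industries
]

def label_propagation(edges, rounds=5):
    """Deterministic label propagation: every node adopts the most frequent
    label among its neighbours, keeping its own label on a tie if possible,
    otherwise taking the lexicographically smallest candidate."""
    neigh = defaultdict(set)
    for a, b in edges:
        neigh[a].add(b)
        neigh[b].add(a)
    label = {n: n for n in neigh}           # start with one label per node
    for _ in range(rounds):
        for n in sorted(neigh):             # fixed order keeps it deterministic
            counts = Counter(label[m] for m in neigh[n])
            best = max(counts.values())
            candidates = sorted(l for l, c in counts.items() if c == best)
            label[n] = label[n] if label[n] in candidates else candidates[0]
    return label

label = label_propagation(edges)
```

On this toy graph the two clusters converge to two distinct labels despite the bridge edge, which mirrors the intuition that companies within one industry communicate far more densely with each other than across industries.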
Mathematical and Statistical Opportunities in Cyber Security
The role of mathematics in a complex system such as the Internet has yet to
be deeply explored. In this paper, we summarize some of the important and
pressing problems in cyber security from the viewpoint of open science
environments. We start by posing the question "What fundamental problems exist
within cyber security research that can be helped by advanced mathematics and
statistics?" Our first and most important assumption is that access to
real-world data is necessary to understand large and complex systems like the
Internet. Our second assumption is that many proposed cyber security solutions
could critically damage both the openness and the productivity of scientific
research. After examining a range of cyber security problems, we come to the
conclusion that the field of cyber security poses a rich set of new and
exciting research opportunities for the mathematical and statistical sciences.
A Survey on Big Data for Network Traffic Monitoring and Analysis
Network Traffic Monitoring and Analysis (NTMA) represents a key component of network management, especially to guarantee the correct operation of large-scale networks such as the Internet. As the complexity of Internet services and the volume of traffic continue to increase, it becomes difficult to design scalable NTMA applications. Applications such as traffic classification and policing require real-time and scalable approaches. Anomaly detection and security mechanisms require quickly identifying and reacting to unpredictable events while processing millions of heterogeneous events. Finally, the system has to collect, store, and process massive sets of historical data for post-mortem analysis. These are precisely the challenges faced by general big data approaches: Volume, Velocity, Variety, and Veracity. This survey brings together NTMA and big data. We catalog previous work on NTMA that adopts big data approaches to understand to what extent the potential of big data is being explored in NTMA. This survey mainly focuses on approaches and technologies to manage the big NTMA data, additionally briefly discussing big data analytics (e.g., machine learning) for the sake of NTMA. Finally, we provide guidelines for future work, discussing lessons learned and research directions.
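One example of the kind of fixed-memory streaming technique that scalable NTMA systems lean on is the count-min sketch, which estimates per-key traffic counts (e.g., flows per source address) without storing every distinct key. The implementation below is a minimal illustrative sketch, not taken from the survey; by construction its estimates can only overcount, never undercount.

```python
import hashlib

class CountMinSketch:
    """Fixed-memory frequency estimator: a depth x width table of counters.
    Each key increments one counter per row; the estimate is the minimum
    over its counters, so it is an upper bound on the true count."""

    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, key, row):
        # derive one independent-looking hash per row from a stable digest
        digest = hashlib.sha256(f"{row}:{key}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def add(self, key, count=1):
        for row in range(self.depth):
            self.table[row][self._index(key, row)] += count

    def estimate(self, key):
        return min(self.table[row][self._index(key, row)]
                   for row in range(self.depth))

# Invented stream of flow keys (source addresses):
cms = CountMinSketch()
for flow_key in ["10.0.0.1", "10.0.0.1", "10.0.0.2"]:
    cms.add(flow_key)
```

The memory footprint is width x depth counters regardless of how many distinct sources appear, which is exactly the Volume/Velocity trade-off the survey discusses: bounded state and constant-time updates in exchange for probabilistic accuracy.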