System Analysis of SPAM
Increasing reliance on electronic mail (e-mail) has attracted spammers to send more and more spam e-mails in order to maximize their financial gains. These unwanted e-mails not only clog Internet traffic but also cause storage problems at receiving servers. Beyond this, spam e-mails serve as a vehicle for a variety of online crimes and abuses. Although several anti-spam procedures are currently employed to distinguish spam e-mails from legitimate ones, spammers and phishers obfuscate their e-mail content to circumvent these procedures. The efficiency of anti-spam procedures in combating spam entry into the system depends greatly on their level of operation and on a clear insight into the various possible modes of spamming. In this paper we investigate a directed graph model of the Internet e-mail infrastructure and the spamming modes used by spammers to inject spam into the system. The paper outlines the routes, system components, devices and protocols exploited by each spamming mode.
Towards secure message systems
Message systems, which transfer information from sender to recipient via communication networks, are indispensable to our modern society. The enormous user base of message systems and their critical role in information delivery make it a top priority to secure them. This dissertation focuses on securing the two most representative and dominant message systems, e-mail and instant messaging (IM), from two complementary aspects: defending against unwanted messages and ensuring reliable delivery of wanted messages.
To curtail unwanted messages and protect e-mail and instant messaging users, this dissertation proposes two mechanisms, DBSpam and HoneyIM, which can effectively thwart e-mail spam laundering and foil malicious instant message spreading, respectively. DBSpam exploits the distinct characteristics of connection correlation and packet symmetry embedded in the behavior of spam laundering, and uses a simple statistical method, the Sequential Probability Ratio Test, to detect and break spam laundering activities inside a customer network in a timely manner. The experimental results demonstrate that DBSpam is effective in quickly and accurately capturing and suppressing e-mail spam laundering activities and is capable of coping with high-speed network traffic. HoneyIM leverages the inherent spreading characteristic of IM malware and applies honeypot technology to the detection of malicious instant messages. More specifically, HoneyIM uses decoy accounts in normal users' contact lists as honeypots to capture malicious messages sent by IM malware, and suppresses the spread of malicious instant messages by performing network-wide blocking. The efficacy of HoneyIM has been validated through both simulations and real experiments.
To improve e-mail reliability, that is, to prevent losses of wanted e-mail, this dissertation proposes a collaboration-based autonomous e-mail reputation system called CARE.
CARE introduces inter-domain collaboration without a central authority or third party, and enables each e-mail service provider to independently build its reputation database, covering both frequently contacted and unacquainted sending domains, based on the local e-mail history and the information exchanged with other collaborating domains. The effectiveness of CARE in improving e-mail reliability has been validated through a number of experiments, including a comparison of two large e-mail log traces from two universities, a real experiment of DNS snooping on more than 36,000 domains, and extensive simulation experiments in a large-scale environment.
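The Sequential Probability Ratio Test at the heart of DBSpam is described above only by name. As a minimal sketch, Wald's SPRT over Bernoulli observations might look as follows; here each observation stands for a hypothetical per-round "packet-symmetry test fired" indicator, and the probabilities p0, p1 and error rates alpha, beta are illustrative values, not DBSpam's actual parameters.

```python
import math

def sprt(observations, p0=0.2, p1=0.8, alpha=0.01, beta=0.01):
    """Wald's Sequential Probability Ratio Test for Bernoulli data.

    H0: success probability p0 (benign traffic)
    H1: success probability p1 (spam laundering)
    Returns "H0", "H1", or "undecided" if the stream ends first.
    """
    # Decision thresholds derived from the target error rates.
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0                              # running log-likelihood ratio
    for x in observations:                 # x is 1 if the indicator fired
        if x:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1"
        if llr <= lower:
            return "H0"
    return "undecided"
```

The appeal of SPRT in this setting, as the abstract suggests, is that it reaches a decision after only as many observations as the evidence requires, which suits timely in-network detection.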
Traffic Analysis Attacks and Defenses in Low Latency Anonymous Communication
The recent public disclosure of mass surveillance of electronic communication, involving powerful government authorities, has drawn the public's attention to issues regarding Internet privacy. For almost a decade now, there have been several research efforts towards designing and deploying open-source, trustworthy and reliable systems that ensure users' anonymity and privacy. These systems operate by hiding the true network identity of communicating parties from eavesdropping adversaries. Tor, an acronym for The Onion Router, is an example of such a system. These systems relay the traffic of their users through an overlay of nodes, called Onion Routers, operated by volunteers distributed across the globe, and they have served well as anti-censorship and anti-surveillance tools. However, recent publications have disclosed that powerful government organizations are seeking means to de-anonymize such systems and have deployed distributed monitoring infrastructure to aid their efforts.
Attacks against anonymous communication systems, like Tor, often involve traffic analysis. In such attacks, an adversary capable of observing network traffic statistics in several different networks correlates the traffic patterns in these networks and associates otherwise seemingly unrelated network connections. The process can lead an adversary to the source of an anonymous connection. However, due to their design, consisting of globally distributed relays, the users of anonymity networks like Tor can route their traffic virtually via any network, hiding their tracks and true identities from their communication peers and eavesdropping adversaries. De-anonymization of a random anonymous connection is hard, as the adversary is required to correlate traffic patterns in one network link to those in virtually all other networks. Past research mostly involved reducing the complexity of this process by first reducing the set of relays or network routers to monitor, and then identifying the actual source of anonymous traffic among the network connections routed via this reduced set. A study of various research efforts in this field reveals that there have been many more efforts to reduce the set of relays or routers to be searched than to explore methods for actually identifying an anonymous user amidst the network connections using these routers and relays. Few have tried to comprehensively study a complete attack that involves both reducing the set of relays and routers to monitor and identifying the source of an anonymous connection. Although it is believed that systems like Tor are trivially vulnerable to traffic analysis, there are various technical challenges and issues that can become obstacles to accurately identifying the source of an anonymous connection. It is hard to judge the vulnerability of anonymous communication systems without adequately exploring the issues involved in identifying the source of anonymous traffic.
We take steps to fill this gap by exploring two novel active traffic analysis attacks that rely solely on measurements of network statistics. In these attacks, the adversary tries to identify the source of an anonymous connection arriving at a server from an exit node. This generally involves correlating traffic entering and leaving the Tor network, linking otherwise unrelated connections. To increase the accuracy of identifying the victim connection among several connections, the adversary injects a traffic perturbation pattern into the connection, arriving at the server from a Tor node, that the adversary wants to de-anonymize. One way to achieve this is by colluding with the server and injecting a traffic perturbation pattern using common traffic shaping tools. Our first attack involves a novel remote bandwidth estimation technique to confirm the identity of Tor relays and network routers along the path connecting a Tor client and a server, by observing network bandwidth fluctuations deliberately injected by the server. The second attack involves correlating network statistics for connections entering and leaving the Tor network, available from existing network infrastructure such as Cisco's NetFlow, to identify the source of an anonymous connection. Additionally, we explored a novel technique to defend against the latter attack. Most proposed defenses against traffic analysis attacks that involve transmission of dummy traffic have not been implemented due to fears of potential performance degradation. Our technique involves transmission of dummy traffic consisting of packets whose IP headers carry small Time-to-Live (TTL) values. Such packets are discarded by routers before they reach their destination; they distort NetFlow statistics without degrading the client's performance.
Finally, we present a strategy that employs transmission of unique plain-text decoy traffic that appears sensitive, such as fake user credentials, through Tor nodes to decoy servers under our control. Periodic tallying of client and server logs to determine unsolicited connection attempts at the server is used to identify eavesdropping nodes. Such malicious Tor node operators, eavesdropping on users' traffic, could be potential traffic analysis attackers.
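The flow-correlation step these attacks depend on can be illustrated with a toy sketch: correlate a deliberately perturbed server-side traffic series against candidate entry-side series (think NetFlow-style per-interval byte counts) and pick the best match. The function names, the data shapes, and the 0.9 threshold are assumptions for illustration, not values from the dissertation.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0
    return cov / (sx * sy)

def best_match(server_series, candidate_flows, threshold=0.9):
    """Return the id of the candidate entry-side flow whose per-interval
    counts best correlate with the perturbed server-side series, or None
    if no candidate clears the threshold."""
    best_id, best_r = None, threshold
    for flow_id, series in candidate_flows.items():
        r = pearson(server_series, series)
        if r > best_r:
            best_id, best_r = flow_id, r
    return best_id
```

The injected perturbation matters precisely because it makes the victim's series distinctive enough for such a correlation to separate it from unrelated flows; the TTL-limited dummy-packet defense works by distorting the entry-side series the adversary records.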
Inferring malicious network events in commercial ISP networks using traffic summarisation
With the recent increases in bandwidth available to home users, traffic rates for commercial national networks have also been increasing rapidly. This presents a problem for any network monitoring tool, as the traffic rate they are expected to monitor is rising on a monthly basis. Security within these networks is paramount, as they are now an accepted home of trade and commerce. Core networks have been demonstrably and repeatedly open to attack; these events have had significant material costs to high-profile targets.
Network monitoring is an important part of network security, providing information about potential security breaches and aiding understanding of their impact. Monitoring at high data rates is a significant problem, both in terms of processing the information at line rates and in terms of presenting the relevant information to the appropriate persons or systems.
This thesis suggests that the use of summary statistics, gathered over a number of packets, is a sensible and effective way of coping with high data rates. A methodology for discovering which metrics are appropriate for classifying significant network events using statistical summaries is presented. It is shown that the statistical measures found with this methodology can be used effectively as a metric for defining periods of significant anomaly, and further for classifying these anomalies as legitimate or otherwise. In a laboratory environment, these metrics were used to detect DoS traffic representing as little as 0.1% of the overall network traffic.
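The windowed summary-statistic approach described above can be sketched minimally: compute one statistic per window of packets (here Shannon entropy of source addresses, a common choice for DoS detection) and flag windows whose statistic deviates strongly from the mean. Both the statistic and the z-score threshold are illustrative assumptions, not the metrics the thesis actually derives.

```python
import math
from collections import Counter

def window_entropy(packets):
    """Shannon entropy of source addresses over one window of
    (source, size) packet records."""
    counts = Counter(src for src, _size in packets)
    total = len(packets)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def flag_anomalies(windows, z_threshold=3.0):
    """Return indices of windows whose entropy deviates from the
    overall mean by more than z_threshold standard deviations."""
    stats = [window_entropy(w) for w in windows]
    n = len(stats)
    mean = sum(stats) / n
    var = sum((s - mean) ** 2 for s in stats) / n
    std = var ** 0.5 or 1e-9  # guard against a zero-variance baseline
    return [i for i, s in enumerate(stats) if abs(s - mean) / std > z_threshold]
```

The point of the summarisation is that only one number per metric per window crosses the monitoring boundary, so the per-window cost is independent of the line rate at which packets arrive.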
The metrics discovered were then analysed to demonstrate that they are appropriate and rational metrics for the detection of network-level anomalies. These metrics were shown to have distinctive characteristics during DoS by the analysis of live network observations taken during DoS events.
This work was implemented and operated within a live system, at multiple sites within the core of a commercial ISP network. The statistical summaries are generated at city-based points of presence and gathered centrally to allow for spatial and topological correlation of security events.
The architecture chosen was shown to be flexible in its application. The system was used to detect the level of VoIP traffic present on the network through the implementation of packet size distribution analysis in a multi-gigabit environment. It was also used to detect unsolicited SMTP generators injecting messages into the core.
Monitoring in a commercial network environment is subject to data protection legislation. Accordingly, the system presented processed only network and transport layer headers, all other data being discarded at the capture interface.
The system described in this thesis was operational for a period of six months, during which a set of over 140 network anomalies, both malicious and benign, were observed over a range of localities. The system design, example anomalies and metric analysis form the majority of this thesis.
A Survey on Wireless Security: Technical Challenges, Recent Advances and Future Trends
This paper examines the security vulnerabilities and threats imposed by the inherent open nature of wireless communications and devises efficient defense mechanisms for improving wireless network security. We first summarize the security requirements of wireless networks, including their authenticity, confidentiality, integrity and availability issues. Next, a comprehensive overview of the security attacks encountered in wireless networks is presented in view of the network protocol architecture, where the potential security threats are discussed at each protocol layer. We also provide a survey of the existing security protocols and algorithms adopted in existing wireless network standards, such as Bluetooth, Wi-Fi, WiMAX, and long-term evolution (LTE) systems. Then, we discuss the state of the art in physical-layer security, an emerging technique for securing the open communications environment against eavesdropping attacks at the physical layer. We also introduce the family of jamming attacks and their counter-measures, including the constant jammer, intermittent jammer, reactive jammer, adaptive jammer and intelligent jammer. Additionally, we discuss the integration of physical-layer security into existing authentication and cryptography mechanisms for further securing wireless networks. Finally, some technical challenges which remain unresolved at the time of writing are summarized and the future trends in wireless security are discussed.
Comment: 36 pages. Accepted to appear in Proceedings of the IEEE, 201
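The physical-layer security surveyed here is conventionally quantified by the secrecy capacity; for the standard Gaussian wiretap channel (a textbook result, not a formula taken from this paper) it is

```latex
C_s = \Big[\log_2\left(1+\gamma_M\right) - \log_2\left(1+\gamma_E\right)\Big]^{+}
```

where \gamma_M and \gamma_E are the signal-to-noise ratios at the legitimate receiver and the eavesdropper, and [x]^+ denotes max(x, 0). A positive secrecy capacity requires the legitimate channel to be better than the eavesdropper's, which is what motivates techniques such as friendly jamming and artificial noise aimed at degrading the eavesdropper's reception.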
A Macroscopic Study of Network Security Threats at the Organizational Level.
Defenders of today's network are confronted with a large number of malicious activities such as spam, malware, and denial-of-service attacks. Although many studies have been performed on how to mitigate security threats, the interaction between attackers and defenders is like a game of Whac-a-Mole, in which the security community is chasing after attackers rather than helping defenders to build systematic defensive solutions. As a complement to these studies that focus on attackers or end hosts, this thesis studies security threats from the perspective of the organization, the central authority that manages and defends a group of end hosts. This perspective provides a balanced position to understand security problems and to deploy and evaluate defensive solutions.
This thesis explores how a macroscopic view of network security from an organization's perspective can be formed to help measure, understand, and mitigate security threats. To realize this goal, we bring together a broad collection of reputation blacklists. We first measure the properties of the malicious sources identified by these blacklists and their impact on an organization. We then aggregate the malicious sources to Internet organizations and characterize the maliciousness of organizations and their evolution over a period of two and a half years. Next, we aim to understand the cause of different maliciousness levels in different organizations. By examining the relationship between eight security mismanagement symptoms and the maliciousness of organizations, we find a strong positive correlation between mismanagement and maliciousness. Lastly, motivated by the observation that some organizations have a significant fraction of their IP addresses involved in malicious activities, we evaluate the tradeoff of one type of mitigation solution at the organization level: network takedowns.
PhD. Computer Science and Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies.
http://deepblue.lib.umich.edu/bitstream/2027.42/116714/1/jingzj_1.pd
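The aggregation step, mapping blacklisted IPs onto the organizations that announce them, can be sketched as below. The prefix-to-organization mapping and the "fraction of address space listed" score are simplifying assumptions for illustration; the thesis's actual aggregation and maliciousness measures may differ.

```python
import ipaddress

def org_maliciousness(org_prefixes, blacklisted_ips):
    """For each organization (mapped to the prefixes it announces),
    compute the fraction of its address space that appears on the
    aggregated blacklists."""
    results = {}
    for org, prefixes in org_prefixes.items():
        nets = [ipaddress.ip_network(p) for p in prefixes]
        total = sum(n.num_addresses for n in nets)
        listed = sum(
            1 for ip in blacklisted_ips
            if any(ipaddress.ip_address(ip) in n for n in nets)
        )
        results[org] = listed / total
    return results
```

Normalizing by address space is one way to make organizations of very different sizes comparable, which is the kind of choice an organization-level study has to make explicitly.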
Formal analysis of firewall policies
This dissertation describes a technique for formally analyzing a firewall security policy using a quasi-reduced multiway decision diagram model. The analysis allows a system administrator to detect and repair errors in the configuration of the firewall without a tedious manual inspection of the firewall rules.
We present four major contributions. First, we describe a set of algorithms for representing a firewall rule set as a multiway decision diagram and for solving logical queries against that model. We demonstrate the application of these techniques in a tool for analyzing iptables firewalls. Second, we present an extension of our work that enables analysis of systems of connected firewalls and of firewalls that use network address translation and other packet-mangling rules. Third, we demonstrate a technique for decomposing a network into classes of equivalent hosts. These classes can be used to detect errors in a firewall policy without a priori knowledge of potential vulnerabilities. They can also be used with other firewall testing techniques to ensure comprehensive coverage of the test space. Fourth, we discuss a strategy for partially automating repair of the firewall policy through the use of counterexamples and rule history.
Using these techniques, a system administrator can detect and repair common firewall errors, such as typos, out-of-order rules, and shadowed rules. She can also develop a specification of the behaviors of the firewall and validate the firewall policy against that specification.
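The shadowed-rule check mentioned above can be illustrated with a deliberately simplified stand-in for the decision-diagram analysis: treat each rule as a port interval under first-match semantics and flag rules whose matches are entirely covered by earlier rules. Enumerating ports explicitly is only viable in this toy single-field setting; making such queries tractable over full packet headers is exactly what the multiway decision diagram provides.

```python
def shadowed_rules(rules):
    """A rule (lo, hi, action) matches ports lo..hi inclusive; the first
    matching rule wins.  A rule is shadowed if every port it matches is
    already matched by some earlier rule, so it can never fire."""
    covered = set()   # ports matched by rules seen so far
    shadowed = []
    for idx, (lo, hi, _action) in enumerate(rules):
        span = set(range(lo, hi + 1))
        if span <= covered:
            shadowed.append(idx)
        covered |= span
    return shadowed
```

For example, an accept rule for port 80 placed after a drop rule for ports 0-1023 is shadowed and will never fire, which is the class of out-of-order error the dissertation's analysis surfaces.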