870 research outputs found
Emerging Phishing Trends and Effectiveness of the Anti-Phishing Landing Page
Each month, more attacks are launched with the aim of making web users
believe that they are communicating with a trusted entity, compelling them to
share their personal and financial information. Phishing costs Internet users
billions of dollars every year. Researchers at Carnegie Mellon University (CMU)
created an anti-phishing landing page, supported by the Anti-Phishing Working
Group (APWG), with the aim of training users to protect themselves from
phishing attacks. It is used by financial institutions, phishing-site take-down
vendors, government organizations, and online merchants. When a potential
victim clicks on a phishing link that has been taken down, he or she is
redirected to the landing page. In this paper, we present a comparative
analysis of two datasets obtained from APWG's landing page log files: one from
September 7, 2008 to November 11, 2009, and the other from January 1, 2014 to
April 30, 2014. We found that the landing page has been successful in training
users against phishing. Forty-six percent of users clicked a smaller number of
phishing URLs from January 2014 to April 2014, which shows that training from
the landing page helped users avoid falling for phishing attacks. Our analysis
shows that phishers have started to modify their techniques by creating more
legitimate-looking URLs and buying large numbers of domains to increase their
activity. We observed that phishers are exploiting ICANN-accredited registrars
to launch their attacks even under strict surveillance. We saw that phishers
are trying to exploit free subdomain registration services to carry out
attacks. In this paper, we also compare the phishing e-mails used by phishers
to lure victims in 2008 and 2014. We found that phishing e-mails have changed
considerably over time. Phishers have adopted new techniques, such as sending
promotional e-mails and emotionally targeting users into clicking phishing
URLs.
Command & Control: Understanding, Denying and Detecting - A review of malware C2 techniques, detection and defences
In this survey, we first briefly review the current state of cyber attacks,
highlighting significant recent changes in how and why such attacks are
performed. We then investigate the mechanics of malware command and control
(C2) establishment: we provide a comprehensive review of the techniques used by
attackers to set up such a channel and to hide its presence from the attacked
parties and the security tools they use. We then switch to the defensive side
of the problem, and review approaches that have been proposed for the detection
and disruption of C2 channels. We also map such techniques to widely-adopted
security controls, emphasizing gaps or limitations (and success stories) in
current best practices.
Comment: Work commissioned by CPNI, available at c2report.org. 38 pages.
Listing abstract compressed from version appearing in the report.
Improving the efficiency of spam filtering through cache architecture
Blacklists (BLs), also called Domain Name System-based Blackhole Lists
(DNSBLs), are databases of Internet addresses known to be used by spammers to
send out spam mail. Mail servers use these lists to filter out e-mails coming
from different spam sources. In contrast, Whitelists (WLs) are explicit lists
of senders from whom e-mail can be accepted or delivered. A Mail Transport
Agent (MTA) is usually configured to reject, challenge, or flag messages sent
from sources listed on one or more DNSBLs, and to allow messages from sources
listed on the WLs. In this paper, we demonstrate how bandwidth performance
(the overall requests and responses that need to go over the network) is
improved by using local caches for BLs and WLs. The actual senders' IP
addresses are extracted from the e-mail log. These are then compared with the
lists in the local caches to decide whether they should be accepted, before
being checked against the global DNSBLs by running 'DNSBL queries' (if
required). Around three quarters of e-mail sources were observed to be
filtered locally through caches with this method. Local control over the lists
and lower search (filtering) time are further benefits. © 2008 IEEE
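The caching scheme described above can be sketched as follows. This is a
minimal illustration with hypothetical names (`CachedFilter`,
`global_dnsbl_lookup`), not the paper's implementation; it assumes the standard
DNSBL convention of querying the sender's IPv4 octets in reverse order under a
list zone, and only issues that network-bound query on a local cache miss.

```python
def dnsbl_query_name(ip: str, zone: str = "zen.spamhaus.org") -> str:
    """Build the reverse-octet DNSBL query name for an IPv4 address."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

class CachedFilter:
    """Consult local WL/BL caches first; fall back to a global DNSBL query."""

    def __init__(self):
        self.wl_cache = set()  # locally cached known-good sender IPs
        self.bl_cache = set()  # locally cached known-bad sender IPs

    def check(self, ip: str, global_dnsbl_lookup) -> str:
        if ip in self.wl_cache:
            return "accept"    # served from local cache, no network traffic
        if ip in self.bl_cache:
            return "reject"    # served from local cache, no network traffic
        # Cache miss: run the DNSBL query over the network, then cache result.
        if global_dnsbl_lookup(dnsbl_query_name(ip)):
            self.bl_cache.add(ip)
            return "reject"
        self.wl_cache.add(ip)
        return "accept"
```

In a real deployment, `global_dnsbl_lookup` would resolve the query name via
DNS and treat any `127.0.0.x` answer as "listed"; the bandwidth saving comes
from repeated senders hitting the local cache instead of the resolver.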
Hiding in Plain Sight: A Longitudinal Study of Combosquatting Abuse
Domain squatting is a common adversarial practice where attackers register
domain names that are purposefully similar to popular domains. In this work, we
study a specific type of domain squatting called "combosquatting," in which
attackers register domains that combine a popular trademark with one or more
phrases (e.g., betterfacebook[.]com, youtube-live[.]com). We perform the first
large-scale, empirical study of combosquatting by analyzing more than 468
billion DNS records---collected from passive and active DNS data sources over
almost six years. We find that almost 60% of abusive combosquatting domains
live for more than 1,000 days, and even worse, we observe increased activity
associated with combosquatting year over year. Moreover, we show that
combosquatting is used to perform a spectrum of different types of abuse
including phishing, social engineering, affiliate abuse, trademark abuse, and
even advanced persistent threats. Our results suggest that combosquatting is a
real problem that requires increased scrutiny by the security community.
Comment: ACM CCS 1
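The combosquatting pattern the abstract describes can be sketched in a few
lines. This is an illustrative check of our own (the seed trademark list and
function name are made up, and the paper's measurement pipeline is far more
involved): a domain is flagged when a trademark appears verbatim inside its
registered label together with extra tokens, which distinguishes
combosquatting from typosquatting.

```python
TRADEMARKS = {"facebook", "youtube", "paypal"}  # illustrative seed list

def is_combosquat(domain: str) -> bool:
    """Flag domains whose registered label embeds a trademark plus extras."""
    label = domain.lower().rsplit(".", 1)[0]  # strip the TLD
    label = label.split(".")[-1]              # keep only the registered label
    for mark in TRADEMARKS:
        # Trademark present verbatim, but the label is not the brand itself.
        if mark in label and label != mark:
            return True
    return False
```

Note that a subdomain such as `login.paypal.com` is not flagged, since only
the registered label is inspected; the examples from the abstract
(`betterfacebook[.]com`, `youtube-live[.]com`) are.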
PhishDef: URL Names Say It All
Phishing is an increasingly sophisticated method to steal personal user
information using sites that pretend to be legitimate. In this paper, we take
the following steps to identify phishing URLs. First, we carefully select
lexical features of the URLs that are resistant to obfuscation techniques used
by attackers. Second, we evaluate the classification accuracy when using only
lexical features, both automatically and hand-selected, vs. when using
additional features. We show that lexical features are sufficient for all
practical purposes. Third, we thoroughly compare several classification
algorithms, and we propose to use an online method (AROW) that is able to
overcome noisy training data. Based on the insights gained from our analysis,
we propose PhishDef, a phishing detection system that uses only URL names and
combines the above three elements. PhishDef is a highly accurate method (when
compared to state-of-the-art approaches over real datasets), lightweight (thus
appropriate for online and client-side deployment), proactive (based on online
classification rather than blacklists), and resilient to training data
inaccuracies (thus enabling the use of large noisy training data).
Comment: 9 pages, submitted to IEEE INFOCOM 201
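Obfuscation-resistant lexical features of the kind the abstract refers to can
be computed from the URL string alone. The feature set below is our own
illustrative choice, not PhishDef's exact list; each resulting vector would
then feed an online learner such as AROW.

```python
from urllib.parse import urlparse

def lexical_features(url: str) -> dict:
    """Extract simple lexical features from a URL string (no page fetch)."""
    parsed = urlparse(url)
    host = parsed.netloc
    path = parsed.path
    return {
        "url_len": len(url),                   # phishing URLs tend to be long
        "host_len": len(host),
        "num_dots": host.count("."),           # many subdomain levels
        "num_hyphens": host.count("-"),
        "has_ip_host": host.replace(".", "").isdigit(),  # raw-IP host
        "num_path_tokens": len([t for t in path.split("/") if t]),
        "has_at": "@" in url,                  # '@' abuses the authority part
    }
```

A deceptive host such as `paypal.com.secure-login.example.com` scores high on
dot and hyphen counts even though it contains no page content to inspect,
which is what makes purely lexical classification practical client-side.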
Evaluating IP Blacklists Effectiveness
IP blacklists are widely used to increase network security by preventing
communications with peers that have been marked as malicious. There are several
commercial offerings as well as several free-of-charge blacklists maintained by
volunteers on the web. Despite their wide adoption, the effectiveness of the
different IP blacklists in real-world scenarios is still not clear. In this
paper, we conduct a large-scale network monitoring study which provides
insightful findings regarding the effectiveness of blacklists. The results
collected over several hundred thousand IP hosts belonging to three distinct
large production networks highlight that blacklists are often tuned for
precision, with the result that many malicious activities, such as scanning,
are completely undetected. The proposed instrumentation approach to detect IP
scanning and suspicious activities is implemented with home-grown and
open-source software. Our tools enable the creation of blacklists without the
security risks posed by the deployment of honeypots.
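The precision-versus-coverage trade-off highlighted above can be made
concrete. The sketch below uses made-up sets and a hypothetical helper name;
it simply computes precision and recall of a blacklist against locally
confirmed-malicious and benign hosts, which is the kind of measurement the
study performs at scale.

```python
def blacklist_metrics(blacklist: set, malicious: set, benign: set):
    """Precision/recall of a blacklist against ground-truth host sets."""
    tp = len(blacklist & malicious)   # listed and actually malicious
    fp = len(blacklist & benign)      # listed but actually benign
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / len(malicious) if malicious else 0.0
    return precision, recall
```

A blacklist tuned for precision can score 1.0 on the first metric while
missing half of the observed scanners, which is exactly the failure mode the
paper reports.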
Master of Puppets: Analyzing And Attacking A Botnet For Fun And Profit
A botnet is a network of compromised machines (bots), under the control of an
attacker. Many of these machines are infected without their owners' knowledge,
and botnets are the driving force behind several misuses and criminal
activities on the Internet (for example spam emails). Depending on its
topology, a botnet can have zero or more command and control (C&C) servers,
which are centralized machines controlled by the cybercriminal that issue
commands and receive reports back from the co-opted bots.
In this paper, we present a comprehensive analysis of the command and control
infrastructure of one of the world's largest proprietary spamming botnets
between 2007 and 2012: Cutwail/Pushdo. We identify the key functionalities
needed by a spamming botnet to operate effectively. We then develop a number of
attacks against the command and control logic of Cutwail that target those
functionalities, and make the spamming operations of the botnet less effective.
This analysis was made possible by having access to the source code of the C&C
software, as well as setting up our own Cutwail C&C server, and by implementing
a clone of the Cutwail bot. With the help of this tool, we were able to
enumerate the number of bots currently registered with the C&C server,
impersonate an existing bot to report false information to the C&C server, and
manipulate spamming statistics of an arbitrary bot stored in the C&C database.
Furthermore, we were able to make the control server inaccessible by conducting
a distributed denial of service (DDoS) attack. Our results may be used by law
enforcement and practitioners to develop better techniques to mitigate and
cripple other botnets, since many of our findings are generic and stem from
the workflow of C&C communication in general.