102 research outputs found
Measuring and Disrupting Malware Distribution Networks: An Interdisciplinary Approach
Malware Delivery Networks (MDNs) are networks of webpages, servers, computers, and computer files that are used by cybercriminals to spread malicious software (malware) to victim machines. The business of malware delivery is a complex and multifaceted one that has become increasingly profitable over the last few years. Due to the ongoing arms race between cybercriminals and the security community, cybercriminals are constantly evolving and streamlining their techniques to beat security countermeasures and avoid disruption to their operations, such as security researchers infiltrating their botnet operations, or law enforcement taking down their infrastructures and arresting those involved. So far, the research community has conducted insightful but isolated studies into the different facets of malicious file distribution. Hence, only a limited picture of the malicious file delivery ecosystem has been provided thus far, leaving many questions unanswered. Using a data-driven and interdisciplinary approach, the purpose of this research is twofold: first, to study and measure the malicious file delivery ecosystem, bringing prior research into context, and to understand precisely how these malware operations respond to security and law enforcement intervention; and second, taking into account the overlapping research efforts of the information security and crime science communities towards preventing cybercrime, to identify mitigation strategies and intervention points to disrupt this criminal economy more effectively.
Detection, Triage, and Attribution of PII Phishing Sites
Stolen personally identifiable information (PII) can be abused to perform a multitude of crimes in the victim's name. For instance, credit card information can be used in the drug business, Social Security Numbers and health IDs can be used in insurance fraud, and passport data can be used for human trafficking or terrorism. Even information typically considered publicly available (e.g., name, birthday, phone number) can be used for unauthorized registration of services and generation of new accounts under the victim's identity (unauthorized account creation). Accordingly, modern phishing campaigns have outgrown the goal of account takeover and are trending towards more sophisticated goals.
While criminal investigations in the real world have evolved over centuries, digital forensics is only a few decades into the art. In digital forensics, threat analysts have pioneered the field of enhanced attribution, a branch of threat intelligence that aims to find links between attacks and attackers. Their findings provide valuable information for investigators, ultimately bolstering takedown efforts and helping determine the proper course of legal action. Although the overwhelming offer of security solutions today suggests great threat-analysis capabilities, vendors only share attack signatures, and additional intelligence remains locked into each vendor's ecosystem. Victims often hesitate to disclose attacks, fearing reputation damage and the accidental revealing of intellectual property. This phenomenon limits the availability of postmortem analyses of real-world attacks and often forces third-party investigators, like government agencies, to mine their own data.
In the absence of industry data, it can be promising to actively infiltrate fraudsters in an independent sting operation. Intuitively, undercover agents can be used to monitor online markets for illegal offerings; another common industry practice is to trap attackers in monitored sandboxes called honeypots. Using honeypots, investigators lure and deceive an attacker into believing an attack was successful while simultaneously studying the attacker's behavior. Insights gathered from this process allow investigators to examine the latest attack vectors, methodology, and overall trends. For either approach, investigators crave additional information about the attacker, so that they know what to look for. In the context of phishing attacks, it has been repeatedly proposed to "shoot tracers into the cloud" by stuffing phishing sites with fake information that can later be recognized in one way or another. However, to the best of our knowledge, no existing solution can keep up with modern phishing campaigns, because existing solutions focus on credential stuffing only, while modern campaigns steal more than just user credentials: they increasingly target PII instead. We observe that the use of HTML form input fields is a commonality among both credential-stealing and identity-stealing phishing sites, and we propose to thoroughly evaluate this feature for the detection, triage, and attribution of phishing attacks. This process includes extracting the phishing site's target PII from its HTML tags, investigating how JavaScript code stylometry can be used to fingerprint a phishing site for its detection, and determining commonalities between threat actors' personal styles.
Our evaluation shows that tag identifiers and tags are the most important features for this machine-learning classification task, lifting accuracy from 68% without these features to up to 92% when including them. We show that tag identifiers and code stylometry can also be used to decide whether a phishing site uses cloaking. We then propose to build the first denial-of-phishing engine (DOPE), which handles all phishing, both credential stealing and PII theft. DOPE analyzes HTML tags to learn which information to provide, and we craft this information in a believable manner, meaning it can be expected to pass the phisher's credibility tests.
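The tag-based feature extraction described above can be illustrated with a minimal sketch: parse a page's input tags and map their name/id attributes to PII categories. The keyword-to-category table and all helper names here are hypothetical illustrations, not taken from the thesis.

```python
from html.parser import HTMLParser

class FormFieldExtractor(HTMLParser):
    """Collects the name/id attributes of <input> fields, a rough
    indicator of which PII a page asks for."""
    def __init__(self):
        super().__init__()
        self.fields = []

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            attrs = dict(attrs)
            # Prefer the name attribute, fall back to id.
            ident = attrs.get("name") or attrs.get("id")
            if ident:
                self.fields.append(ident.lower())

# Hypothetical keyword-to-PII-category mapping (illustrative only).
PII_KEYWORDS = {
    "ssn": "social_security_number",
    "passport": "passport_number",
    "dob": "date_of_birth",
    "card": "credit_card",
    "password": "credential",
}

def target_pii(html: str) -> set:
    """Return the PII categories a page's form appears to target."""
    parser = FormFieldExtractor()
    parser.feed(html)
    found = set()
    for field in parser.fields:
        for kw, category in PII_KEYWORDS.items():
            if kw in field:
                found.add(category)
    return found

page = '<form><input name="ssn"><input name="card_number"><input id="dob"></form>'
print(sorted(target_pii(page)))
# ['credit_card', 'date_of_birth', 'social_security_number']
```

A real classifier would use these extracted identifiers as features alongside code-stylometry signals; this sketch only shows the extraction step.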
Enhancing Web Browsing Security
Web browsing has become an integral part of our lives, and we use browsers to perform many important activities almost every day and everywhere. However, due to vulnerabilities in Web browsers and Web applications, and due to Web users' lack of security knowledge, browser-based attacks are rampant over the Internet and have caused substantial damage to both Web users and service providers. Enhancing Web browsing security is therefore of great need and importance.
This dissertation concentrates on enhancing Web browsing security by exploring and experimenting with new approaches and software systems. Specifically, we have systematically studied four challenging Web browsing security problems: HTTP cookie management, phishing, insecure JavaScript practices, and browsing on untrusted public computers. We have proposed new approaches to address these problems and built unique systems to validate them.
To manage HTTP cookies, we have proposed an approach to automatically validate the usefulness of HTTP cookies at the client side on behalf of users. By automatically removing useless cookies, our approach helps a user strike an appropriate balance between maximizing usability and minimizing security risks. To protect against phishing attacks, we have proposed an approach to transparently feed a relatively large number of bogus credentials into a suspected phishing site. Using those bogus credentials, our approach conceals victims' real credentials and enables a legitimate website to identify stolen credentials in a timely manner. To identify insecure JavaScript practices, we have proposed an execution-based measurement approach and performed a large-scale measurement study. Our work sheds light on insecure JavaScript practices and especially reveals the severity and nature of insecure JavaScript inclusion and dynamic generation practices on the Web.
To achieve secure and convenient Web browsing on untrusted public computers, we have proposed a simple approach that enables an extended browser on a mobile device and a regular browser on a public computer to collaboratively support a Web session. A user can securely perform sensitive interactions on the mobile device and conveniently perform other browsing interactions on the public computer.
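The bogus-credential idea above can be sketched as follows: generate decoy username/password pairs whose passwords are keyed hashes of the usernames, so the legitimate site can later recognize a decoy without storing every pair. The HMAC construction, the shared key, and all names here are illustrative assumptions, not the dissertation's actual design.

```python
import hashlib
import hmac

# Hypothetical secret known only to the defense tool and the legitimate site.
SITE_KEY = b"hypothetical-shared-secret"

def make_bogus_credential(n: int) -> tuple:
    """Generate the n-th decoy username/password pair. The password is an
    HMAC of the username, so decoys are verifiable without a database."""
    username = f"user{n:04d}@example.com"
    tag = hmac.new(SITE_KEY, username.encode(), hashlib.sha256).hexdigest()[:12]
    return username, tag

def is_bogus(username: str, password: str) -> bool:
    """Let the legitimate site check whether a submitted login is a decoy
    (and hence that the submitter obtained it from a phishing site)."""
    expected = hmac.new(SITE_KEY, username.encode(), hashlib.sha256).hexdigest()[:12]
    return hmac.compare_digest(expected, password)

u, p = make_bogus_credential(7)
print(is_bogus(u, p))        # True: a decoy we fed to the phishing site
print(is_bogus(u, "guess"))  # False: not one of our decoys
```

The design choice here is statelessness: because the decoy password is derived from the username with a keyed hash, the legitimate site can flag stolen decoys in constant time without tracking which decoys were ever issued.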
Stepping Up the Cybersecurity Game: Protecting Online Services from Malicious Activity
The rise in popularity of online services such as social networks, web-based emails, and blogs has made them a popular platform for attackers. Cybercriminals leverage such services to spread spam and malware, and to steal personal information from their victims. In a typical cybercriminal operation, miscreants first infect their victims' machines with malicious software and have them join a botnet, which is a network of compromised computers. In the second step, the infected machines are often leveraged to connect to legitimate online services and perform malicious activities. As a consequence, online services receive activity from both legitimate and malicious users. However, while legitimate users use these services for the purposes they were designed for, malicious parties exploit them for their illegal actions, which are often linked to an economic gain. In this thesis, I show that the way in which malicious users and legitimate ones interact with Internet services presents differences. I then develop mitigation techniques that leverage such differences to detect and block malicious parties that misuse Internet services.
As examples of this research approach, I first study the problem of spamming botnets, which are misused to send hundreds of millions of spam emails to mailservers spread across the globe. I show that botmasters typically split a list of victim email addresses among their bots, and that it is possible to identify bots belonging to the same botnet by enumerating the mailservers that are contacted by IP addresses over time. I developed a system, called BotMagnifier, which learns the set of mailservers contacted by the bots belonging to a certain botnet, and finds more bots belonging to that same botnet.
I then study the problem of misused accounts on online social networks. I first look at the problem of fake accounts that are set up by cybercriminals to spread malicious content. I study the modus operandi of the cybercriminals controlling such accounts, and I then develop a system to automatically flag social network accounts as fake. I then look at the problem of legitimate accounts getting compromised by miscreants, and I present COMPA, a system that learns the typical habits of social network users and considers messages that deviate from the learned behavior as possible compromises. As a last example, I present EvilCohort, a system that detects communities of online accounts that are accessed by the same botnet. EvilCohort works by clustering together accounts that are accessed by a common set of IP addresses, and can work on any online service that requires the use of accounts (social networks, web-based emails, blogs, etc.).
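The clustering idea behind EvilCohort can be sketched in a few lines: treat each account's set of login IP addresses as its signature and group accounts whose signatures overlap strongly. The Jaccard threshold and all names here are illustrative assumptions, not the published system's actual parameters.

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Set-overlap similarity in [0, 1]."""
    return len(a & b) / len(a | b)

def evilcohort_sketch(logins: dict, threshold: float = 0.5) -> list:
    """Cluster accounts whose login IP sets overlap heavily, by taking
    connected components of a pairwise-similarity graph (union-find)."""
    accounts = {acct: set(ips) for acct, ips in logins.items()}
    parent = {a: a for a in accounts}

    def find(x):
        # Path-halving union-find lookup.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in combinations(accounts, 2):
        if jaccard(accounts[a], accounts[b]) >= threshold:
            parent[find(a)] = find(b)

    clusters = {}
    for a in accounts:
        clusters.setdefault(find(a), set()).add(a)
    return sorted(map(sorted, clusters.values()))

logins = {
    "bot1": ["10.0.0.1", "10.0.0.2"],
    "bot2": ["10.0.0.1", "10.0.0.2", "10.0.0.3"],
    "alice": ["192.168.1.5"],
}
print(evilcohort_sketch(logins))
# [['alice'], ['bot1', 'bot2']]
```

Accounts driven by the same botnet log in from the same compromised hosts, so their IP sets overlap far more than those of independent legitimate users; the sketch exploits exactly that difference.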
Evaluating Resilience of Cyber-Physical-Social Systems
Nowadays, protecting the network is not the only security concern: in cyber security, websites and servers are becoming more popular targets due to the ease with which they can be accessed compared to communication networks. Another threat in cyber-physical-social systems with human interaction is that they can be attacked and manipulated not only through technical hacking of networks, but also by manipulating people and stealing users' credentials. Therefore, systems should be evaluated beyond cyber security, which means measuring their resilience as evidence that a system works properly under cyber-attacks or incidents. Accordingly, cyber resilience is increasingly discussed and described as the capacity of a system to maintain state awareness for detecting cyber-attacks. All the tasks for making a system resilient should proactively maintain a safe level of operational normalcy through rapid system reconfiguration to detect attacks that would impact system performance. In this work, we broadly study the new paradigm of cyber-physical-social systems and give a uniform definition of it. To overcome the complexity of evaluating cyber resilience, especially in these inhomogeneous systems, we propose a framework that applies Attack Tree refinements and Hierarchical Timed Coloured Petri Nets to model intruder and defender behaviors
and evaluate the impact of each action on the behavior and performance of the system.
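The attack-tree component of the proposed framework can be illustrated with a toy evaluator: OR nodes let the intruder pick the cheapest branch, while AND nodes require completing every branch. The cost model (sequential time units) and the example tree are hypothetical, not taken from the thesis.

```python
# A node is either a leaf cost (time units for one attack step)
# or a tuple (op, children) with op in {"OR", "AND"}.
def min_attack_time(node):
    """Minimal time for the intruder to complete the attack goal:
    OR nodes take the cheapest branch; AND nodes need every branch,
    assumed here to be executed sequentially, so their times add up."""
    if isinstance(node, (int, float)):
        return node
    op, children = node
    times = [min_attack_time(c) for c in children]
    return min(times) if op == "OR" else sum(times)

# Hypothetical tree: compromise = (scan AND exploit) OR phish-credentials.
tree = ("OR", [("AND", [2, 5]), 4])
print(min_attack_time(tree))  # 4
```

Swapping the leaf metric (probability, cost, detectability) and the AND/OR combinators yields other standard attack-tree analyses; the timed Petri-net modelling in the thesis goes well beyond this static view.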
The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation
This report surveys the landscape of potential security threats from
malicious uses of AI, and proposes ways to better forecast, prevent, and
mitigate these threats. After analyzing the ways in which AI may influence the
threat landscape in the digital, physical, and political domains, we make four
high-level recommendations for AI researchers and other stakeholders. We also
suggest several promising areas for further research that could expand the
portfolio of defenses, or make attacks less effective or harder to execute.
Finally, we discuss, but do not conclusively resolve, the long-term equilibrium
of attackers and defenders.
Future of Humanity Institute, University of Oxford; Centre for the Study of Existential Risk, University of Cambridge; Center for a New American Security; Electronic Frontier Foundation; OpenAI. The Future of Life Institute is acknowledged as a funder.
The AI Revolution: Opportunities and Challenges for the Finance Sector
This report examines Artificial Intelligence (AI) in the financial sector,
outlining its potential to revolutionise the industry and identifying its
challenges. It underscores the criticality of a well-rounded understanding of
AI, its capabilities, and its implications to effectively leverage its
potential while mitigating associated risks. The potential of AI
extends from augmenting existing operations to paving the way for novel
applications in the finance sector. The application of AI in the financial
sector is transforming the industry. Its use spans areas from customer service
enhancements, fraud detection, and risk management to credit assessments and
high-frequency trading. However, along with these benefits, AI also presents
several challenges. These include issues related to transparency,
interpretability, fairness, accountability, and trustworthiness. The use of AI
in the financial sector further raises critical questions about data privacy
and security. A further issue identified in this report is the systemic risk
that AI can introduce to the financial sector. Being prone to errors, AI can
exacerbate existing systemic risks, potentially leading to financial crises.
Regulation is crucial to harnessing the benefits of AI while mitigating its
potential risks. Despite the global recognition of this need, there remains a
lack of clear guidelines or legislation for AI use in finance. This report
discusses key principles that could guide the formation of effective AI
regulation in the financial sector, including the need for a risk-based
approach, the inclusion of ethical considerations, and the importance of
maintaining a balance between innovation and consumer protection. The report
provides recommendations for academia, the finance industry, and regulators.
The Dark Menace: Characterizing Network-based Attacks in the Cloud
As the cloud computing market continues to grow, the cloud platform is becoming an attractive target for attackers to disrupt services and steal data, and to compromise resources to launch attacks. In this paper, using three months of NetFlow data in 2013 from a large cloud provider, we present the first large-scale characterization of inbound attacks towards the cloud and outbound attacks from the cloud. We investigate nine types of attacks, ranging from network-level attacks such as DDoS to application-level attacks such as SQL injection and spam. Our analysis covers the complexity, intensity, duration, and distribution of these attacks, highlighting the key challenges in defending against attacks in the cloud. By characterizing the diversity of cloud attacks, we aim to motivate the research community towards developing future security solutions for cloud systems.
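A NetFlow-style characterization like the one above can be sketched with a toy detector: count the distinct sources contacting each destination in a time window and flag destinations with unusually high fan-in, a crude indicator of an inbound DDoS. The threshold and all names are illustrative, not the paper's actual methodology.

```python
from collections import defaultdict

def flag_ddos_targets(flows, fanin_threshold: int = 3) -> list:
    """Given (source_ip, dest_ip) flow records for one time window,
    return destinations contacted by at least fanin_threshold
    distinct sources, a rough inbound-DDoS indicator."""
    sources = defaultdict(set)
    for src, dst in flows:
        sources[dst].add(src)
    return sorted(d for d, s in sources.items() if len(s) >= fanin_threshold)

flows = [("1.1.1.1", "vm-a"), ("2.2.2.2", "vm-a"),
         ("3.3.3.3", "vm-a"), ("4.4.4.4", "vm-b")]
print(flag_ddos_targets(flows))  # ['vm-a']
```

Real NetFlow records carry much more (ports, protocol, byte and packet counts, timestamps), which the paper uses to separate its nine attack types; distinct-source fan-in is only the simplest of those signals.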