180 research outputs found
Defacement Detection with Passive Adversaries
A novel approach to defacement detection is proposed in this paper, explicitly addressing the possible presence of a passive adversary. Defacement detection is an important security measure for websites and web applications, aimed at avoiding unwanted modifications that would result in significant reputational damage. As in many other anomaly-detection contexts, the algorithm used to identify possible defacements is obtained via an adversarial machine-learning process. We consider an exploratory setting, where the adversary can observe the detector's alarm-generating behaviour with the purpose of devising and injecting defacements that will pass undetected. It is then necessary to make the learning process unpredictable, so that the adversary will be unable to replicate it and predict the classifier's behaviour. We achieve this goal by introducing a secret key, a key that our adversary does not know. The key influences the learning process in a number of precisely defined ways: the subset of examples and features that are actually used, the times of learning and testing, and the learning algorithm's hyper-parameters. This learning methodology is successfully applied in this context by using the system with both real and artificially modified websites. A year-long experiment is also described, concerning the monitoring of the new website of a major manufacturing company.
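The key-dependent randomization the abstract describes can be illustrated with a minimal sketch (hypothetical code, not the paper's implementation): a secret key seeds a pseudo-random generator that selects which examples or features the detector actually trains on, so an adversary who does not know the key cannot reproduce the selection.

```python
import hashlib
import random

def keyed_subset(items, key, fraction=0.5):
    """Select a deterministic but secret subset of items.
    The selection depends only on the secret key, so an adversary
    who cannot observe the key cannot predict which items are used."""
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    rng = random.Random(seed)
    k = max(1, int(len(items) * fraction))
    return sorted(rng.sample(items, k))

# Example: choose which feature indices the detector trains on.
features = list(range(20))
subset_a = keyed_subset(features, b"secret-key-1")
subset_b = keyed_subset(features, b"secret-key-2")
```

The same idea extends to the paper's other randomized choices (example subsets, training times, hyper-parameters): derive each from the secret key so the training pipeline is reproducible by the defender but unpredictable to the adversary.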
Political Expression in Web Defacements
The idea of influencing public opinion through digital media is ubiquitous, yet little is
known about its origins. This thesis investigates the use of political communication
through hacked websites. It is at the same time an exploratory description of the
research tools and methods needed to find and retrieve such material.
The dissertation frames political expression through hacking as interference with the
strata of digital communication and positions it within a larger history of on- and offline
activist practices. The methodological section describes the difficulties of finding and
accessing defaced pages, which are almost exclusively held by community-based
archives. Based on already available and added metadata, the dataset of defacements is
surveyed and topics, periods of high activity and prominent defacer groups are
identified. Modes of expression are tracked to give insight into possible defacer
motivation. This survey then serves as the basis for the following analysis of two
emblematic clusters of activity: The Kashmir conflict and the 9/11 attacks. In a close
reading of selected defacements, communication strategies and general types of
defacements are described, thereby showcasing the diversity of defacer standpoints and
strategies which runs counter to the common uniform depiction of hackers. The notion
of defacements as forced injection of material into a public sphere is discussed
throughout these close readings and leads to the final analytical section discussing the
relation between defacements and WikiLeaks.
After reflecting on the themes that unite this dissertation, the conclusion turns to the
preservation and availability of source material on defaced pages. The author expresses
the hope that both the research methodology and the applied analyses will
promote the understanding of web defacements as a resource for inquiries into online
political expression.
Cyber Places, Crime Patterns, and Cybercrime Prevention: An Environmental Criminology and Crime Analysis approach through Data Science
For years, academics have examined the potential usefulness of traditional criminological theories to explain and prevent cybercrime. Some analytical frameworks from Environmental Criminology and Crime Analysis (ECCA), such as the Routine Activities Approach and Situational Crime Prevention, are frequently used in theoretical and empirical research for this purpose. These efforts have led to a better understanding of how crime opportunities are generated in cyberspace, thus contributing to advancing the discipline. However, with a few exceptions, other ECCA analytical frameworks, especially those based on the idea of geographical place, have been largely ignored.
The limited attention devoted to ECCA from a global perspective means its true potential to prevent cybercrime has remained unknown to date. In this thesis we aim to overcome this geographical gap in order to show the potential of some of the essential concepts that underpin the ECCA approach, such as places and crime patterns, to analyse and prevent four crimes committed in cyberspace. To this end, this dissertation is structured in two phases: firstly, a proposal for the transposition of ECCA's fundamental propositions to cyberspace; and secondly, deriving from this approach some hypotheses are contrasted in four empirical studies through Data Science. The first study contrasts a number of premises of repeat victimization in a sample of more than nine million self-reported website defacements. The second examines the precipitators of crime at cyber places where allegedly fixed match results are advertised and the hyperlinked network they form. The third explores the situational contexts where repeated online harassment occurs among a sample of non-university students. And the fourth builds two metadata-driven machine learning models to detect online hate speech in a sample of Twitter messages collected after a terrorist attack. General results show
(1) that cybercrimes are not randomly distributed in space, time, or among people; and
(2) that the environmental features of the cyber places where they occur determine the emergence of crime opportunities. Overall, we conclude that the ECCA approach and, in particular, its place-based analytical frameworks can also be valid for analysing and preventing crime in cyberspace. We anticipate that this work can guide future research in this area, including: the design of secure online environments, the allocation of preventive resources to high-risk cyber places, and the implementation of new evidence-based situational prevention measures.
Antidefacement
The Internet connects around three billion users worldwide, a number that increases every day. Thanks to this technology, people, companies and devices perform several tasks, such as broadcasting information through websites. Because of the large volumes of sensitive information and the lack of security in websites, the number of attacks on these applications has been increasing significantly. Attacks on websites have different purposes; one of these is the introduction of unauthorized modifications (defacement). Defacement is an issue that affects both system users and company image; thus, the research community has been working on solutions to reduce security risks. This paper presents an introduction to the state of the art of the techniques, methodologies and solutions proposed by both the research community and the computer-security industry.
The threat of cyberterrorism: Contemporary consequences and prescriptions
This study researches the varying threats that emanate from terrorists who carry their activity into the online arena. It examines several elements of this threat, including virtual-to-virtual attacks and threats to critical infrastructure that can be traced to online sources. It then reports on the methods that terrorists employ in using information technology, such as the internet, for propaganda and other communication purposes. It discusses how the United States government has responded to these problems, and concludes with recommendations for best practices.
On the Relevance of Social Media Platforms in Predicting The Volume and Patterns of Web Defacement Attacks
Social media platforms are commonly employed by law enforcement agencies for collecting Open Source Intelligence (OSINT) on criminals and assessing the risk they pose to the environment they live in. However, since no prior research has investigated the relationships between hackers' use of social media platforms and their likelihood of generating cyber-attacks, this practice is less common among information technology teams. Addressing this empirical gap, we draw on social learning theory and estimate the relationships between hackers' use of Facebook, Twitter, and YouTube and the frequency of web defacement attacks they generate at different times (weekdays vs. weekends) and against different targets (USA vs. non-USA websites). To answer our research questions, we use hackers' reports of the web defacements they generated (available on http://www.zone-h.org), complemented with an independent data collection we launched to identify these hackers' use of different social media platforms. Results from a series of Negative Binomial Regression analyses reveal that hackers' use of social media platforms, and specifically Twitter and Facebook, significantly increases the frequency of web defacement attacks they generate. However, while using these social media platforms significantly increases the volume of web defacement attacks these hackers generate during weekdays, it has no association with the volume of web defacements they launch over weekends. Finally, although hackers' use of both Facebook and Twitter accounts increases the frequency of attacks they generate against non-USA websites, only the use of Twitter significantly increases the volume of web defacement attacks against USA websites.
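The choice of Negative Binomial Regression is typically motivated by overdispersion: attack counts per hacker vary more than a Poisson model allows. A small illustrative sketch (not the study's actual models) shows the negative binomial probability mass function and why its variance exceeds its mean:

```python
from math import comb

def negbin_pmf(k, r, p):
    """P(X = k) for a negative binomial distribution, e.g. k attacks
    observed, with dispersion parameter r and success probability p."""
    return comb(k + r - 1, k) * (1 - p) ** k * p ** r

r, p = 3, 0.4
mean = r * (1 - p) / p         # 4.5 expected attacks
var = r * (1 - p) / p ** 2     # 11.25: variance exceeds the mean (overdispersion)
total = sum(negbin_pmf(k, r, p) for k in range(200))  # pmf sums to ~1
```

In a Poisson model the mean and variance are forced to be equal; the extra variance term here is what lets negative binomial regression fit heavy-tailed count data such as per-hacker defacement volumes.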
Cybercrime vs Hacktivism: Do we need a differentiated regulatory approach?
Background and aims:
Cybercrime is an issue that increases year on year; however, the motivations behind these attacks are rarely investigated. More and more people are turning to the internet to protest, with some scholars debating whether hacktivism is a social movement. This Dissertation uses networked social movement theory to establish whether hacktivism is a social movement or simply a politically motivated form of cybercrime. While demonstrating hacktivism's place in the social movement landscape, this Dissertation will also analyse how hacktivism is currently regulated and whether the legislative and regulatory tools are appropriate.
Methods:
This Dissertation uses a multi-method approach to establish whether hacktivism could be considered a social movement. The first method is a rhetorical analysis of active hacktivist Twitter accounts. Tweets posted by these accounts are coded, using MAXQDA's content-analysis software, according to Stewart's (1980) functional approach to the rhetoric used by social movements. The second method is a descriptive statistical analysis of a number of publicly available datasets (Zone H; the Cambridge Computer Crime Database; DCMS's Cyber Security Breaches Surveys from 2017-2021; an AnonOps Internet Relay Chat channel; a sentiment analysis; the hack aggregator 'Hackmageddon') to establish hacktivism's similarities and differences to both cybercrime and social movements.
Results and Conclusions:
This Dissertation found that hacktivism differs substantially from cybercrime in its methods, targets and ideologies, despite being regulated as cybercrime. Additionally, the Dissertation found that hacktivism could be considered a social movement, based on similarities in their communications and motivations as well as the online parallels hacktivism has to social movement methods. The Dissertation also found that, given the similarities hacktivism shares with traditional offline protests, the UK should look to the offline parallels when regulating hacktivism, to ensure that the human rights of those taking part in hacktivist methods are upheld rather than quashed.
Hackers: cybercriminals or not?
Final Degree Project in Criminology and Security (Treball Final de Grau en Criminologia i Seguretat). Code: CS1044. Academic year: 2018-2019.
The development and constant evolution of new technologies (ICTs) has given rise to a society that is constantly connected to the Internet. Obviously, this offers advantages, but it also creates important problems. There is always a fraction of people in all societies who act inappropriately, break the law or use illicit means to take advantage of others. The Internet provides a place for cybercriminals and allows them to exist and flourish. In recent years, issues concerning cyber security have received significant attention and have become a priority for many governments, organizations, and industries. Today, technological advance is continuous, and this brings crime new opportunities. One of these is unauthorized access to computer networks. The current study focuses on this cybercrime, on hackers, and on the image that society has of them. In particular, it offers a view of hackers intended to distinguish them from cybercriminals and to assist law enforcement in understanding the way hackers think. The paper starts with the definition and history of hackers, and continues with computer crime from a criminology perspective and the way hackers are seen by the public. Hacktivism, a new way of protesting using the Internet, is addressed as well. The paper also presents laws applicable to computer crime, and highlights the issues involved in tracking and tracing these types of crime by comparing the United States and Spain.
Justifying Uncivil Disobedience
A prominent way of justifying civil disobedience is to postulate a pro tanto duty to obey the
law and to argue that the considerations that ground this duty sometimes justify forms of civil disobedience. However, this view entails that certain kinds of uncivil disobedience are also justified. Thus, either a) civil disobedience is never justified or b) uncivil disobedience is sometimes justified. Since a) is implausible, we should accept b). I respond to the objection that this ignores the fact that civil disobedience enjoys a special normative status on account of instantiating certain special features: nonviolence, publicity, the acceptance of legal consequences, and conscientiousness. I then show that my view is superior to two rivals: the view that we should expand the notion of civility and that civil disobedience, expansively construed, is uniquely appropriate; and the view that uncivil disobedience is justifiable in, but only in, unfavorable conditions.
Identifying and Preventing Large-scale Internet Abuse
The widespread access to the Internet and the ubiquity of web-based services make it easy to communicate and interact globally. Unfortunately, the software and protocols implementing the functionality of these services are often vulnerable to attacks. In turn, an attacker can exploit them to compromise, take over, and abuse the services for her own nefarious purposes. In this dissertation, we aim to better understand such attacks, and we develop methods and algorithms to detect and prevent them, which we evaluate on large-scale datasets.
First, we detail Meerkat, a system to detect a visible way in which websites are being compromised, namely website defacements. Defacements can inflict significant harm on the websites' operators through the loss of sales, the loss of reputation, or legal ramifications. Meerkat requires no prior knowledge about the websites' content or their structure, but only the Uniform Resource Identifier (URI) at which they can be reached. By design, Meerkat mimics how a human analyst decides whether a website was defaced when viewing it in a browser, by using computer vision techniques. Thus, it tackles the problem of detecting website defacements through their attention-seeking nature, their goal and purpose, rather than through code or data artifacts that they might exhibit. In turn, it is much harder for an attacker to evade our system, as she needs to change her modus operandi. When Meerkat detects a website as defaced, the website can automatically be put into maintenance mode or restored to a known good state.
An attacker, however, is not limited to abusing a compromised website in a way that is visible to the website's visitors. Instead, she can misuse the website to infect its visitors with malicious software (malware). Although malware is well studied, identifying malicious websites remains a major challenge in today's Internet.
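Meerkat itself relies on learned computer-vision features, but the underlying intuition, comparing what a visitor sees rather than the page's code, can be illustrated with a much simpler, hypothetical perceptual-hashing sketch over grayscale screenshots (this is not Meerkat's actual method):

```python
def average_hash(pixels):
    """Perceptual 'average hash' of a grayscale screenshot given as a
    2-D list of 0-255 values: each bit is 1 where a pixel is brighter
    than the image mean. Visually similar pages yield similar hashes."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return [1 if v > mean else 0 for v in flat]

def hamming(h1, h2):
    """Number of differing bits; a large distance suggests a visual change."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 8x8 "screenshots": the original page and a fully repainted one.
before = [[0] * 4 + [255] * 4 for _ in range(8)]
defaced = [[255] * 8 for _ in range(8)]
```

A detector built on this idea flags a page when the distance between the current and baseline screenshot hashes exceeds a threshold; the appeal, as with Meerkat, is that the signal lives in the rendered appearance the attacker wants visitors to see, not in artifacts the attacker can easily hide.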
Second, we introduce Delta, a novel, purely static analysis approach that extracts change-related features between two versions of the same website, uses machine learning to derive a model of website changes, detects whether an introduced change was malicious or benign, identifies the underlying infection vector based on clustering, and generates an identifying signature. Furthermore, due to the way Delta clusters campaigns, it can uncover infection campaigns that leverage specific vulnerable applications as a distribution channel, and it can greatly reduce the human labor necessary to uncover the application responsible for a service's compromise.
Third, we investigate the practicality and impact of domain takeover attacks, which an attacker can similarly abuse to spread misinformation or malware, and we present a defense that renders such takeover attacks toothless. Specifically, the new elasticity of Internet resources, in particular Internet protocol (IP) addresses in the context of Infrastructure-as-a-Service cloud service providers, combined with previously made protocol assumptions, can lead to security issues. In Cloud Strife, we show that this dynamic component, paired with recent developments in trust-based ecosystems (e.g., Transport Layer Security (TLS) certificates), creates so far unknown attack vectors. For example, a substantial number of stale domain name system (DNS) records point to readily available IP addresses in clouds, yet these records are still actively being accessed. Often, they belong to discontinued services that were previously hosted in the cloud. We demonstrate that it is practical, and time- and cost-efficient, for attackers to allocate the IP addresses to which stale DNS records point.
Further considering the ubiquity of domain validation in trust ecosystems, an attacker can impersonate the service by obtaining and using a valid certificate that is trusted by all major operating systems and browsers, which severely increases the attacker's capabilities. The attacker can then also exploit residual trust in the domain name for phishing, receiving and sending emails, or possibly distributing code to clients that load remote code from the domain (e.g., the loading of native code by mobile apps, or of JavaScript libraries by websites). To prevent such attacks, we introduce a new authentication method for trust-based domain validation that mitigates staleness issues without incurring additional certificate-requester effort, by incorporating existing trust into the validation process.
Finally, the analyses of Delta, Meerkat, and Cloud Strife have made use of large-scale measurements to assess our approaches' impact and viability. Indeed, security research in general has made extensive use of exhaustive Internet-wide scans over recent years, as they can provide significant insights into the state of security of the Internet (e.g., whether classes of devices are behaving maliciously, or whether they might be insecure and could turn malicious in an instant). However, the address space of the Internet's core addressing protocol (Internet Protocol version 4; IPv4) is exhausted, and a migration to its successor (Internet Protocol version 6; IPv6), the only accepted long-term solution, is inevitable. In turn, to better understand the security of devices connected to the Internet, in particular Internet of Things devices, it is imperative to include IPv6 addresses in security evaluations and scans. Unfortunately, it is practically infeasible to iterate through the entire IPv6 address space, as it is 2^96 times larger than the IPv4 address space.
Without enumerating hosts prior to scanning, we will be unable to retain visibility into the overall security of Internet-connected devices in the future, and we will be unable to detect and prevent their abuse or compromise. To mitigate this blind spot, we introduce a novel technique to enumerate part of the IPv6 address space by walking DNSSEC-signed IPv6 reverse zones. We show (i) that enumerating active IPv6 hosts is practical without a preferential network position, contrary to common belief; (ii) that the security of active IPv6 hosts currently still lags behind the security state of IPv4 hosts; and (iii) that unintended default IPv6 connectivity is a major security issue.
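The reverse zones being walked live under ip6.arpa, where each of an address's 32 hexadecimal nibbles becomes one DNS label in reverse order. A minimal sketch of that name construction using only the Python standard library (the DNSSEC NSEC walking itself is beyond this example):

```python
import ipaddress

# Each IPv6 address maps to a reverse-DNS name under ip6.arpa:
# 32 nibble labels, least-significant nibble first.
addr = ipaddress.ip_address("2001:db8::1")
reverse_name = addr.reverse_pointer
# "1.0.0.0. ... .8.b.d.0.1.0.0.2.ip6.arpa"

# The scale problem the abstract mentions: IPv6 has 2**128 addresses
# versus 2**32 for IPv4, a factor of 2**96.
ratio = (2 ** 128) // (2 ** 32)
```

Walking the signed zone tree under ip6.arpa sidesteps brute-force scanning: only names that actually exist (i.e., hosts with reverse records) need to be visited, which is what makes partial enumeration of the otherwise intractable address space practical.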