12,069 research outputs found

    Code wars: steganography, signals intelligence, and terrorism

    This paper describes and discusses the process of secret communication known as steganography. The argument advanced here is that terrorists are unlikely to be employing digital steganography to facilitate secret intra-group communication, as has been claimed, because terrorist use of digital steganography is both technically and operationally implausible. The position adopted in this paper is that terrorists are likely to employ low-tech steganography such as semagrams and null ciphers instead.
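The low-tech alternatives the abstract points to are easy to illustrate: a null cipher hides a message inside apparently innocuous text, for example in the initial letters of consecutive words. A minimal sketch (the cover sentence and hidden word below are invented for illustration):

```python
def decode_null_cipher(cover_text: str) -> str:
    """Recover a hidden message by reading the first letter of each word."""
    return "".join(word[0] for word in cover_text.split())

# An ordinary-looking sentence whose word initials spell the hidden word.
cover = "send udon pizza please extra rice"
print(decode_null_cipher(cover))  # -> supper
```

The point of the paper's argument is visible even in this toy: nothing in the cover text is statistically anomalous, which is precisely what makes such low-tech schemes hard to detect and operationally attractive.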

    Solutions to Detect and Analyze Online Radicalization: A Survey

    Online Radicalization (also called Cyber-Terrorism, Extremism, Cyber-Racism or Cyber-Hate) is widespread and has become a major and growing concern to society, governments and law enforcement agencies around the world. Research shows that various platforms on the Internet (low barrier to publishing content, anonymity, exposure to millions of users and the potential for very quick and widespread diffusion of a message) such as YouTube (a popular video-sharing website), Twitter (an online micro-blogging service), Facebook (a popular social networking website), online discussion forums and the blogosphere are being misused for malicious intent. Such platforms are being used to form hate groups and racist communities, spread extremist agendas, incite anger or violence, promote radicalization, recruit members and create virtual organizations and communities. Automatic detection of online radicalization is a technically challenging problem because of the vast amount of data, unstructured and noisy user-generated content, dynamically changing content and adversary behavior. Several solutions have been proposed in the literature aiming to combat and counter cyber-hate and cyber-extremism. In this survey, we review solutions to detect and analyze online radicalization. We review 40 papers published at 12 venues from June 2003 to November 2011. We present a novel classification scheme to classify these papers. We analyze these techniques, perform trend analysis, discuss limitations of existing techniques and identify research gaps.
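The noisy-content challenge the abstract names is concrete even in the simplest possible detector. The sketch below is not a technique from the survey: the watch-list, threshold, and normalization are invented placeholders, and real systems use learned models rather than fixed lexicons. It only illustrates that even naive matching must first normalize noisy user-generated text:

```python
import re

# Hypothetical watch-list; invented for illustration only.
LEXICON = {"attack", "recruit", "martyr"}

def normalize(post: str) -> list:
    """Lowercase and strip punctuation/digits to cope with noisy text."""
    return re.findall(r"[a-z]+", post.lower())

def flag(post: str, threshold: int = 1) -> bool:
    """Flag a post containing at least `threshold` lexicon terms."""
    tokens = normalize(post)
    return sum(t in LEXICON for t in tokens) >= threshold

print(flag("Join us, RECRUIT today!!!"))  # True
print(flag("lovely weather today"))       # False
```

A fixed lexicon is exactly what adversarial behavior defeats (misspellings, euphemisms, code words), which is why the surveyed approaches move toward statistical and graph-based methods.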

    State of the art 2015: a literature review of social media intelligence capabilities for counter-terrorism

    Overview: This paper is a review of how information and insight can be drawn from open social media sources. It focuses on the specific research techniques that have emerged, the capabilities they provide, the possible insights they offer, and the ethical and legal questions they raise. These techniques are considered relevant and valuable in so far as they can help to maintain public safety by preventing terrorism, preparing for it, protecting the public from it and pursuing its perpetrators. The report also considers how far this can be achieved against the backdrop of radically changing technology and public attitudes towards surveillance. This is an updated version of a 2013 report on the same subject, State of the Art. Since 2013, there have been significant changes in social media, how it is used by terrorist groups, and the methods being developed to make sense of it. The paper is structured as follows: Part 1 is an overview of social media use, focused on how it is used by groups of interest to those involved in counter-terrorism. This includes a new section on trends in social media platforms and a new section on Islamic State (IS). Part 2 provides an introduction to the key approaches of social media intelligence (henceforth ‘SOCMINT’) for counter-terrorism. Part 3 sets out a series of SOCMINT techniques. For each technique, its capabilities and the insights it offers are considered, its validity and reliability are assessed, and its possible application to counter-terrorism work is explored. Part 4 outlines a number of important legal, ethical and practical considerations when undertaking SOCMINT work.

    Words are Malleable: Computing Semantic Shifts in Political and Media Discourse

    Recently, researchers have started to pay attention to the detection of temporal shifts in the meaning of words. However, most (if not all) of these approaches restricted their efforts to uncovering change over time, thus neglecting other valuable dimensions such as social or political variability. We propose an approach for detecting semantic shifts between different viewpoints--broadly defined as a set of texts that share a specific metadata feature, which can be a time period, but also a social entity such as a political party. For each viewpoint, we learn a semantic space in which each word is represented as a low-dimensional neural embedded vector. The challenge is to compare the meaning of a word in one space to its meaning in another space and measure the size of the semantic shift. We compare the effectiveness of a measure based on optimal transformations between the two spaces with a measure based on the similarity of the neighbors of the word in the respective spaces. Our experiments demonstrate that the combination of these two performs best. We show that semantic shifts not only occur over time, but also along different viewpoints in a short period of time. For evaluation, we demonstrate how this approach captures meaningful semantic shifts and can help improve other tasks such as contrastive viewpoint summarization and ideology detection (measured as classification accuracy) in political texts. We also show that the two laws of semantic change which were empirically shown to hold for temporal shifts also hold for shifts across viewpoints. These laws state that frequent words are less likely to shift meaning, while words with many senses are more likely to do so. Comment: In Proceedings of the 26th ACM International Conference on Information and Knowledge Management (CIKM 2017).
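The neighbor-based measure the abstract describes can be sketched as follows. Because independently trained embedding spaces are not directly comparable vector-for-vector, a word's shift between two viewpoints can instead be estimated from the overlap of its nearest-neighbor sets in each space. This is a toy illustration, not the paper's implementation; the 2-d "embeddings" and vocabulary are invented:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity of two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def neighbors(space, word, k=2):
    """The k nearest neighbors of `word` within one embedding space."""
    others = [(w, cosine(space[word], vec)) for w, vec in space.items() if w != word]
    return {w for w, _ in sorted(others, key=lambda p: -p[1])[:k]}

def neighbor_shift(space_a, space_b, word, k=2):
    """Shift = 1 - Jaccard overlap of the word's neighbor sets.
    Neighbor sets are comparable even though the spaces are not aligned."""
    na, nb = neighbors(space_a, word, k), neighbors(space_b, word, k)
    return 1 - len(na & nb) / len(na | nb)

# Invented 2-d vectors standing in for learned per-viewpoint embeddings.
party_a = {"tax": (1, 0), "burden": (0.9, 0.1), "relief": (0.8, 0.2), "school": (0, 1)}
party_b = {"tax": (1, 0), "revenue": (0.9, 0.1), "school": (0.8, 0.2), "burden": (0, 1)}
print(neighbor_shift(party_a, party_b, "tax"))  # -> 1.0 (disjoint neighbor sets)
```

The paper's other measure, based on an optimal transformation between the two spaces, instead learns a mapping from one space to the other and compares the mapped vector with the target vector directly; the abstract reports that combining the two works best.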

    Unapparent information revelation for counterterrorism: Visualizing associations using a hybrid graph-based approach

    Unapparent Information Revelation refers to the text-mining task of revealing interesting information in a document collection other than that which is explicitly stated. It focuses on detecting possible links between concepts across multiple text documents by generating a graph that matches the evidence trail found in the documents. A Concept Chain Graph is a statistical technique for finding links in snippets of information where, singly, each small piece appears to be unconnected. In terms of algorithm performance, Latent Semantic Indexing and the Contextual Network Graph are found to be comparable to the Concept Chain Graph. These aspects are explored and discussed. In this paper, a review is performed on these three similarly grounded approaches. The Concept Chain Graph is proposed as being suited to extracting interesting relations among concepts that co-occur within text collections due to its prominent ability to construct a directed graph representing the evidence trail. It is the baseline study for our hybrid Concept Chain Graph approach.
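The core idea of chaining concepts across documents can be caricatured in a few lines. The sketch below is a much-simplified, non-statistical stand-in for the Concept Chain Graph (which is statistical and produces a directed graph): it links concepts that co-occur in the same snippet and then searches for the shortest evidence trail between two query concepts. The document snippets are invented for illustration:

```python
from collections import defaultdict, deque
from itertools import combinations

def cooccurrence_graph(documents):
    """Undirected graph linking concepts that co-occur in any snippet."""
    graph = defaultdict(set)
    for doc in documents:
        for a, b in combinations(set(doc), 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

def concept_chain(graph, start, goal):
    """Breadth-first search for the shortest trail connecting two concepts."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# No single snippet links "alice" to "bob"; the chain emerges across snippets.
docs = [["alice", "warehouse"], ["warehouse", "shipment"], ["shipment", "bob"]]
print(concept_chain(cooccurrence_graph(docs), "alice", "bob"))
```

This captures why the technique matters for counterterrorism analysis: the interesting relation ("alice" to "bob") is stated in no single document and only appears when the snippets are chained.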

    Losing the War Against Dirty Money: Rethinking Global Standards on Preventing Money Laundering and Terrorism Financing

    Following a brief overview in Part I.A of the overall system to prevent money laundering, Part I.B describes the role of the private sector, which is to identify customers, create a profile of their legitimate activities, keep detailed records of clients and their transactions, monitor their transactions to see if they conform to their profile, examine further any unusual transactions, and report to the government any suspicious transactions. Part I.C continues the description of the preventive measures system by describing the government's role, which is to assist the private sector in identifying suspicious transactions, ensure compliance with the preventive measures requirements, and analyze suspicious transaction reports to determine those that should be investigated. Parts I.D and I.E examine the effectiveness of this system. Part I.D discusses successes and failures in the private sector's role. Borrowing from theory concerning the effectiveness of private sector unfunded mandates, this Part reviews why many aspects of the system are failing, focusing on the subjectivity of the mandate, the disincentives to comply, and the lack of comprehensive data on client identification and transactions. It notes that the system includes an inherent contradiction: the public sector is tasked with informing the private sector how best to detect launderers and terrorists, but doing so could provide a road map for avoiding detection should such information fall into the wrong hands. Part I.D discusses how financial institutions do not and cannot use scientifically tested statistical means to determine whether a particular client or set of transactions is more likely than others to indicate criminal activity. Part I.D then turns to a few issues concerning the system's impact that are unrelated to its effectiveness, followed by a summary and analysis of how these flaws might be addressed.
Part I.E continues by discussing the successes and failures in the public sector's role. It reviews why the system is failing, focusing on the lack of assistance to the private sector and the lack of necessary data on client identification and transactions. It also discusses how financial intelligence units, like financial institutions, do not and cannot use scientifically tested statistical means to determine probabilities of criminal activity. Part I concludes with a summary and analysis tying both private and public roles together. Part II then turns to a review of certain current techniques for selecting income tax returns for audit. After an overview of the system, Part II first discusses the limited role of the private sector in providing tax administrators with information, comparing this to the far greater role the private sector plays in implementing preventive measures. Next, this Part considers how tax administrators, particularly the U.S. Internal Revenue Service, select taxpayers for audit, comparing this to the role of both the private and public sectors in implementing preventive measures. It focuses on how some tax administrations use scientifically tested statistical means to determine probabilities of tax evasion. Part II then suggests how flaws in both the private and public roles of implementing money laundering and terrorism financing preventive measures might, in theory, be addressed by borrowing from the experience of tax administration. Part II concludes with a short summary and analysis relating these conclusions to the preventive measures system. Referring to the analyses in Parts I and II, Part III suggests changes to the current preventive measures standard. It suggests that financial intelligence units should be uniquely tasked with analyzing and selecting clients and transactions for further investigation for money laundering and terrorism financing.
The private sector's role should be restricted to identifying customers, creating an initial profile of their legitimate activities, and reporting such information and all client transactions to financial intelligence units.

    Privacy & law enforcement


    Automatically Detecting the Resonance of Terrorist Movement Frames on the Web

    The ever-increasing use of the internet by terrorist groups as a platform for the dissemination of radical, violent ideologies is well documented. The internet has, in this way, become a breeding ground for potential lone-wolf terrorists; that is, individuals who commit acts of terror inspired by the ideological rhetoric promulgated by terrorist organizations. These individuals are characterized by their lack of formal affiliation with terror organizations, making them difficult to intercept with traditional intelligence techniques. The radicalization of individuals on the internet poses a considerable challenge to law enforcement and national security officials. This new medium of radicalization, however, also presents new opportunities for the interdiction of lone-wolf terrorism. This dissertation is an account of the development and evaluation of an information technology (IT) framework for detecting potentially radicalized individuals on social media sites and Web fora. Unifying Collective Action Framing Theory (CAFT) and a radicalization model of lone-wolf terrorism, this dissertation analyzes a corpus of propaganda documents produced by several, radically different, terror organizations. This analysis provides the building blocks to define a knowledge model of terrorist ideological framing that is implemented as a Semantic Web ontology. Using several techniques for ontology-guided information extraction, the resultant ontology can be accurately populated from textual data sources. This dissertation subsequently defines several techniques that leverage the populated ontological representation to automatically identify individuals who are potentially radicalized to one or more terrorist ideologies based on their postings on social media and other Web fora. The dissertation also discusses how the ontology can be queried using intuitive structured query languages to infer triggering events in the news.
The prototype system is evaluated in the context of classification and is shown to provide state-of-the-art results. The main outputs of this research are (1) an ontological model of terrorist ideologies, (2) an information extraction framework capable of identifying and extracting terrorist ideologies from text, (3) a classification methodology for classifying Web content as resonating with the ideology of one or more terrorist groups, and (4) a methodology for rapidly identifying news content of relevance to one or more terrorist groups.
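The classification output (3) can be caricatured as phrase matching against per-ideology framing vocabularies. This is a deliberately crude stand-in, not the dissertation's method: the mini-ontology below is an invented placeholder with neutral phrases, whereas the actual system uses a Semantic Web ontology populated by information-extraction techniques:

```python
# Invented placeholder "ontology": each ideology maps to indicative
# framing phrases. The real system derives these from CAFT analysis
# of propaganda corpora and stores them in a formal ontology.
ONTOLOGY = {
    "ideology_x": {"global struggle", "defend our people"},
    "ideology_y": {"system collapse", "final awakening"},
}

def resonance_scores(post: str) -> dict:
    """Count how many of each ideology's framing phrases the post contains."""
    text = post.lower()
    return {g: sum(p in text for p in phrases) for g, phrases in ONTOLOGY.items()}

def classify(post: str):
    """Return the best-matching ideology, or None when nothing resonates."""
    scores = resonance_scores(post)
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

Even this toy shows the shape of the pipeline the abstract describes: extract framing indicators from text, score a posting against each ideology's frame, and flag resonance above some threshold.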

    A systematic survey of online data mining technology intended for law enforcement

    As an increasing amount of crime takes on a digital aspect, law enforcement bodies must tackle an online environment generating huge volumes of data. With manual inspection becoming increasingly infeasible, law enforcement bodies are optimising online investigations through data-mining technologies. Such technologies must be well designed and rigorously grounded, yet no survey of the online data-mining literature exists which examines their techniques, applications and rigour. This article remedies this gap through a systematic mapping study describing online data-mining literature which visibly targets law enforcement applications, using evidence-based practices in survey-making to produce a replicable analysis which can be methodologically examined for deficiencies.