Prevent Tragedies: A case study in female-targeted strategic communications in the United Kingdom’s Prevent counter-terrorism policy
While international revolutionary groups have frequently attracted international support, the declaration of the caliphate by Abu Bakr al-Baghdadi in 2014, and the subsequent flow of foreign fighters leaving their home countries to fight in Syria, created significant concern for Western governments. The United Kingdom was a major source of this foreign fighter flow, becoming a significant concern in 2014 and by 2015 accounting for some 700-760 fighters, the majority affiliated with the Islamic State and with a growing number of women joining the group. While Prevent, the preventative pillar of the United Kingdom’s counter-terrorism strategy, was by 2014 well accustomed to intervening in cases of male radicalization, it was not well prepared to handle female radicalization. This article provides a case study of the UK police response to these concerns. In 2014 the Metropolitan Police and Counter-Terrorism Policing HQ began work on Prevent Tragedies, a strategic communications campaign. The campaign sought to encourage women, primarily mothers, to talk with younger women and discourage them from travelling to Syria. It also sought to make these women aware of the government’s Prevent policy, and to encourage them to submit reports to Prevent should they be concerned about the radicalization of persons close to them. Using documents obtained through Freedom of Information requests, and material gathered from the Prevent Tragedies website, this article explores how the idea of the “mother” as a nurturing and caring subject was utilized to try to counter female radicalization. It analyses how stereotypical ideas about pacific femininity and female political naivety were used to further the narrative of “groomed” women who were unaware of the brutal nature of Islamic State, and therefore could not have ideologically supported the organization when they travelled to Syria.
While this undermines ideological support for Islamic State, it simultaneously draws on – and exposes – a current in U.K. counter-terrorism that underplays female radicalism, hampering our full understanding of gendered radicalization.
Investigating Prompt Engineering in Diffusion Models
With the spread of Text2Img diffusion models such as DALL-E 2, Imagen, Midjourney and Stable Diffusion, one challenge that artists face is selecting the right prompts to achieve the desired artistic output. We present techniques for measuring the effect that specific words and phrases in prompts have, and (in the Appendix) present guidance on the selection of prompts to produce desired effects.

Comment: Paper submitted for the Creativity and Design workshop at NeurIPS 2022. (4 pages including references + 7-page appendix.) We would like to thank Google and the ML Developer Programs Team for their assistance and compute credits used in the experiments for this paper.
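One way such word-level measurement could be approached (a minimal sketch, not necessarily the paper's method) is leave-one-word-out ablation: score the full prompt with some prompt-quality measure, then re-score with each word removed and attribute the score drop to that word. The `toy_score` function below is a placeholder assumption; in practice the scorer would involve generating images and measuring them (e.g. via CLIP similarity).

```python
from typing import Callable, Dict, List

def word_effects(prompt: str, score: Callable[[str], float]) -> Dict[str, float]:
    """Estimate each word's effect by ablation: score the full prompt,
    re-score with one word removed, and report the resulting drop."""
    words: List[str] = prompt.split()
    base = score(prompt)
    effects: Dict[str, float] = {}
    for i, w in enumerate(words):
        ablated = " ".join(words[:i] + words[i + 1:])
        effects[w] = base - score(ablated)
    return effects

# Placeholder scorer for illustration only: rewards a style keyword.
# A real scorer would generate images and measure the stylistic effect.
def toy_score(p: str) -> float:
    return 1.0 if "watercolor" in p else 0.0

print(word_effects("a watercolor fox", toy_score))
# "watercolor" receives the full score drop; the other words none
```

The same loop works unchanged with any real scorer plugged in, since the measurement logic is independent of how a prompt is judged.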
Unsupervised Natural Question Answering with a Small Model
The recent (2019-02) demonstration of the power of huge language models such as GPT-2 to memorise the answers to factoid questions raises questions about the extent to which knowledge is being embedded directly within these large models. This short paper describes an architecture through which much smaller models can also answer such questions, by making use of 'raw' external knowledge. The contribution of this work is that the methods presented here rely on unsupervised learning techniques, complementing the unsupervised training of the Language Model. The goal of this line of research is to be able to add knowledge explicitly, without extensive training.

Comment: Accepted paper for the FEVER workshop at EMNLP-IJCNLP 2019. (4 pages + references.)
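The core idea of consulting 'raw' external knowledge rather than model weights can be sketched with a minimal, training-free retrieval step: select the corpus sentence sharing the most content words with the question, then let a small model read just that sentence. The overlap heuristic and stop-word list below are illustrative assumptions, not the paper's architecture.

```python
import re
from typing import List

# Tiny illustrative stop-word list; a real system would use a fuller one.
STOP = {"the", "a", "an", "is", "was", "of", "who", "what", "where"}

def best_sentence(question: str, corpus: List[str]) -> str:
    """Unsupervised lookup into raw text: return the corpus sentence
    with the greatest content-word overlap with the question."""
    q_words = set(re.findall(r"\w+", question.lower())) - STOP
    def overlap(sent: str) -> int:
        return len(q_words & set(re.findall(r"\w+", sent.lower())))
    return max(corpus, key=overlap)

corpus = [
    "Paris is the capital of France.",
    "GPT-2 was released by OpenAI in 2019.",
    "The Nile flows through Egypt.",
]
print(best_sentence("What is the capital of France?", corpus))
# → "Paris is the capital of France."
```

A small answer-extraction model would then operate on the retrieved sentence alone, so factual knowledge lives in the corpus rather than in the model's parameters.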
Paraphrasing with Large Language Models
Recently, large language models such as GPT-2 have shown themselves to be extremely adept at text generation, and have also been able to achieve high-quality results in many downstream NLP tasks such as text classification, sentiment analysis and question answering with the aid of fine-tuning. We present a useful technique for using a large language model to perform the task of paraphrasing on a variety of texts and subjects. Our approach is demonstrated to be capable of generating paraphrases not only at a sentence level but also for longer spans of text such as paragraphs, without needing to break the text into smaller chunks.

Comment: Accepted paper for the WNGT workshop at EMNLP-IJCNLP 2019. (7 pages including references and supplemental material.)
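Casting paraphrasing as conditional text generation typically means constructing a prompt of (original, paraphrase) pairs and letting the language model continue after the final cue. The delimiter scheme below is a generic sketch, not the paper's exact conditioning format; the example pair is invented for illustration.

```python
from typing import List, Tuple

def paraphrase_prompt(examples: List[Tuple[str, str]], text: str) -> str:
    """Build a conditioning prompt that frames paraphrasing as
    continuation: show worked (original, paraphrase) pairs, then the
    new text, ending at the cue the model should complete."""
    parts = []
    for orig, para in examples:
        parts.append(f"Original: {orig}\nParaphrase: {para}\n")
    parts.append(f"Original: {text}\nParaphrase:")
    return "\n".join(parts)

demo = [("The cat sat on the mat.", "A cat was sitting on the mat.")]
print(paraphrase_prompt(demo, "It is raining heavily."))
```

The resulting string would be fed to the language model's generation routine; because the conditioning is just text, the same scheme extends from single sentences to whole paragraphs.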
Red Dragon AI at TextGraphs 2019 Shared Task: Language Model Assisted Explanation Generation
The TextGraphs-13 Shared Task on Explanation Regeneration asked participants to develop methods to reconstruct gold explanations for elementary science questions. Red Dragon AI's entries used the language of the questions and explanation text directly, rather than constructing a separate graph-like representation. Our leaderboard submission placed us 3rd in the competition, but we present here three methods of increasing sophistication, each of which scored successively higher on the test set after the competition close.

Comment: Accepted paper for the TextGraphs-13 workshop at EMNLP-IJCNLP 2019. (5 pages including references.)
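Working with the question and explanation language directly can be sketched, at its simplest, as ranking candidate explanation sentences by word overlap with the question. This is only a baseline-level illustration of the text-based framing; the paper's actual methods were more sophisticated, and the example facts below are invented.

```python
from typing import List

def rank_explanations(question: str, facts: List[str]) -> List[str]:
    """Rank candidate explanation sentences by shared-word overlap
    with the question text, highest overlap first -- operating on
    the raw text rather than a graph representation."""
    q = set(question.lower().split())
    return sorted(facts, key=lambda f: -len(q & set(f.lower().split())))

facts = [
    "Blue light is scattered more than red light by the atmosphere.",
    "A seed grows into a plant.",
    "Friction produces heat.",
]
print(rank_explanations("why is blue light scattered by the sky", facts))
```

More sophisticated variants in the same spirit could replace raw-word overlap with language-model similarity scores while keeping the ranking framing intact.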