821 research outputs found
Exploring Online Fraudsters’ Decision-Making Processes
A growing body of evidence suggests that the situational context influences the social engineering (SE) characteristics and tactics offenders (i.e., fraudsters) deploy during the development of an online fraud event. Several attempts have been made to examine the macro-social development of online fraud events. Nevertheless, macro-level social examinations have been largely unsuccessful in combating online fraud because offenders and victims, including offender-victims, are not computers; offenders' interactions, motives, and tactics are therefore very difficult to surmise. To address online fraud, three independent studies were conducted to explore what is known about online fraudsters and to investigate what present-day literature and governmental reports do not address. Specifically, a scoping review of offenders' SE characteristics and tactics is conducted, followed by two empirical investigations examining the linguistic cues used by offenders and offender-victims. Together, these studies examine the influence of the situational context on offenders' decision-making processes, such as their SE characteristics and tactics. The results and limitations associated with each study, along with recommendations for further research, are discussed.
Artificial intelligence and UK national security: Policy considerations
RUSI was commissioned by GCHQ to conduct an independent research study into the use of artificial intelligence (AI) for national security purposes. The aim of this project is to establish an independent evidence base to inform future policy development regarding national security uses of AI. The findings are based on in-depth consultation with stakeholders from across the UK national security community, law enforcement agencies, private sector companies, academic and legal experts, and civil society representatives. This was complemented by a targeted review of existing literature on the topic of AI and national security.
The research has found that AI offers numerous opportunities for the UK national security community to improve the efficiency and effectiveness of existing processes. AI methods can rapidly derive insights from large, disparate datasets and identify connections that would otherwise go unnoticed by human operators. However, in the context of national security and the powers given to UK intelligence agencies, use of AI could give rise to additional privacy and human rights considerations which would need to be assessed within the existing legal and regulatory framework. For this reason, enhanced policy and guidance is needed to ensure the privacy and human rights implications of national security uses of AI are reviewed on an ongoing basis as new analysis methods are applied to data.
Man vs machine – Detecting deception in online reviews
This study focused on three main research objectives: analyzing the methods used to identify deceptive online consumer reviews, evaluating insights provided by multi-method automated approaches based on individual and aggregated review data, and formulating a review interpretation framework for identifying deception. The theoretical framework is based on two critical deception-related models, information manipulation theory and self-presentation theory. The findings confirm the interchangeable characteristics of the various automated text analysis methods in drawing insights about review characteristics and underline their significant complementary aspects. An integrative multi-method model that approaches the data at the individual and aggregate levels provides more complex insights regarding the quantity and quality of review information, sentiment, cues about its relevance and contextual information, perceptual aspects, and cognitive material.
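Automated approaches of the kind this abstract compares typically begin by extracting simple review-level linguistic cues and aggregating them across reviews. The sketch below is purely illustrative and is not the study's actual method; the word lists, cue names, and thresholds are invented for the example:

```python
import re

# Toy lexicons standing in for the dictionaries real text-analysis tools use.
# These lists are assumptions for illustration, not the study's resources.
FIRST_PERSON = {"i", "me", "my", "mine", "we", "our"}
POSITIVE = {"great", "amazing", "perfect", "excellent", "love"}
NEGATIVE = {"bad", "awful", "terrible", "poor", "hate"}

def review_cues(text):
    """Compute individual-level cues for one review: length,
    first-person pronoun rate, and a crude sentiment balance."""
    words = re.findall(r"[a-z']+", text.lower())
    n = len(words) or 1  # avoid division by zero on empty input
    return {
        "word_count": len(words),
        "first_person_rate": sum(w in FIRST_PERSON for w in words) / n,
        "sentiment_balance": (sum(w in POSITIVE for w in words)
                              - sum(w in NEGATIVE for w in words)) / n,
    }

cues = review_cues("I love this place, my stay was perfect and amazing!")
```

Cues like these can then be averaged over a reviewer's or product's full set of reviews, giving the aggregate-level view the abstract contrasts with the individual-level one.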
The Design, Development and Validation of a Persuasive Content Generator
This paper addresses the automatic generation of persuasive content to influence users' attitudes and behaviour. Our research extends current approaches by leveraging individuals' social media profiles and activity to personalize the persuasive content. Unlike most other implemented persuasive technology, our system is generic and can be adapted to any domain where collections of electronic text are available. Using the Yale Attitude Change approach, we describe the multi-layered Pyramid of Individualization model and the design, development, and validation of integrated software that can generate individualized persuasive content based on a user's social media profile and activity. Results indicate the proposed system can create personalized information that (a) matches readers' interests, (b) is tailored to their ability to understand the information, and (c) is supported by trustable sources.
Argumentation models and their use in corpus annotation: practice, prospects, and challenges
The study of argumentation is transversal to several research domains, from philosophy to linguistics, from the law to computer science and artificial intelligence. In discourse analysis, several distinct models have been proposed to harness argumentation, each with a different focus or aim. To analyze the use of argumentation in natural language, several corpus annotation efforts have been carried out, with a more or less explicit grounding in one of such theoretical argumentation models. In fact, given the recent growing interest in argument mining applications, argument-annotated corpora are crucial to train machine learning models in a supervised way. However, the proliferation of such corpora has led to a wide disparity in the granularity of the argument annotations employed. In this paper, we review the most relevant theoretical argumentation models, after which we survey argument annotation projects closely following those theoretical models. We also highlight the main simplifications that are often introduced in practice. Furthermore, we glimpse other annotation efforts that are not so theoretically grounded but instead follow a shallower approach. It turns out that most argument annotation projects make their own assumptions and simplifications, both in terms of the textual genre they focus on and in terms of adapting the adopted theoretical argumentation model for their own agenda. Issues of compatibility among argument-annotated corpora are discussed by looking at the problem from a syntactical, semantic, and practical perspective. Finally, we discuss current and prospective applications of models that take advantage of argument-annotated corpora.
A Modular and Adaptive System for Business Email Compromise Detection
The growing sophistication of Business Email Compromise (BEC) and spear phishing attacks poses significant challenges to organizations worldwide. The techniques featured in traditional spam and phishing detection are insufficient due to the tailored nature of modern BEC attacks, as they often blend in with the regular benign traffic. Recent advances in machine learning, particularly in Natural Language Understanding (NLU), offer a promising avenue for combating such attacks, but in a practical system, due to limitations such as data availability, operational costs, verdict explainability requirements, or a need to robustly evolve the system, it is essential to combine multiple approaches together. We present CAPE, a comprehensive and efficient system for BEC detection that has been proven in a production environment for a period of over two years. Rather than being a single model, CAPE is a system that combines independent ML models and algorithms detecting BEC-related behaviors across various email modalities such as text, images, metadata, and the email's communication context. This decomposition makes CAPE's verdicts naturally explainable. In the paper, we describe the design principles and constraints behind its architecture, as well as the challenges of model design, evaluation, and adapting the system continuously through a Bayesian approach that combines limited data with domain knowledge. Furthermore, we elaborate on several specific behavioral detectors, such as those based on Transformer neural architectures.
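The abstract does not disclose CAPE's exact scoring, but the general idea of combining independent per-modality detectors while handling limited labeled data with priors can be sketched as a naive Bayes log-odds aggregation with Beta-prior smoothing. All detector names, counts, and priors below are hypothetical, chosen only to illustrate the technique:

```python
import math

# Hypothetical per-detector statistics: how often each detector fired on
# known BEC vs. benign emails, plus Beta(a, b) pseudo-counts encoding
# domain knowledge when labeled data is scarce. Illustrative values only.
DETECTOR_STATS = {
    # name: (fires_on_bec, bec_total, fires_on_benign, benign_total, a, b)
    "urgency_language": (45, 60, 30, 1000, 2.0, 2.0),
    "lookalike_sender": (20, 60, 1, 1000, 2.0, 2.0),
    "new_thread_context": (45, 60, 200, 1000, 2.0, 2.0),
}

def smoothed_rate(fires, total, a, b):
    """Posterior mean of a Beta(a, b) prior updated with observed counts."""
    return (fires + a) / (total + a + b)

def bec_log_odds(fired, prior_bec=0.01):
    """Combine detector verdicts into a log-odds BEC score.

    `fired` maps detector name -> bool. Detectors are assumed
    conditionally independent given the class (naive Bayes).
    """
    score = math.log(prior_bec / (1.0 - prior_bec))
    for name, (fb, nb, fg, ng, a, b) in DETECTOR_STATS.items():
        p_bec = smoothed_rate(fb, nb, a, b)      # P(fires | BEC)
        p_benign = smoothed_rate(fg, ng, a, b)   # P(fires | benign)
        if fired.get(name, False):
            score += math.log(p_bec / p_benign)
        else:
            score += math.log((1.0 - p_bec) / (1.0 - p_benign))
    return score

# An email triggering the urgency and lookalike-sender detectors should
# score far above one that triggers no detectors at all.
suspicious = bec_log_odds({"urgency_language": True, "lookalike_sender": True})
benign = bec_log_odds({})
```

Because each detector contributes a separate additive evidence term, a verdict can be explained by listing the terms that raised the score, which mirrors the explainability property the abstract attributes to CAPE's decomposition.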
Combating Fake News on Social Media: A Framework, Review, and Future Opportunities
Social media platforms facilitate the sharing of a vast magnitude of information in split seconds among users. However, some false information is also widely spread, generally referred to as "fake news". This can have major negative impacts on individuals and societies. Unfortunately, people are often not able to correctly distinguish fake news from truth. Therefore, there is an urgent need to find effective mechanisms to fight fake news on social media. To this end, this paper adapts the Straub Model of Security Action Cycle to the context of combating fake news on social media. It uses the adapted framework to classify the vast literature on fake news by action cycle phase (i.e., deterrence, prevention, detection, and mitigation/remedy). Based on a systematic and interdisciplinary review of the relevant literature, we analyze the status and challenges in each stage of combating fake news, followed by introducing future research directions. These efforts allow the development of a holistic view of the research frontier on fighting fake news online. We conclude that this is a multidisciplinary issue and, as such, a collaborative effort from different fields is needed to effectively address this problem.