A Survey of Operations Research and Analytics Literature Related to Anti-Human Trafficking
Human trafficking is a compound social, economic, and human rights issue
occurring in all regions of the world. Understanding and addressing such a
complex crime requires effort from multiple domains and perspectives. As of
this writing, no systematic review exists of the Operations Research and
Analytics literature applied to the domain of human trafficking. The purpose of
this work is to fill this gap through a systematic literature review. Studies
matching our search criteria were found ranging from 2010 to March 2021. These
studies were gathered and analyzed to help answer the following three research
questions: (i) What aspects of human trafficking are being studied by
Operations Research and Analytics researchers? (ii) What Operations Research
and Analytics methods are being applied in the anti-human trafficking domain?
and (iii) What are the existing research gaps associated with (i) and (ii)? By
answering these questions, we illuminate the extent to which these topics have
been addressed in the literature, as well as inform future research
opportunities in applying analytical methods to advance the fight against human
trafficking.
Comment: 28 pages, 6 figures, 2 tables
Mining complex trees for hidden fruit: a graph-based computational solution to detect latent criminal networks: a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Technology at Massey University, Albany, New Zealand.
The detection of crime is a complex and difficult endeavour. Public and private organisations - focusing on law enforcement, intelligence, and compliance - commonly apply the rational isolated actor approach premised on observability and materiality. This is manifested largely as conducting entity-level risk management, sourcing 'leads' from reactive covert human intelligence sources and/or proactive sources by applying simple rules-based models. Focusing on discrete observable and material actors simply ignores that criminal activity exists within a complex system deriving its fundamental structural fabric from the complex interactions between actors - with those most unobservable likely to be both criminally proficient and influential. The graph-based computational solution developed to detect latent criminal networks is a response to the inadequacy of the rational isolated actor approach, which ignores the connectedness and complexity of criminality.
The core computational solution, written in the R language, consists of novel entity resolution, link discovery, and knowledge discovery technology. Entity resolution enables the fusion of multiple datasets with high accuracy (mean F-measure of 0.986 versus competitors' 0.872), generating a graph-based expressive view of the problem. Link discovery comprises link prediction and link inference, enabling the high-performance detection (accuracy of ~0.8 versus relevant published models' ~0.45) of unobserved relationships such as identity fraud. Knowledge discovery uses the fused graph generated and applies the 'GraphExtract' algorithm to create a set of subgraphs representing latent functional criminal groups, and a mesoscopic graph representing how this set of criminal groups is interconnected. Latent knowledge is generated from a range of metrics including the 'Super-broker' metric and attitude prediction.
The computational solution has been evaluated on a range of datasets that mimic an applied setting, demonstrating a scalable (tested on ~18 million node graphs) and performant (~33 hours runtime on a non-distributed platform) solution that successfully detects relevant latent functional criminal groups in around 90% of cases sampled, and enables the contextual understanding of the broader criminal system through the mesoscopic graph and associated metadata. The augmented data assets generated provide a multi-perspective systems view of criminal activity that enables advanced, informed decision making across the microscopic-mesoscopic-macroscopic spectrum.
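As an illustration of how entity-resolution results like those above are typically scored, the sketch below evaluates a toy matcher with the pairwise F-measure (the metric behind figures such as 0.986). The records, the match rule, and the ground truth are all illustrative assumptions, not the thesis's R implementation.

```python
from itertools import combinations

def normalise(name: str) -> str:
    """Crude canonicalisation; real systems use phonetic and edit-distance methods."""
    return "".join(c for c in name.lower() if c.isalnum())

def predicted_match(a: dict, b: dict) -> bool:
    # Toy rule (an assumption): same normalised name and same birth year.
    return normalise(a["name"]) == normalise(b["name"]) and a["dob"] == b["dob"]

def pairwise_f_measure(records, true_entity):
    """F-measure over all record pairs: harmonic mean of precision and recall."""
    tp = fp = fn = 0
    for a, b in combinations(records, 2):
        pred = predicted_match(a, b)
        actual = true_entity[a["id"]] == true_entity[b["id"]]
        if pred and actual:
            tp += 1
        elif pred:
            fp += 1
        elif actual:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

records = [
    {"id": 1, "name": "Jane A. Doe", "dob": 1980},
    {"id": 2, "name": "jane a doe", "dob": 1980},   # same person, messy spelling
    {"id": 3, "name": "John Smith", "dob": 1975},
]
truth = {1: "E1", 2: "E1", 3: "E2"}
print(pairwise_f_measure(records, truth))  # 1.0 on this toy data
```

In practice the pairwise F-measure is computed over millions of candidate pairs, which is why the blocking techniques used in entity-resolution systems matter so much.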
Named Entity Resolution in Personal Knowledge Graphs
Entity Resolution (ER) is the problem of determining when two entities refer
to the same underlying entity. The problem has been studied for over 50 years,
and most recently, has taken on new importance in an era of large,
heterogeneous 'knowledge graphs' published on the Web and used widely in
domains as wide-ranging as social media, e-commerce, and search. This chapter
will discuss the specific problem of named ER in the context of personal
knowledge graphs (PKGs). We begin with a formal definition of the problem, and
the components necessary for doing high-quality and efficient ER. We also
discuss some challenges that are expected to arise for Web-scale data. Next, we
provide a brief literature review, with a special focus on how existing
techniques can potentially apply to PKGs. We conclude the chapter by covering
some applications, as well as promising directions for future research.
Comment: To appear as a book chapter by the same name in an upcoming (Oct. 2023) book 'Personal Knowledge Graphs (PKGs): Methodology, tools and applications' edited by Tiwari et al.
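The "components necessary for doing high-quality and efficient ER" that the chapter refers to conventionally include a blocking step (to avoid comparing all pairs) followed by a pairwise matcher. A minimal sketch of that two-stage pipeline, with an assumed record schema, blocking key, and similarity threshold:

```python
from collections import defaultdict
from difflib import SequenceMatcher
from itertools import combinations

def blocking_key(record):
    # Block on the first three letters of the surname (an assumed schema);
    # only records sharing a block are ever compared.
    return record["surname"][:3].lower()

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def resolve(records, threshold=0.85):
    """Return id pairs judged to refer to the same underlying entity."""
    blocks = defaultdict(list)
    for r in records:
        blocks[blocking_key(r)].append(r)
    matches = []
    for block in blocks.values():
        for a, b in combinations(block, 2):
            if similarity(a["name"], b["name"]) >= threshold:
                matches.append((a["id"], b["id"]))
    return matches

people = [
    {"id": 1, "surname": "Mendez", "name": "Maria Mendez"},
    {"id": 2, "surname": "Mendes", "name": "Maria Mendes"},   # likely same entity
    {"id": 3, "surname": "Okafor", "name": "Chidi Okafor"},
]
print(resolve(people))  # [(1, 2)]
```

The blocking step is what makes ER feasible at Web scale: the quadratic comparison cost is paid only inside each (small) block, at the risk of missing matches that land in different blocks.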
LearnFCA: A Fuzzy FCA and Probability Based Approach for Learning and Classification
Formal concept analysis (FCA) is a mathematical theory based on lattice and order theory, used for data analysis and knowledge representation. Over the past several years, many of its extensions have been proposed and applied in several domains including data mining, machine learning, knowledge management, the semantic web, software development, chemistry, biology, medicine, data analytics, and ontology engineering.
This thesis reviews the state of the art of the theory of Formal Concept Analysis (FCA) and its various extensions that have been developed and well studied in the past several years. We discuss their historical roots and reproduce the original definitions and derivations with illustrative examples. Further, we provide a literature review of its applications and the various approaches adopted by researchers in the areas of data analysis and knowledge management, with emphasis on data-learning and classification problems.
We propose LearnFCA, a novel approach based on FuzzyFCA and probability theory for learning and classification problems. LearnFCA uses an enhanced version of FuzzyLattice, which has been developed to store class labels and probability vectors and can classify instances with encoded and unlabelled features. We evaluate LearnFCA on encodings from three datasets - MNIST, Omniglot, and cancer images - with interesting results and varying degrees of success.
Adviser: Dr Jitender Deogu
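The formal concepts that FCA (and hence LearnFCA's fuzzy lattice) is built on can be illustrated with a tiny crisp binary context: a concept is a pair (A, B) where A is exactly the set of objects sharing all attributes in B, and B is exactly the set of attributes common to all objects in A. The context below and the brute-force closure enumeration are purely illustrative; real FCA implementations, and the fuzzy extension the thesis uses, are far more elaborate.

```python
from itertools import combinations

context = {          # object -> set of attributes it has (an assumed toy context)
    "duck":  {"flies", "swims"},
    "eagle": {"flies", "hunts"},
    "shark": {"swims", "hunts"},
}
attributes = {"flies", "swims", "hunts"}

def common_attrs(objs):
    """A': attributes shared by every object in objs."""
    sets = [context[o] for o in objs]
    return set.intersection(*sets) if sets else set(attributes)

def common_objs(attrs):
    """B': objects having every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

# Enumerate concepts by closing every object subset: B = A', then A = B'.
concepts = set()
objs_list = list(context)
for r in range(len(objs_list) + 1):
    for subset in combinations(objs_list, r):
        B = common_attrs(subset)
        A = common_objs(B)          # closure of the object subset
        concepts.add((frozenset(A), frozenset(B)))

for A, B in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(A), "|", sorted(B))
```

This toy context yields eight concepts, which ordered by extent inclusion form the concept lattice that FCA-based classifiers such as LearnFCA navigate.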
Goal-driven Elaboration of Crime Scripts
This research investigates a crime modelling technique known as crime scripting. Crime scripts are generated by crime analysts to improve the understanding of security incidents, and in particular the criminal modus operandi (i.e., how crimes occur), to help identify cost-effective crime prevention measures. This thesis makes four contributions in this area. First, a systematic review of the crime scripting literature that provides a comprehensive and up-to-date understanding of crime scripting practice and identifies potential issues with current crime scripting methods. Second, a comparative analysis of crime scripts that reveals differences and similarities between the scripts generated by different analysts and confirms the limitations of intuitive approaches to crime scripting. Third, an experimental study, which shows that the content of crime scripts is influenced by what scripters know about the future use of their scripts. And fourth, a novel crime scripting framework inspired by business process modelling and goal-based modelling techniques. This framework aims to help researchers and practitioners better understand the activities involved in the development of crime scripts, guide them in the creation of scripts, and facilitate the identification of suitable crime prevention measures.
Blockchain for the Healthcare Supply Chain: A Systematic Literature Review
A supply chain (SC) is a network of interests, information, and materials involved in processes that produce value for customers. The implementation of blockchain technology in SC management in healthcare has begun to produce results. This review aims to summarize how blockchain technology has been used to address SC challenges in healthcare, specifically for drugs, medical devices (DMDs), and blood, organs, and tissues (BOTs). A systematic review was conducted by following the PRISMA guidelines and searching the PubMed and ProQuest databases. English-language studies were included, while non-primary studies, as well as surveys, were excluded. After full-text assessment, 28 articles met the criteria for inclusion. Of these, 15 (54%) were classified as simulation studies, 12 (43%) were classified as theoretical, and only one was classified as a real case study. Most of the articles (n = 23, 82%) included the adoption of smart contracts. The findings of this systematic review indicated a significant but immature interest in the topic, with diverse ideas and methodologies, but without effective real-life applications.
Novel Methods for Forensic Multimedia Data Analysis: Part I
The increased usage of digital media in daily life has resulted in demand for novel multimedia data analysis techniques that can help to use these data for forensic purposes. Processing such data for police investigation and as evidence in a court of law - so that data interpretation is reliable, trustworthy, and efficient in terms of human time and other resources - will greatly help to speed up investigations and make them more effective. If such data are to be used as evidence in a court of law, techniques that can confirm origin and integrity are necessary. In this chapter, we propose a new concept for multimedia processing techniques for varied multimedia sources. We describe the background and motivation for our work and explain the overall system architecture. We present the data to be used. After a review of the state of the art related to the multimedia data we consider in this work, we describe the methods and techniques we are developing that go beyond the state of the art. The work is continued in Part II of this chapter.
Dealings on the Dark Web: An Examination of the Trust, Consumer Satisfaction, and the Efficacy of Interventions Against a Dark Web Cryptomarket
Abstract
Objective. The overarching goal of this thesis is to better understand not only the network dynamics which undergird the function and operation of cryptomarkets, but also the nature of consumer satisfaction and trust on these platforms. More specifically, I endeavour to push the cryptomarket literature beyond its current theoretical and methodological limits by documenting the network structure of a cryptomarket, the factors that predict vendor trust, the efficacy of targeted strategies on the transactional network of a cryptomarket, and the dynamics which facilitate consumer satisfaction despite information asymmetry. Moreover, I also aim to test the generalizability of findings made in prior cryptomarket studies (Duxbury and Haynie, 2017; 2020; Norbutas, 2018).
Methods. I realize the aims of this research by using a buyer-seller dataset from the Abraxas cryptomarket (Branwen et al., 2015). Given the differences between the topics and the research questions featured, this thesis employs a variety of methodological techniques. Chapter two uses a combination of descriptive network analysis, community detection analysis, statistical modelling, and trajectory modelling. Chapter three utilizes three text analytic strategies: descriptive text analysis, sentiment analysis, and textual feature extraction. Finally, chapter four employs sequential node deletion pursuant to six law enforcement strategies: lead k (degree centrality), eccentricity, unique items bought/sold, cumulative reputation score, total purchase price, and random targeting.
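The sequential node deletion used in chapter four can be sketched as follows: repeatedly remove the node currently ranked highest by the chosen criterion (here degree centrality, the "lead k" strategy) and record the size of the largest connected component as a disruption measure. The toy vendor-buyer network below is an assumption; the thesis works on the full Abraxas transaction data and five further targeting strategies.

```python
from collections import defaultdict

# Toy market: two vendors (v1, v2), six buyers; b4 buys from both vendors.
edges = [("v1", "b1"), ("v1", "b2"), ("v1", "b3"), ("v1", "b4"),
         ("v2", "b5"), ("v2", "b6"), ("b4", "v2")]

def largest_component(nodes, edges):
    """Size of the largest connected component among surviving nodes."""
    adj = defaultdict(set)
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v)
            adj[v].add(u)
    seen, best = set(), 0
    for n in nodes:
        if n in seen:
            continue
        stack, size = [n], 0
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            size += 1
            stack.extend(adj[cur] - seen)
        best = max(best, size)
    return best

nodes = {n for e in edges for n in e}
sizes = []
while nodes:
    live = [(u, v) for u, v in edges if u in nodes and v in nodes]
    degree = {n: sum(n in e for e in live) for n in nodes}
    target = max(nodes, key=degree.get)   # delete the highest-degree node first
    nodes.discard(target)
    sizes.append(largest_component(nodes, edges))
print(sizes)  # the network collapses once the two vendors are removed
```

Even on this toy graph the power-law flavour of the results is visible: deleting the first two (vendor) nodes does nearly all of the disruptive work, while the remaining deletions change little.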
Results. Social network analysis of the Abraxas cryptomarket revealed a large and diffuse network where the majority of buyers purchased from a small cohort of vendors. This theme of preferential selection of vendors on the part of buyers is repeated in other findings within this study. More generally, the Abraxas transactional network can be viewed as a set of transactional islands as opposed to a large, densely connected conglomeration of vendors and buyers. With regard to buyer feedback, buyers are generally pleased with their transactions on Abraxas as long as the product arrives on time and is as advertised. In general, vendors have a relatively low bar to clear when it comes to satisfying their customers. Based on the results of the sequential node deletion, random targeting was found to be ineffective across the five outcome measures, producing a minimal and slow disruptive effect. Finally, these strategies follow a power law in which a small percentage of deleted nodes is responsible for an outsized proportion of the disruptive impact.
Conclusion. As with all applied research examining emergent phenomena, this thesis lends itself to a more refined understanding of dark web cryptomarkets. While the results and conclusions drawn from them are not perfectly generalizable to all cryptomarkets, they should serve to inform law enforcement on the dynamics which undergird these markets. To this extent, a sombre consideration of trust, consumer satisfaction, and the tactical effectiveness of interventions is a necessary step towards the development of more effective countermeasures against these illicit online marketplaces. For law enforcement to be more effective against cryptomarkets, it is advised that an evidence-based approach be taken.
Genes and Gene Networks Related to Age-associated Learning Impairments
The incidence of cognitive impairments, including age-associated spatial learning impairment (ASLI), has risen dramatically in past decades due to increasing human longevity. To better understand the genes and gene networks involved in ASLI, data from a number of past gene expression microarray studies in rats are integrated and used to perform a meta- and network analysis. Results from the data selection and preprocessing steps show that for effective downstream analysis to take place both batch effects and outlier samples must be properly removed. The meta-analysis undertaken in this research has identified significant differentially expressed genes across both age and ASLI in rats. Knowledge based gene network analysis shows that these genes affect many key functions and pathways in aged compared to young rats. The resulting changes might manifest as various neurodegenerative diseases/disorders or syndromic memory impairments at old age. Other changes might result in altered synaptic plasticity, thereby leading to normal, non-syndromic learning impairments such as ASLI.
Next, I employ weighted gene co-expression network analysis (WGCNA) on the datasets. I identify several reproducible network modules, each highly significant, with genes functioning in specific biological functional categories. The analysis identifies a 'learning and memory'-specific module containing many potential key ASLI hub genes. Functions of these ASLI hub genes link a different set of mechanisms to learning and memory formation, which the meta-analysis was unable to detect. This study generates some new hypotheses related to the new candidate genes and networks in ASLI, which could be investigated through future research.
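The co-expression module idea behind WGCNA can be illustrated with a toy version: link genes whose expression profiles correlate strongly across samples, and treat connected groups as candidate modules. The expression data and the hard correlation threshold below are illustrative assumptions; real WGCNA uses soft thresholding and hierarchical clustering rather than this simplification.

```python
from math import sqrt

expression = {   # gene -> expression across five samples (assumed toy data)
    "geneA": [1.0, 2.0, 3.0, 4.0, 5.0],
    "geneB": [2.1, 3.9, 6.2, 8.0, 9.9],   # tracks geneA
    "geneC": [5.0, 4.0, 3.0, 2.0, 1.0],   # anti-correlated with geneA
    "geneD": [3.3, 1.2, 4.8, 2.0, 3.1],   # unrelated
}

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def modules(expr, threshold=0.9):
    """Connected components of the |correlation| >= threshold gene network."""
    genes = list(expr)
    adj = {g: set() for g in genes}
    for i, g in enumerate(genes):
        for h in genes[i + 1:]:
            if abs(pearson(expr[g], expr[h])) >= threshold:  # |r|: anti-correlation counts too
                adj[g].add(h)
                adj[h].add(g)
    seen, out = set(), []
    for g in genes:
        if g in seen:
            continue
        stack, comp = [g], set()
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            comp.add(cur)
            stack.extend(adj[cur] - seen)
        out.append(sorted(comp))
    return out

print(modules(expression))  # [['geneA', 'geneB', 'geneC'], ['geneD']]
```

Hub genes in this picture are simply the most highly connected genes inside a module, which is why module detection and hub identification go hand in hand in the analysis above.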