
    Real Time Crime Prediction Using Social Media

    Crime continues to rise and to harm national economies, despite numerous studies on crime prediction aimed at minimising crime rates. Data mining techniques for crime prediction have typically relied on historical records, and the resulting models are mostly country-specific. In fact, only a few earlier studies on crime prediction follow a standard data mining procedure. Given the current worldwide trend in which criminals routinely publish their criminal intent on social media and invite others to view or engage in crimes, a more dynamic alternative strategy is needed. The goal of this research is to improve the performance of crime prediction models. This thesis therefore explores the potential of combining information from social media (Twitter) with historical crime data for crime prediction. Using data mining techniques, it also identifies the feature engineering most relevant to the United Kingdom dataset, which could improve crime prediction model performance. Additionally, this study presents a function that could be used across the United Kingdom for data cleansing, pre-processing, and feature engineering. A Shiny app was also used to display tweet sentiment trends to support crime prevention in near-real time. Exploratory analysis is essential for revealing the data pre-processing and feature engineering needed before feeding the data into a machine learning model for efficient results. Based on the documented studies available, this is the first research to conduct a full exploratory analysis of historical British crime statistics using the historical stop-and-search dataset. Based on the findings of this exploratory study, an algorithm was created to clean the data and prepare it for further analysis and model creation.
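As a loose illustration (not the thesis pipeline), an hourly sentiment feature of the kind joined to crime records might be built from a lexicon-based polarity score; the word lists and tweets below are invented for the sketch:

```python
# Hedged sketch: toy lexicon-based tweet polarity, aggregated per hour.
POSITIVE = {"safe", "good", "happy", "calm"}
NEGATIVE = {"rob", "attack", "fight", "steal", "afraid"}

def polarity(tweet):
    """Score one tweet: +1 per positive token, -1 per negative token."""
    tokens = tweet.lower().split()
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

def hourly_polarity(tweets_by_hour):
    """Average polarity per hour -- the kind of feature merged with crime data."""
    return {hour: sum(map(polarity, tweets)) / len(tweets)
            for hour, tweets in tweets_by_hour.items() if tweets}

sample = {13: ["planning to rob the shop", "feeling safe and calm today"],
          14: ["big fight outside", "they steal and attack people here"]}
print(hourly_polarity(sample))  # {13: 0.5, 14: -1.5}
```

A production system would replace the toy lexicon with a proper sentiment model, but the hourly aggregation step is the part that turns free-text tweets into a numeric model feature.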
This is a significant contribution, as it provides a ready-to-use dataset for future research, particularly for non-experts constructing models to forecast crime or conducting investigations across the roughly 32 police districts of the United Kingdom. Moreover, this is the first study to present a complete collection of geo-spatial parameters for training a crime prediction model, combining demographic data from the same United Kingdom source with hourly sentiment polarity that was not restricted to a Twitter keyword search. Six base models frequently cited in the previous literature were selected, trained on the historical stop-and-search crime dataset, evaluated on test data, and finally validated on the London and Kent crime datasets. Two datasets were created from the Twitter and historical data: historical crime data with Twitter sentiment scores and historical crime data without them. Six of the most prevalent machine learning classifiers (random forest, decision tree, k-nearest neighbours, support vector machine, neural network, and naïve Bayes) were trained and tested on these datasets. The hyperparameters of each of the six models were then tuned using random grid search. Voting classifiers and a logistic-regression stacked ensemble of the different models were also trained and tested on the same datasets to improve on the individual model performance. In addition, two combinations of stacked ensembles of multiple models were constructed to identify the most suitable models for crime prediction; based on their performance, the appropriate prediction model for the UK dataset was selected.
In terms of interpretation, this research differs from most earlier studies that employed Twitter data in that several methodologies were used to show how each attribute contributed to the construction of the model, and the findings were discussed and interpreted in the context of the study. Further, a Shiny app visualisation tool was designed to display each tweet's sentiment score, text, user screen name, and vicinity, allowing the investigation of criminal actions in near-real time. The evaluation of the models revealed that random forest, decision tree, and k-nearest neighbour outperformed the other models, with decision tree and random forest performing most consistently on test data.
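The hard-voting step used to combine the base classifiers can be sketched as a simple majority-vote combiner; the crime labels and per-model predictions below are hypothetical, not drawn from the thesis data:

```python
# Hedged sketch: hard-voting ensemble over base-model label predictions.
from collections import Counter

def majority_vote(predictions_per_model):
    """Combine per-model predictions (one list per model) by majority vote."""
    n_samples = len(predictions_per_model[0])
    combined = []
    for i in range(n_samples):
        votes = [preds[i] for preds in predictions_per_model]
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

# Hypothetical predictions from three base models on four records
rf   = ["theft", "assault", "theft", "theft"]
tree = ["theft", "theft",   "theft", "assault"]
knn  = ["assault", "assault", "theft", "theft"]
print(majority_vote([rf, tree, knn]))  # ['theft', 'assault', 'theft', 'theft']
```

In practice one would use a library implementation (e.g. a soft- or hard-voting classifier) so the ensemble can be tuned alongside its base models, but the combining rule itself is just this vote.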

    Full Issue 8.3


    Reasoning in criminal intelligence analysis through an argumentation theory-based framework

    This thesis provides an in-depth analysis of criminal intelligence analysts' analytical reasoning process and offers an argumentation theory-based framework as a means to support that reasoning process in software applications. Researchers have extensively studied specific areas of criminal intelligence analysts' sensemaking and reasoning processes over the decades. However, that research is fragmented across studies, which often give only high-level descriptions of how criminal intelligence analysts formulate their rationale (argument). This thesis addresses the gap by offering low-level descriptions of how the reasoning-formulation process takes place, presented as a single framework with supporting templates to inform the software implementation process. Knowledge from nine experienced criminal intelligence analysts from West Midlands Police and Belgium's Local and Federal Police forces was elicited through semi-structured interviews for study 1; the Critical Decision Method (CDM), as part of the Cognitive Task Analysis (CTA) approach, was used for studies 2 and 3. The data analysis for study 1 used the Qualitative Conventional Content Analysis approach; study 2 used a mixed-method approach consisting of Qualitative Directed Content Analysis and the Emerging Theme Approach; study 3 used the Qualitative Directed Content Analysis approach. The results from the three studies, along with concepts from the existing literature, informed the construction of the argumentation theory-based framework. The evaluation study for the framework's components used Paper Prototype Testing as a participatory design method over an electronic medium. The low-fidelity prototype was constructed by turning the framework's components into software widgets resembling those on a software application's toolbar.
Eight experienced criminal intelligence analysts from West Midlands Police and Belgium's Local and Federal Police forces took part in the evaluation study. Participants had to construct their rationale using the available components as part of a simulated robbery crime scenario, which used real anonymised crime data from the West Midlands Police force. The evaluation study used a Likert-scale questionnaire to capture the participants' views on how the framework's components aided them in understanding what was going on in the analysis, in developing lines of enquiry, and in tracking changes in their level of confidence pertaining to their rationale. A non-parametric, one-sample z-test was used for reporting the statistical results, with significance at 5% (α = 0.05) against a median of 3, where 3 represents neutral. The participants reported a positive experience with the framework's components, and the results show that the components aided them in formulating their rationale and in understanding how confident they were during different phases of constructing it.
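One plausible reading of the reported test is a sign-test-style z statistic against the neutral midpoint of 3 (a normal approximation to the binomial); the Likert ratings below are invented for illustration, not the study's data:

```python
# Hedged sketch: non-parametric one-sample z-test (sign test) vs. median 3.
import math

def sign_test_z(ratings, mu=3):
    """Sign-test z statistic: counts ratings above vs. below the midpoint."""
    above = sum(1 for r in ratings if r > mu)
    below = sum(1 for r in ratings if r < mu)
    n = above + below          # ties at the midpoint are dropped
    if n == 0:
        return 0.0
    # Normal approximation to Binomial(n, 0.5): mean n/2, variance n/4
    return (above - n / 2) / math.sqrt(n / 4)

ratings = [4, 5, 4, 3, 5, 4, 2, 5]   # hypothetical Likert responses
z = sign_test_z(ratings)
# Two-sided p-value from the standard normal CDF
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(round(z, 3), p < 0.05)
```

Significance at α = 0.05 (two-sided) corresponds to |z| > 1.96 under this approximation.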

    A deep learning framework for contingent liabilities risk management: predicting Brazilian labor court decisions

    Estimating the likely outcome of a litigation process is crucial for many organizations. A specific application is “Contingent Liabilities,” which refers to liabilities that may or may not occur depending on the result of a pending litigation process (lawsuit). The traditional methodology for estimating this likelihood relies on a lawyer's opinion, a qualitative appreciation grounded in experience. This dissertation presents a mathematical modeling framework based on a Deep Learning architecture that estimates the probability outcome of a litigation process (accepted or not accepted), with particular application to contingent liabilities. The framework offers a degree of confidence by describing how likely an event is to occur in terms of probability, and it provides results in seconds. Besides the primary outcome, it offers a sample of the cases most similar to the estimated lawsuit, which serve as support for devising litigation strategies. We tested our framework on two litigation process databases: (1) the European Court of Human Rights (ECHR) and (2) the Brazilian 4th Regional Labor Court (4TRT). Our framework achieved, to our knowledge, the best published performance (precision = 0.906) on the ECHR database, a widely used collection of litigation processes, and it is the first to be applied to a Brazilian labor court. Results show that the framework is a suitable alternative to the traditional method, in which lawyers estimate the verdict of a pending litigation. Finally, we validated our results with experts, who confirmed the framework's promising possibilities.
We encourage academics to continue developing research on mathematical modeling in the legal area, as it is an emerging topic with a promising future, and practitioners to use tools such as the one developed in our work, as they provide substantial advantages in terms of accuracy and speed over conventional methods.
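The similar-case retrieval the framework reports can be illustrated, in heavily simplified form, with bag-of-words cosine similarity; the case texts and identifiers below are invented, and a real system would use learned embeddings rather than raw token counts:

```python
# Hedged sketch: retrieve the cases most similar to a query lawsuit text.
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two token-count vectors (Counters)."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_similar(query, corpus, k=2):
    """Return ids of the k corpus cases closest to the query text."""
    qv = Counter(query.lower().split())
    scored = [(cosine(qv, Counter(doc.lower().split())), doc_id)
              for doc_id, doc in corpus.items()]
    return [doc_id for _, doc_id in sorted(scored, reverse=True)[:k]]

corpus = {  # hypothetical, heavily simplified case texts
    "case-01": "unpaid overtime hours labor claim accepted",
    "case-02": "property dispute boundary claim not accepted",
    "case-03": "overtime pay labor dispute claim accepted",
}
print(most_similar("overtime labor claim", corpus))
```

The point of the sketch is the design choice: returning neighbours alongside the predicted probability gives lawyers concrete precedent cases to inspect, rather than an opaque score.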

    Graph machine learning approaches to classifying the building and ground relationship: Architectural 3D topological model to retrieve similar architectural precedents

    Architects struggle to choose the best form of how the building meets the ground and may benefit from suggestions based on precedents. Machine learning (ML), as part of artificial intelligence (AI), can play a role here by determining the most appropriate relationship from a set of examples provided by trained architects. A key feature of the system is its classification of three-dimensional (3D) prototypes of architectural precedent models using a topological graph rather than two-dimensional (2D) images. The classified model then predicts and retrieves similar architectural precedents to enable the designer to develop or reconsider their design. The research uses a mixed-methods design: a qualitative interview validates the taxonomy collected in the literature review, and an image-sorting survey studies the similarity of human classifications of the building and ground relationship (BGR). The researcher leverages two primary technologies in developing the BGR tool. The first is a software library that enhances the representation of 3D models using non-manifold topology (Topologic). The second is an end-to-end deep graph convolutional neural network (DGCNN). This study employs a two-stage experimental workflow. First, a sizable synthetic database of building-and-ground topologies is created by generative simulation for 3D prototypes of architectural precedents; these topologies are then converted into semantically rich topological dual graphs. Second, the prototype architectural graphs are imported into the DGCNN model for graph classification. The experimental results show that this approach can recognise architectural forms using more semantically relevant and structured data, though the use of a unique dataset prevents direct comparison.
Our experiments have shown that the proposed workflow achieves highly accurate results that align with DGCNN's performance on benchmark graphs. Additionally, the study demonstrates the effectiveness of other machine learning approaches, such as the Deep Graph Library (DGL) and Unsupervised Graph Level Representation Learning (UGLRL). This research demonstrates the potential of AI to help designers identify the topology of architectural solutions and place them within the most relevant architectural canons.
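As a rough illustration of the dual-graph idea (nodes for 3D cells, edges where cells share a face), the kind of structural feature a graph classifier consumes can be as simple as a degree histogram; the two toy graphs and their "buried"/"elevated" labels below are invented, not BGR data, and a DGCNN learns far richer features than this:

```python
# Hedged sketch: degree-histogram feature from a topological dual graph.
from collections import Counter

def degree_histogram(edges, n_nodes, max_degree=4):
    """Count nodes per degree (degrees >= max_degree share the last bin)."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    hist = [0] * (max_degree + 1)
    for node in range(n_nodes):
        hist[min(degree[node], max_degree)] += 1
    return hist

# Two hypothetical BGR prototypes as dual graphs (cell indices 0..3)
buried   = degree_histogram([(0, 1), (1, 2), (2, 0), (2, 3)], n_nodes=4)
elevated = degree_histogram([(0, 1), (1, 2), (2, 3)], n_nodes=4)
print(buried, elevated)  # [0, 1, 2, 1, 0] [0, 2, 2, 0, 0]
```

The two prototypes yield different feature vectors, which is all a downstream classifier needs to separate them; the thesis's DGCNN replaces this hand-crafted feature with learned graph convolutions.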

    BNAIC 2008: Proceedings of BNAIC 2008, the twentieth Belgian-Dutch Artificial Intelligence Conference


    Evidential Reasoning & Analytical Techniques In Criminal Pre-Trial Fact Investigation

    This thesis is the work of the author and is concerned with the development of a neo-Wigmorean approach to evidential reasoning in police investigation. The thesis evolved out of dissatisfaction with cardinal aspects of traditional approaches to police investigation, practice, and training. Five main weaknesses were identified: firstly, a lack of a theoretical foundation for police training and practice in the investigation of crime and evidence management; secondly, evidence was treated on the basis of its source rather than its inherent capacity for generating questions; thirdly, the role of inductive elimination was underused and misunderstood; fourthly, concentration fell on single, isolated cases rather than on the investigation of multiple cases; and fifthly, the credentials of evidence were often assumed rather than considered, assessed, and reasoned within the context of argumentation. Inspiration was drawn from three sources: firstly, John Henry Wigmore provided new insights into the nature of evidential reasoning and formal methods for the construction of arguments; secondly, developments in biochemistry provided new insights into natural methods of storing and using information; thirdly, the science of complexity provided new insights into the complex nature of collections of data that could be developed into complex systems of information and evidence. This thesis is an application of a general methodology supported by new diagnostic and analytical techniques. The methodology was embodied in a software system called the Forensic Led Intelligence System (FLINTS). My standpoint is that of a forensic investigator with an interest in how evidential reasoning can improve the operation we call investigation. New applications of evidential reasoning are in progress and are discussed, including a new software application designed by the author: MAVERICK.
There are three main themes: firstly, how a broadened conception of evidential reasoning, supported by new diagnostic and analytical techniques, can improve the investigation and discovery process; secondly, how a greater understanding of the roles and effects of different styles of reasoning can assist the user; and thirdly, a range of concepts and tools for the combination, comparison, construction, and presentation of evidence in imaginative ways. Taken together, these are intended to provide examples of a new approach to the science of evidential reasoning. Originality lies in four key areas: 1. extending and developing Wigmorean techniques for police investigation and evidence management; 2. developing existing approaches to single-case analysis and introducing an intellectual model for multi-case analysis; 3. introducing a new model for police training in investigative evidential reasoning; and 4. introducing a new software system, FLINTS, to manage evidence in multi-case approaches using forensic scientific evidence.

    Rethinking Injury Events. Explorations in Spatial Aspects and Situational Prevention Strategies

    This dissertation employs a holistic approach to injuries in everyday settings. It examines spatial aspects of adolescents' injury events in residential situations, school situations, and suicidal situations, seeking to throw light on any reciprocal influence between situated activity and the physical environment in such events. Thus far, research has generally neglected to pay sufficient attention to everyday injuries and the more mundane sites where they occur. Previous studies on the topic have, moreover, been predominantly mono-disciplinary. Given the complexity of injury research broadly and injury prevention specifically, this dissertation makes a conscious effort to go beyond such limitations. Applying an interdisciplinary and transdisciplinary focus, it also aims to contribute to research on social sustainability more generally. The more theoretical aspects of the research are geared to providing a better understanding of injury events as something explicable and situated, that is to say, as neither random nor unpreventable. Towards this end, core concepts of architectural research are brought to bear on the interrelationship between humans, objects, and contexts (cf. Love, 2002), defined for the purposes of this dissertation as socio-spatial practice. From this perspective, injury events are viewed as resulting from the convergence of the factors addressed by these key concepts, as something caused by elements traceable to routine or situational activities (cf. Cohen & Felson, 1979; Wikström, 2011). Analysing injury events within this conceptual framework, the causal mechanisms and emergent processes behind them can be not only identified but also prevented through situational prevention strategies. What this implies is the translation of, mainly, the Crime Prevention through Environmental Design (CPTED) approach into Injury Prevention through Environmental Design (IPTED).
The research is conducted using a mixed-method approach producing qualitative findings and quantitative data, so as to bridge the gap between the “how” and the “why” (cf. Clarke et al., 2015:13f.; Katz, 2001). The results put forth in this dissertation suggest that situational prevention specifically aimed at spatial aspects is a promising approach to injury prevention, with the capability to reduce the occurrence of injury events. In private residential settings, however, the strategy proved more limited in its effectiveness, working better when applied in semi-private settings such as building entrances and lobbies. A still more effective context was found to be institutional settings, in which the spatial aspect appeared to be of great importance in relation to injury situations and the degree of visibility. In schools, for instance, the results pointed to a close relationship between the injury situation, the spatial organization, and the social organization; in such settings, certain injuries tended to cluster spatially due to the organization of day-to-day activities. Finally, the results suggest that suicides and suicide attempts in semi-public and public spaces could also be significantly reduced through carefully thought-out environmental interventions. At the same time, there remains a need for further analysis of the events and places involved in suicides and suicide attempts, to fully understand who commits them in these settings and why.