
    Expressiveness and Instrumentality of Crime Scene Behavior in Spanish Homicides

    One of the current trends in criminal profiling research is the development of theoretical and methodological typologies that offer information of operational use in police investigations. The objective of this work was to test the validity of the instrumental/expressive model as a basis for homicide typologies built on relationships among modus operandi, victim characteristics, and perpetrator characteristics. The sample consisted of 448 homicide cases registered in the database of the Homicide Revision Project of the Office of Coordination and Studies of the Spanish Secretary of State and Security. Through multidimensional scaling and cluster analysis, three expressive homicide subtypes were identified (expressive-impulsive, expressive-distancing, and expressive-family), as well as two instrumental homicide subtypes (instrumental-opportunist and instrumental-gratification). The expressive typologies accounted for almost 95% of the studied cases, and most homicides in Spain were found to take place between individuals who know one another (friends, family members, intimate couples/ex-couples). These findings suggest that the instrumental/expressive model may be a useful framework for understanding the psychological processes underlying homicide, based on the relationships between crime and aggressor characteristics, and may be very helpful in the prioritization of suspects.
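    The grouping step of such a typology study can be illustrated in miniature. The sketch below is a hedged toy example: the case features are invented, and a plain single-linkage clustering over Jaccard distances stands in for the paper's multidimensional scaling and cluster analysis.

```python
# Toy sketch: grouping homicide cases by modus-operandi similarity.
# Case ids, behaviour labels, and the 0.7 cutoff are invented for illustration.

def jaccard_distance(a, b):
    """1 - |A∩B| / |A∪B| between two sets of crime-scene behaviours."""
    union = a | b
    return 1.0 - len(a & b) / len(union) if union else 0.0

def single_linkage_clusters(cases, threshold):
    """Union-find: link any two cases whose distance is below the threshold
    (equivalent to cutting a single-linkage dendrogram at that height)."""
    ids = list(cases)
    parent = list(range(len(ids)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            if jaccard_distance(cases[ids[i]], cases[ids[j]]) < threshold:
                parent[find(i)] = find(j)
    groups = {}
    for k, cid in enumerate(ids):
        groups.setdefault(find(k), []).append(cid)
    return sorted(groups.values())

cases = {
    "case1": {"weapon_from_scene", "victim_known", "no_planning"},
    "case2": {"weapon_from_scene", "victim_known", "overkill"},
    "case3": {"weapon_brought", "victim_stranger", "theft"},
    "case4": {"weapon_brought", "victim_stranger", "theft", "planning"},
}
clusters = single_linkage_clusters(cases, threshold=0.7)
print(clusters)  # two behaviourally coherent groups emerge
```

    On this toy data the first two cases (expressive-looking behaviours) and the last two (instrumental-looking behaviours) fall into separate clusters.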

    Computerized crime linkage systems: A critical review and research agenda

    Computerized crime linkage systems are meant to assist the police in determining whether crimes have been committed by the same offender. In this article, the authors assess these systems critically and identify four assumptions that affect their effectiveness: (a) data in the systems can be coded reliably, (b) data in the systems are accurate, (c) violent serial offenders exhibit consistent but distinctive patterns of behavior, and (d) analysts have the ability to use the data in the systems to link crimes accurately. The authors argue that there is no compelling empirical support for any of the four assumptions, and they outline a research agenda for testing each one. Until evidence supporting these assumptions becomes available, the value of linkage systems will remain open to debate.

    On Using Gait in Forensic Biometrics

    Given the continuing advances in gait biometrics, it appears prudent to investigate the translation of these techniques for forensic use. We address the question of the confidence that might be placed in a match between any two such measurements. We use the locations of the ankle, knee, and hip to derive a measure of the match between walking subjects in image sequences. The Instantaneous Posture Match algorithm, which uses Haar templates, kinematics, and anthropomorphic knowledge, is used to determine these locations. This is demonstrated using real CCTV footage recorded at Gatwick Airport, laboratory images from the multi-view CASIA-B dataset, and an example of real scene-of-crime video. To assess measurement confidence we study the mean intra- and inter-match scores as a function of database size. These measures converge to constant and well-separated values, indicating that the match measure derived from comparisons of the same individual is considerably smaller than the average match measure across a population.
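    The intra-/inter-match evaluation can be sketched as follows. This is a hedged stand-in: the joint-location feature vectors are invented, and a plain Euclidean distance substitutes for the paper's Instantaneous Posture Match score.

```python
import math

def match_distance(u, v):
    """Euclidean distance between two joint-location feature vectors
    (an illustrative stand-in for the Instantaneous Posture Match score)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def intra_inter_means(gallery):
    """Mean match distance within subjects (intra) and across subjects (inter)."""
    intra, inter = [], []
    subjects = list(gallery)
    for si in range(len(subjects)):
        seqs_i = gallery[subjects[si]]
        for a in range(len(seqs_i)):
            for b in range(a + 1, len(seqs_i)):
                intra.append(match_distance(seqs_i[a], seqs_i[b]))
        for sj in range(si + 1, len(subjects)):
            for u in seqs_i:
                for v in gallery[subjects[sj]]:
                    inter.append(match_distance(u, v))
    return sum(intra) / len(intra), sum(inter) / len(inter)

# Invented ankle/knee/hip feature vectors: two walking sequences per subject.
gallery = {
    "subjectA": [(1.0, 2.0, 3.0), (1.1, 2.1, 3.0)],
    "subjectB": [(4.0, 1.0, 2.0), (4.2, 1.1, 2.1)],
}
mean_intra, mean_inter = intra_inter_means(gallery)
print(mean_intra < mean_inter)  # identification is feasible when intra << inter
```

    A wide gap between the converged intra- and inter-match means is what makes the measure forensically useful.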

    NetLSD: Hearing the Shape of a Graph

    Comparison among graphs is ubiquitous in graph analytics. However, it is a hard task in terms of the expressiveness of the employed similarity measure and the efficiency of its computation. Ideally, graph comparison should be invariant to the order of nodes and the sizes of compared graphs, adaptive to the scale of graph patterns, and scalable. Unfortunately, these properties have not been addressed together. Graph comparisons still rely on direct approaches, graph kernels, or representation-based methods, which are all inefficient and impractical for large graph collections. In this paper, we propose the Network Laplacian Spectral Descriptor (NetLSD): the first, to our knowledge, permutation- and size-invariant, scale-adaptive, and efficiently computable graph representation method that allows for straightforward comparisons of large graphs. NetLSD extracts a compact signature that inherits the formal properties of the Laplacian spectrum, specifically its heat or wave kernel; thus, it hears the shape of a graph. Our evaluation on a variety of real-world graphs demonstrates that it outperforms previous works in both expressiveness and efficiency. Comment: KDD '18: The 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, August 19-23, 2018, London, United Kingdom.
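    The heat-trace signature at the core of NetLSD can be reproduced on toy graphs. The sketch below computes h(t) = tr(exp(-tL)) from the Laplacian eigenvalues; the two small graphs and the time grid are illustrative choices, not the paper's benchmark setup.

```python
import numpy as np

def laplacian(adj):
    """Unnormalized graph Laplacian L = D - A."""
    return np.diag(adj.sum(axis=1)) - adj

def heat_trace_signature(adj, times):
    """NetLSD-style heat trace h(t) = tr(exp(-t L)) = sum_j exp(-t * lambda_j),
    computed from the Laplacian spectrum; permutation-invariant by construction."""
    eigvals = np.linalg.eigvalsh(laplacian(adj))
    return np.array([np.exp(-t * eigvals).sum() for t in times])

# Triangle graph K3 and a 3-node path P3.
k3 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
p3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)

times = np.logspace(-2, 2, 50)
sig_k3 = heat_trace_signature(k3, times)
sig_p3 = heat_trace_signature(p3, times)

# Relabelling the path's nodes leaves the signature unchanged.
p3_perm = p3[[2, 0, 1]][:, [2, 0, 1]]
print(np.allclose(sig_p3, heat_trace_signature(p3_perm, times)))  # True
print(np.linalg.norm(sig_k3 - sig_p3) > 0)  # different shapes, different signatures
```

    Since h(0) equals the number of nodes and the small-t/large-t behaviour probes local versus global structure, sampling t on a log grid yields the scale-adaptive signature the abstract describes.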

    A Statistical Approach to Crime Linkage

    The object of this paper is to develop a statistical approach to criminal linkage analysis that discovers and groups crime events that share a common offender and prioritizes suspects for further investigation. Bayes factors are used to describe the strength of evidence that two crimes are linked. Using concepts from agglomerative hierarchical clustering, the Bayes factors for crime pairs are combined to provide similarity measures for comparing two crime series. This facilitates crime series clustering, crime series identification, and suspect prioritization. The ability of our models to make correct linkages and predictions is demonstrated under a variety of real-world scenarios with a large number of solved and unsolved breaking and entering crimes. For example, a naïve Bayes model for pairwise case linkage can identify 82% of actual linkages with a 5% false positive rate. For crime series identification, 77%-89% of the additional crimes in a crime series can be identified from a ranked list of 50 incidents.
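    The naive Bayes pairwise-linkage idea can be sketched in a few lines. Everything below is a hedged toy: the feature names and the per-feature likelihoods are invented stand-ins for values that would be estimated from solved crime pairs, not figures from the paper.

```python
import math

# Illustrative per-feature likelihoods: P(features agree | crimes linked)
# and P(features agree | crimes unlinked). Invented for this sketch.
FEATURE_MODEL = {
    "entry_method":  (0.70, 0.20),
    "neighbourhood": (0.60, 0.10),
    "property_type": (0.55, 0.30),
}

def log_bayes_factor(crime_a, crime_b):
    """Naive-Bayes log Bayes factor for the hypothesis that two crimes share
    an offender: the sum of per-feature log-likelihood ratios."""
    lbf = 0.0
    for feat, (p_linked, p_unlinked) in FEATURE_MODEL.items():
        if crime_a[feat] == crime_b[feat]:
            lbf += math.log(p_linked / p_unlinked)
        else:
            lbf += math.log((1 - p_linked) / (1 - p_unlinked))
    return lbf

c1 = {"entry_method": "rear_window", "neighbourhood": "north", "property_type": "flat"}
c2 = {"entry_method": "rear_window", "neighbourhood": "north", "property_type": "flat"}
c3 = {"entry_method": "front_door",  "neighbourhood": "south", "property_type": "shop"}

print(log_bayes_factor(c1, c2) > 0)  # evidence favours a common offender
print(log_bayes_factor(c1, c3) < 0)  # evidence favours distinct offenders
```

    Thresholding (or ranking by) the log Bayes factor is what yields the linkage list whose hit and false-positive rates the abstract reports; combining pairwise scores across a series gives the clustering step.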

    Criminal networks analysis in missing data scenarios through graph distances

    Data collected in criminal investigations may suffer from issues such as: (i) incompleteness, due to the covert nature of criminal organizations; (ii) incorrectness, caused by either unintentional data collection errors or intentional deception by criminals; (iii) inconsistency, when the same information is entered into law enforcement databases multiple times, or in different formats. In this paper we analyze nine real criminal networks of different nature (i.e., Mafia networks, criminal street gangs, and terrorist organizations) in order to quantify the impact of incomplete data, and to determine which network type is most affected by it. The networks are first pruned using two specific methods: (i) random edge removal, simulating the scenario in which law enforcement agencies fail to intercept some calls, or to spot sporadic meetings among suspects; (ii) node removal, modeling the situation in which some suspects cannot be intercepted or investigated. Finally, we compute spectral distances (i.e., Adjacency, Laplacian, and normalized Laplacian Spectral Distances) and matrix distances (i.e., Root Euclidean Distance) between the complete and pruned networks, which we compare using statistical analysis. Our investigation identifies two main findings: first, the overall understanding of the criminal networks remains high even with incomplete data on criminal interactions (i.e., when 10% of edges are removed); second, removing even a small fraction of suspects not investigated (i.e., 2% of nodes removed) may lead to significant misinterpretation of the overall network.
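    The pruning-and-spectral-distance experiment can be sketched on a synthetic graph. The random graph, removal fractions, and padding convention below are illustrative assumptions, not the paper's nine real networks or its exact distance definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplacian(adj):
    """Unnormalized graph Laplacian L = D - A."""
    return np.diag(adj.sum(axis=1)) - adj

def spectral_distance(adj_a, adj_b):
    """Euclidean distance between sorted Laplacian spectra, zero-padded to
    equal length so graphs of different sizes remain comparable."""
    ea = np.sort(np.linalg.eigvalsh(laplacian(adj_a)))
    eb = np.sort(np.linalg.eigvalsh(laplacian(adj_b)))
    n = max(len(ea), len(eb))
    ea = np.pad(ea, (n - len(ea), 0))
    eb = np.pad(eb, (n - len(eb), 0))
    return float(np.linalg.norm(ea - eb))

def remove_random_edges(adj, fraction):
    """Simulate missed calls/meetings: delete a fraction of edges at random."""
    adj = adj.copy()
    edges = np.argwhere(np.triu(adj) > 0)
    k = int(round(fraction * len(edges)))
    for i, j in edges[rng.choice(len(edges), size=k, replace=False)]:
        adj[i, j] = adj[j, i] = 0
    return adj

def remove_random_nodes(adj, fraction):
    """Simulate uninvestigated suspects: delete nodes with their edges."""
    n = adj.shape[0]
    k = int(round(fraction * n))
    keep = np.setdiff1d(np.arange(n), rng.choice(n, size=k, replace=False))
    return adj[np.ix_(keep, keep)]

# Toy "criminal network": a random symmetric graph on 20 nodes.
n = 20
upper = np.triu(rng.random((n, n)) < 0.3, k=1).astype(float)
adj = upper + upper.T

pruned_edges = remove_random_edges(adj, 0.10)   # 10% of edges missed
pruned_nodes = remove_random_nodes(adj, 0.10)   # 10% of suspects missed
print(spectral_distance(adj, pruned_edges))
print(spectral_distance(adj, pruned_nodes))
```

    Comparing the two distances across many random repetitions is the kind of statistical analysis the abstract describes for deciding which form of missing data distorts the network picture more.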

    A Survey of Social Network Forensics

    Social networks in any form, specifically online social networks (OSNs), are becoming a part of our everyday life in this new millennium, especially with the advanced and simple communication technologies available through easily accessible devices such as smartphones and tablets. The data generated through the use of these technologies need to be analyzed for forensic purposes when criminal and terrorist activities are involved. In order to deal with the forensic implications of social networks, current research on both digital forensics and social networks needs to be incorporated and understood. This will help digital forensics investigators to predict, detect, and even prevent criminal activities in their different forms. It will also help researchers to develop new models and techniques in the future. This paper provides a literature review of social network forensics methods, models, and techniques, in order to give researchers an overview for their future work, and law enforcement investigators a reference for investigations of crimes committed in cyberspace. It also provides awareness and defense methods for OSN users to protect them against social attacks.

    The civilizing process in London’s Old Bailey

    The jury trial is a critical point where the state and its citizens come together to define the limits of acceptable behavior. Here we present a large-scale quantitative analysis of trial transcripts from the Old Bailey that reveals a major transition in the nature of this defining moment. By coarse-graining the spoken word testimony into synonym sets and dividing the trials based on indictment, we demonstrate the emergence of semantically distinct violent and nonviolent trial genres. We show that although in the late 18th century the semantic content of trials for violent offenses is functionally indistinguishable from that for nonviolent ones, a long-term, secular trend drives the system toward increasingly clear distinctions between violent and nonviolent acts. We separate this process into the shifting patterns that drive it, determine the relative effects of bureaucratic change and broader cultural shifts, and identify the synonym sets most responsible for the eventual genre distinguishability. This work provides a new window onto the cultural and institutional changes that accompany the monopolization of violence by the state, described in qualitative historical analysis as the civilizing process.
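    The coarse-graining step can be illustrated with a toy mapping. The synonym sets, testimony snippets, and the cosine comparison below are all invented for this sketch; the paper works with large historical synsets and far richer statistics.

```python
from collections import Counter
import math

# Invented miniature synonym sets mapping testimony words to labels.
SYNSETS = {
    "struck": "VIOLENCE", "stabbed": "VIOLENCE", "beat": "VIOLENCE",
    "stole": "THEFT", "pocketed": "THEFT", "took": "THEFT",
    "watch": "GOODS", "handkerchief": "GOODS",
}

def synset_distribution(testimony):
    """Coarse-grain testimony into synonym-set labels, drop unmapped words,
    and return the normalized label frequencies."""
    labels = [SYNSETS[w] for w in testimony.lower().split() if w in SYNSETS]
    total = len(labels)
    return {k: v / total for k, v in Counter(labels).items()}

def cosine_similarity(p, q):
    """Cosine similarity between two sparse frequency distributions."""
    dot = sum(p.get(k, 0) * q.get(k, 0) for k in set(p) | set(q))
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q)

violent = "he struck me and beat me then took my watch"
nonviolent = "the prisoner pocketed a handkerchief and stole a watch"
mixed = "he stabbed him and stole the watch"

sim_within = cosine_similarity(synset_distribution(violent), synset_distribution(mixed))
sim_across = cosine_similarity(synset_distribution(violent), synset_distribution(nonviolent))
print(sim_within > sim_across)  # testimonies of violence cluster together
```

    Growing separation between within-genre and across-genre similarity over time is, in this simplified picture, the "genre distinguishability" the abstract tracks.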

    Mining complex trees for hidden fruit: a graph-based computational solution to detect latent criminal networks: a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Technology at Massey University, Albany, New Zealand.

    The detection of crime is a complex and difficult endeavour. Public and private organisations, focusing on law enforcement, intelligence, and compliance, commonly apply the rational isolated actor approach premised on observability and materiality. This is manifested largely as entity-level risk management sourcing 'leads' from reactive covert human intelligence sources and/or proactive sources by applying simple rules-based models. Focusing on discrete observable and material actors simply ignores that criminal activity exists within a complex system deriving its fundamental structural fabric from the complex interactions between actors, with the least observable actors likely to be both criminally proficient and influential. The graph-based computational solution developed to detect latent criminal networks is a response to the inadequacy of the rational isolated actor approach, which ignores the connectedness and complexity of criminality. The core computational solution, written in the R language, consists of novel entity resolution, link discovery, and knowledge discovery technology. Entity resolution enables the fusion of multiple datasets with high accuracy (mean F-measure of 0.986 versus competitors' 0.872), generating a graph-based expressive view of the problem. Link discovery comprises link prediction and link inference, enabling the high-performance detection (accuracy of ~0.8 versus relevant published models' ~0.45) of unobserved relationships such as identity fraud. Knowledge discovery takes the fused graph and applies the "GraphExtract" algorithm to create a set of subgraphs representing latent functional criminal groups, and a mesoscopic graph representing how this set of criminal groups is interconnected. Latent knowledge is generated from a range of metrics including the "Super-broker" metric and attitude prediction.
    The computational solution has been evaluated on a range of datasets that mimic an applied setting, demonstrating a scalable (tested on ~18 million node graphs) and performant (~33 hours runtime on a non-distributed platform) solution that successfully detects relevant latent functional criminal groups in around 90% of cases sampled, and enables contextual understanding of the broader criminal system through the mesoscopic graph and associated metadata. The augmented data assets generated provide a multi-perspective systems view of criminal activity that enables advanced, informed decision making across the microscopic, mesoscopic, and macroscopic spectrum.
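    The pairwise F-measure used above to score entity resolution can be sketched directly. The record ids and linkage decisions below are invented; only the metric itself is standard.

```python
def pairwise_f_measure(predicted_pairs, true_pairs):
    """Pairwise precision/recall/F1 for entity resolution: each pair is two
    record ids judged to refer to the same real-world entity."""
    predicted = {frozenset(p) for p in predicted_pairs}
    truth = {frozenset(p) for p in true_pairs}
    tp = len(predicted & truth)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(truth) if truth else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Invented ground truth: records r1/r2/r3 are the same person, r4 is distinct.
truth = [("r1", "r2"), ("r1", "r3"), ("r2", "r3")]
predicted = [("r1", "r2"), ("r2", "r3"), ("r3", "r4")]  # one miss, one false link
f1 = pairwise_f_measure(predicted, truth)
print(round(f1, 3))  # 0.667
```

    Scores such as the thesis's 0.986 versus 0.872 would come from exactly this kind of comparison against a labelled ground-truth set, just at a much larger scale.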
