
    Shoeprint analysis: A GIS application in forensic evidence

    The overall intent of this study is to illustrate how GIS and crime mapping methods can be applied to forensic evidence to better understand the spatial patterns that exist in these data. This study bridges common crime mapping principles, such as hot spot mapping, exploratory data analysis, and spatial statistics, with spatial forensic evidence investigation. In particular, forensic shoeprint evidence is examined and spatial relationships are analyzed using both exploratory and confirmatory statistical analysis. It is found that crime mapping principles can be indirectly related to shoeprint evidence mapping. Exploratory spatial data analysis is extremely helpful in breaking large sets of shoeprint evidence into smaller, manageable sets for spatial forensic analysis. This work is one of few studies to incorporate shoeprint evidence in a crime mapping context. With that in mind, the author hopes that this study has shed some light on the subject and will help advance these methods in the field.
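
    The exploratory spatial analysis described above (hot spot mapping over evidence locations) can be illustrated with a simple kernel density estimate. The Python sketch below is only a minimal illustration of the idea, not the study's GIS workflow; the evidence coordinates, grid extent, and default bandwidth are hypothetical.

# Minimal hot-spot sketch: a kernel density estimate over shoeprint-evidence
# coordinates. Illustrative only -- not the GIS workflow used in the study;
# the (x, y) locations below are made up.
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical locations where shoeprint evidence was recovered.
xy = np.array([
    [2.1, 3.4], [2.3, 3.1], [2.0, 3.6],
    [7.8, 8.2], [8.1, 8.0], [7.9, 8.4],
]).T  # gaussian_kde expects shape (n_dims, n_points)

kde = gaussian_kde(xy)

# Evaluate the density on a grid; high values flag candidate hot spots.
gx, gy = np.mgrid[0:10:100j, 0:10:100j]
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

peak = np.unravel_index(density.argmax(), density.shape)
print("densest grid cell (indices):", peak)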

    Pattern matching of footwear Impressions

    One of the most frequently secured types of evidence at crime scenes is the footwear impression. Identifying the brand and model of the footwear can be crucial to narrowing the search for suspects. Forensic experts do this by comparing the evidence found at the crime scene against a very large set of reference impressions. To support the forensic experts, an automatic retrieval of the most likely matches is desired. In this thesis, different techniques are evaluated for recognizing and matching footwear impressions, using reference and real crime scene shoeprint images. Because of the conditions in which the shoeprints are found (partial occlusions, variation in shape), a translation-, rotation-, and scale-invariant system is needed. A VLAD (Vector of Locally Aggregated Descriptors) encoder is used to cluster descriptors obtained with different approaches, such as SIFT (Scale-Invariant Feature Transform), Dense SIFT, and a Triplet CNN (Convolutional Neural Network). The last two approaches provide the best performance when their parameters are correctly tuned, with the Cumulative Matching Characteristic (CMC) curve used for evaluation.
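
    As a rough illustration of the SIFT-plus-VLAD pipeline named above (not the thesis code), the Python sketch below extracts SIFT descriptors with OpenCV, fits a small k-means codebook, and aggregates residuals into a normalized VLAD vector. The image file names and the codebook size of 16 are placeholders.

# Sketch of SIFT descriptor extraction followed by VLAD encoding.
# Assumes opencv-python (>= 4.4, where SIFT is in the main module) and
# scikit-learn; the image paths are placeholders.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def sift_descriptors(path):
    """Return SIFT descriptors (N x 128) for a grayscale image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def vlad_encode(desc, codebook):
    """Aggregate descriptor residuals against each codebook centre."""
    k, d = codebook.cluster_centers_.shape
    vlad = np.zeros((k, d), np.float32)
    if len(desc):
        assign = codebook.predict(desc)
        for i in range(k):
            residuals = desc[assign == i] - codebook.cluster_centers_[i]
            vlad[i] = residuals.sum(axis=0)
    vlad = vlad.ravel()
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))   # power normalization
    norm = np.linalg.norm(vlad)
    return vlad / norm if norm > 0 else vlad       # L2 normalization

# Placeholder reference images used to train the codebook.
train_desc = np.vstack([sift_descriptors(p) for p in ["ref1.png", "ref2.png"]])
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(train_desc)

query = vlad_encode(sift_descriptors("scene_print.png"), codebook)
ref = vlad_encode(sift_descriptors("ref1.png"), codebook)
print("cosine similarity:", float(query @ ref))

    Ranking all reference prints by such a similarity score and recording the rank of the true match is what the Cumulative Matching Characteristic (CMC) curve summarizes.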

    Deep Learning Analysis and Age Prediction from Shoeprints

    Human walking and gait involve several complex body parts and are influenced by personality, mood, social and cultural traits, and aging. These factors are reflected in shoeprints, which in turn can be used to predict age, a problem that has not been systematically addressed by any computational approach. We collected 100,000 shoeprints from subjects ranging from 7 to 80 years old and used the data to develop an end-to-end deep learning model, ShoeNet, to analyze age-related patterns and predict age. The model integrates several convolutional neural network models using a skip mechanism to extract age-related features, especially in pressure and abrasion regions, from pair-wise shoeprints. The results show that 40.23% of the subjects had prediction errors within 5 years of age, and that prediction accuracy for gender classification reached 86.07%. Interestingly, the age-related features mostly reside in the asymmetric differences between left and right shoeprints. The analysis also reveals interesting age-related and gender-related patterns in the pressure distributions on shoeprints; in particular, the pressure forces spread from the middle of the toe toward outer regions with age, with gender-specific variations in the heel regions. Such statistics provide insight into new methods for forensic investigation, medical studies of gait-pattern disorders, biometrics, and sports studies.
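
    ShoeNet itself is not specified in enough detail here to reproduce; the PyTorch sketch below only illustrates the general idea of passing paired left/right shoeprints through a shared convolutional trunk with a skip connection and regressing age. All layer sizes, names, and the skip wiring are invented for the illustration.

# Illustrative model for pair-wise shoeprint age regression (not ShoeNet).
import torch
import torch.nn as nn

class PairwiseAgeNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared trunk applied to the left and right prints separately.
        self.block1 = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.skip = nn.Conv2d(16, 32, 1, stride=2)   # skip path around block2
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))

    def encode(self, x):
        h1 = self.block1(x)
        return self.block2(h1) + self.skip(h1)       # skip/residual combination

    def forward(self, left, right):
        # The abstract notes that age cues live partly in left/right asymmetry,
        # so both prints are encoded and concatenated before regression.
        feats = torch.cat([self.encode(left), self.encode(right)], dim=1)
        return self.head(feats)

model = PairwiseAgeNet()
left = torch.randn(2, 1, 128, 64)    # dummy batch of left shoeprints
right = torch.randn(2, 1, 128, 64)   # dummy batch of right shoeprints
print(model(left, right).shape)      # torch.Size([2, 1]) -> one age per pair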

    Walk This Way: Footwear Recognition Using Images & Neural Networks

    Footwear prints are among the most commonly recovered types of evidence in criminal investigations. They can be used to discover a criminal's identity and to connect related crimes. Current footwear recognition workflows are slow because of the methods used to capture the shoe print layout, such as plaster casting, gel lifting, and 3D-imaging techniques. Traditional techniques are prone to human error and consume valuable investigative time, which can be a problem for timely investigations. With 3D-imaging techniques, one issue is that footwear prints can be blurred or partially missing, which makes fully automated recognition and comparison inaccurate. Hence, this research investigates a footwear recognition model based on RGB camera images of the shoe print taken directly at the investigation site, in order to reduce the time and cost of the investigative process. First, the model extracts the layout information of the evidence shoe print using standard image processing techniques. The layout information is then passed to a hierarchical network of neural networks. Each layer of this network processes and recognizes footwear features to eliminate and narrow down the possible matches until the final result is returned to the investigator.
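
    The hierarchical matching stage described above can be sketched as a coarse classifier that narrows the candidate set, followed by per-category classifiers that pick the brand/model. The Python sketch below uses random placeholder feature vectors and labels purely to show the routing; it is not the model proposed in the work.

# Two-stage hierarchical classification sketch with placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))                      # stand-in layout features
coarse_y = rng.integers(0, 3, size=200)             # e.g. tread-pattern family
fine_y = coarse_y * 10 + rng.integers(0, 4, 200)    # e.g. brand/model in family

coarse_clf = LogisticRegression(max_iter=1000).fit(X, coarse_y)

# One fine-grained classifier per coarse category.
fine_clfs = {
    c: LogisticRegression(max_iter=1000).fit(X[coarse_y == c], fine_y[coarse_y == c])
    for c in np.unique(coarse_y)
}

def predict(x):
    """Route a single feature vector through the hierarchy."""
    c = int(coarse_clf.predict(x.reshape(1, -1))[0])
    return c, int(fine_clfs[c].predict(x.reshape(1, -1))[0])

print(predict(X[0]))   # (coarse category, fine label)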

    Quantifying the similarity of 2D images using edge pixels: an application to the forensic comparison of footwear impressions

    We propose a novel method to quantify the similarity between an impression (Q) from an unknown source and a test impression (K) from a known source. Exploiting geometrical congruence between the impressions, the degree of correspondence is quantified using ideas from graph theory and maximum cliques (MC). The algorithm uses the x and y coordinates of edge pixels in the images as its data. We focus on local areas in Q and the corresponding regions in K and extract features for comparison. Using pairs of images of known origin, we train a random forest to classify pairs into mates and non-mates. We collected impressions from 60 pairs of shoes of the same brand and model, worn over six months. Using a different set of very similar shoes, we evaluated the performance of the algorithm in terms of the accuracy with which it correctly classified images into source classes. Using classification error rates and ROC curves, we compare the proposed method to other algorithms in the literature and show that, for these data, our method shows good classification performance relative to other methods. The algorithm can be implemented with the R package shoeprintr.
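
    The geometric-congruence idea behind the method (quantifying correspondence with a maximum clique) can be sketched as follows: candidate pairings of edge pixels in Q and K become graph nodes, two pairings are connected when their inter-point distances agree, and the maximum clique is the largest mutually consistent correspondence. The Python sketch below illustrates that construction with toy coordinates; it is not the shoeprintr implementation, and the tolerance value is arbitrary.

# Correspondence-graph / maximum-clique sketch (toy illustration).
import itertools
import numpy as np
import networkx as nx

def max_consistent_correspondence(q_pts, k_pts, tol=1.0):
    """Largest set of point pairings whose pairwise distances agree within tol."""
    q_pts, k_pts = np.asarray(q_pts, float), np.asarray(k_pts, float)
    pairs = list(itertools.product(range(len(q_pts)), range(len(k_pts))))
    g = nx.Graph()
    g.add_nodes_from(pairs)
    for (i, a), (j, b) in itertools.combinations(pairs, 2):
        if i == j or a == b:
            continue                      # a point may appear in one pairing only
        dq = np.linalg.norm(q_pts[i] - q_pts[j])
        dk = np.linalg.norm(k_pts[a] - k_pts[b])
        if abs(dq - dk) <= tol:           # distances agree -> compatible pairings
            g.add_edge((i, a), (j, b))
    clique, _ = nx.max_weight_clique(g, weight=None)   # unweighted max clique
    return clique

# Toy edge-pixel coordinates: K is Q translated by (5, 5) plus one stray point.
q = [(0, 0), (3, 0), (0, 4)]
k = [(5, 5), (8, 5), (5, 9), (20, 20)]
print(max_consistent_correspondence(q, k))   # expect the three true pairings

    The size of the clique (and features derived from it) is the kind of similarity measure that can then feed the mate/non-mate random forest described in the abstract.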

    Ultrasonic scanner for footprint identification

    The scanner includes a transducer, an acoustical drive, an acoustical receiver, X and Y position indicators, and a cathode-ray tube. The transducer sends ultrasonic pulses into the shoe sole or shoeprint. Reflected signals are picked up by the acoustic receiver and fed to the cathode-ray tube, where the display intensity is directly proportional to the reflected signal magnitude.

    Calculating and understanding the value of any type of match evidence when there are potential testing errors

    It is well known that Bayes’ theorem (with likelihood ratios) can be used to calculate the impact of evidence, such as a ‘match’ of some feature of a person. Typically the feature of interest is the DNA profile, but the method applies in principle to any feature of a person or object, including not just DNA, fingerprints, or footprints, but also more basic features such as skin colour, height, hair colour or even name. Notwithstanding concerns about the extensiveness of databases of such features, a serious challenge to the use of Bayes in such legal contexts is that its standard formulaic representations are not readily understandable to non-statisticians. Attempts to get round this problem usually involve representations based around some variation of an event tree. While this approach works well in explaining the most trivial instance of Bayes’ theorem (involving a single hypothesis and a single piece of evidence), it does not scale up to realistic situations. In particular, even with a single piece of match evidence, if we wish to incorporate the possibility that there are potential errors (both false positives and false negatives) introduced at any stage of the investigative process, matters become very complex. As a result, we have observed expert witnesses (in different areas of speciality) routinely ignore the possibility of errors when presenting their evidence. To counter this, we produce what we believe is the first full probabilistic solution of the simple case of generic match evidence incorporating both classes of testing error. Unfortunately, the resultant event tree solution is too complex for intuitive comprehension and, crucially, it fails to represent the causal information that underpins the argument. In contrast, we also present a simple-to-construct graphical Bayesian Network (BN) solution that automatically performs the calculations and may also be intuitively simpler to understand. Although there have been multiple previous applications of BNs for analysing forensic evidence, including very detailed models for the DNA matching problem, these models have not widely penetrated the expert witness community, nor have they addressed the basic generic match problem incorporating the two types of testing error. Hence we believe our basic BN solution provides an important mechanism for convincing experts, and eventually the legal community, that it is possible to rigorously analyse and communicate the full impact of match evidence on a case in the presence of possible errors.
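
    Under the simplest error model, the effect the authors describe can be written down directly: with random match probability m, false positive rate fpr, and false negative rate fnr, the probability that a match is reported is 1 - fnr if the items share a source, and m(1 - fnr) + (1 - m)fpr otherwise, giving a likelihood ratio of (1 - fnr) / (m(1 - fnr) + (1 - m)fpr). The Python sketch below computes this; it is a common textbook simplification used for illustration, not the paper's full event tree or Bayesian network, and the numbers are invented.

# Error-aware likelihood ratio for generic match evidence (simplified sketch).
#   m   : random match probability (frequency of the feature in the population)
#   fpr : probability the test falsely reports a match when there is none
#   fnr : probability the test misses a true match
def error_aware_lr(m, fpr, fnr):
    p_report_if_source = 1.0 - fnr                        # true match, not missed
    p_report_if_not_source = m * (1.0 - fnr) + (1.0 - m) * fpr
    return p_report_if_source / p_report_if_not_source

def posterior_prob(prior_prob, lr):
    """Update a prior probability of 'same source' with the likelihood ratio."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    post_odds = prior_odds * lr
    return post_odds / (1.0 + post_odds)

# Invented example: a 1-in-10,000 feature, 1% false positive rate, 5% false negative rate.
lr = error_aware_lr(m=1e-4, fpr=0.01, fnr=0.05)
print(f"LR ignoring testing errors: {1 / 1e-4:.0f}")      # 10000
print(f"LR with testing errors:     {lr:.0f}")            # roughly 94
print(f"posterior from a 1% prior:  {posterior_prob(0.01, lr):.3f}")

    The gap between the two likelihood ratios illustrates the paper's point: even a small false positive rate can reduce the strength of match evidence by orders of magnitude, which is exactly what is lost when experts ignore testing errors.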

    Quantitative assessment of the discrimination potential of class and randomly acquired characteristics for crime scene quality shoeprints

    Footwear evidence has tremendous forensic value; it can focus a criminal investigation, link suspects to scenes, help reconstruct a series of events, or otherwise provide information vital to the successful resolution of a case. When considering the specific utility of a linkage, the strength of the connection between the source footwear and an impression left at the scene of a crime varies with the known rarity of the shoeprint itself, which is a function of the class characteristics as well as the complexity, clarity, and quality of the randomly acquired characteristics (RACs) available for analysis. To help elucidate the discrimination potential of footwear as a source of forensic evidence, the aim of this research was three-fold.
    The first (and most time-consuming) part of this study was data acquisition. In order to efficiently process footwear exemplar inputs and extract meaningful data, including information about randomly acquired characteristics, a semi-automated image processing chain was developed. To date, 1,000 shoes have been fully processed, yielding a total of 57,426 RACs characterized in terms of position (theta, r, rnorm), shape (circle, line/curve, triangle, irregular), and complex perimeter (e.g., Fourier descriptor). A plot of each feature versus position allowed for the creation of a heat map detailing coincidental RAC co-occurrence in position and shape. Results indicate that random chance association is as high as 1:756 for lines/curves and as low as 1:9,571 for triangular-shaped features. However, when a detailed analysis of the RAC's geometry is evaluated, each feature is distinguishable.
    The second goal of this project was to ascertain the baseline performance of an automated footwear classification algorithm. A brief literature review reveals more than a dozen different approaches to automated shoeprint classification over the last decade. Unfortunately, despite the multitude of options and reports on algorithm inter-comparisons, few studies have assessed accuracy for crime-scene-like prints. To remedy this deficit, this research quantitatively assessed the baseline performance of a single metric, known as Phase Only Correlation (POC), on both high quality and crime-scene-like prints. The objective was to determine the baseline performance for high quality exemplars with high signal-to-noise ratios, and then determine the degree to which this performance declined as a function of variations in mixed media (blood and dust), transfer mechanisms (gel lifters), enhancement techniques (digital and chemical), and substrates (ceramic tiles, vinyl tiles, and paper). The results indicate probabilities greater than 0.850 (and as high as 0.989) that known matches will exhibit stochastic dominance, and probabilities of 0.99 with high quality exemplars (Handiprints or outsole edge images).
    The third and final aim of this research was to mathematically evaluate the frequency and similarity of RACs in high quality exemplars versus crime-scene-like impressions as a function of RAC shape, perimeter, and area. This was accomplished using wet-residue impressions (created in the laboratory, but generated in a manner intended to replicate crime-scene-like prints). These impressions were processed in the same manner as their high quality exemplar mates, allowing for the determination of RAC loss and correlation of the entire RAC map between crime scene and high quality images. Results show that the unpredictable nature of crime scene print deposition causes RAC loss that varies from 33% to 100%, with an average loss of 85%, and that up to 10% of the crime scene impressions lacked any identifiable RACs. Despite the loss of features in the crime-scene-like impressions, there was a 0.74 probability that the actual shoe's high quality RAC map would rank higher in an ordered list than a known non-match map when queried with the crime-scene-like print. Moreover, this was true despite the fact that 64% of the crime-scene-like impressions exhibited 10 or fewer RACs.
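
    Phase Only Correlation, the metric assessed in the second part of the study, can be sketched in a few lines of NumPy: the cross-power spectrum of the two images is normalized to unit magnitude (keeping phase only) and inverse-transformed, and the height of the resulting peak measures similarity. The sketch below is a generic illustration, not the study's implementation.

# Minimal Phase-Only Correlation (POC) between two equal-size grayscale images.
import numpy as np

def phase_only_correlation(img_a, img_b, eps=1e-12):
    """Return the POC surface and its peak value for two 2-D arrays."""
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    cross = fa * np.conj(fb)
    r = cross / (np.abs(cross) + eps)   # keep phase, discard magnitude
    poc = np.real(np.fft.ifft2(r))
    return poc, float(poc.max())

# Toy check: an image compared with a circularly shifted copy of itself should
# give a sharp peak (near 1.0) whose location encodes the shift.
rng = np.random.default_rng(0)
a = rng.normal(size=(64, 64))
b = np.roll(a, shift=(5, 3), axis=(0, 1))
poc, peak = phase_only_correlation(a, b)
print("peak value:", round(peak, 3))
print("peak location:", np.unravel_index(poc.argmax(), poc.shape))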