89 research outputs found

    Consistent Correspondences for Shape and Image Problems

    Get PDF
    Establishing consistent correspondences between different objects is a classic problem in computer science and computer vision: it helps to match highly similar objects in both the 3D and 2D domains. In the 3D domain, finding consistent correspondences has been studied for more than 20 years and remains an active topic. In the 2D domain, consistent correspondences can also help in puzzle solving, yet only a few works have focused on this approach. In this thesis, we focus on finding consistent correspondences and extend this to develop robust matching techniques for both 3D shape segments and 2D puzzle solving. In the 3D domain, segment-wise matching is an important research problem that supports higher-level understanding of shapes in geometry processing. Many existing segment-wise matching techniques assume perfect input segmentation and suffer from imperfect or over-segmented input. To handle this shortcoming, we propose multi-layer graphs (MLGs) to represent possible arrangements of partially merged segments of input shapes. We then adapt the diffusion pruning technique on the MLGs to find consistent segment-wise matchings. To obtain high-quality matchings, we develop a voting step that removes inconsistent results, yielding hierarchically consistent correspondences as the final output. We evaluate our technique with quantitative and qualitative experiments on both man-made and deformable shapes. Experimental results demonstrate the effectiveness of our technique when compared to two state-of-the-art methods. In the 2D domain, solving jigsaw puzzles is also a classic problem in computer vision with various applications, and over the past decades many useful approaches have been introduced. Most existing works use edge-wise similarity measures for assembling puzzles with square pieces of the same size, and recent work innovates by using the loop constraint to improve efficiency and accuracy.
We observe that most existing techniques cannot be easily extended to puzzles with rectangular pieces of arbitrary sizes, and that no existing loop constraints can model such challenging scenarios. We propose new matching approaches based on sub-edges/corners, modelled using the MatchLift or diffusion framework, to solve square puzzles with cycle consistency. We demonstrate the robustness of our approaches by comparing our methods with state-of-the-art methods. We also show how puzzles with rectangular pieces of arbitrary sizes, or puzzles with triangular and square pieces, can be solved by our techniques.

    Statistical Assessment of the Significance of Fracture Fits in Trace Evidence

    Get PDF
    Fracture fits are often regarded as the highest degree of association of trace materials, due to the common belief that inherently random fracturing events produce individualizing patterns. Often referred to as physical matches, fracture matches, or physical fits, these assessments consist of the realignment of two or more items with distinctive features and edge morphologies to demonstrate they were once part of the same object. Separated materials may provide a valuable link between items, individuals, or locations in forensic casework in a variety of criminal situations. Physical fit examinations require the use of the examiner's judgment, which can rarely be supported by quantifiable uncertainty or widely reported error rates. Therefore, there is a need to develop, validate, and standardize fracture fit examination methodology and respective interpretation protocols. This research aimed to develop systematic methods of examination and quantitative measures to assess the significance of trace evidence physical fits. This was facilitated through four main objectives: 1) an in-depth review manuscript consisting of 112 case reports, fractography studies, and quantitative-based studies to provide an organized summary establishing the current physical fit research base, 2) a pilot inter-laboratory study of a systematic, score-based technique previously developed by our research group for evaluation of duct tape physical fit pairs, referred to as the Edge Similarity Score (ESS), 3) the initial expansion of ESS methodology into textile materials, and 4) an expanded optimization and evaluation study of X-ray Fluorescence (XRF) Spectroscopy for electrical tape backing analysis, for implementation with an amorphous material for which physical fits may not be feasible due to a lack of distinctive features. Objective 1 was completed through a large-scale literature review and manuscript compilation of 112 fracture fit reports and research studies.
Literature was evaluated in three overall categories: case reports, fractography or qualitative-based studies, and quantitative-based studies. In addition, 12 standard operating protocols (SOPs) provided by various state and federal-level forensic laboratories were reviewed to provide an assessment of current physical fit practice. A review manuscript was submitted to Forensic Science International and has been accepted for publication. This manuscript provides, for the first time, a literature review of physical fits of trace materials, and it served as the basis for this project. The pilot inter-laboratory study (Objective 2) consisted of three study kits, each consisting of 7 duct tape comparison pairs with a ground truth of 4 matching pairs (3 of the expected M+ qualifier range, 1 of the more difficult M- range) and 3 non-matching pairs (NM). The kits were distributed as a Round Robin study, resulting in 16 overall participants and 112 physical fit comparisons. Prior to kit distribution, a consensus on each sample's ESS was reached between 4 examiners with an agreement criterion of better than ± 10% ESS. Along with the physical comparison pairs, the study included a brief post-study survey allowing the distributors to receive feedback on the participants' opinions on method ease of use and practicality. No misclassifications were observed across all study kits. The majority (86.6%) of reported ESS scores were within ± 20 ESS compared to consensus values determined before the administration of the test. Accuracy ranged from 88% to 100%, depending on the criteria used for evaluation of the error rates. In addition, on average, 77% of ESS values showed no significant differences from the respective pre-distribution consensus mean scores when subjected to ANOVA-Dunnett's analysis using level of difficulty as a blocking variable.
These differences were more often observed on sets of higher difficulty (M-, 5 out of 16 participants, or 31%) than on sets of lower difficulty (M+, 3 out of 16 participants, or 19%). Three main observations were derived from the participant results: 1) overall good agreement between ESS reported by examiners was observed, 2) the ESS represented a good indicator of the quality of the match and rendered low error rates on conclusions, and 3) those examiners who did not participate in formal method training tended to report ESS falling outside of expected pre-distribution ranges. This inter-laboratory study serves as an important precedent, as it represents the largest inter-laboratory study ever reported using a quantitative assessment of physical fits of duct tapes. In addition, the study provides valuable insights to move forward with the standardization of protocols of examination and interpretation. Objective 3 consisted of a preliminary study on the assessment of 274 total comparisons of stabbed (N=100) and hand-torn (N=174) textile pairs as completed by two examiners. The first 74 comparisons resulted in a high incidence of false exclusions (63%) on textiles prone to distortion, revealing the need to assess suitability prior to physical fit examination of fabrics. For the remaining dataset, five clothing items of various textile compositions and constructions were subjected to fracture. The overall set consisted of 100 comparison pairs, 20 per textile item, 10 each per separation method of stabbed or hand-torn fractured edges, each examined by two analysts. Examiners determined ESS through the analysis of 10 bins of equal divisions of the total fracture edge length. A weighted ESS was also determined with the addition of three optional weighting factors per bin due to the continuation of a pattern, separation characteristics (i.e. damage or protrusions/gaps), or partial pattern fluorescence across the fractured edges.
With the addition of a weighted ESS, a rarity ratio was determined as the ratio between the weighted ESS and the non-weighted ESS. In addition, the frequency of occurrence of all noted distinctive characteristics leading to the addition of a weighting factor by the examiner was determined. Overall, 93% accuracy was observed for the hand-torn set, while 95% accuracy was observed for the stabbed set. Higher misclassification in the hand-torn set was observed in textile items of either 100% polyester composition or jersey knit construction, as higher elasticity led to greater fracture edge distortion. In addition, higher misclassification was observed in the stabbed set for textiles with no pattern, as the stabbed edges led to straight, featureless bins often associated only through pattern continuation. The results of this study are anticipated to provide valuable knowledge for the future development of protocols for evaluation of relevant features of textile fractures and assessments of the suitability for fracture fit comparisons. Finally, the XRF methodology optimization and evaluation study (Objective 4) expanded upon our group's previous discrimination studies by broadening the total sample set of characterized tapes and evaluating the use of spectral overlay, spectral contrast angle, and Quadratic Discriminant Analysis (QDA) for the comparison of XRF spectra. The expanded sample set consisted of 114 samples, 94 from different sources and 20 from the same roll. Twenty sections from the same roll were used to assess intra-roll variability, and for each sample, replicate measurements on different locations of the tape were analyzed (n=3) to assess the intra-sample variability. Inter-source variability was evaluated through 94 rolls of tapes of a variety of labeled brands, manufacturers, and product names. Parameter optimization included a comparison of atmospheric conditions, collection times, and instrumental filters.
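One of the spectral comparison metrics named above, the spectral contrast angle, has a standard formulation as the angle between two spectra treated as intensity vectors (0° for identically shaped spectra). The sketch below assumes that reading; function and variable names are illustrative, not taken from the study.

```python
import math

def spectral_contrast_angle(a, b):
    """Angle in degrees between two spectra viewed as intensity vectors.
    Identical spectral shapes (including pure scaling) give ~0 degrees;
    a plausible reading of the metric named in the abstract, not the
    study's exact implementation."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    cos_theta = max(-1.0, min(1.0, dot / (na * nb)))  # clamp for acos
    return math.degrees(math.acos(cos_theta))

# A spectrum and a scaled copy have the same shape: angle ~ 0 degrees.
assert spectral_contrast_angle([1, 2, 3], [2, 4, 6]) < 1e-6
# Orthogonal intensity vectors are maximally dissimilar: 90 degrees.
assert abs(spectral_contrast_angle([1, 0], [0, 1]) - 90.0) < 1e-9
```

Because the angle ignores overall intensity scaling, it compares spectral shape, which is useful when collection conditions vary between measurements.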
A study of the effects of adhesive and backing thickness on spectrum collection revealed key implications for the method that required modification of the sample support material. Figures of merit assessed included accuracy and discrimination over time, precision, sensitivity, and selectivity. One of the most important contributions of this study is the proposal of alternative objective methods of spectral comparison, and the performance of different methods for comparing and contrasting spectra was evaluated. The optimization of this method was part of an assessment to incorporate XRF into a forensic laboratory protocol for rapid, highly informative elemental analysis of electrical tape backings and to expand examiners' casework capabilities in circumstances where a physical fit conclusion is limited due to the amorphous nature of electrical tape backings. Overall, this work strengthens the fracture fit research base by further developing quantitative methodologies for duct tape and textile materials and by initiating widespread distribution of the technique through an inter-laboratory study, taking first steps towards laboratory implementation. Additional projects established the current state of forensic physical fit research to provide the foundation from which future quantitative work, such as the studies presented here, must grow, and provided highly sensitive techniques of analysis for materials that present limited fracture fit capabilities.
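The bin-based Edge Similarity Score used throughout the duct tape and textile studies admits a minimal numeric sketch: the fracture edge is divided into bins of equal length (10 in the textile study), each bin is scored for agreement, and ESS is the percentage of agreeing bins; the rarity ratio is then the weighted ESS over the non-weighted ESS, as described above. The 0/1 per-bin scoring and the function names below are assumptions for illustration.

```python
# Illustrative sketch of the Edge Similarity Score (ESS) and rarity ratio.
# The binary per-bin scoring is an assumption; the studies above allow
# optional per-bin weighting factors that raise the weighted ESS.

def edge_similarity_score(bin_scores):
    """bin_scores: one agreement score (0 or 1) per bin along the edge.
    Returns ESS as the percentage of agreeing bins."""
    return 100.0 * sum(bin_scores) / len(bin_scores)

def rarity_ratio(weighted_ess, ess):
    """Rarity ratio as described above: weighted ESS over non-weighted ESS."""
    return weighted_ess / ess

scores = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 8 of 10 bins agree
print(edge_similarity_score(scores))      # 80.0
print(rarity_ratio(120.0, edge_similarity_score(scores)))  # 1.5
```

A rarity ratio above 1 indicates that distinctive characteristics (pattern continuation, damage features, fluorescence) contributed weight beyond plain bin agreement.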

    Exploiting Spatio-Temporal Coherence for Video Object Detection in Robotics

    Get PDF
    This paper proposes a method to enhance video object detection for indoor environments in robotics. Concretely, it exploits knowledge about the camera motion between frames to propagate previously detected objects to successive frames. The proposal is rooted in the concepts of planar homography, to propose regions of interest where to find objects, and recursive Bayesian filtering, to integrate observations over time. The proposal is evaluated on six virtual indoor environments, accounting for the detection of nine object classes over a total of ~7k frames. Results show that our proposal improves the recall and the F1-score by factors of 1.41 and 1.27, respectively, and achieves a significant reduction of the object categorization entropy (58.8%) when compared to a two-stage video object detection method used as baseline, at the cost of small time overheads (120 ms) and precision loss (0.92).
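Of the two ingredients named above, the homography-based region proposal has a compact standard form: warp the corners of a detected bounding box with the 3x3 frame-to-frame homography and take the axis-aligned bounding box of the result as the proposal region in the next frame. The sketch below assumes the homography H is available (e.g. from the known camera motion); names are illustrative.

```python
import numpy as np

def propagate_box(box, H):
    """box: (x1, y1, x2, y2) in frame t. Warps the four corners with the
    homography H (frame t -> frame t+1) and returns the axis-aligned
    bounding box of the warped corners: a region proposal for frame t+1."""
    x1, y1, x2, y2 = box
    corners = np.array([[x1, y1, 1], [x2, y1, 1],
                        [x2, y2, 1], [x1, y2, 1]], dtype=float).T
    warped = H @ corners
    warped = warped[:2] / warped[2]  # perspective divide
    return (warped[0].min(), warped[1].min(), warped[0].max(), warped[1].max())

# A pure 10-pixel horizontal translation shifts the box right by 10.
H = np.array([[1, 0, 10], [0, 1, 0], [0, 0, 1]], dtype=float)
print(propagate_box((0, 0, 5, 5), H))  # (10.0, 0.0, 15.0, 5.0)
```

The proposed regions can then seed the detector, while a recursive Bayesian filter fuses the per-frame class observations into a stable belief per object.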

    Cognitive Maps

    Get PDF

    Electronic Imaging & the Visual Arts. EVA 2019 Florence

    Get PDF
    The publication follows the yearly editions of EVA FLORENCE. It presents the state of the art in the application of technologies, in particular digital technologies, to Cultural Heritage, together with the most recent research results in this area. Information technologies of interest for Cultural Heritage are presented: multimedia systems, databases, data protection, access to digital content, and virtual galleries. Particular attention is devoted to digital images (Electronic Imaging & the Visual Arts) in cultural institutions (museums, libraries, palaces and monuments, archaeological sites). The international conference includes the following sessions: Strategic Issues; New Science and Culture Developments & Applications; New Technical Developments & Applications; Cultural Activities – Real and Virtual Galleries and Related Initiatives; and Access to the Culture Information. One workshop concerns Innovation and Enterprise. The most recent national and international research results in the area of technologies and Cultural Heritage are reported, together with experimental demonstrations of the developed activities.

    Crowdsource Annotation and Automatic Reconstruction of Online Discussion Threads

    Get PDF
    Modern communication relies on electronic messages organized in the form of discussion threads. Emails, IMs, SMS, website comments, and forums are all composed of threads, which consist of individual user messages connected by metadata and discourse coherence to messages from other users. Threads are used to display user messages effectively in a GUI such as an email client, providing a background context for understanding a single message; many messages are meaningless without the context provided by their thread. However, a number of factors may result in missing thread structure, ranging from user error (replying to the wrong message), to missing metadata (some email clients do not produce or save headers that fully encapsulate thread structure, and conversion of archived threads from one repository to another may also result in lost metadata), to covert use (users may avoid metadata to render discussions difficult for third parties to understand). In the field of security, law enforcement agencies may obtain vast collections of discussion turns that require automatic thread reconstruction to understand. For example, the Enron Email Corpus, obtained by the Federal Energy Regulatory Commission during its investigation of the Enron Corporation, has no inherent thread structure. In this thesis, we will use natural language processing approaches to reconstruct threads from message content. Reconstruction based on message content sidesteps the problem of missing metadata, permitting post hoc reorganization and discussion understanding. We will investigate corpora of email threads and Wikipedia discussions. However, annotated corpora for this task are scarce; therefore, we also investigate issues faced when creating crowdsourced datasets and learning statistical models of them.
Several of our findings are applicable for other natural language machine classification tasks, beyond thread reconstruction. We will divide our investigation of discussion thread reconstruction into two parts. First, we explore techniques needed to create a corpus for our thread reconstruction research. Like other NLP pairwise classification tasks such as Wikipedia discussion turn/edit alignment and sentence pair text similarity rating, email thread disentanglement is a heavily class-imbalanced problem, and although the advent of crowdsourcing has reduced annotation costs, the common practice of crowdsourcing redundancy is too expensive for class-imbalanced tasks. As the first contribution of this thesis, we evaluate alternative strategies for reducing crowdsourcing annotation redundancy for class-imbalanced NLP tasks. We also examine techniques to learn the best machine classifier from our crowdsourced labels. In order to reduce noise in training data, most natural language crowdsourcing annotation tasks gather redundant labels and aggregate them into an integrated label, which is provided to the classifier. However, aggregation discards potentially useful information from linguistically ambiguous instances. For the second contribution of this thesis, we show that, for four of five natural language tasks, filtering of the training dataset based on crowdsource annotation item agreement improves task performance, while soft labeling based on crowdsource annotations does not improve task performance. Second, we investigate thread reconstruction as divided into the tasks of thread disentanglement and adjacency recognition. We present the Enron Threads Corpus, a newly-extracted corpus of 70,178 multi-email threads with emails from the Enron Email Corpus. In the original Enron Emails Corpus, emails are not sorted by thread. 
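The agreement-based filtering finding above can be sketched simply: gather the redundant crowd labels per item, keep only items whose majority label is sufficiently dominant, and train on that majority label, dropping linguistically ambiguous instances. The threshold and all names below are assumptions for illustration, not the thesis's exact procedure.

```python
from collections import Counter

def filter_by_agreement(labelled_items, min_agreement=0.8):
    """labelled_items: dict item_id -> list of crowd labels for that item.
    Returns item_id -> majority label for items whose top label reaches the
    agreement threshold; ambiguous items are dropped from the training set."""
    kept = {}
    for item, labels in labelled_items.items():
        label, count = Counter(labels).most_common(1)[0]
        if count / len(labels) >= min_agreement:
            kept[item] = label
    return kept

items = {
    "pair_1": ["same-thread"] * 5,                       # unanimous: kept
    "pair_2": ["same-thread", "same-thread", "different",
               "different", "same-thread"],              # 60%: dropped
}
print(filter_by_agreement(items))  # {'pair_1': 'same-thread'}
```

The contrast with soft labeling is that here ambiguous items are removed entirely, rather than passed to the classifier with fractional label weights.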
To disentangle these threads, and as the third contribution of this thesis, we perform pairwise classification, using text similarity measures on non-quoted texts in emails. We show that i) content text similarity metrics outperform style and structure text similarity metrics in both a class-balanced and class-imbalanced setting, and ii) although feature performance is dependent on the semantic similarity of the corpus, content features are still effective even when controlling for semantic similarity. To reconstruct threads, it is also necessary to identify adjacency relations among pairs. For the forum of Wikipedia discussions, metadata is not available, and dialogue act typologies, helpful for other domains, are inapplicable. As our fourth contribution, via our experiments, we show that adjacency pair recognition can be performed using lexical pair features, without a dialogue act typology or metadata, and that this is robust to controlling for topic bias of the discussions. Yet lexical pair features do not effectively model the lexical semantic relations between adjacency pairs. To model lexical semantic relations, and as our fifth contribution, we perform adjacency recognition using extracted keyphrases enhanced with semantically related terms. While this technique outperforms a most-frequent-class baseline, it fails to outperform lexical pair features or tf-idf weighted cosine similarity. Our investigation shows that this is the result of poor word sense disambiguation and poor keyphrase extraction causing spurious false positive semantic connections. In concluding this thesis, we also reflect on open issues and unanswered questions remaining after our research contributions, discuss applications for thread reconstruction, and suggest some directions for future work.
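The content-similarity features at the core of the disentanglement experiments can be sketched as tf-idf weighted cosine similarity over non-quoted message texts. The toy two-document corpus and all names below are illustrative, not the thesis's actual feature set.

```python
import math
from collections import Counter

def tfidf_cosine(doc_a, doc_b):
    """tf-idf weighted cosine similarity between two short texts, with idf
    computed over this two-document corpus (smoothed). A minimal sketch of
    a content text-similarity feature for pairwise classification."""
    docs = [doc_a.lower().split(), doc_b.lower().split()]
    vocab = set(docs[0]) | set(docs[1])
    idf = {t: math.log(2 / (1 + sum(t in d for d in docs))) + 1 for t in vocab}
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({t: tf[t] * idf[t] for t in vocab})
    dot = sum(vecs[0][t] * vecs[1][t] for t in vocab)
    na = math.sqrt(sum(v * v for v in vecs[0].values()))
    nb = math.sqrt(sum(v * v for v in vecs[1].values()))
    return dot / (na * nb) if na and nb else 0.0

sim_same = tfidf_cosine("budget meeting friday", "friday budget meeting agenda")
sim_diff = tfidf_cosine("budget meeting friday", "fantasy football picks")
assert sim_same > sim_diff  # same-thread pair scores higher
```

A pairwise classifier would consume such scores (alongside style and structure features) to decide whether two emails belong to the same thread.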

    Forging a Stable Relationship?: Bridging the Law and Forensic Science Divide in the Academy

    Get PDF
    The marriage of law and science has most often been represented as discordant. While the law/science divide meme is hardly novel, concerns over the potentially deleterious coupling within the criminal justice system may have reached fever pitch. There is a growing chorus of disapproval addressed to 'forensic science', accompanied by the denigration of legal professionals for being unable or unwilling to forge a symbiotic relationship with forensic scientists. The 2009 National Academy of Sciences Report on forensic science heralds the latest call for greater collaboration between 'law' and 'science', particularly in Higher Education Institutions (HEIs), yet little reaction has been apparent among law and science faculties. To investigate the potential for interdisciplinary cooperation, the authors received funding for a project, 'Lowering the Drawbridges: Forensic and Legal Education in the 21st Century', hoping to stimulate both law and forensic science educators to seek mutually beneficial solutions to common educational problems and build vital connections in the academy. A workshop held in the UK, attended by academics and practitioners from scientific, policing, and legal backgrounds, marked the commencement of the project. This paper outlines some of the workshop conclusions to elucidate areas of dissent and consensus, and where further dialogue is required, but aims to strike a note of optimism: the 'cultural divide' should not be taken to be so wide as to be beyond the capacity of the legal and forensic science academy to bridge. The authors seek to demonstrate that legal and forensic science educators can work cooperatively to respond to critics and forge new paths in learning and teaching, creating an opportunity to take stock and enrich both disciplines.
As Latham (2010:34) exhorts, we are not interested in turning lawyers into scientists and vice versa, but in building a foundation upon which they can build during their professional lives: "Instead of melding the two cultures, we need to establish conditions of cooperation, mutual respect, and mutual reliance between them." Law and forensic science educators should, and can, assist with the building of a mutual understanding between forensic scientists and legal professionals, a significant step on the road to answering calls for the professions to minimise some of the risks associated with the use of forensic science in the criminal process.
REFERENCES: Latham, S.R. 2010, 'Law between the cultures: C.P. Snow's The Two Cultures and the problem of scientific illiteracy in law', Technology in Society 32, 31-34.
KEYWORDS: forensic science education; legal education; law/science divide
