
    Image Processing Applications in Real Life: 2D Fragmented Image and Document Reassembly and Frequency Division Multiplexed Imaging

    In this era of modern technology, image processing is one of the most studied disciplines of signal processing, and its applications can be found in every aspect of our daily life. In this work, three main applications of image processing have been studied. In Chapter 1, frequency division multiplexed imaging (FDMI), a novel idea in the field of computational photography, is introduced. Using FDMI, multiple images are captured simultaneously in a single shot and can later be extracted from the multiplexed image. This is achieved by spatially modulating the images so that they are placed at different locations in the Fourier domain. Finally, a Texas Instruments digital micromirror device (DMD) based implementation of FDMI is presented and results are shown. Chapter 2 discusses the problem of image reassembly, which is to restore an image to its original form from its pieces after it has been fragmented by various destructive processes. We propose an efficient algorithm for the 2D image fragment reassembly problem based on solving a variation of the Longest Common Subsequence (LCS) problem. Our processing pipeline has three steps: first, the boundary of each fragment is extracted automatically; second, a novel boundary matching is performed by solving the LCS problem to identify the best possible adjacency relationships among image fragment pairs; finally, a multi-piece global alignment is used to filter out incorrect pairwise matches and compose the final image. We perform experiments on complicated image fragment datasets and compare our results with existing methods to show the improved efficiency and robustness of our method. The problem of reassembling a hand-torn or machine-shredded document back to its original form is another useful version of the image reassembly problem. Reassembling a shredded document differs from reassembling an ordinary image because the geometric shapes of the fragments do not carry much valuable information if the document has been machine-shredded rather than hand-torn. On the other hand, matching words and context can be used as an additional tool to help with the reassembly task. In the final chapter, the document reassembly problem is addressed by solving a graph optimization problem.
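    As a rough illustration of the FDMI principle described above, the sketch below multiplexes two grayscale images by keeping one at baseband and shifting the other to a higher horizontal frequency with a cosine carrier, then recovers both by low-pass filtering in the Fourier domain. This is only a minimal NumPy sketch under the assumption that both images are roughly band-limited below the chosen cutoff; the carrier frequency and filter shape are arbitrary example values, and the DMD-based optical implementation from Chapter 1 is not reproduced.

        import numpy as np

        def fdmi_multiplex(img_a, img_b, fx=0.25):
            """Multiplex two equally sized images: img_a stays at baseband,
            img_b is shifted to horizontal frequency fx by a cosine carrier."""
            h, w = img_a.shape
            carrier = np.cos(2 * np.pi * fx * np.arange(w))[None, :]
            return img_a + img_b * carrier

        def fdmi_extract(multiplexed, fx=0.25, cutoff=0.1):
            """Recover both images from the single multiplexed frame."""
            h, w = multiplexed.shape
            u = np.fft.fftfreq(w)[None, :]
            v = np.fft.fftfreq(h)[:, None]
            lowpass = (np.abs(u) < cutoff) & (np.abs(v) < cutoff)

            # Baseband image: a plain low-pass filter in the Fourier domain.
            img_a = np.real(np.fft.ifft2(np.fft.fft2(multiplexed) * lowpass))

            # Modulated image: demodulate back to baseband, then low-pass.
            carrier = np.cos(2 * np.pi * fx * np.arange(w))[None, :]
            demod = np.fft.fft2(multiplexed * carrier)
            img_b = 2.0 * np.real(np.fft.ifft2(demod * lowpass))  # cos^2 leaves half the energy at baseband
            return img_a, img_b

    Because each sub-image only keeps the spectral content inside its allotted band, the recovered images are low-pass approximations of the originals.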

    Semi-automatic Solving of "Jigsaw puzzles" for Material Reconstruction of Dead Sea Scrolls

    Digital solving of jigsaw puzzles has been well researched throughout the years, and multiple approaches to solving them have been proposed. But these approaches have not been applied to reconstructing ancient manuscripts made of transient material such as leather or parchment. The literature describes ways to reconstruct ancient artefacts, but it describes the process for more durable objects such as pottery. In this thesis we explore the usability of existing state-of-the-art methods for the purpose of aiding the reconstruction of the Dead Sea Scrolls, also known as the Qumran scrolls. Our experiments show that the existing methods as such do not provide good results in this domain, but with modifications they can help through a semi-automated reconstruction process. We expect these modifications, and the software that was created as a by-product of this thesis, to ease the researchers' work by automating the previously laborious manual work.

    RECOVERY OF DOCUMENT TEXT FROM TORN FRAGMENTS USING IMAGE PROCESSING

    Recovery of a document from its torn or damaged fragments plays an important role in the fields of forensics and archival study. Reconstructing torn papers manually with glue, tape, etc. is tedious, time consuming, and unsatisfactory. For reconstructing torn images, one option is image mosaicing, where the image is reconstructed using features (corners) and RANSAC with a homography. But for torn fragments there is no such overlapping portion shared between fragments. Hence we propose a new process to recover the original document from its torn pieces using binary image processing techniques together with the region properties of the torn pieces. Our methodology for recovering the document from its torn pieces works in three simple stages. Initially, the torn pieces of the document are acquired as input. The torn pieces are straightened to the axis using the HORIZON function and then concatenated. The torn fragments are segmented based on their region properties, and the segmented images are concatenated. Finally, by creating a mask, the concatenated images are combined.
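    As a hedged sketch of the segmentation-by-region-properties stage, the following scikit-image snippet thresholds a scan containing several torn pieces and extracts each piece as a masked crop, together with an orientation estimate that a straightening step could use. It assumes the pieces are darker than the background in a single scanned image; the HORIZON-based straightening and the final mask-based combination described in the abstract are not reproduced here.

        import numpy as np
        from skimage import io, filters, measure

        def extract_pieces(scan_path, min_area=500):
            """Segment a scan of torn pieces into individual masked crops
            using binary thresholding and region properties."""
            gray = io.imread(scan_path, as_gray=True)
            binary = gray < filters.threshold_otsu(gray)   # assumes pieces darker than background
            labels = measure.label(binary)
            pieces = []
            for region in measure.regionprops(labels):
                if region.area < min_area:                 # skip specks of noise
                    continue
                minr, minc, maxr, maxc = region.bbox
                crop = gray[minr:maxr, minc:maxc].copy()
                crop[~region.image] = 1.0                  # blank everything outside the piece
                pieces.append({
                    "crop": crop,
                    "orientation": region.orientation,     # usable for axis-straightening
                    "bbox": region.bbox,
                })
            return pieces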

    Solving Jigsaw Puzzles By the Graph Connection Laplacian

    We propose a novel mathematical framework to address the problem of automatically solving large jigsaw puzzles. This problem assumes a large image, which is cut into equal square pieces that are arbitrarily rotated and shuffled, and asks to recover the original image given the transformed pieces. The main contribution of this work is a method for recovering the rotations of the pieces when both shuffles and rotations are unknown. A major challenge of this procedure is estimating the graph connection Laplacian without the knowledge of shuffles. We guarantee some robustness of the latter estimate to measurement errors. A careful combination of our proposed method for estimating rotations with any existing method for estimating shuffles results in a practical solution for the jigsaw puzzle problem. Numerical experiments demonstrate the competitive accuracy of this solution, its robustness to corruption, and its computational advantage for large puzzles.
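    The synchronization step at the heart of this approach can be illustrated compactly. The sketch below assumes that pairwise relative rotations between neighboring pieces are already available (the hard part of the paper, estimating the connection Laplacian without knowing the shuffles, is not reproduced): it builds the graph connection Laplacian from 2x2 rotation blocks and reads the absolute rotations, up to a global rotation, from its two smallest eigenvectors.

        import numpy as np

        def rot(theta):
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s], [s, c]])

        def recover_rotations(n, edges):
            """Estimate piece rotations (relative to piece 0) from pairwise
            measurements via the graph connection Laplacian.

            edges maps (i, j) to the angle theta_ij with R_i ~ rot(theta_ij) @ R_j.
            """
            W = np.zeros((2 * n, 2 * n))
            deg = np.zeros(n)
            for (i, j), theta in edges.items():
                R = rot(theta)
                W[2 * i:2 * i + 2, 2 * j:2 * j + 2] = R
                W[2 * j:2 * j + 2, 2 * i:2 * i + 2] = R.T
                deg[i] += 1
                deg[j] += 1
            L = np.kron(np.diag(deg), np.eye(2)) - W       # connection Laplacian
            vals, vecs = np.linalg.eigh(L)
            V = vecs[:, :2]                                # eigenvectors of the two smallest eigenvalues
            blocks = []
            for i in range(n):
                U, _, Vt = np.linalg.svd(V[2 * i:2 * i + 2, :])
                blocks.append(U @ Vt)                      # nearest orthogonal matrix to each 2x2 block
            ref = blocks[0]
            return [B @ ref.T for B in blocks]             # gauge-fix: piece 0 gets the identity

        # Toy check with three pieces rotated by 0, 90 and 180 degrees.
        true = [0.0, np.pi / 2, np.pi]
        edges = {(0, 1): true[0] - true[1],
                 (1, 2): true[1] - true[2],
                 (0, 2): true[0] - true[2]}
        print(recover_rotations(3, edges))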

    Combinational Method for Shredded Document Reconstruction

    Background: Shredded document reconstruction can provide necessary information in forensic investigations but is currently time consuming and requires significant human labor. Objective: Over the past decade, researchers have been improving automated reconstruction techniques, but it is still far from a solved problem. Results: In this paper we propose a combinational method for reconstructing documents that have been shredded by hand or by machine. Our proposed method is based on both character identification and feature matching techniques. Conclusion: Practical results of this hybrid approach are excellent. The preliminary results reported in this paper, which take into account a limited number of shredded pieces (10-15), demonstrate that the proposed approach produces interesting results for the problem of document reconstruction.
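    The abstract does not spell out the features used, so the sketch below only illustrates the general flavor of edge-based feature matching for strip-shredded pages: facing edge profiles of binarized strips are compared by normalized correlation and the strips are chained greedily. The character-identification component and the authors' actual scoring are not taken from the paper; this is an assumed baseline.

        import numpy as np

        def edge_profile(strip, side, width=3):
            """Mean intensity of the outermost columns on one side of a strip
            (strip is a 2D array, e.g. a binarized scan with ink ~ 0)."""
            cols = strip[:, :width] if side == "left" else strip[:, -width:]
            return cols.mean(axis=1)

        def edge_compatibility(left_strip, right_strip):
            """Normalized correlation of the facing edge profiles; higher means
            right_strip is a better continuation of left_strip."""
            a = edge_profile(left_strip, "right")
            b = edge_profile(right_strip, "left")
            a, b = a - a.mean(), b - b.mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            return float(a @ b / denom) if denom > 0 else 0.0

        def greedy_order(strips):
            """Chain strips left to right by repeatedly picking the most
            compatible remaining strip (a crude baseline, not the paper's method)."""
            remaining = list(range(len(strips)))
            order = [remaining.pop(0)]                     # arbitrary starting strip
            while remaining:
                last = strips[order[-1]]
                best = max(remaining, key=lambda j: edge_compatibility(last, strips[j]))
                order.append(best)
                remaining.remove(best)
            return order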

    Evaluating the Validity and Reliability of Textile and Paper Fracture Characteristics in Forensic Comparative Analysis

    In a comparative forensic analysis, an examiner can report that a physical fit exists between two torn or separated items when they realign in a manner unlikely to be replicated. Due to the common belief that it is unlikely that two unrelated fractured objects would match with distinctive characteristics, a physical fit represents the highest degree of association between two items. Nonetheless, despite the probative value that this evidence could have to a trier of fact, few studies have demonstrated such assumptions' scientific validity and reliability. Moreover, there is a lack of consensus-based standard protocols for physical fit comparisons, making it difficult to demonstrate the basis for the features that constitute a “fit.” Since these analyses rely entirely on human judgment, they are highly subjective, which could be problematic in the absence of harmonized examination and interpretation criteria protocols. As a result, organizations like the National Institute of Justice and NIST-OSAC have identified the need for developing standardized methods and assessing potential error sources in this field. This research aims to address these gaps as applied to physical fits of textiles and paper. Here, standard criteria and prominent features for each material are defined to conduct physical fit examinations in a more reproducible manner. Additionally, a quantitative metric is used to quantify what constitutes a physical fit when conducting comparative analyses of textiles and paper, further increasing the validity and reliability of this methodology and providing a manner of assessing the weight of this evidence when presented in the courtroom. The first aim of this research involved the development of an objective and systematic method of quantifying the similarity between fractured textile samples. This was done by identifying relevant macroscopic and microscopic characteristics in the comparative analysis of a fractured textile dataset. Additionally, factors that affect the suitability of certain types of textiles for physical fit analysis were evaluated. Finally, the systematic score metric was implemented to quantify and document the quality of a physical fit and estimate error rates. The second objective of this study consisted of establishing the scientific foundations of individuality concerning the orientation of microfibers in fractured paper edges. In comparative analysis of paper, it is assumed that the microfibers deposited across the surface of paper are randomly oriented, a key feature for addressing the individuality of paper physical fits. However, this hypothesis has not been tested. This research evaluated the rarity and occurrence of microfiber alignments on fractured documents. It also quantified the comparative features of scissor-cut and hand-torn paper and the respective performance rates. Finally, the comparative analysis of textile and paper physical fits was validated through ground truth datasets and inter-examiner and intra-examiner variability studies. A ground truth blind dataset of known fits and known non-fits was created for 700 textile samples with various fiber types, weave patterns, and separation methods. Also, a set of 260 paper items, including 100 stamps and 160 office paper samples, were examined. The paper specimens contained handwritten or printed entries on two paper types and were separated by scissor-cut or hand-torn methods.
This proposed research provides the criminal justice system with a valuable body of knowledge and a more objective and methodical assessment of the evidential value of physical fits of textiles, paper, and postage stamps.

    An Emergent Space for Distributed Data with Hidden Internal Order through Manifold Learning

    Manifold-learning techniques are routinely used in mining complex spatiotemporal data to extract useful, parsimonious data representations/parametrizations; these are, in turn, useful in nonlinear model identification tasks. We focus here on the case of time series data that can ultimately be modelled as a spatially distributed system (e.g. a partial differential equation, PDE), but where we do not know the space in which this PDE should be formulated. Hence, even the spatial coordinates for the distributed system themselves need to be identified - to emerge from - the data mining process. We will first validate this emergent space reconstruction for time series sampled without space labels in known PDEs; this brings up the issue of observability of physical space from temporal observation data, and the transition from spatially resolved to lumped (order-parameter-based) representations by tuning the scale of the data mining kernels. We will then present actual emergent space discovery illustrations. Our illustrative examples include chimera states (states of coexisting coherent and incoherent dynamics), and chaotic as well as quasiperiodic spatiotemporal dynamics, arising in partial differential equations and/or in heterogeneous networks. We also discuss how data-driven spatial coordinates can be extracted in ways invariant to the nature of the measuring instrument. Such gauge-invariant data mining can go beyond the fusion of heterogeneous observations of the same system, to the possible matching of apparently different systems.
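    A minimal sketch of the core idea, under the assumption that the emergent space is found with a diffusion-maps-style embedding: each "sensor" is represented by its time series alone, pairwise similarities between time series define a kernel, and the leading nontrivial eigenvectors of the resulting Markov matrix serve as candidate emergent spatial coordinates. The kernel-scale heuristic and the plain NumPy implementation are illustrative choices, not the paper's exact pipeline.

        import numpy as np

        def emergent_coordinates(X, n_coords=1, eps=None):
            """Embed the rows of X (one time series per unlabeled 'sensor')
            into candidate emergent spatial coordinates via a diffusion map."""
            d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise squared distances
            if eps is None:
                eps = np.median(d2)                        # common kernel-scale heuristic (assumption)
            K = np.exp(-d2 / eps)
            P = K / K.sum(axis=1, keepdims=True)           # row-stochastic Markov matrix
            vals, vecs = np.linalg.eig(P)
            order = np.argsort(-vals.real)
            return vecs[:, order[1:1 + n_coords]].real     # skip the trivial constant eigenvector

        # If the sensors actually lie on a one-dimensional spatial domain, sorting
        # them by the first emergent coordinate recovers their order up to reflection.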