
    Image Processing Applications in Real Life: 2D Fragmented Image and Document Reassembly and Frequency Division Multiplexed Imaging

    In this era of modern technology, image processing is one of the most studied disciplines of signal processing, and its applications can be found in every aspect of our daily life. In this work, three main applications of image processing have been studied. In Chapter 1, frequency division multiplexed imaging (FDMI), a novel idea in the field of computational photography, is introduced. Using FDMI, multiple images are captured simultaneously in a single shot and can later be extracted from the multiplexed image. This is achieved by spatially modulating the images so that they are placed at different locations in the Fourier domain. Finally, a Texas Instruments digital micromirror device (DMD) based implementation of FDMI is presented and results are shown. Chapter 2 discusses the problem of image reassembly, which is to restore an image to its original form after it has been fragmented for various destructive reasons. We propose an efficient algorithm for the 2D image fragment reassembly problem based on solving a variation of the Longest Common Subsequence (LCS) problem. Our processing pipeline has three steps. First, the boundary of each fragment is extracted automatically; second, a novel boundary matching is performed by solving LCS to identify the best possible adjacency relationships among image fragment pairs; finally, a multi-piece global alignment is used to filter out incorrect pairwise matches and compose the final image. We perform experiments on complicated image fragment datasets and compare our results with existing methods to show the improved efficiency and robustness of our method. The problem of reassembling a hand-torn or machine-shredded document to its original form is another useful version of the image reassembly problem. Reassembling a shredded document differs from reassembling an ordinary image because the geometric shape of the fragments does not carry much valuable information when the document has been machine-shredded rather than hand-torn. On the other hand, matching words and context can serve as an additional tool to improve the reassembly. In the final chapter, the document reassembly problem is addressed by solving a graph optimization problem.
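    The boundary-matching step described above solves a variation of LCS. As a hedged illustration only (the thesis's actual variant operates on boundary feature sequences and is not reproduced here), the classic dynamic-programming LCS looks like:

```python
def lcs_length(a, b):
    """Classic O(len(a)*len(b)) dynamic program for the Longest Common
    Subsequence. In fragment matching, a and b would be sequences of
    quantized boundary features (e.g. curvature codes), not characters."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

# A long common subsequence suggests two fragment boundaries are adjacent.
print(lcs_length("ABCBDAB", "BDCABA"))  # → 4
```

    In a reassembly pipeline, the LCS score would rank candidate fragment pairs before the global alignment stage filters out inconsistent matches.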

    Effective 3D Geometric Matching for Data Restoration and Its Forensic Application

    3D geometric matching is the technique of detecting similar patterns among multiple objects. It is an important and fundamental problem that can facilitate many tasks in computer graphics and vision, including shape comparison and retrieval, data fusion, scene understanding and object recognition, and data restoration. For example, 3D scans of an object from different angles are matched and stitched together to form the complete geometry. In medical image analysis, the motion of deforming organs is modeled and predicted by matching a series of CT images. This problem is challenging and remains unsolved, especially when the similar patterns are 1) small and lack geometric saliency; 2) incomplete due to occlusion during scanning or damage to the data. We study reliable matching algorithms that can tackle the above difficulties and their application in data restoration. Data restoration is the problem of restoring a fragmented or damaged model to its original complete state. It is a new area with direct applications in many scientific fields such as forensics and archeology. In this dissertation, we study novel effective geometric matching algorithms, including curve matching, surface matching, pairwise matching, multi-piece matching and template matching. We demonstrate their applications in an integrated digital pipeline of skull reassembly, skull completion, and facial reconstruction, which is developed to facilitate the state-of-the-art forensic skull/facial reconstruction processing pipeline in law enforcement.
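    Once correspondences between two pieces are hypothesized, pairwise matching of the kind listed above typically reduces to finding the rigid transform that best aligns two point sets. A minimal sketch (the standard SVD-based Kabsch solution, not the dissertation's own algorithm):

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) mapping 3D points P onto Q,
    assuming row-wise corresponding points (Kabsch algorithm)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)   # centroids
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known rotation and translation from corresponding points.
rng = np.random.default_rng(0)
P = rng.normal(size=(10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(P, Q)
print(np.allclose(P @ R.T + t, Q))  # → True
```

    In practice, fragment matching iterates between proposing correspondences (e.g. from boundary descriptors) and solving this alignment, as in ICP-style refinement.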

    Report on shape analysis and matching and on semantic matching

    In GRAVITATE, two disparate specialities come together in one working platform for the archaeologist: the fields of shape analysis and of metadata search. These fields are relatively disjoint at the moment, and the research and development challenge of GRAVITATE is precisely to merge them for our chosen tasks. As shown in chapter 7, the small amount of literature that already attempts to join 3D geometry and semantics is not related to the cultural heritage domain. Therefore, after the project is done, there should be a clear ‘before-GRAVITATE’ and ‘after-GRAVITATE’ split in how these two aspects of a cultural heritage artefact are treated. This state of the art report (SOTA) is ‘before-GRAVITATE’. Shape analysis and metadata description are described separately, as currently in the literature, and we end the report with common recommendations in chapter 8 on possible or plausible cross-connections that suggest themselves. These considerations will be refined for the Roadmap for Research deliverable. Within the project, a jargon is developing in which ‘geometry’ stands for the physical properties of an artefact (not only its shape, but also its colour and material) and ‘metadata’ is used as a general shorthand for the semantic description of the provenance, location, ownership, classification, use, etc. of the artefact. As we proceed in the project, we will find a need to refine those broad divisions and find intermediate classes (such as a semantic description of certain colour patterns), but for now the terminology is convenient – not least because it highlights the interesting area where both aspects meet. On the ‘geometry’ side, the GRAVITATE partners are UVA, Technion and CNR/IMATI; on the metadata side, IT Innovation, the British Museum and the Cyprus Institute; the latter two of course also play the role of internal users and representatives of the Cultural Heritage (CH) data and target user group. CNR/IMATI’s experience in shape analysis and similarity will be an important bridge between the two worlds of geometry and metadata. The authorship and styles of this SOTA reflect these specialisms: the first part (chapters 3 and 4) is purely by the geometry partners (mostly IMATI and UVA), the second part (chapters 5 and 6) by the metadata partners, especially IT Innovation, while the joint overview on 3D geometry and semantics is mainly by IT Innovation and IMATI. The common section on Perspectives was written with the contribution of all.

    A Survey of Geometric Analysis in Cultural Heritage

    We present a review of recent techniques for performing geometric analysis in cultural heritage (CH) applications. The survey is aimed at researchers in the areas of computer graphics, computer vision and CH computing, as well as at scholars and practitioners in the CH field. The problems considered include shape perception enhancement, restoration and preservation support, monitoring over time, object interpretation and collection analysis. All of these problems typically rely on an understanding of the structure of the shapes in question at both a local and a global level. In this survey, we discuss the different problem forms and review the main solution methods, aided by classification criteria based on the geometric scale at which the analysis is performed and the cardinality of the relationships among object parts exploited during the analysis. We conclude the report by discussing open problems and future perspectives.

    An Investigation into the identification, reconstruction, and evidential value of thumbnail cache file fragments in unallocated space

    © Cranfield University. This thesis establishes the evidential value of thumbnail cache file fragments identified in unallocated space. A set of criteria to evaluate the evidential value of thumbnail cache artefacts was created by researching the evidential constraints present in forensic computing. The criteria were used to evaluate the evidential value of live system thumbnail caches and thumbnail cache file fragments identified in unallocated space. Thumbnail caches can contain visual thumbnails and associated metadata which may be useful to an analyst during an investigation; the information stored in the cache may provide information on the contents of files and any user or system behaviour which interacted with the file. There is a standard definition of the purpose of a thumbnail cache, but not of its structure or implementation; this research has shown that this has led to some thumbnail caches storing a variety of other artefacts, such as network place names. The growing interest in privacy and security has led to an increase in users attempting to remove evidence of their activities; information removed by the user may still be available in unallocated space. This research adapted popular methods for the identification of contiguous files to enable the identification of single cluster-sized fragments in Windows 7, Ubuntu, and Kubuntu. Of the four methods tested, none were able to identify each of the classifications without false positive results; this result led to the creation of a new approach which improved the identification of thumbnail cache file fragments. After the identification phase, further research was conducted into the reassembly of file fragments; this reassembly was based solely on the potential thumbnail cache file fragments and structural and syntactical information. In both the identification and reassembly phases of this research, image-only file fragments proved the most challenging, making them a potential area of continued future research. Finally, this research compared the evidential value of live system thumbnail caches with identified and reassembled fragments. It was determined that both types of thumbnail cache artefacts can provide unique information which may assist with a digital investigation. This research has produced a set of criteria for determining the evidential value of thumbnail cache artefacts; it has also identified the structure and related user and system behaviour of popular operating system thumbnail cache implementations. This research has also adapted contiguous file identification techniques to single-fragment identification and has developed an improved method for thumbnail cache file fragment identification. Finally, this research has produced a proof-of-concept software tool for the automated identification and reassembly of thumbnail cache file fragments.
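    The single-cluster identification discussed above must decide, from a few kilobytes of raw bytes, whether a cluster is likely part of a thumbnail cache. As a toy illustration only (a hypothetical heuristic, not the thesis's method), a cluster can be screened by known magic bytes and byte entropy:

```python
import math

def shannon_entropy(data: bytes) -> float:
    """Bits per byte. Compressed image data in a thumbnail cache tends
    toward 8.0, while text or sparse clusters score far lower."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def classify_cluster(cluster: bytes) -> str:
    """Toy single-cluster classifier: check magic bytes first,
    then fall back to an entropy threshold."""
    if cluster.startswith(b"\xff\xd8\xff"):       # JPEG SOI marker
        return "jpeg-start"
    if cluster.startswith(b"\x89PNG\r\n\x1a\n"):  # PNG signature
        return "png-start"
    return "high-entropy" if shannon_entropy(cluster) > 7.0 else "low-entropy"

print(classify_cluster(b"\xff\xd8\xff\xe0" + bytes(100)))  # → jpeg-start
```

    Real identification must also handle mid-file clusters with no signature, which is precisely where the thesis's improved method goes beyond simple heuristics like this one.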

    Analysis of 3D objects at multiple scales (application to shape matching)

    Over the last decades, the evolution of acquisition techniques has led to the generalization of detailed 3D objects, represented as huge point sets composed of millions of vertices. The complexity of the data involved often requires analyzing it to extract and characterize the most pertinent structures, which are potentially defined at multiple scales. Among the wide variety of methods proposed to analyze digital signals, scale-space analysis is today a standard for the study of 2D curves and images. However, its adaptation to 3D data leads to instabilities and requires connectivity information, which is not directly available when dealing with point sets. In this thesis, we present a new multi-scale analysis framework that we call Growing Least Squares (GLS). It consists of a robust local geometric descriptor that can be evaluated on point sets at multiple scales using an efficient second-order fitting procedure. We propose to analytically differentiate this descriptor to extract continuously the pertinent structures in scale-space. We show that this representation and the associated toolbox define an efficient way to analyze 3D objects represented as point sets at multiple scales, and we demonstrate its relevance in various application scenarios. A challenging application is the analysis of acquired 3D objects coming from the Cultural Heritage field. In this thesis, we study a real-world dataset composed of the fragments of the statues that once surrounded the Lighthouse of Alexandria, one of the Seven Wonders of the World. In particular, we focus on the problem of fractured object reassembly, with few fragments (up to about ten) but with many parts missing or strongly degraded by erosion or deterioration. We propose a formalism for designing semi-automatic virtual reassembly systems, combining both the archaeologist's knowledge and the accuracy of geometric matching algorithms during the reassembly process. We use it to design two systems, and we show their efficiency in concrete cases.
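    The GLS descriptor fits second-order primitives (algebraic spheres) to point neighborhoods at growing scales. As a loose 2D analogue of that idea (a sketch only, not the actual GLS formulation), one can fit a circle by linear least squares over growing neighborhoods and track how the fitted parameters evolve with scale:

```python
import numpy as np

def fit_circle(points):
    """Kåsa least-squares circle fit: solve the linear system for
    x² + y² + D·x + E·y + F = 0, then recover center and radius."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2, -E / 2
    return (cx, cy), np.sqrt(cx**2 + cy**2 - F)

def multiscale_radius(points, center, scales):
    """Fit over growing neighborhoods of `center`; a radius that stays
    stable across scales signals a persistent circular structure."""
    radii = []
    for s in scales:
        mask = np.linalg.norm(points - center, axis=1) <= s
        radii.append(fit_circle(points[mask])[1])
    return radii

# Points sampled on a unit circle: the fitted radius is recovered.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
pts = np.column_stack([np.cos(t), np.sin(t)])
print(round(fit_circle(pts)[1], 3))  # → 1.0
```

    GLS itself differentiates such fitted parameters analytically with respect to both position and scale, which a finite-difference version of `multiscale_radius` only crudely approximates.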

    Enhancing Total Hip Replacement Complications Diagnosis: A Deep Learning Approach with Clinical Knowledge Integration

    The increased rate of Total Hip Replacement (THR) for relieving hip pain and improving quality of life has been accompanied by a rise in associated post-operative complications, which are evaluated and monitored mainly through clinical assessment of X-ray images. Current clinical practice depends on the manual identification of important regions and the analysis of different features in arthroplasty X-ray images, which can lead to subjectivity, is prone to human error, and delays diagnosis. Deep Learning (DL) based techniques have shown outstanding outcomes across various image analysis tasks. However, the success of these networks is subject to the availability of a very large, accurately annotated and well-balanced dataset, a constraint that is a main challenge for many medical image analysis tasks, including THR. This thesis focuses on automating the analysis of THR X-ray images to aid in the diagnosis and treatment planning of various THR complications. THR X-ray images, including post-operation images and post-Peri-Prosthetic Femur Fracture (PFF) images of a wide range of implants in various positions and orientations, are collected to this end. Different Convolutional Neural Network (CNN) architectures are explored for PFF classification to observe how these networks perform in the presence of class imbalance, a limited amount of data and complex image patterns, using either full X-ray images or Region of Interest (ROI) images. This demonstrates that typical CNN-based methods succeed in detecting PFF, with DenseNet achieving an F1 score of 95%, while exhibiting low performance in the classification of PFF types, achieving an F1 score of 54% with GoogleNet, ResNet and DenseNet. This lower performance is attributed to the increased complexity of the task and the imbalanced distribution of the classes. To this end, the incorporation of THR medical knowledge into DL models is investigated. The segmentation of the femoral implant component and the detection of important landmarks are formulated as simultaneous tasks within a multi-task CNN that combines segmentation maps of the implant with the regression of shape parameters derived from a Statistical Shape Model (SSM). Compared to the state of the art, this integrated approach improves the estimation of the implant shape by a 6% Dice score, making the segmentation realistic and allowing automatic detection of the important landmarks, which can help in detecting many THR complications. For PFF diagnosis, the incorporation of the clinical process of interpreting THR X-ray images into a CNN is developed. For this purpose, the process of clinical interpretation of PFF X-ray images is defined and the method is designed accordingly. Four feature extraction components are trained to construct features from distinctive regions of the X-ray image that are defined automatically. The extracted features are fused to classify the X-ray image into a specific fracture type. The developed approach improved PFF diagnosis by approximately an 8% AUC score compared to state-of-the-art methods, signifying notable clinical advancement. Finally, virtual pre-operative planning of bone fracture reduction surgery is explored, which is important to reduce surgery time and minimize potential risks. The main obstacle to the planning task is defining the matching between fragments. Therefore, a 3D puzzle-solving method is formulated by introducing a new fragment representation and feature extraction method that improves the matching between fragments. The initial evaluation of the method demonstrates promising performance for the virtual reassembly of broken objects.
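    The SSM-based regression above rests on the standard statistical-shape-model decomposition: any plausible shape is the mean shape plus a weighted sum of principal modes of variation, so a network can regress a few mode weights instead of a full segmentation. A minimal numpy sketch (illustrative toy data, not the thesis's trained model):

```python
import numpy as np

def build_ssm(shapes):
    """PCA over aligned training shapes (rows = flattened landmark
    vectors): returns the mean shape and the principal modes."""
    mean = shapes.mean(axis=0)
    _, _, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, Vt                      # rows of Vt are the modes

def reconstruct(mean, modes, b):
    """Shape instance x = mean + Σ b_i · mode_i."""
    return mean + b @ modes

rng = np.random.default_rng(1)
shapes = rng.normal(size=(20, 6))        # 20 toy shapes, 3 landmarks in 2D
mean, modes = build_ssm(shapes)
b = (shapes[0] - mean) @ modes.T         # project a shape onto the modes
print(np.allclose(reconstruct(mean, modes, b), shapes[0]))  # → True
```

    In practice only the first few modes (those with the largest singular values) are kept, which regularizes the predicted implant shape toward anatomically plausible instances.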

    Advanced Techniques for Improving the Efficacy of Digital Forensics Investigations

    Digital forensics is the science concerned with discovering, preserving, and analyzing evidence on digital devices. The intent is to determine what events have taken place, when they occurred, who performed them, and how they were performed. For an investigation to be effective, it must exhibit several characteristics. The results produced must be reliable, or else the theory of events based on them will be flawed. The investigation must be comprehensive, meaning that it must analyze all targets which may contain evidence of forensic interest. Since any investigation must be performed within the constraints of available time, storage, manpower, and computation, investigative techniques must be efficient. Finally, an investigation must provide a coherent view of the events in question using the evidence gathered. Unfortunately, the set of currently available tools and techniques used in digital forensic investigations does a poor job of supporting these characteristics. Many tools contain bugs which generate inaccurate results; there are many types of devices and data for which no analysis techniques exist; most existing tools are woefully inefficient, failing to take advantage of modern hardware; and the task of aggregating data into a coherent picture of events is largely left to the investigator to perform manually. To remedy this situation, we developed a set of techniques to facilitate more effective investigations. To improve reliability, we developed the Forensic Discovery Auditing Module, a mechanism for auditing and enforcing controls on accesses to evidence. To improve comprehensiveness, we developed ramparser, a tool for deep parsing of Linux RAM images, which provides previously inaccessible data on the live state of a machine. To improve efficiency, we developed a set of performance optimizations and applied them to the Scalpel file carver, achieving order-of-magnitude improvements in processing speed and storage requirements. Last, to facilitate more coherent investigations, we developed the Forensic Automated Coherence Engine, which generates a high-level view of a system from the data produced by low-level forensics tools. Together, these techniques significantly improve the effectiveness of digital forensic investigations conducted using them.
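    File carvers such as Scalpel recover files from raw disk images by scanning for known header and footer byte patterns. A minimal sketch of that core idea (illustrative patterns and logic, not Scalpel's actual implementation):

```python
def carve(image, header, footer, max_size=1 << 20):
    """Header/footer carving: find each header in the raw image, then
    take bytes up to the next footer, bounded by max_size."""
    carved, pos = [], 0
    while True:
        h = image.find(header, pos)
        if h == -1:
            break
        f = image.find(footer, h + len(header))
        if f != -1 and f + len(footer) - h <= max_size:
            carved.append(image[h:f + len(footer)])
        pos = h + 1
    return carved

# Two JPEG-like blobs (SOI ... EOI markers) embedded in junk bytes.
disk = (b"\x00" * 8 + b"\xff\xd8\xff" + b"AAA" + b"\xff\xd9"
        + b"\x01" * 8 + b"\xff\xd8\xff" + b"BB" + b"\xff\xd9" + b"\x00" * 4)
files = carve(disk, b"\xff\xd8\xff", b"\xff\xd9")
print(len(files))  # → 2
```

    The performance work on Scalpel targets exactly this inner loop: scanning a multi-terabyte image for many patterns at once is where multi-pattern search and parallel hardware pay off.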