
    Effective 3D Geometric Matching for Data Restoration and Its Forensic Application

    3D geometric matching is the technique of detecting similar patterns among multiple objects. It is an important and fundamental problem that facilitates many tasks in computer graphics and vision, including shape comparison and retrieval, data fusion, scene understanding and object recognition, and data restoration. For example, 3D scans of an object taken from different angles are matched and stitched together to form the complete geometry. In medical image analysis, the motion of deforming organs is modeled and predicted by matching a series of CT images. The problem is challenging and remains unsolved, especially when the similar patterns are 1) small and lack geometric saliency, or 2) incomplete due to scanning occlusion and damage to the data. We study reliable matching algorithms that can tackle these difficulties and their application to data restoration, the problem of restoring a fragmented or damaged model to its original complete state. This is a new area with direct applications in scientific fields such as forensics and archeology. In this dissertation, we study novel, effective geometric matching algorithms, including curve matching, surface matching, pairwise matching, multi-piece matching, and template matching. We demonstrate their application in an integrated digital pipeline of skull reassembly, skull completion, and facial reconstruction, developed to support the state-of-the-art forensic skull/facial reconstruction process in law enforcement.
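    The dissertation's own matching algorithms are not reproduced here, but the pairwise rigid-matching building block it alludes to (aligning and stitching scans taken from different angles) can be illustrated with a minimal Kabsch/Procrustes alignment sketch. The function name, the toy point sets, and the assumption of known correspondences are illustrative choices, not part of the work above.

        import numpy as np

        def rigid_align(P, Q):
            """Least-squares rigid alignment (Kabsch) of point set P onto Q.
            P, Q: (n, 3) arrays of corresponding 3D points.
            Returns rotation R and translation t with R @ P[i] + t ~= Q[i]."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)            # centroids
            Hm = (P - cP).T @ (Q - cQ)                         # 3x3 cross-covariance
            U, _, Vt = np.linalg.svd(Hm)
            d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = cQ - R @ cP
            return R, t

        # Toy check: recover a known rotation and translation from matched points.
        rng = np.random.default_rng(0)
        P = rng.normal(size=(50, 3))
        R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
        Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])
        R, t = rigid_align(P, Q)
        print(np.allclose(P @ R.T + t, Q, atol=1e-8))          # expected: True

    In practice the hard part is finding the correspondences in the first place, which is exactly where small, non-salient, or incomplete patterns make matching difficult.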

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics, and machine learning. This chapter reviews recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. Popular examples of such priors include sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of $\ell^2$-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; and (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
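    As a concrete illustration of point (iii), here is a minimal forward-backward (proximal gradient) sketch for the $\ell^1$ sparsity prior, i.e. min_x 0.5*||Ax - y||^2 + lam*||x||_1. The operator A, the toy data, and the parameter values are assumptions made for the example, not taken from the chapter.

        import numpy as np

        def forward_backward_l1(A, y, lam, n_iter=500):
            """Forward-backward splitting (ISTA) for 0.5*||Ax - y||^2 + lam*||x||_1.
            Forward step: explicit gradient step on the smooth fidelity term.
            Backward step: proximal map of the l1 prior, i.e. soft-thresholding."""
            x = np.zeros(A.shape[1])
            step = 1.0 / np.linalg.norm(A, 2) ** 2              # 1/L, L = Lipschitz constant of the gradient
            for _ in range(n_iter):
                z = x - step * (A.T @ (A @ x - y))              # forward (gradient) step
                x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward (prox) step
            return x

        # Toy sparse-recovery example: a random sensing matrix and a 3-sparse signal.
        rng = np.random.default_rng(1)
        A = rng.normal(size=(80, 200)) / np.sqrt(80)
        x_true = np.zeros(200)
        x_true[[3, 50, 120]] = [2.0, -1.5, 1.0]
        y = A @ x_true + 0.01 * rng.normal(size=80)
        x_hat = forward_backward_l1(A, y, lam=0.05)
        print(np.flatnonzero(np.abs(x_hat) > 0.1))              # ideally the support {3, 50, 120}

    Swapping the soft-thresholding line for the proximal operator of another partly smooth regularizer (group sparsity, total variation, nuclear norm) gives the corresponding scheme for the other priors discussed in the chapter.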

    Exact Analytic Continuation with Respect to the Replica Number in the Discrete Random Energy Model of Finite System Size

    An expression for the moment of the partition function, valid for any finite system size $N$ and complex power $n$ ($\Re(n) > 0$), is obtained for a simple spin glass model termed the discrete random energy model (DREM). We investigate the behavior of the moment in the thermodynamic limit $N \to \infty$ using this expression, and find that a phase transition occurs at a certain real replica number when the temperature is sufficiently low, directly clarifying the scenario of replica symmetry breaking of the DREM in replica number space without using the replica trick. The validity of the expression is numerically confirmed.
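    For orientation, the objects named above can be written in standard random-energy-model notation; this is a hedged sketch, not the paper's exact expressions.

        % Partition function and disorder-averaged replica moment of a REM-type model.
        \documentclass{article}
        \usepackage{amsmath,amssymb}
        \begin{document}
        With $2^N$ configurations and quenched random energies $E_i$ drawn from a
        discrete distribution, the partition function is
        \begin{equation}
          Z = \sum_{i=1}^{2^N} e^{-\beta E_i},
        \end{equation}
        and the quantity continued in the replica number is the disorder-averaged moment
        \begin{equation}
          [Z^n], \qquad n \in \mathbb{C}, \ \operatorname{Re}(n) > 0,
        \end{equation}
        where $[\,\cdot\,]$ denotes the average over the random energies. The replica
        trick would instead evaluate $[\log Z] = \lim_{n \to 0} ([Z^n] - 1)/n$, which
        the work above avoids by handling $[Z^n]$ exactly at finite $N$.
        \end{document}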

    Detecting adversarial manipulation using inductive Venn-ABERS predictors

    Inductive Venn-ABERS predictors (IVAPs) are a type of probabilistic predictor with the theoretical guarantee that their predictions are perfectly calibrated. In this paper, we propose to exploit this calibration property for the detection of adversarial examples in binary classification tasks. By rejecting predictions when the uncertainty of the IVAP is too high, we obtain an algorithm that is both accurate on the original test set and resistant to adversarial examples. This robustness is observed against adversarial examples crafted for the underlying model as well as adversarial examples generated with the IVAP taken into account. The method appears to offer competitive robustness compared to the state of the art in adversarial defense, yet is computationally much more tractable.
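    The reject-if-uncertain rule described above admits a very small sketch. An IVAP outputs a pair of calibrated probabilities (p0, p1) for the positive class, and the interval width serves as the uncertainty. The threshold value and the use of the standard p1/(1 - p0 + p1) merging formula are illustrative assumptions, not the paper's exact configuration.

        def ivap_reject(p0: float, p1: float, width_threshold: float = 0.2):
            """Sketch of a Venn-ABERS based rejection rule for adversarial detection.
            Returns None (reject) when the calibrated interval is too wide, otherwise
            a single merged probability for the positive class."""
            if p1 - p0 > width_threshold:
                return None                          # too uncertain: flag as possibly adversarial
            return p1 / (1.0 - p0 + p1)              # standard merge of the Venn-ABERS pair

        print(ivap_reject(0.55, 0.60))               # narrow interval -> accepted, ~0.571
        print(ivap_reject(0.20, 0.75))               # wide interval   -> rejected, None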

    Operational dilemmas in safety-critical industries: the tension between organizational reputational concerns and the effective communication of risk

    Organizations involved in safety-critical operations often deal with operational tensions, especially when involved in safety-critical incidents that are likely to compromise safety. In this paper, we set out to understand how disclosures of safety-critical incidents take place in the face of reputational tension. Based on the case of the Nigerian National Petroleum Corporation (NNPC), we draw on image repair theory (IRT) and information manipulation theory (IMT) and adopt discourse analysis as a method for analysing safety-critical incident press releases and reports from the NNPC. We found the NNPC deploying image repair as part of its incident disclosures to deflect attention, evade blame, and avoid issuing apologies, supported by violations of the conversational maxims. The paper provides a theoretical model for discursively assessing an organization's practices of incident information disclosure in the face of reputational tension, and further assesses the risk communication implications of such practices.

    Knowledge-based support in Non-Destructive Testing for health monitoring of aircraft structures

    Maintenance manuals include general methods and procedures for industrial maintenance and contain information about the principles of maintenance methods. In particular, Non-Destructive Testing (NDT) methods are important for the detection of aeronautical defects and can be used for various kinds of material and in different environments. Conventional non-destructive evaluation inspections are done at periodic maintenance checks. Usually, the list of tools used in a maintenance program is simply located in the introduction of the manuals, without any detail regarding their characteristics beyond a short description of the manufacturer and the tasks in which they are employed. Better identification of maintenance tools is needed to manage the set of equipment and establish a system of equivalence: the maintenance conceptualization must be consistent and flexible enough to fit all current equipment, as well as equipment likely to be added or used in the future. Our contribution is the formal specification of a system of functional equivalences that can support maintenance activities by determining whether one tool can be substituted for another, based on the key parameters among their identified characteristics. The reasoning mechanisms of conceptual graphs constitute the baseline for measuring the fit or misfit between an equipment model and a maintenance activity model. Graph operations are used to process answers to a query, and this graph-based search method is in line with the logical view of information retrieval. The methodology described supports knowledge formalization and the capitalization of experienced NDT practitioners' know-how. As a result, it enables the selection of an NDT technique and outlines its capabilities along with acceptable alternatives.
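    Stripped of the conceptual-graph machinery, the functional-equivalence idea amounts to checking whether a candidate tool's key parameters satisfy a maintenance activity's requirements. The sketch below is a deliberately simplified, hypothetical illustration; the parameter names, values, and comparison operators are invented, and real conceptual-graph projection is considerably richer.

        def satisfies(tool: dict, requirements: dict) -> bool:
            """Return True if every required parameter of the activity is met by the tool."""
            for param, (op, value) in requirements.items():
                if param not in tool:
                    return False
                if op == ">=" and not tool[param] >= value:
                    return False
                if op == "==" and tool[param] != value:
                    return False
            return True

        # Hypothetical NDT activity and two candidate probes.
        activity = {"frequency_MHz": (">=", 5.0), "probe_type": ("==", "ultrasonic")}
        tool_a = {"frequency_MHz": 10.0, "probe_type": "ultrasonic"}
        tool_b = {"frequency_MHz": 2.0, "probe_type": "ultrasonic"}
        print(satisfies(tool_a, activity), satisfies(tool_b, activity))   # True False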

    Adaptive Image Restoration: Perception Based Neural Network Models and Algorithms.

    This thesis describes research in the field of image restoration. Restoration is a process by which an image suffering some form of distortion or degradation can be recovered to its original form. Two primary concepts within this field have been investigated. The first is the use of a Hopfield neural network to implement the constrained least squares error method of image restoration. The author reviews previous neural network restoration algorithms in the literature and builds on them to develop a new, faster version of the Hopfield neural network algorithm for image restoration. The versatility of the neural network approach is then extended to deal with spatially variant distortion and adaptive regularisation. It is found that, using the Hopfield-based neural network approach, an image suffering spatially variant degradation can be accurately restored without a substantial penalty in restoration time. In addition, the adaptive regularisation technique presented in this thesis is shown to produce superior results compared to non-adaptive techniques and is particularly effective when applied to the difficult yet important problem of semi-blind deconvolution. The second concept investigated is the difficult problem of incorporating human visual perception into image restoration techniques. The author develops a novel image error measure that compares two images based on differences between local regional statistics rather than pixel-level differences; this measure corresponds more closely to the way humans perceive differences between two images. Two restoration algorithms are developed based on versions of this error measure. Algorithms using the measure are shown to have improved performance and to produce visually more pleasing results for colour and grayscale images under high noise conditions. Most importantly, the perception-based algorithms are shown to be extremely tolerant of faults in the restoration algorithm and hence very robust. A number of experiments demonstrate the performance of the various algorithms presented.
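    The constrained least squares objective that the Hopfield formulation minimises can be written directly as a quadratic energy. The sketch below uses plain gradient descent rather than the network's neuron-update rule, and the 1-D blur, Laplacian regulariser, and parameter values are illustrative assumptions, not the thesis's setup.

        import numpy as np

        def cls_restore(H, g, D, lam, n_iter=2000):
            """Minimise 0.5*||g - H f||^2 + 0.5*lam*||D f||^2 by gradient descent.
            H: degradation (blur) matrix, g: degraded image, D: high-pass operator."""
            A = H.T @ H + lam * (D.T @ D)            # Hessian of the quadratic energy
            b = H.T @ g
            step = 1.0 / np.linalg.norm(A, 2)        # stable step from the largest eigenvalue
            f = np.zeros(H.shape[1])
            for _ in range(n_iter):
                f -= step * (A @ f - b)              # gradient of the energy is A f - b
            return f

        # Tiny 1-D toy: blur a box signal with a 3-tap average, add noise, restore.
        n = 64
        H = sum(np.eye(n, k=k) for k in (-1, 0, 1)) / 3.0        # moving-average blur
        D = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # discrete Laplacian
        f_true = np.zeros(n); f_true[20:40] = 1.0
        g = H @ f_true + 0.01 * np.random.default_rng(2).normal(size=n)
        f_hat = cls_restore(H, g, D, lam=0.05)
        print(np.mean((f_hat - f_true) ** 2) < np.mean((g - f_true) ** 2))  # ideally True

    Spatially variant distortion corresponds to letting the rows of H vary across the image, adaptive regularisation to letting lam vary locally, and the perception-based error measure compares local regional statistics rather than individual pixels.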

    Bayesian multi-modal model comparison: a case study on the generators of the spike and the wave in generalized spike–wave complexes

    We present a novel approach for assessing the networks involved in the generation of spontaneous pathological brain activity based on multi-modal imaging data. We propose to use probabilistic fMRI-constrained EEG source reconstruction as a complement to EEG-correlated fMRI analysis, to disambiguate between networks that co-occur at the fMRI time resolution. The method is based on Bayesian model comparison, where the different models correspond to different combinations of fMRI-activated (or deactivated) cortical clusters. By computing the model evidence (or marginal likelihood) of each and every candidate source-space partition, we can infer the most probable set of fMRI regions that generated a given EEG scalp data window. We illustrate the method using EEG-correlated fMRI data acquired in a patient with ictal generalized spike–wave (GSW) discharges, to examine whether different networks are involved in the generation of the spike and the wave components, respectively. To this end, we compared a family of 128 EEG source models, based on combinations of seven regions haemodynamically involved (deactivated) during a prolonged ictal GSW discharge, namely: bilateral precuneus, bilateral medial frontal gyrus, bilateral middle temporal gyrus, and right cuneus. Bayesian model comparison revealed that the most likely model associated with the spike component consists of a prefrontal region and bilateral temporal–parietal regions, while the most likely model associated with the wave component comprises the same temporal–parietal regions only. The result supports the hypothesis of different neurophysiological mechanisms underlying the generation of the spike versus wave components of GSW discharges.
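    The size of the model space follows directly from the seven regions: every subset of the regions defines one candidate source space, giving 2^7 = 128 models to score by their log evidence. The sketch below only illustrates that enumeration; log_evidence() is a hypothetical stand-in for the actual EEG source-reconstruction evidence, which is not reproduced here.

        from itertools import combinations

        regions = ["L precuneus", "R precuneus", "L medial frontal", "R medial frontal",
                   "L middle temporal", "R middle temporal", "R cuneus"]

        def log_evidence(subset, eeg_window):
            """Placeholder: log marginal likelihood of the EEG window under a source
            model restricted to `subset`. Purely illustrative scoring."""
            return -abs(len(subset) - 3)             # dummy score; favours 3-region models

        def best_model(eeg_window):
            candidates = [c for r in range(len(regions) + 1)
                          for c in combinations(regions, r)]     # all 2**7 = 128 subsets
            return max(candidates, key=lambda s: log_evidence(s, eeg_window))

        print(sum(1 for r in range(8) for _ in combinations(regions, r)))   # 128
        print(best_model(eeg_window=None))           # the dummy scoring picks some 3-region subset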