
    Structural matching by discrete relaxation

    This paper describes a Bayesian framework for performing relational graph matching by discrete relaxation. Our basic aim is to draw on this framework to provide a comparative evaluation of a number of contrasting approaches to relational matching. Broadly speaking, there are two main aspects to this study. Firstly, we focus on the issue of how relational inexactness may be quantified. We illustrate that several popular relational distance measures can be recovered as specific limiting cases of the Bayesian consistency measure. The second aspect of our comparison concerns the way in which structural inexactness is controlled. We investigate three different realizations of the matching process which draw on contrasting control models. The main conclusion of our study is that the active process of graph-editing outperforms the alternatives in terms of its ability to effectively control a large population of contaminating clutter.
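
    A minimal, illustrative sketch of discrete relaxation for graph matching follows; it is not the paper's Bayesian formulation. The consistency score simply counts preserved adjacency relations, the example graphs and the hill-climbing update rule are assumptions made for illustration, and the match is not forced to be one-to-one.

```python
# Minimal sketch of discrete relaxation for relational graph matching.
# Illustrative toy, not the paper's Bayesian formulation: the consistency
# score below just counts preserved adjacency relations, and the match is
# not forced to be one-to-one.

def consistency(match, edges_a, edges_b):
    """Fraction of data-graph edges mapped onto model-graph edges."""
    edge_set = set(edges_b) | {(j, i) for (i, j) in edges_b}
    preserved = sum((match[i], match[j]) in edge_set for (i, j) in edges_a)
    return preserved / max(len(edges_a), 1)

def discrete_relaxation(nodes_a, nodes_b, edges_a, edges_b, iters=10):
    # Start from an arbitrary assignment: every data node maps to the first model node.
    match = {i: nodes_b[0] for i in nodes_a}
    for _ in range(iters):
        changed = False
        for i in nodes_a:                        # visit each data node in turn and
            best = max(nodes_b, key=lambda b:    # relabel it to the model node that
                       consistency({**match, i: b}, edges_a, edges_b))
            if best != match[i]:
                match[i], changed = best, True
        if not changed:                          # no single relabelling improves consistency
            break
    return match

# Example: match a 3-node path graph onto a 4-node path graph.
print(discrete_relaxation([0, 1, 2], ['a', 'b', 'c', 'd'],
                          [(0, 1), (1, 2)], [('a', 'b'), ('b', 'c'), ('c', 'd')]))
```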

    Recent Progress in Image Deblurring

    This paper comprehensively reviews recent developments in image deblurring, including non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the same objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally estimate an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must provide high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how the ill-posedness, a crucial issue in deblurring tasks, is handled, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modelling, and region-based methods. Despite a certain level of progress, image deblurring, especially the blind case, remains limited by complex application conditions that make the blur kernel hard to obtain and spatially variant. We provide a holistic understanding of and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods, practical issues, and promising future directions is also presented. Comment: 53 pages, 17 figures.
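
    As a hedged illustration of the non-blind, spatially invariant case and of why ill-posedness matters, the sketch below performs Wiener-style regularized inversion in the Fourier domain; the box kernel, synthetic image, and regularization constant K are toy assumptions, not drawn from any specific method surveyed in the review.

```python
# Minimal sketch of non-blind, spatially invariant deblurring via Wiener-style
# regularized inversion in the Fourier domain. Without the regularization term
# K, the inverse filter amplifies noise at frequencies where the blur kernel
# response is small -- the ill-posedness discussed in the review.
import numpy as np

def wiener_deblur(blurred, kernel, K=1e-2):
    """Deconvolve `blurred` with a known `kernel` (zero-padded to image size)."""
    H = np.fft.fft2(kernel, s=blurred.shape)       # frequency response of the blur
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + K) approximates 1/H but damps noise.
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G
    return np.real(np.fft.ifft2(F_hat))

# Toy example: blur a synthetic image with a 5x5 box kernel, then invert.
rng = np.random.default_rng(0)
sharp = np.zeros((64, 64)); sharp[24:40, 24:40] = 1.0
kernel = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel, s=sharp.shape)))
blurred += 0.01 * rng.standard_normal(sharp.shape)  # mild sensor noise
restored = wiener_deblur(blurred, kernel)
print(f"RMSE blurred:  {np.sqrt(np.mean((blurred - sharp) ** 2)):.4f}")
print(f"RMSE restored: {np.sqrt(np.mean((restored - sharp) ** 2)):.4f}")
```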

    Advances in computational modelling for personalised medicine after myocardial infarction

    Myocardial infarction (MI) is a leading cause of premature morbidity and mortality worldwide. Determining which patients will experience heart failure and sudden cardiac death after an acute MI is notoriously difficult for clinicians. The extent of heart damage after an acute MI is informed by cardiac imaging, typically echocardiography or, sometimes, cardiac magnetic resonance (CMR). These scans provide complex data sets that are only partially exploited by clinicians in daily practice, implying potential for improved risk assessment. Computational modelling of left ventricular (LV) function can bridge the gap towards personalised medicine using cardiac imaging in post-MI patients. Several novel biomechanical parameters have theoretical prognostic value and may be useful for reflecting the biomechanical effects of novel preventive therapy for adverse remodelling post-MI. These parameters include myocardial contractility (regional and global), stiffness and stress. Further, the parameters can be delineated spatially to correspond with infarct pathology and the remote zone. While these parameters hold promise, there are challenges for translating MI modelling into clinical practice, including model uncertainty, validation and verification, as well as time-efficient processing. More research is needed to (1) simplify CMR imaging in post-MI patients while preserving diagnostic accuracy and patient tolerance, and (2) assess and validate novel biomechanical parameters against established prognostic biomarkers, such as LV ejection fraction and infarct size. Accessible software packages with minimal user interaction are also needed. Translating benefits to patients will be achieved through a multidisciplinary approach including clinicians, mathematicians, statisticians and industry partners.

    Two Kinds of 'Christian Philosophy'

    It is controversial whether 'Christian Philosophy' is a useful or even consistent notion. After providing some historical background to the problem, I will distinguish and explicate two possible understandings of 'Christian Philosophy' which should be kept apart: a 'Thomistic' and an 'Augustinian' one, of which the latter has garnered more attention in the recent literature. A sketch of the most prominent current 'Augustinian' position leads to some considerations for why a 'Thomistic' understanding of 'Christian Philosophy' has more to recommend it, if the term is regarded as useful at all.

    Using the Sharp Operator for edge detection and nonlinear diffusion

    In this paper we investigate the use of the sharp function known from functional analysis in image processing. The sharp function gives a measure of the variations of a function and can be used as an edge detector. We extend the classical notion of the sharp function for measuring anisotropic behaviour and give a fast anisotropic edge detection variant inspired by the sharp function. We show that these edge detection results are useful to steer isotropic and anisotropic nonlinear diffusion filters for image enhancement
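
    The sketch below is a toy, isotropic edge detector in the spirit of the sharp function: for each pixel it takes, over a few window radii, the windowed mean absolute deviation from the local mean. The window radii and the use of scipy's uniform_filter are assumptions made for illustration; the paper's anisotropic extension is not reproduced here.

```python
# Toy isotropic edge detector inspired by the sharp (maximal) function:
# the response at a pixel is the largest, over several window sizes, of the
# locally averaged deviation of the image from its local mean.
import numpy as np
from scipy.ndimage import uniform_filter

def sharp_edge_map(img, radii=(1, 2, 4)):
    img = img.astype(float)
    response = np.zeros_like(img)
    for r in radii:
        size = 2 * r + 1
        local_mean = uniform_filter(img, size=size)
        # Mean of |f - local mean| over the same window approximates the
        # averaged oscillation measured by the sharp function.
        deviation = uniform_filter(np.abs(img - local_mean), size=size)
        response = np.maximum(response, deviation)
    return response

# Toy usage: a step edge produces a high response along the discontinuity.
img = np.zeros((32, 32)); img[:, 16:] = 1.0
edges = sharp_edge_map(img)
print(edges[16, 14:19].round(3))   # response peaks around column 16
```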

    Bayesian Inference with Combined Dynamic and Sparsity Models: Application in 3D Electrophysiological Imaging

    Data-driven inference is widely encountered in various scientific domains as a means of converting observed measurements into information about a system that cannot be directly observed. Despite rapidly developing sensor and imaging technologies, in many domains data collection remains an expensive endeavor due to financial and physical constraints. To overcome the limits on data and to reduce the demand for expensive data collection, it is important to incorporate prior information in order to place the data-driven inference in a domain-relevant context and to improve its accuracy. Two sources of assumptions have been used successfully in many inverse problem applications. One is the temporal dynamics of the system (dynamic structure). The other is the low-dimensional structure of a system (sparsity structure). In existing work, these two structures have often been explored separately, while in most high-dimensional dynamic systems they co-exist and contain complementary information. In this work, our main focus is to build a robust inference framework that combines dynamic and sparsity constraints. The driving application is a biomedical inverse problem of electrophysiological (EP) imaging, which noninvasively and quantitatively reconstructs transmural action potentials from body-surface voltage data with the goal of improving cardiac disease prevention, diagnosis, and treatment. The general framework can be extended to a variety of applications that deal with the inference of high-dimensional dynamic systems.
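
    A minimal sketch of combining a dynamic prior with a sparsity prior is given below, loosely in the spirit of the framework described above. The quadratic dynamics penalty, the l1 sparsity term, the ISTA solver, and all dimensions and weights are illustrative assumptions rather than the actual EP imaging model.

```python
# Minimal sketch of inference under combined dynamic and sparsity priors.
# At each time step we solve (illustrative formulation, not the EP model):
#   min_x 0.5*||H x - y_t||^2 + 0.5*mu*||x - A x_prev||^2 + lam*||x||_1
# with a few iterations of proximal gradient descent (ISTA).
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dynamic_sparse_step(y, H, x_prev, A, mu=1.0, lam=0.1, iters=200):
    x = x_prev.copy()
    L = np.linalg.norm(H, 2) ** 2 + mu             # Lipschitz constant of the smooth part
    for _ in range(iters):
        grad = H.T @ (H @ x - y) + mu * (x - A @ x_prev)
        x = soft_threshold(x - grad / L, lam / L)  # gradient step + l1 proximal step
    return x

# Toy example: a sparse state evolving under damped linear dynamics, observed through H.
rng = np.random.default_rng(1)
n, m, T = 50, 20, 5
H = rng.standard_normal((m, n)) / np.sqrt(m)
A = 0.95 * np.eye(n)                               # simple linear dynamics
x_true = np.zeros(n); x_true[[3, 17, 40]] = [2.0, -1.5, 1.0]
x_est = np.zeros(n)
for t in range(T):
    x_true = A @ x_true
    y = H @ x_true + 0.01 * rng.standard_normal(m)
    x_est = dynamic_sparse_step(y, H, x_est, A)
print("support recovered:", np.flatnonzero(np.abs(x_est) > 0.1))
```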