
    Discontinuity preserving image registration for breathing induced sliding organ motion

    Image registration is a powerful tool in medical image analysis and facilitates the clinical routine in several respects. It has become an indispensable device for many medical applications, including image-guided therapy systems. The basic goal of image registration is to spatially align two images that show a similar region of interest. More specifically, a displacement field, or more generally a transformation, is estimated that relates the positions of the pixels or feature points in one image to the corresponding positions in the other. The resulting alignment of the images assists the doctor in comparing and diagnosing them. There exist different kinds of image registration methods: those capable of estimating a rigid transformation, or more generally an affine transformation, between the images, and those able to capture a more complex motion by estimating a non-rigid transformation. There are many well established non-rigid registration methods, but those able to preserve discontinuities in the displacement field are rather rare. These discontinuities appear in particular at organ boundaries during breathing induced organ motion. In this thesis, we make use of the idea of combining motion segmentation with registration to tackle the problem of preserving the discontinuities in the resulting displacement field. We introduce a binary function to represent the motion segmentation, and the proposed discontinuity preserving non-rigid registration method is then formulated in a variational framework. Thus, an energy functional is defined whose minimisation with respect to the displacement field and the motion segmentation leads to the desired result. In theory, one can prove that a global minimiser of the energy functional with respect to the motion segmentation can be found if the displacement field is given. The overall minimisation problem, however, is non-convex, and a suitable optimisation strategy has to be considered. Furthermore, depending on whether we use the pure L1-norm or an approximation of it in the formulation of the energy functional, we use different numerical methods to solve the minimisation problem. More specifically, when using an approximation of the L1-norm, the minimisation of the energy functional with respect to the displacement field is performed with the fixed point iteration scheme of Brox et al., and the minimisation with respect to the motion segmentation with the dual algorithm of Chambolle. On the other hand, when we make use of the pure L1-norm in the energy functional, the primal-dual algorithm of Chambolle and Pock is used for both the minimisation with respect to the displacement field and the minimisation with respect to the motion segmentation. This approach is clearly faster than the one using the approximation of the L1-norm and also theoretically more appealing. Finally, to support the registration method during the minimisation process, in a later approach we additionally incorporate the positions of certain landmarks into the formulation of the energy functional that uses the pure L1-norm. As before, the primal-dual algorithm of Chambolle and Pock is then used for both the minimisation with respect to the displacement field and the minimisation with respect to the motion segmentation. All the proposed non-rigid discontinuity preserving registration methods delivered promising results in experiments with synthetic images and real MR images of breathing induced liver motion.
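    For concreteness, a joint energy of the kind described above can be written schematically as follows; the notation (reference image R, template T, displacement field u, binary motion segmentation χ, weights α and β) is assumed for illustration and is not taken verbatim from the thesis.

```latex
% Schematic joint registration/segmentation energy (assumed notation, not the
% thesis's exact formulation): an L1 data term, a total-variation-like
% regulariser on u that is switched off across the motion boundary encoded by
% the binary function chi, and a length penalty on that boundary.
\begin{equation*}
  E(u,\chi) \;=\; \int_{\Omega} \bigl| T\bigl(x + u(x)\bigr) - R(x) \bigr| \,\mathrm{d}x
  \;+\; \alpha \int_{\Omega} \chi(x)\,\lvert \nabla u(x) \rvert \,\mathrm{d}x
  \;+\; \beta \int_{\Omega} \lvert \nabla \chi(x) \rvert \,\mathrm{d}x ,
  \qquad \chi(x) \in \{0,1\}.
\end{equation*}
```

    With the displacement field fixed and χ relaxed to take values in [0, 1], the remaining problem in χ is convex, which is in line with the global-minimiser statement above; the joint problem in (u, χ) stays non-convex.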

    Trends in Mathematical Imaging and Surface Processing

    Motivated both by industrial applications and by the challenge of new problems, an increasing interest in the field of image and surface processing has been observed in recent years. It has become clear that, even though the application areas differ significantly, the methodological overlap is enormous. While contributions to the field come from almost every discipline in mathematics, a major role is played by partial differential equations, and in particular by geometric and variational modeling and by their numerical counterparts. The aim of the workshop was to gather a group of leading experts from mathematics, engineering and computer graphics to cover the main developments.

    Dense Vision in Image-guided Surgery

    Image-guided surgery needs an efficient and effective camera tracking system in order to perform augmented reality for overlaying preoperative models or labelling cancerous tissues on the 2D video images of the surgical scene. Tracking in endoscopic/laparoscopic scenes, however, is an extremely difficult task, primarily due to tissue deformation, instrument intrusion into the surgical scene and the presence of specular highlights. State-of-the-art feature-based SLAM systems such as PTAM fail to track such scenes, since the number of good features to track is very limited; smoke and instrument motion cause feature-based tracking to fail almost immediately. The work of this thesis provides a systematic approach to this problem using dense vision. We initially attempted to register a 3D preoperative model with multiple 2D endoscopic/laparoscopic images using a dense method, but this approach did not perform well. We subsequently proposed stereo reconstruction to directly obtain the 3D structure of the scene. By using the dense reconstructed model together with robust estimation, we demonstrate that dense stereo tracking can be remarkably robust even in extremely challenging endoscopic/laparoscopic scenes. Several validation experiments have been conducted in this thesis. The proposed stereo reconstruction algorithm achieves state-of-the-art results on several publicly available ground-truth datasets. Furthermore, the proposed robust dense stereo tracking algorithm has proved highly accurate in a synthetic environment (< 0.1 mm RMSE) and qualitatively extremely robust when applied to real scenes from RALP prostatectomy surgery. This is an important step toward achieving accurate image-guided laparoscopic surgery.
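    As a toy illustration of the robust-estimation ingredient only (not the thesis's tracking pipeline, which estimates a full 6-DoF camera pose against a dense stereo model), the sketch below runs Huber-weighted Gauss-Newton on dense photometric residuals, reduced to recovering a 2D image translation; the synthetic images, the outlier patch standing in for a specular highlight, and all parameter values are made up for the example.

```python
# Minimal sketch: robust (Huber-weighted) Gauss-Newton on dense photometric
# residuals. The "pose" is reduced here to a 2D image translation.
import numpy as np
from scipy.ndimage import shift as nd_shift

def huber_weights(r, delta=0.5):
    """IRLS weights for the Huber loss: 1 in the quadratic zone, delta/|r| outside."""
    a = np.abs(r)
    w = np.ones_like(a)
    w[a > delta] = delta / a[a > delta]
    return w

def track_translation(ref, cur, iters=15):
    """Estimate t such that cur(x + t) ~ ref(x) by robust Gauss-Newton."""
    gy, gx = np.gradient(ref)                       # intensity gradients (y, x)
    J = np.stack([gx.ravel(), gy.ravel()], axis=1)  # Jacobian of residual w.r.t. t
    t = np.zeros(2)                                 # t = (tx, ty)
    for _ in range(iters):
        warped = nd_shift(cur, shift=(-t[1], -t[0]), order=1, mode="wrap")
        r = (warped - ref).ravel()                  # dense photometric residuals
        w = huber_weights(r)                        # down-weight outliers (e.g. highlights)
        A = J.T @ (w[:, None] * J)
        b = J.T @ (w * r)
        t -= np.linalg.solve(A, b)                  # Gauss-Newton step
    return t

if __name__ == "__main__":
    yy, xx = np.mgrid[0:64, 0:64]
    ref = np.sin(2 * np.pi * xx / 16.0) + np.cos(2 * np.pi * yy / 20.0)
    cur = np.roll(ref, shift=2, axis=1)             # scene shifted by 2 px along x
    cur[10:20, 10:20] += 5.0                        # synthetic specular-highlight outliers
    print(track_translation(ref, cur))              # expect roughly (2, 0)
```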

    Optical flow estimation using steered-L1 norm

    Motion is a very important part of understanding the visual picture of the surrounding environment. In image processing it involves the estimation of displacements for image points in an image sequence. In this context, dense optical flow estimation is concerned with the computation of pixel displacements in a sequence of images, and it has therefore been used widely in the fields of image processing and computer vision. Much research has been dedicated to enabling accurate and fast motion computation in image sequences. Despite recent advances in the computation of optical flow, there is still room for improvement, and optical flow algorithms still suffer from several issues, such as motion discontinuities, occlusion handling, and robustness to illumination changes. This thesis presents an investigation of the topic of optical flow and its applications. It addresses several issues in the computation of dense optical flow and proposes solutions. Specifically, the thesis is divided into two main parts dedicated to two main areas of interest in optical flow. In the first part, image registration using optical flow is investigated, considering both local and global approaches. An image registration method based on an improved version of the combined local-global method of optical flow computation is proposed. A bilateral filter is used in this optical flow method to improve its edge-preserving performance. It is shown that image registration via this method gives more robust results than the local and global optical flow methods previously investigated. The second part of this thesis encompasses the main contribution of this research, an improved total variation L1 norm. A smoothness term is used in the optical flow energy function to regularise it. The L1 norm is a plausible choice for such a term because of its edge-preserving performance; however, it is isotropic and hence decreases the penalisation near motion boundaries in all directions. The proposed improved L1 smoothness term (termed here the steered-L1 norm) demonstrates similar performance across motion boundaries but improves the penalisation along such boundaries.
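    Schematically, and with notation assumed for illustration rather than taken from the thesis, the TV-L1 optical flow energy and the kind of steered smoothness term described above can be written as:

```latex
% Classical TV-L1 optical flow energy for a flow field u = (u_1, u_2):
% an L1 data term on the brightness constancy residual plus an isotropic
% L1 (total variation) smoothness term.
\begin{equation*}
  E(u) \;=\; \int_{\Omega} \bigl| I_1\bigl(x + u(x)\bigr) - I_0(x) \bigr| \,\mathrm{d}x
  \;+\; \lambda \sum_{k=1}^{2} \int_{\Omega} \lvert \nabla u_k(x) \rvert \,\mathrm{d}x .
\end{equation*}
% A "steered" variant (schematic): penalise the flow gradient in a local frame
% (n, n_perp) aligned with image/motion structure, with different weights
% across (a) and along (b) the boundary direction, so that smoothing is relaxed
% across boundaries but retained along them.
\begin{equation*}
  R_{\mathrm{steered}}(u) \;=\; \sum_{k=1}^{2} \int_{\Omega}
  \Bigl( a(x)\,\bigl| n(x)^{\top} \nabla u_k(x) \bigr|
       + b(x)\,\bigl| n^{\perp}(x)^{\top} \nabla u_k(x) \bigr| \Bigr)\, \mathrm{d}x .
\end{equation*}
```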

    Non-rigid medical image registration with extended free form deformations: modelling general tissue transitions

    Image registration seeks pointwise correspondences between the same or analogous objects in different images. Conventional registration methods generally impose continuity and smoothness throughout the image. However, there are cases in which the deformations may involve discontinuities. In general, the discontinuities can be of different types, depending on the physical properties of the tissue transitions involved and on the boundary conditions. For instance, during respiratory motion the lungs slide along the thoracic cage following the tangential direction of their interface. In the normal direction, however, the lungs and the thoracic cage are constrained to remain in contact, yet they have different material properties, producing different compression or expansion rates. In the literature there is no generic method that handles different types of discontinuities and considers their directional dependence. The aim of this thesis is to develop a general registration framework that is able to correctly model different types of tissue transitions with a general formalism. This has led to the development of the eXtended Free Form Deformation (XFFD) registration method. XFFD borrows the interpolation concept of the eXtended Finite Element Method (XFEM) to incorporate discontinuities by enriching the B-spline basis functions, coupled with extra degrees of freedom. XFFD can handle different types of discontinuities and encodes their directional dependence without any additional constraints. XFFD has been evaluated on digital phantoms and on publicly available 3D liver and lung CT images. The experiments show that XFFD improves on previous methods and that it is important to employ the correct model for the discontinuity type involved at the tissue transition. The effect of using an incorrect model is most evident in the strain, which measures the mechanical properties of the tissues.
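    For concreteness, a standard B-spline free-form deformation and one common XFEM-style enrichment are sketched below; the notation (control points c_i, enrichment coefficients a_j, a Heaviside function H of a signed distance φ to the tissue interface) is assumed for illustration, and the thesis's exact enrichment functions may differ.

```latex
% Standard free-form deformation: displacement as a B-spline expansion over
% control points c_i with tensor-product basis functions B_i.
\begin{equation*}
  u_{\mathrm{FFD}}(x) \;=\; \sum_{i} B_i(x)\, c_i .
\end{equation*}
% XFEM-style enrichment (schematic): extra degrees of freedom a_j multiplied by
% an enrichment function, e.g. a Heaviside H of a signed distance phi to the
% tissue interface, so the displacement may jump across the interface while the
% representation remains a B-spline expansion away from it.
\begin{equation*}
  u_{\mathrm{XFFD}}(x) \;=\; \sum_{i} B_i(x)\, c_i
  \;+\; \sum_{j \in \mathcal{J}} B_j(x)\, H\bigl(\varphi(x)\bigr)\, a_j .
\end{equation*}
```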

    Fast catheter segmentation and tracking based on x-ray fluoroscopic and echocardiographic modalities for catheter-based cardiac minimally invasive interventions

    X-ray fluoroscopy and echocardiography (ultrasound, US) are two imaging modalities that are widely used in cardiac catheterization. For these modalities, a fast, accurate and stable algorithm for the detection and tracking of catheters is required to allow clinicians to observe the catheter location in real time. Currently, X-ray fluoroscopy is routinely used as the standard modality in catheter ablation interventions. However, it cannot visualize soft tissue and uses harmful radiation. US does not have these limitations, but it often contains acoustic artifacts and has a small field of view, which makes the detection and tracking of the catheter in US very challenging. The first contribution of this thesis is a framework that combines a Kalman filter and discrete optimization for multiple catheter segmentation and tracking in X-ray images. The Kalman filter is used to identify the whole catheter from a single point detected on the catheter in the first frame of a sequence of X-ray images. An energy-based formulation is developed that can be used to track the catheters in the following frames, and a discrete optimization is proposed for minimizing the energy function in each frame of the X-ray image sequence. Our approach is robust to tangential motion of the catheter and combines tubular and salient feature measurements into a single robust and efficient framework. The second contribution is an algorithm for catheter extraction in 3D ultrasound images based on (a) the registration between the X-ray and ultrasound images and (b) the segmentation of the catheter in the X-ray images. The search space for the catheter extraction in the ultrasound images is constrained to lie on or close to a curved surface in the ultrasound volume; this curved surface corresponds to the back-projection of the extracted catheter from the X-ray image into the ultrasound volume. Blob-like features are detected in the US images and organized in a graphical model, and the extracted catheter is modelled as the optimal path in this graphical model. Both contributions allow the use of ultrasound imaging for the improved visualization of soft tissue. However, X-ray imaging is still required for each ultrasound frame, and the amount of X-ray exposure has not been reduced. The final contribution of this thesis is a system that can track the catheter in ultrasound volumes automatically, without the need for X-ray imaging during tracking. Instead, X-ray imaging is required only for system initialization and for recovery from tracking failures. This allows a significant reduction in the amount of X-ray exposure for patient and clinicians.
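    As a minimal illustration of the Kalman filtering ingredient (not the thesis's full segmentation and tracking framework), the sketch below runs a constant-velocity Kalman filter on a 2D point detected on the catheter in successive X-ray frames; the motion model, noise levels and measurements are made-up example values.

```python
# Minimal sketch: constant-velocity Kalman filter for a 2D catheter point.
# State x = (px, py, vx, vy); measurements are noisy 2D detections per frame.
import numpy as np

dt = 1.0                                   # frame interval (arbitrary units)
F = np.array([[1, 0, dt, 0],               # state transition (constant velocity)
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # only the position is observed
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)                       # process noise (assumed)
R = 1.0 * np.eye(2)                        # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle for a measurement z = (px, py)."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, P = np.array([0.0, 0.0, 1.0, 0.5]), np.eye(4)
    true_pos = np.array([0.0, 0.0])
    for _ in range(20):
        true_pos = true_pos + np.array([1.0, 0.5])    # true catheter-point motion
        z = true_pos + rng.normal(scale=1.0, size=2)  # noisy detection
        x, P = kalman_step(x, P, z)
    print(x[:2], "vs true", true_pos)      # filtered position should be close
```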

    ์›€์ง์ด๋Š” ๋‹จ์ผ ์นด๋ฉ”๋ผ๋ฅผ ์ด์šฉํ•œ 3์ฐจ์› ๋ณต์›๊ณผ ๋””๋ธ”๋Ÿฌ๋ง, ์ดˆํ•ด์ƒ๋„ ๋ณต์›์˜ ๋™์‹œ์  ์ˆ˜ํ–‰ ๊ธฐ๋ฒ•

    Doctoral dissertation, Graduate School of Seoul National University, Department of Electrical and Computer Engineering, August 2013; advisor: Kyoung Mu Lee. Vision-based 3D reconstruction is one of the fundamental problems in computer vision, and it has been researched intensively in the last decades. In particular, 3D reconstruction using a single camera, which has a wide range of applications such as autonomous robot navigation and augmented reality, shows great promise in its reconstruction accuracy, the scale of its reconstruction coverage, and its computational efficiency. However, until recently, the performance of most algorithms has been tested only with carefully recorded, high-quality input sequences. In practical situations, input images for 3D reconstruction can be severely degraded by various factors such as pixel noise and motion blur, and the resolution of the images may not be high enough to achieve accurate camera localization and scene reconstruction results.
Although various high-performance image enhancement methods have been proposed in many studies, their high computational costs prevent them from being applied to 3D reconstruction systems, where real-time capability is an important requirement. In this dissertation, novel single-camera 3D reconstruction methods combined with image enhancement are studied to improve the accuracy and reliability of 3D reconstruction. To this end, two critical image degradations, motion blur and low image resolution, are addressed for both sparse and dense 3D reconstruction systems, and novel integrated enhancement methods for these degradations are presented. Using the relationship between the observed images and the 3D geometry of the camera and scene, the image formation process, including image degradations, is modeled in terms of the camera and scene geometry. By taking the image degradation factors into consideration, accurate 3D reconstruction is then achieved. Furthermore, the information required for image enhancement, such as blur kernels for deblurring and pixel correspondences for super-resolution, is obtained simultaneously while reconstructing the 3D scene, which makes the image enhancement much simpler and faster. The proposed methods have the advantage that the results of 3D reconstruction and image enhancement are mutually improved by solving these problems simultaneously. Experimental evaluations demonstrate the effectiveness of the proposed 3D reconstruction and image enhancement methods. (Contents: 1. Introduction; 2. Sparse 3D Reconstruction and Image Deblurring; 3. Sparse 3D Reconstruction and Image Super-Resolution; 4. Dense 3D Reconstruction and Image Deblurring; 5. Dense 3D Reconstruction and Image Super-Resolution; 6. Dense 3D Reconstruction, Image Deblurring, and Super-Resolution; 7. Conclusion)
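    Schematically, and with notation assumed for illustration (a latent sharp image L, camera pose P(t), a depth-dependent warp w, a downsampling operator D, exposure time τ and noise n), the kind of geometry-aware degradation model described above can be written as:

```latex
% Observed frame k: temporal average of the latent image warped by the camera
% trajectory over the exposure (motion blur), optionally downsampled (low
% resolution), plus noise. The warp w depends on the camera pose P(t) and the
% scene depth, which is exactly what the 3D reconstruction provides.
\begin{equation*}
  I_k(x) \;=\; D\!\left[ \frac{1}{\tau} \int_{t_k}^{t_k + \tau}
  L\bigl( w(x;\, P(t),\, \mathrm{depth}) \bigr)\, \mathrm{d}t \right] + n_k(x) .
\end{equation*}
```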

    Discrete Riemannian Calculus and A Posteriori Error Control on Shape Spaces

    In this thesis, a novel discrete approximation of the curvature tensor on Riemannian manifolds is derived, efficient methods to interpolate and extrapolate images in the context of the time discrete metamorphosis model are analyzed, and an a posteriori error estimator for the binary Mumford–Shah model is examined. Departing from the variational time discretization on (possibly infinite-dimensional) Riemannian manifolds originally proposed by Rumpf and Wirth, in which consistent time discrete approximations of geodesic curves, the logarithm, the exponential map and parallel transport are analyzed, we construct the discrete curvature tensor and prove its convergence under certain smoothness assumptions. To this end, several time discrete parallel transports are applied to suitably rescaled tangent vectors, where each parallel transport is computed using Schild's ladder. The associated convergence proof essentially relies on multiple Taylor expansions incorporating symmetry and scaling relations. In several numerical examples we validate this approach for surfaces.
The by now classical flow of diffeomorphism approach allows the transport of image intensities along paths in time which are characterized by diffeomorphisms, where the brightness of each image particle is assumed to be constant along each trajectory. As an extension, the metamorphosis model proposed by Trouvé, Younes and coworkers allows for intensity variations of the image particles along the paths, which is reflected by an additional penalization term in the energy functional that quantifies the squared weak material derivative. Taking into account the aforementioned time discretization, we propose a time discrete metamorphosis model in which the associated time discrete path energy consists of the sum of squared L2-mismatch functionals of successive square-integrable image intensity functions and a regularization functional for pairwise deformations. Our main contributions are the existence proof of time discrete geodesic curves in the context of this model, which are defined as minimizers of the time discrete path energy, and the proof of the Mosco-convergence of a suitable interpolation of the time discrete path energy to the time continuous path energy with respect to the L2-topology. Using an alternating update scheme, together with a multilinear finite element discretization for the images and a cubic spline discretization for the deformations, allows time discrete geodesic curves to be computed efficiently. In several numerical examples we demonstrate that time discrete geodesics can be robustly computed for gray-scale and color images.
Building on the time discretization of the metamorphosis model, we define the discrete exponential map in the space of images, which allows image extrapolation of arbitrary length for given weakly differentiable initial images and variations. To this end, starting from a suitable reformulation of the Euler–Lagrange equations characterizing the one-step extrapolation, a fixed point iteration is employed to establish the existence of critical points of the Euler–Lagrange equations provided that the initial variation is small in L2. In combination with an implicit function type argument requiring H1-closeness of the initial variation, one can prove the local existence as well as the local uniqueness of the discrete exponential map. The numerical algorithm for the one-step extrapolation is based on a slightly modified fixed point iteration using a spatial Galerkin scheme to obtain the optimal deformation associated with the unknown image, from which the unknown image itself can be recovered. To demonstrate the applicability of the proposed method, we compute extrapolated image paths for real image data.
A common tool to segment images and shapes into multiple regions was developed by Mumford and Shah. The starting point for deriving a posteriori error estimates for the binary Mumford–Shah model, which is obtained by restricting the original model to two regions, is a uniformly convex and unconstrained relaxation of the binary model following the work of Chambolle and Berkels. In particular, minimizers of the binary model can be exactly recovered from minimizers of the relaxed model via thresholding. Applying duality techniques proposed by Repin and Bartels then allows deriving a consistent functional a posteriori error estimate for the relaxed model. Afterwards, an a posteriori error estimate for the original binary model can be computed by combining a suitable cut-out argument with the functional error estimate. To compute minimizers of the relaxed model on an adaptive mesh described by a quadtree structure, we employ a primal-dual as well as a purely dual algorithm. The quality of the error estimator is analyzed for different gray-scale input images.
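    For concreteness, the time discrete path energy described above (a sum of pairwise deformation regularisers and squared L2 image mismatches) has the schematic form below; W denotes a hyperelastic-type deformation energy density, δ a penalty parameter, and the notation is assumed for illustration rather than quoted from the thesis.

```latex
% Schematic time discrete metamorphosis path energy for images u_0, ..., u_K:
% each consecutive pair is matched by an optimal deformation phi_k, penalised
% by a regularisation functional, plus the squared L2 mismatch of the deformed
% image with its predecessor, scaled to be consistent with the continuous model.
\begin{equation*}
  \mathbf{E}_K[u_0, \ldots, u_K] \;=\; K \sum_{k=1}^{K} \,\min_{\phi_k}
  \left\{ \int_{\Omega} W\bigl(D\phi_k(x)\bigr)\, \mathrm{d}x
  \;+\; \frac{1}{\delta} \int_{\Omega} \bigl| u_k\bigl(\phi_k(x)\bigr) - u_{k-1}(x) \bigr|^2 \mathrm{d}x \right\}.
\end{equation*}
```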