10 research outputs found
Image inpainting based on coherence transport with adapted distance functions
We discuss an extension of our method Image Inpainting Based on Coherence Transport. The latter method requires the pixels of the inpainting domain to be serialized into an ordered list. Until now, we have induced the serialization using the distance-to-boundary map. But there are inpainting problems where the distance-to-boundary serialization causes unsatisfactory inpainting results. In the present work we demonstrate cases where we can resolve the difficulties by employing other distance functions that better suit the problem at hand.
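The serialization step described above can be sketched as follows: order the hole pixels by a distance function, here the plain distance-to-boundary map computed with a multi-source BFS. This is a hypothetical helper, not the authors' implementation; an adapted distance function would simply replace the map computed here.

```python
from collections import deque

def serialize_by_distance(mask):
    """Order the pixels of an inpainting domain by distance to its boundary.

    `mask[i][j]` is True for pixels inside the hole.  A multi-source BFS
    from the hole boundary gives the (4-neighbour) distance-to-boundary
    map; sorting by that distance yields a boundary-first fill order.
    Illustrative sketch only.
    """
    rows, cols = len(mask), len(mask[0])
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    dist, queue = {}, deque()
    for i in range(rows):               # seed: hole pixels touching known ones
        for j in range(cols):
            if mask[i][j] and any(
                    0 <= i + di < rows and 0 <= j + dj < cols
                    and not mask[i + di][j + dj] for di, dj in steps):
                dist[(i, j)] = 1
                queue.append((i, j))
    while queue:                        # propagate distances inwards
        i, j = queue.popleft()
        for di, dj in steps:
            n = (i + di, j + dj)
            if 0 <= n[0] < rows and 0 <= n[1] < cols \
                    and mask[n[0]][n[1]] and n not in dist:
                dist[n] = dist[(i, j)] + 1
                queue.append(n)
    return sorted(dist, key=dist.get)

# demo: 5x5 image with a 3x3 hole; the centre pixel is filled last
hole_mask = [[1 <= i <= 3 and 1 <= j <= 3 for j in range(5)] for i in range(5)]
order = serialize_by_distance(hole_mask)
```

Swapping in a different distance function changes only the `dist` map, and with it the fill order, which is exactly the degree of freedom the abstract exploits.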
Inpainting of Cyclic Data using First and Second Order Differences
Cyclic data arise in various image and signal processing applications such as interferometric synthetic aperture radar, electroencephalogram data analysis, and color image restoration in HSV or LCh spaces. In this paper we introduce a variational inpainting model for cyclic data which utilizes our definition of absolute cyclic second order differences. Based on analytical expressions for the proximal mappings of these differences we propose a cyclic proximal point algorithm (CPPA) for minimizing the corresponding functional. We choose appropriate cycles to implement this algorithm in an efficient way. We further introduce a simple strategy to initialize the unknown inpainting region. Numerical results both for synthetic and real-world data demonstrate the performance of our algorithm. Comment: accepted Conference Paper at EMMCVPR'1
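To make the notion of a cyclic second order difference concrete, here is a minimal sketch. It is one plausible reading of the definition, not the paper's exact formula: form the two wrapped first differences around the centre angle and measure how far they are from cancelling, again modulo 2π.

```python
import math

def wrap(a):
    """Representative of an angle in the interval [-pi, pi)."""
    return (a + math.pi) % (2 * math.pi) - math.pi

def abs_cyclic_d2(x, y, z):
    """Absolute cyclic second order difference of three angles.

    A sketch, assuming the linear second difference x - 2y + z is
    evaluated on the circle by wrapping each first difference; the
    paper's exact definition may choose representatives differently.
    """
    return abs(wrap(wrap(x - y) + wrap(z - y)))
```

For three equally spaced angles straddling the ±π cut, such as `(pi - 0.1, pi, -pi + 0.1)`, the cyclic second difference is zero, whereas the naive linear expression `|x - 2y + z|` would report a jump of 2π.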
A Second Order TV-type Approach for Inpainting and Denoising Higher Dimensional Combined Cyclic and Vector Space Data
In this paper we consider denoising and inpainting problems for higher dimensional combined cyclic and linear space valued data. These kinds of data appear when dealing with nonlinear color spaces such as HSV, and they can be obtained by changing the space domain of, e.g., an optical flow field to polar coordinates. For such nonlinear data spaces, we develop algorithms for the solution of the corresponding second order total variation (TV) type problems for denoising, inpainting, as well as the combination of both. We provide a convergence analysis and we apply the algorithms to concrete problems. Comment: revised submitted version
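The combined cyclic-and-linear structure can be illustrated with a small sketch of a distance on the product space S¹ × R², e.g. hue versus saturation and value in HSV. The helper and its Euclidean-style combination of components are illustrative assumptions, not the paper's metric.

```python
import math

def wrap(a):
    """Representative of an angle in the interval [-pi, pi)."""
    return (a + math.pi) % (2 * math.pi) - math.pi

def product_dist(p, q):
    """Distance on the combined space S^1 x R^2 (e.g. hue vs. S, V).

    The cyclic component is compared with the wrapped (geodesic) angle
    difference, the linear components with ordinary differences, and the
    parts are combined Euclidean-style.  Illustrative sketch only.
    """
    dh = wrap(p[0] - q[0])          # cyclic hue difference
    ds = p[1] - q[1]                # linear components
    dv = p[2] - q[2]
    return math.sqrt(dh * dh + ds * ds + dv * dv)
```

Two hues just either side of the ±π cut, such as 3.1 and -3.1, come out close under this metric even though their naive difference is 6.2, which is why the cyclic component must be treated separately.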
A well-posedness framework for inpainting based on coherence transport
Image inpainting is the process of touching up damaged or unwanted portions of a picture and is an important task in image processing. For this purpose Bornemann and März [J. Math. Imaging Vis., 28 (2007), pp. 259–278] introduced a very efficient method called Image Inpainting Based on Coherence Transport, which fills the missing region by advecting the image information along integral curves of a coherence vector field from the boundary towards the interior of the hole. The mathematical model behind this method is a first-order functional advection PDE posed on a compact domain with an all-inflow boundary. We show that this problem is well-posed under certain conditions.
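The transport idea can be sketched numerically: each hole pixel is a weighted average of already-filled neighbours, with weights that favour neighbours lying along the coherence direction. The weight below is a simplified stand-in with an assumed sharpness parameter `mu`; the published Bornemann–März weight has a different exact form.

```python
import math

def transport_weight(offset, direction, mu=10.0):
    """Weight for a filled neighbour at `offset` from the target pixel.

    Neighbours aligned with the (unit) coherence `direction` get large
    weight and transverse neighbours almost none, so isophotes are
    transported straight into the hole.  Simplified stand-in, not the
    Bornemann-Marz formula; `mu` is an assumed sharpness parameter.
    """
    dx, dy = offset
    r = math.hypot(dx, dy)
    cross = dx * direction[1] - dy * direction[0]   # transverse component
    return math.exp(-mu * (cross / r) ** 2) / r

def fill_pixel(neighbours, direction):
    """Weighted average of already-filled neighbours.

    `neighbours` is a list of (offset, value) pairs for pixels whose
    colour is already known.
    """
    ws = [(transport_weight(off, direction), v) for off, v in neighbours]
    return sum(w * v for w, v in ws) / sum(w for w, _ in ws)
```

With coherence direction (1, 0), a neighbour at (-1, 0) dominates one at (0, -1), so the filled value essentially continues the horizontal isophote into the hole.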
A Head-Mounted Camera System Integrates Detailed Behavioral Monitoring with Multichannel Electrophysiology in Freely Moving Mice
Breakthroughs in understanding the neural basis of natural behavior require neural recording and intervention to be paired with high-fidelity multimodal behavioral monitoring. An extensive genetic toolkit for neural circuit dissection, and well-developed neural recording technology, make the mouse a powerful model organism for systems neuroscience. However, most methods for high-bandwidth acquisition of behavioral data in mice rely upon fixed-position cameras and other off-animal devices, complicating the monitoring of animals freely engaged in natural behaviors. Here, we report the development of a lightweight head-mounted camera system combined with head-movement sensors to simultaneously monitor eye position, pupil dilation, whisking, and pinna movements along with head motion in unrestrained, freely behaving mice. The power of the combined technology is demonstrated by observations linking eye position to head orientation; whisking to non-tactile stimulation; and, in electrophysiological experiments, visual cortical activity to volitional head movements
Guidefill: GPU accelerated, artist guided geometric inpainting for 3D conversion of film
The conversion of traditional film into stereo 3D has become an important problem in the past decade. One of the main bottlenecks is a disocclusion step, which in commercial 3D conversion is usually done by teams of artists armed with a toolbox of inpainting algorithms. A current difficulty is that most available algorithms either are too slow for interactive use or provide no intuitive means for users to tweak the output. In this paper we present a new fast inpainting algorithm based on transporting along automatically detected splines, which the user may edit. Our algorithm is implemented on the GPU and fills the inpainting domain in successive shells that adapt their shape on the fly. In order to allocate GPU resources as efficiently as possible, we propose a parallel algorithm to track the inpainting interface as it evolves, ensuring that no resources are wasted on pixels that are not currently being worked on. Theoretical analyses of the time and processor complexity of our algorithm without and with tracking (as well as numerous numerical experiments) demonstrate the merits of the latter. Our transport mechanism is similar to the one used in coherence transport [F. Bornemann and T. März, J. Math. Imaging Vision, 28 (2007), pp. 259-278; T. März, SIAM J. Imaging Sci., 4 (2011), pp. 981-1000] but improves upon it by correcting a “kinking” phenomenon whereby extrapolated isophotes may bend at the boundary of the inpainting domain. Theoretical results explaining this phenomenon and its resolution are presented. Although our method ignores texture, in many cases this is not a problem due to the thin inpainting domains in 3D conversion.
Experimental results show that our method can achieve a visual quality that is competitive with the state of the art while maintaining interactive speeds and providing the user with an intuitive interface to tweak the results.

The work of the first author was supported by the Cambridge Commonwealth Trust and the Cambridge Center for Analysis. The work of the third author was supported by the Leverhulme Trust project Breaking the Nonconvexity Barrier, the EPSRC grants EP/M00483X/1 and EP/N014588/1, the Cantab Capital Institute for the Mathematics of Information, the CHiPS (Horizon 2020 RISE project grant), the Global Alliance project “Statistical and Mathematical Theory of Imaging,” and the Alan Turing Institute.
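The interface-tracking idea above can be sketched on the CPU: instead of re-scanning the whole image for each shell, the next shell is built from neighbours of the shell just filled, so per-shell work is proportional to the interface length. This is a simplified serial sketch with plain unweighted averaging, not the authors' GPU code.

```python
def fill_in_shells(mask, value):
    """Fill a hole shell by shell while tracking only the interface.

    `mask` is the set of hole pixels and `value` maps already-known
    pixels to colours; `value` is updated in place.  Each shell's pixels
    are filled independently from already-filled neighbours, and the
    next shell is derived from the current one rather than from a full
    image scan.  Illustrative sketch only.
    """
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    hole = set(mask)
    shell = {p for p in hole
             if any((p[0] + di, p[1] + dj) in value for di, dj in steps)}
    n_shells = 0
    while shell:
        filled = {}
        for i, j in shell:              # pixels in one shell are independent
            vals = [value[(i + di, j + dj)] for di, dj in steps
                    if (i + di, j + dj) in value]
            filled[(i, j)] = sum(vals) / len(vals)
        value.update(filled)
        hole -= shell
        n_shells += 1
        # next interface: unfilled neighbours of the shell just completed
        shell = {(i + di, j + dj) for i, j in shell for di, dj in steps
                 if (i + di, j + dj) in hole}
    return n_shells

# demo: 3x3 hole in a 5x5 image whose known pixels are all 1.0
hole = {(i, j) for i in range(1, 4) for j in range(1, 4)}
colours = {(i, j): 1.0 for i in range(5) for j in range(5)
           if (i, j) not in hole}
n = fill_in_shells(hole, colours)
```

Because all pixels within a shell are filled independently, each shell maps naturally onto a parallel GPU kernel, which is the setting the paper analyses.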
Analysis of Artifacts in Shell-Based Image Inpainting: Why They Occur and How to Eliminate Them
Funder: University of Cambridge
Abstract: In this paper we study a class of fast geometric image inpainting methods based on the idea of filling the inpainting domain in successive shells from its boundary inwards. Image pixels are filled by assigning them a color equal to a weighted average of their already filled neighbors. However, there is flexibility in terms of the order in which pixels are filled, the weights used for averaging, and the neighborhood that is averaged over. Varying these degrees of freedom leads to different algorithms, and indeed the literature contains several methods falling into this general class. All of them are very fast, but at the same time all of them leave undesirable artifacts such as “kinking” (bending) or blurring of extrapolated isophotes. Our objective in this paper is to build a theoretical model in order to understand why these artifacts occur and what, if anything, can be done about them. Our model is based on two distinct limits: a continuum limit in which the pixel width h→0 and an asymptotic limit in which h>0 but h≪1. The former will allow us to explain “kinking” artifacts (and what to do about them) while the latter will allow us to understand blur. Both limits are derived based on a connection between the class of algorithms under consideration and stopped random walks. At the same time, we consider a semi-implicit extension in which pixels in a given shell are solved for simultaneously by solving a linear system. We prove (within the continuum limit) that this extension is able to completely eliminate kinking artifacts, which we also prove must always be present in the direct method. Finally, we show that although our results are derived in the context of inpainting, they are in fact abstract results that apply more generally. As an example, we show how our theory can also be applied to a problem in numerical linear algebra.
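The semi-implicit idea of solving for a whole shell at once can be sketched in one dimension: a row of n unknown pixels between two known values, each pixel required to equal the average of its two neighbours, some of which lie in the same shell. All n unknowns are therefore coupled and must be solved simultaneously; the Gauss-Seidel sweeps below are an assumed stand-in for whatever solver a real implementation uses.

```python
def solve_shell(left, right, n, sweeps=500):
    """Semi-implicit fill of one shell, here a 1-D row of n pixels.

    Each unknown pixel equals the average of its two horizontal
    neighbours; the in-shell neighbours are themselves unknown, so the
    pixels form one coupled linear system, solved here by Gauss-Seidel
    iteration.  The exact solution is the straight line interpolating
    the known values `left` and `right`.  Illustrative sketch only.
    """
    u = [0.0] * n
    for _ in range(sweeps):
        for k in range(n):
            ul = left if k == 0 else u[k - 1]
            ur = right if k == n - 1 else u[k + 1]
            u[k] = 0.5 * (ul + ur)
    return u

u = solve_shell(0.0, 1.0, 3)    # converges to [0.25, 0.5, 0.75]
```

A direct scheme that only ever averages already-filled pixels cannot reproduce this coupled solution in one pass, which gives a one-dimensional flavour of why the semi-implicit extension behaves differently.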
Shell-Based Geometric Image and Video Inpainting
The subject of this thesis is a class of fast inpainting methods (image or video) based on the idea of filling the inpainting domain in successive shells from its boundary inwards. Image pixels (or video voxels) are filled by assigning them a color equal to a weighted average of either their already filled neighbors (the ``direct'' form of the method) or those neighbors plus additional neighbors within the current shell (the ``semi-implicit'' form). In the direct form, pixels (voxels) in the current shell may be filled independently, but in the semi-implicit form they are filled simultaneously by solving a linear system. We focus in this thesis mainly on the image inpainting case, where the literature contains several methods corresponding to the {\em direct} form of the method - the semi-implicit form is introduced for the first time here. These methods effectively differ only in the order in which pixels (voxels) are filled, the weights used for averaging, and the neighborhood that is averaged over. All of them are very fast, but at the same time all of them leave undesirable artifacts such as ``kinking'' (bending) or blurring of extrapolated isophotes.
This thesis has two main goals. First, we introduce new algorithms within this class, which are aimed at reducing or eliminating these artifacts, and which also target a specific application - the 3D conversion of images and film. The first part of this thesis will be concerned with introducing 3D conversion as well as Guidefill, a method in the above class adapted to the inpainting problems arising in 3D conversion. However, the second and more significant goal of this thesis is to study these algorithms as a class. In particular, we develop a mathematical theory aimed at understanding the origins of the artifacts mentioned. Through this, we seek to understand which artifacts can be eliminated (and how), and which artifacts are inevitable (and why). Most of the thesis is occupied with this second goal.
Our theory is based on two separate limits - the first is a {\em continuum} limit, in which the pixel width h→0, and in which the algorithm converges to a partial differential equation. The second is an asymptotic limit in which h is very small but non-zero. This latter limit, which is based on a connection to random walks, relates the inpainted solution to a type of discrete convolution. The former is useful for studying kinking artifacts, while the latter is useful for studying blur. Although all the theoretical work has been done in the context of image inpainting, experimental evidence is presented suggesting a simple generalization to video.
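The random-walk connection mentioned above admits a small Monte-Carlo illustration: the value a shell-based scheme assigns to a hole pixel equals the expected known value at the point where a random walk, stepping according to the scheme's averaging weights, first leaves the hole. The uniform 4-neighbour weights below are an assumption for the sketch.

```python
import random

def walk_value(start, known, trials=2000, seed=0):
    """Monte-Carlo reading of the stopped-random-walk connection.

    Walks start at the hole pixel `start`, step uniformly over the
    4-neighbourhood, and stop on first hitting a pixel in `known`
    (a map from boundary pixels to values); the average stopped value
    estimates the inpainted value.  Illustrative sketch only.
    """
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        p = start
        while p not in known:           # walk until the boundary is hit
            di, dj = rng.choice(steps)
            p = (p[0] + di, p[1] + dj)
        total += known[p]
    return total / trials

# demo: one-pixel hole at (2, 2); vertical neighbours worth 1, horizontal 0
boundary = {(1, 2): 1.0, (3, 2): 1.0, (2, 1): 0.0, (2, 3): 0.0}
estimate = walk_value((2, 2), boundary)   # approaches the average 0.5
```

Averaging over all walks turns the filled value into a discrete convolution of the boundary data with the walk's hitting distribution, which is the structure the asymptotic blur analysis exploits.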
Finally, in the last part of the thesis we explore shell-based video inpainting. In particular, we introduce spacetime transport, which is a natural generalization of the ideas of Guidefill and its predecessor, coherence transport, to three dimensions (two spatial dimensions plus one time dimension). Spacetime transport is shown to have much in common with shell-based image inpainting methods. In particular, kinking and blur artifacts persist, and the former of these may be alleviated in exactly the same way as in two dimensions. At the same time, spacetime transport is shown to be related to optical flow based video inpainting. In particular, a connection is derived between spacetime transport and a generalized Lucas–Kanade optical flow that does not distinguish between time and space.
Cambridge Overseas Scholarship