30 research outputs found

    Real-time edit propagation by efficient sampling

    No full text
    It is popular to edit the appearance of images using strokes, owing to their ease of use and convenience in conveying the user's intention. However, propagating the user inputs to the rest of the image requires solving an enormous optimization problem, which is very time-consuming and thus prevents practical use. In this paper, a two-step edit propagation scheme is proposed: first, edits are solved on clusters of similar pixels, and then individual pixel edits are interpolated from the cluster edits. The key to our scheme is that we use efficient stroke sampling to compute the affinity between image pixels and strokes. As a result, our clustering does not need to be stroke-adaptive, so the number of clusters is greatly reduced, yielding a significant speedup. The proposed method has been tested on various images, and the results show that it is more than one order of magnitude faster than existing methods while still achieving precise results compared with the ground truth. Moreover, its efficiency is not sensitive to the number of strokes, making it suitable for performing dense edits in practice.
    Funding: NSFC 60773026, 60873182, 60833007
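    The two-step idea described in the abstract (solve edits per cluster, then interpolate per pixel) can be illustrated with a short sketch. This is not the authors' code: the colour-plus-position feature space, the Gaussian affinity kernel, the weighted-average solve, and names such as propagate_edits are assumptions standing in for the paper's optimization and sampling scheme.

        # Minimal sketch of two-step edit propagation (assumed, not the paper's implementation).
        import numpy as np
        from sklearn.cluster import KMeans

        def propagate_edits(image, stroke_mask, stroke_values, n_clusters=64,
                            n_stroke_samples=200, sigma=0.2):
            """image: HxWx3 float in [0,1]; stroke_mask: HxW bool;
            stroke_values: HxW edit parameter, defined where stroke_mask is True."""
            h, w, _ = image.shape
            ys, xs = np.mgrid[0:h, 0:w]
            # Feature space: colour plus normalised position.
            feats = np.concatenate([image.reshape(-1, 3),
                                    (xs / w).reshape(-1, 1),
                                    (ys / h).reshape(-1, 1)], axis=1)

            # Step 1: cluster similar pixels; edits are solved per cluster, not per pixel.
            km = KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit(feats)
            centers = km.cluster_centers_

            # Sample the strokes instead of using every stroke pixel (the stated speedup idea).
            stroke_idx = np.flatnonzero(stroke_mask)
            sample = np.random.default_rng(0).choice(
                stroke_idx, size=min(n_stroke_samples, stroke_idx.size), replace=False)
            stroke_feats = feats[sample]
            stroke_edit = stroke_values.ravel()[sample]

            # Affinity between cluster centres and sampled stroke pixels (Gaussian kernel),
            # used here as a simple weighted average in place of the paper's optimization.
            d2 = ((centers[:, None, :] - stroke_feats[None, :, :]) ** 2).sum(-1)
            aff = np.exp(-d2 / (2 * sigma ** 2))
            cluster_edit = (aff * stroke_edit).sum(1) / (aff.sum(1) + 1e-8)

            # Step 2: interpolate per-pixel edits from the cluster edits.
            dist = km.transform(feats)
            w_pix = np.exp(-dist ** 2 / (2 * sigma ** 2))
            pixel_edit = (w_pix * cluster_edit).sum(1) / (w_pix.sum(1) + 1e-8)
            return pixel_edit.reshape(h, w)

    Because the number of clusters is fixed rather than stroke-adaptive, the per-cluster solve stays small regardless of how many strokes are drawn, which is the property the abstract highlights.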

    Intent-aware image cloning

    No full text
    Currently, gradient-domain methods are popular for seamlessly cloning a source image patch into a target image. However, structure conflicts between the source patch and the target image may generate artifacts, preventing general use. In this paper, we tackle this challenge by incorporating the user's intent in outlining the source patch, where the drawn boundary generally has a different appearance from the objects of interest. We first show that artifacts arise in the over-included region, the region outside the objects of interest in the source patch. We then use the appearance diversity along the boundary to approximately distinguish the objects from the over-included region, and design a new algorithm that lets the target image adaptively take effect in blending. The structure conflicts can thus be efficiently suppressed, removing the artifacts around the objects of interest in the composite result. Moreover, we develop an interpolation measure to composite the final image rather than solving a Poisson equation, and speed up the interpolation by treating pixels in clusters and using hierarchical sampling techniques. Our method is simple to use for instant, high-quality image cloning: users only need to outline a region containing the objects of interest. Our experimental results demonstrate the effectiveness of our cloning method.
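    The interpolation-based compositing mentioned in the abstract (in place of a Poisson solve) can be sketched as follows. This is only an illustration under assumptions: plain inverse-distance weights stand in for the paper's interpolation measure, the intent-aware boundary-diversity weighting and hierarchical sampling are omitted, 8-bit RGB images are assumed, and the name clone_by_interpolation is hypothetical.

        # Minimal sketch of cloning by boundary interpolation (assumed, not the paper's method).
        import numpy as np
        from scipy.ndimage import binary_erosion

        def clone_by_interpolation(source, target, mask, offset):
            """source: hxwx3 patch; mask: hxw bool region outlined in the patch;
            target: HxWx3 image; offset: (row, col) placement of the patch in the target."""
            h, w, _ = source.shape
            r0, c0 = offset
            tgt_patch = target[r0:r0 + h, c0:c0 + w].astype(float)
            src = source.astype(float)

            # Boundary of the outlined region: pixels where the composite should match the target.
            boundary = mask & ~binary_erosion(mask)
            by, bx = np.nonzero(boundary)
            # Colour difference target - source on the boundary; interpolating this difference
            # into the interior replaces solving a Poisson equation.
            diff = tgt_patch[by, bx] - src[by, bx]

            iy, ix = np.nonzero(mask)
            # Inverse-distance weights from every interior pixel to every boundary pixel
            # (a simple stand-in for the paper's interpolation measure).
            d = np.hypot(iy[:, None] - by[None, :], ix[:, None] - bx[None, :]) + 1e-6
            wgt = 1.0 / d ** 2
            interp = (wgt @ diff) / wgt.sum(1, keepdims=True)

            result = target.astype(float)
            patch = tgt_patch.copy()
            patch[iy, ix] = src[iy, ix] + interp
            result[r0:r0 + h, c0:c0 + w] = patch
            return np.clip(result, 0, 255).astype(np.uint8)

    The key design point is that the composite is obtained by interpolating boundary differences directly, so no global linear system needs to be solved; the paper additionally reweights this blending near the over-included region, which the sketch above does not attempt.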