
    Efficient Poisson Image Editing

    Image composition is the process of combining two or more images into a natural-looking output image, and it is an important technique in image processing. In this paper, two efficient methods for composing color images are proposed. In both methods, the Poisson equation is converted to a linear system and solved using image-pyramid and divide-and-conquer approaches. The proposed methods are more efficient than existing image composition methods: they reduce the time taken by the composition process while producing results nearly identical to those of previous methods. Experimental results confirm that the proposed methods decrease the time required to compose color images.
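
    As an illustration of the core step, here is a minimal sketch of Poisson compositing for a single grayscale patch, assuming the patch lies strictly inside both images; color images would be handled per channel, and the function and variable names are illustrative rather than taken from the paper:

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        def poisson_composite(src, dst, top, left, h, w):
            """Blend the h-by-w region of src at (top, left) into dst by solving
            the discrete Poisson equation with dst as the Dirichlet boundary."""
            n = h * w
            idx = lambda i, j: i * w + j                  # flatten patch coords
            A = sp.lil_matrix((n, n))
            b = np.zeros(n)
            for i in range(h):
                for j in range(w):
                    k = idx(i, j)
                    A[k, k] = 4.0
                    # Guidance field: discrete Laplacian of the source patch
                    b[k] = (4 * src[top + i, left + j]
                            - src[top + i - 1, left + j] - src[top + i + 1, left + j]
                            - src[top + i, left + j - 1] - src[top + i, left + j + 1])
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            A[k, idx(ni, nj)] = -1.0      # interior neighbour
                        else:                             # edge pixel: clamp to dst
                            b[k] += dst[top + ni, left + nj]
            f = spla.spsolve(A.tocsr(), b).reshape(h, w)
            out = dst.copy()
            out[top:top + h, left:left + w] = f
            return out

    A direct sparse solve like this is the baseline such methods accelerate; the pyramid and divide-and-conquer variants reduce the cost of the solve rather than change the equation being solved.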

    Scalable 3D video of dynamic scenes

    In this paper we present a scalable 3D video framework for capturing and rendering dynamic scenes. The acquisition system is based on multiple sparsely placed 3D video bricks, each comprising a projector, two grayscale cameras, and a color camera. Relying on structured light with complementary patterns, texture images and pattern-augmented views of the scene are acquired simultaneously by time-multiplexed projections and synchronized camera exposures. Using space-time stereo on the acquired pattern images, high-quality depth maps are extracted, and the corresponding surface samples are merged into a view-independent, point-based 3D data structure. This representation allows for effective photo-consistency enforcement and outlier removal, leading to a significant decrease in visual artifacts and high rendering quality using EWA volume splatting. Our framework and its view-independent representation allow for simple and straightforward editing of 3D video; to demonstrate this flexibility, we show compositing techniques and spatiotemporal effects.
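
    As a sketch of how per-brick depth maps can be merged into a view-independent point representation, the following back-projects one depth map into world space, assuming pinhole intrinsics K and a camera-to-world pose (R, t) per brick; the names are illustrative, not from the paper:

        import numpy as np

        def depth_to_world_points(depth, color, K, R, t):
            """Back-project a depth map to coloured 3D surface samples in
            world space; samples from all bricks are then concatenated."""
            h, w = depth.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            z = depth.ravel()
            valid = z > 0                              # drop missing depth samples
            pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)], axis=0)
            cam = np.linalg.inv(K) @ (pix * z)         # pixel rays scaled by depth
            world = (R @ cam + t.reshape(3, 1)).T      # camera -> world transform
            return world[valid], color.reshape(-1, 3)[valid]

    Photo-consistency enforcement and outlier removal then operate on the merged sample set before EWA volume splatting renders it.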

    Structure Preserving regularizer for Neural Style Transfer

    The aim of this project is to generate an image in the style of a work by a well-known artist, using artificial neural networks to transfer the style of one image onto another. In a computer vision context, this means capturing the content-invariant component of an image (its style) and applying it to the content of another image. The method first extracts the required feature tensors from the content and style images; the input image, initialized with noise, is then optimized to minimize both the loss against the content image and the loss against the style image, capturing the essence of both images in one result. Images generated by traditional style transfer have an artistic effect: the model successfully captures the style of the reference image but does not preserve the structural content of the target. The proposed method uses segmented versions of the images to faithfully transfer style between semantically similar content, and a regularizer term added to the loss function helps avoid style spill-over and yields photographic results.
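
    A minimal sketch of such a combined objective, assuming precomputed VGG-like feature maps in PyTorch; the segment masks, layer choices, and the weight lam are illustrative assumptions, and a simple total-variation term stands in for the paper's structure-preserving regularizer:

        import torch

        def gram(feat):
            """Gram matrix of a (C, H, W) feature map (style statistics)."""
            c, h, w = feat.shape
            f = feat.reshape(c, h * w)
            return (f @ f.t()) / (c * h * w)

        def style_transfer_loss(in_feats, content_feats, style_feats, masks, lam=1e2):
            """Content loss + segment-wise style loss + smoothness regularizer.
            masks holds per-layer binary segment masks (resized to each feature
            map) so style statistics are matched only between semantically
            similar regions, which is what suppresses style spill-over."""
            content_loss = torch.mean((in_feats[-1] - content_feats[-1]) ** 2)
            style_loss = 0.0
            for f_in, f_sty, m in zip(in_feats, style_feats, masks):
                style_loss += torch.mean((gram(f_in * m) - gram(f_sty * m)) ** 2)
            # Regularizer: penalise gradients in the shallowest feature map so
            # the optimised image stays piecewise smooth and photographic.
            tv = torch.mean(torch.abs(in_feats[0][:, :, 1:] - in_feats[0][:, :, :-1])) \
               + torch.mean(torch.abs(in_feats[0][:, 1:, :] - in_feats[0][:, :-1, :]))
            return content_loss + style_loss + lam * tv

    The noise-initialized input image is then updated by gradient descent on this loss until both the content and the segment-matched style are captured.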

    User-Assisted Image Shadow Removal

    This paper presents a novel user-aided method for texture-preserving shadow removal from single images, requiring only simple user input. Compared with the state of the art, our algorithm offers the most flexible user interaction to date and produces more accurate and robust shadow removal under thorough quantitative evaluation. Shadow masks are first detected by analysing user-specified shadow feature strokes. Sample intensity profiles of variable interval and length around the shadow boundary are detected next, which avoids artefacts arising from uneven boundaries. Texture noise in the samples is then removed by local group bilateral filtering, and initial sparse shadow scales are estimated by fitting a piecewise curve to the intensity samples. Remaining errors in the estimated sparse scales are removed by local group smoothing. To relight the image, a dense scale field is produced by in-painting the sparse scales. Finally, a gradual colour correction is applied to remove artefacts due to image post-processing. Using state-of-the-art evaluation data, we demonstrate quantitatively and qualitatively that our method outperforms current leading shadow removal methods.
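
    As a sketch of the relighting step, the following fills unknown scales by inpainting and applies the dense scale field multiplicatively; OpenCV's generic inpainting stands in for the paper's dense-scale propagation, and the encoding range s_max is an illustrative assumption:

        import cv2
        import numpy as np

        def relight(img, sparse_scale, known_mask, s_max=4.0):
            """img: float32 BGR in [0, 1]; sparse_scale: per-pixel multiplicative
            scale (>= 1 inside the shadow, 1 outside), valid only where
            known_mask == 255."""
            # cv2.inpaint needs 8-bit input, so scales are normalised to
            # [0, 255] for propagation and mapped back afterwards.
            enc = np.clip(sparse_scale / s_max * 255, 0, 255).astype(np.uint8)
            dense = cv2.inpaint(enc, 255 - known_mask, 3, cv2.INPAINT_TELEA)
            dense = dense.astype(np.float32) / 255.0 * s_max
            return np.clip(img * dense[..., None], 0, 1)   # brighten shadowed pixels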

    Automatic 3D human modeling: an initial stage towards 2-way inside interaction in mixed reality

    3D human models play an important role in computer graphics applications across a wide range of domains, including education, entertainment, medical care simulation and military training. In many situations, we want the 3D model to have a visual appearance that matches that of a specific living person and to be controllable by that person in a natural manner. Among other uses, this approach supports the notion of human surrogacy, where the virtual counterpart provides a remote presence for the human who controls the virtual character's behavior. In this dissertation, a human modeling pipeline is proposed for the problem of creating a 3D digital model of a real person. Our solution involves reshaping a 3D human template with a 2D contour of the participant and then mapping the captured texture of that person onto the generated mesh. Our method produces an initial contour of a participant by extracting the user image from a natural background. One particularly novel contribution in our approach is the manner in which we improve the initial vertex estimate. We do so through a variant of the ShortStraw corner-finding algorithm commonly used in sketch-based systems. Here, we develop improvements to ShortStraw, presenting an algorithm called IStraw, and then introduce adaptations of this improved version to create a corner-based contour segmentation algorithm. This algorithm provides significant improvements in contour matching over previously developed systems, and does so with low computational complexity. The system presented here advances the state of the art in the following aspects. First, the human modeling process is triggered automatically by matching the participant's pose with an initial pose through a tracking device and software; in our case, the pose capture and skeletal model are provided by the Microsoft Kinect and its associated SDK. Second, color image, depth data, and human tracking information from the Kinect and its SDK are used to automatically extract the contour of the participant and then generate a 3D human model with a skeleton. Third, using the pose and the skeletal model, we segment the contour into eight parts and then match the contour points on each segment to a corresponding anchor set associated with a 3D human template. Finally, we map the color image of the person to the 3D model as its texture map. The whole modeling process takes only several seconds, and the resulting human model looks like the real person: the geometry of the 3D model matches the contour of the real person, and the model has a photorealistic texture. Furthermore, the mesh of the human model is attached to the skeleton provided in the template, so the model can support programmed animations or be controlled by real people, typically through a literal mapping (motion capture) or a gesture-based puppetry system. Our ultimate goal is to create a mixed reality (MR) system in which participants can manipulate virtual objects and in which these virtual objects can affect the participants, e.g., by restricting their mobility. This MR system prototype design motivated the work of this dissertation, since a realistic 3D human model of the participant is an essential part of implementing this vision.
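
    As an illustration of the baseline that IStraw refines, here is a minimal sketch of ShortStraw-style corner finding on a uniformly resampled contour; the window size and threshold ratio follow the published ShortStraw heuristics, and the function name is illustrative:

        import numpy as np

        def shortstraw_corners(points, window=3, ratio=0.95):
            """points: (N, 2) uniformly resampled contour points.
            Returns indices of candidate corner points."""
            n = len(points)
            straws = np.full(n, np.inf)
            for i in range(window, n - window):
                # A "straw" is the chord across a fixed window; a short straw
                # means the path bends sharply there, i.e. a likely corner.
                straws[i] = np.linalg.norm(points[i + window] - points[i - window])
            t = np.median(straws[window:n - window]) * ratio
            corners = []
            for i in range(window, n - window):
                local = straws[max(0, i - window):i + window + 1]
                if straws[i] < t and straws[i] == local.min():
                    corners.append(i)
            return corners

    IStraw adds corrections on top of this baseline (e.g. distinguishing smooth curves from true corners), which is what makes the corner-based contour segmentation reliable enough for matching contour points to the template's anchor sets.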

    Layered Neural Rendering for Retiming People in Video

    We present a method for retiming people in an ordinary, natural video: manipulating and editing the times at which different motions of individuals in the video occur. We can temporally align different motions, change the speed of certain actions (speeding up/slowing down, or entirely "freezing" people), or "erase" selected people from the video altogether. We achieve these effects computationally via a dedicated learning-based layered video representation, in which each frame of the video is decomposed into separate RGBA layers representing the appearance of different people in the video. A key property of our model is that it not only disentangles the direct motions of each person in the input video but also automatically correlates each person with the scene changes they generate, e.g., shadows, reflections, and the motion of loose clothing. The layers can be individually retimed and recombined into a new video, allowing us to achieve realistic, high-quality renderings of retiming effects for real-world videos depicting complex actions and involving multiple individuals, including dancing, trampoline jumping, and group running. To appear in SIGGRAPH Asia 2020. Project webpage: https://retiming.github.io
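
    As a sketch of the recompositing step, the following combines per-person RGBA layers back to front with independent time remapping per layer; the layer ordering and the retime functions are illustrative assumptions, not the paper's implementation:

        import numpy as np

        def composite_frame(layers, t, retime):
            """layers: list of (T, H, W, 4) float arrays, ordered back to front
            (e.g. background first). retime: one function per layer mapping an
            output frame index to a source frame index (identity keeps timing,
            t // 2 slows a person down, a constant freezes them)."""
            h, w = layers[0].shape[1:3]
            out = np.zeros((h, w, 3), dtype=np.float32)
            for layer, f in zip(layers, retime):
                rgba = layer[f(t)]                     # retimed frame for this layer
                alpha = rgba[..., 3:4]
                out = rgba[..., :3] * alpha + out * (1 - alpha)   # "over" blend
            return out

        # Example: freeze person 0 at frame 0, keep person 1 at normal speed
        # frame = composite_frame([bg_and_p0, p1], t, [lambda t: 0, lambda t: t])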