
    A cortical based model of perceptual completion in the roto-translation space

    We present a mathematical model of perceptual completion and formation of subjective surfaces that is at once inspired by the architecture of the visual cortex and is the lifting, to the 3-dimensional roto-translation group, of the phenomenological variational models based on the elastica functional. The initial image is lifted by the simple cells to a surface in the roto-translation group, and the completion process is modelled via a diffusion-driven motion by curvature. The convergence of the motion to a minimal surface is proved. Results are presented for both modal and amodal completion in classic Kanizsa images.
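    For context, here is a minimal sketch of the variational ingredients referred to above, written in standard notation that is assumed here rather than quoted from the paper: the elastica functional penalizes length and squared curvature along a completion curve, and the cortical lift maps each image point, together with the orientation of its level line, into the roto-translation group.

```latex
% Elastica functional over a completion curve \gamma
% (\alpha, \beta > 0 are weights, \kappa is the curvature, s the arc length):
E(\gamma) = \int_{\gamma} \left( \alpha + \beta\,\kappa^{2} \right) ds

% Lift performed by the simple cells (standard form, assumed here): a point (x, y)
% of a level line of the image, with local tangent orientation \theta, is mapped
% into the roto-translation group SE(2) \cong \mathbb{R}^{2} \times S^{1}:
(x, y) \;\longmapsto\; \big(x,\, y,\, \theta(x, y)\big)
```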

    CONFIGR: A Vision-Based Model for Long-Range Figure Completion

    CONFIGR (CONtour FIgure GRound) is a computational model based on principles of biological vision that completes sparse and noisy image figures. Within an integrated vision/recognition system, CONFIGR posits an initial recognition stage which identifies figure pixels from spatially local input information. The resulting, and typically incomplete, figure is fed back to the “early vision” stage for long-range completion via filling-in. The reconstructed image is then re-presented to the recognition system for global functions such as object recognition. In the CONFIGR algorithm, the smallest independent image unit is the visible pixel, whose size defines a computational spatial scale. Once pixel size is fixed, the entire algorithm is fully determined, with no additional parameter choices. Multi-scale simulations illustrate the vision/recognition system. Open-source CONFIGR code is available online, but all examples can be derived analytically, and the design principles applied at each step are transparent. The model balances filling-in as figure against complementary filling-in as ground, which blocks spurious figure completions. Lobe computations occur on a subpixel spatial scale. Originally designed to fill in missing contours in an incomplete image such as a dashed line, the same CONFIGR system connects and segments sparse dots, and unifies occluded objects from pieces locally identified as figure in the initial recognition stage. The model self-scales its completion distances, filling in across gaps of any length, where unimpeded, while limiting connections among dense image-figure pixel groups that already have intrinsic form. Long-range image completion promises to play an important role in adaptive processors that reconstruct images from highly compressed video and still camera images.
    Air Force Office of Scientific Research (F49620-01-1-0423); National Geospatial-Intelligence Agency (NMA 201-01-1-0216); National Science Foundation (SBE-0354378); Office of Naval Research (N000014-01-1-0624)
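    As a rough illustration of the self-scaling, parameter-free completion idea, the toy sketch below (not the CONFIGR algorithm itself; the function name and the 1-D setting are invented for illustration) fills every gap along a dashed row of figure pixels, with no fixed limit on gap length:

```python
# Toy illustration: connect figure pixels along a dashed 1-D row.
# The only spatial scale is the pixel itself; gaps of any length are bridged,
# loosely mirroring CONFIGR's self-scaling completion distances.
import numpy as np

def complete_row(row: np.ndarray) -> np.ndarray:
    """Fill every gap between the first and last figure pixel of a binary row."""
    out = row.copy()
    on = np.flatnonzero(row)          # indices of figure pixels
    if on.size >= 2:
        out[on[0]:on[-1] + 1] = 1     # connect across gaps of any length
    return out

dashed = np.array([1, 1, 0, 0, 0, 1, 1, 0, 0, 1])
print(complete_row(dashed))           # -> [1 1 1 1 1 1 1 1 1 1]
```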

    Image Completion Based on Edge Prediction and Improved Generator

    Existing image completion algorithms can suffer from poor completion of the missing parts, excessive smoothing or chaotic structure in the completed areas, and long training cycles when processing more complex images. Therefore, a two-stage adversarial image completion model based on edge prediction and an improved generator structure is put forward to address these problems. First, Canny edge detection is used to extract the damaged edge image, and the edge prediction network predicts and completes the edge information of the missing area of the image. Second, the image completion network takes the predicted edge image as prior information to complete the damaged area of the image, so that the structural information of the completed area becomes more accurate. An A-JPU module is designed to preserve completion quality and to speed up training, since existing models perform an enormous number of computations due to the heavy use of dilated convolutions in the autoencoder structure. Finally, experimental results on the Places2 dataset show that the PSNR and SSIM of images produced by this image completion model are higher, and the subjective visual effect is closer to the real image, than with some other image completion models.
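    The two-stage pipeline described above can be sketched as follows, assuming a PyTorch-style implementation; EdgeNet, CompletionNet, and all layer choices are hypothetical stand-ins for the paper's edge prediction and image completion networks, not its actual architecture or A-JPU module:

```python
# Minimal two-stage sketch: Canny edges -> edge prediction -> edge-guided completion.
import cv2
import numpy as np
import torch
import torch.nn as nn

class EdgeNet(nn.Module):
    """Stage 1 (hypothetical): predict edges inside the missing region."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, damaged_edges, mask):
        return self.net(torch.cat([damaged_edges, mask], dim=1))

class CompletionNet(nn.Module):
    """Stage 2 (hypothetical): complete the image using predicted edges as a prior."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(5, 64, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())
    def forward(self, masked_image, predicted_edges, mask):
        return self.net(torch.cat([masked_image, predicted_edges, mask], dim=1))

# Damaged edge map from a (stand-in) grayscale input via Canny, as in the first stage.
gray = (np.random.rand(256, 256) * 255).astype(np.uint8)
edges = cv2.Canny(gray, 100, 200).astype(np.float32) / 255.0

mask = torch.zeros(1, 1, 256, 256)
mask[..., 96:160, 96:160] = 1.0                               # missing region
damaged_edges = torch.from_numpy(edges)[None, None] * (1 - mask)

pred_edges = EdgeNet()(damaged_edges, mask)
output = CompletionNet()(torch.zeros(1, 3, 256, 256), pred_edges, mask)
print(output.shape)   # torch.Size([1, 3, 256, 256])
```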

    Contextual-based Image Inpainting: Infer, Match, and Translate

    We study the task of image inpainting, which is to fill in the missing region of an incomplete image with plausible content. To this end, we propose a learning-based approach to generate visually coherent completions given a high-resolution image with missing components. To overcome the difficulty of directly learning the distribution of high-dimensional image data, we divide the task into two separate steps, inference and translation, and model each step with a deep neural network. We also use simple heuristics to guide the propagation of local textures from the boundary to the hole. We show that, with these techniques, inpainting reduces to the problem of learning two image-feature translation functions in much smaller spaces, which are hence easier to train. We evaluate our method on several public datasets and show that we generate results of better visual quality than previous state-of-the-art methods.
    Comment: ECCV 2018 camera ready
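    The abstract does not spell out its texture-propagation heuristics, so the following toy sketch (an assumption for illustration, not the paper's method) shows one crude way to propagate boundary content into the hole, by copying each missing pixel from its nearest known pixel:

```python
# Toy boundary-to-hole propagation: each missing pixel copies its nearest known pixel.
import numpy as np
from scipy.ndimage import distance_transform_edt

def propagate_from_boundary(image: np.ndarray, hole: np.ndarray) -> np.ndarray:
    """image: HxWxC float array; hole: HxW boolean mask marking missing pixels."""
    # For every pixel, get the coordinates of the nearest non-hole (known) pixel.
    _, (iy, ix) = distance_transform_edt(hole, return_indices=True)
    filled = image.copy()
    filled[hole] = image[iy[hole], ix[hole]]   # copy known content into the hole
    return filled

img = np.random.rand(64, 64, 3)
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True
print(propagate_from_boundary(img, mask).shape)   # (64, 64, 3)
```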