153 research outputs found

    Image Seam Carving (影像細縫裁減法)

    Get PDF
    As displays have become ubiquitous with advances in technology, image retargeting techniques that adapt pictures to screens of different sizes have attracted growing attention; traditional cropping and non-uniform scaling easily deform or distort the image. Avidan and Shamir proposed seam carving, a content-aware image resizing method that is regarded as an effective solution: a simple filter locates the high-energy regions of the image so that they can be preserved. In many cases, however, the algorithm does not produce satisfactory results, because it cannot cope with every type of image, for example images with complex backgrounds or high-contrast colours. This paper improves the seam-carving algorithm by changing how the energy map is computed, reducing the high-frequency noise introduced by complex backgrounds so that images with a variety of complex backgrounds can be retargeted to match the user's expectations.
    Sponsorship: Taiwanese Association for Artificial Intelligence (中華民國人工智慧學會). Conference type: international. Conference dates: 2012-11-16 to 2012-11-18. Call for papers: yes. Conference location: Tainan, Taiwan.
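    As a point of reference for the seam-carving pipeline the abstract describes, the sketch below (Python/NumPy) computes a gradient-magnitude energy map, optionally smooths it, and removes one minimum-energy vertical seam by dynamic programming. The 3x3 box-filter smoothing is only a hypothetical stand-in for the paper's modified energy computation, and the function names are illustrative, not taken from the paper.

        # Minimal seam-carving sketch (Avidan & Shamir style) in NumPy.
        # The box-filter smoothing of the energy map is a hypothetical stand-in
        # for the paper's modified energy computation, not the authors' method.
        import numpy as np

        def energy_map(gray, smooth=True):
            """Gradient-magnitude energy; optional smoothing damps the
            high-frequency noise that complex backgrounds add to the map."""
            gy, gx = np.gradient(gray.astype(float))
            e = np.abs(gx) + np.abs(gy)
            if smooth:
                # 3x3 box filter via padding and summation (NumPy-only).
                p = np.pad(e, 1, mode="edge")
                e = sum(p[i:i + e.shape[0], j:j + e.shape[1]]
                        for i in range(3) for j in range(3)) / 9.0
            return e

        def find_vertical_seam(e):
            """Dynamic programming: cheapest 8-connected top-to-bottom path."""
            h, w = e.shape
            cost = e.copy()
            back = np.zeros((h, w), dtype=int)
            for i in range(1, h):
                for j in range(w):
                    lo, hi = max(j - 1, 0), min(j + 2, w)
                    k = lo + np.argmin(cost[i - 1, lo:hi])
                    back[i, j] = k
                    cost[i, j] += cost[i - 1, k]
            seam = np.zeros(h, dtype=int)
            seam[-1] = int(np.argmin(cost[-1]))
            for i in range(h - 2, -1, -1):
                seam[i] = back[i + 1, seam[i + 1]]
            return seam

        def carve_one_seam(img):
            """Remove one vertical seam from a grayscale or RGB image."""
            gray = img.mean(axis=2) if img.ndim == 3 else img
            seam = find_vertical_seam(energy_map(gray))
            h, w = gray.shape
            keep = np.ones((h, w), dtype=bool)
            keep[np.arange(h), seam] = False
            return (img[keep].reshape(h, w - 1, -1) if img.ndim == 3
                    else img[keep].reshape(h, w - 1))

    Repeatedly calling carve_one_seam narrows the image one column at a time; a different energy definition only changes energy_map, while the dynamic program that selects the seam stays the same.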

    Deformation analysis and its application in image editing.

    Get PDF
    Jiang, Lei. Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. Includes bibliographical references (p. 68-75). Abstracts in English and Chinese.
    Chapter 1: Introduction
    Chapter 2: Background and Motivation
      2.1 Foreshortening
        2.1.1 Vanishing Point
        2.1.2 Metric Rectification
      2.2 Content Aware Image Resizing
      2.3 Texture Deformation
        2.3.1 Shape from texture
        2.3.2 Shape from lattice
    Chapter 3: Resizing on Facade
      3.1 Introduction
      3.2 Related Work
      3.3 Algorithm
        3.3.1 Facade Detection
        3.3.2 Facade Resizing
      3.4 Results
    Chapter 4: Cell Texture Editing
      4.1 Introduction
      4.2 Related Work
      4.3 Our Approach
        4.3.1 Cell Detection
        4.3.2 Local Affine Estimation
        4.3.3 Affine Transformation Field
      4.4 Photo Editing Applications
      4.5 Discussion
    Chapter 5: Conclusion
    Bibliography

    Pseudo-Dolly-In Video Generation Combining 3D Modeling and Image Reconstruction

    Get PDF
    This paper proposes a pseudo-dolly-in video generation method that reproduces motion parallax by applying image reconstruction processing to multi-view videos. Because dolly-in video is captured by physically moving a camera forward, it reproduces motion parallax and conveys a sense of immersion; at a sporting event in a large-scale space, however, moving a camera is difficult. Our research generates dolly-in video from multi-view images captured by fixed cameras. Dolly-in video can be generated by applying Image-Based Modeling, but the video quality is often degraded by 3D estimation error. Bullet-Time, on the other hand, provides high-quality video observation, yet moving the virtual viewpoint away from the capturing positions is difficult. To solve these problems, we propose a method that generates pseudo-dolly-in images by incorporating 3D estimation and image reconstruction techniques into Bullet-Time, and we show its effectiveness by applying it to multi-view videos captured at an actual soccer stadium. In the experiment, we compared the proposed method with digital zoom images and with dolly-in video generated by an Image-Based Modeling and Rendering method.
    Published in: 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct). Date of Conference: 9-13 Oct. 2017. Conference Location: Nantes, France.
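    To illustrate why moving the viewpoint forward produces motion parallax, the sketch below warps a single image toward a camera translated along its optical axis using a per-pixel depth estimate. It is a conceptual illustration only, not the paper's multi-view Bullet-Time pipeline; the intrinsic matrix K, the depth map, and the nearest-neighbour forward splatting are all assumptions introduced for this example.

        # Conceptual sketch: synthesize a "dolly-in" view by pushing the camera
        # forward along its optical axis, given per-pixel depth. NOT the paper's
        # multi-view pipeline; K, depth, and the splatting scheme are assumptions.
        import numpy as np

        def pseudo_dolly_in(img, depth, K, forward):
            """img: (H, W, 3), depth: (H, W) metric depth, K: 3x3 intrinsics,
            forward: distance to move the camera along +z (toward the scene)."""
            h, w = depth.shape
            vs, us = np.mgrid[0:h, 0:w]
            pix = np.stack([us, vs, np.ones_like(us)], axis=-1).reshape(-1, 3).T
            rays = np.linalg.inv(K) @ pix               # back-project pixel rays
            pts = rays * depth.reshape(1, -1)           # 3D points, camera frame
            pts[2] -= forward                           # camera moved forward
            proj = K @ pts                              # re-project to new view
            valid = proj[2] > 1e-6                      # still in front of camera
            z = np.where(valid, proj[2], 1.0)
            u2 = np.round(proj[0] / z).astype(int)
            v2 = np.round(proj[1] / z).astype(int)
            inside = valid & (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h)
            out = np.zeros_like(img)
            zbuf = np.full((h, w), np.inf)
            src = img.reshape(-1, 3)
            # Nearest-neighbour forward splatting with a z-buffer (closest wins).
            for idx in np.flatnonzero(inside):
                if proj[2, idx] < zbuf[v2[idx], u2[idx]]:
                    zbuf[v2[idx], u2[idx]] = proj[2, idx]
                    out[v2[idx], u2[idx]] = src[idx]
            return out  # holes remain where no source pixel lands (disocclusions)

    Nearer points shift more than distant ones under the same forward translation, which is the parallax cue a digital zoom cannot provide; the disocclusion holes left by the warp are the kind of artifact an image reconstruction step must then address.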

    Hybrid LSTM and Encoder-Decoder Architecture for Detection of Image Forgeries

    Full text link
    With advanced image journaling tools, one can easily alter the semantic meaning of an image through manipulation techniques such as copy-clone, object splicing, and object removal, which can mislead viewers. Identifying these manipulations is a very challenging task because manipulated regions are not visually apparent. This paper proposes a high-confidence manipulation-localization architecture that uses resampling features, Long Short-Term Memory (LSTM) cells, and an encoder-decoder network to segment manipulated regions from non-manipulated ones. Resampling features capture artifacts such as JPEG quality loss, upsampling, downsampling, rotation, and shearing. By combining the encoder and the LSTM network, the proposed architecture exploits larger receptive fields (spatial maps) and frequency-domain correlation to analyze the discriminative characteristics between manipulated and non-manipulated regions. Finally, the decoder network learns the mapping from low-resolution feature maps to pixel-wise predictions for image tamper localization. Using the predicted mask provided by the final (softmax) layer, end-to-end training learns the network parameters through back-propagation against ground-truth masks. Furthermore, a large image splicing dataset is introduced to guide the training process. The proposed method localizes image manipulations at the pixel level with high precision, which is demonstrated through rigorous experimentation on three diverse datasets.
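    A minimal sketch of the kind of hybrid architecture the abstract outlines is given below in PyTorch: a convolutional encoder, an LSTM that scans the coarse feature map as a sequence of spatial positions, and a deconvolutional decoder that predicts a two-class (pristine / tampered) mask. The layer sizes, the row-major scan order, and the omission of the resampling-feature branch are assumptions made for brevity, not the authors' exact network.

        # Hypothetical hybrid LSTM + encoder-decoder segmenter for tamper
        # localization, loosely following the abstract; sizes are assumptions.
        import torch
        import torch.nn as nn

        class TamperLocalizer(nn.Module):
            def __init__(self, hidden=128):
                super().__init__()
                # Encoder: downsample the image to a coarse spatial feature map.
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
                )
                # LSTM scans the feature map as a sequence of spatial positions
                # to capture long-range correlations between patches.
                self.lstm = nn.LSTM(input_size=64, hidden_size=hidden,
                                    batch_first=True)
                # Decoder: upsample back to input resolution, 2 output classes.
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(hidden, 64, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),
                )

            def forward(self, x):
                f = self.encoder(x)                    # (B, 64, H/8, W/8)
                b, c, h, w = f.shape
                seq = f.flatten(2).transpose(1, 2)     # (B, N, 64), row-major scan
                seq, _ = self.lstm(seq)                # (B, N, hidden)
                f = seq.transpose(1, 2).reshape(b, -1, h, w)
                return self.decoder(f)                 # (B, 2, H, W) logits

        # Usage: train with pixel-wise cross-entropy against ground-truth masks;
        # softmax at inference gives the per-pixel tamper probability.
        model = TamperLocalizer()
        probs = model(torch.randn(1, 3, 256, 256)).softmax(dim=1)

    Training would minimize pixel-wise cross-entropy between the logits and the ground-truth masks, with softmax applied at inference to obtain the per-pixel tamper probability map the abstract refers to.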

    On Layered Area-Proportional Rectangle Contact Representations

    Full text link
    A pair $\langle G_0, G_1 \rangle$ of graphs admits a mutual witness proximity drawing $\langle \Gamma_0, \Gamma_1 \rangle$ when: (i) $\Gamma_i$ represents $G_i$, and (ii) there is an edge $(u,v)$ in $\Gamma_i$ if and only if there is no vertex $w$ in $\Gamma_{1-i}$ that is "too close" to both $u$ and $v$ ($i=0,1$). In this paper, we consider infinitely many definitions of closeness by adopting the $\beta$-proximity rule for any $\beta \in [1,\infty]$ and study pairs of isomorphic trees that admit a mutual witness $\beta$-proximity drawing. Specifically, we show that every two isomorphic trees admit a mutual witness $\beta$-proximity drawing for any $\beta \in [1,\infty]$. The constructive technique can be made "robust": for some tree pairs we can suitably prune linearly many leaves from one of the two trees and still retain their mutual witness $\beta$-proximity drawability. Notably, in the special case of isomorphic caterpillars and $\beta = 1$, we construct linearly separable mutual witness Gabriel drawings.
    Comment: Appears in the Proceedings of the 18th International Conference and Workshops on Algorithms and Computation (WALCOM 2024).
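    For context, "too close" is typically formalized with a lune-based $\beta$-region; the formulation below is the standard Kirkpatrick-Radke definition for $\beta \geq 1$ and is stated here as an assumption about the rule the paper adopts, not a quotation from it.

        % Assumed lune-based beta-region of points u and v (beta >= 1):
        % a witness w is "too close" to u and v when w lies in R_beta(u, v).
        R_{\beta}(u,v) =
          D\!\Bigl(\bigl(1-\tfrac{\beta}{2}\bigr)u+\tfrac{\beta}{2}v,\ \tfrac{\beta}{2}\,d(u,v)\Bigr)
          \;\cap\;
          D\!\Bigl(\tfrac{\beta}{2}u+\bigl(1-\tfrac{\beta}{2}\bigr)v,\ \tfrac{\beta}{2}\,d(u,v)\Bigr)

    where $D(c, r)$ denotes the closed disk of radius $r$ centered at $c$. Setting $\beta = 1$ gives the disk with diameter $uv$ (the Gabriel region), and larger $\beta$ gives larger lunes, so fewer witness-free pairs.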

    Texture and Colour in Image Analysis

    Get PDF
    Research in colour and texture has experienced major changes in the last few years. This book presents some recent advances in the field, specifically in the theory and applications of colour texture analysis. This volume also features benchmarks, comparative evaluations and reviews.