24 research outputs found

    Adaptation of Images and Videos for Different Screen Sizes

    With the increasing popularity of smartphones and similar mobile devices, the demand for media that can be consumed on the go is rising. As most images and videos today are captured in HD or even higher resolutions, they need to be adapted in a content-aware fashion before they can be watched comfortably on small screens with varying aspect ratios. This process is called retargeting. Most distortions during this process are caused by a change of the aspect ratio, so retargeting mainly focuses on adapting the aspect ratio of a video while the rest can be scaled uniformly. The main objective of this dissertation is to contribute to modern image and video retargeting, especially regarding the potential of the seam carving operator. There are still unsolved problems in this research field that need to be addressed in order to improve the quality of the results or speed up the retargeting process. This dissertation presents novel algorithms that retarget images, videos and stereoscopic videos while dealing with problems such as the preservation of straight lines and the reduction of the required memory space and computation time. Additionally, a GPU implementation is used to retarget videos in real time. Furthermore, an enhanced face detector is presented that distinguishes between faces that are important for the retargeting and faces that are not. Results show that the developed techniques are suitable for the intended scenarios.
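    The abstract builds on the seam carving operator without restating it. For reference, below is a minimal sketch of the classic operator (a dynamic-programming search for the cheapest 8-connected vertical seam over a per-pixel energy map, followed by seam removal). The function names and the simple gradient energy are illustrative choices, not the dissertation's implementation.

```python
import numpy as np

def gradient_energy(gray):
    """L1 gradient magnitude as a simple per-pixel energy (illustrative choice)."""
    gy, gx = np.gradient(gray.astype(np.float64))
    return np.abs(gx) + np.abs(gy)

def find_vertical_seam(energy):
    """Dynamic programming: cheapest 8-connected top-to-bottom path through
    an H x W energy map, returned as one column index per row."""
    h, w = energy.shape
    cost = energy.astype(np.float64).copy()
    for y in range(1, h):
        left  = np.r_[np.inf, cost[y - 1, :-1]]
        up    = cost[y - 1]
        right = np.r_[cost[y - 1, 1:], np.inf]
        cost[y] += np.minimum(np.minimum(left, up), right)
    seam = np.empty(h, dtype=np.int64)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam

def remove_vertical_seam(img, seam):
    """Delete one pixel per row, shrinking the width by one (gray or color)."""
    h, w = img.shape[:2]
    keep = np.ones((h, w), dtype=bool)
    keep[np.arange(h), seam] = False
    return img[keep].reshape(h, w - 1, *img.shape[2:])
```

    Repeatedly finding and removing seams narrows the image by one column per iteration; the dissertation's contributions (straight-line preservation, memory and runtime reduction, real-time GPU video seams, stereoscopic consistency) extend this basic operator.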

    Saliency-based image enhancement

    Ph.D. thesis (Doctor of Philosophy)

    Deformation analysis and its application in image editing.

    Jiang, Lei. Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. Includes bibliographical references (p. 68-75). Abstracts in English and Chinese.
    Contents:
    Chapter 1 - Introduction (p.1)
    Chapter 2 - Background and Motivation (p.5)
        2.1 Foreshortening (p.5)
            2.1.1 Vanishing Point (p.6)
            2.1.2 Metric Rectification (p.8)
        2.2 Content Aware Image Resizing (p.11)
        2.3 Texture Deformation (p.15)
            2.3.1 Shape from texture (p.16)
            2.3.2 Shape from lattice (p.18)
    Chapter 3 - Resizing on Facade (p.21)
        3.1 Introduction (p.21)
        3.2 Related Work (p.23)
        3.3 Algorithm (p.24)
            3.3.1 Facade Detection (p.25)
            3.3.2 Facade Resizing (p.32)
        3.4 Results (p.34)
    Chapter 4 - Cell Texture Editing (p.42)
        4.1 Introduction (p.42)
        4.2 Related Work (p.44)
        4.3 Our Approach (p.46)
            4.3.1 Cell Detection (p.47)
            4.3.2 Local Affine Estimation (p.49)
            4.3.3 Affine Transformation Field (p.52)
        4.4 Photo Editing Applications (p.55)
        4.5 Discussion (p.58)
    Chapter 5 - Conclusion (p.65)
    Bibliography (p.68)

    FrameBreak: Dramatic Image Extrapolation by Guided Shift-Maps

    Master's thesis (Master of Engineering)

    A Deep-structured Conditional Random Field Model for Object Silhouette Tracking

    In this work, we introduce a deep-structured conditional random field (DS-CRF) model for state-based object silhouette tracking. The proposed DS-CRF model consists of a series of state layers, where each state layer spatially characterizes the object silhouette at a particular point in time. The interactions between adjacent state layers are established by inter-layer connectivity that is dynamically determined based on inter-frame optical flow. By incorporating both spatial and temporal context in a dynamic fashion within such a deep-structured probabilistic graphical model, the proposed DS-CRF model allows us to develop a framework that can accurately and efficiently track object silhouettes that change greatly over time, as well as under situations such as occlusion and multiple targets within the scene. Experimental results on video surveillance datasets containing different scenarios such as occlusion and multiple targets show that the proposed DS-CRF approach provides strong object silhouette tracking performance compared to baseline methods such as mean-shift tracking, as well as state-of-the-art methods such as context tracking and boosted particle filtering. Comment: 17 pages.
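    The abstract says inter-layer connectivity is determined dynamically from inter-frame optical flow but does not spell out the rule. As a purely hypothetical illustration of that idea (not the paper's DS-CRF implementation), the sketch below uses OpenCV's Farneback dense flow to displace each pixel of frame t to the point in frame t+1 around which its inter-layer links would be centred; the function name and parameter values are assumptions.

```python
import cv2
import numpy as np

def flow_guided_link_centres(prev_gray, next_gray):
    """Illustrative only: dense Farneback optical flow between two consecutive
    grayscale frames, used to map every pixel of frame t to the location in
    frame t+1 around which its inter-layer connections could be placed."""
    # Positional arguments: prev, next, flow, pyr_scale, levels, winsize,
    # iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Displace each pixel by its flow vector and clamp to the image bounds.
    cx = np.clip(np.rint(xs + flow[..., 0]), 0, w - 1).astype(int)
    cy = np.clip(np.rint(ys + flow[..., 1]), 0, h - 1).astype(int)
    return cy, cx  # per-pixel centres of the flow-guided neighbourhoods
```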

    Light field image processing: an overview

    Light field imaging has emerged as a technology that allows us to capture richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information that is lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, and material classification. On the other hand, the high dimensionality of light fields also brings new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
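    Among the topics listed, post-capture refocusing has a particularly compact classical formulation: shift each sub-aperture view in proportion to its angular offset from the centre view, then average. The sketch below assumes a grayscale light field stored as a (U, V, H, W) array of sub-aperture images; the array layout, sign convention, and the parameter name alpha are illustrative assumptions, not taken from the survey.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def refocus(lightfield, alpha):
    """Shift-and-sum refocusing: translate each sub-aperture view in
    proportion to its angular offset from the centre view, then average.
    lightfield: (U, V, H, W) array of grayscale sub-aperture images.
    alpha:      refocusing slope (0 keeps the captured focal plane)."""
    U, V, H, W = lightfield.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Sub-pixel translation proportional to the angular offset.
            dy, dx = alpha * (u - u0), alpha * (v - v0)
            out += subpixel_shift(lightfield[u, v], (dy, dx),
                                  order=1, mode='nearest')
    return out / (U * V)
```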

    Saliency Tree: A Novel Saliency Detection Framework
