6 research outputs found

    Bitmap or Vector? A study on sketch representations for deep stroke segmentation

    Get PDF
    Deep learning achieves impressive performance on image segmentation, which has motivated the recent development of deep neural networks for the related task of sketch segmentation, where the goal is to assign labels to the different strokes that compose a line drawing. However, while natural images are well represented as bitmaps, line drawings can also be represented as vector graphics, such as point sequences and point clouds. In addition to offering different trade-offs on resolution and storage, vector representations often come with additional information, such as stroke ordering and speed. In this paper, we evaluate three crucial design choices for sketch segmentation using deep learning: which sketch representation to use, which information to encode in this representation, and which loss function to optimize. Our findings suggest that point clouds represent a competitive alternative to bitmaps for sketch segmentation, and that providing extra-geometric information improves performance.
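
    As a concrete illustration of the representation question discussed in this abstract, the sketch below (hypothetical code, not taken from the paper) converts a vector drawing stored as timed per-stroke point sequences into a point cloud whose points carry extra-geometric features such as drawing speed and stroke order. All names and the exact feature set are assumptions for illustration only.

        import numpy as np

        def strokes_to_point_cloud(strokes):
            """strokes: list of (N_i, 3) arrays with columns (x, y, t) for one drawing.
            Returns an (N, 5) array with columns (x, y, speed, stroke_order, arc_len)."""
            points = []
            for order, stroke in enumerate(strokes):
                xy = stroke[:, :2]
                t = stroke[:, 2]
                # Finite-difference drawing speed along the stroke (an extra-geometric cue).
                d_xy = np.diff(xy, axis=0, prepend=xy[:1])
                d_t = np.maximum(np.diff(t, prepend=t[:1]), 1e-6)
                speed = np.linalg.norm(d_xy, axis=1) / d_t
                # Normalised arc length along the stroke (0 at the start, 1 at the end).
                arc = np.cumsum(np.linalg.norm(d_xy, axis=1))
                arc = arc / max(float(arc[-1]), 1e-6)
                order_col = np.full(len(stroke), order, dtype=float)
                points.append(np.column_stack([xy, speed, order_col, arc]))
            return np.concatenate(points, axis=0)

        # Example: two short strokes, drawn one after the other.
        stroke_a = np.array([[0.0, 0.0, 0.00], [1.0, 0.0, 0.10], [2.0, 0.0, 0.25]])
        stroke_b = np.array([[1.0, 1.0, 1.00], [1.0, 0.0, 1.20]])
        cloud = strokes_to_point_cloud([stroke_a, stroke_b])
        print(cloud.shape)  # (5, 5): geometry plus extra-geometric features per point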

    Fast character modeling with sketch-based PDE surfaces

    Get PDF
    Virtual characters are 3D geometric models of characters with many applications in multimedia. In this paper, we propose a new physics-based deformation method and an efficient character modeling framework for the creation of detailed 3D virtual character models. Our physics-based deformation method uses PDE surfaces, where PDE stands for partial differential equation and PDE surfaces are defined as sculpting-force-driven shape representations of interpolation surfaces. Interpolation surfaces are obtained by interpolating key cross-section profile curves, and the sculpting-force-driven shape representation uses an analytical solution to a vector-valued partial differential equation involving sculpting forces to quickly obtain deformed shapes. Our character modeling framework consists of global modeling and local modeling. Global modeling, also called model building, is the process of quickly creating a whole character model with sketch-guided and template-based modeling techniques. Local modeling efficiently produces local details that improve the realism of the created character model, using four shape manipulation techniques. Sketch-guided global modeling generates a character model from three different levels of sketched profile curves, called primary, secondary and key cross-section curves, in three orthographic views. Template-based global modeling obtains a new character model by deforming a template model to match the three levels of profile curves. Four shape manipulation techniques for local modeling are investigated and integrated into the new framework: partial differential equation-based shape manipulation, generalized elliptic curve-driven shape manipulation, sketch-assisted shape manipulation, and template-based shape manipulation. These local modeling techniques offer both global and local shape control and are efficient for local shape manipulation. The final character models are represented as a collection of surfaces modeled with two types of geometric entities: generalized elliptic curves (GECs) and partial differential equation-based surfaces. Our experiments indicate that the proposed modeling approach can build detailed and realistic character models easily and quickly.
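
    The abstract describes PDE surfaces only in words. As a point of reference, a standard fourth-order form commonly used for PDE surfaces with a sculpting-force right-hand side (stated here as an assumption in the Bloor–Wilson style, not necessarily the exact equation of this paper) can be written as

        \left( \frac{\partial^{2}}{\partial u^{2}} + a^{2} \frac{\partial^{2}}{\partial v^{2}} \right)^{2} \mathbf{X}(u, v) = \mathbf{F}(u, v)

    where X(u, v) = (x, y, z) is the surface patch over parameters (u, v), a is a smoothing parameter, F(u, v) is the vector-valued sculpting force, and the boundary conditions are taken from the interpolated cross-section profile curves.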

    High Dynamic Range Imaging: Problems of Video Exposure Bracketing, Luminance Calibration and Gloss Editing

    Get PDF
    Two-dimensional, conventional images are gradually losing their hegemony, leaving room for novel formats. Among these, 8-bit images are giving way to high dynamic range (HDR) image formats, which improve the colour gamut and the visibility of details in dark and bright areas, leading to a more immersive viewing experience. This opens up wide opportunities for post-processing, which can be useful for artistic rendering, enhancement of the viewing experience, or medical applications. Simultaneously, light-field scene representation is also gaining importance, propelled by the recent reappearance of virtual reality and the improvement of acquisition techniques as well as computational and storage capabilities. Light-field data likewise allows a broad range of effects to be achieved in post-production: among others, it enables a change of camera position, aperture, or focal length. It facilitates object insertion and simplifies the visual effects workflow by integrating the 3D nature of visual effects with the 3D nature of light fields. Content generation is one of the stumbling blocks in these realms. Sensor limitations of a conventional camera do not allow capturing a wide dynamic range. This is especially the case for mobile devices, where small sensors are optimised for capturing at high resolution. The “HDR mode” often encountered on such devices relies on a technique called “exposure fusion” and allows the limited range of the sensor to be partially overcome. HDR video, meanwhile, remains a challenging problem. We suggest a solution for HDR video capture on a mobile device. We analyse the dynamic range of motion regions, the regions that are most prone to reconstruction artefacts, and suggest a real-time exposure selection algorithm. Further, an HDR content visualization task often requires its input to be in absolute values. We address this problem by presenting a calibration algorithm that can be applied to existing imagery and does not require any additional measurement hardware. Finally, as the use of light fields becomes more common, a key challenge is the ability to edit or modify the appearance of the objects in a light field. To this end, we propose a multidimensional filtering approach in which specular highlights are filtered in the spatial and angular domains to target a desired increase of material roughness.
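
    Since the summary above mentions exposure bracketing without details, the following sketch illustrates the general idea of merging a bracketed stack into a linear HDR estimate. It is a generic, textbook-style example under the stated assumptions, not the thesis' algorithm.

        import numpy as np

        # Minimal sketch: merge a bracketed exposure stack into a linear HDR
        # radiance estimate, assuming already-linearised sensor values in [0, 1]
        # and known exposure times.
        def merge_exposures(images, exposure_times):
            """images: list of float arrays in [0, 1] (linear), same shape.
            exposure_times: matching list of exposure times in seconds."""
            num = np.zeros_like(images[0], dtype=np.float64)
            den = np.zeros_like(images[0], dtype=np.float64)
            for img, t in zip(images, exposure_times):
                # Triangle weight: trust mid-range pixels, distrust near-black/near-white.
                w = 1.0 - np.abs(2.0 * img - 1.0)
                num += w * img / t  # radiance estimate contributed by this exposure
                den += w
            return num / np.maximum(den, 1e-6)

        # Example: a short and a long exposure of the same (toy) scene.
        img_short = np.array([[0.02, 0.50], [0.90, 0.10]])
        img_long = np.array([[0.20, 0.99], [0.99, 0.80]])
        hdr = merge_exposures([img_short, img_long], [0.01, 0.1])
        print(hdr)  # per-pixel relative radiance, no longer clipped to one exposure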

    Sky Based Light Metering for High Dynamic Range Images

    No full text
    Image calibration requires both linearization of pixel values and scaling so that values in the image correspond to real-world luminances. In this paper we focus on the latter and, rather than rely on camera characterization, we calibrate images by analysing their content and metadata, obviating the need for expensive measuring devices or modeling of lens and camera combinations. Our analysis correlates sky pixel values to the luminances that would be expected based on geographical metadata. Combined with high dynamic range (HDR) imaging, which gives us linear pixel data, our algorithm allows us to find absolute luminance values for each pixel, effectively turning digital cameras into absolute light meters. To validate our algorithm we have collected and annotated a calibrated set of HDR images and compared our estimation with several other approaches, showing that our approach is able to more accurately recover absolute luminance. We discuss various applications and demonstrate the utility of our method in the context of calibrated color appearance reproduction and lighting design.
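
    To make the scaling step concrete, the toy sketch below shows how a single global scale factor could map relative HDR luminance to absolute values by anchoring detected sky pixels to an expected sky luminance. The single-factor assumption and the expected value are illustrative placeholders, not the paper's actual procedure.

        import numpy as np

        # Illustrative only: scale linear HDR data to absolute luminance by matching
        # the median of detected sky pixels to an expected sky luminance derived
        # from metadata (e.g. a clear-sky model for the capture time and location).
        def calibrate_to_absolute(hdr_luminance, sky_mask, expected_sky_cd_m2):
            """hdr_luminance: linear relative luminance image.
            sky_mask: boolean mask of sky pixels.
            expected_sky_cd_m2: expected sky luminance in cd/m^2 (assumed input)."""
            sky_median = float(np.median(hdr_luminance[sky_mask]))
            scale = expected_sky_cd_m2 / max(sky_median, 1e-9)
            return hdr_luminance * scale  # approximate absolute luminance in cd/m^2

        # Toy example: top row is sky, bottom row is ground.
        rel = np.array([[0.8, 0.9], [0.1, 0.2]])
        mask = np.array([[True, True], [False, False]])
        abs_lum = calibrate_to_absolute(rel, mask, expected_sky_cd_m2=8000.0)
        print(abs_lum)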

    Gloss Editing in Light Fields

    No full text

    Motion Aware Exposure Bracketing for HDR Video

    No full text
    Mobile phones and tablets are rapidly gaining significance as omnipresent image and video capture devices. In this context, we present an algorithm that allows such devices to capture high dynamic range (HDR) video. The design of the algorithm was informed by a perceptual study that assesses the relative importance of motion and dynamic range. We found that ghosting artefacts are more visually disturbing than a reduction in dynamic range, even if a comparable number of pixels is affected by each. We incorporated these findings into a real-time, adaptive metering algorithm that seamlessly adjusts its settings to take exposures that will lead to minimal visual artefacts after recombination into an HDR sequence. It is uniquely suitable for real-time selection of exposure settings. Finally, we present an off-line HDR reconstruction algorithm that is matched to the adaptive nature of our real-time metering approach.
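
    The abstract describes the adaptive metering idea in words only. The toy sketch below shows one way such an exposure-selection heuristic could look, under the assumption of a single global EV adjustment driven by the mean level of the motion region; it is not the paper's algorithm, and all names and thresholds are invented for illustration.

        import numpy as np

        # Hedged sketch of the idea above: choose the next exposure so that pixels
        # inside detected motion regions stay well exposed, since ghosting there is
        # the most objectionable artefact.
        def select_next_exposure(frame, motion_mask, current_ev, target=0.45):
            """frame: linear luminance of the latest frame in [0, 1].
            motion_mask: boolean mask of moving regions (e.g. from frame differencing).
            current_ev: current exposure value; returns an adjusted EV."""
            region = frame[motion_mask] if motion_mask.any() else frame.ravel()
            mean_level = float(np.mean(region))
            # Shift EV so the motion region's mean level moves toward the target.
            ev_shift = np.log2(target / max(mean_level, 1e-4))
            # Clamp the per-frame change to keep the bracketing sequence stable.
            return current_ev + float(np.clip(ev_shift, -1.0, 1.0))

        # Example: a dark moving region suggests raising exposure by up to 1 EV.
        frame = np.full((4, 4), 0.5)
        frame[1:3, 1:3] = 0.05
        motion = np.zeros((4, 4), dtype=bool)
        motion[1:3, 1:3] = True
        print(select_next_exposure(frame, motion, current_ev=0.0))  # ~ +1.0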