272,285 research outputs found

    INTELLIGENT EDITING ASSISTANT

    A technique is proposed for assisting with writing or editing an online document supported by a cloud-based content platform. An editing assistant service provided by the platform determines the writing style designated for the online document and selects a subset of the content in that document. The service then determines whether the subset is consistent with the designated writing style. If it is not, the service predicts one or more suggestions for editing the content in accordance with the writing style. Lastly, the service presents the one or more suggestions for editing to replace the subset of the content in the online document.
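
    As a rough illustration of the flow described above, the sketch below walks through the same steps in Python: read the designated writing style, check a subset of content against it, and, if the subset is inconsistent, predict replacement suggestions. Every name in it (determine_style, is_consistent, predict_suggestions, the contraction table) is a hypothetical stand-in, not the platform's actual API.

    # Minimal sketch of the described editing-assistant flow; all names and the
    # toy "formal style forbids contractions" rule are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Suggestion:
        original: str
        replacement: str
        reason: str

    CONTRACTIONS = {"don't": "do not", "can't": "cannot", "won't": "will not"}

    def determine_style(document_metadata: dict) -> str:
        # Step 1: read the writing style designated for the online document.
        return document_metadata.get("writing_style", "formal")

    def is_consistent(subset: str, style: str) -> bool:
        # Step 2: check whether the selected subset of content matches the style.
        if style == "formal":
            return not any(token in subset for token in CONTRACTIONS)
        return True

    def predict_suggestions(subset: str, style: str) -> list[Suggestion]:
        # Step 3: predict edits that bring the subset in line with the style.
        if style != "formal":
            return []
        return [Suggestion(informal, formal, "contractions are avoided in formal writing")
                for informal, formal in CONTRACTIONS.items() if informal in subset]

    def assist(document_metadata: dict, subset: str) -> list[Suggestion]:
        # Step 4: return suggestions only when the subset is inconsistent.
        style = determine_style(document_metadata)
        return [] if is_consistent(subset, style) else predict_suggestions(subset, style)

    if __name__ == "__main__":
        doc = {"writing_style": "formal"}
        for s in assist(doc, "We can't ship this until QA signs off."):
            print(f"Replace '{s.original}' with '{s.replacement}': {s.reason}")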

    Raman identification of cuneiform tablet pigments. Emphasis and colour technology in ancient Mesopotamian mid-third millennium

    In the modern age there are many ways to manage a written text, from bolding or underlining words with one's preferred PC editing software to animated GIFs and emoticons in the short texts of mobile messaging and social posts. The task is to catch the eye and rapidly convey the important message. Beyond the almost endless possibilities of high-tech displays, putting emphasis on a text written on a hard support relies mainly on changing the editing style: applying bold, italic or underline to selected words or phrases and exploiting the human eye's sensitivity to changes of brightness within a written text.

    Diverse Semantic Image Editing with Style Codes

    Semantic image editing requires inpainting pixels following a semantic map. It is a challenging task because the inpainted content must both be in harmony with its context and comply strictly with the semantic map. Most previous methods proposed for this task try to encode all of the information from the erased image; however, when an object such as a car is added to a scene, its style cannot be encoded from the context alone. On the other hand, models that can produce diverse generations struggle to output images with seamless boundaries between the generated and unerased parts. In addition, previous methods lack a mechanism to encode the styles of visible and partially visible objects differently for better performance. In this work, we propose a framework that encodes visible and partially visible objects with a novel mechanism to achieve consistency between the style encoding and the final generations. We compare extensively with previous conditional image generation and semantic image editing algorithms, and our experiments show that our method improves significantly over the state of the art, achieving better quantitative results while also providing diverse outputs. Please refer to the project web page for the released code and demo: https://github.com/hakansivuk/DivSem
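
    The style-code idea can be made concrete with a small sketch. Under the assumption that a style code is obtained by pooling encoder features over the visible (unerased) pixels of each semantic region, so that a fully erased object falls back to a zero "unknown" code that a generator could replace with a sampled style, a per-region encoder might look as follows. This is an illustrative assumption, not the DivSem implementation linked above.

    # Illustrative per-class style-code encoder for semantic image editing.
    import torch
    import torch.nn as nn

    class RegionStyleEncoder(nn.Module):
        def __init__(self, num_classes: int, style_dim: int = 64):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, style_dim, 3, padding=1), nn.ReLU(),
            )
            self.num_classes = num_classes

        def forward(self, image, semantic_map, visible_mask):
            # image:        (B, 3, H, W) partially erased input
            # semantic_map: (B, H, W) integer class labels
            # visible_mask: (B, 1, H, W) 1 for unerased pixels, 0 for erased
            feats = self.features(image)                       # (B, C, H, W)
            codes = []
            for c in range(self.num_classes):
                region = (semantic_map == c).unsqueeze(1).float() * visible_mask
                denom = region.sum(dim=(2, 3)).clamp(min=1.0)  # avoid division by zero
                # Average-pool features over the visible part of each region;
                # fully erased regions yield a zero code.
                code = (feats * region).sum(dim=(2, 3)) / denom
                codes.append(code)
            return torch.stack(codes, dim=1)                   # (B, num_classes, C)

    if __name__ == "__main__":
        enc = RegionStyleEncoder(num_classes=5)
        img = torch.randn(2, 3, 64, 64)
        sem = torch.randint(0, 5, (2, 64, 64))
        vis = (torch.rand(2, 1, 64, 64) > 0.3).float()
        print(enc(img, sem, vis).shape)  # torch.Size([2, 5, 64])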