
    Subjugating the Beast and the Angel: Suggestions of Dante's Inferno in "Altarwise by owl-light"

    ‘Altarwise by owl-light’ is one of Thomas’s most intransigent poems, an intricately woven text of images and symbols. Over the years it has generated a great variety of interpretations, ranging from the astrological to the Freudian to the Surrealistic. Reading the poem often involves a search for sources and the unravelling of references and allusions. For instance, in some of the sonnets’ most seemingly surreal lines, at the end of Sonnet V —

        Cross-stroked salt Adam to the frozen angel
        Pin-legged on pole-hills with a black medusa
        By waste seas where the white bear quoted Virgil
        And sirens singing from our lady’s sea-straw.

    — an unexpected source has been discovered by Walford Davies and Ralph Maud: the image of the ‘waste seas where the white bear quoted Virgil’ originates in an allegorical text by Anatole France entitled L’Île des Pingouins. There remains the problem of identifying the ‘frozen angel’ and the ‘black medusa’, and of piecing the elements together. This paper offers suggestions regarding these and other images by concentrating on allusions, in the poem, to Dante’s Inferno. In the process, it raises a previously unrecognised possibility in the core interpretation of the poem.

    Web-Based Image Retrieval Based on Keywords and Image Characteristics

    Search engines have been developed to help users find information, including images on the internet, more easily. Several image search engines operate on the internet, such as Google Image Search, which searches across URLs using text or color similarity as the query. Visually, the results obtained from these engines are sometimes irrelevant or unsorted.
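    One simple notion of the "color similarity" such an engine might use can be sketched as histogram intersection over quantized RGB colors. This is a minimal illustrative sketch, not the abstract's actual method; the pixel-list image representation and the 4-bins-per-channel quantization are assumptions.

    ```python
    from collections import Counter

    def color_histogram(pixels, bins=4):
        """Quantize each RGB channel into `bins` levels and count occurrences."""
        step = 256 // bins
        counts = Counter((r // step, g // step, b // step) for r, g, b in pixels)
        total = sum(counts.values())
        return {k: v / total for k, v in counts.items()}  # normalized to sum 1

    def histogram_intersection(h1, h2):
        """Similarity in [0, 1]: 1.0 means identical color distributions."""
        return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in set(h1) | set(h2))

    # Toy "images" as lists of (r, g, b) pixels (illustrative data).
    red_image = [(250, 10, 10)] * 100
    pink_image = [(250, 10, 10)] * 80 + [(250, 200, 200)] * 20
    blue_image = [(10, 10, 250)] * 100

    h_red = color_histogram(red_image)
    # A mostly-red image ranks closer to the red query than a blue one does.
    assert histogram_intersection(h_red, color_histogram(pink_image)) > \
           histogram_intersection(h_red, color_histogram(blue_image))
    ```

    Ranking a gallery by this score against a query image's histogram gives a crude color-based retrieval, which is one reason results can look visually irrelevant when color alone is the signal.
    
    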

    An Ontology based Text-to-Picture Multimedia m-Learning System

    Multimedia text-to-picture is the process of building a mental representation from words associated with images. From a research perspective, multimedia instructional message items are illustrations of material using words and pictures designed to promote user comprehension. Illustrations can be presented in static form, such as images, symbols, icons, figures, tables, charts, and maps, or in dynamic form, such as animation or video clips. Owing to the intuitiveness and vividness of visual illustration, many text-to-picture systems have been proposed in the literature, such as Word2Image and Chat with Illustrations, among others discussed in the literature review chapter of this thesis. However, these systems share some common limitations, especially in the images they present. The retrieved materials are not fully suitable for educational purposes: many are not context-based and do not take the needs of learners into consideration (i.e., they are general-purpose images). Manually finding the pedagogic images required to illustrate educational content for learners is inefficient and demands huge effort, making it a very challenging task. In addition, the available learning systems that mine text based on keyword or sentence selection provide incomplete pedagogic illustrations, because words and their semantically related terms are not considered when finding illustrations. In this dissertation, we propose new approaches based on a semantic conceptual graph and semantically distributed weights to mine optimal illustrations that match Arabic text in the children's story domain. We combine these approaches with best-keyword and best-sentence selection algorithms to improve the retrieval of images matching the Arabic text. Our findings show significant improvements in modelling Arabic vocabulary with the most meaningful images and the best coverage of the domain of discourse.
We also develop a mobile text-to-picture system with two novel features: (1) a conceptual graph visualization (CGV) and (2) a visual illustrative assessment. The CGV shows the relationships between terms associated with a picture, enabling learners to discover the semantic links between Arabic terms and improve their understanding of Arabic vocabulary. The assessment component allows the instructor to automatically follow up on learners' performance. Our experiments demonstrate the efficiency of our multimedia text-to-picture system in enhancing learners' knowledge and boosting their comprehension of Arabic vocabulary.
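The general idea behind "semantically distributed weights" over a concept graph can be sketched as follows: a query term keeps full weight, passes a decayed share to semantically related terms, and images are ranked by the total weight their tags accumulate. The graph contents, English terms, and the 0.5 decay factor here are illustrative assumptions, not the dissertation's actual model.

```python
# Toy concept graph: term -> semantically related terms (illustrative).
CONCEPT_GRAPH = {
    "lion": ["animal", "jungle"],
    "jungle": ["forest", "animal"],
}

def distribute_weights(query_terms, decay=0.5):
    """Give each query term weight 1.0 and related terms a decayed share."""
    weights = {}
    for term in query_terms:
        weights[term] = max(weights.get(term, 0.0), 1.0)
        for related in CONCEPT_GRAPH.get(term, []):
            weights[related] = max(weights.get(related, 0.0), decay)
    return weights

def best_illustration(query_terms, image_tags):
    """Pick the image whose tags accumulate the most semantic weight."""
    weights = distribute_weights(query_terms)
    return max(image_tags,
               key=lambda img: sum(weights.get(t, 0.0) for t in image_tags[img]))

# An image tagged "animal" still scores for the query "lion" via the graph.
images = {"img_lion.jpg": ["lion", "animal"], "img_city.jpg": ["city", "car"]}
assert best_illustration(["lion"], images) == "img_lion.jpg"
```

The point of spreading weight to neighbours is exactly the limitation the abstract identifies: keyword-only matching misses images tagged with related but non-identical terms.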

    Text-based Person Search in Full Images via Semantic-Driven Proposal Generation

    Finding target persons in full scene images from a query text description has important practical applications in intelligent video surveillance. However, unlike real-world scenarios, where bounding boxes are not available, existing text-based person retrieval methods mainly focus on cross-modal matching between the query text descriptions and a gallery of cropped pedestrian images. To close this gap, we study the problem of text-based person search in full images by proposing a new end-to-end learning framework that jointly optimizes the pedestrian detection, identification, and visual-semantic feature embedding tasks. To take full advantage of the query text, semantic features are leveraged to instruct the Region Proposal Network to pay more attention to the text-described proposals. In addition, a cross-scale visual-semantic embedding mechanism is used to improve performance. To validate the proposed method, we collect and annotate two large-scale benchmark datasets based on the widely adopted image-based person search datasets CUHK-SYSU and PRW. Comprehensive experiments on the two datasets show that, compared with baseline methods, our method achieves state-of-the-art performance.
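    The core cross-modal matching step such a system relies on can be sketched as follows: the query text and each detected pedestrian proposal are embedded into a shared vector space, and proposals are ranked by cosine similarity to the text embedding. The toy 3-dimensional vectors and proposal names below stand in for learned embeddings; the paper's actual framework learns these jointly with detection, end to end.

    ```python
    import math

    def cosine(u, v):
        """Cosine similarity between two equal-length vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) *
                      math.sqrt(sum(b * b for b in v)))

    def rank_proposals(text_emb, proposals):
        """proposals: {proposal_id: embedding}; return ids, best match first."""
        return sorted(proposals,
                      key=lambda p: cosine(text_emb, proposals[p]),
                      reverse=True)

    # Hypothetical embeddings: the query and two detected pedestrian boxes.
    query = [0.9, 0.1, 0.3]                    # e.g. text "man in a red jacket"
    proposals = {"box_a": [0.88, 0.15, 0.25],  # visually matches the query
                 "box_b": [0.05, 0.9, 0.7]}    # unrelated pedestrian
    assert rank_proposals(query, proposals)[0] == "box_a"
    ```

    In the full pipeline the same text embedding also biases the Region Proposal Network toward text-relevant regions, so matching is not purely a post-detection re-ranking step.
    
    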