85 research outputs found

    Segmentation et indexation d'objets complexes dans les images de bandes dessinées

    In this thesis, we review, highlight and illustrate the challenges related to comic book image analysis in order to give the reader a good overview of the latest research progress in this field and the current open issues. We propose three different approaches for comic book image analysis, each composed of several processing steps. The first approach is called "sequential" because the image content is described in an intuitive way, from simple to complex elements, using previously extracted elements to guide further processing. Simple elements such as panels, text and balloons are extracted first, followed by the balloon tails and then the comic character positions within the panels. The second approach extracts information independently in order to overcome the main drawback of the first approach: error propagation. This second method is called "independent" because it is composed of specific extractors for each element of the image, without any dependence between them. Extra processing such as balloon type classification and text recognition is also covered. The third approach introduces a knowledge-driven and scalable system for comics image understanding. This system, called the "expert system", is composed of an inference engine and two models, one for the comics domain and another for image processing, stored in an ontology. It combines the benefits of the first two approaches and enables high-level semantic description such as the reading order of panels and text, the relations between speech balloons and their speakers, and comic character identification
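    To make the "sequential" idea concrete, the sketch below extracts simple elements first (panels) and then searches for balloon candidates inside each panel. It is an illustrative OpenCV pipeline under assumed thresholds, not the thesis's actual method; the input file name page.png is hypothetical.

```python
# A minimal sketch of a sequential extraction pipeline: panels first, then
# balloons localised inside each panel. Thresholds are illustrative only.
import cv2

def extract_panels(page_gray, min_area_ratio=0.02):
    """Find panel bounding boxes as large connected regions of the page."""
    # Invert-threshold so panel borders and drawings become foreground.
    _, binary = cv2.threshold(page_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    page_area = page_gray.shape[0] * page_gray.shape[1]
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) > min_area_ratio * page_area]

def extract_balloons(page_gray, panel_box, min_area=500):
    """Within a panel, find bright, mostly white blobs as balloon candidates."""
    x, y, w, h = panel_box
    panel = page_gray[y:y + h, x:x + w]
    _, bright = cv2.threshold(panel, 230, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) > min_area]

if __name__ == "__main__":
    page = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    for panel in extract_panels(page):
        balloons = extract_balloons(page, panel)
        print(panel, len(balloons), "balloon candidate(s)")
```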

    Exploring digital comics as an edutainment tool: An overview

    This paper aims to explore the growing potential of digital comics and graphic novels as an edutainment tool. Initially, the evolution of the comics medium, along with academic and commercial initiatives in designing comicware systems, is briefly discussed. Central to this study, the methods and impact of utilizing this visual medium with embedded instructional content and student-generated comics in classroom settings are rationally outlined. By recognizing the emerging technologies available for supporting and accelerating educational comic development, this article addresses the diverse research challenges and opportunities of innovating effective strategies to enhance comics-integrated learning across disciplines

    Learner-generated comic (lgc): a production model

    Recent advancement of authoring tools has fostered widespread interest in using comics as a Digital Storytelling medium. This technology-integrated learning approach is known as Learner-Generated Comic (LGC) production, in which learners' knowledge and ideas on various subjects are synthesized in the form of a digital educational comic. Despite prior evidence for the didactic value of LGC production, most scholars do not emphasise a quality, theoretically supported, and strategic LGC production methodology that accommodates the interrelated key elements and production methods of LGC. As a result, there is a tendency to view LGC production as challenging and impractical. Essentially, there is a lack of conceptual models and methods that comprehensively address the crucial theories, elements, techniques, technologies, and systematic processes of LGC production. Within this context, this study proposes an LGC production model that serves as a systematic approach and includes the fundamental components for learners to produce digital educational comics. To accomplish this main aim, a number of sub-objectives are formed: (1) to determine the core components of the LGC production model, (2) to construct a systematic LGC production model based on the identified components, (3) to evaluate the proposed LGC production model, and (4) to assess the LGC products developed by users of the proposed model. This study adopts the Design Science Research methodology as the framework of the research process. Activities of LGC production model construction include a literature review and comparative study, expert consultation, and user participation. The proposed model is evaluated through user experience testing and expert review. Results from hypothesis testing conclude that the proposed LGC production model is significantly perceived as having quality in serving as a guideline for learners to design and develop digital educational comics. It was also found that the proposed model has been well accepted by local and international experts. In addition, assessment of the LGC products developed during the user experience testing indicates significant differences between LGC products developed by users of the proposed model and by non-users. In conclusion, adoption of a systematic, scholarly grounded, and authenticated LGC production model can contribute to the planning, implementation, and evaluation of Digital Storytelling sessions that enhance the learning experience through LGC design and development

    VISION AND NATURAL LANGUAGE FOR CREATIVE APPLICATIONS, AND THEIR ANALYSIS

    Recent advances in machine learning, specifically problems in Computer Vision and Natural Language, have involved training deep neural networks with enormous amounts of data. The first frontier for deep networks was in uni-modal classification and detection problems (which were directed more towards "intelligent robotics" and surveillance applications), while the next wave involves deploying deep networks on more creative tasks and common-sense reasoning. We provide two applications of these, interspersed by an analysis of these deep models. Automatic colorization is the process of adding color to greyscale images. We condition this process on language, allowing end users to manipulate a colorized image by feeding in different captions. We present two different architectures for language-conditioned colorization, both of which produce more accurate and plausible colorizations than a language-agnostic version. Through this language-based framework, we can dramatically alter colorizations by manipulating descriptive color words in captions. Researchers have observed that Visual Question Answering (VQA) models tend to answer questions by learning statistical biases in the data (for example, the answer to the question "What is the color of the sky?" is usually "Blue"). It is of interest to the community to explicitly discover such biases, both for understanding the behavior of such models and for debugging them. In a database, we store the words of the question, the answer, and visual words corresponding to regions of interest in attention maps. By running simple rule-mining algorithms on this database, we discover human-interpretable rules which give us great insight into the behavior of such models. Our results also show examples of unusual behaviors learned by the model in attempting VQA tasks. Visual narrative is often a combination of explicit information and judicious omissions, relying on the viewer to supply missing details. In comics, most movements in time and space are hidden in the gutters between panels. To follow the story, readers logically connect panels together by inferring unseen actions through a process called closure. While computers can now describe what is explicitly depicted in natural images, we examine whether they can understand the closure-driven narratives conveyed by stylized artwork and dialogue in comic book panels. We construct a dataset, COMICS, that consists of over 1.2 million panels (120 GB) paired with automatic textbox transcriptions. An in-depth analysis of COMICS demonstrates that neither text nor image alone can tell a comic book story, so a computer must understand both modalities to keep up with the plot. We introduce three cloze-style tasks that ask models to predict narrative and character-centric aspects of a panel given n preceding panels as context. Various deep neural architectures underperform human baselines on these tasks, suggesting that COMICS contains fundamental challenges for both vision and language. For many NLP tasks, ordered models, which explicitly encode word order information, do not significantly outperform unordered (bag-of-words) models. One potential explanation is that the tasks themselves do not require word order to solve. To test whether this explanation is valid, we perform several time-controlled human experiments with scrambled language inputs. We compare human accuracies to those of both ordered and unordered neural models. Our results contradict the initial hypothesis, suggesting instead that humans may be less robust to word order variation than computers
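    As an illustration of the rule-mining step described above, the sketch below treats each VQA example as a transaction of question words and its answer and surfaces high-confidence word-to-answer rules. The toy data, thresholds, and exact mining procedure are assumptions for illustration, not the paper's implementation.

```python
# A minimal sketch of mining question-word -> answer rules from a table of
# (question words, answer) transactions. Support/confidence cutoffs are
# illustrative; the real work also stores attended visual words.
from collections import Counter
from itertools import combinations

def mine_rules(examples, min_support=2, min_confidence=0.6):
    """examples: list of (question_words, answer) transactions."""
    pair_counts = Counter()        # (antecedent word set, answer) -> count
    antecedent_counts = Counter()  # antecedent word set -> count
    for words, answer in examples:
        for size in (1, 2):
            for subset in combinations(sorted(set(words)), size):
                key = frozenset(subset)
                antecedent_counts[key] += 1
                pair_counts[(key, answer)] += 1
    rules = []
    for (antecedent, answer), support in pair_counts.items():
        confidence = support / antecedent_counts[antecedent]
        if support >= min_support and confidence >= min_confidence:
            rules.append((set(antecedent), answer, support, confidence))
    return sorted(rules, key=lambda r: (-r[3], -r[2]))

# Toy transactions (hypothetical model predictions, not real data).
examples = [
    ({"what", "color", "sky"}, "blue"),
    ({"what", "color", "sky"}, "blue"),
    ({"what", "color", "banana"}, "yellow"),
    ({"how", "many", "dogs"}, "2"),
    ({"what", "color", "sky"}, "grey"),
]
for antecedent, answer, support, confidence in mine_rules(examples):
    print(f"{antecedent} -> {answer}  (support={support}, conf={confidence:.2f})")
```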

    Discourse-Level Language Understanding with Deep Learning

    Designing computational models that can understand language at a human level is a foundational goal in the field of natural language processing (NLP). Given a sentence, machines are capable of translating it into many different languages, generating a corresponding syntactic parse tree, marking words that refer to people or places, and much more. These tasks are solved by statistical machine learning algorithms, which leverage patterns in large datasets to build predictive models. Many recent advances in NLP are due to deep learning models (parameterized as neural networks), which bypass user-specified features in favor of building representations of language directly from the text. Despite many deep learning-fueled advances at the word and sentence level, however, computers still struggle to understand high-level discourse structure in language, or the way in which authors combine and order different units of text (e.g., sentences, paragraphs, chapters) to express a coherent message or narrative. Part of the reason is data-related, as there are no existing datasets for many contextual language-based problems, and some tasks are too complex to be framed as supervised learning problems; for the latter type, we must either resort to unsupervised learning or devise training objectives that simulate the supervised setting. Another reason is architectural: neural networks designed for sentence-level tasks require additional functionality, interpretability, and efficiency to operate at the discourse level. In this thesis, I design deep learning architectures for three NLP tasks that require integrating information across high-level linguistic context: question answering, fictional relationship understanding, and comic book narrative modeling. While these tasks are very different from each other on the surface, I show that similar neural network modules can be used in each case to form contextual representations
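    The sketch below illustrates the general idea of a reusable contextual module: a small PyTorch encoder composes unit-level embeddings (sentences or panels) into a discourse-level vector that a task head can score candidates against. The architecture, dimensions, and class names are assumptions for illustration, not the thesis's exact models.

```python
# A minimal sketch of a shared context encoder reused by a task-specific head.
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Encode a sequence of unit embeddings into a single context vector."""
    def __init__(self, unit_dim=300, hidden_dim=256):
        super().__init__()
        self.rnn = nn.GRU(unit_dim, hidden_dim, batch_first=True)

    def forward(self, units):              # units: (batch, seq_len, unit_dim)
        _, last_hidden = self.rnn(units)   # last_hidden: (1, batch, hidden_dim)
        return last_hidden.squeeze(0)      # (batch, hidden_dim)

class AnswerScorer(nn.Module):
    """Example task head: score candidate answers against the context."""
    def __init__(self, unit_dim=300, hidden_dim=256):
        super().__init__()
        self.encoder = ContextEncoder(unit_dim, hidden_dim)
        self.project = nn.Linear(hidden_dim, unit_dim)

    def forward(self, context_units, candidate_embeddings):
        context = self.project(self.encoder(context_units))  # (batch, unit_dim)
        # Dot-product similarity between the context and each candidate.
        return torch.einsum("bd,bcd->bc", context, candidate_embeddings)

# Toy usage with random tensors standing in for pretrained embeddings.
scorer = AnswerScorer()
context = torch.randn(4, 7, 300)       # 4 examples, 7 context units each
candidates = torch.randn(4, 3, 300)    # 3 candidate answers per example
print(scorer(context, candidates).shape)  # torch.Size([4, 3])
```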

    Image retrieval using automatic region tagging

    The task of tagging, annotating or labelling image content automatically with semantic keywords is a challenging problem. Automatically tagging images semantically based on the objects that they contain is essential for image retrieval. In addressing these problems, we explore techniques developed to combine textual descriptions of images with visual features, automatic region tagging and region-based ontology image retrieval. To evaluate the techniques, we use three corpora: Lonely Planet travel guide articles with images, Wikipedia articles with images, and Goats comic strips. In searching for similar images or textual information specified in a query, we explore the unification of textual descriptions and visual features (such as colour and texture) of the images. We compare the effectiveness of using different retrieval similarity measures for the textual component. We also analyse the effectiveness of different visual features extracted from the images. We then investigate the best weight combination of textual and visual features. Using the queries from the Multimedia Track of INEX 2005 and 2006, we found that the best weight combination significantly improves the effectiveness of the retrieval system. Our findings suggest that image regions are better at capturing the semantics, since we can identify specific regions of interest in an image. In this context, we develop a technique to tag image regions with high-level semantics. This is done by combining several shape feature descriptors and colour, using an equal-weight linear combination. We experimentally compare this technique with more complex machine-learning algorithms, and show that the equal-weight linear combination of shape features is simpler and at least as effective as using a machine learning algorithm. We focus on the synergy between ontology and image annotations with the aim of reducing the gap between image features and high-level semantics. Ontologies ease information retrieval. They are used to mine, interpret, and organise knowledge. An ontology may be seen as a knowledge base that can be used to improve the image retrieval process, and conversely keywords obtained from automatic tagging of image regions may be useful for creating an ontology. We engineer an ontology that surrogates concepts derived from image feature descriptors. We test the usability of the constructed ontology by querying it via the Visual Ontology Query Interface, which has a formally specified grammar known as the Visual Ontology Query Language. We show that synergy between ontology and image annotations is possible and that this method can reduce the gap between image features and high-level semantics by providing the relationships between objects in the image. In this thesis, we conclude that suitable techniques for image retrieval include fusing text accompanying the images with visual features, automatic region tagging and using an ontology to enrich the semantic meaning of the tagged image regions
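    The weighted combination of textual and visual evidence described above can be sketched as a simple late-fusion ranker, as below. The feature vectors, similarity measure, and weight are illustrative assumptions rather than the thesis's tuned configuration.

```python
# A minimal late-fusion sketch: rank documents by a weighted sum of a textual
# similarity and a visual similarity. Stand-in random features, not real data.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def fused_score(query, doc, text_weight=0.7):
    """Weighted combination of textual and visual similarities."""
    text_sim = cosine(query["text"], doc["text"])        # e.g. tf-idf vectors
    visual_sim = cosine(query["visual"], doc["visual"])  # e.g. colour/texture histograms
    return text_weight * text_sim + (1.0 - text_weight) * visual_sim

def rank(query, collection, text_weight=0.7):
    scored = [(fused_score(query, doc, text_weight), doc["id"]) for doc in collection]
    return sorted(scored, reverse=True)

# Toy corpus with random stand-in feature vectors.
rng = np.random.default_rng(0)
collection = [{"id": i, "text": rng.random(50), "visual": rng.random(32)}
              for i in range(5)]
query = {"text": rng.random(50), "visual": rng.random(32)}
print(rank(query, collection))
```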