
    Transforming a linear module into an adaptive one : tackling the challenge

    Every learner is fundamentally different. However, few courses are delivered in a way that is tailored to the specific needs of each student. Delivery systems for adaptive educational hypermedia have been extensively researched and found promising. Still, authoring of adaptive courses remains a challenge. In prior research, we built an adaptive hypermedia authoring system, MOT3.0. The main focus was on enhancing the type of functionality that allows the non-technical author to use such a tool efficiently and effectively. Here we show how teachers can start from existing course material and transform it into an adaptive course catering for various learners. We also show how this apparent simplicity still allows for building flexible and complex adaptation, and describe an evaluation with course authors.
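
    A hypothetical illustration of the kind of condition-action adaptation such an authored course might express: alternative content variants per concept, selected by a simple learner model. The attribute names, variant keys, and file names below are invented for the example and are not MOT3.0's actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class Learner:
    style: str = "visual"    # e.g. "visual" or "verbal"; invented attribute
    level: str = "beginner"  # e.g. "beginner" or "advanced"; invented attribute


@dataclass
class Concept:
    name: str
    variants: dict = field(default_factory=dict)  # maps (style, level) -> content resource

    def select(self, learner: Learner) -> str:
        # Pick the variant authored for this learner, else fall back to the linear default.
        return self.variants.get((learner.style, learner.level),
                                 self.variants.get("default", ""))


# One concept from an existing linear course, augmented with adaptive variants.
loops = Concept("loops", {
    ("visual", "beginner"): "flowchart-loops.html",
    ("verbal", "advanced"): "loops-formal-notes.html",
    "default": "loops-intro.html",
})
print(loops.select(Learner()))  # -> flowchart-loops.html
```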

    Learning Character-level Compositionality with Visual Features

    Previous work has modeled the compositionality of words by creating character-level models of meaning, reducing problems of sparsity for rare words. However, in many writing systems compositionality has an effect even at the character level: the meaning of a character is derived from the sum of its parts. In this paper, we model this effect by creating embeddings for characters based on their visual characteristics: we create an image for the character and run it through a convolutional neural network to produce a visual character embedding. Experiments on a text classification task demonstrate that such a model allows for better processing of instances with rare characters in languages such as Chinese, Japanese, and Korean. Additionally, qualitative analyses demonstrate that our proposed model learns to focus on the parts of characters that carry semantic content, resulting in embeddings that are coherent in visual space. Comment: Accepted to ACL 2017.
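
    A minimal sketch of the general pipeline the abstract describes: rasterize a character to a small image and pass it through a CNN to obtain a visual embedding. The font path, image size, and network layers are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np
import torch
import torch.nn as nn
from PIL import Image, ImageDraw, ImageFont

FONT_PATH = "NotoSansCJK-Regular.ttc"  # assumed CJK font file; adjust for your system


def render_char(ch: str, size: int = 36) -> torch.Tensor:
    """Rasterize one character into a grayscale tensor of shape (1, size, size)."""
    img = Image.new("L", (size, size), color=0)
    font = ImageFont.truetype(FONT_PATH, size - 4)
    ImageDraw.Draw(img).text((2, 2), ch, fill=255, font=font)
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return torch.from_numpy(arr).unsqueeze(0)


class VisualCharEmbedder(nn.Module):
    """Toy CNN mapping a character image to a fixed-size visual embedding."""

    def __init__(self, emb_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.proj = nn.Linear(64 * 9 * 9, emb_dim)  # 36 -> 18 -> 9 after two poolings

    def forward(self, char_img: torch.Tensor) -> torch.Tensor:
        feats = self.conv(char_img.unsqueeze(0))  # add a batch dimension
        return self.proj(feats.flatten(1)).squeeze(0)


vec = VisualCharEmbedder()(render_char("語"))  # one 128-dim visual embedding
```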

    Subjectivity in contemporary visualization of reality: re-visiting Ottoman miniatures

    Though Ottoman miniatures are 2D representations, they carry the potential of conveying an individual's perception in more detail than 3D perspective renderings. In a typical two-vanishing-point perspective, objects and subjects drawn in the foreground hide the ones located behind them; this phenomenon is called occlusion. In Ottoman miniatures there is no occlusion: all object and subject illustrations are holistic, with no partial description of figures. Consequently, you end up with a life form that is the synthesis of individual forms, a sui generis state... This unique visual narrative can be extended to cubist works, where multifaceted descriptions are observed. Another advantage of Ottoman miniatures is that the hierarchies of image and image maker are quite clear. Miniatures make use of distance, void, shape, scale relationships, and their layout to give a sense of depth in space. Though objectivity is highly valued in visual representation, ideal objectivity is not possible, since representations are created by subjects, and subjects belong to cultures with different criteria for forming and perceiving portrayals. Moreover, the tools used for visual representation usually prove narrower than the scope of human perception. Departing from this point of view, Muta-morphosis is a photography project created as an almost surreal visualization stemming from the real. The lack of a single perspectival structure, due to the multiplicity of perspectives after compressed panoramic imaging, can be linked to Ottoman miniatures, which in turn connects this global contemporary representation to its local traditional counterpart.
    Keywords: Ottoman miniature painting, contemporary photography, child drawings, visualization, representation, reality, documentary, subjectivity, objectivity, visual narration

    Gaze Embeddings for Zero-Shot Image Classification

    Zero-shot image classification using auxiliary information, such as attributes describing discriminative object properties, requires time-consuming annotation by domain experts. We instead propose a method that relies on human gaze as auxiliary information, exploiting the fact that even non-expert users have a natural ability to judge class membership. We present a data collection paradigm that involves a discrimination task to increase the information content obtained from gaze data. Our method extracts discriminative descriptors from the data and learns a compatibility function between image and gaze using three novel gaze embeddings: Gaze Histograms (GH), Gaze Features with Grid (GFG), and Gaze Features with Sequence (GFS). We introduce two new gaze-annotated datasets for fine-grained image classification and show that human gaze data is indeed class discriminative, provides a competitive alternative to expert-annotated attributes, and outperforms other baselines for zero-shot image classification.
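
    A rough sketch of the simplest of the three embeddings, the Gaze Histogram (GH), together with a bilinear compatibility score between image and gaze embeddings. The grid size, feature dimensions, and scoring form are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np


def gaze_histogram(fixations: np.ndarray, width: int, height: int, grid: int = 8) -> np.ndarray:
    """fixations: (N, 2) array of (x, y) gaze points on a width x height image."""
    xs = np.clip((fixations[:, 0] / width * grid).astype(int), 0, grid - 1)
    ys = np.clip((fixations[:, 1] / height * grid).astype(int), 0, grid - 1)
    hist = np.zeros((grid, grid))
    np.add.at(hist, (ys, xs), 1.0)                # count fixations per grid cell
    return (hist / max(hist.sum(), 1.0)).ravel()  # normalized 64-dim descriptor


def compatibility(img_feat: np.ndarray, gaze_emb: np.ndarray, W: np.ndarray) -> float:
    """Bilinear compatibility F(x, y) = x^T W y between image and gaze embedding."""
    return float(img_feat @ W @ gaze_emb)


# Zero-shot prediction: choose the unseen class whose gaze embedding scores highest.
rng = np.random.default_rng(0)
img_feat = rng.normal(size=2048)        # stand-in for a CNN image feature
W = rng.normal(size=(2048, 64)) * 0.01  # in practice learned on seen classes
class_gaze = {c: gaze_histogram(rng.uniform(0, 500, size=(30, 2)), 500, 500)
              for c in ("class_a", "class_b")}
pred = max(class_gaze, key=lambda c: compatibility(img_feat, class_gaze[c], W))
```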