
    Using Concept Maps to Plan an Introductory Structural Geology Course

    This report presents the results of incorporating constructivist methods, including concept maps, into an undergraduate structural geology curriculum. A concept map is a visual representation of the concepts in a body of knowledge and their relationships to each other; it shows the hierarchy of those concepts and emphasizes the links between them. The overall goal of this project was to encourage students to adopt a deep/holistic approach to learning in order to better understand the concepts of structural geology. The authors sought to determine whether teaching methods became more overtly constructivist, whether there was a change in the order in which topics were presented, and whether the order of presentation normally followed by textbooks matched the order determined using concept maps. Educational levels: Graduate or professional.
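The structure the abstract describes, concepts joined by labeled links with an implied hierarchy, can be modeled as a set of labeled propositions. A minimal sketch (the geology concepts and link labels below are illustrative assumptions, not taken from the study):

```python
# A concept map as a set of labeled propositions: (concept, link label, concept).
# The hierarchy emerges from which concepts appear as sources vs. targets.
concept_map = {
    ("fold", "is a type of", "ductile structure"),
    ("fault", "is a type of", "brittle structure"),
    ("stress", "produces", "strain"),
    ("strain", "is recorded by", "ductile structure"),
}

def concepts(cmap):
    """All concepts mentioned anywhere in the map."""
    return {c for (src, _, dst) in cmap for c in (src, dst)}

def links_from(cmap, concept):
    """Outgoing labeled links for one concept."""
    return {(label, dst) for (src, label, dst) in cmap if src == concept}

print(sorted(concepts(concept_map)))
print(links_from(concept_map, "stress"))  # {("produces", "strain")}
```

Representing the map as propositions rather than a plain adjacency list keeps the link labels, which concept-map research treats as central to assessing understanding.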

    Visual representation of concepts : exploring users’ and designers’ concepts of everyday products

    To address the question of how to enhance the design of user-artefact interaction at the initial stages of the design process, this study explores the differences between designers and users with regard to their concepts of an artefact's usage. It also considers that human experience determines people's knowledge and concepts of the artefacts they interact with, and broadens or limits their concept of the context of use. In this exploratory study, visual representation of concepts is used to elicit information from designers and users, and to explore how these concepts are influenced by individual experience. Observation, concurrent verbal and retrospective protocols, and thematic interviews are employed to access more in-depth information about users' and designers' concepts. The experiment was conducted with designers and users who were asked about their concepts of an everyday product. Three types of data were produced in each session: sketches, transcriptions of retrospective verbal reports, and observations. Through an iterative process, references to context, use, and experience were identified in the data collected; this led to the definition of a coding system of categories that was applied to the interpretation of visuals and texts. The methodology was tested through preliminary studies. Their initial outcomes indicate that the main differences between designers' and users' concepts stem from their knowledge domain, while the main similarities relate to human experience as the source that drives concept formulation. Cultural background was found to influence concepts of product usability and its context of use. The use of visual representations of concepts together with retrospective reports and interviews gave access to insightful information on how human experience influences people's knowledge of product usability and its context of use. It is expected that this knowledge will contribute to the enhancement of the design of product usability.

    ReSee: Responding through Seeing Fine-grained Visual Knowledge in Open-domain Dialogue

    Incorporating visual knowledge into text-only dialogue systems has become a promising direction for imitating the way humans think, imagine, and communicate. However, existing multimodal dialogue systems are confined either by the scale and quality of available datasets or by a coarse notion of visual knowledge. To address these issues, we provide a new paradigm for constructing multimodal dialogues, together with two datasets extended from text-only dialogues under this paradigm (ReSee-WoW, ReSee-DD). We propose to explicitly split visual knowledge into finer granularity (``turn-level'' and ``entity-level''). To further boost the accuracy and diversity of the augmented visual information, we retrieve it from the Internet or a large image dataset. To demonstrate the superiority and universality of the provided visual knowledge, we propose a simple but effective framework, ReSee, that adds visual representations to vanilla dialogue models via modality concatenation. We also conduct extensive experiments and ablations with respect to different model configurations and visual knowledge settings. Encouraging empirical results not only demonstrate the effectiveness of introducing visual knowledge at both the entity and turn level but also verify that the proposed ReSee model outperforms several state-of-the-art methods in automatic and human evaluations. By leveraging text and vision knowledge, ReSee can produce informative responses with real-world visual concepts. Comment: 15 pages, preprint
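The "modality concatenation" the abstract mentions can be sketched as prepending projected visual features (one turn-level vector plus per-entity vectors) to the text token embeddings before they enter a vanilla dialogue model. The dimensions, the toy projection, and the feature values below are illustrative assumptions, not the paper's exact setup:

```python
# Sketch of modality concatenation: project visual features to the model's
# hidden size, then prepend them to the text token embeddings so the whole
# sequence is consumed by an unmodified dialogue model.
D = 4  # shared hidden size (toy value)

def project(vec, dim):
    """Toy 'projection': pad with zeros or truncate to the hidden size."""
    return (vec + [0.0] * dim)[:dim]

def build_input(token_embs, turn_img_feat, entity_img_feats):
    """Concatenate turn-level and entity-level visual embeddings with text."""
    visual = [project(turn_img_feat, D)] + [project(f, D) for f in entity_img_feats]
    return visual + token_embs  # one flat sequence: visual tokens, then text

tokens = [[0.1] * D, [0.2] * D]                     # two text token embeddings
turn_feat = [1.0, 2.0]                              # turn-level image feature
entity_feats = [[3.0], [4.0, 5.0, 6.0, 7.0, 8.0]]   # per-entity image features

seq = build_input(tokens, turn_feat, entity_feats)
print(len(seq))  # 1 turn-level + 2 entity-level + 2 text = 5
```

In a real system the projection would be a learned linear layer and the embeddings would come from an image encoder; the point here is only the sequence layout.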

    Adapting Visual Question Answering Models for Enhancing Multimodal Community Q&A Platforms

    Question categorization and expert retrieval methods have been crucial for information organization and accessibility on community question answering (CQA) platforms. Research in this area, however, has dealt only with the text modality. With the increasingly multimodal nature of web content, we focus on extending these methods to CQA questions accompanied by images. Specifically, we leverage the success of representation learning for text and images in the visual question answering (VQA) domain, and adapt the underlying concepts and architecture for automated category classification and expert retrieval on image-based questions posted on Yahoo! Chiebukuro, the Japanese counterpart of Yahoo! Answers. To the best of our knowledge, this is the first work to tackle the multimodality challenge in CQA and to adapt VQA models for tasks on a more ecologically valid source of visual questions. Our analysis of the differences between visual QA and community QA data drives our proposal of novel augmentations of an attention method tailored for CQA, and the use of auxiliary tasks for learning better grounding features. Our final model markedly outperforms the text-only and VQA-model baselines on both tasks of classification and expert retrieval on real-world multimodal CQA data. Comment: Submitted for review at CIKM 201
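The auxiliary tasks mentioned above are typically trained jointly with the main objective via a weighted multi-task loss. A generic sketch of that pattern (the weights and the example task names in the comments are hypothetical, not the paper's values):

```python
# Generic multi-task objective used when training with auxiliary tasks:
# total loss = main-task loss + weighted sum of auxiliary-task losses.
def multitask_loss(main_loss, aux_losses, weights):
    """Weighted sum of the main objective and any auxiliary objectives."""
    assert len(aux_losses) == len(weights)
    return main_loss + sum(w * l for w, l in zip(weights, aux_losses))

# e.g. category classification as the main task, with two hypothetical
# grounding-related auxiliary tasks (attribute prediction, region alignment):
total = multitask_loss(
    main_loss=0.8,
    aux_losses=[0.5, 0.2],
    weights=[0.3, 0.1],
)
print(round(total, 3))  # 0.8 + 0.15 + 0.02 = 0.97
```

Tuning the auxiliary weights controls how strongly the grounding signal shapes the shared representation without letting it dominate the main task.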

    Concept Maps and Information Systems: An Investigation into the Assessment of Students' Understanding of IS

    At the end of a four-year undergraduate program, it is often difficult to capture the knowledge of the graduating students. The use of mental models, specifically concept maps, can aid in assessing this knowledge at a conceptual level. Concept maps provide a visual representation of conceptual and relational knowledge within a particular domain. Students in a senior-level undergraduate class were given an assignment to create concept maps of Information Systems. These maps were coded and analyzed for their “coverage,” or conceptualization, of the sub-field of Telecommunications. The analysis included both quantitative and qualitative assessments as well as comparisons across students' maps. Preliminary assessments indicate that there is a fairly large degree of overlap between maps, though a full analysis is not yet complete.
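One simple way to quantify the "overlap between maps" reported above is the Jaccard similarity of two students' coded concept sets. The telecommunications concepts below are hypothetical examples, not the study's coded data:

```python
# Jaccard similarity between two students' concept maps, treating each map
# as the set of concepts it covers: |A ∩ B| / |A ∪ B|.
def jaccard(a, b):
    """Overlap between two concept sets, in [0, 1]."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

student_1 = {"LAN", "WAN", "TCP/IP", "router", "bandwidth"}
student_2 = {"LAN", "WAN", "TCP/IP", "protocol", "bandwidth", "latency"}

overlap = jaccard(student_1, student_2)
print(round(overlap, 3))  # 4 shared concepts / 7 total ≈ 0.571
```

Set-based overlap ignores link labels and hierarchy; a fuller comparison would also score shared propositions (concept-link-concept triples).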

    Using Multivariate Pattern Analysis to Investigate the Neural Representation of Concepts With Visual and Haptic Features

    A fundamental debate in cognitive neuroscience concerns how conceptual knowledge is represented in the brain. Over the past decade, cognitive theorists have adopted explanations that suggest cognition is rooted in perception and action; this is called the embodiment hypothesis. Theories of conceptual representation differ in the degree to which representations are embodied, from those which suggest conceptual representation requires no involvement of sensory and motor systems to those which suggest it is entirely dependent upon them. This work investigated how the brain represents concepts that are defined by their visual and haptic features, using novel multivariate approaches to the analysis of functional magnetic resonance imaging (fMRI) data. A behavioral study replicated a perceptual phenomenon, known as the tactile disadvantage, demonstrating that verifying the properties of concepts with haptic features takes significantly longer than verifying the properties of concepts with visual features. This study suggested that processing the perceptual properties of concepts likely recruits the same processes involved in perception. A neuroimaging study using the same paradigm showed that processing concepts with visual and haptic features elicits activity in bimodal object-selective regions, such as the fusiform gyrus (FG) and the lateral occipitotemporal cortex (LOC). Multivariate pattern analysis (MVPA) succeeded in identifying whether a concept had perceptual or abstract features from patterns of brain activity located in functionally defined object-selective and general perceptual regions, in addition to the whole brain. The conceptual representation was also consistent across participants. Finally, the functional networks for verifying the properties of concepts with visual and haptic features were highly overlapping but showed differing patterns of connectivity with the occipitotemporal cortex across people.
Several conclusions can be drawn from this work, which provide insight into the nature of the neural representation of concepts with perceptual features. The neural representation of concepts with visual and haptic features involves brain regions which underlie general visual and haptic perception as well as visual and haptic perception of objects. These brain regions interact differently based on the type of perceptual feature a concept possesses. Additionally, the neural representation of concepts with visual and haptic features is distributed across the whole brain and is consistent across people. The results of this work provide partial support for weak and strong embodiment theories, but further studies are necessary to determine whether sensory systems are required for conceptual representation.
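The core idea of MVPA, decoding a condition from a distributed activity pattern, can be illustrated with a minimal correlation-based classifier: a test pattern is assigned to the class whose mean training pattern it correlates with most. The 4-voxel patterns and class templates below are synthetic toys, not the study's fMRI data or its actual analysis pipeline:

```python
# Minimal correlation-based MVPA sketch: label a test activity pattern by
# which class's mean training pattern it matches best (Pearson correlation).
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length patterns."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def classify(test_pattern, class_templates):
    """Pick the class whose mean pattern best correlates with the test pattern."""
    return max(class_templates,
               key=lambda c: pearson(test_pattern, class_templates[c]))

templates = {  # mean training pattern per condition (toy 4-voxel patterns)
    "visual": [1.0, 0.2, 0.1, 0.9],
    "haptic": [0.1, 0.9, 1.0, 0.2],
}
print(classify([0.8, 0.3, 0.2, 1.0], templates))  # "visual"
```

Real MVPA pipelines use cross-validated classifiers (e.g. linear SVMs) over many voxels and runs; this sketch only shows why distributed patterns, rather than single-region activation levels, carry the decodable signal.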