
    Rendering techniques for multimodal data

    Many different direct volume rendering methods have been developed to visualize 3D scalar fields on uniform rectilinear grids. However, little work has been done on rendering simultaneously various properties of the same 3D region measured with different registration devices or at different instants of time. The demand for this type of visualization is rapidly increasing in scientific applications such as medicine, in which the visual integration of multiple modalities allows a better comprehension of the anatomy and a perception of its relationships with activity. This paper presents different strategies of Direct Multimodal Volume Rendering (DMVR). It is restricted to voxel models with a known 3D rigid alignment transformation. The paper evaluates at which steps of the rendering pipeline the data fusion must be realized in order to accomplish the desired visual integration and to provide fast re-renders when some fusion parameters are modified. In addition, it analyzes how existing monomodal visualization algorithms can be extended to multiple datasets and compares their efficiency and computational cost.
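    The core design question above — at which pipeline stage to fuse two registered volumes — can be illustrated with a minimal sketch. This is not the paper's implementation; the volumes, weights, and toy transfer functions are assumptions for illustration. It contrasts fusing at the data (property) stage, before classification, with fusing at the color stage, after each modality has been classified separately.

```python
import numpy as np

# Hypothetical example: two registered 3D scalar volumes (e.g. MR and SPECT),
# already resampled onto the same uniform grid and normalized to [0, 1].
rng = np.random.default_rng(0)
mr = rng.random((4, 4, 4))
spect = rng.random((4, 4, 4))

def property_fusion(a, b, w=0.5):
    """Fuse at the property (data) stage: merge scalar values first,
    then classify the single fused volume once."""
    return w * a + (1.0 - w) * b

def color_fusion(rgba_a, rgba_b, w=0.5):
    """Fuse at the shading stage: classify each modality separately,
    then blend the resulting RGBA values voxel by voxel."""
    return w * rgba_a + (1.0 - w) * rgba_b

# Toy transfer functions mapping scalars to RGBA (grayscale vs. "hot" ramp).
def tf_gray(v):
    return np.stack([v, v, v, v], axis=-1)

def tf_hot(v):
    return np.stack([v, v**2, v**3, v], axis=-1)

fused_data = property_fusion(mr, spect, w=0.7)                  # one classification pass
fused_color = color_fusion(tf_gray(mr), tf_hot(spect), w=0.7)   # two classification passes
```

    Fusing earlier in the pipeline is cheaper per frame (one classification pass), but fusing later preserves each modality's own transfer function, which matters when fusion weights are tuned interactively.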

    Design of a multimodal rendering system

    This paper addresses the rendering of aligned regular multimodal datasets. It presents a general framework of multimodal data fusion that includes several data merging methods. We also analyze the requirements of a rendering system able to provide these different fusion methods. On the basis of these requirements, we propose a novel design for a multimodal rendering system. The design has been implemented and tested, proving to be efficient and flexible.

    An Introduction to 3D User Interface Design

    3D user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of three-dimensional (3D) interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3D tasks and the use of traditional two-dimensional interaction styles in 3D environments. We divide most user interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques, but also practical guidelines for 3D interaction design and widely held myths. Finally, we briefly discuss two approaches to 3D interaction design, and some example applications with complex 3D interaction requirements. We also present an annotated online bibliography as a reference companion to this article.

    What do faculties specializing in brain and neural sciences think about, and how do they approach, brain-friendly teaching-learning in Iran?

    Objective: To investigate the perspectives and experiences of faculty members specializing in brain and neural sciences regarding brain-friendly teaching-learning in Iran. Methods: Seventeen faculty members from 5 universities were selected by purposive sampling (2018). In-depth semi-structured interviews with directed content analysis were used. Results: 31 sub-subcategories, 10 subcategories, and 4 categories were formed according to the “General teaching model”; “Mentorship” was a newly added category. Conclusions: A neuro-educational approach that considers the roles of the learner’s brain uniqueness, executive function facilitation, and the valence system is important to learning. Such learning can be facilitated through cognitive load considerations, repetition, deep questioning, visualization, feedback, and reflection. Contextualized, problem-oriented, social, multi-sensory, experiential, and spaced learning, as well as brain-friendly evaluation, must be considered. Mentorship is important for coaching and emotional facilitation.

    Multimodal Grounding for Language Processing

    This survey discusses how recent developments in multimodal processing facilitate conceptual grounding of language. We categorize the information flow in multimodal processing with respect to cognitive models of human information processing and analyze different methods for combining multimodal representations. Based on this methodological inventory, we discuss the benefit of multimodal grounding for a variety of language processing tasks and the challenges that arise. We particularly focus on multimodal grounding of verbs, which play a crucial role in the compositional power of language. Comment: The paper has been published in the Proceedings of the 27th International Conference on Computational Linguistics (COLING 2018). Please refer to this version for citations: https://www.aclweb.org/anthology/papers/C/C18/C18-1197

    Embodiment, sound and visualization : a multimodal perspective in music education

    Recently, many studies have emphasized the role of body movements in processing, sharing and giving meaning to music. At the same time, neuroscience studies suggest that different parts of the brain are integrated and activated by the same stimuli: sounds, for example, can be perceived by touch and can evoke imagery, energy, fluency and periodicity. This interaction of auditory, visual and motor senses can be found in verbal descriptions of music and among children during their spontaneous games. The question to be asked is whether a more multisensory and embodied approach could redefine some of our assumptions regarding music education. Recent research on embodiment and multimodal perception in instrumental teaching could suggest new directions in music education. Can we consider the integration of body movement, listening, metaphor visualization, and singing as more effective for the process of musical understanding than a disembodied and fragmented approach?

    Integration of multimodal data based on surface registration

    The paper proposes and evaluates a strategy for the alignment of anatomical and functional data of the brain. The method takes as input two different sets of images of the same patient: MR data and SPECT. It proceeds in four steps: first, it constructs two voxel models from the two image sets; next, it extracts from the two voxel models the surfaces of regions of interest; in the third step, the surfaces are interactively aligned by corresponding pairs; finally, a unique volume model is constructed by selectively applying the geometrical transformations associated with the regions and weighting their contributions. The main advantages of this strategy are (i) that it can be applied retrospectively, (ii) that it is three-dimensional, and (iii) that it is local. Its main disadvantage with regard to previously published methods is that it requires the extraction of surfaces. However, this step is often required for other stages of the multimodal analysis, such as visualization, and its cost can therefore be accounted for in the global cost of the process.
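    The final fusion step above — selectively applying per-region rigid transformations and weighting their contributions — can be sketched as follows. This is a simplified illustration, not the paper's method: the transform builder, the example points, and the per-point weights are all assumptions, and the weights are taken as given rather than derived from region membership.

```python
import numpy as np

def rigid_transform(angle_z, translation):
    """Hypothetical helper: build a 4x4 rigid transform (rotation about the
    z-axis plus a translation), such as one obtained for a region by
    interactively aligning its surface pair."""
    c, s = np.cos(angle_z), np.sin(angle_z)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:3, 3] = translation
    return T

def apply_weighted(points, transforms, weights):
    """Map each point with a per-point weighted blend of the regional rigid
    transforms, mirroring the 'weighting their contributions' fusion step."""
    hom = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    out = np.zeros_like(hom)
    for T, w in zip(transforms, weights):
        out += w[:, None] * (hom @ T.T)
    return out[:, :3] / out[:, 3:4]          # back to Cartesian coordinates

# Two regional transforms: a pure translation and a 90-degree rotation.
T1 = rigid_transform(0.0, [1.0, 0.0, 0.0])
T2 = rigid_transform(np.pi / 2, [0.0, 0.0, 0.0])

pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
w1 = np.array([1.0, 0.5])   # first point fully in region 1, second blended
w2 = 1.0 - w1
mapped = apply_weighted(pts, [T1, T2], [w1, w2])
```

    The first point follows T1 exactly, while the second is averaged between the two regional transforms; in practice the weights would fall off smoothly with distance from each region so that the fused volume has no seams at region boundaries.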

    Exploring individual user differences in the 2D/3D interaction with medical image data

    User-centered design is often performed without regard to individual user differences. In this paper, we report results of an empirical study aimed at evaluating whether computer experience and demographic user characteristics have an effect on the way people interact with visualized medical data in a 3D virtual environment using 2D and 3D input devices. We analyzed the interaction through performance data, questionnaires and observations. The results suggest that differences in gender, age and game experience have an effect on people’s behavior and task performance, as well as on subjective user preferences.