
    MildInt: Deep Learning-Based Multimodal Longitudinal Data Integration Framework

    As large amounts of heterogeneous biomedical data become available, numerous methods for integrating such datasets have been developed to extract complementary knowledge from multiple source domains. Recently, deep learning approaches have shown promising results in a variety of research areas. However, applying a deep learning approach requires expertise in constructing a deep architecture that can handle multimodal longitudinal data. Thus, in this paper, a deep learning-based Python package for data integration is developed. The Python package, the deep learning-based multimodal longitudinal data integration framework (MildInt), provides a preconstructed deep learning architecture for a classification task. MildInt contains two learning phases: learning a feature representation from each modality of data, and training a classifier for the final decision. Adopting a deep architecture in the first phase leads to more task-relevant feature representations than a linear model. In the second phase, a linear regression classifier is used for detecting and investigating biomarkers from the multimodal data. Thus, by combining the linear model and the deep learning model, higher accuracy and better interpretability can be achieved. We validated the performance of our package using simulated data and real data. For the real data, as a pilot study, we used clinical and multimodal neuroimaging datasets in Alzheimer's disease to predict disease progression. MildInt is capable of integrating multiple forms of numerical data, including time-series and non-time-series data, to extract complementary features from a multimodal dataset.
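
    The two-phase design described above can be illustrated with a minimal sketch. This is not MildInt's actual API: the encoder is reduced to mean-pooling (standing in for the package's per-modality deep networks), and the classifier weights are random rather than learned; the modality names and shapes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_modality(sequence):
    # Stand-in for phase 1 (a per-modality deep encoder):
    # mean-pool the time series into a fixed-length vector.
    return sequence.mean(axis=0)

def integrate(modalities):
    # Concatenate the per-modality representations into one feature vector.
    return np.concatenate([encode_modality(m) for m in modalities])

# Two hypothetical modalities for one subject: a time series
# (5 visits x 3 imaging features) and a static clinical vector
# (treated as a length-1 sequence).
imaging = rng.normal(size=(5, 3))
clinical = rng.normal(size=(1, 4))

features = integrate([imaging, clinical])

# Phase 2: a linear classifier on the integrated features
# (weights are illustrative; in practice they would be learned).
w = rng.normal(size=features.shape[0])
score = 1.0 / (1.0 + np.exp(-features @ w))
print(features.shape, float(score))
```

    The interpretability claim rests on this split: because the final decision is a linear function of the integrated features, each feature's weight can be inspected directly.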

    Multiple multimodal mobile devices: Lessons learned from engineering lifelog solutions

    For lifelogging, or the recording of one's life history through digital means, to be successful, a range of separate multimodal mobile devices must be employed. These include smartphones such as the N95, the Microsoft SenseCam (a wearable passive photo-capture device), and wearable biometric devices. Each collects a facet of the bigger picture, through, for example, personal digital photos, mobile messages, and document access history, but unfortunately they operate independently and unaware of each other. This creates significant challenges for the practical application of these devices, for the use and integration of their data, and for their operation by a user. In this chapter we discuss the software engineering challenges and their implications for individuals working on the integration of data from multiple ubiquitous mobile devices, drawing on our experience developing integrated personal lifelogs with such technology over the past several years. The chapter serves as an engineering guide to those considering working in the domain of lifelogging and, more generally, to those working with multiple multimodal devices and the integration of their data.

    Integration of multimodal data based on surface registration

    The paper proposes and evaluates a strategy for the alignment of anatomical and functional data of the brain. The method takes as input two different sets of images of the same patient: MR data and SPECT. It proceeds in four steps: first, it constructs two voxel models from the two image sets; next, it extracts from the two voxel models the surfaces of regions of interest; in the third step, the surfaces are interactively aligned in corresponding pairs; finally, a unique volume model is constructed by selectively applying the geometrical transformations associated with the regions and weighting their contributions. The main advantages of this strategy are (i) that it can be applied retrospectively, (ii) that it is three-dimensional, and (iii) that it is local. Its main disadvantage with regard to previously published methods is that it requires the extraction of surfaces. However, this step is often required for other stages of multimodal analysis, such as visualization, and therefore its cost can be accounted for in the overall cost of the process.
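
    The final step, combining per-region transformations with weighted contributions, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the transforms, weights, and blending rule (a normalized weighted average of the mapped positions) are assumptions.

```python
import numpy as np

def blend_transforms(point, transforms, weights):
    """Map a point with several per-region rigid transforms (R, t) and
    blend the mapped positions by their normalized weights."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    mapped = [R @ point + t for R, t in transforms]
    return sum(w * p for w, p in zip(weights, mapped))

# Two hypothetical regions: an identity transform and a 2 mm shift along x.
identity = (np.eye(3), np.zeros(3))
shifted = (np.eye(3), np.array([2.0, 0.0, 0.0]))

p = np.array([10.0, 5.0, 0.0])
q = blend_transforms(p, [identity, shifted], weights=[1.0, 1.0])
print(q)  # midway between the two mapped positions
```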

    Influence of Stimulus Intensity on Multimodal Integration in the Startle Escape System of Goldfish

    Processing of multimodal information is essential for an organism to respond to environmental events. However, how multimodal integration in neurons translates into behavior is far from clear. Here, we investigate the integration of biologically relevant visual and auditory information in the goldfish startle escape system, in which paired Mauthner cells (M-cells) initiate the behavior. Sound pips and visual looms, as well as multimodal combinations of these stimuli, were tested for their effectiveness in evoking the startle response. Results showed that adding a low-intensity sound early during a visual loom (low visual effectiveness) produced a supralinear increase in startle responsiveness compared to the increase expected from a linear summation of the two unimodal stimuli. In contrast, adding a sound pip late during the loom (high visual effectiveness) increased responsiveness consistent with a linear multimodal integration of the two stimuli. Together, the results confirm the Inverse Effectiveness Principle (IEP) of multimodal integration proposed in other species. Given the well-established role of the M-cell as a multimodal integrator, these results suggest that IEP is computed in individual neurons that initiate vital behavioral decisions.
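
    The linear-summation baseline used to classify integration can be made concrete with a toy calculation. The response probabilities below are illustrative numbers, not the study's measured data; the comparison rule (observed multimodal responsiveness divided by the sum of the unimodal responsiveness values) is the standard way such a baseline is stated.

```python
def summation_index(p_multi, p_visual, p_audio):
    """Ratio of observed multimodal responsiveness to the linear
    prediction; a value > 1 indicates supralinear integration."""
    linear_prediction = p_visual + p_audio
    return p_multi / linear_prediction

# Early in the loom (low visual effectiveness): supralinear response.
early = summation_index(p_multi=0.45, p_visual=0.10, p_audio=0.15)
# Late in the loom (high visual effectiveness): roughly linear response.
late = summation_index(p_multi=0.62, p_visual=0.45, p_audio=0.15)
print(round(early, 2), round(late, 2))
```

    With these made-up numbers, the early-loom index comes out well above 1 while the late-loom index sits near 1, mirroring the inverse-effectiveness pattern the abstract reports.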

    FiEstAS sampling -- a Monte Carlo algorithm for multidimensional numerical integration

    This paper describes a new algorithm for Monte Carlo integration, based on the Field Estimator for Arbitrary Spaces (FiEstAS). The algorithm is discussed in detail, and its performance is evaluated in the context of Bayesian analysis, with emphasis on multimodal distributions with strong parameter degeneracies. Source code is available upon request.
    Comment: 18 pages, 3 figures, submitted to Comp. Phys. Comm.
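
    For context, the baseline that adaptive schemes such as FiEstAS improve upon is plain uniform Monte Carlo integration over the unit hypercube. The sketch below shows only that baseline, not the FiEstAS decomposition itself; the integrand and sample count are arbitrary choices.

```python
import random

def mc_integrate(f, dim, n=100_000, seed=1):
    """Plain uniform Monte Carlo estimate of the integral of f over the
    unit hypercube [0, 1]^dim: the sample mean of f."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(dim)]
        total += f(x)
    return total / n

# Integral of x*y over the unit square; the exact value is 1/4.
estimate = mc_integrate(lambda x: x[0] * x[1], dim=2)
print(round(estimate, 3))
```

    Uniform sampling wastes effort in regions where the integrand is negligible; field-estimator and nested-sampling style methods concentrate samples where the posterior mass lies, which matters most for the multimodal, degenerate distributions the paper targets.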

    A Flexible pragmatics-driven language generator for animated agents

    This paper describes the NECA MNLG, a fully implemented Multimodal Natural Language Generation module. The MNLG is deployed as part of the NECA system, which generates dialogues between animated agents. The generation module supports the seamless integration of full grammar rules, templates, and canned text. The generator takes input that allows for the specification of syntactic, semantic, and pragmatic constraints on the output.
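
    One common way to combine canned text, templates, and grammar rules is a layered fallback: use the cheapest resource that covers the input. The sketch below is a hypothetical illustration of that general pattern, not the MNLG's actual architecture or ordering; the dialogue acts, slot names, and the trivial grammar stub are all invented.

```python
# Canned text: fixed strings for whole dialogue acts.
CANNED = {"greet": "Hello there!"}
# Templates: fixed strings with slots to fill.
TEMPLATES = {"inform_weather": "It is {condition} today."}

def generate(act, **slots):
    """Realize a dialogue act, preferring canned text, then templates,
    then a (heavily simplified) grammar-rule fallback."""
    if act in CANNED:
        return CANNED[act]
    if act in TEMPLATES:
        return TEMPLATES[act].format(**slots)
    # Grammar-rule fallback, reduced here to a trivial stub.
    return " ".join([act.replace("_", " ")] + list(slots.values()))

print(generate("greet"))
print(generate("inform_weather", condition="sunny"))
```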

    The integration of audio-tactile information is modulated by multimodal social interaction with physical contact in infancy

    Interaction between caregivers and infants is multimodal in nature. To react interactively and smoothly to such multimodal signals, infants must integrate all these signals. However, few empirical infant studies have investigated how multimodal social interaction with physical contact facilitates multimodal integration, especially regarding audio-tactile (A-T) information. Using electroencephalography (EEG) and event-related potentials (ERPs), the present study investigated how the neural processing involved in A-T integration is modulated by tactile interaction. Seven- to eight-month-old infants heard one pseudoword both whilst being tickled (multimodal 'A-T' condition) and whilst not being tickled (unimodal 'A' condition). Thereafter, their EEG was measured during the perception of the same words. Compared to the A condition, the A-T condition resulted in enhanced ERPs and higher beta-band activity within the left temporal regions, indicating neural processing of A-T integration. Additionally, theta-band activity within the middle frontal region was enhanced, which may reflect enhanced attention to social information. Furthermore, the differential ERPs correlated with the degree of engagement in the tickling interaction. We provide neural evidence that the integration of A-T information in infants' brains is facilitated through tactile interaction with others. Such plastic changes in neural processing may promote harmonious social interaction and effective learning in infancy.