Categorical Dimensions of Human Odor Descriptor Space Revealed by Non-Negative Matrix Factorization
In contrast to most other sensory modalities, the basic perceptual dimensions of olfaction remain unclear. Here, we use non-negative matrix factorization (NMF) – a dimensionality reduction technique – to uncover structure in a panel of odor profiles, with each odor defined as a point in multi-dimensional descriptor space. The properties of NMF are favorable for the analysis of such lexical and perceptual data, and lead to a high-dimensional account of odor space. We further provide evidence that odor dimensions apply categorically. That is, odor space is not occupied homogeneously, but rather in a discrete and intrinsically clustered manner. We discuss the potential implications of these results for the neural coding of odors, as well as for developing classifiers on larger datasets that may be useful for predicting perceptual qualities from chemical structures.
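The core factorization can be sketched in a few lines. The following is a minimal illustration using Lee–Seung multiplicative updates on a random stand-in matrix; the solver, the data, and the number of dimensions are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

# Hypothetical data: 100 odors rated on 20 perceptual descriptors,
# all values non-negative (e.g. descriptor applicability in [0, 1]).
rng = np.random.default_rng(0)
X = rng.random((100, 20))

k = 5                          # assumed number of latent odor dimensions
W = rng.random((100, k))       # odor loadings
H = rng.random((k, 20))        # descriptor weights per dimension
eps = 1e-9                     # guards against division by zero

# Lee & Seung multiplicative updates for X ≈ W @ H
for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

# Non-negativity of both factors yields an additive, parts-based
# description, which is what makes the dimensions interpretable.
reconstruction_error = np.linalg.norm(X - W @ H)
```

Each row of `H` is one candidate perceptual dimension expressed as a non-negative combination of descriptors; each row of `W` places one odor in that space.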
Grounding deep neural network predictions of human categorization behavior in understandable functional features: the case of face identity
Deep neural networks (DNNs) can resolve real-world categorization tasks with apparent human-level performance. However, true equivalence of behavioral performance between humans and their DNN models requires that their internal mechanisms process equivalent features of the stimulus. To develop such feature equivalence, our methodology leveraged an interpretable and experimentally controlled generative model of the stimuli (realistic three-dimensional textured faces). Humans rated the similarity of randomly generated faces to four familiar identities. We predicted these similarity ratings from the activations of five DNNs trained with different optimization objectives. Using information theoretic redundancy, reverse correlation, and the testing of generalization gradients, we show that DNN predictions of human behavior improve because their shape and texture features overlap with those that subsume human behavior. Thus, we must equate the functional features that subsume the behavioral performances of the brain and its models before comparing where, when, and how these features are processed.
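The prediction step described above, mapping DNN activations to human similarity ratings, can be illustrated with a simple linear readout. The features, shapes, and ridge penalty below are stand-ins, not the study's actual stimuli, networks, or analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(4)
n_faces, n_units = 200, 50

# Stand-in "layer activations": one feature vector per generated face.
acts = rng.standard_normal((n_faces, n_units))

# Simulated human similarity ratings, linearly related to the features
# plus rating noise (a modeling assumption for this sketch).
w_true = rng.standard_normal(n_units)
ratings = acts @ w_true + 0.5 * rng.standard_normal(n_faces)

# Ridge-regression readout, closed form: w = (A^T A + λI)^{-1} A^T y
lam = 1.0
w = np.linalg.solve(acts.T @ acts + lam * np.eye(n_units), acts.T @ ratings)
pred = acts @ w

# Correlation between predicted and observed ratings measures how well
# the network's features account for the behavior.
r = np.corrcoef(pred, ratings)[0, 1]
```

In the study itself this readout quality is then dissected further (redundancy, reverse correlation) to ask *which* features drive the fit, not just how good it is.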
Correlated Components Analysis - Extracting Reliable Dimensions in Multivariate Data
How does one find dimensions in multivariate data that are reliably expressed across repetitions? For example, in a brain imaging study one may want to identify combinations of neural signals that are reliably expressed across multiple trials or subjects. For a behavioral assessment with multiple ratings, one may want to identify an aggregate score that is reliably reproduced across raters. Correlated Components Analysis (CorrCA) addresses this problem by identifying components that are maximally correlated between repetitions (e.g. trials, subjects, raters). Here we formalize this as the maximization of the ratio of between-repetition to within-repetition covariance. We show that this criterion maximizes repeat-reliability, defined as mean over variance across repeats, and that it leads to CorrCA or to multi-set Canonical Correlation Analysis, depending on the constraints. Surprisingly, we also find that CorrCA is equivalent to Linear Discriminant Analysis for zero-mean signals, which provides an unexpected link between classic concepts of multivariate analysis. We present an exact parametric test of statistical significance based on the F-statistic for normally distributed independent samples, and present and validate shuffle statistics for the case of dependent samples. Regularization and extension to non-linear mappings using kernels are also presented. The algorithms are demonstrated on a series of data analysis applications, and we provide all code and data required to reproduce the results.
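The stated criterion, maximizing the ratio of between-repetition to within-repetition covariance, reduces to a generalized eigenvalue problem. Below is a minimal numpy sketch on simulated data; the data, the shapes, and the use of the repetition average as the between-repetition estimate are simplifying assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
R, T, D = 10, 500, 8                       # repetitions, time samples, channels

# One signal reliably repeated across all repetitions, plus noise.
shared = rng.standard_normal((T, 1))
mixing = rng.standard_normal((1, D))
X = shared @ mixing + 0.5 * rng.standard_normal((R, T, D))

Xc = X - X.mean(axis=1, keepdims=True)     # remove per-repetition mean
Rw = sum(x.T @ x for x in Xc) / (R * T)    # pooled within-repetition covariance
Xbar = Xc.mean(axis=0)
Rb = Xbar.T @ Xbar / T                     # covariance of the repetition average

# Generalized eigenproblem Rb v = λ Rw v, solved by whitening Rw first.
dw, Ew = np.linalg.eigh(Rw)
Wh = Ew @ np.diag(dw ** -0.5) @ Ew.T       # symmetric whitener for Rw
lam, U = np.linalg.eigh(Wh @ Rb @ Wh)      # eigenvalues in ascending order
V = Wh @ U[:, ::-1]                        # strongest correlated component first

y = Xc @ V[:, 0]                           # project each repetition onto top component
```

Rows of `y` are the top component's time course in each repetition; if the component is reliable, these rows correlate strongly with one another.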
Dynamic Thermal Imaging for Intraoperative Monitoring of Neuronal Activity and Cortical Perfusion
Neurosurgery is a demanding medical discipline that requires a complex interplay of several neuroimaging techniques. This allows structural as well as functional information to be recovered and then visualized to the surgeon. In the case of tumor resections this approach allows more fine-grained differentiation of healthy and pathological tissue which positively influences the postoperative outcome as well as the patient's quality of life.
In this work, we will discuss several approaches to establishing thermal imaging as a novel neuroimaging technique, primarily to visualize neural activity and the perfusion state in cases of ischaemic stroke. Both applications require novel methods for data preprocessing, visualization, pattern recognition, and regression analysis of intraoperative thermal imaging.
Online multimodal integration of preoperative and intraoperative data is accomplished by a 2D-3D image registration and image fusion framework with an average accuracy of 2.46 mm. In navigated surgeries, the proposed framework provides all the tools necessary to project intraoperative 2D imaging data onto preoperative 3D volumetric datasets such as 3D MR or CT images. Additionally, a fast machine learning framework for the recognition of cortical NaCl rinsings is discussed throughout this thesis, enabling standardized quantification of tissue perfusion by means of an approximated heating model. Classifying the parameters of these models yields a map of connected areas, which we have shown to correlate with the demarcation caused by an ischaemic stroke as segmented in postoperative CT datasets.
Finally, a semiparametric regression model has been developed for intraoperative monitoring of neural activity in the somatosensory cortex via somatosensory evoked potentials. These results were correlated with neural activity measured by optical imaging. We found that thermal imaging yields comparable results, yet does not share the limitations of optical imaging. In this thesis we emphasize that thermal imaging represents a novel and valid tool for both intraoperative functional and structural neuroimaging.
EEG filtering based on blind source separation (BSS) for early detection of Alzheimer's disease
Objective: Development of an EEG preprocessing technique for improving the detection of Alzheimer's disease (AD). The technique is based on filtering of EEG data using blind source separation (BSS) and projection of components that are potentially sensitive to the cortical neuronal impairment found in early stages of AD. Method: Artifact-free 20 s intervals of raw resting EEG recordings from 22 patients with Mild Cognitive Impairment (MCI) who later progressed to AD and 38 age-matched normal controls were decomposed into spatio-temporally decorrelated components using the BSS algorithm 'AMUSE'. Filtered EEG was obtained by back-projection of the components with the highest linear predictability. Relative power of the filtered data in the delta, theta, alpha1, alpha2, beta1, and beta2 bands was processed with Linear Discriminant Analysis (LDA). Results: Preprocessing improved the percentage of correctly classified patients and controls, computed with jack-knifing cross-validation, from 59 to 73% and from 76 to 84%, respectively. Conclusions: The proposed approach can significantly improve the sensitivity and specificity of EEG-based diagnosis. Significance: Filtering based on BSS can improve the performance of existing EEG approaches to early diagnosis of Alzheimer's disease. It may also have potential for improving EEG classification in other clinical areas or fundamental research. The developed method is quite general and flexible, allowing for various extensions and improvements. © 2004 Published by Elsevier Ireland Ltd. on behalf of the International Federation of Clinical Neurophysiology.
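The AMUSE-based filtering pipeline (decompose, rank components by linear predictability, back-project the most predictable ones) can be sketched on toy signals. The two-source simulation below is an illustration under simplifying assumptions, not clinical EEG:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 2000
t = np.arange(T)
slow = np.sin(2 * np.pi * 3 * t / 250)   # rhythmic, highly predictable source
noise = rng.standard_normal(T)           # unpredictable source
S = np.vstack([slow, noise])
A = rng.standard_normal((2, 2))
X = A @ S                                # mixed "EEG" channels (channels × samples)
X = X - X.mean(axis=1, keepdims=True)

# AMUSE step 1: whiten the data.
d, E = np.linalg.eigh(np.cov(X))
Q = E @ np.diag(d ** -0.5) @ E.T
Z = Q @ X

# AMUSE step 2: eigendecomposition of the symmetrized lag-1 covariance;
# eigenvalues approximate each component's lag-1 autocorrelation,
# i.e. its linear predictability.
C1 = Z[:, 1:] @ Z[:, :-1].T / (T - 1)
C1 = (C1 + C1.T) / 2
lam, U = np.linalg.eigh(C1)
order = np.argsort(lam)[::-1]            # most predictable component first
W = U[:, order].T @ Q                    # unmixing matrix
Y = W @ X                                # estimated source time courses

# Step 3: keep only the most predictable component and back-project
# to obtain the filtered multichannel signal.
keep = np.zeros_like(Y)
keep[0] = Y[0]
X_filtered = np.linalg.pinv(W) @ keep
```

In the paper's setting, the relative band powers of `X_filtered` (rather than the raw data) are what feed the LDA classifier.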
Development and Evaluation of Data Processing Techniques in Magnetoencephalography
With MEG, the tiny magnetic fields produced by neuronal currents within the brain can be measured completely non-invasively. However, the signals are very small (~100 fT) and often obscured by spontaneous brain activity and external noise. A recurrent issue in MEG data analysis is therefore the identification and elimination of this unwanted interference within the recordings. Various strategies exist for this purpose. In this thesis, two of these strategies are scrutinized in detail.
The first is the commonly used procedure of averaging over trials which is a successfully applied data reduction method in many neurocognitive studies. However, the brain does not always respond identically to repeated stimuli, so averaging can eliminate valuable information. Alternative approaches aiming at single trial analysis are difficult to realize and many of them focus on temporal patterns.
Here, a compromise involving random subaveraging of trials and repeated source localization is presented. A simulation study with numerous examples demonstrates the applicability of the new method. As a result, inferences about the generators of single trials can be drawn which allows deeper insight into neuronal processes of the human brain.
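The random subaveraging step can be sketched as follows. The data here are synthetic, and the repeated source-localization stage is replaced by a placeholder, since that part depends on a forward model not described in the abstract:

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_samples = 120, 300

# Synthetic evoked response, identical across trials, buried in noise.
signal = np.sin(2 * np.pi * np.arange(n_samples) / 100)
trials = signal + 2.0 * rng.standard_normal((n_trials, n_samples))

def random_subaverages(trials, subset_size, n_repeats, rng):
    """Average randomly drawn trial subsets (without replacement per draw)."""
    out = []
    for _ in range(n_repeats):
        idx = rng.choice(len(trials), size=subset_size, replace=False)
        out.append(trials[idx].mean(axis=0))
    return np.array(out)

subavgs = random_subaverages(trials, subset_size=20, n_repeats=50, rng=rng)

# In the thesis, each subaverage would be passed to a source localizer;
# the spread of the resulting source estimates then reflects
# trial-to-trial variability that a single grand average would hide.
```

Each subaverage suppresses noise by roughly the square root of the subset size while still sampling the trial-to-trial variability that full averaging discards.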
The second technique examined in this thesis is a preprocessing tool termed Signal Space Separation (SSS). It is widely used for preprocessing of MEG data, including noise reduction by suppression of external interference, as well as movement correction.
Here, the mathematical principles of the SSS series expansion and the rules for its application are investigated. The most important mathematical precondition is a source-free sensor space. Using three data sets, the influence of a violation of this convergence criterion on source localization accuracy is demonstrated. The analysis reveals that the SSS method works reliably, even when the convergence criterion is not fully obeyed.
This leads to utilizing the SSS method for the transformation of MEG data to virtual sensors on the scalp surface. Having MEG data directly on the individual scalp surface would facilitate sensor-space analysis across subjects and comparability with EEG.
A comparison study of the transformation results obtained with SSS and those produced by inverse and subsequent forward computation is performed. It shows a strong dependence on the relative positions of sources and sensors. In addition, the latter approach yields superior results for the intended purpose of data transformation.