Signal2Image Modules in Deep Neural Networks for EEG Classification
Deep learning has revolutionized computer vision by exploiting the increased
availability of big data and the power of parallel computational units such as
graphical processing units. The vast majority of deep learning research is
conducted using images as training data; however, the biomedical domain is rich
in physiological signals that are used for diagnosis and prediction problems.
How best to use signals to train deep neural networks remains an open research
question.
In this paper we define Signal2Image (S2I) modules as trainable or
non-trainable prefix modules that convert signals, such as
electroencephalography (EEG), to image-like representations, making them
suitable for training image-based deep neural networks, which we define as
`base models'. We compare the accuracy and time performance of four S2Is
(`signal as image', spectrogram, and one- and two-layer Convolutional Neural
Networks (CNNs)) combined with a set of `base models' (LeNet, AlexNet, VGGnet,
ResNet, DenseNet), along with the depth-wise and 1D variations of the latter.
We also provide empirical evidence that the one-layer CNN S2I outperforms the
non-trainable S2Is for classifying EEG signals in eleven out of fifteen tested
models, and we present visual comparisons of the S2I outputs.
Comment: 4 pages, 2 figures, 1 table, EMBC 201
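The non-trainable S2Is named in the abstract can be illustrated without a deep learning framework. Below is a minimal NumPy sketch of two such conversions: a `signal as image' mapping (quantizing amplitudes to row indices of a 2D array) and a magnitude spectrogram via a short-time Fourier transform. The function names, window parameters, and the amplitude-to-row scheme are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def signal_as_image(signal, height=64):
    # Non-trainable S2I: quantize each sample's amplitude to a row
    # index, producing a binary (height x length) image-like array.
    # NOTE: hypothetical sketch; the paper's exact mapping may differ.
    sig = np.asarray(signal, dtype=float)
    lo, hi = sig.min(), sig.max()
    rows = ((sig - lo) / (hi - lo + 1e-12) * (height - 1)).astype(int)
    img = np.zeros((height, sig.size))
    img[rows, np.arange(sig.size)] = 1.0
    return img

def spectrogram(signal, win=64, hop=32):
    # Non-trainable S2I: magnitude STFT with a Hann window.
    # Output shape: (win // 2 + 1) frequency bins x number of frames.
    sig = np.asarray(signal, dtype=float)
    frames = [sig[i:i + win] * np.hanning(win)
              for i in range(0, len(sig) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T
```

A trainable S2I would replace these fixed mappings with, e.g., a one-layer 1D CNN whose output channels are stacked into an image-like tensor and learned jointly with the base model.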
Cross-Modal Data Programming Enables Rapid Medical Machine Learning
Labeling training datasets has become a key barrier to building medical
machine learning models. One strategy is to generate training labels
programmatically, for example by applying natural language processing pipelines
to text reports associated with imaging studies. We propose cross-modal data
programming, which generalizes this intuitive strategy in a
theoretically-grounded way that enables simpler, clinician-driven input,
reduces required labeling time, and improves with additional unlabeled data. In
this approach, clinicians generate training labels for models defined over a
target modality (e.g. images or time series) by writing rules over an auxiliary
modality (e.g. text reports). The resulting technical challenge consists of
estimating the accuracies and correlations of these rules; we extend a recent
unsupervised generative modeling technique to handle this cross-modal setting
in a provably consistent way. Across four applications in radiography, computed
tomography, and electroencephalography, and using only several hours of
clinician time, our approach matches or exceeds the efficacy of
physician-months of hand-labeling with statistical significance, demonstrating
a fundamentally faster and more flexible way of building machine learning
models in medicine.
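The paper's technical contribution is a generative model that estimates the accuracies and correlations of the clinicians' rules without ground truth. The sketch below substitutes a much simpler majority vote to illustrate the cross-modal idea itself: rules written over text reports produce labels that are attached to the paired images. All function names, label values, and rule patterns are hypothetical.

```python
from collections import Counter

ABSTAIN = 0
NORMAL, ABNORMAL = 1, 2

# Hypothetical labeling rules written over the auxiliary text modality.
def lf_no_acute(report):
    return NORMAL if "no acute" in report.lower() else ABSTAIN

def lf_fracture(report):
    return ABNORMAL if "fracture" in report.lower() else ABSTAIN

def lf_unremarkable(report):
    return NORMAL if "unremarkable" in report.lower() else ABSTAIN

def cross_modal_labels(pairs, lfs):
    # For each (image_id, report) pair, vote over the text report and
    # attach the winning label to the paired image. The actual method
    # replaces this majority vote with a learned generative model that
    # weights rules by their estimated accuracies.
    labels = {}
    for image_id, report in pairs:
        votes = [v for lf in lfs if (v := lf(report)) != ABSTAIN]
        labels[image_id] = Counter(votes).most_common(1)[0][0] if votes else ABSTAIN
    return labels
```

The resulting (noisy) image labels can then train a model over the target modality alone, so the text reports are needed only at labeling time, not at inference time.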
An EEG study on emotional intelligence and advertising message effectiveness
Some electroencephalography (EEG) studies have investigated emotional intelligence (EI), but none have examined the relationships between EI and commercial advertising messages and related consumer behaviors. This study combines brain (EEG) techniques with an EI psychometric to explore the brain responses associated with a range of advertisements. A group of 45 participants (23 females, 22 males) had their EEG recorded while watching a series of advertisements selected from various marketing categories such as community interests, celebrities, food/drink, and social issues. Participants were also categorized as high or low in emotional intelligence (n = 34). The EEG data analysis was centered on rating decision-making in order to measure brain responses associated with advertising information processing for both groups. The findings suggest that participants with high and low EI were attentive to different types of advertising messages: the two EI groups demonstrated preferences for “people”- or “object”-related advertising information. These differences in consumer perception and emotions may help explain why certain advertising material or marketing strategies are effective or not.