Short-segment heart sound classification using an ensemble of deep convolutional neural networks
This paper proposes a framework based on deep convolutional neural networks
(CNNs) for automatic heart sound classification using short-segments of
individual heart beats. We design a 1D-CNN that directly learns features from
raw heart-sound signals, and a 2D-CNN that takes as input two-dimensional
time-frequency feature maps based on Mel-frequency cepstral coefficients
(MFCC). We further develop a time-frequency CNN ensemble (TF-ECNN) combining
the 1D-CNN and 2D-CNN based on score-level fusion of the class probabilities.
On the large PhysioNet CinC challenge 2016 database, the proposed CNN models
outperformed traditional classifiers based on support vector machine and hidden
Markov models with various hand-crafted time- and frequency-domain features.
Best classification scores with 89.22% accuracy and 89.94% sensitivity were
achieved by the ECNN, and 91.55% specificity and 88.82% modified accuracy by
the 2D-CNN alone on the test set.
Comment: 8 pages, 1 figure, conference
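The score-level fusion described above can be sketched as a weighted average of the per-class probabilities produced by the two CNNs. This is a minimal illustration with placeholder probability values, not the paper's actual model outputs; the equal weighting is an assumption.

```python
import numpy as np

def fuse_scores(p_1d: np.ndarray, p_2d: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Weighted average of per-class probabilities from the 1D-CNN and 2D-CNN.

    Equal weighting (w = 0.5) is assumed here for illustration.
    """
    return w * p_1d + (1.0 - w) * p_2d

# Two heart-beat segments; columns are hypothetical classes (normal, abnormal)
p_1d = np.array([[0.70, 0.30], [0.40, 0.60]])  # 1D-CNN on raw signal
p_2d = np.array([[0.55, 0.45], [0.20, 0.80]])  # 2D-CNN on MFCC maps

fused = fuse_scores(p_1d, p_2d)   # ensemble (TF-ECNN-style) scores
labels = fused.argmax(axis=1)     # final class decision per segment
```

Because each input row is a probability distribution, the fused rows remain valid distributions, and the argmax gives the ensemble decision.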
Discriminative Tandem Features for HMM-based EEG Classification
Abstract: We investigate the use of discriminative feature extractors in tandem configuration with a generative EEG classification system. Existing studies on dynamic EEG classification typically use hidden Markov models (HMMs), which lack discriminative capability. In this paper, a linear and a non-linear classifier are discriminatively trained to produce complementary input features for the conventional HMM system. Two sets of tandem features are derived from the linear discriminant analysis (LDA) projection output and the multilayer perceptron (MLP) class-posterior probability, before being appended to the standard autoregressive (AR) features. Evaluation on a two-class motor-imagery classification task shows that both proposed tandem features yield consistent gains over the AR baseline, resulting in significant relative improvements of 6.2% and 11.2% for the LDA and MLP features, respectively. We also explore the portability of these features across different subjects. Index Terms: artificial neural network-hidden Markov models, EEG classification, brain-computer interface (BCI)
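The tandem feature construction above amounts to appending the discriminative outputs to the baseline AR features per EEG segment. The sketch below assumes the LDA projections and MLP posteriors are already computed; all array shapes and values are hypothetical placeholders (the paper's AR order and network sizes are not given here).

```python
import numpy as np

rng = np.random.default_rng(0)
n_segments = 4

# Hypothetical per-segment features; shapes are illustrative assumptions.
ar_feats = rng.standard_normal((n_segments, 6))   # baseline autoregressive (AR) features
lda_proj = rng.standard_normal((n_segments, 1))   # LDA projection output (1D for two classes)
mlp_post = rng.random((n_segments, 2))
mlp_post /= mlp_post.sum(axis=1, keepdims=True)   # MLP class-posterior probabilities

# Tandem features: discriminative outputs appended to the standard AR features
tandem_lda = np.hstack([ar_feats, lda_proj])      # 6 + 1 = 7 dims
tandem_mlp = np.hstack([ar_feats, mlp_post])      # 6 + 2 = 8 dims
```

The concatenated vectors would then serve as observations for the HMM system in place of the AR features alone.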
Estimating Time-Varying Effective Connectivity in High-Dimensional fMRI Data Using Regime-Switching Factor Models
Recent studies on analyzing dynamic brain connectivity rely on sliding-window
analysis or time-varying coefficient models which are unable to capture both
smooth and abrupt changes simultaneously. Emerging evidence suggests
state-related changes in brain connectivity where dependence structure
alternates between a finite number of latent states or regimes. Another
challenge is the inference of full-brain networks with a large number of nodes. We
employ a Markov-switching dynamic factor model in which the state-driven
time-varying connectivity regimes of high-dimensional fMRI data are
characterized by lower-dimensional common latent factors, following a
regime-switching process. It enables a reliable, data-adaptive estimation of
change-points of connectivity regimes and the massive dependencies associated
with each regime. We consider the switching VAR to quantity the dynamic
effective connectivity. We propose a three-step estimation procedure: (1)
extracting the factors using principal component analysis (PCA), (2)
identifying dynamic connectivity states using factor-based switching vector
autoregressive (VAR) models in a state-space formulation, estimated via the
Kalman filter and the expectation-maximization (EM) algorithm, and (3) constructing the
high-dimensional connectivity metrics for each state based on subspace
estimates. Simulation results show that our proposed estimator outperforms the
K-means clustering of time-windowed coefficients, providing more accurate
estimation of regime dynamics and connectivity metrics in high-dimensional
settings. Applications to analyzing resting-state fMRI data identify dynamic
changes in brain states during rest, and reveal distinct directed connectivity
patterns and modular organization in resting-state networks across different
states.
Comment: 21 pages
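Step (1) of the three-step procedure above, extracting lower-dimensional common factors from high-dimensional fMRI series via PCA, can be sketched as follows. Dimensions and data are illustrative placeholders; the switching-VAR and Kalman/EM steps are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, k = 200, 50, 5             # time points, brain regions, number of factors (assumed)
Y = rng.standard_normal((T, N))  # stand-in for a region-by-time fMRI data matrix

Y -= Y.mean(axis=0)              # center each region's time series
U, S, Vt = np.linalg.svd(Y, full_matrices=False)
loadings = Vt[:k].T              # N x k matrix of leading principal directions
factors = Y @ loadings           # T x k common latent factor time series
```

The switching VAR of step (2) would then be fitted to `factors`, and step (3) would map the state-specific factor dynamics back to full-dimensional connectivity via the loading subspace.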
BGF-YOLO: Enhanced YOLOv8 with Multiscale Attentional Feature Fusion for Brain Tumor Detection
You Only Look Once (YOLO)-based object detectors have shown remarkable
accuracy for automated brain tumor detection. In this paper, we develop a novel
BGF-YOLO architecture by incorporating Bi-level Routing Attention (BRA),
Generalized Feature Pyramid Networks (GFPN), and a fourth detection head into
YOLOv8. BGF-YOLO contains an attention mechanism to focus more on important
features, and feature pyramid networks to enrich feature representation by
merging high-level semantic features with spatial details. Furthermore, we
investigate the effect of different attention mechanisms, feature fusion
strategies, and detection head architectures on brain tumor detection accuracy. Experimental
results show that BGF-YOLO yields a 4.7% absolute increase in mAP
over YOLOv8x and achieves state-of-the-art performance on the brain tumor detection
dataset Br35H. The code is available at https://github.com/mkang315/BGF-YOLO
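The feature-pyramid idea mentioned above, enriching representations by merging high-level semantic features with spatial details, can be illustrated with a generic FPN-style merge: a coarse map is upsampled and fused element-wise with a finer map. This is a hedged sketch of the general technique, not the actual GFPN/BRA implementation in BGF-YOLO.

```python
import numpy as np

def upsample2x(x: np.ndarray) -> np.ndarray:
    """Nearest-neighbour 2x upsampling of an (H, W, C) feature map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

# Hypothetical feature maps (values are placeholders)
high_level = np.ones((4, 4, 8))        # coarse, semantically rich features
fine_level = np.full((8, 8, 8), 0.5)   # finer map retaining spatial detail

merged = upsample2x(high_level) + fine_level  # element-wise pyramid fusion
```

Real detectors typically apply learned 1x1 convolutions before such a merge; the bare addition here only demonstrates the resolution-matching step.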