Evaluating the Migration Rates in Percutaneous Spinal Cord Stimulation Trials
Introduction: Spinal cord stimulation (SCS) provides symptom reduction in patients with chronic low back pain. The most common complication of SCS is migration of the percutaneous lead from its initial placement site. Our goal was to determine whether using skin anchors during trial implantation reduces SCS trial lead migration rates compared with historical controls.
Methods: 197 patients who underwent SCS trial placement at Thomas Jefferson University Hospital between 2015 and 2018 were considered for this study. Complete data, including device impedance measurements and pre- and post-trial x-rays, were collected on 12 historical control patients and 19 patients whose leads were secured using an anchor.
Results: The mean degree of lead migration was not statistically significantly different between the anchor group and control group for the right lead (0.71 mm; 95% CI -6.24 to 7.66; p=0.84) or the left lead (-0.85 mm; 95% CI -7.70 to 6.00; p=0.80). Additionally, there was no statistical difference in the change in device impedance from the first day of the trial to the trial removal date between the anchor group and control group (-47.35 Ohms; 95% CI -181.48 to 86.78; p=0.47).
Discussion: There was no significant reduction in lead migration or device impedance change in patients who underwent trial SCS with leads secured with an anchor compared with historical controls. This raises the question of whether the anchoring technique successfully reduces lead migration and emphasizes the importance of obtaining pre- and post-trial x-rays to evaluate lead migration.
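The group comparison reported above (a mean difference with a 95% confidence interval) can be sketched as follows. The abstract does not state the exact statistical procedure used, so this is a minimal illustration assuming a Welch-style standard error with a normal approximation; the sample values are invented, not the study's data.

```python
# Sketch of a two-group mean-difference comparison with a 95% CI, as in the
# lead-migration results above. Welch-style standard error (unequal variances),
# normal approximation for the interval. Sample values are hypothetical.
import math
import statistics

def mean_diff_ci(group_a, group_b, z=1.96):
    """Return (mean difference a-b, CI lower bound, CI upper bound)."""
    diff = statistics.mean(group_a) - statistics.mean(group_b)
    se = math.sqrt(statistics.variance(group_a) / len(group_a)
                   + statistics.variance(group_b) / len(group_b))
    return diff, diff - z * se, diff + z * se

anchor  = [0.5, 1.2, 0.8, 1.5, 0.3]   # hypothetical lead migration in mm
control = [0.9, 0.4, 1.1, 0.2, 0.7]
d, lo, hi = mean_diff_ci(anchor, control)
print(f"difference = {d:.2f} mm, 95% CI ({lo:.2f}, {hi:.2f})")
```

A CI straddling zero, as in both leads reported above, is what "not statistically significantly different" refers to.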
Stuttering Detection Using Speaker Representations and Self-supervised Contextual Embeddings
The adoption of advanced deep learning architectures in stuttering detection
(SD) tasks is challenging due to the limited size of the available datasets. To
this end, this work introduces the application of speech embeddings extracted
from pre-trained deep learning models trained on large audio datasets for
different tasks. In particular, we explore audio representations obtained using
emphasized channel attention, propagation, and aggregation time delay neural
network (ECAPA-TDNN) and Wav2Vec2.0 models trained on VoxCeleb and LibriSpeech
datasets respectively. After extracting the embeddings, we benchmark with
several traditional classifiers, such as K-nearest neighbour (KNN),
Gaussian naive Bayes, and neural networks, for the SD tasks. In comparison to
the standard SD systems trained only on the limited SEP-28k dataset, we obtain
a relative improvement of 12.08%, 28.71%, 37.9% in terms of unweighted average
recall (UAR) over the baselines. Finally, we have shown that combining two
embeddings and concatenating multiple layers of Wav2Vec2.0 can further improve
the UAR by up to 2.60% and 6.32% respectively.
Comment: Accepted in International Journal of Speech Technology, Springer 2023;
substantial overlap with arXiv:2204.0156
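Unweighted average recall (UAR), the metric these stuttering-detection results are reported in, is the mean of the per-class recalls, so minority disfluency classes weigh as much as majority ones. A minimal pure-Python sketch; the label names below are invented for illustration:

```python
# Unweighted average recall (UAR): average the recall of each class with equal
# weight, regardless of how many samples each class has.
from collections import defaultdict

def uar(y_true, y_pred):
    correct = defaultdict(int)  # correctly predicted samples per true class
    total = defaultdict(int)    # total samples per true class
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Hypothetical labels: 'rep' = repetition, 'blk' = block, 'flu' = fluent.
y_true = ["rep", "rep", "blk", "flu", "flu", "flu"]
y_pred = ["rep", "blk", "blk", "flu", "flu", "rep"]
print(uar(y_true, y_pred))  # mean of per-class recalls 1/2, 1/1, 2/3
```

This is why UAR is preferred over plain accuracy on imbalanced datasets such as SEP-28k, where fluent speech dominates.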
Robust Stuttering Detection via Multi-task and Adversarial Learning
By automatic detection and identification of stuttering, speech pathologists
can track the progression of disfluencies of persons who stutter (PWS). In this
paper, we investigate the impact of multi-task (MTL) and adversarial learning
(ADV) to learn robust stutter features. This is the first-ever preliminary
study where MTL and ADV have been employed in stuttering identification (SI).
We evaluate our system on the SEP-28k stuttering dataset, consisting of
approximately 20 hours of data from 385 podcasts. Our methods show promising results and
outperform the baseline in various disfluency classes. We achieve up to 10%,
6.78%, and 2% improvement in repetitions, blocks, and interjections
respectively over the baseline.
Comment: Accepted in the European Signal Processing Conference (EUSIPCO) 2022
End-to-End and Self-Supervised Learning for ComParE 2022 Stuttering Sub-Challenge
In this paper, we present end-to-end and speech embedding based systems
trained in a self-supervised fashion to participate in the ACM Multimedia 2022
ComParE Challenge, specifically the stuttering sub-challenge. In particular, we
exploit the embeddings from the pre-trained Wav2Vec2.0 model for stuttering
detection (SD) on the KSoF dataset. After embedding extraction, we benchmark
with several methods for SD. Our proposed self-supervised based SD system
achieves a UAR of 36.9% and 41.0% on validation and test sets respectively,
which is 31.32% (validation set) and 1.49% (test set) higher than the best
(DeepSpectrum) challenge baseline (CBL). Moreover, we show that concatenating
layer embeddings with Mel-frequency cepstral coefficients (MFCCs) features
further improves the UAR by 33.81% and 5.45% on validation and test sets
respectively over the CBL. Finally, we demonstrate that summing information
across all the layers of Wav2Vec2.0 surpasses the CBL by a relative margin of
45.91% and 5.69% on validation and test sets respectively.
Grand-challenge: Computational Paralinguistics Challenge
Comment: Accepted in ACM MM 2022 Conference: Grand Challenges. © {Owner/Author | ACM} 2022. This is the author's version of the work. It is posted here for your personal use. Not for redistribution.
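The two layer-fusion strategies this abstract compares, concatenating MFCC features with a layer embedding versus summing embeddings element-wise across all layers, can be sketched in a few lines. The vectors below are toy stand-ins for real MFCC and Wav2Vec2.0 outputs, not model data:

```python
# Toy sketch of the two fusion strategies described above, with plain lists
# standing in for real feature vectors.

def concat_features(mfcc_vec, layer_vec):
    """Fuse by concatenation: dimensionality grows to len(a) + len(b)."""
    return mfcc_vec + layer_vec

def sum_layers(layer_vecs):
    """Fuse by element-wise summation across layers: dimensionality is kept."""
    return [sum(vals) for vals in zip(*layer_vecs)]

mfcc = [0.1, 0.2, 0.3]        # pretend 3-dim MFCC feature
layers = [[1.0, 2.0, 3.0],    # pretend per-layer embeddings
          [0.5, 0.5, 0.5]]

fused_concat = concat_features(mfcc, layers[0])  # 6-dim fused vector
fused_sum = sum_layers(layers)                   # 3-dim fused vector
```

Concatenation preserves layer-specific information at the cost of a larger classifier input; summation keeps the input size fixed while pooling evidence from every layer.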
Neurosurgical Applications of Magnetic Resonance Diffusion Tensor Imaging
Magnetic Resonance (MR) Diffusion Tensor Imaging (DTI) is a rapidly evolving technology that enables the visualization of neural fiber bundles, or white matter (WM) tracts. There are numerous neurosurgical applications for MR DTI, including: (1) tumor grading and staging; (2) pre-surgical planning (determination of resectability, determination of surgical approach, identification of WM tracts at risk); (3) intraoperative navigation (tumor resection that spares WM damage, epilepsy resection that spares WM damage, accurate location of deep brain stimulation structures); (4) post-operative assessment and monitoring (identification of WM damage, identification of tumor recurrence). Limitations of MR DTI include difficulty tracking small and crossing WM tracts, lack of standardized data acquisition and post-processing techniques, and practical equipment, software, and timing considerations. Overall, MR DTI is a useful tool for planning, performing, and following neurosurgical procedures, and has the potential to significantly improve patient care. Technological improvements and increased familiarity with DTI among clinicians are the next steps.
Measuring co-authorship and networking-adjusted scientific impact
Appraisal of the scientific impact of researchers, teams and institutions
with productivity and citation metrics has major repercussions. Funding and
promotion of individuals and survival of teams and institutions depend on
publications and citations. In this competitive environment, the number of
authors per paper is increasing and apparently some co-authors don't satisfy
authorship criteria. Listing of individual contributions is still sporadic and
also open to manipulation. Metrics are needed to measure the networking
intensity for a single scientist or group of scientists accounting for patterns
of co-authorship. Here, I define I1 for a single scientist as the number of
authors who appear in at least I1 papers of the specific scientist. For a group
of scientists or institution, In is defined as the number of authors who appear
in at least In papers that bear the affiliation of the group or institution. I1
depends on the number of papers authored Np. The power exponent R of the
relationship between I1 and Np categorizes scientists as solitary (R>2.5),
nuclear (R=2.25-2.5), networked (R=2-2.25), extensively networked (R=1.75-2) or
collaborators (R<1.75). R may be used to adjust for co-authorship networking
the citation impact of a scientist. In similarly provides a simple measure of
the effective networking size to adjust the citation impact of groups or
institutions. Empirical data are provided for single scientists and
institutions for the proposed metrics. Cautious adoption of adjustments for
co-authorship and networking in scientific appraisals may offer incentives for
more accountable co-authorship behaviour in published articles.
Comment: 25 pages, 5 figures
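The self-referential I1 definition above reads like an h-index over co-authors: the largest n such that n authors each appear in at least n of the scientist's papers. A minimal sketch under that h-index-style reading; the co-author lists are invented, and whether the scientist's own name should be excluded is left to the reader:

```python
# Sketch of the I1 metric, interpreted h-index-style: the largest n such that
# n co-authors each appear on at least n of the scientist's papers.
from collections import Counter

def i1(papers):
    """papers: list of author lists, one per paper of a single scientist."""
    counts = Counter(a for authors in papers for a in authors)
    freqs = sorted(counts.values(), reverse=True)  # papers per author, desc
    n = 0
    while n < len(freqs) and freqs[n] >= n + 1:
        n += 1
    return n

# Hypothetical co-author lists for five papers by one scientist:
papers = [
    ["A", "B"],
    ["A", "B", "C"],
    ["A", "D"],
    ["B", "C"],
    ["A"],
]
print(i1(papers))  # A and B each appear on >= 2 papers, so I1 = 2
```

The group-level In is the same computation run over all papers bearing an institution's affiliation rather than one scientist's papers.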