33 research outputs found

    Characteristics and functional impact of unplanned acute care unit readmissions during inpatient traumatic brain injury rehabilitation: a retrospective cohort study

    Background: This study investigated the incidence, characteristics and functional outcomes associated with unplanned Acute Care Unit Readmissions (ACUR) during inpatient traumatic brain injury (TBI) rehabilitation in an Asian cohort. Methods: A retrospective review of electronic medical records from a single rehabilitation unit was conducted from 1 January 2012 to 31 December 2014. Inclusion criteria were first TBI, age >18 years and admission within 6 months of TBI. ACUR were characterized into neurological, medical or neurosurgical subtypes. The main outcome measure was discharge Functional Independence Measure (FIM™). Secondary outcomes included rehabilitation length of stay (RLOS). Results: Of 121 eligible TBI records, the incidence of ACUR was 14% (n = 17), comprising neurologic (76.5%) and medical (23.5%) subtypes occurring at a median of 13 days (IQR 6, 28.5) after rehabilitation admission. Patients without ACUR had a significantly higher mean (SD) admission FIM score than those with ACUR [ACUR-negative 63.4 (21.1) vs. ACUR-positive 50.53 (25.4), p = 0.026]. Discharge FIM was significantly lower in those with ACUR than in those without [ACUR-positive 65.8 (31.4) vs. ACUR-negative 85.4 (21.1), p = 0.023]. Furthermore, a significant near-doubling of RLOS was noted in ACUR patients compared to non-ACUR counterparts [ACUR-positive median 55 days (IQR 34.50, 87.50) vs. ACUR-negative median 28 days (IQR 16.25, 40.00), p = 0.002]. Conclusions: This study highlights the significant negative functional impact of ACUR and the associated lengthening of rehabilitation duration in inpatient TBI rehabilitation.
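    The abstract does not state which statistical tests were used; the sketch below is a hypothetical reconstruction of such a between-group comparison (Welch's t-test for the mean FIM scores, Mann-Whitney U for the skewed length-of-stay), using simulated placeholder data rather than the study's patient records.

```python
# Hedged sketch: comparing admission FIM and RLOS between ACUR-positive and
# ACUR-negative groups, analogous to the analysis described above. The arrays
# below are simulated placeholders, not the study's data, and the choice of
# tests is an assumption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
fim_acur_neg = rng.normal(63.4, 21.1, size=104)   # admission FIM, no readmission
fim_acur_pos = rng.normal(50.5, 25.4, size=17)    # admission FIM, with readmission

t, p = stats.ttest_ind(fim_acur_neg, fim_acur_pos, equal_var=False)
print(f"Admission FIM: t = {t:.2f}, p = {p:.3f}")

# RLOS is reported as median (IQR), so a rank-based test suits such skewed data.
rlos_neg = rng.gamma(shape=2.0, scale=14.0, size=104)
rlos_pos = rng.gamma(shape=2.0, scale=28.0, size=17)
u, p_u = stats.mannwhitneyu(rlos_neg, rlos_pos)
print(f"RLOS: U = {u:.0f}, p = {p_u:.3f}")
```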

    Speech Emotion Classification with Deep Learning

    • Deep neural networks have been applied to speech emotion recognition and are employed for automatic feature extraction from the audio signal. • Audio signals require a pre-processing step, as voice quality has a direct influence on speech emotion recognition results. • Acoustic speech features span the word, phoneme and phrase levels and include spectral features such as Mel-frequency Cepstral Coefficients (MFCC).
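    As an illustration of the pre-processing and acoustic feature extraction mentioned above, a minimal sketch using the librosa library is shown below; the file path and parameter values are illustrative and not taken from the paper.

```python
# Minimal sketch of MFCC extraction as a pre-processing step, assuming librosa.
# The sampling rate, number of coefficients and file name are assumptions.
import numpy as np
import librosa

def extract_mfcc(path: str, n_mfcc: int = 40) -> np.ndarray:
    """Load an audio file and return a fixed-length MFCC feature vector."""
    signal, sr = librosa.load(path, sr=16000)          # resample to 16 kHz
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                           # average over time frames

# features = extract_mfcc("speech_sample.wav")  # hypothetical input file
# The resulting vector could then be fed to a deep neural network classifier.
```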

    Inpatient rehabilitation outcomes after primary severe haemorrhagic stroke: a retrospective study comparing surgical versus non-surgical management

    Background: Haemorrhagic stroke, accounting for 10–20% of all strokes, often requires decompressive surgery as a life-saving measure in cases with massive oedema and raised intracranial pressure. This study was conducted to compare the demographics, characteristics and rehabilitation profiles of patients with severe haemorrhagic stroke who were managed surgically versus those managed non-surgically. Methods: A single-centre retrospective study of electronic medical records was conducted over a 3-year period from 1 January 2018 to 31 December 2020. The inclusion criteria were first haemorrhagic stroke, age >18 years and a Functional Independence Measure (FIM™) score of 18–40 upon admission to the rehabilitation centre. The primary outcome measure was discharge FIM™. Secondary outcome measures included the modified Rankin Scale (mRS), rehabilitation length of stay (RLOS) and complication rates. Results: A total of 107 patients' records were analysed; 45 (42.1%) patients received surgical intervention and 62 (57.9%) underwent non-surgical management. Surgically managed patients were significantly younger than non-surgically managed patients [mean age: surgical 53.1 (SD 12) vs. non-surgical 61.6 (SD 12.3) years, p = 0.001]. Admission FIM was significantly lower in the surgical group than in the non-surgical group [23.7 (SD 6.7) vs. 26.71 (SD 7.4), p = 0.031]. However, discharge FIM was similar between the groups [surgical 53.91 (SD 23.0) vs. non-surgical 57.0 (SD 23.6), p = 0.625]. Similarly, FIM gain [surgical 30.1 (SD 21.1) vs. non-surgical 30.3 (SD 21.1), p = 0.094] and RLOS [surgical 56.2 days (SD 21.5) vs. non-surgical 52.0 days (SD 23.4), p = 0.134] were not significantly different between groups. The majority of patients were discharged home (surgical 73.3% vs. non-surgical 74.2%, p = 0.920) despite a high level of dependency. Conclusions: Our findings suggest that patients with surgically managed haemorrhagic stroke, while younger and more dependent on admission to rehabilitation, achieved comparable FIM gains, discharge FIM and discharge-home rates after approximately 8 weeks of rehabilitation. This highlights the importance of rehabilitation, especially for surgically managed haemorrhagic stroke patients.

    NegCosIC: Negative Cosine Similarity-Invariance-Covariance Regularization for Few-Shot Learning

    Few-shot learning continues to pose a challenge, as it is inherently difficult for visual recognition models to generalize with limited labeled examples. When the training data is limited, training and fine-tuning the model become unstable and inefficient due to overfitting. In this paper, we introduce NegCosIC: Negative Cosine Similarity-Invariance-Covariance Regularization, a method that aims to improve mean accuracy by stabilizing the fine-tuning process and regularizing variance. NegCosIC incorporates a negative cosine similarity loss to stabilize the parameters of the feature extractor during fine-tuning. In addition, NegCosIC integrates an invariance loss and a covariance loss to regularize the embeddings and reduce overfitting. Experimental results demonstrate that NegCosIC brings substantial improvements over current state-of-the-art methods. An in-depth worst-case analysis also shows that NegCosIC outperforms state-of-the-art methods on worst-case accuracy. The proposed NegCosIC achieved 2.15% and 2.13% higher accuracy on miniImageNet 1-shot and 5-shot tasks, 3.22% and 2.67% higher accuracy on CUB 1-shot and 5-shot tasks, and 2.13% and 7.74% higher accuracy on CIFAR-FS 1-shot and 5-shot tasks in terms of worst-case accuracies.
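    A minimal sketch of the three loss terms named above, assuming PyTorch. The exact formulation and weighting used in NegCosIC may differ; the invariance and covariance terms below follow commonly used VICReg-style definitions as illustrative stand-ins.

```python
# Hedged sketch of negative cosine similarity, invariance and covariance losses.
# These are illustrative stand-ins, not the paper's verified formulation.
import torch
import torch.nn.functional as F

def negative_cosine_similarity(z_old: torch.Tensor, z_new: torch.Tensor) -> torch.Tensor:
    """Penalize drift of fine-tuned embeddings away from their previous values."""
    return -F.cosine_similarity(z_new, z_old.detach(), dim=-1).mean()

def invariance_loss(z_a: torch.Tensor, z_b: torch.Tensor) -> torch.Tensor:
    """Mean-squared distance between embeddings of two views of the same image."""
    return F.mse_loss(z_a, z_b)

def covariance_loss(z: torch.Tensor) -> torch.Tensor:
    """Push off-diagonal covariance entries toward zero to decorrelate features."""
    n, d = z.shape
    z = z - z.mean(dim=0)
    cov = (z.T @ z) / (n - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    return (off_diag ** 2).sum() / d
```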

    Speech emotion recognition with light gradient boosting decision trees machine

    Speech emotion recognition aims to identify the emotion expressed in speech by analyzing the audio signals. In this work, data augmentation is first performed on the audio samples to increase the number of samples for better model learning. The audio samples are comprehensively encoded as frequency- and temporal-domain features. In the classification, a light gradient boosting machine is leveraged, and its hyperparameters are tuned to determine the optimal settings. As the speech emotion recognition datasets are imbalanced, the class weights are set inversely proportional to the sample distribution, so minority classes are assigned higher class weights. The experimental results demonstrate that the proposed method outperforms the state-of-the-art methods with 84.91% accuracy on the EMO-DB dataset, 67.72% on the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) dataset, and 62.94% on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset.
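    A minimal sketch of the inverse-frequency class weighting described above, assuming the lightgbm package; the feature matrix and labels are placeholders for the augmented, feature-encoded audio samples.

```python
# Hedged sketch of class weights inversely proportional to class frequency,
# passed to a LightGBM classifier. X and y are placeholders, not the paper's data.
import numpy as np
from lightgbm import LGBMClassifier

def inverse_frequency_weights(y: np.ndarray) -> dict:
    """Return per-class weights inversely proportional to class frequency."""
    classes, counts = np.unique(y, return_counts=True)
    return {c: len(y) / (len(classes) * n) for c, n in zip(classes, counts)}

# X, y = ...  # augmented audio samples encoded as frequency/temporal features
# clf = LGBMClassifier(class_weight=inverse_frequency_weights(y))
# clf.fit(X, y)
```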

    Forex Daily Price Prediction Using Gated Recurrent Unit

    The foreign exchange (Forex) market is globally recognized as one of the most prominent financial markets. In this paper, we focus on three major currency pairs: EUR/USD, GBP/USD, and USD/CHF, spanning from January 2007 to July 2022. We employ a range of techniques, including technical indicators, feature scaling, and a Gated Recurrent Unit (GRU) network, to predict the closing price one day ahead. Our method demonstrates superior performance compared to other state-of-the-art approaches, achieving remarkably low Mean Absolute Errors (MAE) of 0.0046, 0.0063, and 0.0039 for the respective currency pairs EUR/USD, GBP/USD, and USD/CHF. Keywords: Forex price prediction, Recurrent neural networks, Gated Recurrent Unit.
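    A minimal sketch of a GRU-based one-day-ahead closing-price predictor, assuming TensorFlow/Keras; the window length, layer sizes and feature count are illustrative assumptions, not the paper's reported architecture.

```python
# Hedged sketch: GRU regressor over a sliding window of scaled prices and
# technical indicators, trained with MAE (the metric reported above).
import tensorflow as tf

WINDOW, N_FEATURES = 30, 8   # assumed: 30 past days, 8 scaled features per day

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.GRU(64),
    tf.keras.layers.Dense(1),   # next-day closing price
])
model.compile(optimizer="adam", loss="mae")
# model.fit(X_train, y_train, epochs=50, validation_data=(X_val, y_val))
```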

    DBDC-SSL: Deep Brownian Distance Covariance with Self-supervised Learning for Few-shot Image Classification

    Few-shot image classification remains a persistent challenge due to the intrinsic difficulty faced by visual recognition models in achieving generalization with limited training data. Existing methods primarily focus on exploiting marginal distributions and overlook the disparity between the product of the marginals and the joint characteristic function, which can lead to less robust feature representations. In this paper, we introduce DBDC-SSL, a method that aims to improve few-shot visual recognition models by learning a feature extractor that produces more robust image representations. To improve the robustness of the model, we integrate DeepBDC (DBDC) during training to learn better feature embeddings by effectively computing the disparity between the product of the marginals and the joint characteristic function of the features. To reduce overfitting and improve the generalization of the model, we utilize an auxiliary rotation loss for self-supervised learning (SSL) when training the feature extractor. The auxiliary rotation loss is derived from a pretext task in which input images are rotated by predefined angles and the model classifies the rotation angle from the features it generates. Experimental results demonstrate that DBDC-SSL outperforms current state-of-the-art methods on 4 common few-shot image classification benchmarks: miniImageNet, tieredImageNet, CUB and CIFAR-FS. For 5-way 1-shot and 5-way 5-shot tasks respectively, the proposed DBDC-SSL achieved accuracies of 68.64±0.43 and 86.02±0.28 on miniImageNet, 73.88±0.48 and 89.03±0.29 on tieredImageNet, 84.67±0.39 and 94.76±0.16 on CUB, and 75.60±0.44 and 88.49±0.31 on CIFAR-FS.
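    A minimal sketch of the auxiliary rotation pretext task described above, assuming PyTorch; the backbone and rotation-prediction head are placeholders, and the paper's exact loss weighting is not reproduced here.

```python
# Hedged sketch: rotate each image by 0/90/180/270 degrees and train the model
# to classify the rotation angle from its own features (auxiliary SSL loss).
import torch
import torch.nn.functional as F

def rotation_loss(backbone, rot_head, images: torch.Tensor) -> torch.Tensor:
    """Auxiliary rotation-prediction loss over a batch of images (N, C, H, W)."""
    rotated, labels = [], []
    for k in range(4):                                   # k * 90 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    rotated = torch.cat(rotated)
    labels = torch.cat(labels).to(images.device)
    logits = rot_head(backbone(rotated))                 # 4-way rotation logits
    return F.cross_entropy(logits, labels)
```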

    Enhanced Traffic Sign Recognition with Ensemble Learning

    With the growing trend in autonomous vehicles, accurate recognition of traffic signs has become crucial. This research focuses on the use of convolutional neural networks (CNNs) for traffic sign classification, specifically utilizing pre-trained ResNet50, DenseNet121, and VGG16 models. To enhance the accuracy and robustness of the model, the authors implement an ensemble learning technique with majority voting to combine the predictions of multiple CNNs. The proposed approach was evaluated on three different traffic sign datasets: the German Traffic Sign Recognition Benchmark (GTSRB), the Belgium Traffic Sign Dataset (BTSD), and the Chinese Traffic Sign Database (TSRD). The results demonstrate the efficacy of the ensemble approach, with recognition rates of 98.84% on the GTSRB dataset, 98.33% on the BTSD dataset, and 94.55% on the TSRD dataset.
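    A minimal sketch of majority voting over the three pre-trained CNNs named above, assuming PyTorch/torchvision; fine-tuning of the classifiers on the traffic sign datasets is omitted and the ensemble construction shown is an assumption.

```python
# Hedged sketch: combine per-model class predictions by majority vote.
import torch

def majority_vote(models, images: torch.Tensor) -> torch.Tensor:
    """Return the class predicted by most models for each image in the batch."""
    preds = torch.stack([m(images).argmax(dim=1) for m in models])  # (n_models, batch)
    return preds.mode(dim=0).values

# from torchvision import models
# ensemble = [models.resnet50(weights="DEFAULT"),
#             models.densenet121(weights="DEFAULT"),
#             models.vgg16(weights="DEFAULT")]  # would need fine-tuning on GTSRB etc.
# labels = majority_vote(ensemble, batch_of_images)
```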