From 'publish or perish' to 'partner with patients or perish'
"The medical writing world is evolving rapidly, but a focus on patient partnerships provides writers with a ‘North Star’ – a strong, sound and stable guide for navigating their future"This article will help medical writers prepare to partner, ethically and effectively, with patients when working on publications, decentralised clinical trials, and diversity, equity, and inclusion initiatives.Full special edition of ICT 2024 medical writing here:https://www.calameo.com/read/00611338545666a5c40f0?page=33</p
Time required for mobile tagging of video features needed to run the machine learning models.
We report the average video length (for all participants, for participants with ASD only, and for participants without ASD only), the average time required to watch and score the videos, and the average time required for the scoring component alone.
Performance for LR5 by age.
LR5 exhibited the highest classifier performance (89% accuracy) of the 8 classifiers tested (Table 1). This model performed best on children between the ages of 2 and 6 years. (A) shows the performance of LR5 across 4 age ranges, and (B) provides the ROC curve for LR5’s performance for children ages 2 to 6 years. Table 3 provides additional details, including the number of affected and unaffected control participants within each age range. AUC, area under the curve; LR5, 5-feature logistic regression classifier; ROC, receiver operating characteristic.
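The AUC behind an ROC curve like LR5’s can be read as a rank statistic: the probability that a randomly chosen affected child receives a higher classifier score than a randomly chosen unaffected child. A minimal sketch of that computation; the labels and scores below are invented for illustration, not the study’s data:

```python
def auc_score(y_true, y_score):
    # Rank-based AUC: fraction of (positive, negative) pairs in which the
    # positive example gets the higher score; ties count as half a win.
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 1 = ASD, 0 = non-ASD; scores are hypothetical classifier probabilities
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_score = [0.92, 0.81, 0.44, 0.77, 0.38, 0.55, 0.21, 0.12]
print(auc_score(y_true, y_score))  # 0.9375
```

Sweeping the decision threshold over these scores traces out the ROC curve itself; the function above summarizes it in a single number.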
Eight machine learning classifiers used for video analysis and autism detection.
The models were constructed from an analysis of archived medical records generated through the use of standard instruments, including the ADOS and the ADI-R. All 8 models identified a small, stable subset of features in cross-validation experiments. The total numbers of affected and unaffected control participants for training and testing are provided, together with measures of accuracy on the test set. Four models were tested on independent datasets and are marked “Test.” The remaining 4, marked “Train/test,” used the given dataset with an 80%:20% train:test split to calculate test accuracy on the 20% held-out test set. The classifiers are named “model type”-“number of features”.
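For the “Train/test” models, the 80%:20% protocol can be sketched as a shuffled holdout split. This is a generic illustration, not the authors’ code; the seed and sample count are arbitrary:

```python
import random

def train_test_split_80_20(samples, seed=0):
    # Shuffle a copy of the data, then hold out the final 20% as the
    # test set; the first 80% is used for training.
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * 0.8)
    return shuffled[:cut], shuffled[cut:]

train, test = train_test_split_80_20(range(100))
print(len(train), len(test))  # 80 20
```

Fixing the seed makes the split reproducible, which matters when the reported test accuracy is tied to one specific 20% holdout.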
Accuracy across different permutations of 9 raters for 50 videos.
We performed this analysis to determine the optimal number of video raters (the minimum needed to reach a consensus on classification) while maintaining accuracy without loss of power. Nine raters analyzed and generated feature tags for a subset of n = 50 videos (n = 25 ASD, n = 25 non-ASD), on which we ran the ADTree8 classifier (Table 1). The increase in accuracy conferred by the use of 9 versus 3 raters was not significant. We therefore set the optimal rater number to 3 for subsequent analyses. ADTree8, 8-feature alternating decision tree; ASD, autism spectrum disorder.
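With 3 raters, the consensus rule reduces to 2-of-3 agreement on the classifier output. A minimal sketch of such a majority vote (the label strings are placeholders):

```python
from collections import Counter

def majority_label(labels):
    # Consensus class across raters' classifier outputs: with 3 raters,
    # any 2-of-3 agreement decides the final label for the video.
    return Counter(labels).most_common(1)[0][0]

print(majority_label(["ASD", "ASD", "non-ASD"]))  # ASD
```

Using an odd number of raters guarantees a strict majority for a binary outcome, which is one practical reason 3 is a natural minimum.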
ROC curve for LR-EN-VF showing performance on test data along with an ROC for L2 loss with no feature reduction.
The former (LR-EN-VF) selected 8 of the 30 video features. AUC, area under the curve; LR-EN-VF, logistic regression with an elastic net penalty; ROC, receiver operating characteristic.
Overall procedure for rapid and mobile classification of ASD versus non-ASD and performance of models from Table 1.
Participants were recruited via crowdsourcing and provided video by direct upload or via a preexisting YouTube link. A minimum of 3 video raters (enough for a majority rule) tagged all features, generating feature vectors used to run each of the 8 classifiers automatically. The sensitivity and specificity based on the majority outcome generated by the 3 raters on 162 videos (119 with autism) are provided. Highlighted in yellow is the best-performing model, LR5. ADTree7, 7-feature alternating decision tree; ADTree8, 8-feature alternating decision tree; ASD, autism spectrum disorder; LR5, 5-feature logistic regression classifier; LR9, 9-feature logistic regression classifier; LR10, 10-feature logistic regression classifier; SVM5, 5-feature support vector machine; SVM10, 10-feature support vector machine; SVM12, 12-feature support vector machine.
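Sensitivity and specificity as reported here are standard confusion-matrix ratios over the per-video majority outcomes. A self-contained sketch with hypothetical labels (not the study’s 162 videos):

```python
def sensitivity_specificity(y_true, y_pred):
    # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
    # with 1 = ASD and 0 = non-ASD as the majority-vote class per video.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical outcomes for 8 videos, purely for illustration
sens, spec = sensitivity_specificity([1, 1, 1, 1, 0, 0, 0, 0],
                                     [1, 1, 1, 0, 0, 0, 1, 0])
print(sens, spec)  # 0.75 0.75
```

Reporting both matters here because the dataset is imbalanced (119 of 162 videos are from children with autism), so raw accuracy alone would favor models that over-predict the majority class.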
Demographic information on children in the collected home videos.
We collected N = 193 home videos (119 ASD, 74 non-ASD) for analysis and excluded 31 because of inadequate labeling or video quality. A randomly chosen subset of 25 autism and 25 non-autism videos was used to empirically define an optimal number of raters. Video feature tagging for machine learning was then performed on the 162 remaining home videos.
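The balanced 25 + 25 calibration subset can be drawn by sampling each class separately (stratified random sampling). The IDs and seed below are placeholders, not the study’s data:

```python
import random

rng = random.Random(7)  # arbitrary seed for this illustration
asd = [f"asd_{i:03d}" for i in range(119)]
non_asd = [f"non_{i:03d}" for i in range(74)]

# Sample 25 videos from each class so the rater-calibration set is
# balanced even though the full collection is not (119 vs 74).
subset = rng.sample(asd, 25) + rng.sample(non_asd, 25)
print(len(subset))  # 50
```

Sampling per class, rather than 50 from the pooled videos, guarantees the exact 25/25 balance regardless of the underlying class ratio.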
Model performance by age.
This table details the accuracy, sensitivity, specificity, precision, and recall for 8 classifiers (Table 1) and for 4 age ranges found in evaluation of 162 home videos with an average length of 2 minutes. We also provide the IRA, which indicates the frequency with which the model results from all 3 raters’ feature tags agreed on class. The top-performing classifier was LR5, which yielded an accuracy of 88.9%, sensitivity of 94.5%, and specificity of 77.4%. Other notable classifiers were SVM5 and LR10, which yielded 85.4% and 84.8% accuracy, respectively. These 3 best-performing classifiers showed improved classification power within certain age ranges.
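The IRA column can be read as exact three-way agreement: the fraction of videos for which the classifier output derived from each rater’s feature tags is identical. A minimal sketch, with hypothetical per-video outputs:

```python
def inter_rater_agreement(per_video_outputs):
    # Fraction of videos where the classifier outputs produced from all
    # 3 raters' feature tags agree on the class (1 = ASD, 0 = non-ASD).
    agree = sum(len(set(outputs)) == 1 for outputs in per_video_outputs)
    return agree / len(per_video_outputs)

# Hypothetical outputs for 4 videos x 3 raters, for illustration only
print(inter_rater_agreement([(1, 1, 1), (1, 0, 1), (0, 0, 0), (1, 1, 0)]))  # 0.5
```

This is stricter than the majority-vote accuracy: a video can be classified correctly by 2-of-3 consensus while still counting against IRA because one rater’s tags led the model to disagree.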
Feature-to-classifier mapping.
Video analysts scored each video with 30 features. This matrix shows which feature corresponds to which classifier. Darker colored features indicate higher overlap, and lighter colors indicate lower overlap across the models. The features are rank ordered according to their frequency of use across the 8 classifiers. Further details about the classifiers are provided in Table 1. The bottom 7 features were not part of the machine learning process but were chosen because of their potential relationship with the autism phenotype and for use in further evaluation of the models’ feature sets when constructing a video feature–specific classifier. ADTree7, 7-feature alternating decision tree; ADTree8, 8-feature alternating decision tree; LR5, 5-feature logistic regression classifier; LR10, 10-feature logistic regression classifier; SVM5, 5-feature support vector machine; SVM10, 10-feature support vector machine; SVM12, 12-feature support vector machine.
