
    Developing Effective Online Training Tools For Maine Adaptive Sports And Recreation

    Background: Maine Adaptive Sports and Recreation (MASR) relies on volunteers to instruct participants with disabilities in a variety of adaptive sport programs. Volunteers must have a comprehensive understanding of participants’ health conditions to assist appropriately. MASR’s traditional training program lacked a formal curriculum and any assessment of volunteer learning. Our purpose was to create online learning modules and determine whether a massed or a distributed schedule resulted in better long-term retention. Methods: Two non-randomized groups of eleven adults each were assigned either an in-class, massed format (Group A) or an at-home, distributed schedule (Group B) for completing six online learning modules. Participant competence was assessed before, immediately after, and two weeks after completion of the learning modules. A global rating scale survey and a satisfaction survey were also completed to measure perceived confidence in using the information learned and to obtain feedback. Results: Post-hoc testing revealed that both groups showed a significant increase in competence after reviewing the modules, in both immediate-recall and long-term-retention scores compared to baseline. There was a significant difference between the groups’ pre-test scores, but no difference between their immediate-recall or long-term-retention scores. Both groups exceeded the MCIC of 2 points on the Global Rate of Change Scale, indicating a notable increase in confidence. Participants reported in the Volunteer Satisfaction Survey that the modules were beneficial and effective. Conclusion: Our findings suggest the online learning modules were effective regardless of the learning schedule applied. Both groups increased their competence and reported improved confidence with the presented material. A small sample size and demographic discrepancies between groups were limitations that prevent recommending one learning schedule over the other.

    Training Neural Networks for and by Interpolation

    In modern supervised learning, many deep neural networks are able to interpolate the data: the empirical loss can be driven to near zero on all samples simultaneously. In this work, we explicitly exploit this interpolation property for the design of a new optimization algorithm for deep learning, which we term Adaptive Learning-rates for Interpolation with Gradients (ALI-G). ALI-G retains the two main advantages of Stochastic Gradient Descent (SGD), which are (i) a low computational cost per iteration and (ii) good generalization performance in practice. At each iteration, ALI-G exploits the interpolation property to compute an adaptive learning-rate in closed form. In addition, ALI-G clips the learning-rate to a maximal value, which we prove to be helpful for non-convex problems. Crucially, in contrast to the learning-rate of SGD, the maximal learning-rate of ALI-G does not require a decay schedule, which makes it considerably easier to tune. We provide convergence guarantees of ALI-G in various stochastic settings. Notably, we tackle the realistic case where the interpolation property is satisfied up to some tolerance. We provide experiments on a variety of architectures and tasks: (i) learning a differentiable neural computer; (ii) training a wide residual network on the SVHN data set; (iii) training a Bi-LSTM on the SNLI data set; and (iv) training wide residual networks and densely connected networks on the CIFAR data sets. ALI-G produces state-of-the-art results among adaptive methods, and even yields comparable performance with SGD, which requires manually tuned learning-rate schedules. Furthermore, ALI-G is simple to implement in any standard deep learning framework and can be used as a drop-in replacement in existing code.
    Comment: Published at ICML 202
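A minimal sketch of the closed-form step the abstract describes: under interpolation the optimal loss is near zero, so a Polyak-style step size loss / ||grad||², clipped at a maximal learning rate, can be computed each iteration. This is an illustrative reimplementation on a toy problem, not the authors' released code; the names `alig_step`, `max_lr`, and `eps` are placeholders.

```python
def alig_step(params, grads, loss, max_lr=0.5, eps=1e-8):
    """One ALI-G-style update: adaptive step size in closed form,
    clipped at a maximal learning rate (no decay schedule needed)."""
    grad_sq_norm = sum(g * g for g in grads)
    # Interpolation assumption: optimal loss ~ 0, so the step size is
    # loss / (||grad||^2 + eps), clipped at max_lr.
    step = min(loss / (grad_sq_norm + eps), max_lr)
    return [p - step * g for p, g in zip(params, grads)]

# Toy usage: minimize f(w) = 0.5 * (w - 3)^2, whose minimum loss is 0,
# so the interpolation property holds exactly.
w = [0.0]
for _ in range(100):
    loss = 0.5 * (w[0] - 3.0) ** 2
    grads = [w[0] - 3.0]
    w = alig_step(w, grads, loss)
```

Note that `max_lr` is the only hyperparameter here; unlike SGD's learning rate, it is held constant rather than decayed.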

    On the adequacy of untuned warmup for adaptive optimization

    Adaptive optimization algorithms such as Adam are widely used in deep learning. The stability of such algorithms is often improved with a warmup schedule for the learning rate. Motivated by the difficulty of choosing and tuning warmup schedules, recent work proposes automatic variance rectification of Adam's adaptive learning rate, claiming that this rectified approach ("RAdam") surpasses the vanilla Adam algorithm and reduces the need for expensive tuning of Adam with warmup. In this work, we refute this analysis and provide an alternative explanation for the necessity of warmup based on the magnitude of the update term, which is of greater relevance to training stability. We then provide some "rule-of-thumb" warmup schedules, and we demonstrate that simple untuned warmup of Adam performs more-or-less identically to RAdam in typical practical settings. We conclude by suggesting that practitioners stick to linear warmup with Adam, with a sensible default being linear warmup over 2 / (1 − β₂) training iterations.
    Comment: AAAI 202
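The rule of thumb in the abstract can be sketched as a learning-rate multiplier: ramp linearly from zero to the base rate over 2 / (1 − β₂) steps, then hold. The function name and signature here are illustrative, not from the paper's code.

```python
def warmup_lr(step, base_lr, beta2=0.999):
    """Untuned linear warmup of Adam's learning rate over
    2 / (1 - beta2) steps, then constant at base_lr."""
    warmup_steps = 2.0 / (1.0 - beta2)  # e.g. 2000 steps for beta2 = 0.999
    return base_lr * min(1.0, (step + 1) / warmup_steps)
```

With Adam's default β₂ = 0.999 this gives a 2000-step warmup; a larger β₂ (a longer-memory second-moment estimate) automatically implies a longer warmup, which is the point of tying the schedule to β₂.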

    A computational approach to developing cost-efficient adaptive-threshold algorithms for EEG neurofeedback

    In electroencephalography (EEG) neurofeedback protocols, trainees receive feedback about the spectral power of the target brain-wave oscillation and are tasked to increase or decrease this feedback signal relative to a predetermined threshold. A recent computational analysis of a neurofeedback protocol showed that the placement of the threshold has a major impact on the learning rate, and that a threshold placed too low or too high leads to no learning or even unlearning, respectively. However, the optimal threshold placement is not known in real-life scenarios. Here, these analyses were extended to assess whether an adaptive-mean threshold procedure could produce faster learning curves. The results indicate that such a procedure is indeed superior to a fixed-mean procedure and that the distribution of asymptotic EEG power values converges to that obtained with the optimal-threshold procedure. Surprisingly, the adaptive-mean procedure leads to thresholds that are higher than the optimal one, which is explained by the increase in threshold lagging behind the increase in the likelihood of activation of the target neurons. To date, no computational model had been used to compute the cost-efficiency of EEG neurofeedback procedures. The current simulation (within the specific reinforcement schedule) demonstrated a 35% reduction in training time, which could translate into sizeable financial savings. This study demonstrates the utility of computational methods in neurofeedback research and opens up further developments that tackle specific neurofeedback protocols to assess their real-life cost-efficiency.
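One plausible reading of the adaptive-mean procedure is that the reward threshold for each trial is set to the mean band power over a recent window of trials, so it tracks the trainee's learning rather than staying fixed. The sketch below illustrates that idea only; the window size, the first-trial rule, and the function name are assumptions, not the paper's exact model.

```python
from collections import deque

def run_session(power_trials, window=20):
    """Adaptive-mean thresholding (illustrative): the threshold for each
    trial is the mean band power of the last `window` trials, and the
    feedback reward fires when the current power exceeds it."""
    history = deque(maxlen=window)
    rewards = []
    for power in power_trials:
        # First trial has no history; use the current power as threshold.
        threshold = sum(history) / len(history) if history else power
        rewards.append(power > threshold)
        history.append(power)
    return rewards
```

Because the threshold is a lagging average, a steadily improving trainee keeps exceeding it, which matches the abstract's observation that the adaptive threshold trails the true optimum.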