9,748 research outputs found

    The group-based social skills training SOSTA-FRA in children and adolescents with high functioning autism spectrum disorder - study protocol of the randomised, multi-centre controlled SOSTA-net trial

    Background: Group-based social skills training (SST) has repeatedly been recommended as the treatment of choice in high-functioning autism spectrum disorder (HFASD). To date, no sufficiently powered randomised controlled trial has been performed to establish the efficacy and safety of SST in children and adolescents with HFASD. In this randomised, multi-centre, controlled trial with 220 children and adolescents with HFASD, it is hypothesized that add-on group-based SST using the 12-week manualised SOSTA-FRA program will result in improved social responsiveness (measured by the parent-rated Social Responsiveness Scale, SRS) compared to treatment as usual (TAU). It is further expected that parent- and self-reported anxiety and depressive symptoms will decline and pro-social behaviour will increase in the treatment group. A neurophysiological study in the Frankfurt HFASD subgroup will be performed pre- and post-treatment to assess changes in neural function induced by SST versus TAU. Methods/design: The SOSTA-net trial is designed as a prospective, randomised, multi-centre, controlled trial with two parallel groups. The primary outcome is change in SRS score directly after the intervention and at 3-month follow-up. Several secondary outcome measures are also obtained. The target sample consists of 220 individuals with ASD, included at the six study centres. Discussion: This study is currently one of the largest trials on SST in children and adolescents with HFASD worldwide. Compared to recent randomised controlled studies, our study offers several advantages with regard to inclusion and exclusion criteria, study methods, and the therapeutic approach chosen, which can easily be implemented in non-university clinical settings. Trial registration: ISRCTN94863788 - SOSTA-net: Group-based social skills training in children and adolescents with high functioning autism spectrum disorder.

    Neural network-based colonoscopic diagnosis using on-line learning and differential evolution

    In this paper, on-line training of neural networks is investigated in the context of computer-assisted colonoscopic diagnosis. A memory-based adaptation of the learning rate for on-line back-propagation (BP) is proposed and used to seed an on-line evolution process that applies a differential evolution (DE) strategy to (re-)adapt the neural network to modified environmental conditions. Our approach views on-line training as tracking the changing location of an approximate solution of a pattern-based, and thus dynamically changing, error function. The proposed hybrid strategy is compared with other standard training methods that have traditionally been used for training neural networks off-line. Results in interpreting colonoscopy images and frames of video sequences are promising and suggest that networks trained with this strategy detect malignant regions of interest accurately.
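
    The abstract describes the general recipe at a high level: online back-propagation whose learning rate is adapted from a short memory of recent errors, with a differential evolution step re-adapting the weights when the environment shifts. The sketch below illustrates that recipe on a toy stream; the network size, DE settings, error memory, and drift trigger are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' exact algorithm): online backprop with a
# memory-based learning-rate adaptation, plus a differential-evolution (DE)
# re-adaptation over the flattened weights when recent error drifts upward.
import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in, n_hid, n_out):
    return {"W1": rng.normal(0, 0.5, (n_in, n_hid)), "b1": np.zeros(n_hid),
            "W2": rng.normal(0, 0.5, (n_hid, n_out)), "b2": np.zeros(n_out)}

def forward(net, x):
    h = np.tanh(x @ net["W1"] + net["b1"])
    return h, 1.0 / (1.0 + np.exp(-(h @ net["W2"] + net["b2"])))  # sigmoid output

def online_bp_step(net, x, y, lr):
    """One online back-propagation update on a single pattern (x, y)."""
    h, out = forward(net, x)
    err = out - y
    d_out = err * out * (1 - out)
    d_hid = (d_out @ net["W2"].T) * (1 - h ** 2)
    net["W2"] -= lr * np.outer(h, d_out); net["b2"] -= lr * d_out
    net["W1"] -= lr * np.outer(x, d_hid); net["b1"] -= lr * d_hid
    return float(np.sum(err ** 2))

def flatten(net):
    return np.concatenate([net[k].ravel() for k in ("W1", "b1", "W2", "b2")])

def unflatten(vec, like):
    out, i = {}, 0
    for k in ("W1", "b1", "W2", "b2"):
        n = like[k].size
        out[k] = vec[i:i + n].reshape(like[k].shape); i += n
    return out

def buffer_error(vec, like, X, Y):
    net = unflatten(vec, like)
    return float(np.mean([(forward(net, x)[1] - y) ** 2 for x, y in zip(X, Y)]))

def de_readapt(net, X, Y, pop_size=20, gens=30, F=0.5, CR=0.9):
    """DE/rand/1/bin over the weight vector, seeded by the current network."""
    base = flatten(net)
    pop = [base] + [base + rng.normal(0, 0.1, base.size) for _ in range(pop_size - 1)]
    fit = [buffer_error(p, net, X, Y) for p in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            trial = np.where(rng.random(base.size) < CR, pop[a] + F * (pop[b] - pop[c]), pop[i])
            f = buffer_error(trial, net, X, Y)
            if f < fit[i]:
                pop[i], fit[i] = trial, f
    return unflatten(pop[int(np.argmin(fit))], net)

# Online loop: adapt the learning rate from a short error memory and fall back
# to DE when recent error rises (a changed "environment").
net, lr, memory = init_net(8, 6, 1), 0.1, []
for t in range(2000):
    x = rng.normal(size=8)
    y = np.array([float(x[0] + x[1] > 0)])        # toy, possibly drifting target
    e = online_bp_step(net, x, y, lr)
    memory.append((x, y, e)); memory = memory[-50:]
    errs = [m[2] for m in memory]
    lr = float(np.clip(lr * (1.05 if e < np.mean(errs) else 0.7), 1e-3, 0.5))
    if len(memory) == 50 and np.mean(errs[-10:]) > 1.5 * np.mean(errs[:10]):
        net = de_readapt(net, [m[0] for m in memory], [m[1] for m in memory])
        memory = []                                # restart the error memory after re-adaptation
```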

    Sketch-a-Net that Beats Humans

    We propose a multi-scale multi-channel deep neural network framework that, for the first time, yields sketch recognition performance surpassing that of humans. Our superior performance is a result of explicitly embedding the unique characteristics of sketches in our model: (i) a network architecture designed for sketch rather than natural photo statistics, (ii) a multi-channel generalisation that encodes sequential ordering in the sketching process, and (iii) a multi-scale network ensemble with joint Bayesian fusion that accounts for the different levels of abstraction exhibited in free-hand sketches. We show that state-of-the-art deep networks specifically engineered for photos of natural objects fail to perform well on sketch recognition, regardless of whether they are trained using photos or sketches. Our network, on the other hand, not only delivers the best performance on the largest human sketch dataset to date, but is also small in size, making efficient training possible using just CPUs. Comment: Accepted to BMVC 2015 (oral).
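
    To make the ensemble idea concrete, here is a minimal sketch of a small sketch-oriented CNN evaluated at several input scales and fused at test time. It stands in plain log-probability averaging for the paper's joint Bayesian fusion and a single binary channel for its stroke-order channels; the layer sizes, input scales, and 250-class output are assumptions for illustration, not the published architecture.

```python
# Minimal sketch (assumed architecture): one small CNN per input scale, with the
# ensemble fused by averaging class log-probabilities at test time.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SketchCNN(nn.Module):
    """Large first-layer filters suit sparse sketch strokes better than photo nets."""
    def __init__(self, n_classes=250, in_channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=15, stride=3), nn.ReLU(),
            nn.MaxPool2d(3, 2),
            nn.Conv2d(64, 128, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(3, 2),
            nn.Conv2d(128, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(256, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def fused_log_probs(models, scales, x):
    """Rescale the sketch to each model's scale and average the log-probabilities."""
    logps = []
    for model, s in zip(models, scales):
        xi = F.interpolate(x, size=(s, s), mode="bilinear", align_corners=False)
        logps.append(F.log_softmax(model(xi), dim=1))
    return torch.stack(logps).mean(dim=0)

# Usage: three copies of the network at three scales, fused at test time.
scales = [256, 224, 192]
models = [SketchCNN() for _ in scales]
sketch = torch.rand(1, 1, 256, 256)            # dummy single-channel sketch image
print(fused_log_probs(models, scales, sketch).argmax(dim=1))
```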

    Neural Network Ensembles for Time Series Prediction

    Rapidly evolving businesses generate massive amounts of time-stamped data sequences and drive a demand for massively multivariate time series analysis. For such data, the predictive engine shifts from historical auto-regression to modelling complex non-linear relationships between multidimensional features and the time series outputs. To exploit these time-disparate relationships for improved time series forecasting, the system requires a flexible methodology for combining multiple prediction models applied to multiple versions of the temporal data, in the presence of a significant noise component and variable temporal depth of prediction. In response to this challenge, a composite time series prediction model is proposed which combines the strengths of multiple neural network (NN) regressors applied to temporally varied feature subsets, with post-processing smoothing of the outputs to further reduce noise. The key strength of the model is its excellent adaptability and generalisation ability, achieved through a highly diversified set of complementary NN models. The model has been evaluated within the NISIS Competition 2006 and the NN3 Competition 2007, concerning prediction of univariate and multivariate time series. It showed the best predictive performance among 12 competing models in NISIS 2006 and is under evaluation within the NN3 2007 Competition.
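
    As a concrete illustration of the composite model's ingredients, the sketch below trains MLP regressors on lagged feature windows of different temporal depths, averages their one-step-ahead predictions, and applies a short moving-average smoothing to the combined output. The lag depths, network sizes, and smoothing window are assumptions, not the competition configuration.

```python
# Minimal sketch: an ensemble of MLP regressors over lag windows of different
# temporal depths, with averaged predictions smoothed by a short moving average.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
series = np.sin(np.arange(500) * 0.1) + 0.2 * rng.normal(size=500)  # toy noisy series

def make_lagged(series, depth):
    """Rows of `depth` past values as features, the next value as the target."""
    X = np.array([series[i - depth:i] for i in range(depth, len(series))])
    return X, series[depth:]

depths = [4, 8, 16]                       # temporally varied feature subsets
split = 400
models = []
for d in depths:
    X, y = make_lagged(series[:split], d)
    m = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    models.append(m.fit(X, y))

# One-step-ahead predictions over the held-out tail, averaged across the ensemble.
preds = []
for t in range(split, len(series)):
    per_model = [m.predict(series[t - d:t].reshape(1, -1))[0]
                 for m, d in zip(models, depths)]
    preds.append(np.mean(per_model))

smoothed = np.convolve(preds, np.ones(3) / 3, mode="same")   # post-processing smoothing
rmse = np.sqrt(np.mean((smoothed - series[split:]) ** 2))
print(f"ensemble RMSE on hold-out: {rmse:.3f}")
```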

    Reinforcement Learning using Augmented Neural Networks

    Neural networks allow Q-learning reinforcement learning agents such as deep Q-networks (DQN) to approximate complex mappings from state spaces to value functions. However, this also brings drawbacks compared to other function approximators such as tile coding or its generalisation, radial basis functions (RBF), because neural networks introduce instability as a side effect of their globalised updates. This instability does not vanish even in neural networks without hidden layers. In this paper, we show that simple modifications to the structure of the neural network can improve the stability of DQN learning when a multi-layer perceptron is used for function approximation. Comment: 7 pages; two columns; 4 figures.
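
    A minimal sketch of the localisation idea behind approximators such as RBFs: map the state through fixed radial basis functions so that each temporal-difference update only moves value estimates near the visited state, in contrast to the globalised updates of a plain multi-layer perceptron. The toy corridor environment and all constants are illustrative assumptions rather than the paper's experimental setup.

```python
# Minimal sketch: tabular-free Q-learning with a fixed RBF feature layer and a
# linear Q-head per action, so updates stay local to the visited region.
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 20, 2           # corridor of length 20; actions: left / right
centers = np.linspace(0, N_STATES - 1, 10)
width = 1.5

def rbf_features(s):
    """Gaussian bumps centred along the corridor; only a few are active per state."""
    return np.exp(-((s - centers) ** 2) / (2 * width ** 2))

W = np.zeros((N_ACTIONS, len(centers)))   # linear Q-head on top of RBF features

def q_values(s):
    return W @ rbf_features(s)

alpha, gamma, eps = 0.1, 0.95, 0.1
for episode in range(300):
    s = 0
    for step in range(100):
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(q_values(s)))
        s_next = int(np.clip(s + (1 if a == 1 else -1), 0, N_STATES - 1))
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        done = s_next == N_STATES - 1
        target = r if done else r + gamma * np.max(q_values(s_next))
        td_error = target - q_values(s)[a]
        W[a] += alpha * td_error * rbf_features(s)   # update stays local to nearby RBFs
        s = s_next
        if done:
            break
```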

    Improving Language Modelling with Noise-Contrastive Estimation

    Neural language models do not scale well when the vocabulary is large. Noise-contrastive estimation (NCE) is a sampling-based method that allows for fast learning with large vocabularies. Although NCE has shown promising performance in neural machine translation, it was considered an unsuccessful approach for language modelling, and a thorough investigation of the hyperparameters of NCE-based neural language models was also missing. In this paper, we showed that NCE can be a successful approach to neural language modelling when the hyperparameters of the neural network are tuned appropriately. We introduced the 'search-then-converge' learning rate schedule for NCE and designed a heuristic that specifies how to use this schedule. The impact of other important hyperparameters, such as the dropout rate and the weight initialisation range, was also demonstrated. We showed that appropriately tuned NCE-based neural language models outperform the state-of-the-art single-model methods on a popular benchmark.
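
    A minimal sketch of a 'search-then-converge' learning rate schedule of the kind the abstract refers to: the rate stays close to its initial value during an early search phase and then decays roughly as 1/t. The functional form and constants below are illustrative assumptions, not the paper's tuned values.

```python
# Minimal sketch of a search-then-converge schedule: flat while step << tau,
# then decaying roughly as lr0 * tau / step for step >> tau.
def search_then_converge(step, lr0=1.0, tau=5000):
    """lr(t) = lr0 / (1 + t / tau)."""
    return lr0 / (1.0 + step / tau)

# Usage: the rate stays high while the model "searches", then anneals.
for step in (0, 1000, 5000, 50000):
    print(step, round(search_then_converge(step), 4))
```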
