
    Supervisor trainees' and their supervisors' perceptions of attainment of knowledge and skills. An empirical evaluation of a psychotherapy supervisor training programme

    Objectives. This study aimed to evaluate the success of a two-year, part-time training programme for psychotherapy supervisors. A second aim was to examine factors that might contribute to perceived knowledge and skills attainment during the training course. Design. This is a naturalistic, longitudinal study in which several measures are used to examine group process and outcome. Methods. Supervisor trainees’ (n=21) and their facilitators’ (n=6) ratings of learning (knowledge and skills), relations to the supervisor and supervision group, usage of the group, and supervisor style were collected at three time points. Results. The findings suggested that both trainees and their supervisors perceived that the trainees attained a substantial amount of knowledge and skills during the course. In accordance with the literature and expectations, the regression analysis suggested a strong negative association between a focus on group processes in the initial and middle phases of the training and perceived knowledge and skills attainment in the final phase. The expected positive association between relations among trainees in the supervision group during the first half of the training and perceived knowledge and skills attainment in the final part of the training was obtained, whilst the hypothesized significance of the relationship between trainee and supervisor did not receive support. Conclusions. The supervisory course seemed to provide training that allowed trainees to attain knowledge and skills that are necessary for psychotherapy supervisors. The results of this pilot study also emphasize the need for more research on learning in the context of group supervision in psychotherapy.

    Supervised learning with hybrid global optimisation methods


    Weakly-supervised localization of diabetic retinopathy lesions in retinal fundus images

    Convolutional neural networks (CNNs) show impressive performance for image classification and detection, extending heavily into the medical image domain. Nevertheless, medical experts are sceptical of these predictions, as the nonlinear multilayer computation that produces a classification outcome is not directly interpretable. Recently, approaches have been shown which help the user understand the discriminative regions within an image that are decisive for the CNN's prediction of a certain class. Although these approaches could help build trust in a CNN's predictions, they have rarely been demonstrated on medical image data, which often poses a challenge because the decision for a class relies on different lesion areas scattered around the entire image. Using the DiaretDB1 dataset, we show that on retina images the different lesion areas fundamental for diabetic retinopathy are detected at the image level with high accuracy, comparable to or exceeding supervised methods. At the lesion level, we achieve few false positives with high sensitivity, even though the network is trained solely on image-level labels that do not include information about existing lesions. Classifying between diseased and healthy images, we achieve an AUC of 0.954 on DiaretDB1. Comment: Accepted in Proc. IEEE International Conference on Image Processing (ICIP), 201
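The "discriminative region" approaches referenced in this abstract are typified by class activation mapping (CAM), in which a heatmap is formed as a classifier-weighted sum of the final convolutional feature maps. A minimal sketch of that idea with synthetic arrays (the shapes, names, and 0.5 threshold are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Synthetic stand-ins: feature maps from the last conv layer (C channels
# over an H x W spatial grid) and the linear classifier weights for one
# class. In a real network these come from the trained model.
rng = np.random.default_rng(0)
C, H, W = 8, 7, 7
feature_maps = rng.random((C, H, W))
class_weights = rng.random(C)

# CAM: per-pixel weighted sum over channels, then min-max normalisation
# so the heatmap lies in [0, 1].
cam = np.tensordot(class_weights, feature_maps, axes=1)   # shape (H, W)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Thresholding the heatmap yields a coarse candidate mask for lesion
# regions, even though only image-level labels were used in training.
mask = cam > 0.5
```

Upsampling `cam` to the input resolution then gives the localization map that can be compared against lesion-level annotations.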

    Subitizing with Variational Autoencoders

    Numerosity, the number of objects in a set, is a basic property of a given visual scene. Many animals develop the perceptual ability to subitize: the near-instantaneous identification of the numerosity in small sets of visual items. In computer vision, it has been shown that numerosity emerges as a statistical property in neural networks during unsupervised learning from simple synthetic images. In this work, we focus on more complex natural images using unsupervised hierarchical neural networks. Specifically, we show that variational autoencoders are able to spontaneously perform subitizing after training without supervision on a large number of images from the Salient Object Subitizing dataset. While our method is unable to outperform supervised convolutional networks for subitizing, we observe that the networks learn to encode numerosity as a basic visual property. Moreover, we find that the learned representations are likely invariant to object area, an observation in alignment with studies on biological neural networks in cognitive neuroscience.
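The variational autoencoders used here rest on two standard ingredients: the reparameterisation trick for sampling the latent code, and a KL-divergence penalty toward a standard normal prior. A minimal numerical sketch of both, with an arbitrary 16-dimensional latent space standing in for the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Encoder output for one image: mean and log-variance of the latent code
# q(z|x) = N(mu, diag(sigma^2)). Values here are random stand-ins.
mu = rng.normal(size=16)
log_var = rng.normal(size=16)

# Reparameterisation trick: z = mu + sigma * eps with eps ~ N(0, I),
# so gradients can flow through the sampling step to the encoder.
eps = rng.standard_normal(16)
z = mu + np.exp(0.5 * log_var) * eps

# Closed-form KL divergence between q(z|x) and the prior N(0, I);
# this term regularises the latent space during training.
kl = -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))
```

The full training loss adds a reconstruction term (e.g. pixel-wise error of the decoder output) to `kl`; the emergent numerosity coding is then read out from representations like `z`.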

    Critical Learning Periods for Multisensory Integration in Deep Networks

    We show that the ability of a neural network to integrate information from diverse sources hinges critically on being exposed to properly correlated signals during the early phases of training. Interfering with the learning process during this initial stage can permanently impair the development of a skill, both in artificial and biological systems, where the phenomenon is known as a critical learning period. We show that critical periods arise from the complex and unstable early transient dynamics, which are decisive for the final performance of the trained system and its learned representations. This evidence challenges the view, engendered by analysis of wide and shallow networks, that the early learning dynamics of neural networks are simple, akin to those of a linear model. Indeed, we show that even deep linear networks exhibit critical learning periods for multi-source integration, while shallow networks do not. To better understand how the internal representations change in response to disturbances or sensory deficits, we introduce a new measure of source sensitivity, which allows us to track the inhibition and integration of sources during training. Our analysis of inhibition suggests cross-source reconstruction as a natural auxiliary training objective, and indeed we show that architectures trained with cross-sensor reconstruction objectives are remarkably more resilient to critical periods. Our findings suggest that the recent success of self-supervised multi-modal training compared to previous supervised efforts may be due in part to more robust learning dynamics and not solely to better architectures and/or more data.
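The abstract's precise definition of source sensitivity is not reproduced here, but the underlying idea can be illustrated as a perturbation measure: how much a model's output changes when one input source is ablated, normalised across sources. A toy two-source sketch (model, ablation-by-zeroing, and normalisation are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multisensory model: the output fuses two input "sources"
# through separate random linear maps and a shared nonlinearity.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(4, 3))

def model(x1, x2):
    return np.tanh(W1 @ x1 + W2 @ x2)

x1 = rng.normal(size=3)
x2 = rng.normal(size=3)
base = model(x1, x2)

# Sensitivity to a source: output displacement when that source is
# ablated (here: zeroed out), then normalised so the scores sum to 1.
d1 = np.linalg.norm(model(np.zeros(3), x2) - base)
d2 = np.linalg.norm(model(x1, np.zeros(3)) - base)
s1 = d1 / (d1 + d2)
s2 = d2 / (d1 + d2)
```

Tracking scores like `s1` and `s2` over training epochs would reveal when a source is being inhibited versus integrated, which is the role such a measure plays in the paper's analysis.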