
    An evaluation framework for data competitions in TEL

    This paper presents a study describing the development of an Evaluation Framework (EF) for data competitions in TEL. The study applies Group Concept Mapping (GCM) to empirically derive criteria, and indicators for those criteria, for evaluating software applications in TEL. A statistical analysis of the GCM data, combining multidimensional scaling and hierarchical clustering, identified six evaluation criteria: (1) Educational Innovation, (2) Usability, (3) Data, (4) Performance, (5) Privacy, and (6) Audience. Each criterion was operationalized through a set of indicators. The resulting EF was applied to the first data competition of the LinkedUp project and subsequently improved using the results of reviewers' interviews, which were analysed both qualitatively and quantitatively. The outcome of these efforts is a comprehensive EF that can be used for TEL data competitions and for the evaluation of TEL tools in general. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-11200-8_6. (EC/FP7/LinkedUp; EC/FP7/DURAAR)
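    The clustering step the abstract describes can be illustrated with a minimal sketch: in GCM, items that participants frequently sort together receive small pairwise distances, and agglomerative clustering then groups them into candidate criteria. The items and distance values below are purely hypothetical, and single linkage is just one common linkage choice; the paper does not specify its exact settings.

```python
# Minimal single-linkage agglomerative clustering, in the spirit of the
# GCM analysis described above. All data here are illustrative.

items = ["innovation", "usability", "data", "performance", "privacy", "audience"]

# dist[i][j]: 0.0 = always sorted together, 1.0 = never (hypothetical values)
dist = [
    [0.0, 0.8, 0.9, 0.7, 0.9, 0.6],
    [0.8, 0.0, 0.5, 0.4, 0.7, 0.9],
    [0.9, 0.5, 0.0, 0.6, 0.3, 0.8],
    [0.7, 0.4, 0.6, 0.0, 0.7, 0.9],
    [0.9, 0.7, 0.3, 0.7, 0.0, 0.8],
    [0.6, 0.9, 0.8, 0.9, 0.8, 0.0],
]

def single_linkage(dist, k):
    """Repeatedly merge the two closest clusters until only k remain.

    Cluster-to-cluster distance is the minimum item-to-item distance
    (single linkage).
    """
    clusters = [[i] for i in range(len(dist))]
    while len(clusters) > k:
        best = (None, None, float("inf"))
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(dist[i][j] for i in clusters[a] for j in clusters[b])
                if d < best[2]:
                    best = (a, b, d)
        a, b, _ = best
        clusters[a] += clusters.pop(b)  # merge b into a
    return clusters

for cluster in single_linkage(dist, 3):
    print([items[i] for i in cluster])
```

    In practice a full GCM analysis would first place items on a 2-D map via multidimensional scaling and cut the cluster tree at the level judged most interpretable, rather than fixing k in advance.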

    Awarding Innovation: An Assessment of the Digital Media and Learning Competition

    Increasing availability and accessibility of digital media have changed the ways in which young people learn, socialize, play, and engage in civic life. Seeking to understand how learning environments and institutions should transform to respond to these changes, the John D. and Catherine T. MacArthur Foundation (the Foundation) launched the Digital Media and Learning (DML) Initiative in 2005. This report highlights the successes and challenges of one component of the DML Initiative: the DML Competition (the Competition).

    Gender Discrimination and Evaluators’ Gender: Evidence from the Italian Academy

    Relying on a natural experiment consisting of 130 competitions for promotion to associate and full professor in Italian universities, we analyze whether gender discrimination is affected by the gender of evaluators. Taking advantage of the random assignment of evaluators to each competition, we examine each candidate's probability of success in relation to the committee's gender composition, controlling for candidates' scientific productivity and a number of individual characteristics. We find that female candidates are less likely to be promoted when the committee is composed exclusively of men, while the gender gap disappears when candidates are evaluated by a mixed-sex committee. Results are qualitatively similar across fields and types of competition. An analysis of candidates' decisions to withdraw from competition highlights that gender differences in preferences for competition play only a minor role in explaining gender discrimination. It also emerges that withdrawal decisions are not affected by the committee's gender composition, and therefore the gender discrimination is not related to self-fulfilling expectations.
    Keywords: gender discrimination, evaluators' gender, affirmative action, academic promotion

    Confirmatory factor analysis of the Test of Performance Strategies (TOPS) among adolescent athletes

    The aim of the present study was to examine the factorial validity of the Test of Performance Strategies (TOPS; Thomas et al., 1999) among adolescent athletes using confirmatory factor analysis. The TOPS was designed to assess eight psychological strategies used in competition (i.e. activation, automaticity, emotional control, goal-setting, imagery, negative thinking, relaxation and self-talk) and eight used in practice (the same strategies, except that negative thinking is replaced by attentional control). National-level athletes (n = 584) completed the 64-item TOPS during training camps. Fit indices provided partial support for the overall measurement model for the competition items (robust comparative fit index = 0.92, Tucker-Lewis index = 0.88, root mean square error of approximation = 0.05) but minimal support for the training items (robust comparative fit index = 0.86, Tucker-Lewis index = 0.81, root mean square error of approximation = 0.06). For the competition items, the automaticity, goal-setting, relaxation and self-talk scales showed good fit, whereas the activation, emotional control, imagery and negative thinking scales did not. For the practice items, the attentional control, emotional control, goal-setting, imagery and self-talk scales showed good fit, whereas the activation, automaticity and relaxation scales did not. Overall, it appears that the factorial validity of the TOPS for use with adolescents is questionable at present, and further development is required.
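    The fit indices quoted in this abstract (CFI, TLI and RMSEA) are all derived from the chi-square statistics of the fitted model and of a baseline "independence" model. The standard formulas can be sketched as follows; the chi-square and degrees-of-freedom values in the usage example are hypothetical, not the study's actual results, though n = 584 matches the reported sample size.

```python
import math

def fit_indices(chi2, df, chi2_null, df_null, n):
    """Standard approximate-fit indices for a CFA/SEM model.

    chi2, df           : chi-square and df of the fitted model
    chi2_null, df_null : chi-square and df of the independence (null) model
    n                  : sample size
    """
    # CFI: improvement in noncentrality over the null model
    cfi = 1 - max(chi2 - df, 0) / max(chi2_null - df_null, chi2 - df, 0)
    # TLI (non-normed fit index): penalizes model complexity via chi2/df ratios
    tli = ((chi2_null / df_null) - (chi2 / df)) / ((chi2_null / df_null) - 1)
    # RMSEA: misfit per degree of freedom, scaled by sample size
    rmsea = math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))
    return cfi, tli, rmsea

# Hypothetical statistics for illustration only
cfi, tli, rmsea = fit_indices(chi2=800, df=400, chi2_null=5000, df_null=496, n=584)
print(round(cfi, 2), round(tli, 2), round(rmsea, 2))  # → 0.91 0.89 0.04
```

    Conventional (though debated) cut-offs are CFI/TLI ≥ 0.90–0.95 and RMSEA ≤ 0.06–0.08, which is why the competition-model values above the cut-offs receive only "partial support" and the lower training-model values receive "minimal support".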