
    Perception-oriented methodology for robust motion estimation design

    Optimizing a motion estimator (ME) for picture rate conversion is challenging: there are many types of MEs and, within each type, many parameters, which makes subjective assessment of all the alternatives impractical. To solve this problem, we propose an automatic design methodology that selects 'well-performing' MEs from the multitude of options. Moreover, we show that applying this methodology yields subjectively pleasing quality of the upconverted video, even though our objective performance metrics are necessarily imperfect. This validation involved user ratings of 93 MEs on 3 video sequences; the 93 MEs were systematically selected from a total of 7000 ME alternatives. The proposed methodology may provide inspiration for similarly tough multi-dimensional optimization tasks with unreliable metrics.
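
    The abstract does not disclose the selection algorithm, but its core idea (pruning a huge ME design space with an unreliable objective proxy before subjective testing) can be sketched in a few lines of Python. This is a minimal illustration under invented assumptions: the parameter grid, the objective_score placeholder, and the shortlist spacing are hypothetical and not taken from the paper.

        import itertools
        import random

        # Hypothetical ME parameter grid; the paper's actual design space
        # (~7000 ME alternatives) is not specified in the abstract.
        PARAM_GRID = {
            "block_size": [8, 16, 32],
            "search_range": [8, 16, 24, 32],
            "penalty_weight": [0.0, 0.5, 1.0, 2.0],
            "smoothing": ["none", "spatial", "temporal"],
        }

        def objective_score(params):
            """Stand-in for an unreliable objective metric, e.g. interpolation
            error against reference frames. Deterministic noise here."""
            random.seed(hash(tuple(sorted(params.items()))))
            return random.random()

        # Enumerate every ME configuration in the grid.
        candidates = [dict(zip(PARAM_GRID, values))
                      for values in itertools.product(*PARAM_GRID.values())]

        # Rank by the objective proxy, then keep a systematically spaced
        # subset for subjective evaluation (the paper rated 93 of ~7000 MEs).
        ranked = sorted(candidates, key=objective_score)
        step = max(1, len(ranked) // 93)
        shortlist = ranked[::step][:93]

        print(f"{len(candidates)} candidates -> {len(shortlist)} for user rating")

    The spaced subsampling reflects one plausible reading of "systematically selected": an unreliable metric can still cover the design space evenly even if its exact ranking cannot be trusted, leaving the final ordering to the user ratings.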

    Ageing effects around the glass and melting transitions in poly(dimethylsiloxane) visualized by resistance measurements

    The process of ageing in rubbers requires monitoring over long periods (days to years). To do so in non-conducting rubbers, small amounts of carbon-black particles were dispersed in a fractal network through the rubber matrix, making the rubber conducting without modifying its properties. Continuous monitoring of the resistance reveals the structural changes around the glass and melting transitions, and especially details of the hysteresis and ageing processes. We illustrate the method for the semicrystalline polymer poly(dimethylsiloxane) (PDMS).
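
    As a toy illustration of how such resistance traces might be analyzed (this is not the authors' procedure, and the data below are invented), one can estimate a transition temperature from the extremum of dR/dT on heating and cooling sweeps and read the hysteresis off their difference:

        import numpy as np

        # Synthetic resistance-vs-temperature sweeps (invented): a smooth step
        # in R(T) models a transition, shifted between heating and cooling to
        # mimic hysteresis.
        T_heat = np.linspace(120.0, 320.0, 400)          # temperature in K
        T_cool = T_heat[::-1]

        def resistance(T, T_transition, width=5.0):
            """Smooth step in R(T) around a transition (arbitrary units)."""
            return 1.0 + 0.5 * np.tanh((T - T_transition) / width)

        R_heat = resistance(T_heat, T_transition=235.0)  # transition on heating
        R_cool = resistance(T_cool, T_transition=225.0)  # shifted on cooling

        def transition_temperature(T, R):
            """Estimate the transition as the temperature of maximal |dR/dT|."""
            dRdT = np.gradient(R, T)
            return T[np.argmax(np.abs(dRdT))]

        T_up = transition_temperature(T_heat, R_heat)
        T_down = transition_temperature(T_cool, R_cool)
        print(f"heating: {T_up:.1f} K, cooling: {T_down:.1f} K, "
              f"hysteresis: {abs(T_up - T_down):.1f} K")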

    The Wheel of Competency Assessment: Presenting Quality Criteria for Competency Assessment Programs

    Baartman, L. K. J., Bastiaens, T. J., Kirschner, P. A., & Van der Vleuten, C. P. M. (2006). The wheel of competency assessment: Presenting quality criteria for Competency Assessment Programmes. Studies in Educational Evaluation, 32, 153-170.

    Instruction and learning are increasingly based on competencies, prompting a call for assessment methods that adequately determine competency acquisition. Because competency assessment is such a complex endeavor, a single assessment method is unlikely to be sufficient. This necessitates Competency Assessment Programs (CAPs), which combine different methods, ranging from classical tests to recently developed assessment methods. However, many of the quality criteria used for classical tests cannot be applied to CAPs, since CAPs use a combination of different methods rather than just one. This article presents a framework of 10 quality criteria for CAPs, validated by an expert focus group. The results confirm 9 of the 10 criteria and expand the framework with 3 additional criteria. Based on these results, an adapted and layered new framework is presented.

    Teachers’ opinions on quality criteria for Competency Assessment Programs

    Quality control policies towards Dutch vocational schools have changed dramatically because the government questioned examination quality. Schools must now demonstrate assessment quality to a new Examination Quality Center. Since teachers often design assessments, they must be involved in quality issues. This study therefore explores teachers’ opinions on criteria for evaluating assessment quality. Pre-vocational and vocational teachers (N=211) responded to a questionnaire. Contrary to expectations, results show that teachers deem classical and competency-based quality criteria equally important. Vocational teachers gave higher importance scores than pre-vocational teachers, possibly due to the pressure they experience to improve the quality of their assessments.

    Determining the quality of Competence Assessment Programs: A self-evaluation procedure

    Baartman, L. K. J., Prins, F. J., Kirschner, P. A., & Van der Vleuten, C. P. M. (2007). Determining the quality of Competence Assessment Programs: A self-evaluation procedure. Studies in Educational Evaluation, 33, 258-281.

    As assessment methods change, the way their quality is determined needs to change accordingly. This article argues for the use of Competence Assessment Programs (CAPs): combinations of traditional tests and new assessment methods involving both formative and summative assessment. To assist schools in evaluating their CAPs, a self-evaluation procedure was developed, based on 12 quality criteria for CAPs developed in earlier studies. Self-evaluation was chosen because it is increasingly used as an alternative to external evaluation. The CAP self-evaluation is carried out by a group of functionaries from the same school and comprises individual self-evaluations and a group interview. The CAP is rated on the 12 quality criteria, and supporting evidence is requested for each rating. In this study, three functionaries from each of eight schools (N = 24) evaluated their CAP using the self-evaluation procedure. Results show that the group interview was very important, as it assembles the different perspectives on the CAP into an overall picture of its quality. Schools seem to rely mainly on personal experience to support their ratings and need support in the process of carrying out a self-evaluation.

    Evaluating assessment quality in competence-based education: A qualitative comparison of two frameworks

    Because learning and instruction are increasingly competence-based, the call for assessment methods that adequately determine competence is growing. A single assessment method is not sufficient to determine competence acquisition. This article argues for Competence Assessment Programmes (CAPs), consisting of a combination of different assessment methods, including both traditional and new forms of assessment. To develop and evaluate CAPs, criteria to determine their quality are needed. Just as CAPs combine traditional and new forms of assessment, the criteria used to evaluate CAP quality should be derived from both psychometrics and edumetrics. A framework of 10 quality criteria for CAPs is presented and compared to Messick's framework of construct validity. Results show that the 10-criterion framework partly overlaps with Messick's but adds some important new criteria, which deserve a more prominent place in quality control in competence-based education.

    LEFT BEHIND: MONITORING THE SOCIAL INCLUSION OF YOUNG AUSTRALIANS WITH SELF-REPORTED LONG TERM HEALTH CONDITIONS, IMPAIRMENTS OR DISABILITIES 2001-2009

    Adolescents and young adults with disabilities are at heightened risk of social exclusion. Exclusion leads to poor outcomes in adulthood, which in turn affect individuals’ health and wellbeing and that of their families and society, through loss of productive engagement in their communities. Australia’s Social Inclusion Indicators Framework provides indices in the domains of participation, resources, and multiple and entrenched disadvantage to monitor and report on social inclusion. The Household Income and Labour Dynamics in Australia survey provides data over time on Australian households. Using these tools, we report here on the extent of social inclusion/exclusion of young disabled Australians over the past decade. Relative to their non-disabled peers, young disabled Australians are significantly less likely to do well on participation indicators.

    Driving lesson or driving test? A metaphor to help faculty separate feedback from assessment

    Although there is consensus in the medical education world that feedback is an important and effective tool to support experiential workplace-based learning, learners tend to avoid the feedback associated with direct observation because they perceive it as a high-stakes evaluation with significant consequences for their future. The perceived dominance of the summative assessment paradigm throughout medical education reduces learners’ willingness to seek feedback and encourages supervisors to conflate feedback with the provision of ‘objective’ grades or pass/fail marks. This eye-opener article argues that the provision and reception of effective feedback by clinical supervisors and their learners depend on both parties’ awareness of the important distinction between feedback used in coaching towards growth and development (assessment for learning) and feedback used in reaching a high-stakes judgement on the learner’s competence and fitness for practice (assessment of learning). Using driving lessons and the driving test as a metaphor for feedback and assessment helps supervisors and learners understand this crucial difference and act upon it. It is the supervisor’s responsibility to ensure that supervisor and learner reach a clear mutual understanding of the purpose of each interaction (i.e. feedback or assessment). To allow supervisors to use the driving lesson versus driving test metaphor in their interactions with learners, it should be included in faculty development initiatives, along with a discussion of why separating feedback from assessment matters, to promote a feedback culture of growth and to support programmatic assessment of competence.