4 research outputs found

    Teachers as designers of formative e-rubrics: a case study on the introduction and validation of go/no-go criteria

    [EN] Information and Communications Technologies (ICTs) offer teachers new roles for improving learning processes, and learning rubrics have become commonplace in this regard. However, the design of these rubrics has focused mainly on scoring (summative rubrics), whereas formative rubrics have received significantly less attention. ICTs make electronic rubrics (e-rubrics) possible, enabling dynamic and interactive functionalities that facilitate adaptable and adaptive delivery of content. In this paper, we present a case study that examines three characteristics that make formative rubrics more adaptable and adaptive: criteria dichotomization, weighted evaluation criteria, and go/no-go criteria. A new approach to the design of formative rubrics is introduced that takes advantage of ICTs, combining dichotomization and weighted criteria with the use of go/no-go criteria. The approach is discussed as a method to better guide the learner while adjusting to the student's assimilation pace. Two types of go/no-go criteria (hard and soft) are studied and experimentally validated in a computer-aided design assessment context. Bland-Altman plots are constructed to further illuminate the validation results.

    This work was partially supported by Grant DPI2017-84526-R (MINECO/AEI/FEDER, UE), Project "CAL-MBE, Implementation and validation of a theoretical CAD quality model in a Model-Based Enterprise (MBE) context."

    Company, P.; Otey, J.; Agost, M.; Contero, M.; Camba, J. (2019). Teachers as designers of formative e-rubrics: a case study on the introduction and validation of go/no-go criteria. Universal Access in the Information Society. 18(3):675-688. https://doi.org/10.1007/s10209-019-00686-7
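    The distinction the abstract draws between hard and soft go/no-go criteria can be illustrated with a minimal sketch. The function, field names, and scoring rules below are assumptions for illustration only, not the paper's implementation: a hard gate blocks the whole assessment when failed, a soft gate is flagged as feedback while weighted scoring continues.

```python
# Hypothetical sketch of a weighted rubric with hard and soft go/no-go criteria.
# All names and scoring rules here are illustrative assumptions.

def score_rubric(criteria, marks):
    """criteria: list of dicts with 'name', 'weight', and 'kind'
    ('scored', 'hard_gate', or 'soft_gate'); marks: name -> float in [0, 1].
    Returns (normalized weighted score, list of feedback notes)."""
    total, weight_sum, warnings = 0.0, 0.0, []
    for c in criteria:
        mark = marks[c["name"]]
        if c["kind"] == "hard_gate" and mark < 1.0:
            # Hard go/no-go: failing stops the assessment outright.
            return 0.0, [f"hard gate failed: {c['name']}"]
        if c["kind"] == "soft_gate" and mark < 1.0:
            # Soft go/no-go: recorded as formative feedback, scoring continues.
            warnings.append(f"soft gate not met: {c['name']}")
        total += c["weight"] * mark
        weight_sum += c["weight"]
    return total / weight_sum, warnings

criteria = [
    {"name": "model rebuilds without errors", "weight": 2.0, "kind": "hard_gate"},
    {"name": "datums referenced, not faces",  "weight": 1.0, "kind": "soft_gate"},
    {"name": "feature tree organization",     "weight": 3.0, "kind": "scored"},
]
score, notes = score_rubric(criteria, {
    "model rebuilds without errors": 1.0,
    "datums referenced, not faces": 0.5,
    "feature tree organization": 0.8,
})
```

    A soft-gate failure here lowers the score and produces a note, while setting the hard-gate mark below 1.0 would zero the assessment regardless of the other criteria.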

    Web-based system for adaptable rubrics: case study on CAD assessment

    [EN] This paper describes the implementation and testing of our concept of adaptable rubrics, defined as analytical rubrics that arrange assessment criteria at multiple levels that can be expanded on demand. Because of their adaptable nature, these rubrics cannot be implemented in paper formats, nor are they supported by current Learning Management Systems (LMS). The main contribution of this work is the adaptable capability of different levels of detail, which can be expanded for each rubric criterion as needed. Our rubrics platform provides specialized and intuitive tools to create and modify rubrics, as well as to manage metadata to support learning analytics. As an example of a practical assessment situation, a case study on Mechanical Computer Aided Design (MCAD) systems training is presented. The validation process in this scenario proved the effectiveness of our adaptable rubric platform for supporting formative assessment in a multifaceted and complex field such as MCAD. The system also showed the potential of collecting user interaction metadata, which can be used to analyze the evaluation process and guide further improvements in the teaching strategy.

    This work was supported by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund, through the ANNOTA project (Ref. TIN2013-46036-C3-1-R).

    Company, P.; Contero, M.; Otey, J.; Camba, J.; Agost, M.; Pérez Lopez, DC. (2017). Web-based system for adaptable rubrics: case study on CAD assessment. Journal of Educational Technology and Society. 20(3):24-41. http://hdl.handle.net/10251/136958
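    The expand-on-demand structure the abstract describes can be sketched as a tree of criteria whose sub-criteria stay folded until the assessor opens them. The class and field names below are illustrative assumptions, not the platform's actual data model.

```python
# Illustrative sketch of an adaptable rubric: criteria form a tree whose
# lower levels of detail stay folded until expanded on demand.
# Class and field names are assumptions for illustration.

class Criterion:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []
        self.expanded = False  # folded by default; detail hidden

    def toggle(self):
        self.expanded = not self.expanded

    def render(self, depth=0):
        """Return only the currently visible lines of the rubric."""
        lines = ["  " * depth + self.label]
        if self.expanded:
            for child in self.children:
                lines.extend(child.render(depth + 1))
        return lines

root = Criterion("Model quality", [
    Criterion("Geometric accuracy"),
    Criterion("Design intent", [Criterion("Feature order"),
                                Criterion("Constraints")]),
])
print("\n".join(root.render()))  # collapsed: only the top-level criterion
root.toggle()                    # expand one level on demand
```

    After the toggle, rendering shows the second-level criteria while their own sub-criteria remain folded, which is the behavior a paper rubric cannot offer.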

    Taking the customer into account in collaborative design

    Get PDF
    This article describes the improvement of a model of collaborative design for the ceramic industry. A new stakeholder playing a crucial role, the customer, is now included in the design process. Specifically, we present a pilot validation study for the framework that aims to analyse how the environment, experiences and reference criteria of different types of customers (commercial dealers, final users, architects and interior designers, etc.) can affect their preferences. Information about these customer preferences could be very useful for designers during the early stages of product development. A multidisciplinary approach to the problem can introduce substantial improvements in defining a truly collaborative design chain.

    Implementation of Adaptable Rubrics for CAD Model Quality Formative Assessment

    [EN] Evaluation rubrics are helpful as formative tools for conveying quality criteria from the beginning of the training period of future mechanical CAD users. But rubrics should provide feedback and evolve dynamically to adapt to different learning paces. Computer-Assisted Assessment (CAA) tools can easily provide automatic feedback, although they are sometimes linked to general-purpose educational tools, which may be complicated, costly and non-customizable, thus preventing teachers from adopting them. Hence, our main concern is finding CAA tools that are easy to use and compatible with adaptable rubrics. In this paper, a prototype of a spreadsheet-based adaptable rubric is designed and tested in three experiments, which involved up to 43 mechanical engineering degree students. Results are analyzed to conclude that spreadsheet forms are impractical, as their implementation requires expertise in spreadsheet programming, and extracting information from the spreadsheet rubrics is time-consuming. But the main conclusion from this proof of concept is that adaptable rubrics can be implemented and allow students to evolve at different paces. Rubrics structured around different dimensions that are progressively developed in increasing levels of detail can be efficiently implemented as adaptable rubrics, by simply folding the low-level details and allowing the user to freely unfold them on request.

    This work was partially funded by the Spanish "Ministerio de Economía y Competitividad; Proyectos I+D+I Convocatoria RETOS 2013", project TIN2013-46036-C3-1-R (ANNOTA: the application of 3D annotations in the industrial context of the model-based enterprise and in technical training).

    Company, P.; Otey, J.; Contero, M.; Agost, M.; Almiñana, A. (2016). Implementation of Adaptable Rubrics for CAD Model Quality Formative Assessment. International Journal of Engineering Education. 32(2A):749-761. http://hdl.handle.net/10251/136495