6 research outputs found

    Auto-Grading for 3D Modeling Assignments in MOOCs

    Full text link
    Bottlenecks such as the latency in correcting assignments and returning grades in Massive Open Online Courses (MOOCs) can reduce learners' interest. In this proposal for an auto-grading system, we present a method to simplify grading for an online course focused on 3D modeling, thus addressing a critical component of the MOOC ecosystem that affects learner engagement. Our approach involves a live auto-grader, capable of attaching descriptive labels to assignments, which will be deployed for evaluating submissions. This paper presents a brief overview of the auto-grading system and the reasoning behind its inception. Preliminary internal tests show that our system produces results comparable to those of human graders.
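    The abstract does not describe how auto-grades were checked against human grades; purely as an illustration, a minimal sketch of one possible agreement check, using invented scores and a hypothetical agreement_rate helper, might look like this:

```python
# Hypothetical sketch: measuring agreement between auto-grader output and
# human grades. Neither the data nor the tolerance criterion comes from
# the paper; both are assumptions for illustration only.

def agreement_rate(auto_scores, human_scores, tolerance=5.0):
    """Fraction of submissions whose auto-grade falls within `tolerance`
    points of the human grade (a 0-100 scale is assumed)."""
    pairs = list(zip(auto_scores, human_scores))
    within = sum(1 for a, h in pairs if abs(a - h) <= tolerance)
    return within / len(pairs)

# Made-up grades for five submissions
auto = [88, 72, 95, 60, 81]
human = [90, 70, 97, 55, 80]
print(f"Agreement within 5 points: {agreement_rate(auto, human):.0%}")
```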

    Comparison of the effect of different ways of aggregating continuous assessment grades into the final grade

    Get PDF
    [EN] We present the results of comparing various ways of calculating students' final grades from continuous assessment grades. Traditionally the weighted arithmetic mean has been used, and we compare this method with several alternatives: the arithmetic mean, the geometric mean, the harmonic mean, and the product of the fraction achieved in each activity. Our objective is to verify whether any of the alternative methods agrees better with the student performance expected by the teacher of the subject, while further discriminating the grade between high and low learning outcomes and reducing the number of opportunistic passes. This work was partially funded by the Universitat Politècnica de València (PIME/2016/A/027/A), "La evaluación pareada como metodología para la evaluación del pensamiento crítico de los alumnos". Marin-Garcia, JA.; Maheut, J.; Garcia Sabater, JJ. (2017). Comparison of different ways of computing grades in continuous assessment into the final grade. Working Papers on Operations Management. 8(SP):1-12. https://doi.org/10.4995/wpom.v8i0.7242
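    For concreteness, the aggregation methods compared above can be sketched in Python; the grades, weights, and 0-10 scale below are invented examples rather than data from the study:

```python
# Illustrative sketch of the grade-aggregation methods compared in the paper,
# applied to continuous-assessment grades on a 0-10 scale.
# The grades and weights are invented examples.
from math import prod

grades  = [7.0, 4.0, 9.0, 6.0]   # activity grades (0-10)
weights = [0.3, 0.2, 0.3, 0.2]   # used only by the weighted mean

def weighted_mean(g, w):
    return sum(gi * wi for gi, wi in zip(g, w)) / sum(w)

def arithmetic_mean(g):
    return sum(g) / len(g)

def geometric_mean(g):
    return prod(g) ** (1 / len(g))

def harmonic_mean(g):
    return len(g) / sum(1 / gi for gi in g)   # undefined if any grade is 0

def product_of_fractions(g, max_grade=10.0):
    # product of the fraction achieved in each activity, rescaled to 0-10
    return prod(gi / max_grade for gi in g) * max_grade

for name, value in [("weighted mean", weighted_mean(grades, weights)),
                    ("arithmetic mean", arithmetic_mean(grades)),
                    ("geometric mean", geometric_mean(grades)),
                    ("harmonic mean", harmonic_mean(grades)),
                    ("product of fractions", product_of_fractions(grades))]:
    print(f"{name:>20}: {value:.2f}")
```

    The geometric mean, harmonic mean, and product all penalize a low grade in a single activity more strongly than the (weighted) arithmetic mean, which is what makes them candidates for discriminating between learning outcomes and reducing opportunistic passes.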

    Assessment of parametric assembly models based on CAD quality dimensions

    Full text link
    [EN] An approach to conveying CAD quality-oriented strategies to beginning users who create bottom-up assemblies is described. The work builds on previous efforts in the evaluation of single-part, history-based, feature-based parametric modeling by defining, testing, and validating a set of quality dimensions that can be applied to MCAD assembly assessment. The process of redefining and adapting the dimension descriptors and achievement levels of part rubrics to make them applicable to assemblies is addressed, and then the results of two experimental studies designed to analyze the inter-rater reliability of this approach to assembly evaluation are reported. Results suggest the mechanism is reliable for providing an objective assessment of assembly models. Limitations for the formative self-evaluation of CAD assembly skills are also identified. This work was partially supported by the Spanish grant DPI2017-84526-R (MINECO/AEI/FEDER, UE), project CAL-MBE, "Implementation and validation of a theoretical CAD quality model in a Model-Based Enterprise (MBE) context", and by the ANNOTA2 project funded by Universitat Politècnica de València. Otey, J.; Company, P.; Contero, M.; Camba, JD. (2019). Assessment of parametric assembly models based on CAD quality dimensions. Computer-Aided Design and Applications. 16(4):628-653. https://doi.org/10.14733/cadaps.2019.628-653
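    The abstract does not specify which inter-rater reliability statistic was used; as a hedged illustration only, Cohen's kappa is one common choice for measuring agreement on rubric achievement levels between two raters. The ratings below are invented:

```python
# Sketch of Cohen's kappa for two raters scoring the same assembly models
# on an ordinal rubric (achievement levels 1-4).  The ratings are invented;
# the paper does not publish its data or name its reliability statistic.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# Achievement levels assigned by two raters to ten assembly models
rater_a = [4, 3, 3, 2, 4, 1, 2, 3, 4, 2]
rater_b = [4, 3, 2, 2, 4, 1, 2, 3, 3, 2]
print(f"Cohen's kappa: {cohens_kappa(rater_a, rater_b):.2f}")   # ~0.72
```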

    Technological Proposals for the Automatic Correction of 3D Modeling Exercises

    Get PDF
    Nowadays, there are several well-established procedures and other proposals [1] for the self-assessment of exercises or exams in subjects that are evaluated through numerical exercises: intermediate or final values are compared, and an automatic self-assessment grade is assigned. This classic correction procedure performed by the teacher can be extended [2]. The automatic assessment of text-based exercises is more complicated because, although the appearance of certain keywords or their synonyms could offer a possible approach to mechanical assessment, the real difficulty lies in interpreting their meaning [3]. In the case of 2D graphic exercises, which are typical of technical drawing, the problem is very different, since there are no alphanumeric strings to compare; similarities between images and the comparison of primitive entities (vector objects) may be possible ways of evaluation [4]. The problem is more complicated still when we want to evaluate 3D models mechanically. This article presents a compilation of possible procedures to use in building a self-assessment tool for industrial solid modelling exercises, that is, for mechanical parts [5]. In these cases, certain parameters such as volumes, surfaces, centres of gravity, or moments of inertia can serve as a first approximation to their correction [6]. These evaluations could continue with the analysis of the constructive operations present in the modelling of the object, such as solid features, shells, holes, or threads, all of them included in the modelling tree or list of operations. A utility that helps with the correction of 3D modelling exercises would be of great interest, since it would bring effectiveness and agility to the evaluation process, as well as greater objectivity, by using a computer system that isolates similarity factors and automatically applies measurable evaluation rules.
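    As a rough sketch of the first-pass check described above (comparing global mass properties of the submitted solid with a reference model), the snippet below uses hypothetical property values and tolerances; a real tool would query these quantities from the CAD system's API:

```python
# Sketch: compare global mass properties of a student's model against a
# reference solution.  Property names, values, and tolerances are hypothetical
# placeholders, not taken from the article.
import math

reference  = {"volume": 125_000.0, "surface": 18_400.0,
              "cog": (42.1, 0.0, 15.7), "ixx": 3.20e6}
submission = {"volume": 124_300.0, "surface": 18_550.0,
              "cog": (41.8, 0.1, 15.9), "ixx": 3.15e6}

def properties_match(ref, sub, rel_tol=0.02):
    """True when every scalar property is within rel_tol of the reference and
    the centre of gravity lies close to the reference position."""
    for key in ("volume", "surface", "ixx"):
        if not math.isclose(sub[key], ref[key], rel_tol=rel_tol):
            return False
    # allow a CoG shift of up to rel_tol times the model's characteristic length
    characteristic_length = ref["volume"] ** (1 / 3)
    return math.dist(ref["cog"], sub["cog"]) <= rel_tol * characteristic_length

print("first-pass check:", "pass" if properties_match(reference, submission) else "review")
```

    A check like this only approximates correctness (two different parts can share the same volume and inertia), which is why the article also points to analysing the feature tree as a complementary step.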

    Investigating a learning analytics interface for automatically marked programming assessments

    Get PDF
    Student numbers at the University of Cape Town continue to grow, with an increasing number of students enrolling in programming courses. With this increase in numbers, it becomes difficult for lecturers to provide individualised feedback on programming assessments submitted by students. To address this, the university utilises an automatic marking tool for marking assignments and providing feedback. Students can submit assignments and receive instant feedback on the marks allocated or the errors in their submissions. This tool saves time, as lecturers spend less time on marking, and its instant feedback on submitted code gives students an opportunity to correct errors in their code. However, most students have identified areas where the interface between the automatic marker and the submitted programs could be improved. This study investigates the potential of a learning analytics inspired dashboard interface to improve the feedback provided to students on their submitted programs. A focus group consisting of computer science class representatives was organised, and feedback from this focus group was used to create dashboard mock-ups. These mock-ups were then used to develop high-fidelity learning analytics inspired dashboard prototypes that were tested by first-year computer science students to determine whether the interfaces were useful and usable. The prototypes were designed using the Python programming language and the Plotly Python library. User-centred design methods were employed by eliciting constant feedback from students during the prototyping and design of the learning analytics inspired interface. A usability study was conducted in which students were required to use the dashboard and then provide feedback on its use by completing a questionnaire. The questionnaire was designed using Nielsen's Usability Heuristics and AttrakDiff. These methods also assisted in the evaluation of the dashboard design. The research showed that students considered a learning analytics dashboard an essential tool that could help them as they learn to program. Students found the dashboard useful and had a clear understanding of the specific features they would like to see implemented on a learning analytics inspired dashboard used by the automatic marking tool. Some of the specific features mentioned by students include overall performance, the duly performed (DP) requirements needed to qualify for exams, highest score, assignment due dates, class average score, and most common errors. This research hopes to provide insight into how automatically marked programming assessments could be displayed to students in a way that supports learning.
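    Purely as an illustration (the abstract names Python and Plotly but does not publish the prototype code), the sketch below shows one panel such a dashboard might contain: a student's auto-marked scores plotted against the class average. The data and the panel choice are invented:

```python
# Minimal dashboard-panel sketch using Plotly, as named in the abstract.
# Assignment names and scores are invented placeholders.
import plotly.graph_objects as go

assignments   = ["A1", "A2", "A3", "A4"]
student_score = [78, 65, 90, 72]   # hypothetical marks from the automatic marker
class_average = [70, 68, 82, 75]

fig = go.Figure()
fig.add_trace(go.Bar(x=assignments, y=student_score, name="Your score"))
fig.add_trace(go.Bar(x=assignments, y=class_average, name="Class average"))
fig.update_layout(barmode="group",
                  title="Automatically marked assignments: your scores vs. class average",
                  yaxis_title="Score (%)")
fig.show()
```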