7 research outputs found

    A review of techniques in automatic programming assessment for practical skill test

    Computer programming is a challenging competency that requires several cognitive skills and extensive practice. The growing number of students enrolled in computing and engineering courses, together with rising failure and drop-out rates in programming subjects, motivates this research. Given the importance of this skill, this paper surveys the current landscape of assisted assessment for hands-on practical programming, focusing on competency-based assessment. Bloom's Taxonomy is used as the competency-based assessment framework. The review shows that several automatic assessments of programming skills exist to date; however, no common grading scheme is applied. Further research is therefore required to propose an automatic assessment that grades student achievement against a learning taxonomy such as Bloom's cognitive competency model.

    Automatic Assessment of Learning Outcomes as a New Paradigm in Teaching a Programming Course: Engineering in Society 5.0

    Programming education is going through a transition from a content model to a learning-outcomes model, together with the integration of technology, where automatic assessment tools support students in practice; this makes learning more inclusive by proposing a transversal approach so that the necessary programming skills are achieved. In this sense, this paper presents a strategy that evaluates source code and analyzes it using software metrics to identify students' learning outcomes in a programming course. The strategy integrates an automatic source-code evaluation tool, which allowed us to examine how an evaluation-based approach supports the learning process, time, and impact in a computer programming course. The results show that the strategy eases the transition of a programming course to a learning-outcomes model and reduces evaluation time without affecting students' grades compared with the traditional approach. Finally, it is worth highlighting that strategies integrating tools that support the teaching, learning, and evaluation process in programming courses have a positive impact on academic training and decision-making, seeking to improve on students' weaknesses through analysis of the outcomes obtained.
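    A minimal sketch of what metric-based source-code evaluation can look like (an illustration only, not the tool used in the paper): computing two common software metrics for a Python submission using only the standard library.

    ```python
    # Illustrative metric-based evaluation of a student submission:
    # non-blank line count plus a simple cyclomatic-complexity estimate.
    import ast

    def count_metrics(source: str) -> dict:
        """Return line count and a simple cyclomatic-complexity estimate."""
        tree = ast.parse(source)
        # Cyclomatic complexity starts at 1 and grows with each branch point.
        complexity = 1
        for node in ast.walk(tree):
            if isinstance(node, (ast.If, ast.For, ast.While, ast.BoolOp)):
                complexity += 1
        lines = len([ln for ln in source.splitlines() if ln.strip()])
        return {"loc": lines, "cyclomatic": complexity}

    student_code = """
    def grade(score):
        if score >= 90:
            return "A"
        elif score >= 70:
            return "B"
        return "C"
    """
    print(count_metrics(student_code))  # -> {'loc': 6, 'cyclomatic': 3}
    ```

    Metrics like these can be computed automatically on every submission, which is the kind of evaluation-time saving the abstract reports.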

    An approach for evaluating the visual design of mobile applications created with block-based programming languages

    Computing-related competencies are necessary to participate in society, which has motivated the teaching of computing in basic education. Computing is frequently taught through open-ended mobile-application programming activities with visual languages. In this context, learning is assessed based on the code artifact created by the student, identifying whether the concepts covered were applied correctly. Although several approaches for evaluating visual-language code already exist, no approaches were found that include a detailed evaluation of the user interface (UI) design of mobile applications. Thus, the goal of this work is to develop an approach for evaluating the visual design of mobile applications developed with block-based programming languages. The approach is instantiated as a rubric for evaluating the UI design of applications created with the App Inventor environment and automated by evolving the CodeMaster web tool. The approach is statistically evaluated for reliability and validity using applications developed with App Inventor, and the results indicate that it is reliable and valid. By making this approach available, we hope to facilitate the evaluation of the UI design of mobile applications created with App Inventor, supporting the teaching of these concepts in basic education.

    Identifying evidences of computer programming skills through automatic source code evaluation

    Advisor: Roberto Pereira. Co-advisor: Eleandro Maschio. Doctoral thesis - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 27/03/2020. Includes references: p. 98-106. Area of concentration: Computer Science.
    Abstract: This thesis is contextualized in the teaching of computer programming in Computing courses and investigates aspects and strategies for automatic and continuous evaluation of student-developed source code. The state of the art was identified through a systematic literature review, which revealed that previous research tends to perform evaluations based on technical aspects of source code, such as functional correctness assessment and error detection. Skills-based assessments, in turn, are less explored, although they have the potential to provide details of skills represented by high-level concepts, such as conditionals and repetition structures. A method for automatic identification of learning evidence is then proposed as a skills-based approach to the automatic evaluation of programming source code. The method is characterized by implementing different strategies for source-code evaluation, identifying evidence of programming skills, and representing these skills in a student model. Experiments conducted in controlled scenarios (testing datasets) showed that automatic source-code evaluation strategies are viable.
    Experiments conducted in real scenarios (student-made source code) produced results similar to the controlled scenarios; however, implementation-related limitations were revealed for some strategies, such as vulnerabilities to unexpected syntax and flaws in regular expressions. A skill set was selected to compose the student model, represented by a Dynamic Bayesian Network. Experiments showed that feeding the model with evidence resulting from the automatic evaluation of source code allows monitoring students' skill progress. Finally, the automatic strategies coupled with the student model's capabilities enabled a demonstration of skills-based assessment, which proved to be a valuable resource for identifying source code that is functionally correct but conceptually incorrect: a program that is functionally correct, returning expected results for specific inputs, but was built with erroneous concepts and resources. Keywords: Computer Programming, Automatic Evaluation, Skills-Based Assessment
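    As a hedged illustration of the idea (not the thesis implementation): evidence of high-level concepts can be collected by parsing a submission into an AST and recording which constructs it actually uses, avoiding the regular-expression fragility the abstract reports.

    ```python
    # Illustrative skills-evidence detection: map AST node types to
    # high-level programming concepts and report which ones appear.
    import ast

    CONCEPTS = {
        "conditional": (ast.If,),
        "loop": (ast.For, ast.While),
        "function_definition": (ast.FunctionDef,),
    }

    def collect_evidence(source: str) -> set:
        """Return the set of concepts for which the code provides evidence."""
        tree = ast.parse(source)
        found = set()
        for node in ast.walk(tree):
            for concept, node_types in CONCEPTS.items():
                if isinstance(node, node_types):
                    found.add(concept)
        return found

    submission = """
    def total(values):
        s = 0
        for v in values:
            s = s + v
        return s
    """
    print(sorted(collect_evidence(submission)))
    # -> ['function_definition', 'loop']
    ```

    This also illustrates the "functionally correct but conceptually incorrect" case: a submission using `sum(values)` would pass output tests yet provide no evidence of the "loop" concept the exercise targets.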

    Integration of Virtual Programming Lab in a process of teaching programming EduScrum based

    Programming teaching is a key factor for technological evolution. The effective way to learn to program is by programming and hard practice, and thus feedback is a crucial factor in the success and flow of the process. This work aims to analyse the potential use of the Virtual Programming Lab (VPL) in the teaching of programming in higher education. It also intends to verify whether, with VPL, students' learning can be made more effective and autonomous while reducing the volume of assessment work for teachers. Experiments were carried out with VPL in the practical-laboratory classes of an introductory programming curricular unit at a higher education institution. The results, supported by survey responses, point to the validity of the model.

    A flexible dynamic system for automatic grading of programming exercises

    Research on programs capable of automatically grading source code has long been of great interest to many researchers. Automatic Grading Systems (AGSs) were created to support programming courses and gained popularity due to their ability to assess, evaluate, grade, and manage students' programming exercises, saving teachers from this manual task. This paper discusses semantic analysis techniques and how they can be applied to improve the validation and assessment process of an AGS. We believe that the more flexible the assessment of results, the more precise the source-code grading and the better the feedback provided (improving the students' learning process). In this paper, we introduce a generic model to obtain a more flexible and fair grading process, closer to a manual one: specifically, an extension of the traditional dynamic analysis concept that compares the output produced by a program under assessment with the expected output at a semantic level. To implement our model, we propose a Flexible Dynamic Analyzer able to perform semantic-similarity analysis based on our Output Semantic Similarity Language (OSSL), which, besides specifying the output structure, allows defining how to mark partially correct answers. Our proposal is compliant with the Learning Objects standard.
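    The OSSL notation itself is not given in the abstract, so as a generic stand-in here is a minimal sketch of output comparison at a semantic level: normalize whitespace and case, then score token overlap so a partially correct answer earns partial credit instead of failing strict string equality.

    ```python
    # Illustrative semantic output comparison for partial credit:
    # compare expected and produced outputs as bags of normalized tokens.
    from collections import Counter

    def semantic_score(expected: str, produced: str) -> float:
        """Fraction of expected tokens that appear in the produced output."""
        exp = Counter(expected.lower().split())
        got = Counter(produced.lower().split())
        overlap = sum((exp & got).values())  # multiset intersection
        total = sum(exp.values())
        return overlap / total if total else 1.0

    # Strict equality would give this answer zero; token-level comparison
    # awards credit for the correct value despite formatting differences.
    print(round(semantic_score("sum = 10", "The Sum is 10"), 2))  # -> 0.67
    ```

    A real analyzer would also weight tokens and respect the output structure the grading language specifies; this sketch only shows why semantic comparison grades closer to a manual marker than exact matching does.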