
    ItsSQL: Intelligent Tutoring System for SQL

    SQL is a central component of any database course. Despite the small number of SQL commands, students struggle to put the concepts into practice. To overcome this challenge, we developed an intelligent tutoring system (ITS) that guides the learning process with minimal effort from the lecturer. Other systems often give only basic feedback (correct or incorrect) or require hundreds of instance-specific rules defined by a lecturer. In contrast, our system can provide individual feedback based on a semi-automatically and intelligently growing pool of reference solutions, i.e., sensible approaches. Moreover, we introduced the concept of good and bad reference solutions. The system was developed and evaluated in three steps based on Design Science research guidelines. The results of the study demonstrate that providing multiple reference solutions, supported by harmonization, is useful for delivering individual, real-time feedback and thus improving the learning process for students.
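
    A minimal sketch of the core idea, not the authors' implementation: a student query is executed against a small sample SQLite database and its result set is compared with a pool of reference solutions. The schema, the REFERENCE_POOL contents, and the helper names (run_query, grade_against_pool) are illustrative assumptions.

```python
# Sketch only: grade a student query by comparing its result set with a pool
# of reference solutions on a sample SQLite database. All names are assumed.
import sqlite3

REFERENCE_POOL = [
    # "good" reference solutions collected for this exercise (illustrative)
    "SELECT name FROM student WHERE credits > 100",
    "SELECT s.name FROM student AS s WHERE s.credits > 100",
]

def run_query(conn, sql):
    """Execute a query and return its rows as a set (order-insensitive)."""
    try:
        return set(conn.execute(sql).fetchall()), None
    except sqlite3.Error as exc:
        return None, str(exc)

def grade_against_pool(conn, student_sql):
    """Return feedback comparing the student query with the reference pool."""
    student_rows, error = run_query(conn, student_sql)
    if error:
        return f"Syntax/runtime error: {error}"
    for reference_sql in REFERENCE_POOL:
        reference_rows, _ = run_query(conn, reference_sql)
        if student_rows == reference_rows:
            return "Correct: result matches a reference solution."
    return "Incorrect: result differs from every reference solution."

# Tiny sample database for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (name TEXT, credits INTEGER)")
conn.executemany("INSERT INTO student VALUES (?, ?)", [("Ada", 120), ("Bob", 80)])
print(grade_against_pool(conn, "SELECT name FROM student WHERE credits > 100"))
```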

    Interactive correction and recommendation for computer language learning and training

    Active learning and training are particularly effective forms of education. In various domains, skills are as important as knowledge. We present an automated learning and skills-training system for a database programming environment that promotes procedural knowledge acquisition and skills training. The system provides meaningful, knowledge-level feedback such as correction of student solutions and personalised guidance through recommendations. Specifically, we address automated synchronous feedback and recommendations based on personalised performance assessment. At the core of the tutoring system is a pattern-based error classification and correction component that analyses student input to provide immediate feedback, diagnose student weaknesses, and suggest further study material. A syntax-driven approach based on grammars and syntax trees provides the basis for semantic analysis. Syntax tree abstractions and comparison techniques based on equivalence rules and pattern matching are the specific techniques employed.
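
    As a rough illustration of the comparison idea (the paper uses grammars and full syntax trees; this sketch substitutes a much simpler token-level normalisation), two queries are treated as equivalent when their canonicalised token streams match. All names and the leniency of the check are assumptions.

```python
# Sketch: a token-level stand-in for syntax-tree comparison, so that queries
# differing only in case or whitespace compare as equal. Not the paper's system.
import re

def normalise(sql: str) -> list[str]:
    """Tokenise a query and canonicalise case and whitespace."""
    tokens = re.findall(r"[A-Za-z_][A-Za-z0-9_]*|\d+|[^\s\w]", sql)
    return [token.upper() for token in tokens]

def equivalent(query_a: str, query_b: str) -> bool:
    """Crude structural equivalence check on the normalised token streams."""
    return normalise(query_a) == normalise(query_b)

print(equivalent("select  Name from Student;", "SELECT name FROM student;"))  # True
print(equivalent("SELECT name FROM student;", "SELECT * FROM student;"))      # False
```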

    ScaffoldSQL: Using Parson’s Problems to Support Database Pedagogy

    This paper examines ScaffoldSQL, an interactive tool that helps students learn SQL through a system of scaffolded exercises built on Parson’s problems. In the system, students are presented with a problem to solve using SQL. They start by attempting to answer the question using free-form text. If they get the problem wrong, they can use a Parson’s problem interface to simplify it. After completing the problem, students are given one of two “secret words”, which allows instructors to track student progress without needing to install anything beyond their typical LMS. The system is designed to help instructors of flipped classrooms identify struggling students early, while simultaneously providing immediate feedback to students as they learn. The system also provides tools for content creation and data gathering for research and development purposes.

    A comparative analysis of student SQL and relational database knowledge using automated grading tools

    This paper evaluates a blended learning methodology for Relational Database Systems. Our module offers students a range of interconnected tools and teaching resources. Among them is TestSQL, a query tool that gives students automated feedback on SQL query exercises; however, we do not use it to assess the students. Instead, assessment is through a range of questions that test not only SQL writing skills but also other aspects of the field, including questions on optimisation, physical modelling, PL/SQL, and indirect questions on SQL knowledge, such as processing order. The effectiveness of the approach is investigated through a survey of student attitudes and through assessment data. Our analysis shows, unsurprisingly, that the students' use of more resources correlates significantly with better results; but also that success at the different sub-topics tested is not at all well correlated, which shows that students can master some topics while remaining weak at others; and finally, that indirect SQL questions are the best predictor of success at each of the other sub-topics. This last result confirms our choice to broaden the testing of SQL skills, and has implications for the use of automated SQL assessment tools: we recommend that in automated testing for Database Systems, SQL writing tests be complemented with indirect questions on keyword use, parsing, or error recognition aimed at revealing the broader abilities of learners.

    Semi-automatic assessment of basic SQL statements

    Learning and assessing the Structured Query Language (SQL) is an important step in developing students' database skills. However, due to the increasing numbers of students learning SQL, assessing and providing detailed feedback on students' work can be time consuming and prone to errors. The main purpose of this research is to reduce or remove as many of the repetitive tasks in any phase of the assessment process of SQL statements as possible, in order to achieve consistency of marking and feedback on SQL answers. This research examines existing SQL assessment tools and their limitations by testing them on SQL questions; the results reveal that students must attain essential skills to be able to formulate basic SQL queries, because formulating SQL statements requires practice and effort. In addition, the standard steps adopted in many SQL assessment tools were found to be insufficient for successfully assessing our sample of exam scripts. The analysis of the outcomes identified several ways of solving the same query, and categories of errors based on common student mistakes in SQL statements. Based on this, the research proposes a semi-automated assessment approach to improve students' SQL formulation process and to ensure the consistency of SQL grading and of the feedback generated during the marking process. The semi-automatic marking method utilises both the Case-Based Reasoning (CBR) and Rule-Based Reasoning (RBR) methodologies. The approach aims to reduce the workload of marking by removing as many of the repetitive tasks in any phase of the marking process as possible, and to improve the dimensions of feedback that can be given to students. In addition, the research implemented a prototype SQL assessment framework which supports the semi-automated assessment approach. The prototype aims to enhance the SQL formulation process for students, minimise the human effort required to assess and evaluate SQL statements, and provide timely, individual and detailed feedback. The tool allows students to formulate SQL statements using a point-and-click approach in the SQL Formulation Editor (SQL-FE), and reduces marking effort through the SQL Marking Editor (SQL-ME). To ensure the effectiveness of the SQL-FE tool, the research conducted two studies: the first (a pilot study) compared the newly implemented tool with the paper-based manual method, and the second (a full experiment) compared it with the SQL Management Studio tool. The results provided reasonable evidence that using SQL-FE can have a beneficial effect on formulating SQL statements and improve students' SQL learning. The results also showed that students were able to formulate the SQL queries on time, and their performance showed significant improvement. The research also carried out an experiment to examine the viability of the SQL Marking Editor by testing SQL partial marking, grouping of identical SQL statements, and the marking process that results from applying the generic marking rules. The experimental results demonstrated that the newly implemented editor was able to provide consistent marking and individual feedback for all SQL parts. This means that the main aim of this research has been fulfilled, since the workload of lecturers has been reduced and students' performance in formulating SQL statements has been improved.
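
    The sketch below illustrates two of the mechanisms named above under assumed rule and answer formats: clause-level partial marking driven by generic marking rules, and grouping of identical (whitespace- and case-normalised) SQL answers so that each group is marked once. The rules, marks, and sample answers are hypothetical.

```python
# Sketch of rule-based partial marking plus grouping of identical answers.
# Rules, marks, and answers are illustrative, not the thesis's actual scheme.
import re
from collections import defaultdict

MARKING_RULES = [
    # (description, regex expected in the answer, marks)
    ("SELECT clause lists name", r"SELECT\s+name\b", 1.0),
    ("FROM the student table",   r"FROM\s+student\b", 1.0),
    ("WHERE filters on credits", r"WHERE\s+credits\s*>\s*100", 2.0),
]

def partial_mark(answer):
    """Apply each marking rule; return total marks and per-rule feedback."""
    total, feedback = 0.0, []
    for description, pattern, marks in MARKING_RULES:
        if re.search(pattern, answer, re.IGNORECASE):
            total += marks
            feedback.append(f"+{marks}: {description}")
        else:
            feedback.append(f"+0.0: missing - {description}")
    return total, feedback

def group_identical(answers):
    """Group answers whose whitespace/case-normalised form is identical."""
    groups = defaultdict(list)
    for student, answer in answers:
        key = " ".join(answer.upper().split())
        groups[key].append(student)
    return groups

answers = [("s1", "select name from student where credits > 100"),
           ("s2", "SELECT name  FROM student WHERE credits > 100"),
           ("s3", "SELECT name FROM student")]
for key, students in group_identical(answers).items():
    marks, notes = partial_mark(key)
    print(students, marks, notes)
```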

    EvalSQL: Automated Assessment of Database Queries

    In computer science programs, databases are a fundamental subject taught across several undergraduate courses. These courses develop both theoretical and practical database concepts. Building queries is a key aspect of this learning process, and students are assessed through assignments and quizzes. However, grading these assignments can be time-consuming for professors, and students usually receive feedback only after the deadlines have passed. As a result, students may miss the opportunity to improve their work and achieve better grades. To address this issue, it would be beneficial to provide students with immediate feedback on their submissions. EvalSQL is an automated system that evaluates assignments and provides constructive feedback to students on Canvas after submission. The feedback is based on the correctness of the query, the state of the database after the query executes, and keyword matching. This allows students to identify their mistakes and correct them promptly, which can lead to a better learning experience and improved grades. Additionally, professors could benefit from a streamlined evaluation process that allows them to focus on teaching and other tasks.
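
    A minimal sketch, not the EvalSQL implementation, of the three feedback signals the abstract lists: result-set correctness, the state of the database after the query executes, and keyword matching. The employee schema, the queries, and the helper names are assumptions for illustration.

```python
# Sketch: compare a student query with a reference query on the same seed data,
# check the resulting database state, and look for expected keywords.
import sqlite3

def fresh_db():
    """Create an identical seed database for each submission run (assumed schema)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, salary REAL)")
    conn.executemany("INSERT INTO employee VALUES (?, ?)", [(1, 50000), (2, 60000)])
    return conn

def check_submission(student_sql, reference_sql, required_keywords):
    student_db, reference_db = fresh_db(), fresh_db()
    report = {}
    # 1. Result-set correctness (order-insensitive).
    report["result_match"] = (
        sorted(student_db.execute(student_sql).fetchall())
        == sorted(reference_db.execute(reference_sql).fetchall()))
    # 2. Database state after execution (relevant for INSERT/UPDATE/DELETE).
    state = "SELECT * FROM employee ORDER BY id"
    report["state_match"] = (student_db.execute(state).fetchall()
                             == reference_db.execute(state).fetchall())
    # 3. Keyword matching against the constructs the exercise expects.
    upper = student_sql.upper()
    report["missing_keywords"] = [k for k in required_keywords if k not in upper]
    return report

print(check_submission("SELECT id FROM employee WHERE salary > 55000",
                       "SELECT id FROM employee WHERE salary > 55000",
                       ["SELECT", "WHERE"]))
```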

    Semi-automated assessment of SQL schemas via database unit testing

    A key skill for students learning relational database concepts is how to design and implement a database schema in SQL. This skill is often tested in an assignment where students derive a schema from a natural language specification. Grading such assignments can be complex and time consuming, and novice database students often lack the skills to evaluate whether their implementation accurately reflects the specified requirements. In this paper we describe a novel semi-automated system for grading student-created SQL schemas, based on a unit testing model. The system verifies whether a schema conforms to a machine-readable specification and runs in two modes: a staff mode for grading, and a reduced-functionality student mode that enables students to check that their schema meets the specified minimum requirements. Analysis of student performance over the period this system was in use shows evidence of improved grades as a result of students using the system.
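
    A minimal sketch of the unit-testing model under an assumed specification format: the student's schema is loaded into SQLite and a test checks that the tables, columns, and NOT NULL constraints required by a small machine-readable spec are present. The schema, the SPEC structure, and the test are illustrative only.

```python
# Sketch: verify a student-created schema against a machine-readable spec
# using the standard unittest framework and SQLite's PRAGMA table_info.
import sqlite3
import unittest

STUDENT_SCHEMA = """
CREATE TABLE course (
    code    TEXT NOT NULL PRIMARY KEY,
    title   TEXT NOT NULL,
    credits INTEGER
);
"""

# Assumed machine-readable specification: table -> {column: must_be_not_null}
SPEC = {"course": {"code": True, "title": True, "credits": False}}

class SchemaConformanceTest(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.executescript(STUDENT_SCHEMA)

    def test_required_columns_exist_with_constraints(self):
        for table, columns in SPEC.items():
            info = self.conn.execute(f"PRAGMA table_info({table})").fetchall()
            found = {row[1]: bool(row[3]) for row in info}  # name -> notnull flag
            for column, must_be_not_null in columns.items():
                self.assertIn(column, found, f"{table}.{column} is missing")
                if must_be_not_null:
                    self.assertTrue(found[column],
                                    f"{table}.{column} should be NOT NULL")

if __name__ == "__main__":
    unittest.main()
```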
