
    Modelling for understanding AND for prediction/classification - the power of neural networks in research

    Two articles, Edelsbrunner and Schneider (2013) and Nokelainen and Silander (2014), comment on Musso, Kyndt, Cascallar, and Dochy (2013). Several relevant issues are raised, and some important clarifications are made in response to both commentaries. Predictive systems based on artificial neural networks continue to be the focus of current research, and several advances have improved model building and the interpretation of the resulting neural network models. What is needed is the courage and open-mindedness to actually explore new paths and rigorously apply new methodologies, which can, sometimes unexpectedly, provide new conceptualisations and tools for theoretical advancement and practical applied research. This is particularly true in the educational and social sciences, where the complexity of the problems to be solved requires the exploration of both proven and new methods, the latter usually not among the common arsenal of tools of either practitioners or researchers in these fields. This response will enrich the understanding of the predictive systems methodology proposed by the authors, clarify the application of the procedure, and give a perspective on its place among other predictive approaches.

    Predicting general academic performance and identifying the differential contribution of participating variables using artificial neural networks

    Many studies have explored the contribution of different factors, from diverse theoretical perspectives, to the explanation of academic performance. These factors have important implications not only for the study of learning processes, but also as tools for improving curriculum designs, tutorial systems, and students’ outcomes. Some authors have suggested that traditional statistical methods do not always yield accurate predictions and/or classifications (Everson, 1995; Garson, 1998). This paper explores a methodological approach that is relatively new to the field of learning and education but widely used in other areas, such as the computational sciences, engineering, and economics. The study uses cognitive and non-cognitive measures of students, together with background information, to design predictive models of student performance using artificial neural networks (ANN). These predictions constitute a true predictive classification of academic performance over time, made a year in advance of the actual observed measure of academic performance. A total sample of 864 university students of both genders, aged between 18 and 25, was used. Three neural network models were developed. Two of the models (identifying the top 33% and the lowest 33% groups, respectively) reached 100% correct identification of all students in each of the two groups. The third model (identifying low, mid, and high performance levels) reached precisions from 87% to 100% across the three groups. Analyses also explored the predicted outcomes at an individual level and their correlations with the observed results, as a continuous variable, for the whole group of students. Results demonstrate the greater accuracy of the ANN compared with traditional methods such as discriminant analysis.
In addition, the ANN provided information on the predictors that best explained the different levels of expected performance. Results have thus allowed the identification of the specific influence of each pattern of variables on different levels of academic performance, providing a better understanding of the variables with the greatest impact on individual learning processes, and of the factors that best explain these processes at different academic levels.
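The abstract above does not publish the network architecture, so the general approach it describes (a small feedforward network classifying students into a "top 33%" group from standardized predictors) can only be sketched. In the sketch below, the data are synthetic and every predictor, layer size, and hyperparameter is an assumption for illustration, not a detail from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 students, 4 standardized predictors (interpreting
# them as, say, prior grades or entry-test scores is our assumption, not
# the study's actual variable set).
X = rng.normal(size=(200, 4))
true_score = X @ np.array([0.9, 0.6, 0.4, 0.3]) + 0.1 * rng.normal(size=200)
y = (true_score > np.quantile(true_score, 0.67)).astype(float)  # top 33% = 1

# One hidden layer with sigmoid units, trained by full-batch gradient
# descent on the cross-entropy loss.
W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)                    # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()            # P(student in top 33%)
    grad_out = (p - y)[:, None] / len(y)        # dLoss/dlogit for mean CE
    grad_h = (grad_out @ W2.T) * h * (1.0 - h)  # backprop through hidden layer
    W2 -= lr * h.T @ grad_out; b2 -= lr * grad_out.sum(0)
    W1 -= lr * X.T @ grad_h;   b1 -= lr * grad_h.sum(0)

accuracy = ((p > 0.5) == (y == 1)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

On synthetic data this separable, the toy network fits the top-33% boundary almost perfectly; the study's reported 100% identification on real student data is a much stronger claim that this sketch does not reproduce.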

    Co-constructing the assessment criteria for EFL writing by instructors and students: A participative approach to constructively aligning the CEFR, curricula, teaching and learning

    Assessment in language classrooms currently relies predominantly on criteria provided by instructors, treating learners as passive recipients of assessment. The current study drew on sustainable assessment and communities of practice to highlight the importance of involving learners in co-constructing the assessment criteria, and argued that using criteria provided solely by instructors can lead to discrepancies between assessment, teaching, and learning. It adopted a participatory approach and investigated how to involve learners in co-constructing the assessment criteria with instructors in tertiary English writing instruction in China, based on the European Language Profile (ELP), an evolved version of the Common European Framework of Reference for Languages (CEFR). Two writing instructors and 146 tertiary students played different, yet interactive, roles in adapting the assessment criteria to the local context. Instructors drafted the criteria in line with curricula, teaching, learning, and learners. Learners used the draft criteria in a training session and suggested possible modifications to the criteria in a survey. The suggestions were used to revise the descriptors, alongside teachers’ reflections via reflective logs. A follow-up survey explored students’ perceptions of the feasibility and usefulness of the modified descriptors, to investigate the effectiveness of co-constructing the assessment criteria for learning and to identify further improvements where necessary. The decision-making processes were thickly described, showing how assessment descriptors were selected, arranged, and modified to constructively align them with curricula, teaching, and learning. Statistical and thematic analyses were conducted to examine the accessibility, feasibility, and usefulness of the assessment descriptors before and after the modifications.
Results substantiated the effectiveness, and thus the importance, of co-constructing assessment criteria for enhancing the quality of assessment criteria and developing learners’ cognitive and metacognitive knowledge of writing and assessment. Implications for language tutors regarding co-constructing assessment criteria in local contexts are discussed at the end of the article.

    Reflexive learning, socio-cognitive conflict and peer-assessment to improve the quality of feedbacks in online tests

    Our previous work introduced the Tsaap-notes platform, dedicated to the semi-automatic generation of multiple-choice questionnaires that provide feedback: it reuses the interactive questions asked by teachers during lectures, as well as the notes taken by students after the presentation of the results, as feedback integrated into the quizzes. In this paper, we introduce a new feature that aims to increase the number of student contributions in order to significantly improve the quality of the feedback used in the resulting quizzes. This feature splits the submission of an answer into several distinct phases to harvest explanations given by students, and then applies an algorithm to filter the best contributions to be integrated as feedback in the tests. Our approach has been validated by a first experiment involving master’s students enrolled in a computer science course.
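The abstract names a filtering algorithm but does not specify it. One plausible shape for such a filter, assuming explanations are peer-rated (the `Explanation` type, the rating scheme, and all parameter names below are hypothetical, not taken from the Tsaap-notes paper), is to rank explanations by mean peer rating and keep only those rated by enough classmates:

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    author: str
    text: str
    peer_ratings: list  # hypothetical: e.g. 1-5 ratings from classmates

def select_feedback(explanations, k=3, min_ratings=2):
    """Keep the k best-rated explanations that have enough peer ratings.

    Explanations with fewer than `min_ratings` ratings are excluded so a
    single enthusiastic rating cannot dominate the selection.
    """
    eligible = [e for e in explanations if len(e.peer_ratings) >= min_ratings]
    ranked = sorted(
        eligible,
        key=lambda e: sum(e.peer_ratings) / len(e.peer_ratings),
        reverse=True,
    )
    return ranked[:k]
```

The selected explanations would then be attached to the corresponding quiz question as feedback; the actual criteria used by the platform may well differ.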

    Accounting students' IT application skills over a 10-year period

    This paper reports on the changing nature of a range of information technology (IT) application skills that students declare on entering an accounting degree over the period from 1996 to 2006. Accounting educators need to be aware of the IT skills students bring with them to university because of the implications this has for learning and teaching within the discipline, and because of the importance of both general and specific IT skills within the practice and craft of accounting. Additionally, IT skills constitute a significant element within the portfolio of employability skills that are increasingly demanded by employers and emphasized within the overall Higher Education (HE) agenda. The analysis of students' reported IT application skills on entry to university, across a range of the most relevant areas of IT use in accounting, suggests that their skills have continued to improve over time. However, there are significant differential patterns of change through the years and within cohorts. The paper addresses the generalizability of these findings and discusses their implications for accounting educators, including the importance of recognising the differences that are potentially masked by the general increase in skills; the need for further research into the changing nature, and implications, of the gender gap in entrants' IT application skills; and the low levels of entrants' spreadsheet and database skills, which are a cause for concern.

    Collaboration scripts - a conceptual analysis

    This article presents a conceptual analysis of collaboration scripts used in face-to-face and computer-mediated collaborative learning. Collaboration scripts are scaffolds that aim to improve collaboration by structuring the interactive processes between two or more learning partners. Collaboration scripts consist of at least five components: (a) learning objectives, (b) type of activities, (c) sequencing, (d) role distribution, and (e) type of representation. These components serve as a basis for comparing prototypical collaboration script approaches for face-to-face vs. computer-mediated learning. As our analysis reveals, collaboration scripts for face-to-face learning often focus on supporting collaborators in engaging in activities that are specifically related to individual knowledge acquisition. Scripts for computer-mediated collaboration are typically concerned with facilitating the communicative-coordinative processes that occur among group members. The two lines of research can be consolidated to facilitate the design of collaboration scripts that support participation and coordination as well as induce learning activities closely related to individual knowledge acquisition and metacognition. In addition, research on collaboration scripts needs to consider learners’ internal collaboration scripts as a further determinant of collaboration behavior. The article closes with the presentation of a conceptual framework incorporating both external and internal collaboration scripts.
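The five script components enumerated in the abstract suggest a natural data structure. The encoding below is illustrative only: the field names and the reciprocal-teaching-style example are our own, not a schema from the article.

```python
from dataclasses import dataclass

@dataclass
class CollaborationScript:
    learning_objectives: list  # (a) what the script should help learners achieve
    activities: list           # (b) type of activities each learner engages in
    sequencing: list           # (c) the order in which activities occur
    role_distribution: dict    # (d) mapping of roles to learners
    representation: str        # (e) how the script is presented to learners

# Hypothetical example: a script loosely in the style of reciprocal teaching.
reciprocal = CollaborationScript(
    learning_objectives=["text comprehension"],
    activities=["summarise", "question", "clarify", "predict"],
    sequencing=["summarise", "question", "clarify", "predict"],
    role_distribution={"tutor": "learner A", "tutee": "learner B"},
    representation="textual prompt cards",
)
```

Such a structure could represent either an external script (imposed by the environment) or an approximation of a learner's internal script, which is the distinction the article's closing framework draws.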

    Exploring the impact of cumulative testing on academic performance of undergraduate students in Spain

    Frequent testing provides opportunities for students to receive regular feedback and to increase their motivation. It also provides the instructor with valuable information on how the course is progressing, making it possible to address problems before it is too late. Frequent tests with non-cumulative content have been widely analysed in the literature, with inconclusive results; cumulative testing methods, however, have hardly been reported in higher education courses. This paper analyses the effect on student performance of an assessment method based on frequent, cumulative tests. Our results show that, in a microeconomics course, students assessed by a frequent, cumulative testing approach largely outperformed those assessed with a single final exam.
    Doménech I De Soria, J., Blázquez Soriano, M. D., De La Poza, E., & Muñoz Miquel, A. (2015). Exploring the impact of cumulative testing on academic performance of undergraduate students in Spain. Educational Assessment, Evaluation and Accountability, 27(2), 153-169. https://doi.org/10.1007/s11092-014-9208-z


    Academic and social integration and study progress in problem based learning

    The present study explores the effects of problem-based learning (PBL) on social and academic integration and study progress. Three hundred and five first-year students from three different psychology curricula completed a questionnaire on social and academic integration. Effects of a full-fledged PBL environment were compared with (1) effects of a conventional lecture-based learning environment, and (2) effects of a learning environment that combined lectures with other methods aimed at activating students. LISREL analyses show direct positive effects of the learning environment on study progress: students in PBL obtained more credits than students in more conventional curricula. Moreover, the levels of social and academic integration were also higher among students in the PBL curriculum. The links between integration and study progress were less straightforward: formal social integration positively affected study progress, but informal academic integration was negatively related to study progress.