158 research outputs found

    Writing Lesson Plans

    Get PDF

    Making a Case for Scenario-Based Learning in IS and Executive Education

    Get PDF
    This paper argues that scenario-based learning is an appropriate pedagogical strategy for business school education, both for students and for executive education. It begins with a discussion of problem-based learning, the pedagogical strategy in which scenario-based learning is grounded. The approach is then explained, and two example scenarios are offered, one for students and one for executives.

    Researching the comparability of paper-based and computer-based delivery in a high-stakes writing test

    Get PDF
    International language testing bodies are now moving rapidly towards using computers for many areas of English language assessment, despite the fact that research on comparability with paper-based assessment is still relatively limited in key areas. This study contributes to the debate by researching the comparability of a high-stakes EAP writing test (IELTS) in two delivery modes, paper-based (PB) and computer-based (CB). The study investigated 153 test takers' performances and their cognitive processes on IELTS Academic Writing Task 2 in the two modes, and the possible effect of computer familiarity on their test scores. Many-Facet Rasch Measurement (MFRM) was used to examine the difference in test takers' scores between the two modes, in relation to their overall and analytic scores. By means of questionnaires and interviews, we investigated the cognitive processes students employed under the two conditions of the test. A major contribution of our study is its use, for the first time in the computer-based writing assessment literature, of data from research into cognitive processes within real-world academic settings as a comparison with cognitive processing during academic writing under test conditions. In summary, this study offers important new insights into academic writing assessment in computer mode.
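
    A note on the method: the abstract names Many-Facet Rasch Measurement but not its form. In Linacre's standard formulation, the log-odds of a test taker receiving scale category k rather than k-1 decomposes additively across facets. The sketch below shows the general model, not this study's exact specification, where B_n is test taker n's ability, D_i is task i's difficulty, C_j is rater j's severity, and F_k is the step difficulty of category k; a delivery-mode facet would enter as one further subtracted term:

    \log\left( \frac{P_{nijk}}{P_{nij(k-1)}} \right) = B_n - D_i - C_j - F_k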

    Augmenting Assessment with Learning Analytics

    Full text link
    Learning analytics as currently deployed has tended to consist of large-scale analyses of available learning process data to provide descriptive or predictive insight into behaviours. What is sometimes missing in this analysis is a connection to human-interpretable, actionable, diagnostic information. To gain traction, learning analytics researchers should work within existing good practice, particularly in assessment, where high-quality assessments are designed to provide both student and educator with diagnostic or formative feedback. Such a model keeps the human in the analytics design and implementation loop, by supporting student, peer, tutor, and instructor sense-making of assessment data, while adding value from computational analyses.

    The effects of a high-salt diet on Manduca sexta growth

    No full text

    State-Of-The-Art Automated Essay Scoring: Competition, Results, and Future Directions from a United States Demonstration

    No full text
    This article summarizes the highlights of two studies: a national demonstration that contrasted commercial vendors' performance on automated essay scoring (AES) with that of human raters, and an international competition to match or exceed commercial vendor performance benchmarks. In these studies, the automated essay scoring engines performed well on five of seven measures and approximated human rater performance on the other two. With additional validity studies, it appears that automated essay scoring holds the potential to play a viable role in high-stakes writing assessments.
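
    The abstract does not name its agreement measures, but quadratic weighted kappa is the statistic most commonly reported when comparing AES engines with human raters, so a minimal, self-contained sketch of it may help. The function name and the illustrative score vectors below are this example's own, not taken from the article:

    import numpy as np

    def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
        """Agreement between two integer score vectors on a fixed scale.
        1.0 = perfect agreement, 0.0 = chance-level agreement."""
        a = np.asarray(rater_a)
        b = np.asarray(rater_b)
        n = max_rating - min_rating + 1
        # Observed confusion matrix of score pairs.
        observed = np.zeros((n, n))
        for x, y in zip(a - min_rating, b - min_rating):
            observed[x, y] += 1
        # Expected matrix under independence, scaled to the same total count.
        expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()
        # Quadratic disagreement weights: the penalty grows with score distance.
        i, j = np.indices((n, n))
        weights = (i - j) ** 2 / (n - 1) ** 2
        return 1.0 - (weights * observed).sum() / (weights * expected).sum()

    # Illustrative only: a human rater's scores versus an engine's on a 1-6 scale.
    human = [4, 3, 5, 2, 4, 6, 3]
    engine = [4, 3, 4, 2, 5, 6, 3]
    print(quadratic_weighted_kappa(human, engine, 1, 6))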

    Issues in Survey Data Quality: Four Field Experiments.

    Full text link
    This study addresses four research design issues affecting survey data quality: (1) mode of data collection, (2) systematic interviewer training, (3) survey questionnaire format alterations, and (4) proxy responses. The data were obtained from 14 Michigan school districts, with half-samples randomly assigned to seven experimental groups. Survey responses were obtained from 3770 of the 5713 former vocational education students, for an overall response rate of 66%.
    Method of Data Collection. Comparisons of data collection methods (telephone and mail) in districts randomly assigned to collection mode revealed: (1) the telephone mode yielded the highest response rate (69.1%); (2) a combination of mail and phone methods yielded a 67.1% response rate; (3) the mail-only method brought a significantly lower response rate (43.3%). However, item response rates were higher for the mailed questionnaires. The two methods obtained significant differences on some demographic variables, e.g., race and sex. These findings suggest that different modes do capture different subclasses of respondents.
    Systematic Interviewer Training. Interviewers in eight districts were randomly assigned to either a training or a control group. Training-group interviewers received a three-hour training session followed by a "hands-on" practice session, and an interviewer's manual. Results showed that systematic training was not effective for increasing response rates. Trained interviewers, however, were more persistent in attempting to contact respondents and more successful in obtaining permission to have an outside agency contact the respondent's employer.
    Survey Questionnaire Format. Respondents in six districts were randomly assigned to be contacted using either the standard questionnaire or a modified version which incorporated alterations of five questions by word or graphic changes. Item and overall response rates were higher using the modified mailed questionnaire form. Similar changes made to the telephone interview schedule did not result in increased response rates.
    Proxy Response Bias. Proxy response data from two experimental groups were examined to compare proxy and target responses on attitudinal questions and to assess the accuracy of sibling versus "other" proxy types on information about target students. Results revealed that proxy ratings on attitudinal items were significantly lower than target student ratings, and that sibling responses did not reflect information more accurately than responses of other proxies.
    Ph.D. dissertation, Educational Tests and Measurements, University of Michigan. http://deepblue.lib.umich.edu/bitstream/2027.42/159274/1/8304591.pd
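
    The response-rate comparisons above are a textbook setting for a two-proportion z-test. A minimal sketch of that test follows; the per-mode sample sizes are assumptions for illustration, since the abstract reports only the rates (69.1% vs. 43.3%) and the overall totals:

    from scipy.stats import norm

    def two_proportion_ztest(success_a, n_a, success_b, n_b):
        """Two-sided z-test for a difference between two independent proportions."""
        p_a, p_b = success_a / n_a, success_b / n_b
        pooled = (success_a + success_b) / (n_a + n_b)
        se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
        z = (p_a - p_b) / se
        return z, 2 * norm.sf(abs(z))

    # Assumed group size of 800 per mode; the abstract does not give denominators.
    n = 800
    z, p = two_proportion_ztest(round(0.691 * n), n, round(0.433 * n), n)
    print(f"z = {z:.2f}, p = {p:.4g}")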

    Analysis of a Large-Scale Formative Writing Assessment System with Automated Feedback

    No full text