203 research outputs found

    A set of free cross-platform authoring programs for flexible web-based CALL exercises

    The Mango Suite is a set of three freely downloadable cross-platform authoring programs for flexible network-based CALL exercises. They are Adobe Air applications, so they can be used on Windows, Macintosh, or Linux computers, provided the freely available Adobe Air runtime has been installed. The exercises which the programs generate are all Adobe Flash based. The three programs are: (1) Mango-multi, which constructs multiple-choice exercises with an optional sound and/or image; (2) Mango-match, which is for word/phrase matching exercises and has an added feature intended to promote memorization, whereby an item must be matched correctly not once but an optional consecutive number of times; (3) Mango-gap, which produces seamless gap-filling exercises, where the gaps can be as small as desired, down to the level of individual letters, and correction feedback is similarly detailed. Sounds may also be inserted at any desired points within the text, so that it is suitable for listening or dictation exercises. Each exercise generated by any of the programs is produced in the form of a folder containing all of the necessary files for immediate upload and deployment (except that if sound files are used in a Mango-gap exercise, they must be copied to the folder manually). The html file in which the Flash exercise is embedded may be edited in any way to suit the user, and an xml file controlling the appearance of the exercise itself may be edited through a wysiwyg interface in the authoring program. The programs aim to combine ease of use with features not available in other authoring programs, to provide a useful teaching and research tool.
    O'Brien, M. (2012). A set of free cross-platform authoring programs for flexible web-based CALL exercises. The EuroCALL Review, 20(2): 59-68. https://doi.org/10.4995/eurocall.2012.11378
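
    As a rough illustration of the letter-level gap checking and correction feedback described above, the sketch below compares a learner's answer to the expected gap text character by character. It is a conceptual illustration only, not the Mango-gap implementation (which generates Flash exercises); the function name and feedback wording are invented.

```python
# Conceptual sketch of letter-level gap checking with per-character feedback.
# This is NOT the Mango-gap implementation; it only illustrates the kind of
# fine-grained correction the abstract describes.

def check_gap(expected: str, typed: str) -> list[str]:
    """Compare a learner's answer to the expected gap text, character by character."""
    feedback = []
    for i, ch in enumerate(expected):
        if i < len(typed) and typed[i] == ch:
            feedback.append(f"position {i + 1}: '{typed[i]}' correct")
        elif i < len(typed):
            feedback.append(f"position {i + 1}: '{typed[i]}' should be '{ch}'")
        else:
            feedback.append(f"position {i + 1}: missing '{ch}'")
    if len(typed) > len(expected):
        feedback.append(f"extra characters: '{typed[len(expected):]}'")
    return feedback

if __name__ == "__main__":
    for line in check_gap("listening", "listenning"):
        print(line)
```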

    Study of the English Entrance Examination of Ibaraki Christian College

    Englis

    Strategies in technology-enhanced language learning

    The predominant context for strategy research over the last three decades has been language learning situated in a conventional classroom environment. Computer technology has brought about many changes in language learning and has shifted from being a supporting tool in the language classroom to being an ecological, normalized part of it. Consequently, the landscape of language learning has changed rapidly and substantially with the normalization of technologies in people’s daily communication. The pervasive use of mobile technologies and easy access to online resources require that digital language learners understand and employ appropriate learning strategies for learning effectiveness, and that their teachers are able and willing to teach these strategies as needed. This article provides an overview of the state-of-the-art research into technology-enhanced language learning strategies. The strategies under review include those for language learning skill areas, language subsystems, and self-regulated learning. Finally, we discuss the pressing issues that Digital Age language learning has posed to learners, teachers, and researchers, and propose considerations for strategy research in digital realms.

    Development, implementation, and evaluation of an online English placement test at college level: a case study.

    The primary purpose of the present project was to research the case study of current English placement practices at Intercollege with a view to incorporating change, improvement and efficiency, within the framework of current work-based learning and applied linguistics (and more particularly English online language testing) research. The review of work-based learning and of current theories and practices in the applied linguistics research discipline helped establish the characteristics of an insider researcher and the research approach and research techniques that would best serve such a project. The review of current theories and practices in second language (L2) teaching and learning in general, and in L2 testing in particular, revealed that there is an extensive range of practices: these range from testing discrete points to integrative tasks. Tests are also delivered both in pen-and-paper and in electronic form, the latter being either computer based testing (CBT) or computer adaptive testing (CAT). The review of current English placement practices at Intercollege indicated the need for a new English placement test, developed in a scientific way, informed by current theories and practices, based on current test design models and taking advantage of more efficient methods of delivery and placement. This review also revealed the need for more efficiency in the mode of delivery, administration, marking, reporting and test duration. Finally, this study of the current English placement practices at Intercollege established the need for a placement test that would incorporate a mechanism of continuous testing of reliability and validity as well as improvement. The detailed study of the specific context, setting, particular language programme, resources, test-takers, instructors, etc., informed by current theories and practices in second language (L2) testing online, helped in the development of the New English Placement Test Online (NEPTON) test specifications and, as a consequence, the development of the proposed test itself. The study of test delivery modes and the consideration of the specific work-based conditions and requirements (for example administration, delivery, time and money efficiency, and the urgent need for an improved and more efficient English placement test (EPT)) resulted in the selection of computer based testing delivery, with many features of the computer adaptive testing delivery mode incorporated in it, such as randomized selection of test items and fewer items. The test item writing and item moderation process resulted in the formation of a substantial pool of varied items in different skills, text types, topics and settings, covering a variety of lexical and grammatical points and communicative, authentic-like situations in all six levels. The field test, which took place in May 2004 in pen-and-paper form with almost 1200 students in all three Intercollege campuses, helped check the content, and the test trial, which took place in the period of August-September in its electronic form, helped establish the test cut-off points and fine-tune the test. The item analysis ensured the appropriateness of all items. Pre-test questionnaires established test-takers' biographical data and information about test-taker computer familiarity. The test face validity (stakeholders' attitudes and feelings about the NEPTON) was established through the use of pre- and post-test questionnaires. Experts in the area, coming from the three campuses, also studied the test specifications and the test itself (both in its electronic and pen-and-paper format) and completed a questionnaire, thus contributing to the establishment of the test content and construct validity. The test reliability was established through a split-half reliability index and a series of other aspects or processes, such as the size of the item bank, the instructions, the moderation process, and the item analysis, which are explained in more detail in chapter 5. The research project consists of two components: (a) the report, which describes the way work-based and applied linguistics research approaches were used to investigate the case study of the English placement test at college level at Intercollege in Cyprus, and to what extent this has brought change, improvement and efficiency to current practices; and (b) the evidence of such a research project, which is the New English Placement Test Online (NEPTON), in other words, the test itself, developed, implemented and evaluated in order to realize the change, improvement and efficiency aimed at by this project.
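
    The split-half reliability index mentioned above is a standard computation. The following sketch shows the odd-even split with the Spearman-Brown correction, using NumPy on a matrix of dichotomous item scores; the simulated data are placeholders, not the actual NEPTON responses.

```python
# Minimal sketch of split-half reliability with the Spearman-Brown correction.
# Assumes a matrix of 0/1 item scores (rows = test-takers, columns = items);
# this illustrates the statistic, not the actual NEPTON analysis pipeline.
import numpy as np

def split_half_reliability(scores: np.ndarray) -> float:
    """Correlate odd- and even-item half scores, then apply Spearman-Brown."""
    odd_half = scores[:, 0::2].sum(axis=1)   # total score on odd-numbered items
    even_half = scores[:, 1::2].sum(axis=1)  # total score on even-numbered items
    r_halves = np.corrcoef(odd_half, even_half)[0, 1]
    return 2 * r_halves / (1 + r_halves)     # Spearman-Brown prophecy formula

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ability = rng.normal(size=200)                      # simulated test-takers
    difficulty = rng.normal(size=40)                    # simulated items
    p_correct = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
    simulated = rng.binomial(1, p_correct)              # 0/1 item responses
    print(f"split-half reliability (Spearman-Brown): {split_half_reliability(simulated):.2f}")
```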

    The Effect of Device When Using Smartphones and Computers to Answer Multiple-Choice and Open-Response Questions in Distance Education

    Traditionally in higher education, online courses have been designed for computer users. However, the advent of mobile learning (m-learning) and the proliferation of smartphones have created two challenges for online students and instructional designers. First, instruction designed for a larger computer screen often loses its effectiveness when displayed on a smaller smartphone screen. Second, requiring students to write remains a hallmark of higher education, but miniature keyboards might restrict how thoroughly smartphone users respond to open-response test questions. The present study addressed both challenges by featuring m-learning’s greatest strength (multimedia) and by investigating its greatest weakness (text input). The purpose of the current study was to extend previous research associated with m-learning. The first goal was to determine the effect of device (computer vs. smartphone) on performance when answering multiple-choice and open-response questions. The second goal was to determine whether computers and smartphones would receive significantly different usability ratings when used by participants to answer multiple-choice and open-response questions. The construct of usability was defined as a composite score based on ratings of effectiveness, efficiency, and satisfaction. This comparative study used a between-subjects, posttest, experimental design. The study randomly assigned 70 adults to either the computer treatment group or the smartphone treatment group. Both treatment groups received the same narrated multimedia lesson on how a solar cell works. Participants accessed the lesson using either their personal computers (computer treatment group) or their personal smartphones (smartphone treatment group) at the time and location of their choice. After viewing the multimedia lesson, all participants answered the same multiple-choice and open-response posttest questions. In the current study, computer users and smartphone users had no significant difference in their scores on multiple-choice recall questions. On open-response questions, smartphone users performed better than predicted, which resulted in no significant difference between scores of the two treatment groups. Regarding usability, participants gave computers and smartphones high usability ratings when answering multiple-choice items. However, for answering open-response items, smartphones received significantly lower usability ratings than computers.
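
    The abstract does not detail the statistical analysis, so the following is only a hedged sketch of how a composite usability score might be formed and compared between two independent device groups. The ratings, the equal 35-per-group split, and the use of an independent-samples t-test are assumptions for illustration, not the study's reported procedure.

```python
# Hedged sketch: composite usability (mean of effectiveness, efficiency, and
# satisfaction ratings) compared across two independent device groups with a
# t-test. The data and group sizes are invented; the study's analysis may differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def composite_usability(ratings: np.ndarray) -> np.ndarray:
    """Average the three usability sub-ratings per participant
    (columns: effectiveness, efficiency, satisfaction)."""
    return ratings.mean(axis=1)

# 35 participants per group, three ratings each (simulated placeholder data)
computer_group = rng.integers(4, 8, size=(35, 3))
smartphone_group = rng.integers(3, 7, size=(35, 3))

t_stat, p_value = stats.ttest_ind(
    composite_usability(computer_group),
    composite_usability(smartphone_group),
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```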

    Automated Exercise Generation in Mobile Language Learning

    The Language Lion is an Android application that teaches basic Dutch to English speakers. While mobile language learning has grown rapidly in popularity, course creation is still labor-intensive. By contrast, the Language Lion uses a map of Dutch to English lexemes, a context-free grammar, and a modified version of the SimpleNLG sentence realizer to automatically generate semi-random translation exercises for the student. Each component is evaluated individually to find and analyze the particular roadblocks in automated exercise generation for mobile language learning.
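
    As a rough sketch of this approach (not the Language Lion's actual grammar, lexeme map, or code, which is an Android application built around SimpleNLG), a toy generator might expand a small context-free grammar over Dutch lexemes and gloss each lexeme with its English counterpart:

```python
# Toy sketch of CFG-driven translation-exercise generation.
# The grammar, lexeme map, and exercise format are invented for illustration;
# the Language Lion itself uses a Dutch-English lexeme map and SimpleNLG.
import random

GRAMMAR = {                      # tiny context-free grammar over lexeme keys
    "S": [["ik", "V", "N"]],
    "V": [["zie"], ["heb"]],
    "N": [["de hond"], ["een boek"], ["de kat"]],
}
LEXEMES = {                      # Dutch -> English lexeme map (toy)
    "ik": "I", "zie": "see", "heb": "have",
    "de hond": "the dog", "een boek": "a book", "de kat": "the cat",
}

def expand(symbol: str) -> list[str]:
    """Recursively expand a grammar symbol into a list of Dutch lexemes."""
    if symbol not in GRAMMAR:
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    return [tok for sym in production for tok in expand(sym)]

def make_exercise() -> tuple[str, str]:
    """Generate one semi-random Dutch sentence and its English gloss."""
    dutch = expand("S")
    english = [LEXEMES[w] for w in dutch]
    return " ".join(dutch), " ".join(english)

if __name__ == "__main__":
    prompt, answer = make_exercise()
    print(f"Translate: {prompt}  (expected: {answer})")
```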

    Improving Comprehension for Students with Learning Disabilities Using The Comprehension Improvement Strategy

    Students with learning disabilities generally have a difficult time meeting all of the course deadlines and gaining necessary skills in each of their rigorous high school courses. There are students who have difficulty showing what they learn and completing all the requirements for each class. For many years, teachers have looked for the best ways to instruct students and give them the tools they need to find success. Some strategies have worked through the years and proved to be of great benefit to students. Other strategies must be revamped and updated to fit the diverse needs of the 21st century learner. In order for students to be successful, one must ask whether a learning strategy is effective for students and the teacher and whether the strategy can be implemented by students in practical situations. The main goal of this creative project was to determine whether the implementation of the Comprehension Improvement Strategy in a reading prompt would improve students’ reading comprehension. To answer this question, data were collected on the number of accurate synonyms generated and the number of comprehension questions answered correctly. When synonyms were used within a reading prompt, the objective was for students to begin to put information together and enhance comprehension. However, the data suggest that this was not always the case for all students: while use of synonyms increased for some students, the number of correctly answered comprehension questions did not.

    Predicting and Manipulating the Difficulty of Text-Completion Exercises for Language Learning

    The increasing levels of international communication in all aspects of life lead to a growing demand for language skills. Traditional language courses compete nowadays with a wide range of online offerings that promise higher flexibility. However, most platforms provide rather static educational content and do not yet incorporate the recent progress in educational natural language processing. In recent years, many researchers developed new methods for automatic exercise generation, but the generated output is often either too easy or too difficult to be used with real learners. In this thesis, we address the task of predicting and manipulating the difficulty of text-completion exercises based on measurable linguistic properties to bridge the gap between technical ambition and educational needs. The main contribution consists of a theoretical model and a computational implementation for exercise difficulty prediction on the item level. This is the first automatic approach that reaches human performance levels and is applicable to various languages and exercise types. The exercises in this thesis differ with respect to the exercise content and the exercise format. As the theoretical basis for the thesis, we develop a new difficulty model that combines content and format factors and further distinguishes the dimensions of text difficulty, word difficulty, candidate ambiguity, and item dependency. It is targeted at text-completion exercises, which are a common method for fast language proficiency tests. The empirical basis for the thesis consists of five difficulty datasets containing exercises annotated with learner performance data. The difficulty is expressed as the ratio of learners who fail to solve the exercise. In order to predict the difficulty for unseen exercises, we implement the four dimensions of the model as computational measures. For each dimension, the thesis contains the discussion and implementation of existing measures, the development of new approaches, and an experimental evaluation on sub-tasks. In particular, we developed new approaches for the tasks of cognate production, spelling difficulty prediction, and candidate ambiguity evaluation. For the main experiments, the individual measures are combined into a machine learning approach to predict the difficulty of C-tests, X-tests and cloze tests in English, German, and French. The performance of human experts on the same task is determined by conducting an annotation study to provide a basis for comparison. The quality of the automatic prediction reaches the levels of human accuracy for the largest datasets. If we can predict the difficulty of exercises, we are able to manipulate the difficulty. We develop a new approach for exercise generation and selection that is based on the prediction model. It reaches high acceptance ratings by human users and can be directly integrated into real-world scenarios. In addition, the measures for word difficulty and candidate ambiguity are used to improve the tasks of content and distractor manipulation. Previous work on exercise difficulty was commonly limited to manual correlation analyses using learner results. The computational approach of this thesis makes it possible to predict the difficulty of text-completion exercises in advance. This is an important contribution towards the goal of completely automated exercise generation for language learning.
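
    A minimal sketch of the item-level prediction setup described above, combining scores for the four difficulty dimensions into a regression model that predicts the failure ratio, might look as follows. The synthetic features, data, and choice of regressor are illustrative assumptions, not the thesis's actual feature implementations or learner datasets.

```python
# Illustrative sketch: combine the four difficulty dimensions (text difficulty,
# word difficulty, candidate ambiguity, item dependency) as features of a
# regression model that predicts item difficulty (ratio of learners who fail).
# The features and data are synthetic placeholders, not the thesis's datasets.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_items = 300

# One row per gap item:
# [text_difficulty, word_difficulty, candidate_ambiguity, item_dependency]
X = rng.uniform(0.0, 1.0, size=(n_items, 4))
# Synthetic target: failure ratio in [0, 1], loosely driven by the four dimensions
failure_ratio = np.clip(
    0.2 * X[:, 0] + 0.3 * X[:, 1] + 0.3 * X[:, 2] + 0.1 * X[:, 3]
    + rng.normal(0, 0.05, size=n_items),
    0.0, 1.0,
)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, failure_ratio, cv=5,
                         scoring="neg_mean_absolute_error")
print(f"cross-validated MAE: {-scores.mean():.3f}")
```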

    Using blended instruction to teach academic vocabulary collocations: A case study

    Learning second language vocabulary has always been a challenge for second language (L2) learners. Transferring new vocabulary to an active stage has been an even greater challenge. In the 1990s, Lewis (2002a) proposed the Lexical Approach as a means to help L2 learners with vocabulary acquisition. This approach encouraged the teaching of vocabulary in chunks, or in other words, putting emphasis on collocations. Focus on vocabulary collocations was suggested by several researchers (Brown, 1974; Hinkel, 2004; Lewis, 2001), who supported the teaching of collocations via in-class exercises. Cobb (1999) and Kaur and Hegelheimer (2005) showed that the use of a concordancer (an online resource which provides information on collocation) was beneficial to learners' development of active vocabulary. However, studies focusing on explicit teaching of academic vocabulary collocation via blended instruction, which consists of a combination of in-class and online instruction, were not found. This case study examined how teaching academic vocabulary collocations affected the writing development of six students in an Intensive English Program (IEP). Collocation was presented and taught both in class and via Moodle, the course management software used as the online environment. The study also looked at how these learners perceived blended instruction. These learners came from various language backgrounds. Data were collected via a questionnaire, in-class observations, and learners' journals, writing samples, mid-course reflections, online logs, and interviews. The class instructor also provided data in the form of instructor's journals and an interview. The results demonstrated that prior to teaching collocations, the teacher needed to clarify the concept and its importance to learners. Moreover, the results showed that learners benefited from explicit teaching of vocabulary collocations. Regarding blended instruction, the learners perceived the online component as a review/practice tool rather than an integral part of the course. The study also revealed a certain lack of commitment to the online exercises, especially when these exercises were not directly affecting the learners' grades.