70 research outputs found
Review of Wimba Voice 6.0 Collaboration Suite
Wimba Voice 6.0™ is a component of the Wimba Collaboration Suite™ 6.0, a set of tools for online communication that combines a series of interactive technologies. Wimba Voice allows teachers to complement their pedagogical approaches with five web-based applications: Voice Authoring, Voice Board, Voice Podcaster, Voice Presenter, and Voice Email. These applications add audio and video components to asynchronous communication and can be easily integrated into different course management environments (e.g., Angel, Blackboard, Moodle, WebCT). Consequently, Wimba Voice has attracted the interest of an increasing number of language educators who are striving to enhance teaching and learning through online oral instruction, practice, and collaboration. Its features allow for the creation of computer-assisted language learning (CALL) tasks that are justified by second language acquisition (SLA) tenets and can target various skills, although, due to its audio and video capabilities, they may have greater appeal for listening, speaking, and pronunciation practice.
Genre-based automated writing evaluation for L2 research writing: From design to evaluation and enhancement.
Research writing is craftsmanship central to the world of research and academia. It is the main means employed by scientific communities to disseminate and ratify knowledge, "inject[ing] light on dusty areas" of academic enterprises (Barnett, 2005, p. 3). Journal articles, conference papers, grant proposals, theses, and dissertations, which are valued research-related genres, are viewed as major intellectual endeavors that earn their authors credentials and confer academic status on them. For novice scholars such as graduate students, who are legitimate but peripheral participants in their scientific communities (Lave & Wenger, 1991), research writing is the first step towards accessing and actively engaging in the discourse of their discipline. The enculturation of these novice scholars in their disciplinary communities is high stakes, since dissemination of their research perceptibly impacts earning an advanced degree, professional growth, and academic recognition.
Towards Effective Integration and Positive Impact of Automated Writing Evaluation in L2 Writing
The increasing dominance of English has elevated the need to develop an ability to communicate effectively in writing, and this has put a strain on second language education programs worldwide. Faced with time-consuming and copious commenting on student drafts and inspired by the promise of computerized writing assessment, "many educational technology enthusiasts are looking to AWE [automated writing evaluation] as a silver bullet for language and literacy development" (Warschauer & Ware, 2006, p. 175). This chapter reviews what AWE offers for learners and teachers and raises a number of controversies regarding AWE effectiveness, with the underlying message that clear milestone targets need to be set with respect to AWE development, implementation, and evaluation in order to ensure a positive impact of this technology on L2 writing. In support of this message, the chapter introduces an example, IADE, a prototype of context-based AWE conceptualized and operationalized to address latent issues through a synthesis of theoretical premises and learning needs. Multifaceted empirical evaluation of IADE further provides insights into processes triggered by interaction with AWE technology and foregrounds a call for future research needed to inform effective application of AWE in L2 writing classrooms.
Automated Writing Evaluation
Automated Writing Evaluation (AWE) comprises a suite of Web-based applications for computer-assisted assessment and learning. This historically controversial technology, solidly grounded in psychometric research, imposes the need for comprehensive inquiry into its context-specific utilizations in order to exploit its advantages appropriately and to devise effective classroom techniques for fostering writing development.
Computer-Assisted Research Writing in the Disciplines
It is arguably very important for students to acquire writing skills from kindergarten through high school. In college, students must further develop their writing in order to successfully continue on to graduate school. Moreover, they have to be able to write good theses, dissertations, conference papers, journal manuscripts, and other research genres to obtain their graduate degree. However, opportunities to develop research writing skills are often limited to traditional student-advisor discussions (Pearson & Brew, 2002). Part of the problem is that graduate students are expected to be good at such writing because if they "can think well, they can write well" (Turner, 2012, p. 18). Education and academic literacy specialists oppose this assumption. They argue that advanced academic writing competence is too complex to be automatically acquired while learning about or doing research (Aitchison & Lee, 2006). Aspiring student-scholars need to practice and internalize a style of writing that conforms to discipline-specific conventions, which are norms of writing in particular disciplines such as Chemistry, Engineering, Agronomy, and Psychology. Motivated by this need, the Research Writing Tutor (RWT) was designed to assist the research writing of graduate students. RWT leverages the conventions of scientific argumentation in one of the most impactful research genres: the research article. This chapter first provides a theoretical background for research writing competence. Second, it discusses the need for technology that would facilitate the development of this competence. The description of RWT as an exemplar of such technology is then followed by a review of evaluation studies. The chapter concludes with recommendations for RWT integration into the classroom and with directions for further development of this tool.
Potential of Automated Writing Evaluation Feedback
This paper presents an empirical evaluation of automated writing evaluation (AWE) feedback used for L2 academic writing teaching and learning. It introduces the Intelligent Academic Discourse Evaluator (IADE), a new web-based AWE program that analyzes the introduction section to research articles and generates immediate, individualized, and discipline-specific feedback. The purpose of the study was to investigate the potential of IADE's feedback. A mixed-methods approach with a concurrent transformative strategy was employed. Quantitative data consisted of responses to Likert-scale, yes/no, and open-ended survey questions; automated and human scores for first and final drafts; and pre-/posttest scores. Qualitative data contained students' first and final drafts as well as transcripts of think-aloud protocols and Camtasia computer screen recordings, observations, and semistructured interviews. The findings indicate that IADE's color-coded and numerical feedback possesses potential for facilitating language learning, a claim supported by evidence of focus on discourse form, noticing of negative evidence, improved rhetorical quality of writing, and increased learning gains.
Automated Writing Evaluation for non-native speaker English academic writing: The case of IADE and its formative feedback
This dissertation presents an innovative approach to the development and empirical evaluation of Automated Writing Evaluation (AWE) technology used for teaching and learning. It introduces IADE (Intelligent Academic Discourse Evaluator), a new web-based AWE program that analyzes research article Introduction sections and generates immediate, individualized, discipline-specific feedback. The major purpose of the dissertation was to implement IADE as a formative assessment tool complementing L2 graduate-level academic writing instruction and to investigate the effectiveness and appropriateness of its automated evaluation and feedback. To achieve this goal, the study sought evidence of IADE's Language Learning Potential, Meaning Focus, Learner Fit, and Impact qualities outlined in Chapelle's (2001) CALL evaluation conceptual framework.
A mixed-methods approach with a concurrent transformative strategy was employed. Quantitative data consisted of Likert-scale, yes/no, and open-ended survey responses; automated and human scores for first and last drafts; pre-/posttest scores; and frequency counts for draft submission and for access to IADE's Help Options. Qualitative data contained students' first and last drafts as well as transcripts of think-aloud protocols and Camtasia computer screen recordings, observations, and semi-structured interviews.
The findings indicate that IADE can be considered an effective formative assessment tool suitable for implementation in the targeted instructional context. Its effectiveness was a result of combined strengths of its Language Learning Potential, Meaning Focus, Learner Fit, and Impact qualities, which were all enhanced by the program's automated feedback. The strength of Language Learning Potential was supported by evidence of noticing of and focus on discourse form, improved rhetorical quality of writing, increased learning gains, and relative helpfulness of practice and modified interaction. Learners' focus on the functional meaning of discourse and construction of such meaning served as evidence of strong Meaning Focus. IADE's automated feedback characteristics and Help Options were appropriate for targeted learners, which speaks of adequate Learner Fit. Finally, despite some negative effects caused by IADE's numerical feedback, overall Impact, exerted at affective, intrinsic, pragmatic, and cognitive levels, was found to be positive due to the color-coded type of feedback.
The results of this study provide valuable empirical knowledge to the areas of L2 academic writing, AWE, formative assessment, and I/CALL. They have important practical and theoretical implications and are informative for future research as well as for the design and application of new learning technologies.
Understanding the "Black-Box" of Automated Analysis of Communicative Goals and Rhetorical Strategies in Academic Discourse
Despite the appeal of automated writing evaluation (AWE) tools, many writing scholars and teachers have disagreed with the way such tools represent writing as a construct. This talk will address two important objections: that AWE heavily subordinates rhetorical aspects of writing, and that the models used to automatically analyze student texts are not interpretable for the stakeholders vested in the teaching and learning of writing. The purpose is to promote a discussion of how to advance research methods in order to optimize and make more transparent writing analytics for automated rhetorical feedback. AWE models will likely never be capable of truly understanding texts; however, important rhetorical traits of writing can be automatically detected (Cotos & Pendar, 2016). To date, AWE performance has been evaluated in purely quantitative ways that are not meaningful to the writing community. Therefore, it is important to complement quantitative measures with approaches stemming from a humanistic inquiry that would dissect the actual computational model output in order to shed light on the reasons why the "black box" may yield unsatisfactory results.
Innovative Implementation of a Web-Based Rating System for Individualizing Online English Speaking Instruction
The primary goal of computer-assisted language learning (CALL) in general, and of online language instruction in particular, is to create and evaluate language learning opportunities. To be effective, online language courses need to be guided by an integrated set of theoretical perspectives on second language acquisition (SLA), as well as by specific curricular goals, learning objectives and outcomes, appropriate tasks and necessary materials, and learners' characteristics and abilities, to name a few factors that are essential in both online and face-to-face teaching (Xu & Morris, 2007). Doughty and Long (2003) articulate pedagogical principles for computer-enhanced language teaching, which highlight the importance of exercising task-based activities, elaborating the linguistic input, enhancing the learning processes with negative feedback, and individualizing learning. Chapelle (2009) further puts forth a framework of evaluation principles that define the characteristics of tasks and materials drawing on SLA theories. Notably, she remarks that "[t]he groundwork for such evaluation projects is an iterative process of stating ideals for the materials based on the theoretical framework and providing a judgmental analysis of the degree to which the desired features actually appear in the materials" (Chapelle, 2009: 749). In other words, she calls for a judgmental analysis as pre-evaluation. With regard to online language instruction, pre-evaluation is rather challenging when it comes to individualizing learning in view of learners' characteristics and abilities, which are different in every iteration of the course.
- …