Recommended from our members
A Quasi-Experimental Study of the Classroom Practices of English Language Teachers and the English Language Proficiency of Students, in Primary and Secondary Schools in Bangladesh
English in Action (EIA) is an English language teacher development project based in Bangladesh that was intended to run from 2008 to 2017, but which was extended at the request of the Government of Bangladesh, with additional funding from UKAID, for a further year to 2018. By the time this study was designed (2014-2015), EIA was drawing to the end of upscaling (phase III, 2011-2014) and entering institutionalisation and sustainability (phase IV, 2014-17, extended to 2018). Successive prior studies had indicated substantial success in improving both teachers' classroom practices and student learning outcomes over the pre-project baseline (e.g. EIA 2011, 2012). The 2014 Annual Review of EIA recommended that in the final phase, EIA should explore whether it would be possible to carry out a study comparing a 'counterfactual' or control group of teachers and students with the 'EIA' or treatment schools: i.e. a Randomised Controlled Trial or Quasi-Experimental study. A proposal for a Quasi-Experimental study was developed in collaboration with DFID's South Asia Research Hub (SARH), which also provided the additional funding necessary to implement such a study.
The teachers and students who were the subject of this study were the fourth cohort to participate in English in Action (together with teachers from 'control' schools in the same Upazilas). This fourth EIA cohort included schools, teachers and students from approximately 200 Upazilas (of approximately 500 in total) across Bangladesh, including some of the most disadvantaged areas (with reference to the UNICEF deprivation index), such as the Char, Haor and Monga districts.
Teachers took part in a school-based teacher development programme, learning communicative language teaching approaches by carrying out new classroom activities, guided by teacher development videos that showed teachers, students and schools similar to those across the country. Teachers also had classroom audio resources for use with students. All digital materials were available offline, on teachers' own mobile phones, so there was no dilution of the programme's core messages about teaching and learning by an intermediary coming between the teacher and the materials. Teachers were supported through these activities by other teachers in their schools, by their head teachers and by local education officers. Some teachers from each area were also given additional support and guidance from divisional EIA staff to act as Teacher Facilitators, helping teachers work through activities and share their experiences at local cluster meetings. Whereas previous cohorts of teachers had attended eight local teacher development meetings over their participation in the project, for Cohort Four this was reduced to four meetings, with greater emphasis placed on support in school from head teachers, as well as support from local education officers. This change was part of the move towards institutionalisation and sustainability of project activities within and through government systems and local officers.
The purpose of this study was both to provide the evaluation evidence required for the final phase of the EIA project and to contribute to the international body of research evidence on effective practices in teacher development in low-to-middle income country contexts.
The Incongruity Factor: Random Answers Without Questions
A transcript of an interview Mr. McCormick had with himself. He reports that the interview, on the whole, went smoothly and conformed, in general, to accepted theories of communications.
Marginal Costs and Editorial Spinoff: A Case History
Robert Wacker, the big-time freelance writer who spoke at the Northeast AAACE meeting in New York, made a point that bears repeating.
Graduate Training and Research Productivity in the 1990s: A Look at Who Publishes
The relationship between reputational rankings of political science departments and their scholarly productivity remains a source of discussion and controversy. After the National Research Council (1995) published its ranking of 98 political science departments, Katz and Eagles (1996), Jackman and Siverson (1996), and Lowry and Silver (1996) analyzed the factors that seemingly influenced those rankings. Miller, Tien, and Peebler (1996) offered an alternate approach to ranking departments, based both upon the number of faculty (and their graduates) who published in the American Political Science Review and upon the number of citations that faculty members received. More recently, two studies have examined departmental rankings in other ways. Ballard and Mitchell (1998) assessed political science departments by evaluating the level of productivity in nine important disciplinary and subfield journals, and Garand and Graddy (1999) evaluated the impact of journal publications (and other variables) on the rankings of political science departments. In general, Miller, Tien, and Peebler found a high level of correspondence between reputational rankings and productivity, Ballard and Mitchell did not, and Garand and Graddy found that publications in "high impact" journals were important for departmental rankings.
Evaluating classroom practice: a critical analysis of approaches to evaluation in large scale teacher education or education technology programmes, in international development contexts
This study builds on and contributes to work in teacher education and educational technology, in international development contexts. Recent reviews, funded by the UK Department for International Development (DFID) have examined the characteristics of teacher education programmes (Westbrook et al. 2013) and educational technology programmes (Power et al. 2014), that show evidence of impact on teaching practice or learning outcomes. These both illustrate the importance of a strong focus on improving the quality of classroom practice in programme design, and both indicate some of the key characteristics of effective programme support for teachers. But in both reviews, the studies reviewed present problems of evidence. Such evidential problems arise in relation to reporting changes in: attitudes and understanding; teaching and learning practices; and learning outcomes.
In this article, we draw particular attention to evidence of classroom practice: in terms of extensiveness, of methodology, and of understanding the relationships between the variables considered. As such, the purpose of this article is to provide insight into three inter-related issues: the methodological challenges - of rigour, systematic observation, and extensiveness; the practical challenges - of human capacity for research activity, geographical remoteness, and cost; and the evidence requirements of different audiences - donors, policy makers, practitioners and the academic and research communities. This is done by considering these three issues, through a case study of English in Action, a large scale teacher education programme in Bangladesh, in which Educational Technology plays a central role in supporting both teacher professional development, and new classroom practices.
There are several implications from the recent reviews and the case study that lead us to argue for greater development of evaluation approaches for classroom practice, based upon rigorous, systematic observation (using standardised observations of objective behaviours). Such approaches must be capable of deployment at scale, and of reliable implementation through relatively inexperienced field researchers who are available and affordable in country. This may suggest certain kinds of large-scale quantitative observation that are rare in the global north. Is there an opportunity for a collective accumulation of data, to deepen our basic understanding of classrooms and the actors within them?
Graduate Training, Current Affiliation and Publishing Books in Political Science
Scores of studies have measured the quality of political science departments. Generally speaking, these studies have taken two forms. Many have relied on scholars' survey responses to construct rankings of the major departments. For example, almost 50 years ago Keniston (1957) interviewed 25 department chairpersons and asked them to assess the quality of various programs, and, much more recently, the National Research Council (NRC 1995) asked 100 political scientists to rate the "scholarly quality of program faculty" in the nation's political science doctoral departments. In response to these opinion-based rankings, a number of researchers have developed what they claim to be more objective measures of department quality based on the research productivity of the faculty (Ballard and Mitchell 1998; Miller, Tien, and Peebler 1996; Robey 1979). While department rankings using these two methods are often similar, there are always noteworthy differences, and these have generated an additional literature that explores the relationship between the rating systems (Garand and Graddy 1999; Jackman and Siverson 1996; Katz and Eagles 1996; Miller, Tien, and Peebler 1996).
English Speaking and Listening Assessment Project - Baseline. Bangladesh
This study seeks to understand the current practices of English Language Teaching (ELT) and assessment at the secondary school level in Bangladesh, with specific focus on speaking and listening skills. The study draws upon prior research on general ELT practices, English language proficiencies and exploration of assessment practices, in Bangladesh. The study aims to provide some baseline evidence about the way speaking and listening are taught currently, whether these skills are assessed informally, and if so, how this is done. The study addresses two research questions:
1. How ready are English Language Teachers in government-funded secondary schools in Bangladesh to implement continuous assessment of speaking and listening skills?
2. Are there identifiable contextual factors that promote or inhibit the development of effective assessment of listening and speaking in English?
These questions were addressed with a mixed-methods design, drawing upon prior quantitative research and new qualitative fieldwork in 22 secondary schools across three divisions (Dhaka, Sylhet and Chittagong). At the suggestion of DESHE, the sample also included two of the 'highest performing' schools from Dhaka city.
There are some signs of readiness for effective school-based assessment of speaking and listening skills: teachers, students and community members alike are enthusiastic for a greater emphasis on speaking and listening skills, which are highly valued. Teachers and students are now speaking mostly in English and most teachers also attempt to organise some student talk in pairs or groups, at least briefly. Yet several factors limit students’ opportunities to develop skills at the level of CEFR A1 or A2.
Firstly, teachers generally do not yet have sufficient confidence, understanding or competence to introduce effective teaching or assessment practices at CEFR A1-A2. In English lessons, students generally make short, predictable utterances or recite texts. No lessons were observed in which students had an opportunity to develop or demonstrate language functions at CEFR A1-A2. Secondly, teachers acknowledge a washback effect from final examinations, agreeing that inclusion of marks for speaking and listening would ensure teachers and students took these skills more seriously during lesson time. Thirdly, almost two thirds of secondary students achieve no CEFR level, suggesting many enter and some leave secondary education with limited communicative English language skills. One possible contributor to this may be that almost half (43%) of the ELT population are only at the target level for students (CEFR A2) themselves, whilst approximately one in ten teachers (12%) do not achieve the student target (being at A1 or below). Fourthly, the Bangladesh curriculum student competency statements are generic and broad, providing little support to the development of teaching or assessment practices.
The introduction and development of effective teaching and assessment strategies at CEFR A1-A2 requires a profound shift in teachers’ understanding and practice. We recommend that:
1. Future sector-wide programmes provide sustained support to develop teachers' competence in the teaching and assessment of speaking and listening skills at CEFR A1-A2
2. Options are explored for introducing assessment of these skills in terminal examinations
3. Mechanisms are identified for improving teachers' own speaking and listening skills
4. Student competency statements within the Bangladesh curriculum are revised to provide more guidance to teachers and students
English Proficiency Assessments of Primary and Secondary Teachers and Students Participating in English in Action: Third Cohort (2014)
The purpose of the study was to assess the student learning outcomes of English in Action's (EIA's) school-based teacher development programme, in terms of improved English language competence (ELC), against recognised international frameworks (specifically, the Graded Examinations in Spoken English [GESE]; Trinity College London 2013), which map onto the Common European Framework of Reference for Languages (CEFR). Measurably improved student learning outcomes are the ultimate test of success of a teacher development programme.
English Proficiency Assessments 2014 is a repeat of the study on the pilot EIA programme (Cohort 1) (EIA 2012), but focusing only on student ELC. The teachers, and hence the students, of Cohort 3 are substantially greater in number than in the pilot phase (347,000 primary students and almost 1.7 million secondary students, compared with around 700 teachers, 35,000 primary students and over 83,000 secondary students in 2011). To enable this increase in scale, the programme has been delivered through a more decentralised model, with much less direct contact with English language teaching (ELT) experts, a greater embedding of expertise within teacher development materials (especially video), and a greater dependence upon localised peer support.
This report addresses the following research question: "To what extent do the students of Cohort 3 show improved post-intervention EL proficiencies, in speaking and listening, compared with the Cohort 1 2010 pre-intervention baseline?"
Classroom Practices of Primary and Secondary Teachers Participating in English in Action: Third Cohort (2014)
This study reports on the third cohort of teachers and students to participate in EIA (2013–14). While the students and teachers in Cohort 3 underwent an essentially similar programme to those in Cohorts 1 and 2, they are much greater in number (there are over 8,000 teachers and 1.7 million students in Cohort 3, compared to 751 teachers and 118,000 students in Cohort 1). To enable ongoing increases in scale, the SBTD programme became increasingly decentralised, with less direct contact with English language teaching (ELT) experts, a greater embedding of expertise within teacher development materials (especially video) and a greater dependence upon localised peer support.
The purpose of this study was to ascertain whether there had been changes in the classroom practice of teachers and students participating in EIA Cohort 3 (2013–14). Previous research in language teaching has established that when teachers take up most of the lesson time talking, this can severely limit students’ opportunities to develop proficiency in the target language (Cook 2008); a general goal of English language (EL) teachers is to motivate their students to speak – and to practise using the target language (Nunan 1991). This study therefore focuses upon the extent of teacher and student talk, the use of the target language by both, and the forms of classroom organisation (individual, pair, group or choral work) in which student talk is situated.
The study addresses two research questions:
1. To what extent do the teachers of Cohort 3 show classroom practice comparable to the teachers of Cohort 1, particularly in relation to the amount of student talk and the use of the target language by teachers and students, post-intervention?
2. In what ways do the teachers of Cohort 3 show improved classroom practice (particularly in relation to the amount of student talk and use of the target language by teachers and students) in contrast to the pre-intervention baseline?
This study is a repeat of studies on Cohorts 1 & 2 (EIA 2011a, 2012a & 2014).
Pledge to Progress? Analyzing the Impact of the BLM Movement on Racial Mortgage Approval Rate Gaps
Following the surge of Black Lives Matter protests in 2020, prominent financial institutions announced their commitment to reducing racial disparities in homeownership. Using the HMDA dataset from 2019-2022, this paper investigates the difference in home-loan approval rates between white and Black borrowers in Ohio following the Black Lives Matter movement, using bank fixed effects. We find a statistically significant reduction in the approval rate gap between Black and white borrowers after 2020.