What’s the state of the evidence on…? Oral Language Interventions

What are oral language interventions?

Oral language interventions, often considered under the rather unattractive term oracy, can be described as “what the school does to support the development of children’s capacity to use speech to express their thoughts and communicate with others, in education and in life” (Alexander, 2010: 10). Although this has clear and obvious synergy with the teaching of literacy, or English at secondary school, the impact of such interventions goes beyond this subject. The development of effective oral communication skills has a learning benefit across all subjects because talk is a feature of all lessons. Oral language interventions can, in this respect, have a meta-cognitive aspect: they seek to develop not just the ability to communicate but also to think.

Is there consensus about its impact?

The EEF Toolkit rates the research in this field highly. They note that there is extensive evidence, including a range of recent randomised controlled trials and meta-analyses, pointing towards oral language interventions having a moderate impact. In their translation of effect sizes, they characterise such interventions as having a “+5 months” impact.

Key research in this field

Comparative research into classroom talk in the UK, USA, India, Russia and France carried out in 2000 suggested that in the UK the quality of student participation in discussion was low. The report characterised student talk as being dominated by short answers to teacher questions: a closed process (Alexander, 2000). Following this report, a group of academics at the University of Cambridge set out to develop and trial methods of improving the quality of talk in UK classrooms. Their approaches became known as ‘Dialogic Teaching’ and ‘Thinking Together’.

Robin Alexander

Aside from the work of some independent academics in the United States (the work of Lauren Resnick is of particular note), the most significant research into the effectiveness of oral language interventions has been carried out by Cambridge academics such as Neil Mercer and Robin Alexander, the figures who developed the strategies. A relatively small number of key names therefore feature very heavily in the literature on oral language interventions.

Examples of these studies include Neil Mercer’s work showing the positive impact of oral language interventions on verbal reasoning (Mercer, 2000) and attainment in science (Mercer et al, 2004). In 2006 Mercer and Sams published research showing a significant impact of oral language interventions on maths attainment (an effect size of .59). All of these studies were methodologically comprehensive and designed with strict controls. The studies share a common conclusion: “improving the quality of children’s use of language for reasoning together would improve their individual learning and understanding” (Mercer and Sams, 2006: 526). Furthermore, the authors make the significant claim that this research presents hard evidence of the effectiveness of group work – although they couch it in the most impenetrable edu-babble imaginable (“validity of a sociocultural theory of education by providing empirical support for the Vygotskian claim that language-based social interaction has a developmental influence on individual thinking”, p.526)!

A great deal of the most up-to-date and relevant research in the field of oral language interventions is a legacy of the 2008 Bercow Report into provision for children with speech, language and communication needs (SLCN). The government’s response to this report led to the formation of the Better Communication Action Plan. A key strand of this plan was a comprehensive review of the literature, identifying what research suggested about best practice in developing oral language skills (Law, Beecham and Lindsay, 2010).

Research into the impact of oral language interventions is generally divided into the examination of three ‘waves’. Wave 1 interventions are general, whole-class strategies for developing oral language skills. Wave 2 consists of targeted interventions, focusing on pupils with an identified oral language need. Wave 3 concerns specialist interventions, aimed at the very small minority of students with particular, often unique, oral language needs. The bulk of research focuses on Wave 2 interventions.

Talk of the Town

Where to go next?

The fact that there is already convincing research in this field has encouraged the EEF to fund a range of large-scale studies to further develop the evidence base. The particular focus of continued research is on classroom-based interventions to tackle SLCN. Key ongoing projects are:

  • Talk of the Town: a 62-school effectiveness trial testing the impact of the Talk of the Town framework. This project will report in 2016.
  • Language for Learning: an effectiveness trial involving 34 schools testing the impact of a specific 20-week programme, delivered by trained teaching assistants, to improve the communication skills of students entering primary school with poor oral language skills. This has the potential to be a low-cost, easily replicable programme to tackle poor communication skills at the start of a child’s school career. It reports in Spring 2015.

If you teach in a school with significant numbers of children with SLCN and you are not familiar with The Communication Trust’s website, then hot-foot it there! It provides an excellent model for how teachers can use evidence to make informed choices about a range of interventions and what to look for in terms of outcomes.

References

Alexander, R., (2000), Culture and Pedagogy: International Comparisons in Primary Education, Oxford: Blackwell

Alexander, R., (2012), Improving oracy and classroom talk in English schools: achievements and challenges. Available online here.

Ainscow, M., Gallannaugh, F., Kerr, K., (2012), An Evaluation of The Communication Trust’s ‘Talk of the Town’ Project, 2011-12. Available online here.

Law, J., Beecham, J., Lindsay, G., (2012), Effectiveness, costing and cost effectiveness of interventions for children and young people with speech, language and communication needs (SLCN). Available online here.

Mercer, N., (2000), Words and Minds: How We Use Language to Think Together, London: Routledge

Mercer, N., Dawes, L., Wegerif, R., Sams, C., (2004), ‘Reasoning as a scientist: ways of helping children to use language to learn science’, British Educational Research Journal, 30:3, pp.367-385

Mercer, N., Sams, C., (2006), ‘Teaching children how to use language to solve maths problems’, Language and Education, 20:6, pp.507-528

What’s the state of the evidence on…? Peer tutoring

This is the first in what will become a series of brief outlines (second briefing, on Oracy, now available) of the state of research in key areas of school policy and teaching practice. One objective of these posts will be to provide a summary definition of the intervention or policy, and to highlight judgments of their efficacy. A second objective will be to survey the research that has been conducted in the field, noting its strengths and weaknesses, and to offer suggestions as to future research directions. I am not aiming to produce a comprehensive review of literature and research, but if you feel there is an important study that has been omitted, please comment or contact me on Twitter.

What is Peer Tutoring?

Peer tutoring is a blanket term used to describe a range of specific approaches that share as a common feature learners working with one another to provide mutual support and instruction. The most commonly used approaches are cross-age tutoring, peer-assisted learning and reciprocal peer tutoring.

Peer Tutoring (image credit: Maswlsey Community Primary School)

Cross-age tutoring

This involves the pairing of an older student in a tutor role with a younger student as a tutee.

Peer-assisted learning

This is a structured programme in which participants take part in two or three sessions of approximately 30 minutes each week, generally in order to develop their maths or reading skills. It is also extensively used in higher education establishments.

Reciprocal peer tutoring

Reciprocal peer tutoring sees students of similar age and ability working collaboratively, alternating between the roles of tutor and tutee.

Is there consensus about its impact?

There is a broad consensus that peer tutoring is an effective strategy. The EEF Toolkit rates peer tutoring highly, suggesting that it has a significant impact on outcomes. It ranks it as having a “+6 months” impact. Research suggests that tutors can benefit from peer tutoring as much as tutees. Hattie, in Visible Learning, concludes that peer tutoring has an effect size of .55.

Key research in this field

In 2013 a meta-analysis of the academic impact of peer tutoring was conducted by a team of American academics led by Lisa Bowman-Perrott. Her research summarised the key reasons why peer tutoring programmes tend to have a positive impact: the core components of peer tutoring (frequent opportunities to respond, increased time on task, regular and immediate feedback) have each been “empirically linked with increased academic achievement” (Bowman-Perrott et al, 2013: 39).

Bowman-Perrott et al based their study on an analysis of multiple single-case pieces of research: investigations conducted with a limited number of students in a single institution. They limited their analysis to investigations meeting “standards for high-quality single case research design”: of 1,775 articles focusing on peer tutoring published between 1966 and 2011, only 26 (1.46%) met these criteria (Bowman-Perrott et al, 2013: 44). In total, their meta-analysis included 938 participants.

This meta-analysis found that the mean effect size of peer tutoring was .69 at elementary grade level and .74 at secondary grade level. “The overall effect size was found to be moderately large, indicating that greater academic gains were achieved by students engaged in peer tutoring interventions than nonpeer tutoring instructional arrangements” (Bowman-Perrott et al, 2013: 49). It should also be noted that the study calculated effect sizes using TauU, a “relatively new effect size measure” that is not directly comparable to the more familiar Cohen’s d effect size calculation (Bowman-Perrott et al, 2013: 41).

John Fantuzzo

John Fantuzzo

As with any area, there are some researchers and academics who have made peer tutoring their specialism. John Fantuzzo, currently at Penn Graduate School of Education, has dedicated much of his career to investigating the impact of peer tutoring. As well as the academic impact of peer tutoring, Fantuzzo has written about the positive social and behavioural impact of peer tutoring programmes (Ginsburg-Block, Rohrbeck, Fantuzzo, 2006).

Keith Topping is another researcher whose work on peer tutoring has been influential and robust. Topping’s particular area of expertise is the use of peer tutoring to develop reading, although he has also been part of studies examining the impact of peer tutoring in maths. In 2011 Topping led an RCT in 80 Scottish primary schools that investigated the impact of peer tutoring on reading. Topping found that peer tutoring led to significant gains in reading and in tutor and tutee self-esteem (Topping et al, 2011: 3). The findings of this study led to the EEF deciding to undertake a follow-up project in England. Topping notes that effective tutor training is a significant factor in the success of peer tutoring programmes. His works, particularly 2001’s Thinking, Reading, Writing, are the key starting point for any teacher considering implementing a peer tutoring reading programme.

Keith Topping

Where to go next?

There is a great deal of evidence from a range of well-designed and rigorously analysed single case studies suggesting that peer tutoring is effective. The cumulative nature of this research lends weight to the findings but, as Bowman-Perrott et al note, “missing from the peer tutoring literature are recent reviews that report effect sizes with confidence intervals” (Bowman-Perrott et al, 2013: 40). Building on the work of Topping et al (2011), the EEF is currently carrying out an RCT that seeks to develop research in this field, involving multiple schools in a range of contexts. Its focus is on a cross-age tutoring programme targeting improvements in Year 7 and Year 9 literacy. This study will publish its results in Spring 2015 and it should prove to be a major contribution to this field. Likewise, the EEF is also funding an RCT exploring the impact of peer tutoring in maths. The Shared Maths project also builds on work led by colleagues of Topping in Scotland, and it will publish results in Autumn 2015.

References

Bowman-Perrott, L., Davis, H., Vannest, K., Williams, L., Greenwood, C., Parker, R., (2013), ‘Academic Benefits of Peer Tutoring: A Meta-Analytic Review of Single-Case Research’, School Psychology Review, 42:1, pp.39-55

Ginsburg-Block, M., Rohrbeck, C., Fantuzzo, J., (2006), ‘A meta-analytic review of social, self-concept and behavioral conduct outcomes of peer assisted learning’, Journal of Educational Psychology, 98, pp.732-749.

Topping, K., (2001), Thinking, Reading, Writing: A Practical Guide to Paired Learning with Peers, Parents and Volunteers, London: Continuum

Topping, K., Miller, D., Thurston, A., McGavock, K., Conlin, N., (2011), ‘Peer tutoring in reading in Scotland: thinking big’, Literacy, 45:1, pp.3-9

Performance Management Panic

It’s performance management time again. All around the country, teachers are sitting down with their line managers to review the year that is drawing to an end and to set targets for 2014-15. With a greater-than-ever focus on the place of research in schools, you can bet that from Newquay to Newcastle, Canterbury to Carlisle, there are teachers who have been set the target of leading research in their schools. For some this will have been expected and requested; a logical focus, based on their skills and interests. For others, however, this new responsibility will be unexpected, casting them beyond their existing experience and skills. Where, then, should such a teacher start?

Firstly, it would be a great idea to become more familiar with the background story of research in schools and to develop a sense of its worth and value. Getting hold of a copy of Tom Bennett’s Teacher Proof (get the school to buy it from the CPD budget!) will open eyes to the unevidenced snake-oil that has been peddled to schools over recent years and should foster a greater sense of scepticism. A contrast to Bennett’s humorous, machine-gun delivery is Building Evidence into Education, the DfE-commissioned report by Ben Goldacre (the Guardian’s ‘Bad Science’ columnist). This outlines how teaching can become an evidence-driven profession in the way that medicine did in the final decades of the twentieth century.

Secondly, reach out for help. Find out what research experience there is already within the school; look for colleagues who are already interested or receptive. You will be more likely to be successful if you have a team of people working with you, sharing ideas and spreading the load. Look to other local schools to see what they are doing about research. If there is a Teaching School nearby then it has a duty to be engaging in and leading research: get in touch to find out what help and support they can offer. Finally, if you’re not already doing it, start to read and follow the range of bloggers, almost all practising teachers, who are writing about their research experience. Alex Quigley’s blog would be a great place to start.

Next, you need to get clarity about the school’s expectations of research. I have written about the different types of research that exist in Degrees of Engagement. By research, does your head mean classroom-based action research, exploring small-scale changes to practice? If so, then studying the example of schools that have embedded an action research culture is a great place to start. Read the blogs written by headguruteacher on the subject or look at an example of a school’s action research journal at www.sandagogy.co.uk.

Alternatively, they may want to see a more rigorous, quantitative piece of research that explores, say, the impact of pupil premium money in the school. As I have previously written, I think that this is where schools should really be focusing their attention and resources. It is only through wider engagement with methodologically sound, quantitative research that we can build a secure evidence base for the profession. If this is the aim (and I hope it is!), then the EEF’s DIY evaluation guide is the place to start. This document is absolutely superb. As an introductory guide to undertaking high-quality research in a school it really cannot be faulted.

Finally, be realistic. Aim for quality, rather than quantity. One well-planned, rigorous and convincing piece of research will have more use and value than a glut of ad hoc, disorganised and anecdotal classroom studies. Try to focus on something that can be repeated next year, building a more convincing and useful data set. Think about targeting an intervention or policy that other schools are also likely to be using (and investigating), to maximise the relevance of the work. Stick at it. Don’t give up. Good luck, and let me know how it goes!

Degrees of engagement, part 2: producers

Engagement with research is about consumption and production.  In my previous post I argued that barriers to being effective consumers of research can, and really should, be overcome.  The effective production of research is a tougher challenge where the lack of ‘research literacy’ within schools is a more significant obstacle.

Educational research can take a range of appearances. One level, not uncommon in schools, is to undertake action research. There are lots of resources available online and in print that help teachers to plan and carry out small-scale projects to investigate the impact of changes to practice. Sometimes teachers conduct these independently, sometimes as part of a school-wide professional learning community. There are plenty of great examples of schools where this practice is an embedded part of teaching and professional development.

Research purists can get sniffy about the value of this. Certainly, most of it is qualitative rather than quantitative, and, by its very nature, action research is limited in terms of wider applicability. All of this is true, but offsetting these concerns is the fact that it represents an effort to think critically about one’s practice, to be open-minded to change and innovation. If the worst-case scenario of small-scale action research is that it’s a waste of time, then the best case is that it can be a direct route into more meaningful and rigorous research.

By this I mean a determined effort to produce a piece of research that is quantitative, methodologically sound and reliable. For schools receiving the pupil premium this is a must. The two questions that Ofsted will be guaranteed to ask about pupil premium are: what are you spending it on, and what impact is it having? Small-scale, anecdotal, qualitative research won’t cut the mustard in response to these questions. But it’s not just applicable to schools with intakes targeted by pupil premium. All schools have a duty to deliver value and to monitor the impact of what they do.

However, designing a piece of research that assesses the impact of an intervention demands a real level of research literacy.  For a school that has no one on its staff with the skills and experience of designing controlled research, this is a problem.  Notwithstanding the fantastic resources produced by the EEF to help schools design investigations into the impact of interventions, there is a real need for research literate teachers to act as guides and mentors to schools beyond their own.  Hopefully this can become a feature of the school-to-school support networks that are now taking off around the country. If you’ve got the skills you’ve got a duty to share them!

Finally, there is participation in the very highest level of research. By this I mean the sort of multi-school investigation that might involve thousands of students. Here, engagement in the production of research is a partnership between research literate teachers in schools and educational researchers with the logistical experience and know-how to conduct mass experiments. Again, the work of the EEF is ground-breaking here. Partnerships such as those recently announced by the EEF are at the heart of answering the question I posed at the end of my blog about consumers of research. If research is going to be used to inform decisions, then consumers in schools have to see its relevance, applicability and value.

So there is a hierarchy of research in schools, in terms of usefulness and value: action research at the bottom; quantitative, rigorous single-school studies in the middle; and multi-school, professionally coordinated studies at the top of the pile.

Making sense of effect sizes

In my previous post I wrote about the importance of teachers becoming effective, informed consumers of research. I mentioned the work of John Hattie in the post, citing his work as an example of the kind of research that schools should, as a bottom line, be engaging with. It goes without saying that Hattie’s meta-analysis of educational research findings in Visible Learning has become one of the most talked-about and influential books in teaching. For some teachers, however, the presentation of the findings in terms of effect size is a real barrier to understanding. What does an effect size of 0.29 mean? What does that ‘look like’? What is an effect size?!

The calculation of effect sizes had its origins in the desire to compare studies that focused on a similar problem but used different measures. For example, imagine that there are two studies both focusing on the impact of reducing class sizes. One reports its findings in terms of the difference in measured reading age of primary school pupils, while the other focuses on the impact of smaller class sizes on American SAT scores. The calculation of an effect size for each study allows their findings to be directly compared. In layman’s terms, an effect size is a quantification of the difference between two groups: the control group and the test group. It is reported as a number, and the higher the number the bigger the visible difference between the control and test groups. But reporting effect size as a number does not necessarily aid easy understanding. Efforts have therefore been made to ‘translate’ the raw number into something more easily visualised.
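
For readers who want to see the mechanics, here is a minimal sketch in Python of the most familiar effect size calculation, Cohen’s d: the difference between the two group means divided by their pooled standard deviation. The scores are invented purely for illustration; real studies would, of course, use far larger samples.

    import statistics

    def cohens_d(control, treatment):
        # Cohen's d: difference in group means divided by the pooled standard deviation
        n1, n2 = len(control), len(treatment)
        s1, s2 = statistics.stdev(control), statistics.stdev(treatment)
        pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
        return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

    # Invented reading-test scores, for illustration only
    control_scores = [21, 24, 25, 26, 28, 30, 31, 33]
    treatment_scores = [24, 26, 28, 29, 31, 33, 35, 36]
    print(round(cohens_d(control_scores, treatment_scores), 2))  # prints 0.73

The point is simply that dividing by the pooled spread of the scores strips out the original units (reading ages, SAT points), which is what makes studies using different measures comparable.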

The EEF Teaching and Learning Toolkit is based on effect size. Here, the effort to ‘translate’ the effect size data is based on the idea of ‘months’ progress’. Effect sizes are mapped to a scale that suggests how many extra months’ worth of student progress an intervention is equivalent to, compared to average student progress. On this scale an effect size of 0.27 to 0.35 is considered to be the equivalent of four months’ additional progress, and an effect size of 0.70 to 0.78 the equivalent of eight months’ additional progress. This is useful, and certainly makes things easier to understand, but when I first came across the Toolkit I was still left looking for another way of visualising the concept.
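
As a toy illustration of how such a translation works, here is a sketch using only the two bands quoted above (the function name and structure are mine; the EEF’s published scale covers the full range of effect sizes and I haven’t reproduced it here):

    def eef_months(effect_size):
        # Maps an effect size to months' additional progress, using only
        # the two bands quoted in this post; the real EEF scale is fuller
        if 0.27 <= effect_size <= 0.35:
            return 4
        if 0.70 <= effect_size <= 0.78:
            return 8
        return None  # outside the bands quoted above

    print(eef_months(0.73))  # the invented example above: 8 months' progress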

What makes most sense to me now when thinking about what an effect size ‘looks like’ is the rule-of-thumb comparison first suggested by Jacob Cohen in 1988. It is, unapologetically, a simplification but, like many simplifications, it breaks down a barrier and facilitates a deeper understanding.

Cohen, a psychologist, pointed out that an effect size of 0.2 is as visible as the height difference between 15- and 16-year-old girls: something that could not be detected by the naked eye. Asked to decide whether a Year 11 girl was 15 or 16 based simply on how tall she was, you’d scarcely have a better than 50-50 chance of being right. I’d be pretty sceptical about introducing something if the difference it made would only be that obvious! Moving up the scale, an effect size of 0.5 (into the >0.4 zone that Hattie calls the ‘zone of desired effects’) is more visible. Cohen’s rule-of-thumb comparison points to this equating to the height difference between 14- and 18-year-old girls. Spotting whether girls were in Year 13 or Year 10, say, based on their height would strike me as more obviously achievable: indeed, you’d have a better than 60% chance of being right if you always said that the taller girl was in Year 13. Introducing something with this order of effect size (developing more effective teacher questioning skills, say) should lead to a pretty visible difference. An effect size of 0.8, then, is more obvious still: the difference between 13- and 18-year-old girls. A shift from 14 years to 13 years doesn’t sound like it should lead to much of a difference, but this is about when the pubescent growth spurt really kicks in. As a rule of thumb, 80% of Year 8 girls will be shorter than the average Year 13 girl. An intervention that is going to lead to a difference that’s as obvious as the difference in heights between Year 8 and Year 13 girls has to be worth trying, right?
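
If you want to see where percentages like these come from, here is a small sketch, assuming (as Cohen’s rule of thumb does) normally distributed groups with equal spread. Cohen’s U3 is the proportion of the lower group falling below the average member of the higher group; the ‘probability of superiority’ is the chance that a randomly chosen member of the higher group beats a randomly chosen member of the lower group.

    from statistics import NormalDist

    z = NormalDist()  # standard normal distribution

    def u3(d):
        # Proportion of the lower group scoring below the average member of the higher group
        return z.cdf(d)

    def prob_superiority(d):
        # Chance a random member of the higher group beats a random member of the lower group
        return z.cdf(d / 2 ** 0.5)

    for d in (0.2, 0.5, 0.8):
        print(d, round(u3(d), 2), round(prob_superiority(d), 2))

For d = 0.8 this gives U3 ≈ 0.79, the source of the ‘80% of Year 8 girls’ figure above; for d = 0.5 the probability of superiority is about 0.64, hence the better than 60% chance of picking out the Year 13 girl.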

Degrees of engagement, part 1: consumers

There is an undeniable grass-roots interest in educational research building today. However, many schools, including those who are keen and interested in this movement, cite the lack of ‘research literacy’ amongst staff as a barrier to real engagement. How can this be tackled? I’m making no original statement when I say that engagement with research can be divided into issues to do with consuming research and producing research. This post will focus on the first of these aspects.

Firstly, from a personal perspective, having completed an MA in Education, I would strongly advocate that school leadership teams encourage staff to undertake further academic research. The process of re-engaging (critically!) with research, studying how to design effective research, and evaluating findings was invaluable. In my opinion, heads should make this a priority: if you’ve no one on the SMT with a further degree in education or school leadership then you should look at getting someone, or yourself, signed up! But this is not a ‘quick fix’ and it’s not an option that is possible, either financially or logistically, for some people. How, then, to develop research literacy in a more direct manner?

As a basic standard, senior leadership teams should be active consumers of research, referring to it in their decision-making process. For schools in receipt of the pupil premium, accountable for how they decide to spend their allocation, this is a vital necessity. Here, the hurdle to initial understanding is not a significant one. It is hard to imagine that there can be many leadership teams that have not heard of the work of people such as John Hattie or seen the resources produced by organisations such as the EEF. The first step towards becoming research literate is to become more sceptical. Less inclined to make a ‘gut decision’. Less credulous in the face of simple solutions to complex problems. In short, I think that at a strategic level there is no significant obstacle to engaging with research as a consumer.

Solve all your problems instantly!

For an individual classroom teacher I would argue that there is perhaps a greater challenge. Much of the evidence readily available to teachers focuses on the relative impact of whole-school interventions rather than changes to an individual’s practice. However, there are clear research findings on specific teaching approaches that would allow a research-minded classroom teacher to evaluate their practice and make decisions about self-improvement.

In many respects the most significant hurdle to embedding the effective use of research within teaching over the coming years could be the paucity of relevant, coherent research!  A future blog post will consider the production side of the issue.

Haven’t we been here before?

Teaching is not at present a research-based profession. I have no doubt that if it were teaching would be more effective and more satisfying.

Despite its near-perfect 140-character format, this is not a recent tweet from Tom Bennett or Ben Goldacre, but in fact the opening line of a speech given by Professor David Hargreaves to the (now defunct) Teacher Training Agency in 1996. His speech was the catalyst for the last significant debate about the role of research in education. The fact that today, nearly twenty years later, this is still a valid statement, and that the relationship between educational research and teaching is again a hot topic, should prompt us to consider what lessons can be learnt from the past.

To read Hargreaves’ speech today is to visit familiar territory. At the heart of his argument was an appeal to the example of evidence-based medicine that has been reworked for today’s audience by Ben Goldacre. His criticisms – that educational research was methodologically weak, small-scale, inaccessible and irrelevant – were confirmed by the government-commissioned Hillage Report of 1998 and are still applicable today. Hargreaves’ prescriptions for improvement back in 1996 chime with those expressed at researchED and the like today: more teacher involvement in the commissioning and undertaking of educational research; a greater focus on assessing the impact of varying classroom strategies; a drive to improve the quality of research design and analysis.

Hargreaves: glum

So what has changed? In one respect, at least, the terms of the debate seem to have shifted. Hargreaves glumly predicted that, “Educational researchers may not be enthusiastic about these suggestions, which are perhaps too radical for them.” He was right. Over the next couple of years Hargreaves was subject to serious attack from the educational research establishment. Most strikingly, Martyn Hammersley of the Open University went so far as to suggest that educational research should have no link whatsoever to the classroom and that “evidence-based medicine threatens to assist attacks on the professionalism of doctors… [evidence-based teaching] does not seem likely to enhance the professionalism of teachers, quite the reverse. It seems more likely further to demoralise and undermine the professional judgement of practitioners.” However, the strong sense amongst teachers that educational research was irrelevant and solipsistic meant that this fratricide went largely unnoticed: teachers remained uninterested and disengaged.

It is the shift in the location of the debate that gives us the best chance of avoiding past failures and driving a real change in the role of educational research. Looking back to the furore around the Hargreaves speech it is now clear that at the heart of the failure to renew the role of educational research was a failure to reach out to and connect with the consumers of research. In the absence of demand-side pressure from teachers, educational researchers continued as they always had.

By contrast, today’s debate about the role of research in education is grass-roots and demand-led. Rather than an appeal to reform from within the research community, it is teachers, the consumers of educational research, who are becoming more discerning and demanding. Better connected than ever before by social media and technology (witness the rapid trajectory of researchED from Tweet to established conference-holding movement!), teachers are already sharing best practice and research findings online. Since 2010, greater independence and the move away from LEA control has also driven schools to be more proactive and innovative in the way in which they seek improvements: Learning Alliances and Teaching Schools have started to take a real lead in this area. It’s also really encouraging to see that there is a commitment from the DfE to push the development of research in schools. The provision of research modules on National College qualifications and the establishment of the Education Endowment Foundation, with its superb Toolkit, can only be a good thing. Finally, I would suggest that at a wider cultural level there is an increased sense that effective research and data analysis can inform practice and drive improvement: witness the success of Nate Silver’s ‘The Signal and the Noise’, for example.

researchED: grassroots

There are challenges ahead. The potential for resistance to evidence-driven change from the ‘this is how it has always been done here’ brigade within the teaching profession itself should not be underestimated. However, I am positive that the key obstacles to making teaching a research-based profession can be overcome and that we can claim the “prize” that Ben Goldacre wrote about in March 2013.