Saturday 6th September 2014 saw the second national researchED conference take place; an event for those interested in teaching and research and the complicated relationship between the two. Tom Bennett (@tombennett71) and Helene O’Shea (@hgaldinoshea) captained the ship, welcoming on board an array of speakers, all of whom possess a wealth of expertise in their own specialist area.
While I’ve been to a few education-based events before (both larger national gatherings and more local teachmeets etc.), this was my first research conference, so I didn’t arrive with any predefined expectations of the day. On arrival, I was greeted with a customary lanyard and a less customary branded wicker bag, complete with free branded pen and educational paper. Impressive. After a warm welcome, delegates were invited to attend up to seven different sessions of their choice, all lasting roughly one hour. Session leaders had knowledge in their various curriculum/research/government fields and the workshops reflected this.
Though I’m sure you’re fascinated to know which public transport route I took and what I had for my lunch, I’ll spare you the details and simply note the ‘takeaways’ I left with from the day. Ideally, this penultimate sentence of my intro would see me writing about how much better equipped I now feel to a) source accurate research around my own subject and pedagogy of Literacy and SEN, and b) know how to carry out my own effective research studies into the best methods of teaching and learning. However, I left the conference feeling a little more perplexed by educational research and yet, at the same time, very much refreshed.
Nick Rose (@turnfordblog), a teacher/researcher/psychologist
presented his audience with a healthy challenge to approach pedagogical theories and highly regarded, well-known teaching programmes with caution. He was unapologetic in his quest to inform those who were present of the lack of authentic evidence behind some of the most widely used teaching methods in the world of education. His ‘hit list’ included the likes of preferred learning styles (including VAK), right/left-brain theory, NLP and even… wait for it… Brain Gym.
Rose proposed that educationalists need to develop a real ‘professional scepticism’ around research in education. He commented that schools tend to have a very weak immune system, allowing a whole variety of costly approaches and strategies to pass through the door without their validity being thoroughly vetted first. This is an interesting concept and one we need to be aware of. Nick’s own personal account of the conference can be found here.
David Didau (@learningspy), a teacher/consultant/author
offered a number of interesting points to consider when looking into edu-research. He quoted Henri Bergson who famously said:
This supported his claim that brains are not rational but rather illogical and that, as humans, we therefore fall into some of the well-known traps below:
Anchoring Effect: a tendency to use anchors or reference points to make decisions and evaluations, sometimes leading us astray.
Sunk Cost Fallacy: following through with a project because of our investment (time/money/effort), irrespective of whether evidence would suggest that is the best thing to do.
He outlined that progress is not a linear journey but a complex messy one. Didau posed questions such as “How should we measure true progress?” and “With what educational ‘unit’ of measurement should we assess?”
He stated that evidence is not the same as proof, offering a comment on those strategies that ARE well researched and understood to be effective within the classroom environment. These include the ‘Spacing effect’ and the ‘Testing effect’ both of which are explained in his full presentation, available here.
David left the audience with a quote from Carl Sagan:
John Tomsett (@johntomsett), a Headteacher from Huntington School in York and
driver of a new research project along with his Lead Researcher, Alex Quigley
outlined a number of essentials to consider around educational research. He quoted Tom Bentley, who said:
What is the point of research if it doesn’t alter the way you work/plan/teach?
While the depth of teachers’ subject knowledge and their choice of pedagogical approach are undeniably critical to the development of strong teaching and learning, if neither is realigned to best meet the needs of students as a result of research findings, there’s very little point in engaging with research at all. If the research suggests what you are currently doing is right, great! That’s welcome affirmation to keep on doing what you’re doing.
Interestingly, Tomsett commented on his blog this week,
“Perpetual self-doubt is a relatively healthy condition in which to exist. At an event like yesterday’s [researchED] I look to take away some learning and what I took away yesterday made me doubt myself and our developmental priorities just a little bit.”
Research is a grey area and one that so many professions have wrestled with, both in the past and still today. But if we fail to recognise its obvious benefits, we are doing a disservice to our students.
Dylan Wiliam (@dylanwiliam), a teacher ‘guru’, researcher, writer,
Emeritus Professor of Educational Assessment at the IoE
gave a great talk entitled, “Why teaching will never be a research-based profession (and why that’s a GOOD thing)”.
You can find a link to his full presentation here.
One of the key points Wiliam made saw him challenge the audience to consider what part ethics plays in educational enquiry. He claimed that researchers have a moral obligation to pursue fair studies that are valuable to school teachers and students, rather than ones carried out simply in an effort to validate one’s own already-held opinion. He raised the interesting point that many published research studies already available in the public arena are selective in the results they share, omitting details of findings that do not support the cause behind their study.
While Wiliam sees a lot of value in Randomised Controlled Trials (RCTs) as a method of research, he recognises four main drawbacks.
- Clustering: when comparing two students within the same school, despite their potentially being in different groups (i.e. one in an active group and the other in a control group), there will inevitably be some similarities through their shared experiences in school.
- Power: the various teachers/leaders/students involved in an RCT may not follow direct instructions, thus reducing the fairness of the test.
- Implementation: there are nearly always logistical barriers to carrying out RCTs, which include aspects such as timetabling, time allowed for interventions outside of curriculum subjects, relevant staff to support etc.
- Context: Perhaps the best way to sum up this point is to quote a blog I came across recently. Dave Algoso, Director of Programmes at Reboot (a social impact firm dedicated to inclusive development and accountable governance) states,
“I think the danger here comes from a false level of precision. We talk about RCTs as having a scientific rigor that distinguishes them from pseudo-experimental approaches. There is some truth to this. However, if the calculated average effect of a program is stripped of all the caveats and nuance about the things we were unable to measure and calculate, then we risk being overconfident in our knowledge. Science brings a potentially inflated sense of our own expertise. RCTs, and the development industry as a whole, would benefit from less certainty and greater humility.”
Food for thought.
Wiliam also made reference to the Education Endowment Foundation (EEF) toolkit, highlighting how research studies have led educationalists to rate various aspects of teaching and learning based on their supposed level of positive impact on students. According to this list, interventions such as peer tutoring and phonics score very highly (with which I am in full agreement), in comparison to others such as teaching assistants and ability grouping, the latter actually being the only one listed that shows a negative impact score. While there may be some validity in some of these results, Wiliam leads us to question the authenticity behind the scores.
For example, when considering ability grouping, Wiliam makes the point that in the large majority of cases the best and most experienced teachers are usually assigned to the top sets in any given cohort where groups are set by ability. Similarly, lower sets often do not get access to the differentiated teaching they require to make solid progress. He argued that the gap widens in these cases, often as a result of the top set moving so fast that no students within the middle range of ability can progress to join those at the ‘top’ and, in contrast, the lower sets moving far too slowly, thus preventing weaker students from making sufficient progress. This is a great challenge to schools and school leaders and one that, in my opinion, must be addressed in order for all students to make the most positive progress possible. Dylan Wiliam advised that those within the teaching profession should continue to improve their practice through the process of disciplined enquiry.
As a final reflection on researchED 2014, while my impression of educational research is perhaps a little more hazy than it was prior to the event, I’ve returned confident that authentic enquiry into “what works” in this profession is crucial. I’m quite sure that it is a responsibility of ours as educators to ensure we are providing students with the best foundation possible for their future. This includes a willingness to invest time and energy into exploring what “best practice” really is within education.
Videos from the event can be found here.
You can also follow researchED on Twitter @researchED1.