So, based on my last post about Canada’s math wars, a number of people stopped by to comment about direct instruction in general, including Robert Craigen, who kindly linked to the PCAP (Pan-Canadian Assessment Program).
(Note: I am not a Canadian. I have tried my best based on the public data, but I may be missing things. Corrections are appreciated.)
For PCAP 2010, close to 32,000 Grade 8 students from 1,600 schools across the country were tested. Math was the major focus of the assessment. Math performance levels were developed in consultation with independent experts in education and assessment, and align broadly with internationally accepted practice. Science and reading were also assessed.
The PCAP assessment is not tied to the curriculum of a particular province or territory but is instead a fair measurement of students’ abilities to use their learning skills to solve real-life situations. It measures learning outcomes; it does not attempt to assess approaches to learning.
Despite the stated purpose of “solv[ing] real-life situations,” the samples read to me more like a calculation-based test (like the TIMSS) than a problem-solving test (like the PISA), although it is arguably somewhere in the middle. (More about this difference in one of my previous posts.)
Despite the quote that “it does not attempt to assess approaches to learning”, the data analysis includes this graph:
Classrooms that used direct instruction achieved higher scores than those that did not.
One catch of note (although this is more of a general rule of thumb than an absolute):
Teachers at public schools with less poverty are more likely to use a direct instruction curriculum than those who teach at high-poverty schools, even if they are given some kind of mandate.
This happened in my own district, where we recently adopted a discovery-based textbook. There was major backlash at the school with the least poverty. Based on my conversations with the department head there, this seems to be because the parents are very involved and conservative about instruction, and there’s understandably less desire amongst the teachers to mess with something that appears to work just fine. At higher-poverty schools, by contrast, teachers who crave improvement are more willing to experiment.
While the PCAP data does not itemize results by individual school, there are two proxies that can be used to assess level of poverty:
A large number of books in the home is positively correlated with high achievement on the PCAP (in fact, it is the largest positive demographic factor) but also positively correlated with the use of direct instruction.
Being a language learner is negatively correlated with achievement on the PCAP (more so than any other factor in the entire study) but also strongly negatively correlated with the use of direct instruction.
It thus looks like a “more poverty means less achievement” gap is at least partly responsible for the positive correlation between direct instruction and achievement.
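To see how a confound like poverty can manufacture this kind of correlation on its own, here’s a minimal simulation with entirely made-up numbers (none of this is PCAP data): direct instruction is given zero true effect, poverty drags scores down, and affluent schools are more likely to use direct instruction, yet the direct-instruction group still comes out ahead.

```python
import random

random.seed(0)

# Hypothetical simulation (invented numbers, not PCAP data): direct
# instruction has ZERO true effect here, but poverty both lowers
# achievement and lowers the odds of a direct-instruction curriculum.
schools = []
for _ in range(10000):
    poverty = random.random()                        # 0 = affluent, 1 = high poverty
    direct = random.random() > poverty               # affluent schools more likely to use DI
    score = 500 - 100 * poverty + random.gauss(0, 30)  # poverty drags scores down
    schools.append((direct, score))

di_scores = [s for d, s in schools if d]
other_scores = [s for d, s in schools if not d]

mean_di = sum(di_scores) / len(di_scores)
mean_other = sum(other_scores) / len(other_scores)
print(round(mean_di - mean_other, 1))  # DI group scores higher despite zero true effect
```

The gap that prints is pure confounding: filter the comparison to schools at a single poverty level and it disappears.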
Now, the report still claims the instruction type is independently correlated with gains or losses (so that while the data above is a real effect, it doesn’t account for everything). However, there’s one other highly fishy thing about the chart above that makes me wonder if the data was accurately gathered at all: the first line.
It’s cryptic, but essentially: males were given direct instruction to a much higher degree than females.
Unless there’s a lot more gender segregation in Canada than I suspected, this is deeply weird data. I originally thought the use of direct instruction must have been assessed via the teacher survey:
But it appears the data instead used (or at least included) how much direct instruction the students self-reported:
That correlation of 10.67 really ought to be close to 0; it points to a serious error in data gathering. Hence, I’m wary of drawing any conclusions at all about the relative strength of different teaching styles on the basis of this report.
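To illustrate why that figure is suspicious: boys and girls sit in the same classrooms, so a classroom-level property like “uses direct instruction” is identical for both sexes, and any aggregate male-female gap has to come from the reporting itself. A toy simulation (invented rates, not PCAP data) shows how gender-skewed self-reporting manufactures a gap out of nothing:

```python
import random

random.seed(1)

# Hypothetical sketch: every classroom either uses direct instruction or
# doesn't, and boys and girls in it share that same truth. Suppose boys
# over-report DI 15% of the time and girls under-report it 15% of the
# time (pure illustration of gender-skewed self-reporting).
boy_reports, girl_reports = [], []
for _ in range(2000):
    uses_di = random.random() < 0.5              # true classroom property, shared by all
    boy = uses_di or (random.random() < 0.15)    # boy in this classroom over-reports
    girl = uses_di and (random.random() >= 0.15) # girl in the same classroom under-reports
    boy_reports.append(boy)
    girl_reports.append(girl)

gap = sum(boy_reports) / len(boy_reports) - sum(girl_reports) / len(girl_reports)
print(round(100 * gap, 1))  # a sizeable gap despite identical classrooms
```

With honest reporting the gap is exactly zero, so a nonzero gap in the real data is a red flag about the self-report instrument rather than about who actually receives direct instruction.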
Robert also mentioned Project Follow Through, which is a much larger study and is going to take me a while to get through; if anyone has studies (pro or con) they’d like to link to in the comments, it’d be appreciated. I honestly have no disposition for the data to go one way or the other; I do believe it quite possible that a rigid “teaching to the test” direct instruction assault (which is what two of the groups in the study seemed to go for) will always beat another approach with a less monolithic focus.