Canada’s math wars and bad use of the PISA

Canada went through a bit of a panic recently when the PISA 2012 scores came out.

[Image: chart of Canada's declining PISA math scores]

[Source.]

Oh no! Scores are dropping! Something must have gone wrong, so it's time to change policy:

“If you look at what’s been happening, predominantly over the last decade, there’s been an unprecedented emphasis on discovery learning,” said Donna Kotsopoulos, an associate professor in Wilfrid Laurier University’s education faculty and former teacher.

Robert Craigen, a University of Manitoba mathematics professor who advocates basic math skills and algorithms, said Canada’s downward progression in the international rankings – slipping from sixth to 13th among participating countries since 2000 – coincides with the adoption of discovery learning.

[Source.]

As I pointed out in a recent post, PISA essentially measures problem solving, and it seems strange to beef up calculation in an attempt to improve problem solving, especially considering Canada's performance on the TIMSS, which does tend to measure calculation. While Canada as a whole hadn't participated in the TIMSS since 1999 (it did in 2015, although the report isn't out yet), some provinces did:

Ontario 8th grade: 2003 (521), 2007 (517), 2011 (512)
Ontario 4th grade: 2003 (511), 2007 (512), 2011 (518)
Quebec 8th grade: 2003 (543), 2007 (528), 2011 (532)
Quebec 4th grade: 2003 (506), 2007 (519), 2011 (533)

[Image: statistical chart of the Ontario and Quebec TIMSS scores]

So: Ontario had a minor dip in 8th grade and a rise in 4th grade, both nearly within the margin of statistical error, while Quebec fluctuated down and then up in 8th grade and rose overall in 4th grade.

This does not sound like the sort of data to cause a major shift in education policy. If anything, the rising numbers in 4th grade (where the lack of drill gets decried the most) indicate that the discovery curriculum has helped rather than hurt calculation skills. (Ontario, for instance, while requiring 4th graders to be able to multiply up to 9, does not require memorizing multiplication tables.)

Let’s also lay on the table these quotes on the troubled nature of PISA scores themselves:

What if you learned that Pisa’s comparisons are not based on a common test, but on different students answering different questions? And what if switching these questions around leads to huge variations in the all-important Pisa rankings, with the UK finishing anywhere between 14th and 30th and Denmark between fifth and 37th?

… in Pisa 2006, about half the participating students were not asked any questions on reading and half were not tested at all on maths, although full rankings were produced for both subjects.

While I wouldn’t say the scores are valueless, I think using them as the sole basis of an educational policy shift is troubling. Even if we take PISA scores at face value, the wide-open nature of the actual questions, which mimic a discovery curriculum, indicates you’d want more discovery curriculum, not less.

Unlearning mathematics

I was reading the comment thread in an old post of mine when I hit this gem by Bert Speelpenning:

Here is a short list of things that kids in math class routinely unlearn in their journey from K through 12:
* when you add something, it gets bigger
* when you see the symbol “+” you are supposed to add the numbers and come up with the answer
* the answer is the number written right after the “=” symbol
* you subtract from the bigger number
* a fraction is when you don’t have enough to make a whole
* a percentage can only go up to 100
* the axes on a graph look like an L
* straight lines fit the equation y=mx+b
* the values (labels) on the axes must be evenly spaced
* putting a “-” in front of something makes it negative
* a reciprocal is a fraction that has 1 on top.

What are some other things our students unlearn?

Which things are acceptable to teach initially in a way that will later be changed? When is unlearning problematic?

Which things are impossible to avoid having the unlearning effect? (For instance, even if the teacher avoids saying it explicitly, it’s hard for students to avoid assuming “when you add something, it gets bigger” before negative numbers get introduced.)

TIMSS, PISA, and the goals of mathematics education

It is tempting, when hearing about student performance on an international or national test, to assume it measures some monolithic mathematical ability: when a country is doing well on a test, mathematics teaching is doing fine, and when a country is doing worse, math teaching needs to be looked at and changed.

Additionally, it is contended that countries doing well should have their strategies mimicked and countries doing badly should have their strategies avoided.

One issue with these thoughts is that the two major international tests — the TIMSS and PISA — measure rather different things. Whether a country is doing well or not may depend on what you think the goals of mathematics education are.

Here are some samples from PISA:

PISA Sample #1

[Image: PISA sample question #1]

PISA Sample #2

You are asked to design a new set of coins. All coins will be circular and coloured silver, but of different diameters.

Researchers have found out that an ideal coin system meets the following requirements:

· diameters of coins should not be smaller than 15 mm and not be larger than 45 mm.

· given a coin, the diameter of the next coin must be at least 30% larger.

· the minting machinery can only produce coins with diameters of a whole number of millimetres (e.g. 17 mm is allowed, 17.3 mm is not).

Design a set of coins that satisfy the above requirements. You should start with a 15 mm coin and your set should contain as many coins as possible.
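(As an aside, the coin problem is small enough to check by machine. Here's a minimal Python sketch — my own, obviously not part of the test — that builds the set greedily; starting from the smallest allowed coin and always taking the smallest legal next diameter is what maximizes the count.)

```python
def coin_set(min_d=15, max_d=45):
    """Build the largest legal coin set: start at min_d, then repeatedly
    take the smallest whole-millimetre diameter at least 30% larger
    than the previous coin, stopping once max_d would be exceeded."""
    coins = [min_d]
    while True:
        next_d = (13 * coins[-1] + 9) // 10  # exact ceil(1.3 * d), avoids float rounding
        if next_d > max_d:
            break
        coins.append(next_d)
    return coins

print(coin_set())  # [15, 20, 26, 34, 45]
```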

PISA Sample #3

A seal has to breathe even if it is asleep in the water. Martin observed a seal for one hour. At the start of his observation, the seal was at the surface and took a breath. It then dove to the bottom of the sea and started to sleep. From the bottom it slowly floated to the surface in 8 minutes and took a breath again. In three minutes it was back at the bottom of the sea again. Martin noticed that this whole process was a very regular one.

After one hour the seal was
a. At the Bottom
b. On its way up
c. Breathing
d. On its way down
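(The accepted answer is b. The arithmetic, assuming the initial dive also takes the stated 3 minutes: one full cycle is 3 minutes down plus 8 minutes up, or 11 minutes, and 60 mod 11 = 5, which puts the seal 2 minutes into its ascent. A sketch of that reasoning in Python:)

```python
# Assumes the initial dive takes the same 3 minutes as the stated return trip.
CYCLE = 3 + 8                # 3 minutes down + 8 minutes floating up
minute = 60 % CYCLE          # position within the cycle after one hour -> 5
if minute == 0:
    state = "breathing at the surface"
elif minute <= 3:
    state = "on its way down"
else:
    state = "on its way up"  # minutes 4 through 10 of the cycle
print(minute, state)         # 5 on its way up
```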

Here are samples of TIMSS questions:

TIMSS Sample #1

Brad wanted to find three consecutive whole numbers that add up to 81. He wrote the equation

(n – 1) + n + (n + 1) = 81

What does the n stand for?

A) The least of the three whole numbers.
B) The middle whole number.
C) The greatest of the three whole numbers.
D) The difference between the least and greatest of the three whole numbers.

TIMSS Sample #2

Which of these is equal to y^3?

A) y + y + y
B) y × y × y
C) 3y
D) y^2 + y

TIMSS Sample #3

To mix a certain color of paint, Alana combines 5 liters of red paint, 2 liters of blue paint, and 2 liters of yellow paint. What is the ratio of red paint to the total amount of paint?
A) 5:2
B) 9:4
C) 5:4
D) 5:9
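(For the record, the answers are B, B, and D — each a single routine step; the last, for instance, is just red : total = 5 : (5 + 2 + 2) = 5:9 — in contrast to the open-ended modeling the PISA items ask for.)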

The PISA tries to measure problem-solving, while the TIMSS focuses on computational skills.

This would all be a moot point if countries that did well on one test also did well on the other, but this is not always the case.

Possibly the most startling example is the United States, which scored below average in the 2012 PISA

[Image: PISA 2012 country score chart]

but above average in the 2011 8th grade TIMSS, right next to Finland

[Image: TIMSS 2011 8th grade country score chart]

This is partly explained by the US having more students than anywhere else in the world “who thought of math as a set of methods to remember and who approached math by trying to memorize steps.”

The link above chastises the US for doing badly on the PISA without mentioning the TIMSS. It’s possible to find articles with reversed priorities. Consider this letter from some Finnish educators:

The mathematics skills of new engineering students have been systematically tested during years 1999-2004 at Turku polytechnic using 20 mathematical problems. One example of poor knowledge of mathematics is the fact that only 35 percent of the 2400 tested students have been able to do an elementary problem where a fraction is subtracted from another fraction and the difference is divided by an integer.

If one does not know how to handle fractions, one is not able to know algebra, which uses the same mathematical rules. Algebra is a very important field of mathematics in engineering studies. It was not properly tested in the PISA study. Finnish basic school pupils have not done well in many comparative tests in algebra (IEA 1981, Kassel 1994-96, TIMSS 1999).

That is, despite the apparent objectivity of picking one test or another as a comparison, doing so raises the question: what is our goal in mathematics education?

Abstract and concrete simultaneously

Most education literature I have seen portrays going from concrete to abstract concepts as a ladder.

[Image: ladder ascending from the concrete to the abstract]

The metaphor has always started with concrete objects before “ascending the ladder” to abstract ideas.

I was researching something for a different (as yet unrevealed) project when I came across the following.

[Image: ladder ascending from the abstract to the concrete]

[Source.]

That is, someone used the same metaphor but reversed the ladder.

This is from a paper on the Davydov curriculum, used in parts of Russia for the first three grades of school. It takes the exotic position of teaching measuring before counting. Students compare objects by quantity but not number — strips of paper, balloons, weights:

Children then learn to use an uppercase letter to represent a quantitative property of an object, and to represent equality and inequality relationships with the signs =, ≠, >, and <, writing statements such as A = B, A > B, or A < B. There is no reference to numbers during this work: “A” represents the unmeasured length of the board.

[Source.]

A later exercise takes a board A and a board B which combine in length to make a board C, then has the students make statements like “A + B = C” and “C – B = A”.

[Image: boards A and B combining to form board C]

Number is eventually developed as a matter of comparing quantities. A small strip D might need to be used six times to make the length of a large strip E, giving the equation 6D = E and the idea that number results from repetition of a unit. This later presents a natural segue into the number line and fractional quantities.

The entire source is worth a read, because by the end of the third year students are doing complicated algebra problems. (The results are startling enough that the curriculum has been called a “scam”.)

I found curious the assertion that students were somehow starting with abstract objects and working their way to concrete ones. (The typical ladder metaphor is so ingrained in my head I originally typed “building their way down” in the previous sentence.) The students are, after all, handling boards; they may simply be comparing them without attaching numbers. They give the boards letters like A and B, sure, but in a way that’s no less abstract than naming other things in the world.

After enough study I realized the curriculum was doing something clever without the creators being aware of it: they were presenting situations that (for the mind) were concrete and abstract at the same time.

From a mathematician’s perspective this is impossible, but the world of mental models works differently. By handling a multitude of boards without numbers and sorting them as larger and smaller, an exact parallel is set up with the comparison of variables that are unknown numbers. Indeterminate lengths work functionally identically to indeterminate numbers.
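To make the parallel concrete (this is my own illustration, not anything from the Davydov materials), here is the board exercise transcribed into sympy, where symbols play the role of unmeasured lengths:

```python
from sympy import Eq, solve, symbols

# A, B, C stand for unmeasured board lengths -- quantities, not numbers.
A, B, C = symbols("A B C", positive=True)

combined = Eq(A + B, C)       # boards A and B together make board C
print(solve(combined, A)[0])  # C - B, i.e. "C - B = A", with no numbers anywhere
```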

This sort of thing doesn’t seem universally possible; it’s only in this unique instance that the abstract piggybacks on the concrete so nicely. Still, it may be possible to hack it in: for my Q*Bert Teaches the Binomial Theorem video I used a split-screen trick of presenting concrete and abstract simultaneously.

[Image: split screen from the Q*Bert video, concrete example beside abstract notation]

Although the sequence in the video gave the concrete example first, one could easily imagine the concrete being conjoined with an abstract example cold, without prior notice.

(For a more thorough treatment of the Davydov curriculum itself, try this article by Keith Devlin.)

My basic issue with cognitive load theory

The idea of “working memory” — well established since the 1950s — is that the maximum number of objects someone can hold in working memory is 7 plus or minus 2. There have been some revisions to the idea since (mainly that the size of the chunks matters; for instance, learners in languages that use fewer syllables for their numbers have an easier time memorizing number sequences).

This was extrapolated to educational theory in the 1980s via “cognitive load theory”, which states that the learner’s working memory capacity should not be exceeded; this tends to be used to justify “direct instruction”, where the teacher lays out some example problems and the students repeat problems matching the examples. The theory here is that by matching examples, students suffer as little cognitive load as possible.

Cognitive load theory has some well-remarked problems with a lack of falsification and a lack of connection with modern brain science. These issues likely deserve their own posts.

My issue with cognitive load theory as applied to education is more basic: the contention that direct instruction requires less working memory than any discovery-based alternative. It certainly is asserted often

All problem-based searching makes heavy demands on working memory. Furthermore, that working memory load does not contribute to the accumulation of knowledge in long-term memory because while working memory is being used to search for problem solutions, it is not available and cannot be used to learn.

but the assertion does not match what I see in reality.

To illustrate, here’s a straightforward example — defining convex and concave polygons — done with three discovery-type lessons and direct instruction.

Discovery #1

Click on the image below to use an interactive application. Use what you learn to write a working definition of “convex” and “concave”.

[Image: link to an interactive convex/concave polygon application]

Then draw one example each of a convex polygon and a concave polygon. Justify why your pictures are correct.

Discovery #2

The polygons on the left are convex; the polygons on the right are concave. Give a working definition for “convex” and “concave”.

[Image: convex polygons on the left, concave polygons on the right]

Then draw one example each of a convex polygon and a concave polygon (not copying any of the figures above). Justify why your pictures are correct.

Discovery #3

[Image: convex polygons on the left, concave polygons on the right]

The polygons on the left are convex; the polygons on the right are concave. Try to decide, by looking at the pictures, what the difference is between the two.

…after discussion…

A convex polygon is a polygon with all interior angles less than 180º.
A concave polygon is a polygon with at least one interior angle greater than 180º. The polygons on the left are convex; the polygons on the right are concave.

Draw one example each of a convex polygon and a concave polygon (not copying any of the figures above). Justify why your pictures are correct.

Direct Instruction

A convex polygon is a polygon with all interior angles less than 180º.
A concave polygon is a polygon with at least one interior angle greater than 180º. The polygons on the left are convex; the polygons on the right are concave.

[Image: convex polygons on the left, concave polygons on the right]

Draw one example each of a convex polygon and a concave polygon (not copying any of the figures above). Justify why your pictures are correct.

Analysis

Parsing and understanding technical words creates a demand on memory. The hardcore cognitive load theorist would claim such a demand is less than that of having the student create their own definition, but is that really the case? The student using their own words can rely on more comfortable and less technical vocabulary than one reading the technical definition. The technical definition is easy to misunderstand, and the intuitive visualization is only clear to a student who has the subsequent examples.

Discovery #1 does not appear to have heavy cognitive load. On the contrary, being able to immediately switch between “convex” and “concave” upon passing the 180º mark is much more tactile and intuitive than either of the other lessons. Parsing technical language creates more mental demands than simply moving a visual shape.

There might be a problem of a student in Discovery #1 or Discovery #2 coming up with an incorrect definition, but that’s why discovery is hard without a teacher present.

Discovery #3 is identical to the direct lesson except that the definition and examples have swapped places. Having a non-technical intuition built up before trying to parse the technical definition makes the definition easier to read; again, it appears to have less cognitive demand.
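(As an aside, the definition being taught here is mechanical enough to express in code. The sketch below — mine, not part of any of the lessons — classifies a polygon by checking whether the boundary ever turns in two different directions, which is equivalent to some interior angle exceeding 180º:)

```python
def is_convex(vertices):
    """Classify a simple polygon given as a list of (x, y) vertices in order.
    Convex means every interior angle is under 180 degrees, which is the
    same as the boundary always turning the same way; we test the turning
    direction at each vertex with the sign of a cross product."""
    n = len(vertices)
    signs = set()
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        x2, y2 = vertices[(i + 2) % n]
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        if cross != 0:
            signs.add(cross > 0)
    return len(signs) == 1  # one turning direction -> convex

print(is_convex([(0, 0), (4, 0), (4, 4), (0, 4)]))  # True: a square
print(is_convex([(0, 0), (4, 0), (1, 1), (0, 4)]))  # False: a dent at (1, 1)
```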

Overestimating and underestimating

One of the basic assumptions of cognitive load theorists seems to be that the mental demands of discovery arrive all at once. Usually the demands are spread out by some sort of scaffolding. For instance, in Discovery #3 the intuitive discussion of the pictures and the definition are NOT given at the same time. Only after students have settled on an idea of the difference between the shapes — essentially reducing it down to one mental object — is the definition given, which as I already pointed out is easier to read for a student who now has some context.

On the other hand, cognitive load theorists seem to underestimate the demands of direct instruction. While students tend not to parse entire exact sentences in definitions (that would clearly fail the “only seven units” test), mathematical language is routinely dense and specific enough that breaking any supposed limit is quite easy. Using the direct instruction example above, taking everything in on one go would require:
a.) parsing and accepting the new term “convex”
b.) the same for “concave”
c.) recalling the definition of “polygon”
d.) the same for “interior angles”
e.) keeping in mind the visual of greater and less than 180º
f.) keeping track of “at least one” meaning 1, 2, 3, or more
g.) parsing the connection between a–f and the examples given below

There are obviously counters to some of these — the definitions, for instance, should be internalized to a degree that they are easy to grab from long-term memory — but the list doesn’t look that different from a “discovery” lesson’s, and it doesn’t have the advantage of reduced pressure on vocabulary and language.

The overall concern

In truth, working memory is well understood for memorizing digit sequences (the measure is called digit span), but the research gets fuzzy as tasks start to include images and sounds. Any declaration (including my own) that working memory is overloaded by a particular task is essentially arbitrary when the task involves mixed media.

On top of that, the brain is associative to such an extent that memory feats are possible which appear to violate these conditions. For instance, there is a memory trick I used to perform for audiences where they would give me a list of 20 objects and I would repeat the list backwards. The trick works by pre-memorizing a list of 20 objects quite thoroughly — 1 for pencil, 2 for swan, say — and then associating the given list with those objects. If the first object given was “yo-yo” I would imagine a yo-yo hanging off a pencil. The trick is quite doable by anyone and — given the fluency of the retrieval — suggests that associations of images have a secondary status that exceeds that of standard “working memory”. (This is also how the competitors of the World Memory Championship operate, allowing them feats like memorizing 300 random words in 5 minutes.)

Students missing test questions due to computer interface issues

I’ve been writing a series looking at computer-delivered Common Core exams for issues. Mathematical issues did crop up, but the more subtle and universal ones were about the interface.

Part 1: Observations on the PARCC sample Algebra I exam
Part 2: Observations on the PARCC sample Algebra II exam
Part 3: Observations on the PARCC sample Geometry exam
Part 4: Observations on the SAGE sample exam

While the above observations were from my experience with design and education, I hadn’t had a chance to watch actual students try the problems.

Now that I have, I want to focus on one problem in particular which is on the AIR samples for Arizona, Utah, and Florida. First, here is the blank version of the question:

[Image: the blank version of the question]

Here is the intended correct answer:

[Image: the intended correct answer]

Student Issue #1:

[Image: student answer with the vertex drawn but the “D” label missing]

In this case, it appears a student didn’t follow the “Drag D to the grid to label this vertex” instruction.

However, at least one student did see the instruction but was baffled by how to carry it out (the “D” can be easy to miss, sitting at the top of a large white space). Even given a student who missed that particular instruction, is the lack of dragging a letter really the reason you want students to miss the points?

Also, students who are used to labeling points do so directly next to the point; dragging a label is an entirely different reflex. Even a student used to GeoGebra would get this problem wrong, as points in GeoGebra are labeled automatically. I do not know of any graphical interface other than this test which requires the user to add a label separately.

Student Issue #2:

[Image: student answer with the point placed but no lines connecting the vertices]

Again, it appears possible the full directions were not read, but a fair number of students were unaware line connection was even possible, because they missed the existence of the “connect line” tool.

In problems where the primary activity was to create a line this was not an issue, but since the primary mathematical step here involves figuring out the correct place to add a point, students became blind to the line interface.

In truth I would prefer it if the lines were added automatically; clearly their presence is not what is really being tested here.

Student Issue #3:

[Image: student answer with an extra point added near “C”]

This one’s in the department of “I wouldn’t have predicted it” problems, but it looks like the student just tried their best at making a parallelogram and felt it was fine to add another point as long as it was close to “C”. The freedom of being allowed to add extra points suggests this. If the quadrilateral were formed automatically with the addition of point “D” (as I already suggested) this problem would be avoided. Another possibility would be to have the “D” attach to the point as it gets dragged to the location, and to disallow having more than one extra point present.

Induction simplified

When first teaching about the interior angles of a polygon, I had an elaborate lesson that involved students drawing quadrilaterals, pentagons, hexagons, and so on, measuring and collecting data, and finally making a theory. They’d then verify that theory by drawing triangles inside the polygons and realizing the Interior Angles of a Triangle had returned.

I didn’t feel like students were convinced or satisfied, partly because the measurements were off enough due to error that there was a “whoosh, it is true” at the end, but mostly because the activity took so long the idea was lost. That is, even though they had scientifically investigated and rigorously proved something, they took it on faith because the path that led to the formula was a jumble.

I didn’t have as much time this year, so I threw this up as bellwork instead:

[Image: bellwork sheet with fill-in-the-blank interior angle sums]

Nearly 80% of the students figured out the blanks with no instructions from me. They were even improvising formulas. Their intuitions were set, they were off to the races, and it took 5 minutes.
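(The pattern the blanks drive at — assuming the sheet follows the usual progression — is that an n-sided polygon splits into n − 2 triangles, so the interior angles sum to (n − 2) × 180º: 360º for a quadrilateral, 540º for a pentagon, 720º for a hexagon, and so on.)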
