Follow Through, the largest US government educational experiment ever

I. Unconditional war

On January 8, 1964, President Lyndon B. Johnson declared an “unconditional war on poverty in America.”

The effort spawned the creation of Medicare, Medicaid, food stamps, the Economic Opportunity Act of 1964, and the Elementary and Secondary Education Act. (The ESEA has been renewed every 5 years since; when renewed in 2001, it went by the moniker “No Child Left Behind”.)

1965 also saw the launch of the Head Start Program, designed to provide early childhood education to low-income children while involving their parents.

The program was designed to promote the growth and development of parents and their children. The Planning Committee for Head Start felt that children would benefit from their parents’ direct involvement in the program. They agreed that the best way for parents to learn about child development was by participating with their children in the daily activities of the program.
Sarah Merrill

At the time, parental involvement was controversial:

Although parent involvement was written into law in 1967, their role in governance was spelled out for the first time in 1970 through Part B in the Head Start Policy Manual. This policy was also known as 70.2. Policy 70.2 defined the responsibilities of Policy Councils at the program, delegate, and agency levels. At that time, many Head Start grantees—especially those in public school settings—called Washington, DC and threatened to leave Head Start because 70.2 gave so much authority to parents.
Sarah Merrill

This point is important for what’s to come.

II. 352,000

In 1967, Congress authorized funds to expand Head Start under a program called Follow Through.

Congress authorized Follow Through in 1967 under an amendment to the Economic Opportunity Act to provide comprehensive health, social, and educational services for poor children in primary grades who had experienced Head Start or an equivalent preschool program. The enabling legislation anticipated a large-scale service program, but appropriations did not match this vision. Accordingly, soon after its creation, Follow Through became a socio-educational experiment, employing educational innovators to act as sponsors of their own intervention programs in different school districts throughout the United States. This concept of different educational improvement models being tried in various situations was called “planned variation.”
Interim Evaluation of the National Follow Through Program, page 22

In other words, Congress approved a service program that, for lack of funds, had to be cut down to an experimental program.

Various sponsors — 22 in all — picked particular models that would be used for a K-3 curriculum (although it should be noted that due to the social service origin not every sponsor had a curriculum right away — more on that later). Four cohorts (the first group entering in fall 1969, the last fall 1972) went through the program before it was phased out, the last being very scaled down:

[Chart: cohort data]

The sponsors had classrooms spread throughout the entire country implementing curriculum as they saw fit.

[Map: sponsor sites]

[Source.]

Note that this was not a case of the sponsors installing their own colleagues; teachers were chosen from the sites and given the curriculum via trainers or handbooks. Other teachers, not using the interventions, taught “comparison groups” chosen to be as similar as possible to the experimental groups. The idea was to see if students using the sponsor’s curriculum would outperform the comparison groups.

Teachers were not always happy being forced to participate:

New Follow Through teachers sometimes resisted changing their teaching strategies to fit the Follow Through models, and they found support for their resistance among others who felt as powerless and as buffeted as they.
Follow Through Program 1975-1976, page 31

A wide swath of measures was chosen to assess quality.

[Table: test list]

[Source.]

Notice the entries that say “Sponsor” — those are questions submitted by the sponsors themselves, who knew there could be a mismatch between the curriculum learned and the curriculum tested.

Not all of the data above was used in the final analysis. By the end of the experiment the main academic measure was the Metropolitan Achievement Test Form F Fourth Edition. Note the only minor use of the MAT in the chart above representing the early years (marked SAT/MAT — MAT and Stanford word problems were mixed together). Sponsor questions, for instance, fell by the wayside.

In 1976 the program ended and the program as a whole was analyzed — 352,000 Follow Through and comparison children — resulting in a 1977 report called Education as Experimentation: A Planned Variation Model.

The best summary of the results comes from three charts, which I present directly from the book itself. The dots are the averages, the bars represent maximums and minimums:

[Chart: basic skills effects]

[Chart: cognitive skills effects]

[Chart: affective skills effects]

“Basic skills” covers straightforward reading and arithmetic, “cognitive skills” complex problem solving, and “affective skills” feelings and emotional areas.

The report makes some attempt to combine the data, but the different programs are so wildly dissimilar I don’t see any validity to the attempt. I’d first like to focus on five of them: SEDL, Parent Education, Mathemagenic Activities, Direct Instruction, and Behavior Analysis.

III. SEDL

The Southwest Educational Development Laboratory (SEDL) model is a bilingual approach first developed for classrooms in which 75 percent of the pupils are Spanish-speaking, but it can be adapted by local school staffs for other population mixes. In all cases the model emphasizes language as the main tool for dealing with environment, expressing feelings, and acquiring skills, including nonlinguistic skills. Pride in cultural background, facility and literacy in both the native language and English, and a high frequency of “success” experiences are all central objectives.
Follow Through Program Sponsors, page 31

SEDL is a good example of how difficult it is to compare the sponsors: rather than forming a complete curriculum, SEDL emphasized helping Spanish speakers through a sensitive, multicultural approach. Basic arithmetic skills were not among its goals, oral skills were emphasized over written ones, and given the target sample, improving reading was an inherently harder task.

Given these factors, the result (smaller effect on basic skills, larger effect on cognitive and affective development) is not surprising at all.

IV. Parent Education

This sponsor perhaps makes it clearest that Follow Through started as a social service program, not an education program.

A fundamental principle of the BOPTA model is that parents and school personnel can, and want to, increase their ability to help their children learn. Also, parents and school personnel together can be more effective than either can alone. The sponsor’s goal is to assist both school and home to develop better child helping skills and ways to implement these skills cooperatively and systematically. These child helping skills are derived from careful study of child development, learning, and instructional theory, research, and practice. The approach is systematically eclectic and features both diagnostic sequential instruction and child-initiated discovery learning.
Follow Through Program Sponsors, page 37

The results for this program were roughly average; basic skills did slightly better than cognitive skills. However, including home-visit training introduces a much different set of variables than training the teacher alone.

Related but even more dissimilar was the Home School Partnership:

A parent aide program, an adult education program, and a cultural and extra-curricular program are the principal elements of this model. The model aims to change early childhood education by changing parent, teacher, administrator, and child attitudes toward their roles in the education process. It is believed this can be done by motivating the home and school to work as equal partners in creating an environment that supports and encourages learning.
Follow Through Program Sponsors, page 25

This is a program that had no educational component at all — it was comparing parent intervention versus no parent intervention, which led to confusion:

The instructional component of this program is in disarray. Since there is no in-class instructional model, teachers are on their own. Some are good, but in too many classes bored children and punitive teachers were observed.
Follow Through Program 1975-1976, page 66

Note that in both cases, however, as mentioned earlier: the idea of home parental involvement was innovative and controversial enough on its own that it created a burden the other projects did not have. (To be fair, teachers as in-class aides occur in the other programs.)

V. Mathemagenic Activities

This sponsor ran what most people would consider closest to a modern “discovery” curriculum.

The MAP model emphasizes a scientific approach to learning based on teaching the child to make a coherent interpretation of reality. It adheres to the Piagetian perspective that cognitive and affective development are products of interactions between the child and the environment. It is not sufficient that the child merely copy his environment; he must be allowed to make his own interpretations in terms of his own level of development.

An activity-based curriculum is essential to this model since it postulates active manipulation, and interaction with the environment as the basis for learning. Individual and group tasks are structured to allow each child to involve himself in them at physical and social as well as intellectual levels of his being. Concrete materials are presented in a manner that permits him to experiment and discover problem solutions in a variety of ways.

The classroom is arranged to allow several groups of children to be engaged simultaneously in similar or different activities. Teachers’ manuals including both recommended teaching procedure and detailed lesson plans for eight curriculum areas (K-3) are provided in the model. Learning materials also include educational games children can use without supervision in small groups or by themselves. Art, music, and physical education are considered mathemagenic activities of equal importance to language, mathematics, science, and social studies.

Follow Through Program Sponsors, page 33

MAP did the best of all the sponsors at cognitive skills but was only slightly above baseline on basic skills.

The term “mathemagenic” was a 60s/70s coinage that seems to no longer be in use. A little more detail from here about the word:

In the mid-1960’s, Rothkopf (1965, 1966), investigating the effects of questions placed into text passages, coined the term mathemagenic, meaning “to give birth to learning.” His intention was to highlight the fact that it is something that learners do in processing (thinking about) learning material that causes learning and long-term retention of the learning material.

When learners are faced with learning materials, their attention to that learning material deteriorates with time. However, as Rothkopf (1982) illustrated, when the learning material is interspersed with questions on the material (even without answers), learners can maintain their attention at a relatively high level for long periods of time. The interspersed questions prompt learners to process the material in a manner that is more likely to give birth to learning.

There’s probably going to be interest in this sponsor due to its obscurity and actual performance, but I don’t have many specific details beyond what I’ve quoted above; the teacher manual used during Follow Through is likely buried in a university library somewhere.

VI. Direct Instruction

This one’s worth a bigger quote:

[Excerpt: Direct Instruction model description]

[Source; quotes below from the same source or here.]

This one’s often considered “the winner”, with positive outcomes on all three measures (although it did not get the top score on cognitive skills, it at least improved over baseline).

What I find perhaps most interesting is that the model does not resemble what many think of as direct instruction today.

Desired behaviors are systematically reinforced by praise and pleasurable activities, and unproductive or antisocial behavior is ignored.

The “carrot rather than stick” approach reads like what is currently labeled “progressive”. The extremely consistent control is what is currently labeled “conservative”.

In the classroom there are three adults for every 25 to 30 children: a regular teacher and two full-time aides recruited from the Follow Through parent community. Working very closely with a group of 5 or 6 pupils at a time, each teacher and aide employs the programmed materials in combination with frequent and persistent reinforcing responses, applying remedial measures where necessary and proceeding only when the success of each child with a given instructional unit is demonstrated.

The ratio here is not 1 teacher lecturing to 30 students. It is 1 to 5.

Emphasis is placed on learning the general case, i.e., developing intelligent behavior, rather than on rote behavior.

While the teacher explains first, the teacher is not presenting disconnected examples. They are trying to build a coherent picture of mathematics.

Before presenting the remaining addition facts, the teacher shows how the facts fit together–that they are not an unrelated set of statements. Analogies teach that sets of numbers follow rules. Fact derivation is a method for figuring out an unknown fact working from a known fact. You don’t know what 2+5 equals, but you know that 2+2 equals 4; so you count.

2 + 2 = 4
2 + 3 = 5
2 + 4 = 6
2 + 5 = 7

Then the children are taught a few facts each day so that the facts are memorized.

This is the “counting on” mentioned explicitly in (for example) Common Core and possibly the source of the most contention in all Common Core debates. This differs from those who self-identify with “direct instruction” but insist on rote-first.
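As a sketch of the fact-derivation idea (my own illustration in Python, not anything from the Follow Through materials): starting from a known fact, each extra unit in an addend adds one to the sum, so you count on.

```python
# Hypothetical sketch of "fact derivation" via counting on:
# derive an unknown addition fact from a known one by counting up.
def count_on(known_addend, known_sum, target_addend):
    """We know a + known_addend = known_sum; derive a + target_addend."""
    total = known_sum
    for _ in range(target_addend - known_addend):
        total += 1  # each extra unit in the addend adds one to the sum
    return total

# You don't know 2 + 5, but you know 2 + 2 = 4, so you count on three times:
print(count_on(2, 4, 5))  # 7
```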

Also of note: employees included

a continuous progress tester to reach 150 to 200 children whose job it is to test the children on a 6 week cycle in each core area.

Assessment happened quite frequently; it is not surprising, then, that students would do well on a standardized test compared with others when they were very used to the format.

VII. Behavior Analysis

The behavior analysis model is based on the experimental analysis of behavior, which uses a token exchange system to provide precise, positive reinforcement of desired behavior. The tokens provide an immediate reward to the child for successfully completing a learning task. He can later exchange these tokens for an activity he particularly values, such as playing with blocks or listening to stories. Initial emphasis in the behavioral analysis classroom is on developing social and classroom skills, followed by increasing emphasis on the core subjects of reading, mathematics, and handwriting. The goal is to achieve a standard but still flexible pattern of instruction and learning that is both rapid and pleasurable.

In the behavior analysis classroom, four adults work together as an instructional team. This includes a teacher who leads the team and assumes responsibility for the reading program, a full-time aide who concentrates on small group math instruction, and two project parent aides who attend to spelling, handwriting, and individual tutoring.

Follow Through Program Sponsors, page 9

I bring up this model specifically because

a.) It often gets lumped with Direct Instruction (including in the original chart you’ll notice above), but links academic progress with play in a way generally not associated with direct instruction (the modern version would be the Preferred Activity Time of Fred Jones, but that’s linked more to classroom management than academic achievement).

b.) It didn’t do very well — second to last in cognitive achievement, barely above baseline on basic skills — but I’ve seen charts claiming it had high performance. This is despite the fact that it appears to have included assessment as relentlessly as Direct Instruction.

c.) It demonstrates (four adults to a class!) how the model does not resemble a standard classroom. This is true for all the models that involve a lot of teacher involvement, and in fact none of them seem comparable to a modern classroom (except perhaps Bank Street, a model that started in 1916 and is still in use; I’ll get to that model last).

Let’s add a giant grain of salt to the proceedings —

VIII. Data issues

There was some back-and-forth criticizing the statistical methods when Education as Experimentation: A Planned Variation Model was published in 1977. Quite a few papers were written between 1978 and 1981 or so, and a good summary of the critiques is at this article, which claims:

a.) Models were combined that were inappropriate to combine (I agree with that, but I’m not even considering the combined data).

b.) Questionable statistics were used (the critics being particularly fussy about the reliance on analysis of covariance).

c.) The test favored particular specific learnings (so if a class was strong in, say, handwriting, that was not accounted for).

I think the harshest data critique came before the 1977 report even appeared. The Comptroller General of the U.S. made a report to Congress in October 1975, and it was blistering:

[Excerpt: Comptroller General’s report on data reliability]

The “data analysis contractor” mentioned as presenting reservations is the same Abt that produced the 1977 report.

The report also mentions part of the reason why 22 sponsors are not given in the comparison graph:

Another result of most LEAs not being restricted in their choice of approaches is that some sponsors were associated with only a few projects. The evaluation design for cohort three–the one OE plans to rely most heavily on to determine model effectiveness–requires that a sponsor be working with at least five projects where adequate testing had been done to be compared with other sponsors.

Only 7 of the 22 sponsors met that requirement.

By the end, some sponsors were omitted from the 1977 report altogether. The contractor was also dubious about analysis of covariance:

In an effort to adjust for the initial differences, the data analysis contractor used a statistical technique known as the analysis of covariance … however, the contractor reported that the Follow Through data failed to meet some requirements believed necessary for this technique to be an effective adjustment device.

Additionally:

Further, no known statistical technique can fully compensate for initial differences on such items as pretest scores and socioeconomic characteristics. Accordingly, as OE states in its June 1974 summary, “the basis for determining the effects of various Follow Through models is not perfect.” Our review of the March 1974 report indicated that, for at least four sponsors, the adjustments were rather extensive. Included among the four is the only sponsor that produced significant differences on all four academic measures and the only two sponsors that produced any academic results significantly below their non-Follow-Through counterparts.

This issue was noted as early as 1973, calling out the High Scope, Direct Instruction, and the Behavior Analysis models specifically.

Substantial analysis problems were encountered with these project data due to non-equivalence of treatment and comparison groups.

Interim Evaluation of the National Follow Through Program 1969-1971

(High Scope was one of the models on the “open framework” end of the scale; students experience objects rather than get taught lessons.)

The extreme data issues with Follow Through may be part of the reason quasi-experiments are more popular now (taking some natural comparison between equivalent schools and adjusting for the confounding factors via statistics). When the National Mathematics Advisory Panel tried to locate randomized controlled studies, its 2008 report found only 8 that matched its criteria, and most of those studies lasted only a few days (the longest lasted a few weeks).

IX. Conclusions

These days Follow Through is mostly brought up by those supporting direct instruction. While the Direct Instruction model did do well,

a.) The “Direct Instruction” model does not resemble the direct instruction of today. The “I do” – “now you do” pattern is certainly there, but it occurs in small groups and with general ideas like counting on and algebraic identities presented up front. “General rather than rote” is an explicit goal of the curriculum. The original setup of a teacher handling only five students at a time, with two aides, is not comparable to the modern classroom.

b.) The group that made the final report complained about the inadequacy of the data. They had misgivings about the very statistical method they used. The Comptroller of the United States in charge of auditing finances felt that the entire project was a disaster.

c.) Because the project was shifted from a social service project to an experimental project, not all the sponsors were able to handle a full educational program. At least one sponsor had no in-class curriculum at all and merely experimented with parental intervention. The University of Oregon, frankly, ran their program very efficiently and had no such issue; this suggests a comparison of administrative competence rather than necessarily of curricular outlook. For instance, the U of O’s interim report from 1973 noted that arithmetic skills were no better than average in the early cohorts, so they adjusted their curriculum accordingly.

[Chart: arithmetic check]

d.) While Direct Instruction did best in basic skills, on the cognitive measures the model that did best was a discovery-related one. Based on the descriptions of all the models, Mathemagenic is perhaps the closest to what a modern teacher thinks of as an inquiry curriculum.

e.) Testing was relentless enough in Direct Instruction that they had an employee specifically dedicated to the task, while some models (like Bank Street) did no formal testing at all during the year.

Of the two other models the report places in the same type as Direct Instruction, Behavior Analysis did not do well academically at all, and the Southwest Educational Development Laboratory’s emphasis on language and “pride in cultural background” strikes a very different attitude from the controlled environment of Direct Instruction’s behaviorism.

X. A Lament from Bank Street

Before leaving, let’s hear from one of the groups that did not perform so well, but was (according to reports) well managed: Bank Street.

In this model academic skills are acquired within a broad context of planned activities that provide appropriate ways of expressing and organizing children’s interests in the themes of home and school, and gradually extend these interests to the larger community. The classroom is organized into work areas with stimulating materials that allow a wide variety of motor and sensory experiences, as well as opportunities for independent investigation in cognitive areas and for interpreting experience through creative media such as dramatic play, music, and art. Teachers and paraprofessionals working as a team surround the children with language that they learn as a useful, pleasurable tool. Math, too, is highly functional and pervades the curriculum. The focus is on tasks that are satisfying in terms of the child’s own goals and productive for his cognitive and affective development.

Follow Through Program Sponsors, page 7

Bank Street is still around and has been for nearly 100 years. While their own performance tests came out positive, they did not do well on any of the measures from Abt’s 1977 report.

In 1981, one of the directors wrote:

The concepts of education we hold today are but variations of the fundamental questions that have been before us since the origins of consciousness. Socrates understood education as “discourse”, a guidepost in the search for wisdom. He valued inquiry and intuition. In contrast, Plato conceived of the State as the repository of wisdom and the overseer of all human affairs, including education. He was the first manager. And so has it always evolved: Dionysian or Apollonian, romanticism or classicism, humanism or behaviorism. All such concepts are aspects of one another. They contribute to evolutionary balance. They allow for alternative resolutions to the same dilemmas and they foster evolutionary change. Thus, a model is not a fixed reality immobilized in time. It is, as described above, a system, an opportunity to structure and investigate a particular modality, to be influenced by it and to change it by entering into its methods. The Bank Street model does not exist as a child-centered, humanistic, experientially-based approach standing clearly in opposition to teacher-centered, behaviorist modalities. These polarities serve more to define the perceived problem than they do to describe themselves.

Follow Through: Illusion and Paradox in Educational Experimentation

Is “one, two, many” a myth?

[xkcd comic #764]

Cue letters from anthropology majors complaining that this view of numerolinguistic development perpetuates a widespread myth. — From the alternate text to xkcd comic #764

The “one, two, many” theory is that cultures developed words for “one” and “two” before anything else, and any numbers after are referred to as “many”.

Do cultures with a one, two, many system exist?

Yes. Blake’s Australian Aboriginal Languages points out that Aborigines felt no need to count, and while they all had words for “one” and “two”, only some made it to “three” and “four”. 1

The Walpiri, for example, have words only for “one”, “two”, and “many”, as shown in this excerpt from The Story of 1:

(The entire hour-long documentary is online, if you’re curious.)

Intriguingly, there’s a recent study that suggests that Aborigines without counting words can manage counting nonetheless:

In tests, the children were asked to put out counters that matched the number of sounds made by banging two sticks together. Thus, said Butterworth, they had to mentally link numbers in sounds and in actions, which meant they couldn’t rely on sights or sounds alone.

“They therefore had to use an abstract representation of, for example, the ‘fiveness’ of the bangs and the ‘fiveness’ of the counters,” he said. “We found that Warlpiri and Anindilyakwa children performed as well as or better than the English-speaking children on a range of tasks, and on numerosities up to nine, even though they lacked number words.”

Sometimes assuming one-two-many can be taken too far, as is the case with …

The Pirahã people

The Pirahã of the Amazon have been cited as using a “one-two-many” system of counting.

They are a truly extraordinary case of a tribe, and if the (admittedly controversial) claims of the linguist Daniel L. Everett are true, they have no tense to describe things that are not physically present; hence they cannot talk about the future, tell stories about the past, or name exact abstractions (like colors).

Abstractions include ordinal and cardinal numbers, which appear to be absent from the language. What they instead have are words for

“small size or amount”: hóì (falling tone)
“large size or amount”: hòí (rising tone)
“cause to come together” (loosely “many”): /bá à gì sò/

(Source: On the absence of number and numerals in Pirahã)

The original confusion was that the words for “small”, “large”, and “many” could in certain contexts mean “one”, “two”, and “many”, but they don’t genuinely stand for the numbers; a single large fish would only be called hóì (falling tone) as a joke. Hence using the Pirahã as an example is based on a misunderstanding. 2

Linguistic evidence

Consider the cardinal words versus the ordinal words in English (if you ever mix them up: ordinal numbers refer to the order things are in).

one – first
two – second
three – third
four – fourth
five – fifth
six – sixth

While the first words are mismatched (indicating that “first” and “second” were developed separately from the abstract notions of the numbers “one” and “two”), from three on the words match linguistically.

This occurs in Spanish

uno – primero
dos – segundo
tres – tercero
cuatro – cuarto
cinco – quinto
seis – sexto

and in many other languages.

So there is some circumstantial support for the contention that “one” and “two” have some special significance, although the same evidence could just as easily be used to claim the ordering of “first” and “second” was the real first development, and the cardinal numbers were instead developed all at once (or at least up to five).

Two-counting cultures

Here’s numbering according to the Gumulgal of Australia:

urapon
ukasar
ukasar-urapon
ukasar-ukasar
ukasar-ukasar-urapon
ukasar-ukasar-ukasar

That is, their counting uses the words for one and two. So while their counting words are limited linguistically, they can nonetheless use them to count farther. This is much more common than the case of the Walpiri, who don’t bother counting past two at all. This map indicates the two-counting cultures still in existence:

[Map adapted from John Barrow’s Pi in the Sky. For a more specific look at tribes in Papua New Guinea, there’s an extensive reference online.]

While these tribes developed words to count past 2, the linguistic evidence demonstrates they started with the words for “one” and “two” before the later numbers.
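The Gumulgal pattern above is regular enough to state as a rule: repeat the word for two as many times as it fits, then append the word for one if anything is left over. A minimal sketch (my own illustration):

```python
# Sketch of the Gumulgal two-counting pattern: n is expressed as
# repetitions of "ukasar" (two) plus an optional trailing "urapon" (one).
def gumulgal(n):
    words = ["ukasar"] * (n // 2)
    if n % 2:
        words.append("urapon")
    return "-".join(words)

# Reproduces the list above, e.g. 5 -> ukasar-ukasar-urapon
for n in range(1, 7):
    print(n, gumulgal(n))
```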

xkcd’s possible myths

So was xkcd right or not? It depends on what is meant as their myth:

1. Cultures exist with only words for one, two, and many.

This isn’t a myth, as already explained above.

2. It’s common for cultures to have only words for one, two, and many.

This one’s definitely a myth, so by this meaning xkcd is correct. A fair number of cultures use a limited base 2 system, but not developing counting at all is rare. (Note that developing a word for 2 is not the same as counting — the idea of a “pair” can be separate from what we think of as “two”.)

3. Every strand of counting development started with a one, two, many system.

This one’s a touch foggier — there’s historical evidence that one and two were special in the development of language, and arguably cultures that started with counting up to five or ten simply were using received knowledge from cultures that went through the entire development process.

1 The situation is slightly more complicated than Blake claims. For example, the Anindilyakwa mentioned in the study above usually use only words for one, two, and many (and the children involved in the study knew only those words), but also have rarely used words for numbers up to 19, reserved for rituals.

2 The idea of “six apples” can be understood without fully abstracting the number “six”. In the Tsimshian language there are separate number words for flat objects, round objects, men, long objects, and canoes.

Anatomy of a Political Math-Ed Reaction

Normally I pass on the murky political waters my profession dips into, but Scott McLeod sent me a link I couldn’t resist discussing because it regards historical mathematics.

First, the original source of confusion:

Mayan numbers taught in Somis school to help students learn math

A group of sixth- and seventh-graders still crack open their textbooks and practice regular math skills most days. But once a week, they turn their math attention to history, culture and places far from Somis.

Teacher Jill Brody’s class started learning about Mayan math in September, part of the school’s efforts to incorporate “ethno-mathematics” into some of its classes.

It is clear to me as a teacher that this is referring to an enrichment activity, and not some sort of overarching system (like New Math or Reform Math). Again from the article:

The school isn’t replacing regular math classes, just adding the ethno-mathematics lessons, she said.

I am also guessing the classroom did not study only Mayan numerals (it’d be hard to fill even a quarter) but the newspaper gave the impression it was the only area being studied.

Otherwise the only thing that bothers me is filing math history lessons under the term “ethno-mathematics”: I find dubious the claim that one part of math history differs in kind from another, so I’d prefer the plain umbrella term “math history”.

Now, the reaction:

Stupid education fad of the day: “Mayan Math”

Today’s stupid education fad of the day?

“Mayan Math.” I kid you not . . .

This is creepily similar to the idiotic “lattice multiplication” lessons in Everyday Math that justify using incoherent, inefficient methods of multiplying because that’s the way the ancient Egyptians did it.

1. The newspaper never presented “Mayan Math” as belonging in the same category as “New Math”. It is simply a type of lesson.

2. Teaching mathematics history is not a “fad” and has been present even in highly traditional classrooms for a while. I have Mayan worksheets floating around from the 1950s.

3. Lattice multiplication doesn’t come from the Egyptians; depending on your reference, it comes from either the Indians or the Arabs. The blog 360 has written up the subject in detail, and one of the blog’s authors has defended the use of the practice. What I should emphasize is that lattice multiplication intrinsically has nothing to do with Reform anything: it’s another algorithm just like the “traditional” one, with the disadvantage that it takes longer to set up and the advantage that mistakes are easier to spot. The fact that Everyday Math includes the algorithm is unrelated to the curriculum’s overall philosophy, beyond a willingness to change the status quo.
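To make the comparison concrete, here is a quick sketch (my own, not from any curriculum) of the lattice method’s bookkeeping: fill a grid with single-digit products, then sum along the diagonals, where each diagonal corresponds to one power of ten.

```python
def lattice_multiply(a, b):
    """Multiply two nonnegative integers the lattice way: every pair of
    digits contributes its product, split into a tens half and a units
    half, to two adjacent diagonals; the diagonals are then summed with
    the usual carrying."""
    xs = [int(d) for d in str(a)]
    ys = [int(d) for d in str(b)]
    # diagonals[k] collects everything worth 10**k
    diagonals = [0] * (len(xs) + len(ys))
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            p = x * y
            pos = (len(xs) - 1 - i) + (len(ys) - 1 - j)
            diagonals[pos] += p % 10       # units half of the cell
            diagonals[pos + 1] += p // 10  # tens half of the cell
    # resolve carries along the diagonals, low to high
    total, place, carry = 0, 1, 0
    for d in diagonals:
        d += carry
        total += (d % 10) * place
        carry = d // 10
        place *= 10
    return total + carry * place

print(lattice_multiply(34, 27))  # → 918
```

The per-cell work is identical to the traditional algorithm; only the layout of the partial products differs, which is why the diagonal sums make individual mistakes easy to localize.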

I do have sympathy for those suspicious of “discovery” curriculum, in that it can go very badly with an unskilled teacher, but that doesn’t mean essentially unrelated material should be pulled into the same critiques.

What’s the Oldest Mathematical Artifact? (II)

Candidate #2. The Ishango Bones

ishmap

(part 1 here)

The Lebombo Bone is dated at 35000 BC, but before we go backwards in time, let’s step forward to 20000 BC to look at another pair of bones. The first is extremely famous and is the most common inclusion in any math history book that wants to name-check prehistory; the second is (as of this writing) wildly obscure.

The first Ishango bone was found in 1960 by Belgian geologist Jean de Heinzelin, working near Lake Edward on the border of the Congo and Uganda.

boneA

[Image credit: Science Museum of Brussels.]

There are three rows around the bone containing sets of tally marks:

First row: 19, 17, 13, 11
Second row: 7, 5, 5, 10, 8, 4, 6, 3
Third row: 9, 19, 21, 11

(A more detailed diagram is here at the Wikipedia article.)

The presence of so many numbers on such an early artifact has sparked all sorts of commentary. It has led some to speculate that the first row is a table of prime numbers, or that the bone as a whole represents another lunar calendar (spanning six months):

When I examined this tiny petrified bone in the Musée d’Histoire Naturelle in Brussels, I found that the engraving, as nearly as microscopic examination could differentiate the deteriorated markings, was made by thirty-nine different points and was notational. It seemed, more clearly than before, to be lunar.
— Alexander Marshack, The Roots of Civilization

Jean de Heinzelin, the discoverer of the original bone, on his deathbed disclosed the existence of a second bone. Scholarly work followed, and the bone was only revealed to the public in 2007. Here it is:

second bone

[Image credit: Royal Belgian Institute of Natural Sciences.]

There are six sets of marks:

14 long marks, 6 short marks
6 long marks
18 long marks
6 long marks
20 long marks (with complex secondary marks)
6 long marks, 2 short marks

Detailed work on the second bone is still a wide open topic. The link above suggests the marks reflect a multi-base system, a suggestion I find implausible for several reasons, most of all because no number system in history used 18 tally marks or more in a row to represent a single digit.

Now, you may be puzzled why I’ve mentioned these artifacts in the first place, given the Lebombo Bone comes roughly 15000 years earlier. However, some still call the first Ishango Bone (no math text I know of has mentioned the second) the oldest mathematical artifact because they have a particular definition of mathematical:

…most scholars do not consider recording dates to be proper mathematics.
— Simon Singh, The Ishango Bone – Is This The World’s Oldest Mathematical Artefact?

That is, mere tally marks on the Lebombo Bone are not enough to qualify as mathematical.

From an ethnomathematical standpoint, I find this absurd. Being able to count to 29 is an accomplishment not every culture has made, and even what is a simple act for us now required, at some point in history, a leap of mathematical imagination. Additionally, there is a difference between counting by one-to-one correspondence and counting by sequence, and in the case of the (likely lunar calendar) Lebombo Bone, matching the two types of counting required an even greater mathematical intuition than, say, making tally marks to correspond with the number of one’s sheep.

Still, the Ishango Bones deserve some sort of title, perhaps with an appropriate adjective; I’ve seen them called “the earliest complex mathematics” or “the earliest logical mathematics”, but I believe the most fitting title would be “the earliest substantial mathematics”.

To keep a running time line, then:
35000 BC Lebombo Bone: earliest mathematics where counting is used for practical purposes
20000 BC Ishango Bones: earliest substantial mathematics

In parts 3 and 4 of this series I’m going to look at some artifacts that are even older, but (as far as I know) not yet written about by mathematicians.

What’s the Oldest Mathematical Artifact? (I)

Candidate #1. The Lebombo Bone

bordermap

A small piece of the fibula of a baboon, marked with 29 clearly defined notches, may rank as the oldest mathematical artefact known. Discovered in the early seventies during an excavation of the Border Cave in the Lebombo Mountains between South Africa and Swaziland, the bone has been dated to approximately 35,000 B.C. In a description of the bone, Peter Beaumont, an archaeologist who has done extensive work on Border Cave, has noted that the 7.7 cm long bone resembles calendar sticks still in use today by Bushmen clans in Namibia.
— from The oldest mathematical artefact by Bogoshi, Naidoo, and Webb

lembobo pic

The above information has been repeated more or less verbatim across various sources which want to mark the beginning of math history.

But how accurate is it?

29 clearly defined notches

Count the notches in the above picture: you will likely get 29 or 30. The possible extra mark is on the left-hand side of the bone, where it looks like there may be 3 tally marks in quick succession. However, the middle “tally mark” is just a blemish on the image; taking a different photograph of the same bone and inverting the colors makes this clearer:

boneinvert

So: there are 29 notches, although the leftmost one is truncated enough that one gets the impression the bone is broken. So does the count stop at 29, or does it continue?

There is some mathematical evidence that the leftmost notch is indeed the last: it is cut at roughly a 25-degree angle, while the second-steepest (tally mark #8) is only at a 12-degree angle. I interpret this as the ancient tally-cutter “dragging” the diagonal mark, which makes it more likely to be a starting or ending mark.

29 also appears elsewhere in ancient counting, including a supposed lunar calendar in the Lascaux caves (painted 15,000 BC):

llunar

the bone has been dated to approximately 35,000 B.C

The latest data has put some of the bones of the Border Cave at an earlier date than originally calculated in the 70s; however, the Lebombo bone has not received similar treatment.

resembles calendar sticks still in use today by Bushmen clans in Namibia.

Searching for “calendar stick” and “Namibia” turns up references not to actual modern calendar sticks, but to the Lebombo bone. I was not able to find a single reference to modern calendar sticks (or any calendar sticks at all) in Namibia other than through mentions of this bone. Given how much Peter Beaumont knows about African archaeology, I’m going to presume this is a gap in the research literature, but it’s definitely a bizarre one.

The oldest mathematical artefact

This claim of “oldest” was made in 1987, but there are now other contenders for the title, including some discoveries announced only this year; they will be the subject of future posts.

Two Unusual Old Babylonian Tablets

These are both off the Cuneiform Digital Library Initiative.

MS 4515:

maze

MS 4516:

patterns

Higher resolution versions of these can be found at the links.

Wolfram Alpha and Babylonian

EDIT: Wolfram Alpha has been updated to fix the problem mentioned below.

One of the more curious features of Wolfram Alpha is that if you type a single number, in addition to giving you trivia it will (if you so choose) render historical versions of that number. Here are some selections in Babylonian numerals:

alphababy

Here is what Babylonian numerals are supposed to look like. I originally thought Wolfram Alpha wasn’t using base 60 as it should, but I realized later it is just spacing out the renderings (like 38 above) so they look like separate symbols in base 10. However, there are still weird errors (like 10 above, which is clearly not the right symbol), and I have no idea what the pattern to the errors is. Anyone know?
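The base-10 confusion is worth spelling out. Babylonian numerals are positional base 60, so a number first has to be split into base-60 digits; each digit is then drawn as a cluster of ten-symbols and unit-wedges. A minimal sketch (my own, nothing to do with Wolfram Alpha’s internals) of the digit-splitting step:

```python
def base60_digits(n):
    """Split a nonnegative integer into big-endian base-60 digits."""
    if n == 0:
        return [0]
    digits = []
    while n:
        digits.append(n % 60)
        n //= 60
    return digits[::-1]

# 38 is a SINGLE base-60 digit (drawn as three ten-symbols plus
# eight unit-wedges), not the two separate digits 3 and 8.
print(base60_digits(38))  # → [38]

# 75 genuinely is two digits: 1 * 60 + 15.
print(base60_digits(75))  # → [1, 15]
```

So a rendering that spaces out the three ten-symbols and eight wedges of 38 is merely ugly; a rendering that draws a wrong symbol, as with 10 above, is an actual error.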