My First Two Days of Geometry

My first two days went well, so I thought I’d share. Plus, if you happen to be another geometry teacher, I need your help with something (I’ll get to that later).

DAY ONE

I have the desks divided into partners. I’ve got a seating chart; the students have their names and faces on the projector so they can find their seat. (I don’t paste things to the desks themselves; I’ve seen students tear off cards / names / whatever and try to swap them.)

As students came in I gave them a bingo card, with the instruction to fill in the integers 1-24 in random positions.

bingocard

Then we play syllabus bingo.

syllabusbingo

I have to get a lot of facts out at the start, and it’s kind of dull and students don’t totally pay attention, so I have the beeping and flashing of lights and some Jolly Ranchers for the students who say BINGO.

There is a disadvantage to the random order, but it turns out not to be too bad, and if something seems awry I can always go off sequence to fill in details.

One of the entries (“News of the Day”) needs a little explanation. Even with a closer I often seem to have a few spare minutes remaining at the end of a class, so rather than having students milling about randomly I like to share bits of math / science / engineering news they likely haven’t heard of. At the level I teach, the students are at the stage where they are deciding what they really want to do with their lives, and I don’t mind giving a little nudge in the STEAM direction. This time around it was the Lexus Hoverboard (which yes, I know, is cheating, but still neat):



That takes up about half the period. Then it’s time for Counterexamples.

counterexamples

(Click the image for a DOC file, although you will need to customize it for your own classroom.)

This works simultaneously as an icebreaker (“11. No Amphi students like to draw.”) and as a minor check of prior knowledge (“17. All functions are linear.”).

DAY TWO

As students came in I had them pick six pieces of paper of their favorite color (off a table in the back).

I started with a game of hot potato using a basketball, and when I called time the person holding the ball came up to the computer and saw this:

describepic

I asked them to describe the image using only words, no hand gestures, and had the rest of the class copy the picture to the best of their ability. (One student said the trapezoid “was like the Pizza Hut sign.”)

The generally poor performance on the task let me nudge them toward the importance of vocabulary. I then had them take their five blank papers and fold them into a book (and staple the edge). This book will be their vocabulary book, which they will slip into a back flap of their composition notebook and be able to use throughout the year. Then I gave eight words:

1. Point
2. Line
3. Line segment
4. Ray
5. Plane
6. Scalene triangle
7. Isosceles triangle
8. Equilateral triangle

and had them either use the glossary in their textbook or the data plan of their cell phones to look the words up and define them in their own words. (6-8 might seem strange to toss in, but our textbook assumes the students know those words already, so I thought I’d get them out of the way.)

This took a while for some students. It reached the point where 3 or 4 students were still working while the rest were done, so I had another volunteer student come up and play the describe-a-picture game with another picture:

describe32

This time I encouraged them to use their vocabulary to help things out (students referred to their newly-made glossary as the activity was happening). It went better than the first time.

I didn’t do any notational specifics (writing ray AB with an arrow over the letters, etc.), but those details will hit on day 3.

By the time round 2 of the game was done everyone had finished their vocabulary books, so I did some more hot potato and had people share their definitions. Understanding was the key: in one case a student didn’t know what the definition they wrote down for “plane” meant, so we worked through an interpretation together.

OK WE’RE UP TO THE BIT WHERE I NEED HELP

I told them we were going to play the game one more time, but this time we as a class were going to draw a picture and then describe it by writing a paragraph. A volunteer came up and did the drawing while everyone did the writing. I told them I was going to solicit help from the fine teachers I happen to know in other states and even other countries, send along their descriptions, and have those teachers’ geometry classes try to draw a copy based solely on those descriptions.

Yes, I mean you guys. Do you teach geometry? Could I use your class? Pretty please? Comment below or email me (see “About”) and I will hook you up. It will be fun!

RESUME NORMALCY

The write-up was the closer, and classes did have a few spare minutes, so I showed the Hendo Hoverboard. Engineering!


Direct instruction and the PCAP 2010 (Math Wars, continued)

So based on my last post about Canada’s math wars I had a number of people stop by to comment about direct instruction in general, including Robert Craigen, who kindly linked to the PCAP (Pan-Canadian Assessment Program).

(Note: I am not a Canadian. I have tried my best based on the public data, but I may be missing things. Corrections are appreciated.)

For PCAP 2010, close to 32,000 Grade 8 students from 1,600 schools across the country were tested. Math was the major focus of the assessment. Math performance levels were developed in consultation with independent experts in education and assessment, and align broadly with internationally accepted practice. Science and reading were also assessed.

The PCAP assessment is not tied to the curriculum of a particular province or territory but is instead a fair measurement of students’ abilities to use their learning skills to solve real-life situations. It measures learning outcomes; it does not attempt to assess approaches to learning.

Despite the stated purpose of “solv[ing] real-life situations”, the samples read to me more like a calculation-based test (like the TIMSS) than a problem-solving test (like the PISA), although it is arguably somewhere in the middle. (More about this difference in one of my previous posts.)

pcapsamples

Despite the quote that “it does not attempt to assess approaches to learning”, the data analysis includes this graph:

directinstructgraph

Classrooms that used direct instruction achieved higher scores than those who did not.

One catch of note (although this is more of a general rule of thumb than an absolute):

Teachers at public schools with less poverty are more likely to use a direct instruction curriculum than those who teach at high-poverty schools, even when given some kind of mandate to do otherwise.

This happened in my own district, where we recently adopted a discovery-based textbook. There was major backlash at the school with the least poverty. This seemed to happen (based on my conversations with the department head there) because the parents are very involved and conservative about instruction, and there’s understandably less desire amongst the teachers to mess with something that appears to work just fine. Whereas with schools having more students of poverty, teachers who crave improvement are more willing to experiment.

While the PCAP data does not itemize results by individual school, there are two proxies that can be used to assess level of poverty:

canadachart

Lots of books in the home is positively correlated to high achievement on the PCAP (and in fact is the largest positive factor related to demographics) but also positively correlated to the use of direct instruction.

Language learners are negatively correlated with achievement on the PCAP (more so than any other factor in the entire study) but also negatively correlated, to an extreme degree, with the use of direct instruction.

It thus looks like there’s at least some influence of a “more poverty means less achievement” gap creating the positive correlation with direct instruction.
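
To illustrate the sort of confound I mean (with entirely made-up numbers, not the PCAP data), here is a quick Python sketch where a background “poverty” variable drives both the use of direct instruction and achievement; the two end up positively associated even though instruction has no effect at all in the model:

```python
# Illustrative only: fake data, not the PCAP. A "poverty" variable lowers
# achievement and also lowers the chance of a direct-instruction classroom,
# so direct instruction and achievement end up positively associated even
# though instruction has zero effect in this model.
import random

random.seed(0)
rows = []
for _ in range(5000):
    poverty = random.random()                         # 0 = low poverty, 1 = high poverty
    direct = 1 if random.random() > poverty else 0    # less poverty -> more direct instruction
    score = 520 - 60 * poverty + random.gauss(0, 30)  # achievement depends on poverty only
    rows.append((direct, score))

def mean(values):
    return sum(values) / len(values)

direct_scores = [s for d, s in rows if d == 1]
other_scores = [s for d, s in rows if d == 0]
print("direct instruction mean:", round(mean(direct_scores), 1))
print("other instruction mean: ", round(mean(other_scores), 1))
# The direct-instruction group scores higher purely because of the confound.
```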

Now, the report still claims the instruction type is independently correlated with gains or losses (so that while the data above is a real effect, it doesn’t account for everything). However, there’s one other highly fishy thing about the chart above that makes me wonder if the data was accurately gathered at all: the first line.

It’s cryptic, but essentially: males were given direct instruction to a much higher degree than females.

Unless there’s a lot more gender segregation in Canada than I suspected, this is deeply weird data. I originally thought the use of direct instruction must have been assessed via the teacher survey:

pcapsurvey

But it appears the data instead used (or at least included) how much direct instruction the students self-reported:

canadastudentsurvey

The correlation of 10.67 really ought to be close to 0; this indicates a serious error in data gathering. Hence, I’m wary of drawing any conclusion at all about the relative strength of different teaching styles on the basis of this report.

Robert also mentioned Project Follow Through, which is a much larger study and is going to take me a while to get through; if anyone happens to have studies (pro or con) they’d like to link to in the comments it’d be appreciated. I honestly have no predisposition for the data to go one way or the other; I do believe it quite possible a rigid “teaching to the test” direct instruction assault (which is what two of the groups in the study seemed to go for) will always beat another approach with a less monolithic focus.

Canada’s math wars and bad use of the PISA

Canada went through a bit of a panic recently when the PISA 2012 scores came out.

canadascores

[Source.]

Oh no! Scores are dropping! Something must have been done wrong, so it’s time to change policy:

“If you look at what’s been happening, predominantly over the last decade, there’s been an unprecedented emphasis on discovery learning,” said Donna Kotsopoulos, an associate professor in Wilfrid Laurier University’s education faculty and former teacher.

Robert Craigen, a University of Manitoba mathematics professor who advocates basic math skills and algorithms, said Canada’s downward progression in the international rankings – slipping from sixth to 13th among participating countries since 2000 – coincides with the adoption of discovery learning.

[Source.]

As I pointed out in a recent post, PISA essentially measures problem solving, and it seems strange to beef up calculation in an attempt to improve problem solving, especially considering Canada’s performance on the TIMSS which does tend to measure calculation. While Canada as a whole hadn’t participated in TIMSS since 1999 (they did in 2015 although the report isn’t out yet), some provinces did:

Ontario 8th grade: 2003 (521), 2007 (517), 2011 (512)
Ontario 4th grade: 2003 (511), 2007 (512), 2011 (518)
Quebec 8th grade: 2003 (543), 2007 (528), 2011 (532)
Quebec 4th grade: 2003 (506), 2007 (519), 2011 (533)

canadastat

So: Ontario had a minor dip in 8th grade and a rise in 4th grade, both changes nearly within the margin of statistical significance, and Quebec fluctuated down and then up in 8th grade and had an overall rise in 4th grade.

This does not sound like the sort of data to cause major shift in education policy. If anything, the rising numbers on 4th grade (where lack of drill gets decried the most) indicate that discovery curriculum has helped rather than hurt with calculation skills. (Ontario, for instance, while requiring 4th graders to be able to multiply up to 9, does not require memorizing multiplication tables.)

Let’s also lay on the table these quotes on the troubled nature of PISA scores themselves:

What if you learned that Pisa’s comparisons are not based on a common test, but on different students answering different questions? And what if switching these questions around leads to huge variations in the all- important Pisa rankings, with the UK finishing anywhere between 14th and 30th and Denmark between fifth and 37th?

… in Pisa 2006, about half the participating students were not asked any questions on reading and half were not tested at all on maths, although full rankings were produced for both subjects.

While I wouldn’t say the scores are valueless, I think using them as the sole basis of educational policy shift is troubling. Even if we take PISA scores at face value, the wide-open nature of the actual questions which mimic a discovery curriculum indicate you’d want more discovery curriculum, not less.

Unlearning mathematics

I was reading the comment thread in an old post of mine when I hit this gem by Bert Speelpenning:

Here is a short list of things that kids in math class routinely unlearn in their journey from K through 12:
* when you add something, it gets bigger
* when you see the symbol “+” you are supposed to add the numbers and come up with the answer
* the answer is the number written right after the “=” symbol
* you subtract from the bigger number
* a fraction is when you don’t have enough to make a whole
* a percentage can only go up to 100
* the axes on a graph look like an L
* straight lines fit the equation y=mx+b
* the values (labels) on the axes must be evenly spaced
* putting a “-” in front of something makes it negative
* a reciprocal is a fraction that has 1 on top.

What are some other things our students unlearn?

Which things are acceptable to teach initially in a way that will later be changed? When is unlearning problematic?

Which things are impossible to avoid having the unlearning effect? (For instance, even if the teacher avoids saying it explicitly, it’s hard for students to avoid assuming “when you add something, it gets bigger” before negative numbers get introduced.)

TIMSS, PISA, and the goals of mathematics education

It is tempting, when hearing about student performance on an international or national test, to assume it measures some monolithic mathematical ability: when a country is doing well on a test, math teaching is doing fine, and when a country is doing worse, math teaching needs to be looked at and changed.

Additionally, it is contended that countries doing well should have their strategies mimicked and countries doing badly should have their strategies avoided.

One issue with these thoughts is that the two major international tests — the TIMSS and PISA — measure rather different things. Whether a country is doing well or not may depend on what you think the goals of mathematics education are.

Here are some samples from PISA:

PISA Sample #1

pisasample1

PISA Sample #2

You are asked to design a new set of coins. All coins will be circular and coloured silver, but of different diameters.

Researchers have found out that an ideal coin system meets the following requirements:

· diameters of coins should not be smaller than 15 mm and not be larger than 45 mm.

· given a coin, the diameter of the next coin must be at least 30% larger.

· the minting machinery can only produce coins with diameters of a whole number of millimetres (e.g. 17 mm is allowed, 17.3 mm is not).

Design a set of coins that satisfy the above requirements. You should start with a 15 mm coin and your set should contain as many coins as possible.
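
For a sense of how constrained this problem actually is, here is a short sketch (my own, not part of PISA’s materials) that greedily builds the largest possible set under those rules; taking the smallest legal next diameter leaves the most room for later coins, so the greedy choice is optimal here:

```python
# Greedy construction of the coin set: start at 15 mm and repeatedly take the
# smallest whole-millimetre diameter that is at least 30% larger than the
# previous coin, stopping once the 45 mm cap would be exceeded.
coins = [15]
while True:
    nxt = (13 * coins[-1] + 9) // 10   # ceiling of 1.3 * previous diameter, in whole mm
    if nxt > 45:
        break
    coins.append(nxt)

print(coins)  # [15, 20, 26, 34, 45] -- five coins is the maximum
```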

PISA Sample #3

A seal has to breathe even if it is asleep in the water. Martin observed a seal for one hour. At the start of his observation, the seal was at the surface and took a breath. It then dove to the bottom of the sea and started to sleep. From the bottom it slowly floated to the surface in 8 minutes and took a breath again. In three minutes it was back at the bottom of the sea again. Martin noticed that this whole process was a very regular one.

After one hour the seal was
a. At the Bottom
b. On its way up
c. Breathing
d. On its way down
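
For the record, here is a quick sketch of the cycle (assuming, as the “very regular” wording implies, that the descent also takes 3 minutes, giving an 11-minute cycle); at the 60-minute mark the seal is partway through its ascent, i.e. answer (b):

```python
# Position of the seal t minutes after a breath at the surface, assuming a
# 3-minute descent and an 8-minute ascent (an 11-minute cycle). My own
# reading of the problem, not an official scoring key.
def seal_position(t):
    phase = t % 11              # minutes since the most recent breath
    if phase == 0:
        return "breathing at the surface"
    elif phase < 3:
        return "on its way down"
    elif phase == 3:
        return "at the bottom"  # the turnaround instant
    else:
        return "on its way up"

print(seal_position(60))  # -> "on its way up" (answer b)
```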

Here are samples of TIMSS questions:

TIMSS Sample #1

Brad wanted to find three consecutive whole numbers that add up to 81. He wrote the equation

(n – 1) + n + (n + 1) = 81

What does the n stand for?

A) The least of the three whole numbers.
B) The middle whole number.
C) The greatest of the three whole numbers.
D) The difference between the least and greatest of the three whole numbers.

TIMSS Sample #2

Which of these is equal to y^3?

A) y + y + y
B) y x y x y
C) 3y
D) y^2 + y

TIMSS Sample #3

To mix a certain color of paint, Alana combines 5 liters of red paint, 2 liters of blue paint, and 2 liters of yellow paint. What is the ratio of red paint to the total amount of paint?
A) 5:2
B) 9:4
C) 5:4
D) 5:9

The PISA tries to measure problem-solving, while the TIMSS focuses on computational skills.

This would all be a moot point if countries that did well on one test did well on the other, but this is not always the case.

Possibly the most startling example is the United States, which scored below average in the 2012 PISA

pisacountrychart

but above average in the 2011 8th grade TIMSS, right next to Finland

timss2011

This is partly explained by the US having more students than any other country in the world “who thought of math as a set of methods to remember and who approached math by trying to memorize steps.”

The link above chastises the US for doing badly at the PISA without mentioning the TIMSS. It’s possible to find articles with reversed priorities. Consider this letter from some Finnish educators:

The mathematics skills of new engineering students have been systematically tested during years 1999-2004 at Turku polytechnic using 20 mathematical problems. One example of poor knowledge of mathematics is the fact that only 35 percent of the 2400 tested students have been able to do an elementary problem where a fraction is subtracted from another fraction and the difference is divided by an integer.

If one does not know how to handle fractions, one is not able to know algebra, which uses the same mathematical rules. Algebra is a very important field of mathematics in engineering studies. It was not properly tested in the PISA study. Finnish basic school pupils have not done well in many comparative tests in algebra (IEA 1981, Kassel 1994-96, TIMSS 1999).

That is, despite the apparently objective measure of picking some test or another as a comparison, doing so asks the question: what’s our goal in mathematics education?

(Continued in Canada’s math wars and bad use of the PISA.)

Abstract and concrete simultaneously

In most education literature I have seen, going from concrete to abstract concepts is presented as a ladder.

abstractladder

The metaphor has always started with concrete objects before “ascending the ladder” to abstract ideas.

I was researching something for a different (as yet unrevealed) project when I came across the following.

ascentconcrete

[Source.]

That is, someone used the same metaphor but reversed the ladder.

This is from a paper on the Davydov curriculum, used in parts of Russia for the first three grades of school. It has the exotic position of teaching measuring before counting. Students compare objects with quantity but not number — strips of paper, balloons, weights:

Children then learn to use an uppercase letter to represent a quantitative property of an object, and to represent equality and inequality relationships with the signs =, ≠, >, and <, writing statements such as A > B, or A < B. There is no reference to numbers during this work: “A” represents the unmeasured length of the board.

[Source.]

A later exercise takes a board A and a board B which combine in length to make a board C, then has the students make statements like “A + B = C” and “C – B = A”.

ABCcompare

Number is eventually developed as a matter of comparing quantities. A small strip D might need to be used six times to make the length of a large strip E, giving the equation 6D = E and the idea that number results from repetition of a unit. This later presents a natural segue into the number line and fractional quantities.

The entire source is worth a read, because by the end of the third year students are doing complicated algebra problems. (The results are startling enough it has been called a “scam”.)

I found curious the assertion that students were somehow starting with abstract objects and working their way to concrete ones. (The typical ladder metaphor is so ingrained in my head I originally typed “building their way down” in the previous sentence.) The students are, after all, handling boards; they may simply be comparing them without attaching numbers. They give the boards letters like A and B, sure, but in a way that’s no more abstract than giving names to other things in the world.

After enough study I realized the curriculum was doing something clever without the creators being aware of it: they were presenting situations that (for the mind) were concrete and abstract at the same time.

From a mathematician’s perspective, this is impossible to do, but the world of mental models works differently. By handling a multitude of boards without numbers and sorting them as larger and smaller, an exact parallel is set up with the comparison of variables that stand for unknown numbers. Indeterminate lengths work functionally identically to indeterminate numbers.

This sort of thing doesn’t seem universally possible; it’s just that in this particular instance the abstract piggybacks off the concrete so nicely. Still, it may be possible to hack it in: for my Q*Bert Teaches the Binomial Theorem video I used a split-screen trick of presenting the concrete and the abstract simultaneously.

qbertsplit

Although the sequence in the video gave the concrete example first, one could easily imagine the concrete being conjoined with an abstract example cold, without prior notice.

(For a more thorough treatment of the Davydov curriculum itself, try this article by Keith Devlin.)

Students missing test questions due to computer interface issues

I’ve been writing a series looking at computer-delivered Common Core exams for issues. Mathematical issues did crop up, but the more subtle and universal ones were about the interface.

Part 1: Observations on the PARCC sample Algebra I exam
Part 2: Observations on the PARCC sample Algebra II exam
Part 3: Observations on the PARCC sample Geometry exam
Part 4: Observations on the SAGE sample exam

While the above observations were from my experience with design and education, I haven’t had a chance to experience actual students trying the problems.

Now that I have, I want to focus on one problem in particular which is on the AIR samples for Arizona, Utah, and Florida. First, here is the blank version of the question:

azmerit15blank

Here is the intended correct answer:

azmerit15correct

Student Issue #1:

azmerit15firstissue

In this case, it appears a student didn’t follow the “Drag D to the grid to label this vertex” instruction.

However, at least one student did see the instruction but was baffled as to how to carry it out (the “D” is easy to miss the way it sits at the top of a large white space). Even given a student who missed that particular instruction, is the lack of dragging a letter really the reason you want students to miss the points?

Also, students who are used to labeling points do so directly next to the point; dragging to label is an entirely different reflex. Even a student used to Geogebra would get this problem wrong, as points in Geogebra are labeled automatically. I do not know of any automated graphical interface other than this test which requires the user to add a label separately.

Student Issue #2:

azmerit15thirdissue

Again, it appears possible the full directions were not read, but a fair number of students were unaware line connection was even possible, because they missed the existence of the “connect line” tool.

In problems where the primary activity was to create a line this was not an issue, but since the primary mathematical step here involves figuring out the correct place to add a point, students became blind to the line interface.

In truth I would prefer it if the lines were added automatically; clearly their presence is not what is really being tested here.

Student Issue #3:

azmerit15secondissue

This one’s in the department of “I wouldn’t have predicted it” problems, but it looks like the student just tried their best at making a parallelogram and felt it was fine to add another point as long as it was close to “C”. The freedom of being allowed to add extra points suggests this. If the quadrilateral were formed automatically with the addition of point “D” (as I already suggested) this problem would be avoided. Another possibility would be to have the “D” attach to the point as it gets dragged into place, and to disallow adding more than one point.

Induction simplified

When first teaching about the interior angles of a polygon I had an elaborate lesson that involved students drawing quadrilaterals, pentagons, hexagons, etc., measuring and collecting data, and finally forming a theory. They’d then verify that theory by drawing triangles inside the polygons and realizing Interior Angles of a Triangle had returned.

I didn’t feel like students were convinced or satisfied, partly because the measurements were off enough due to error that there was a “whoosh, it is true” at the end, but mostly because the activity took so long the idea was lost. That is, even though they had scientifically investigated and rigorously proved something, they ended up taking it on faith because the path that led to the formula was a jumble.

I didn’t have as much time this year, so I threw this up as bellwork instead:

inductioninteriorB

Nearly 80% of the students figured out the blanks with no instructions from me. They were even improvising the formulas. Their intuitions were set, they were off to the races, and it took 5 minutes.
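
(The bellwork sheet itself isn’t reproduced here, but the pattern it builds toward is the standard triangle-decomposition one: an n-sided polygon splits into n - 2 triangles from a single vertex, so its interior angles sum to 180(n - 2) degrees. A quick sketch of the table the students were effectively filling in:)

```python
# Interior angle sums via triangle decomposition: an n-gon fans out into
# n - 2 triangles from one vertex, each contributing 180 degrees.
for n in range(3, 9):
    triangles = n - 2
    total = 180 * triangles
    print(f"{n}-gon: {triangles} triangles, interior angles sum to {total} degrees")
```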

Observations on the SAGE sample exam

Earlier this year I wrote a multi-part series on the PARCC test samples, picking at potential pitfalls and interface issues. This was done with the assumption this would be my state’s new test.

Then (allegedly) the price tag on the bid came in too high, and we went with the American Institutes for Research instead. They have a contract to administer the Smarter Balanced test (so for the part of the US doing that one, this should interest y’all) but the test we will be seeing is customized for Arizona, presumably out of their test banks. This is close to the situation in Utah, which has a sample of what they are calling the SAGE test. Since there is no Arizona sample yet I decided to try my hand at Utah’s.

I’d like to think my approach to PARCC was gently scolding, but there’s no way around it: this test is very bad. One friend’s comment after going through some problems: “I’m starting to think this is an undergrad psych experiment instead.”

Question #1 is straightforward. Question #2 is where the action starts to happen:

sageproblem

Adding a point on the line gets the r-value closer to 1, but with no information on the exact coordinate points (those are nowhere near the grid lines) or the original r-value I believe this problem is impossible as written.
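
(As an aside on the underlying math: a point added exactly on the existing least-squares line leaves that line optimal and typically pushes r toward 1. A small sketch with made-up data, since the actual coordinates in the problem are unreadable; numpy is assumed:)

```python
# Made-up data, not the SAGE problem's. Adding a point that lies exactly on
# the current least-squares line keeps that line optimal and moves r closer to 1.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.3])

slope, intercept = np.polyfit(x, y, 1)
r_before = np.corrcoef(x, y)[0, 1]

new_x = 6.0
new_y = slope * new_x + intercept      # a point exactly on the fitted line
r_after = np.corrcoef(np.append(x, new_x), np.append(y, new_y))[0, 1]

print(round(r_before, 4), round(r_after, 4))  # r_after is closer to 1
```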

sageproblem3

Question #3 is fairly sedate although they screwed up by neglecting to specify they wanted positive answers; (17, 19) and (-17, -19) both work but the problem implies there is only one valid pair. I’d like to draw attention to the overkill of the interface, which includes pi and cube roots for some reason. There seem to be multiple “levels” to the numerical interface, with “digits and decimal point and negative sign” being the simplest all the way up to “including the arctan if for some reason you need that” but without much rhyme or reason to the complexity level for a particular problem.

Case in point:

sageproblem4

The percents in the problem imply the answer will also be delivered as x%, but there is absolutely no way to type a percent symbol in the line (just typing % with the keyboard is unrecognized). So something like 51% would need to be typed as .51. Fractions are also unrecognized.

sageproblem5

Here’s the Common Core standard:

CCSS.MATH.CONTENT.HSA.SSE.B.4
Derive the formula for the sum of a finite geometric series (when the common ratio is not 1), and use the formula to solve problems. For example, calculate mortgage payments.
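
Since the standard name-drops mortgage payments, here is a rough sketch of how the finite geometric series sum turns into the usual payment formula (my own illustration with made-up loan numbers, not anything from the SAGE item):

```python
# Sum of a finite geometric series: a + a*r + ... + a*r**(n-1) = a*(1 - r**n)/(1 - r), r != 1.
# Setting a loan's balance after n payments to zero and using that sum gives the
# standard mortgage payment formula. The loan numbers below are made up.
def geometric_sum(a, r, n):
    return a * (1 - r**n) / (1 - r)

principal = 200_000           # hypothetical loan
monthly_rate = 0.05 / 12      # hypothetical 5% annual rate
n = 360                       # 30 years of monthly payments

# Closed form: payment = P * i / (1 - (1 + i)**-n)
payment = principal * monthly_rate / (1 - (1 + monthly_rate) ** -n)

# Check with the series directly: the present value of n payments, discounted
# month by month, is a geometric sum and should equal the principal.
present_value = geometric_sum(payment / (1 + monthly_rate), 1 / (1 + monthly_rate), n)

print(round(payment, 2))        # about 1073.64
print(round(present_value, 2))  # about 200000.0
```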

sageproblem6

I could linger on the bizarre conditional clause that makes this problem (*why* would one ever need to have a line with a y-intercept greater than one given in table form yet also perpendicular to some other particular line is beyond me) but instead I’ll point out the interface to the right, which is how all lines are drawn. (Just lines: there seems to be no way to draw parabolas and so forth like in the PARCC interface.) To add a line you click on “Add Arrow” (not intuitive naming) and click a starting point and an ending point. Notice that the line does not “complete” itself but rather hangs as an odd fragment on the graph. Also, fixing mistakes requires clicking “delete” and then the line, except if you click right on the line the points do not disappear so you have to repeat delete-click-delete-click on each of the points to clear everything out.

Oh, and the super-tiny cursor button is what you click if you want to move something around rather than delete and add. There was not enough room to have a button called “Move”?

sageproblem7

First off, “objective function” is not a Common Core vocabulary term, and linear programming is not in the Common Core anyway, at least not as presented in this question.

CCSS.MATH.CONTENT.HSA.REI.B.3
Solve linear equations and inequalities in one variable, including equations with coefficients represented by letters.

CCSS.MATH.CONTENT.HSA.REI.D.12
Graph the solutions to a linear inequality in two variables as a half-plane (excluding the boundary in the case of a strict inequality), and graph the solution set to a system of linear inequalities in two variables as the intersection of the corresponding half-planes.

Besides that, the grammar is very sloppy. It should say “the objective function z = -3x + 4y” without lumping it into a set of five statements where the student has to fish around and presume the function meant is the first line, because it is the only one that includes all three variables in function form.

sageproblem8

I include this problem only to indicate how wide the swerves in difficulty of this test are. First linear programming, then a simple definition, and then…

sageproblem9

I came up with sqrt(2) and 2, but notice how the number line only accepts “0 1 2 3 4 5 6 7 8 9 . -” in the input. There is no way to indicate a square root.

Fractions are also right out, so one possible answer that does work (0.25 and .5) is very hard to get to. (I confess I was stumped and needed a hint from a friend.)

sageproblem11

I drove my eyes crazy trying to get the right numbers on the axis to match up, especially on Survey 1 which is not even placed against the number line. I thought the PARCC snap-to-half-grid was bad, but this is a floating snap-to-half grid which means it is very unclear if one has in fact aligned one’s graph with 6.7.

sageproblem12

My average is going to be imaginary. The level of input that each problem allows is again quite erratic.

Incidentally, I found no “degrees” button, which I guess means all arcsines and so forth are supposed to be in radians. (I was taught that arcsin means the unrestricted inverse — that is, it is not a function and gives all possible answers — but they’re using it here to mean the function with restricted domain.)
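
(For what it’s worth, standard programming libraries make the same choice: the single-valued principal branch, in radians. A quick Python check:)

```python
# math.asin returns the principal value of arcsin, in radians: it maps
# [-1, 1] into [-pi/2, pi/2]. Converting to degrees is a separate step.
import math

print(math.asin(0.5))                # ~0.5236, i.e. pi/6 radians
print(math.degrees(math.asin(0.5)))  # ~30.0 degrees
```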

sageproblem13

This (very easy) geometry problem requires a very nasty number of clicks (I ended up using 12) for something that can be done by hand in 10 seconds. With practice I could do it in 30 seconds, but my first attempts involved misclicks. Couldn’t the student just place the point and have that be enough? Why is the label step necessary? How many points are deducted if the student forgets to drag C somewhere semi-close to the point? How close is close enough?

sageproblem14

Since this is a small experimental probability set, I just made sure there were 10 trials. I do not believe this is what the test makers intended.

sageproblem15

Is my letter “D” close enough? I could easily see this being accepted by a human but the parameters of the computer-grader are uncertain.

sageproblem2

This question is extremely vague. What is considered acceptable here? Does it just have to look slightly bell-curvy? Since there is no axis label, one could claim the y-axis maximum is very high and the graph is a normal distribution without clicking any squares at all.

sageproblem16

First, note that it is an undocumented feature that the arrows will “merge” into a point if they are placed at the same position. I was at first confused by this problem because I had no idea how to draw it.

Also, notice how I’m having trouble here effecting slopes of 1 and -1 when I attempt to make the graph look “correct” by spanning the entire axis.

The correct side to shade is indicated by a single dot, which is puzzling and potentially confusing.

sageproblem17

Their logic here is that if the digits repeat, it is a rational number. It took me several read-throughs to discover that the first number does, in fact, repeat. By the same logic, if I wrote

2.718281828…

it would be a repeating number, but of course it is e. The repeated digits should use a bar over them to reduce both the ambiguity and the scavenger-hunt-for-numbers quality of the problem as it stands.
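
(The underlying fact the item is gesturing at: a decimal is rational exactly when its digits are eventually periodic, and a bar over the repeating block makes that unambiguous. The usual conversion trick, using 0.181818… as a stand-in example rather than anything from the test:)

```python
# If x = 0.181818..., then 100x = 18.181818..., so 99x = 18 and x = 18/99 = 2/11.
from fractions import Fraction

x = Fraction(18, 99)
print(x)         # 2/11
print(float(x))  # 0.18181818181818182
```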

sageproblem18

I am fairly certain the Common Core intent is to only have linear inequality graphs, not absolute value:

CCSS.MATH.CONTENT.HSA.REI.D.12
Graph the solutions to a linear inequality in two variables as a half-plane (excluding the boundary in the case of a strict inequality), and graph the solution set to a system of linear inequalities in two variables as the intersection of the corresponding half-planes.

This standard could be stretched, perhaps

CCSS.MATH.CONTENT.HSA.REI.D.10
Understand that the graph of an equation in two variables is the set of all its solutions plotted in the coordinate plane, often forming a curve (which could be a line).

but the intent is for the overall conceptual understanding that graph = solutions, not permission to run wild with inequality graphs.

sageproblem19

I am fairly certain this is computer-graded. I can think of many ways to phrase what I presume is the intended answer (“at the least either a or b has to be irrational”) but the statement is open enough other answers could work (“neither a nor b can be zero”).

It is possible I am in error on something, in which case I welcome corrections. (I don’t know Utah too well — it is possible Utah made some additions to the standards which would nullify a few of my objections.) Otherwise, I would plead with the companies working on these tests to please check them carefully for all the sorts of issues I am pointing out above.

Telling left from right

I had a discussion last week when reviewing slope that went like this:

Student: Wait, how can you tell if the slope is positive or negative just by looking?

Me: Well, if you imagine traveling on the line from left to right, if you’re moving up the slope is positive and moving down the slope is negative.

Student: …What?

Me: (points) So, starting over here … (slides hand) … and traveling this way … this slope is moving up. Starting over here … (slides hand) … this slope is moving down.

Student: But I don’t understand where you start.

Me: You start on the left.

Student: I’m still confused.

Me: (delayed enlightenment) Wait … can you tell your right from your left?

Student: No.

This isn’t the picture that was up at the time, but it’s in the same genre.

Left-right confusion (LRC) affects a reasonably large chunk of the population (the lowest estimate I’ve heard is 15%) but is one of those things teachers might be blissfully unaware is a real issue. (Note that LRC occurs along something of a continuum and affects women more than men.)

My own mother (who was a math teacher) has this problem, and has to use her ring finger whenever she needs to tell her right from her left. She reports that thinking about the graph as “reading a book” lets her get the slope direction correct.