Tales from the BudComm – what, no QA? Part 2

Part 1 here. The question I asked, whether the School District Administration does any kind of QA when a new or replacement program is introduced, is one that I should NEVER have to ask.  Rather, why didn’t the School District Edu-mucky-mucks ask it of themselves?  Failing that, why didn’t the School Board?  After all, aren’t they supposed to be in charge and providing competent oversight, rather than just being (as I have seen around the State) mere rubber stamps for “the professionals”?

Anyways, given the answer that I received from the Superintendent:

For all of our curricular materials in the district we use a variety of information to determine if the material is of benefit to teaching and learning in the Gilford School District. We use internal assessments; i.e. given by the district such as quizzes, tests, projects and other assignments in the classroom. We use external assessments, i.e. SAT, NWEA, AP, NH Statewide Assessment and National Assessment of Educational Progress. These are national assessments that provide information on how our students do compared to other students in the country and how our state does compared to other states. We use our graduation rate as another factor. We like to look at student growth over time, how the students have performed on assessments year to year. We also receive internal feedback from teachers regarding curricular material.

We review this data with the School Board twice a year and on a regular basis with teachers.

At the risk of being told “you got an answer – it just isn’t the one you want”, the above DOESN’T even approach the on-ramp of answering my question.  All of the verbiage points to one thing – after the fact.  REAL QA starts well before the process of even selecting a new curriculum.  It says NOTHING that tells me that any kind of thought went into that important aspect of change.

I ended Part 1 with “Blister, Paint, answer”.

Superintendent Beitler,

I have asked a very simple question: what are the metrics that you use to determine:

  • How much better is Program B than the Program A you wish to replace?
  • What would be the milestones / metrics established before you make the switch, knowing it will vary from area to area?
  • What is the resulting cost/benefit analysis on the outcome (e.g., how many bucks spent to achieve X% overall improvement)? A rough sketch of that arithmetic follows this list.
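
To make that third bullet concrete, here is a minimal sketch of the cost-per-improvement-point arithmetic being asked for. Everything in it is hypothetical: the program cost and the proficiency figures are made-up numbers for illustration, not Gilford data.

```python
# Hypothetical cost/benefit sketch -- every figure below is invented for
# illustration; none of it is actual district data.

program_cost = 45_000            # assumed total cost of adopting Program B, in dollars
baseline_proficiency = 0.62      # assumed Grade 3 math proficiency under Program A
post_change_proficiency = 0.66   # assumed proficiency after two years of Program B

# Improvement expressed in percentage points, then dollars per point gained.
improvement_points = (post_change_proficiency - baseline_proficiency) * 100
cost_per_point = (program_cost / improvement_points
                  if improvement_points > 0 else float("inf"))

print(f"Improvement: {improvement_points:.1f} percentage points")
print(f"Cost per percentage point gained: ${cost_per_point:,.0f}")
```
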

Your answer is, well, just a whole flock of words strung together that appear to make sense – until one really goes back to the above questions and analyzes your answer.  With that said, it is easy to see that all of your verbiage answers NOTHING. At least if I burn 100 one-dollar bills in my wood stove, I can guess how many BTUs it will generate ahead of time.  Then, with the proper equipment and techniques, I can measure precisely what the result was.  Then I can burn other types of paper and see what the delta is for the money spent.  You’re just burning $100 bills.

What you essentially just told me was that “we are spending thousands of dollars – and we have no clue”.  You have no analysis and no prediction of outcome.

Seriously, do you really believe that I am going to believe you that an SAT is going to properly reflect a change in a 3rd grade math curriculum?  Ditto AP classes and exams, or the other standardized tests for the upper grades.  Are graduation rates going to reflect a change from Program A 3rd grade math to Program B?  Of COURSE not – especially since Gov. Lynch signed legislation that students have to stay in school until 18 years old; hard for the district to take credit for that.

So, new question for you – please explain how the SAT, AP, et al., and graduation rates effectively give you objective and measurable results of swapping said hypothetical 3rd grade math programs. I have the last ten years of standardized tests – they aren’t all that great.  What are the proficiency rate splits for ALL of these assessments?  Please accompany the table with chart(s).  I know you keep stats on kids going to graduation – how many of them had to take remedial classes before taking “regular” classes (and I have written about this nationwide before, so I have a pretty good idea of what the percentages are)?

>> We use internal assessments; i.e. given by the district such as quizzes, tests, projects and other assignments in the classroom

Fine – you are measuring student aptitudes and outcomes during the delivery of a class.  Tell me, sir, what did you put in place so that you could directly correlate the changes brought about by Program B against a baseline derived from Program A?  Do you have charts showing the results?  What are some of the predefined metrics that should have been established for previous changes / revamps, and what were the objective, measurable B-to-A changes?  You DO have those, right, over a baseline spanning at least two years?  A couple of charts, backed up with actual data, would be quite helpful.
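
As a purely illustrative aside, here is a minimal sketch of the kind of B-to-A baseline comparison being asked for. The years, scores, and scale are invented; a real analysis would use the district’s actual assessment data over at least a two-year baseline.

```python
# Hypothetical B-to-A comparison sketch -- scores, years, and scale are invented.
from statistics import mean

# Assumed: two years of Grade 3 math scores under Program A (the baseline)
# and two years under Program B, on the same internal assessment scale.
program_a_scores = {2019: [72, 68, 81, 75], 2020: [70, 74, 77, 69]}
program_b_scores = {2021: [73, 76, 80, 71], 2022: [78, 75, 82, 74]}

baseline = mean(s for year in program_a_scores.values() for s in year)
post_change = mean(s for year in program_b_scores.values() for s in year)

print(f"Program A baseline mean: {baseline:.1f}")
print(f"Program B mean:          {post_change:.1f}")
print(f"B-to-A delta:            {post_change - baseline:+.1f} points")
```
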

Otherwise, plainly speaking, you’re just spitting into the wind and wasting taxpayer monies doing it.  Right now, unless you can prove otherwise to me later, you have no rigorous standards in place with which to judge the “next fad” (because without careful planning and metrics, all you are doing is playing “Fad Merry-Go-Round”).

So why would we give you more money?

Your answer is totally unacceptable – a nebulous, vacuous set of words strung together in Edu-babble.

Try again – and show us hard data on pre-change vs. post-change outcomes.  Or at least be honest and tell us you either haven’t done so lately – or never ever have.

After all, it’s our money you’re spending.

Thoughts?
