Sometimes, I just shake my head. This started when we were going over “Federal Projects,” one of which is Title II, whereby the Feds allocate monies to the various States, and the State Boards of Education then allocate their shares to towns and cities based on grant requests from said towns and cities. Briefly put, this is money to “improve learning and school staff” (programs to make their teaching capability better). Involved in this are curriculum upgrades – new courses like, say, a new math curriculum for kids in grades 3-5. From past experience, I know we used to flip courses a LOT, and while I saw the costs, I never heard what the outcomes were (gee, silly me for wondering, eh?). So silly engineer me asked the following question (emphasis mine):
Scott, Kirk, and Steve, I am following up on a question that I asked during the Federal Projects subcommittee (along the lines of “how do we know if taxpayers are getting sufficient bang for their bucks”):
With respect to the “revamp” question, even if no new curriculum is being introduced, I would still like to know what metrics, especially financial and educational improvement outcomes, were used previously. It goes to the heart of the issue of “we’re spending all this time, effort, and other peoples’ money – what did we get for it?”. Can you, or someone else on the team, dig up that information? What were/are the procedures that you use to determine, in a postmortem / multi-dimensional analysis, whether you got it right, broke even, or got a failure (to learn from)?
Any ETA on this? It would be nice to know how it is determined that a new program is worse, even, or better on a results basis than what it replaced. Methodology counts (and costs)!
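For the non-engineers out there, here is a toy sketch of the kind of before/after comparison I mean – a simple difference-in-differences on test scores between a control group (old curriculum) and a pilot group (new one). All the numbers below are made up for illustration; the point is the shape of the analysis, not the data:

```python
# Toy difference-in-differences sketch. Hypothetical scores only.

def mean(xs):
    return sum(xs) / len(xs)

# Baseline (pre) and follow-up (post) test scores.
# Control group kept the old curriculum; pilot group got the new one.
control_pre  = [70, 68, 72, 71]
control_post = [73, 70, 74, 72]
pilot_pre    = [69, 71, 70, 68]
pilot_post   = [76, 77, 75, 74]

# Each group's average gain over the year.
control_gain = mean(control_post) - mean(control_pre)
pilot_gain   = mean(pilot_post) - mean(pilot_pre)

# The new curriculum's estimated effect is the pilot group's
# EXTRA gain beyond what the control group gained anyway.
effect = pilot_gain - control_gain
print(f"control gain: {control_gain:.2f}")   # 2.00
print(f"pilot gain:   {pilot_gain:.2f}")     # 6.00
print(f"estimated effect: {effect:.2f}")     # 4.00
```

A district would obviously want real cohorts, more students, and a cost column next to the effect column – but that's the basic machinery.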
You know, control group and experimental group. Baseline and testing analysis. This should be rather simple stuff, right? And here is the answer I got last night from the Superintendent (again, emphasis mine):
For all of our curricular materials in the district we use a variety information to determine if the material is of benefit to teaching and learning in the Gilford School District. We use internal assessments; i.e. given by the district such as quizzes, tests, projects and other assignment in the classroom. We use external assessments, i.e. SAT, NWEA, AP, NH Statewide Assessment and National Assessment of Educational Progress. These are national assessments that provide information on how our students do compared to other students in the country and how our state does compared to other states. We use our graduation rate as another factor. We like to look at student growth over time, how the students have performed on assessments year to year. We also receive internal feedback from teachers regarding curricular material.
We review this data with the School Board twice a year and on a regular basis with teachers.
Now, I have multiple thoughts (and “delivery” styles) that I am contemplating using. However, I wish to “outsource” this question to you: based on your ideas of what the answer should be, how should I respond?