If you talk to teachers about marking student work, most will tell you it is their least favourite part of the job. In many cases, evaluating student work is seen as a means to an end, and that end is a spreadsheet full of numbers that can be synthesized into a “grade” on a published report. Consequently, for many educators at all levels, assessment of student understanding and mastery of core knowledge and skills is the part of the teaching and learning process that gets the least time and attention. It is seen as a necessary “evil,” frequently treated as drudgery, and all too often done on the couch while watching television. Having said that, children, parents, and the receiving schools, colleges, and universities trying to gauge student ability and performance depend upon the numbers this process generates to give them an accurate picture. I once, briefly, had a teacher working for me who felt that if he simply gave all of his students high marks, he could avoid any questioning of his teaching or his students’ achievement. Besides giving little meaningful feedback, he also demonstrated the weak point of much student assessment: it is subjective, and it is disproportionately dependent upon the skill and professional integrity of individual teachers.
Assessment is a tricky thing. To be effective, it needs to be based on clearly articulated outcomes, valid methodology, and criterion- or rubric-based evaluation tools. It also needs to be validated against external standards or benchmarks to ensure that a student’s stated performance in one classroom or school is a dependable predictor of their future performance elsewhere. This external validity is of critical importance not only to parents and students, but to educators as well.
Recently, the annual debate about the Province’s FSA (Foundation Skills Assessment) testing for Grades 4 and 7 has re-emerged in the media. The detractors decry the testing as a narrowly focused tool for school and teacher assessment. In a sense, they are correct. As a stand-alone measure, the FSA does not provide reliable, comparable school-to-school or classroom-to-classroom data. To see it as such is a misuse of the information.
On the other hand, as a snapshot of student performance and a measure of skill development in language arts and mathematics, standardized tests such as the FSA, the Canadian Test of Basic Skills (CTBS), and DIBELS, which we also use at KGMS, are a critically important tool not just for looking at how our children are doing, but for informing instruction and benchmarking school assessment standards. Over the next month, many of our students will be completing the FSA in Grades 4 and 7. It is our plan to use the resulting information to measure their progress; to identify peaks and valleys in their performance; and to improve our own practice in order to meet their needs more effectively.
At KGMS, the lines sometimes blur between achievement measured against our programme expectations and the norms outlined in provincial guidelines. This kind of external measure helps us to bridge that gap and ensure smoother transitions both ways for our students.