Chapter 4 of Popham’s book brings to light a few things that I doubt many of us consider when writing our own tests, giving a test from a book, or preparing for our state-mandated ACT administration. The chapter focuses on three primary areas: the validity of a test (or, more importantly, the validity of the test-based inference), the reliability of the test (can it consistently provide similar results?), and bias (is there anything that offends or unfairly penalizes test takers?).
I really like the idea of being able to ascertain the content-related validity of a teacher-written exam by using a content specialist to assess your assessment. As Popham states, you need not do this on every single exam you write, but I do feel the process has merit for final exams and possibly even unit exams. You want to ensure you are making quality inferences about your students’ ability when those inferences are based on a teacher-constructed exam.
* “The chief method of carrying out such content-related validity studies is to rely on human judgement.” (p. 47)
* “Valid assessment-based inferences about students don’t always translate into brilliant instructional decisions; however, invalid assessment-based inferences about students almost always lead to dim-witted or, at best, misguided instructional decisions.” (p. 51) – I don’t know how I feel about ‘dim-witted’, but point taken
* “A teacher who is instructing students from racial/ethnic groups other than the teacher’s own racial/ethnic group might be wise to ask a colleague (or a parent) from those racial/ethnic groups to serve as a one-person bias review committee.” (p. 58)
Hope you are keeping up… we have a meeting this Thursday, and we will discuss Chapters 1-5 (I will post the Ch 5 blog later today or tomorrow).
“Content specialist review” made me think of real time for teachers who teach the same content to peer review assessments – something I know the science department has missed this year. Additionally, I was just listening to NPR and heard that people tend to live up to stereotypes of which they are aware. Case in point: girls who are asked to give their gender on a math test do worse than comparable students on comparable tests. I have not fact-checked this, but it definitely makes you think.
Throughout our book chapter readings, I have become quite amazed at how much more thought I need to put into my assessments so that I can draw informative inferences about student progress from them. More importantly, I am really starting to wonder how valid my assessments are. A review is definitely needed, and establishing common assessments within each department begins the process of making those assessments more valid and reliable.