Two things happened over the last week that got my blood pressure up. First, my son got his PSAT scores back. For those who are not familiar with the PSATs, these are the preparatory tests given a year or so before students are expected to take the SAT. Now, my son did well — not great, but well. What really upset me was his impression that he wouldn't get into school, or at least the school he wants to attend, because of one test. He asked the same question I have been asking for the past 40 years, ever since I took my own PSATs: how can doing the work FASTER mean that you are SMARTER? As he said, "They didn't test if I was a good reader, only faster."
Meanwhile, a project I have been following is finally ready for implementation. The developers were very proud of the way the pre-test and post-test looked, with explanations given when a user gets a multiple-choice question wrong. However, the project's goal is to change attitudes and professional culture. So how can a pre-test/post-test multiple-choice instrument with the "correct" answers assess those attitudinal or cultural changes? More importantly, how can we expect such changes over the course of a week? Doesn't that take time?
I keep reading about the changes in Instructional Design: changes in the tools we use, even changes in the theoretical basis of instruction. Yet we still fall back on the old measures. This is like using a magnifying glass to verify nanotechnology research. We simply can't see everything that is going on.
A New Way to Assess Complex Learning
I have written previously about the need to change assessment. I especially liked Ewa's expansion of my original idea:
test those who were not present in the training but worked on a project/assignment, along with the ones who were. Assessment after 3-6 months of the project and the methodology used should be team-based: teams assess each other in front of the other teams that participated in the training. This way you'd see the advantages of networking and flow of information, as well as an assessment of how useful your training was: if people didn't use the methods, you might want to reconsider its adaptation for the organization. I think this should be the model for business school or any other applied science education. Project-based companies that want to practice continuous learning could also apply this model.
In terms of the SATs, why must the tests be timed? If it is so important that students be ranked by speed, then students need only finish the test, then punch in when they are done so a completion time is recorded. Or is the time limit just there to generate artificial pressure so schools can see who performs well under it? And shouldn't the SATs include measures of the use of tools? I don't mean to imply that those who do well on the SAT will not do well in college. But many of my "best students" come ill-prepared for the college classroom because they are used to doing only what they are told to do. They are also paralyzed when they have to deviate from the written material and offer their own insights. Education is becoming much more complex, and the assessment tools should be as well. Why isn't there a movement toward standardized portfolio assessment as a means of measuring high school student work (as NAEP does)? I wonder if this would be a better measure of student preparedness.
I hope that students realize there are alternatives for getting into good colleges. How many students are encouraged to enter national academic competitions (often this is reserved only for the "top students," those who test well)? Surely demonstrating your ability in a national writing competition or science fair, or even having an article published on a topic the student cares about, counts for more than a timed multiple-choice test.
My hope is that the schools my son is interested in look at the overall picture. But from my own personal experience, I know this doesn't happen. It is easier to cull the pool of applicants using a superficial mechanism when you have 10,000-20,000 of them. However, I am proof that there are many ways to achieve the same goal. Although it has taken many years and many rejection letters, I have straight A's and am at the end of my Ph.D. program, with only the data analysis and my committee standing between me and my degree.
I also would like organizations to stop looking at employees as nothing more than computers that just need the correct "programming" (training) to spit out the desired output (testing). There is a lot of untapped potential, and until companies learn how to measure it, they will be wasting resources on busy work. They need only look at Coke's use of this old "numbers" model in developing New Coke, without really looking at the complexity of customer behavior.
And for all of you out there who say this is too much work: in my 17 years of teaching at the university level, I have never written a multiple-choice test; the only ones I ever gave were imposed on me by my organization before that. I have had up to 50 students in a class where I was the sole teacher, and still I used essay exams or projects. It can be done.