As my previous posts have discussed, I am part of a training project for the flipped classroom. As part of the training, my teaching was observed and the students filled out an anonymous evaluation of the class. In reviewing their comments and feedback, I began to think that perhaps I had made a mistake in being part of the project.
I am currently looking for a new job for next year. As part of the application process, most schools ask for teaching evaluations. However, I fear that the teaching evaluations for this class will be less than stellar due to the gap between student expectations of what a good teacher is and the way the flipped classroom defines it. Even the questions on the evaluation ("does the instructor explain the concepts well?") are geared towards evaluating the traditional classroom. In a flipped classroom, the instructor does not lecture, which is what many students equate with "explaining the concepts."
My Goals in the Flipped Classroom
I teach two different courses this semester. In all, I teach 5 courses regularly for the department. However, the one course I chose for the flipped classroom, group communication, has been plaguing me for the last 2 semesters. While the students enjoy the hands-on classroom activities, many fail to make the connection between the readings I assign and the activities. As such, for this class, I wanted to create a stronger link between the readings and the class activities.
Unlike many of the others in the project who were trying to loosen teacher control of student learning, I needed to tighten control (by making the link more obvious) while still allowing student direction of learning within the classroom. The first step was to rewrite my goals so they reflected the messiness of the course content while also indicating its importance to the students.
For example, I added the following goals to the course syllabus:
1) Challenge assumptions about effective group processes and communication
2) Apply communication skills (written, oral, and non-verbal) and processes in multiple real world group (especially small group) settings
3) Develop numerous communication strategies in order to participate and contribute to group processes and products in both professional and academic work environments
4) Understand and analyze basic communication research and studies. Learn to collect, analyze, and use communication data in order to thrive in difficult social environments, workplace group problem-solving activities, and participation in dysfunctional teams.
The next step was to create a mechanism that both motivated students to read and provided some framework for their learning. I did this by creating a question of the day. It turns out that my ineptitude at creating good multiple-choice questions actually worked to my advantage in developing my students' critical reading skills. One reason I always gave essay questions was that my multiple-choice questions were often too open to interpretation. However, these are exactly the type of questions that create a good environment for discussion.
Difficulties for contingent faculty
These ambiguous questions of the day, however, are problematic for students who have been educated in the NCLB (No Child Left Behind) environment. They have been taught that there is only one answer and a set process. In addition, teachers in K-12 are assessed based on their ability to get their students to understand what the "correct" answer is.
In addition to coming into class with assumptions about how they will be assessed, students make certain assumptions about the teacher based on their ability to "get the question right." In other words, if they get the question wrong, it is because they weren't taught or they didn't study. However, often the case is that there was a difference in interpretation of either the reading or the question. Students who can argue for an answer other than the one I give, supporting it with information and evidence from class and the readings, demonstrate a deeper understanding of the material. Unfortunately, students interpret this as "the teacher doesn't know what she is doing" or "the teacher is not teaching us." The students want a definitive final answer.
One of the other problems I have in this class is the use of team-based learning at our university. Many of the students are part of teams in which faculty support learning and encourage support through the team structure, as part of the concepts of team-based learning. However, my class is not based on team-based learning even though there are group activities. In fact, my class is experiential, which means I am hoping for conflict, social loafing, and groupthink so we can discuss these issues as they pertain to group communication. Students don't like these ambiguous learning environments, and this can be reflected in course evaluations.
So, as the number of contingent faculty grows, new ways of teaching and a shift toward more challenging skills (such as critical thinking and problem solving) will be more difficult to implement, especially if we don't start creating new ways to measure teaching effectiveness. I have yet to see a job application that provides space for a digital portfolio of student work, faculty research, or blogs/social media. For most colleges and universities, teaching effectiveness still means student evaluations and supervisor observations (which can be difficult to obtain unless you take part in a pilot program). The only way to change the learning/teaching culture in universities is to change the way potential faculty are evaluated.
About Me
- V Yonkers
- Education, the knowledge society, and the global market, all connected through technology and cross-cultural communication skills, are what I am all about. I hope through this blog to both guide others and travel myself across disciplines, borders, theories, languages, and cultures in order to create connections to knowledge around the world. I teach at the University level in the areas of Business, Language, Communication, and Technology.
Saturday, January 9, 2010
Should feedback be anonymous?
Imagine this evaluation for a student:
John is one of the laziest students I have ever had. He sat in the back of the class and slept through most of my presentations. When I called on him, he simply answered, "I don't know." Not only is he lazy, but he's stupid.
Compare this to one of the comments I received on my teacher evaluation.
The instructor did not know anything about the topic. When I asked her a question in class, she wasn't able to answer it so I could understand it.
In both cases, the feedback is not really useful. An instructor would be slammed for writing this type of comment on a student's report card, if they were even asked for feedback on the student (which does not happen at the university level). In both cases, it would be important to have more information about both the writer and the person being evaluated. What is their relationship? How long have they known each other? How well was the student doing in the class? This would at least give some context.
More important, however, is that the person receiving the feedback should be able to critique the feedback and determine whether or not it is justified.
Who is responsible for learning?
As you can probably tell, I received student feedback for my fall courses this semester. In addition to this feedback, which I like to evaluate and use to help direct me in preparing for my current courses, I also received direction from one school on what our syllabi should include, such as:
- school vision and learning goals,
- university vision and learning goals
- attendance policies
- policies for making up tests
- policies for late assignments
- accommodations for students with disabilities
- teaching methods
- classroom atmosphere
- work load expectations
- academic integrity
- grading policies
- cell phone use
There are two reasons for this level of detail in a syllabus:
- As opposed to when I started teaching in college (when I was one of the few to have a syllabus of more than one page, and some teachers had no syllabus at all), the syllabus now IS a contract (not just something considered LIKE a contract) and can be used in a court of law.
- New federal laws and guidelines passed in 2008 (an NCLB for higher ed) require that students be given this information. Rather than the administration spending time and money covering this in orientations (which are often not required, or which students may be unable to attend), it is incorporated into the classroom. While this ensures that students are informed of their rights, it also puts legal responsibility on the instructor and gives the instructor even more administrative work to do (so why aren't instructors paid twice the salaries of top administrators?).
Student learning outcomes
This can have very disturbing consequences for how we are preparing students for the workplace and for students' work ethic. As students are given detailed instructions on how to learn, and expect only to be assessed on those "standards," they will not stretch beyond what is given to them. Why should they?
Likewise, assessing the "softer" skills will be more difficult and could result in a lawsuit. This means that all creativity will be knocked out of the students. Expect to see creativity in the workplace from those WITHOUT education. The entrepreneurs of tomorrow will probably be the drop-outs of today, because the educational system has not supported the expansion of ideas.
Interestingly enough, I see our society becoming more like those of Eastern Europe during communist rule. While working in Hungary, I found that the most successful workers were those who were used to doing what they were told to do, always deferring to the person above them (as they would get the blame). The entrepreneurs in Hungary were those who had been unsuccessful in school and the civil service.
Anonymity in evaluations
This brings me back to the question in my title. In the former system, universities implemented student questionnaires to get a feel for how students perceived their learning. As most courses were not standardized, one measure of effectiveness was student satisfaction. However, because a student might not feel that the evaluation was "objective" and might worry that their grade would be affected by giving frank feedback, most schools made these evaluations anonymous. Of course, back when these were first implemented, students were not used to giving teachers feedback. Instructors were still perceived as "gods," and students were reluctant to give critical feedback.
Additionally, most of the evaluations were handwritten (as were assignments). A good teacher could still determine who wrote what, if they were engaged with their students. I often was able to contextualize the comments based on who the student was who wrote them. Because I received the comments long after grades were due, even if I had been unprofessional enough to want to, I couldn't have changed the student's grade. I received some valuable feedback through these evaluations, especially if I was able to figure out who the student was. In some cases, I was also able to disregard the comments, knowing that a certain student would never like my teaching style.
Now we come to the age of computerized evaluations. First, this means that students self-select whether they will fill out the questionnaire. When it was given in class (the teacher leaving the classroom, but college staff administering the evaluation), more students were apt to participate, even if they were indifferent. With the evaluations online, the indifferent student is less likely to take the time to fill it out. Second, there is no context anymore with the evaluations. I am given a summary of the results and a summary of the comments. So did the student hate everything about my class, or just the fact that they had to take the class (they would have hated it regardless of the teacher--typical for public speaking classes)? These disembodied comments make it difficult for me to determine what I could have done better. Finally, these evaluations are now used to justify the hiring or retention of faculty members, the "effectiveness" of a college, and even strategic planning.
However, my students have never felt intimidated about expressing their displeasure with my class. I often hear complaints about how my assignments are "subjective" (I think the word they are really looking for is complex), how they have too much work to do (even though I outline my work expectations at the beginning of the course, and this does not change), and how other classmates aren't as smart, hardworking, etc. So why should they worry about putting their name to a course evaluation? Especially if the instructor will not see the evaluation until after final grades have been given?
So back to my opening comment. What would happen if students were not given individual grades for each course, but instead each student was evaluated (anonymously) by classmates on skills such as critical thinking, class preparedness, content mastery, communication skills, and the ability to express themselves in text and technology? Each semester, these evaluations would be aggregated into a grade, and the comments would be summarized. At the end of four years, the evaluations would be sent to the school to review and decide whether the student qualified for a degree. In addition, these aggregated scores would be sent to all potential employers. What kind of uproar do you believe would result? Wouldn't students want to know who graded them, how, and why? Why is this different for instructors?
Tuesday, December 15, 2009
Assessment and preparing for the 21st century
As I try to finish up my grading, issues of assessment and evaluation keep coming up.
Some of the questions/problems/issues I am dealing with include:
Should we be using rubrics? Don't they limit creativity and motivation? My students will only do what is on the rubric. I am restricted to grading them only on the rubric. So some students who do a mediocre, non-creative job may get the same grade as (or a higher grade than, because I used the rubric) those who were more creative, not doing exactly what I outlined as expected but addressing the problem in a whole new way.
If this is what we are training our future workers to do, we will have the "best" workers (those with the highest grades) doing only what their managers tell them to do, thus stifling the possibilities and creativity of our workers.
Should we be telling our students our expectations for every assignment, giving them detailed instructions? How many times have you received detailed instructions on the job? Shouldn't we be developing the skill of negotiating "outcomes" with our superiors? Situations change, factors we can't control require us to change outcomes, and there might be a disconnect between strategists and front-line workers. Companies would benefit if workers and managers began to communicate about outcomes and be open to the changing environment.
If graduates only expect to do the work that a manager has outlined for them, an organization loses incentive, ideas, and reality checks from both those at the top and those on the front line.
Should all grading be "fair"? How is "fair" defined, and aren't there instances when a standardized means of evaluation is in fact "unfair"? I have students who excel in class. They are engaged, apply concepts, or take ideas to new levels. But they don't "evaluate" well. Neither papers nor tests get at the level of understanding that I see in class. They do the work, but it does not translate into their grades. Is this fair? They will be great employees one day. In addition to their academic work, they are good team players, contribute to the class, have a good work ethic, act responsibly in class, and add to an overall good environment. Others who test well are overly critical, stifle others' ideas, create a negative environment, and really don't do much work, but are good writers and/or good test takers. I'd rather have the ones who don't grade well than those who do. So how fair am I in my assessment?
So, a word of warning to all those in training and HR departments: make sure you are using other measures to identify ideal employees. GPAs don't really reflect a student's potential. And a word of warning to educators: we need to re-evaluate how we assess in the current climate of "standardization." We may be doing our society a disservice.
Wednesday, October 7, 2009
Dumbing down in the World? Defining "smart"
Let me start with a question to reflect on. Make sure you think about it before you read on. Think of the teacher who had the greatest impact on your learning throughout your life. How did they influence you? What did they do to influence your learning? What characteristics did they have?
Now read on.
I recently had a conversation with my adviser about "good teachers." He begins his introduction-to-teaching course by asking his students to reflect on these questions. This conversation came back to me as I read a post by Ken Allen on "Is the world dumbing down?".
In the post, he has a clip of Branford Marsalis speaking about what he has learned from his students. Basically, he says he has not learned anything, as his students just want to be told they are good and don't want to work hard to learn. My adviser also mentioned how one of his students felt that much of teacher education focuses on the "touchy-feely stuff" and not on learning outcomes.
This got me thinking about which teachers had influenced me the most. There were four teachers who had the greatest impact on my learning. One thing they had in common was that they challenged me, but always let me know they had confidence that I would meet any challenge.
Miss Relation was my reading and 3rd grade teacher. I can still remember how impressed she was with how well I did with multiple-digit multiplication and complex math concepts (such as sets). As I learned to read, she would always reinforce it with, "I knew you could do it. See?" But she would never take "I can't" as an excuse. She was in it with me, guiding me, confident that I could do it.
Miss McDonough was one of the toughest teachers I had (5th grade), but when I accomplished something, she would let me know how proud she was that I stuck with it and was able to master it. At no time would she give up on any student. You would achieve her high standards or she (and possibly you) would die trying. Her utter confidence in every student (I never heard her say a negative thing about a student: never that they weren't smart, that they were lazy, or "what were they thinking?") made you want to show her you could do it.
My middle school math teacher was the first to let me know that I was really good at math...during a time when women were not expected to be good at math. In everything we did, he would point out the good job I had done. This confidence in me made me confident in myself, and I excelled in math as a result.
Finally, one of my professors in graduate school allowed us to co-create our own curriculum. I loved this class: the students found the readings and presented the content, guided by very insightful questions from the professor. He treated us (master's and Ph.D. students) as knowledgeable students he could learn from. Some of his questions would make you stop and think (and sweat if you weren't prepared). He was very low key and respectful of the students, which made you want to do the best you could. I still remember many of the discussions we had in the class, and the project I worked on (tariffs and counter-tariffs for the steel industry).
Another trait all of these teachers shared was that they knew ME and what I needed to learn. They did not use a cookie-cutter approach to teaching; they took the time to find out what I knew and how I thought, and then used this to help me learn better.
What I learned from poor teachers
Likewise, the teachers I look back on with humiliation and anger, even to this day, taught me what a good teacher does not do.
As I mentioned in my comment to Ken:
"Dumbing down" is in the eyes of the beholder though. What is important is that in the US at any rate, we have begun to classified "smart" or "knowledgeable" as being able to take standardized tests about basic facts (i.e. math formulas, defining terms, and writing in a standard format regardless of audience or purpose). We have also relegated anything outside of math, science, and technology as "fluff" and not real knowledge.
The teachers that impeded my learning only looked at the standards and never bothered to look at what I actually knew. They also had a very narrow view of what "learning" and "knowledge" were, and then labeled those outside of those norms as "not quite smart." I can remember being moved from the "smart" reading group to the "slow" reading group in 1st grade. The major problem was that the teacher taught reading in one way only, and those that did not learn that way were then labeled "slow." It was humiliating for me, and I lost all confidence in my studies. She always made it known who the "good" students were and who the "bad" students were.
These teachers also tended to stick to the curriculum and book learning only, with no abstract or creative activities in the classroom. You did ONLY what the teacher wanted you to do, or else you were a poor student. I remember a home economics teacher telling me how disappointed she was in my cooking class because I didn't follow the recipe exactly. My classmates all liked my changes (for the most part; sometimes they ended in disaster, though), but I did not "follow directions."
Finally, the most difficult teachers, the ones who really turned me off to learning, were those who seemed to exert their power over me as a student. They always had a way of making sure I knew they were in control and knew more than I did, so I should not ask questions of them or interrupt their class flow. In fact, years later, I realized that they did not like me to ask questions because they probably did not know how to answer them.
Dumbing down the World? Or a new way to assess learning?
In some ways, I do think that we are "dumbing down" in the world, but not in the traditional sense. I don't think that a grade these days is complex enough to assess a student's learning. I don't think that many of the teachers from whom I learned the most (I can still remember many of the lessons 30-45 years later) would be able to keep up with the "testing." In fact, some of my daughter's teachers who had the qualities I look for in a good teacher were considered "poor" by some parents because their students enjoyed school and the kids did not have enough homework at night (even though their students tested high on standardized tests)!
The new educational reforms in the US still focus on these simplistic quantitative tests and on pitting teachers against students and parents. I have just read about community schools, however, which I hope will create a new educational environment that is based less on numbers and more on learning.
Friday, July 3, 2009
Example of assessment part 3: Online course
Below are the assessments I used for an online course in distance education. When I create assessments for distance learning, I take a more systematic approach, as assessments act as a means of dialogue between the students and myself and among the students themselves. Not only do I use assessments to check student learning against the desired course outcomes, I also use them to check whether the course is fulfilling the student's needs (which might be different from what the institution requires), to create a class community, and to help students identify their own needs and strengths and create a learning agenda for after the course.
As a result, I use multiple media and multiple measures for assessment, including teacher assessment, student reflections, peer reviews, group assessments, discussions, blogs, student projects, annotated bibliographies, and group projects. One thing that is missing from my assessments is standardized tests.
Below are the evaluation criteria I gave students:
A. Students will be expected to participate and contribute to online discussions, both within SLN and on outside assigned discussion boards (Yahoo, Googlegroups). Students will be expected to demonstrate they have done the readings by citing pages and ideas from the readings and applying those concepts to the discussion activities. Minimal discussion requirements are included in the discussion instructions. This will earn students a “B-”. Students that want a higher mark will be required to post more frequently and include quality postings (a discussion of quality postings is included in the instructions for discussions in Module 1). Maximum 40 points per module.
B. At the end of each module, students will be given time to reflect on the module’s instruction and their learning as a journal entry. They will be given some guiding questions in the journal section and asked to answer the questions and evaluate their performance in class for that module. Students may also post other issues that they find of importance for that module. Maximum 40 points per module.
C. Students will work on designing a distance learning or outreach module with other teachers. As part of the process, students may be working with other faculty at a distance. A series of preparatory activities and the journal questions will be used to help guide students through the process. Ten percent of the grade will be based on the preparatory activities and 10% of the final grade will be based on the final product. Maximum 200 points
D. Students will write an evaluative annotated bibliography using 5 resources from the resource list posted on the course website and 5 additional resources (peer reviewed article or academic book) which are related to their module and research interests. In addition to the APA style citation, the annotated bibliography should include a short summary of the resource and how it relates to their module topic. Maximum 15 points per citation.
E. Each student will write a reflective paper on the instructional design choices they made for their distance learning module. The paper should include research and references which support their choices or explain their approach. The paper should be 10-14 pages double-spaced, using APA style. Maximum 250 points.
Note the level of choice students are given, yet there are also established standards (e.g., APA style) that are required by the department and profession.
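To make the arithmetic of this point scheme concrete, here is a minimal sketch (in Python) of how the components could add up to a course total. The number of modules, the sample scores, and the helper names are my own illustrative assumptions, not part of the actual syllabus.

# Illustrative sketch only: totals the point maxima described in items A-E above.
# ASSUMPTION (not stated in the syllabus): the course has 5 modules.
NUM_MODULES = 5
NUM_CITATIONS = 10  # 5 from the resource list + 5 additional (item D)

MAX_POINTS = {
    "discussions": 40 * NUM_MODULES,               # A. 40 points per module
    "journal_reflections": 40 * NUM_MODULES,       # B. 40 points per module
    "design_project": 200,                         # C. preparatory work + final product
    "annotated_bibliography": 15 * NUM_CITATIONS,  # D. 15 points per citation
    "reflective_paper": 250,                       # E.
}

def course_percentage(earned):
    """Percentage of the total available points a student has earned."""
    return 100 * sum(earned.values()) / sum(MAX_POINTS.values())

# Hypothetical student scores, one entry per component above.
earned = {
    "discussions": 170,
    "journal_reflections": 180,
    "design_project": 175,
    "annotated_bibliography": 135,
    "reflective_paper": 220,
}
print(course_percentage(earned))  # 880 of 1,000 possible points -> 88.0

Under the five-module assumption, the components total 1,000 points, so the ongoing discussion and journal work would carry roughly 40% of the grade, with the remaining 60% tied to the design project, annotated bibliography, and reflective paper.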
In addition to the above evaluation criteria, I include the following at the beginning of each discussion area (entitled "Instructions for discussion"):
Quality discussion responses
A high quality response contains information from the textbook or another valid source, applies a concept from the text or course in a meaningful way, or facilitates understanding of the course material or topic. This could include posing questions, clarifying others' ideas, giving alternative viewpoints or interpretations of the same reading passages, giving real-life examples that apply the concepts, or citing additional resources. Responses such as "I agree," "Good question," or "Good answer," or any response that is just an unsubstantiated opinion, may add to the discussion but will not be evaluated as part of your discussion grade. Any response that is carelessly typed, poorly thought out, grammatically incorrect, or confusing, or that is disrespectful of another student or any other person, is not acceptable.
Netiquette
As the discussion is of a public nature, please observe proper "netiquette" -- courteous and appropriate forms of communication and interaction over the Internet (in online discussions). This means no personal attacks, obscene language, or intolerant expression. All viewpoints should be respected. Because there are no communication cues, such as a smile, eye contact, nodding, or tone of voice, to help identify when someone is jesting or being sarcastic, be careful not to take offense if a comment is misunderstood or misinterpreted; instead, ask for clarification of its meaning. Likewise, expect that others might misinterpret or be insulted by subtle humor, so reread what you intend to post to make sure there will be no misinterpretation of your intentions. Emoticons can be used to express your intentions.
This sets a standard set of criteria on which I evaluate the discussions. I find it difficult to "grade" each discussion post, so instead I grade holistically, giving each student a discussion grade after each module. In some cases, students may post a few very high quality posts or a number of very thought-provoking questions, both of which are evaluated highly. Other times, they may post frequently but superficially, in which case they are not evaluated as highly.
In developing discussion questions, I make sure students are required to apply reading concepts and class activities to their own situations and/or class problems. Below is an example of the discussion question I used to evaluate their understanding of distance learning assessment:
Work through each of the activities before you start this discussion! Since you will have more than three weeks for this discussion, I would not expect a high level of participation until April 14-26.
Analyze each of the activities listed in the activities section above using Philip's design dimensions. How would you categorize each of those activities? Which activities/design dimensions were you most comfortable doing as a student? As an instructional designer/teacher? Which were you least comfortable with as a student? As an instructional designer/teacher? Which do you think would be most relevant for your students? Why? What type of support do you think would be necessary for the activities above? Why? Look through the work of at least 3 other students who were not part of your team and discuss the types of problems they had and how you would have supported them as an instructional designer.
The following are instructions for peer review of students' projects:
Attach a draft of your project for the class to review and give feedback on. Remember when you are giving feedback that the author of the draft 1) is probably still working on it, so it is a work in progress, 2) has put a lot of time and thought into the project, 3) might want you to look only at certain aspects of the project, 4) may have a different teaching situation than your own, 5) is posting this for constructive feedback (saying "this looks good" is not enough; explain why it looks good and what you like about it; likewise, "you need to change X" is not enough; explain why you feel there needs to be a change, such as "I wasn't able to understand what you wanted the student to do in X"), and 6) can't see your face (you might want to use emoticons).
Notice how I try to create boundaries for the type of feedback peers should be giving. This is especially true for online assessment as there are no social cues that will help to temper constructive criticism.
Below are the instructions for the project required by the students. Notice how, again, they are given a great deal of choice, yet within a very structured framework.
Each module, students will be asked to submit a different part of what will become their final project. Each student will be expected to develop a distance learning module. While more than one student may collaborate on the distance learning activity or module, each student will be expected to tailor the module to their own situation (in other words, there need to be some components that are their own original work). The module will follow this format:
Module Name
Author Name and Date Prepared
Target Group: Who will use this module (students, teachers, institution, general public); academic domain (e.g., ESL, science, history); level of education (elementary, secondary, higher ed, adult).
Institution(s): A brief description of the institutional context where it will be used, including location(s), mission or goals, relationship between institutions if there are multiple locations, and institutional structure (including required approvals and resource allocation).
Technology: List primary and secondary (or back-up) technologies that will be used. Include whether these technologies are currently available or will need to be procured by the students or institutions. Also include technology support that will be available including help desks or websites.
Module Description: Include a summary of the purpose of the module and how it will fit into the curriculum and/or standards and any prerequisite requirements
Module Goals and Objectives
Dates and/or Activity Schedule:
· Identify due dates and/or time frames for activities. Specific dates are not necessary (i.e. you may use Day, Week, or Month 1).
· Identify and briefly describe module activities
Module Content: Include auxiliary information such as readings (in PDF or Word files), CDs, websites (addresses), video or audio clips (web address or on CD), scripts, or accompanying manuals (e.g., for video conferencing).
In class activities: How will this module be integrated into your course or class? This section should be different for each student, even if you are collaborating with another teacher. It may include discussion questions unique to your group, separate activities, or separate assignments.
Evaluation: Method and basis for evaluating achievement of module goals and objectives. These might include a grading rubric, module evaluation by students, and/or end of module assignment or exam.
Supporting documents: This includes websites, worksheets, or articles for teachers.
Some samples of previous modules are posted in the shared reference area and on the course website.
Another aspect of this project was the submission for feedback by both the instructor and peers and the negotiation of standards based on that feedback.
Sunday, June 28, 2009
Example of assessment part 2: Blended course
I taught an intensive course on computer-supported writing across the curriculum. This was a one-week course (8 hours a day) followed by 2 weeks of independent work. In this class, I had students ranging from primary to adult education, from the language arts, history, science, and foreign language areas. In addition, the majority of the students were working professionals.
It was important, because of the diversity of the class, to incorporate variety and choice into the assessment tool. In addition, because the course was part of an educational technology institute, it was important that students demonstrated some technological ability, but ability appropriate to their own situation. For example, if a student worked in an environment in which certain technology was blocked, it would be useless to have them demonstrate the use of that technology, as it would be irrelevant in their work. Finally, in all of my assessments for education classes, it is important that students demonstrate their understanding of WHY they make choices and are able to justify them with research.
The following is the assessment tool I used for that class:
E-portfolio: (50%) Students will need to demonstrate their understanding of the course concepts by putting together an e-portfolio of their own completed examples of work they did in class (while we might begin the work in class, pieces may not be completed until after the course is finished). The portfolio should include a finished piece that demonstrates: 1) CSW (computer supported writing) that develops communication skills, 2) CSW that demonstrates writing to learn, 3) collaborative CSW, 4) CSW appropriate for your level of teaching and discipline (or a discipline you are interested in), 5) research or data collection in CSW, 6) an analysis of CSW technology, 7) an example of hypertext, and 8) a CSW assessment tool. In addition to the completed pieces, students will need to include explanations of how each of the pieces meets the criteria for each required element (e.g. what makes a piece a hypertext and how does your finished product meet that criteria). We will discuss this further in class (separate handout and rubric).
Learning Blog: (25%) Students will need to reflect on class discussions, activities, and required readings for each day (both readings due before and after the class) and write a blog that addresses each day’s questions (listed above). Students should label each post with the day and topic, with a total of 5 separate posts. The blogs will be used to evaluate your understanding of the course concepts AND readings; therefore, it is important that you reference the readings in your reflections.
Project: (25%): You will be given some time in class to work on this; however, this time might not be sufficient to complete the project during class.
Option A: Students can put together a CSW project that can be used in their classroom. This might include a lesson plan integrating CSW software, the development of CSW software or a website, a wiki or blog that outlines guidelines or compares CSW software attributes, a prototype of an OWL (online writing lab), or the design for a research project on CSW. In addition to the project, students will write a two- to three-page justification for the project and its design, based on the readings.
or
Option B: Students may conduct a literature review on a topic in CSW and write a summary (6-10 pages) of major findings, issues, and gaps in the literature. Students need to have at least 10 resources and should use a standard style format (APA, University of Chicago, MLA, etc…).
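To make the weighting of these three components concrete, here is a minimal sketch of how they could combine into a course grade. The 50/25/25 weights come from the assessment tool above; the 0-100 scoring scale, the component names, and the sample scores are my own assumptions for illustration.

```python
# Minimal sketch of the blended-course weighting described above.
# Assumes each component is scored on a 0-100 scale (an assumption);
# the 50/25/25 weights come from the assessment tool.

WEIGHTS = {"e_portfolio": 0.50, "learning_blog": 0.25, "project": 0.25}

def course_grade(scores):
    """Return the weighted course grade for one student."""
    return sum(weight * scores[part] for part, weight in WEIGHTS.items())

# Hypothetical student: strong blog, weaker project.
print(course_grade({"e_portfolio": 88, "learning_blog": 95, "project": 80}))  # 87.75
```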
Thursday, June 25, 2009
Examples of multiple assessments: traditional course
As promised, below is an example of the assessments I used for a college course on Speech Presentation and Composition. The goal of the assessments is to give students some choice to work on areas in which they may need help. As this is a required entry course for the major, I have students coming in with a wide range of skills. Some have very good oral presentation skills, some have very good speech writing skills, others have very good skills that are inappropriate for professional communication, while others (many, in fact) have public speaking anxiety, and a few have no public speaking skills or experience (especially those from inner city public schools). As a result, it is important for the students to work on those skills they feel they need.
There is a core of mandatory assignments I use for students to demonstrate their ability to prepare, write, and present speeches. Students who hand in only these mandatory assignments, doing a good job of demonstrating their abilities, will pass but not excel in the class. Students can bring up their grade by completing as many of the optional assignments as they wish, up to 200 points. Students can also offset problems with the mandatory assignments (often freezing on the first or second speech) with extra credit assignments.
Note that many of the mandatory assignments are used to assess students' understanding of the core concepts they need by the end of the course, while the optional assessments are more reflective and targeted toward an individual's own needs. The optional assignments use reflection for students to assess their own abilities and come up with their own plans for how to improve. They also allow students to practice skills they have learned in the mandatory assignments, especially if they had trouble with them (e.g. the audience analysis and audience impact analysis worksheets).
I find the assessment in this class works like a dialogue between the student and the instructor. In addition, I provide my students with a number of formats to choose from to demonstrate their knowledge, including blogs, YouTube, face-to-face interaction, and worksheets.
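To make the point caps concrete before the assignment list, here is a minimal sketch of the arithmetic. The 200-point optional cap and 40-point extra credit cap come from the assignment descriptions; the function and variable names and the sample point totals are hypothetical.

```python
# Minimal sketch of the point-cap arithmetic for this course.
# Mandatory assignments are worth up to 775 points; optional assignments
# count only up to 200 points and extra credit only up to 40 points,
# no matter how many points a student actually earns in those categories.

OPTIONAL_CAP = 200
EXTRA_CREDIT_CAP = 40

def final_points(mandatory, optional, extra_credit):
    """Return the points that count toward the final grade."""
    return mandatory + min(optional, OPTIONAL_CAP) + min(extra_credit, EXTRA_CREDIT_CAP)

# Hypothetical student: 650 mandatory points, 207 optional, 60 extra credit.
print(final_points(650, 207, 60))  # 650 + 200 + 40 = 890
```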
Assessment tools
Mandatory (Total Maximum Possible Points=775 Points)
(25 pts) Audience Analysis: Conduct an audience analysis for Speech I using the worksheet attached at the end of this assignment handout (also available on WebCT). You should try to identify multiple audiences in the class for your hometown speech. This might be based on geographic location (upstate/downstate, rural/urban, regions of New York State, and out-of-staters), type of student (major/non-major, commuters/residents, older/younger students), or lifestyle (travelers/non-travelers, partiers/non-partiers, single/couple/family).
(25 pts) Audience Impact Analysis for Speech II: Students must fill out a worksheet (attached at the end of this handout and available on WebCT) analyzing the impact of various speech points on the intended audiences.
Speeches (4 speeches, Speech evaluations for 5 students assigned by the teacher, written manuscript for Speech II, References for Speech III):
Speech I (50 pts): Your Hometown
Audience: Classmates Time: 5-7 minutes
Purpose: You want to inform your classmates about your hometown. You may include graphics (computer, posters, handouts). This is an informal speech.
Speech II (100 pts): Your Hometown
Audience: Professional Group interested in Hometown Time: 3 minutes
Purpose: You may decide which professional group you are delivering the speech to, depending on your interests and town (e.g. investors, an economic development group, the town council, the school board, the press). You will be presenting on your hometown, but the specific speech points and message will depend on your audience analysis. The speech will be a formal informative and persuasive speech, and you must make sure that all the information in the speech manuscript is delivered accurately to the audience. You will not be allowed to use graphics or visuals as supporting information. Some of the information you will be presenting will be new, so you may have to persuade the audience to listen to you.
(25 pts) Typed copy of complete speech II (see above).
Speech III (150 pts): Persuasive Speech. Topic will be assigned randomly based on student suggestions.
Audience: Opponents of the topic Time: 3-5 minutes
Students will be asked to write down a persuasive speech topic. These topics may be modified depending on the complexity of the subject given the time constraints of the speech (i.e. only parts of complex issues may be used), the duplication of a topic, or the class position (choosing the contrary position of an issue upon which the entire class is in agreement; e.g. "taxes should be lowered" would be changed to "taxes should not be lowered"). Each student will randomly choose a topic. Students will also be given specific audiences that are opponents of the topic. The specifics of the speech should be based on the analysis of the history of the issue, the audience’s position, and the action expected from the audience.
References (25 pts) Make a list of at least 10 references that will be used to prepare speech III. The references should include at least one magazine, one reference book, an internet source, and a personal interview reference. Each reference should be listed using an approved citation method (APA, MLA, University of Chicago). After each citation you should note which side the reference supports and the purpose of the reference (inform, persuade, evoke).
Example: Research Question: Our school should use fewer group activities
Barker, V., Abrams, J., Tiyaamornwong, V., Seibold, D., Duggan, A., Park, H., & Sebastian, M. (2000). New contexts for relational communication in groups. Small Group Research, 31(4), 470-503. For group activities: inform.
Mary Smith. Interview October 23, 2004. SUNY Communication major. Against group activities: evoke.
Note: Your text has citation styles in Appendix B
In addition to the 10 references, answer the following questions:
1. What are the (at least 2) positions?
2. How did the “debate” begin?
3. How has it been addressed in the past?
4. Has each side had equal voice?
5. What do the two sides agree on? Has this changed over time?
6. What do the two sides disagree on? Has this changed over time?
7. What have been some of the proposed options to resolve this issue? (list at least one from each side)
This information will be used to help you prepare your speech.
Speech IV (200 pts): Final Speech
Audience: Choice of students appropriate to Speech Time: 7-10 minutes
Topic: The speech will be a pitch to a group on any topic you want. You will try to persuade the audience on an issue (change a policy, purchase a product, hire your company, or vote for a candidate or piece of legislation).
Audience: You will need to identify the specific audiences based on your specific topic. However, it is assumed that each audience will include each of the following three groups:
1. Policy Makers: these could be CEOs of companies, industry leaders, lawmakers, regulators, government officials, or social leaders
2. Mainstream listeners: this group has never really been involved with the issue directly. They may have some general ideas or opinions based on second-hand information (magazines, TV, public discussion). Most likely, these will be your secondary audience.
3. Those directly affected by your pitch: This group could be divided two ways: those that will be positively affected and those that will be negatively affected. Both of these groups may have a very powerful voice with the policy makers, or may have historically been ignored by the policy makers.
Format: You may choose whichever format you feel is appropriate for your subject and audiences. You may use graphics, posters, handouts, or other people; you may inform, persuade, or evoke; and you may use any of the organizational formats we have covered (cause and effect, problem-solution, spatial, chronological, hierarchical, comparative) and any of the types of reasoning (direct, indirect, causal, analogical).
The following questions might help you to focus on what needs to go into your speech: (Extra Credit, 10pts: type up the answers to the questions.)
1. What role have policy makers had in the past to establish the issue you would like to change? How might they be affected if they do implement the change? What assumptions have they made about the mainstream listeners and those affected by the change? How were those assumptions formed (what is the history of the change)?
2. What would the average person know about this issue? Where would they get their information from? How would that information bias (positively or negatively) your proposal? How would you be able to use or overcome those biases? How will the change affect the average listener? Do you think they will understand the implications? How will that affect the way your speech is organized?
3. Have those that will be affected by your pitch ever had a voice in the policy making on this issue? Why or why not? How will that affect the way in which you approach the issue? What assumptions do they have about the issue? Why? If the impact is negative, how will you get them to accept it? What reasoning can you use? What type of supporting information? If the impact is positive, will this audience believe you based on past experience?
Speech Evaluations: There are 4 different evaluation sheets, one for each speech (forms at the end of this handout). For each speech, students will be assigned 5 students to evaluate. These evaluation sheets will need to be filled out, handed in to the teacher for grading (5 pts each for the first speech, 10 pts each for the following 3 speeches), and then given to the speaker as feedback. Please review the syllabus for course conduct expectations when giving feedback.
Additional Assignments (Maximum possible points=325)
Students can submit as many assignments as they want, earning up to 200 points in total. For example, a student might submit all the assignments and earn 207 of the 325 possible points; only 200 of those points will count toward the final grade.
Speech evaluations
Students need to present typed answers to the following:
(25 pts) Speech Analysis. Review the speech by Christopher deCharms, Looking inside the brain in real time (http://www.ted.com/index.php/talks/christopher_decharms_scans_the_brain_in_real_time.html) and answer the following questions:
• What is the general purpose of the speech? What is the specific purpose of the speech?
• What are some of the non-verbal communication cues he uses to make his point(s)? What obstacles does he need to overcome in giving his speech? How effective is he in overcoming those obstacles? Why or why not? How does he establish credibility? How does he interact with the audience?
• What type of introduction does he use? What type of conclusion? Are they effective? Why or why not? What assumptions does he make about the audience? How does that affect his speech?
• What does he do well in the speech? If you were to give him suggestions for improvement, what would they be?
(25 pts each) Review Randy Pausch's Last Lecture: Achieving Your Dreams (http://www.youtube.com/user/carnegiemellonu) and Barack Obama’s Jan. 10, 2009 address (http://www.americanrhetoric.com/speeches/barackobama/barackobamaweeklytransition10.htm); each will be graded separately.
Questions:
• What is the general purpose of the speech? What is the specific purpose of the speech? List at least 3 speech points the speaker makes. What supporting information does the speaker use to make each point? What type of reasoning (ethos, pathos, or logos) does the speaker use? How does the speaker arrange the supporting information? What style of speech does the speaker use? Why? How does the speaker interact with the audience? How does the speaker establish credibility? What assumptions does the speaker make about the audience? How does that affect the type of speech chosen? How does it affect audience interaction? What does the speaker want the audience to do? How do you know this?
• Compare the written speech to the audio/video. How does the speaker use his/her voice to alter the speech? What are the differences between how the speech can be interpreted when reading it and when listening to it? Why?
(50 pts) Video, podcast, or CD of Speech II. Students may prepare a video, podcast, or CD of Speech II and then do a self-analysis of their speech, identifying speech points, organization of ideas, and the effectiveness of the speech given their identified audience. They are expected to make suggestions based on their review of the recording. This should be a 1-2 page (double-spaced) analysis.
(25 pts) Audience Impact Analysis: Conduct an audience impact analysis for speech III-IV using the worksheet attached at the end of this handout.
(75 pts) A self-analysis blog in which students will analyze each of their first three speeches after reviewing student and professor feedback. Students need to set up a blog, or use their own separate blog space, and give me the URL address. The blog will include four separate entries: Speech I, Speech II, Speech III, and What I Have Learned. Each entry should include a description of the preparation of the speech (including assumptions made about the audience), an analysis of the speaker’s perception of the speech, an analysis of teacher and student feedback, similarities and differences in perceptions between each group, and an outline of how this will impact the speaker’s next speech (i.e. changes in style, assumptions, and/or preparation). The last entry should include your analysis of how you have improved, what areas you still need to work on, and how your analysis of other speakers has changed due to class assignments.
(25 pts) Students may attend or watch a formal speech (e.g. an on-campus speaker or a presidential campaign speech) and answer the following questions:
• What is the speaker’s position? How do you know that? What type of reasoning does the speaker use? What type of supporting information? Is the support relevant? Reliable? Representative? What bias does the speaker have? Is this implicit or explicit?
• How does the speaker motivate the listener? What type of appeal does the speaker use? Does this appeal work for you? Why or why not? Is the appeal appropriate for the audience? Why or why not? Is the appeal appropriate for the message? Why or why not?
• What credentials does the speaker have? How does the speaker establish credibility? How does the speaker establish rapport with the audience? How effective is the speaker in establishing credibility and rapport?
• What does the speaker want the listener to do? Is this explicit or implicit? Does the speaker have an ethical or moral stance? How do you know? If you had to make suggestions for improving this speech, what would they be? What was effective about this speech?
(50 pts) Diversity interview. Step I:
Imagine that you want to find a pen pal on the internet. Write a description of yourself in 30 words or less in the space below:
Step II
Locate someone outside of the class to interview who does not match the characteristics you used to describe yourself in Step I.
Before interviewing them, reflect on the following questions:
What is your culture? Which groups do you identify with? How does that affect your communication? How does this affect who you speak to and how?
What assumptions do you make about the other person’s culture?
What assumptions do you make about the other person based on their culture?
Step III
Find out the following information in your interview:
What are the perceived similarities between the two cultures?
What are the perceived differences?
How can you tell the difference between a personal belief and a group’s belief?
What is the best way to find out about the culture?
What is the most unfamiliar part of your culture to the person being interviewed? (What do they have trouble understanding about your culture?)
What is the best part of your culture according to the person being interviewed? Why?
Can they give an example of conflict between your culture and their culture? How do they handle that situation?
Step IV
After you have interviewed this person, I want you to reflect on the following questions:
How did your assumptions affect your interview?
Were you able to learn anything new about that person?
What (if anything) surprised you about their answers?
How could this information help you in composing speeches for diverse audiences? Audiences of a different culture than your own?
(25 pts) Speech IV References: Identify 10 sources as you did for Module 3. In addition to identifying the position of the author and the reason for the resource (inform, evoke, or persuade), identify which groups the author(s) would represent and their position on change. As in Module 4, write a brief summary of the history of the issue, the various positions, how the policy or issue was originally established, and who has had a voice in the process.
Extra Credit (Maximum Total Points=60)
Students can submit as many extra credit assignments as they want earning up to 40 extra credit points.
(10 pts each) Self Evaluation. Students will fill out an evaluation sheet for their own speech.
(10 pts) Speech IV typed focus questions. Type up the answers to the questions.
1. What role have policy makers had in the past to establish the issue you would like to change? How might they be affected if they do implement the change? What assumptions have they made about the mainstream listeners and those affected by the change? How were those assumptions formed (what is the history of the change)?
2. What would the average person know about this issue? Where would they get their information from? How would that information bias (positively or negatively) your proposal? How would you be able to use or overcome those biases? How will the change affect the average listener? Do you think they will understand the implications? How will that affect the way your speech is organized?
3. Have those that will be affected by the change ever had a voice in the policy making on this issue? Why or why not? How will that affect the way in which you approach the issue? What assumptions do they have about the issue? Why? If the impact is negative, how will you get them to accept it? What reasoning can you use? What type of supporting information? If the impact is positive, will this audience believe you based on past experience?
(10 pts) Visuals. Review your evaluations from Speech III and the visuals. Type up the answers to the following questions:
1. How did the visuals contribute to and/or hinder your presentation? Your message? Audience reaction?
2. How would you change your visuals to improve your presentation?
3. What rules could you develop for creating effective visuals for your presentations?
Sunday, June 21, 2009
Incorporating assessment into a constructivism-based instructional design
Michael Hanley's 15th and 16th installments of his instructional design series had me thinking about assessment. In the two installments, Michael introduces a flexible instructional design model that fits well with constructivism. However, as I commented on his blog, there is little discussion of what assessment tools can be used that will maintain that level of flexible learning. He asked if I had any suggestions.
In fact, when I teach introduction to distance learning, this is an area that I work on developing with my students. The following is a compilation of the notes I give my students, with some updates based on my own experience:
The assessment-learning connection
Sometimes students need to be pushed to go beyond their comfort zone. If we want our students to succeed in the real world, at some point they will need to learn how to take initiative and risks; those are the employees who are rewarded with promotions and greater responsibilities. However, there will be some students (for a variety of reasons, including lack of confidence, maturity, background, or personality) who will always avoid the risk of failure. Constructivist activities, especially those designed for distance learning, generally let students step past these fears into a supportive environment in which they can risk failure, as long as there are supports in place to help them if they do fail. These supports include:
1) Instructional support: either preparing them for the experience or using the experience as a stepping stone for further targeted instruction.
2) Emotional support: empowering students when they make mistakes rather than leaving them with a sense of humiliation, and building confidence by finding success in making mistakes or errors and learning from them or working through them.
3) Community support: making sure members know what their role is and creating an atmosphere that will build trust.
4) Support in setting goals and processes to achieve them: helping students to make the choices that are presented to them, helping them develop time management skills with a timeline or structure within which to work, and giving clear expectations for any activity while being flexible--or not--depending on student needs.
In looking at assessment, there is often little attention given to trust and trust-building activities. Assessment and evaluation often leave students feeling "set up" and, ultimately, betrayed. This can happen much more easily in distance learning than in a traditional class, as there are none of the social cues found in a traditional classroom. It is important, therefore, that any constructivist instructional design begin with an analysis of the current level of trust among the various "presences"--teacher, students, community, and discipline (Phillips, 2004). In all instructional designs, the designer should analyze (either implicitly or explicitly) how trust will be impacted, supported, maintained, and developed. You may need to explicitly identify assessments and activities that could undermine trust (e.g. a reflective blog on identifying weaknesses and solutions for a group that can be seen by a supervisor and used in promotion considerations).
Given the variety of activities that can be used in a flexible instructional design, like the models proposed by Phillips (2004) and Sims & Jones (2002), there needs to be a variety of ways to assess learning. The differences in activities help to provide different types of learning for different learning needs. Likewise, different types of assessment will address the different ways that people apply their learning and demonstrate their knowledge.
Assessment Tools
We all know the traditional ways that we assess learning in the classroom: quizzes (multiple choice), joint projects, etc... In the classroom, especially at the elementary and middle school levels, we can see the class dynamics; we have social cues to indicate to us when a student "gets it". But this is not always true in distributed, distance, or adult learning. Technology can distort feedback (both receiving and giving). In addition, the time delay for feedback can be both positive (giving time for students to reflect) and negative (frustrating students when they are stuck with no visible means of support). So, for distance learning, not only is it important to determine the best way to assess student learning (what did they really learn and how well did they learn it) but also the best way to provide evaluation so administrators, parents, employers, other faculty, and the students themselves understand how well they met the expected outcomes and those areas in which there needs to be improvement. As assessment tools are chosen and created, try to think of these questions: how can the learning activity be justified to all stakeholders (e.g. the $500 spent on a video conference with the Baseball Hall of Fame)? What type of feedback will help focus student learning? What did students really learn (e.g. in addition to the content, perhaps communication and technology skills, problem solving and group skills, and understanding of the world outside of the classroom) and how can this be documented? How will the skills students have learned benefit them in their future? Did the learning activity allow them to use what they have learned immediately, or will there be a delay (need to practice or reflection, for example) before they can demonstrate their understanding? Will assessment be a part of the learning or is it used to measure learning for outside stakeholders?
If we follow the advice given by constructivist researchers, the best activities for learning are ill-defined and action-based, with multiple possible outcomes. As such, will a multiple choice test be appropriate to measure student learning and outcomes? The reality is that our current educational system requires that we test in a time-efficient way, which means multiple choice or "objective" testing. As a result, as teachers, we feel compelled to use these tests to prepare our students. So how do we resolve these competing forces?
In a traditional classroom, where there are pressures for standardized test scores, a combination of assessment tools can therefore be used: follow-up classroom activities that lead to objective testing, student presentations on what has been learned, application of constructivist concepts to a traditional classroom activity, and application of constructivist skills (critical thinking, situating learning, perspective taking) in other contexts that fit a traditional classroom assessment requirement.
In non-traditional classrooms (e.g. online learning, self-directed learning), on the other hand, assessment tools must match the teaching approach: reflective journals, online discussions, group work and presentations, projects, and papers. Grading rubrics help to guide students in terms of expectations. Written feedback from both instructors and peers also helps to establish expectations and perceived learning outcomes. I use written feedback and a holistic approach to grading (the approach used on the SAT, GRE, and GMAT written sections). Others feel more comfortable using a rubric, especially when there are multiple instructors for the same class or course. More and more, I have seen departments work together in training and developing group standards through feedback and by triangulating grading among evaluators.
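To picture what triangulating grades among evaluators might look like in practice, here is a minimal sketch. The 0-6 holistic scale, the averaging rule, and the one-point disagreement threshold are my own assumptions, not a description of any particular department's or testing program's procedure.

```python
# Minimal sketch of triangulating a holistic grade among evaluators.
# Assumes holistic scores on a 0-6 scale and flags a submission for an
# additional reader when evaluators disagree by more than one point
# (both the scale and the threshold are assumptions).

def triangulate(scores, threshold=1.0):
    """Return the averaged score and whether another read is needed."""
    needs_another_read = max(scores) - min(scores) > threshold
    return sum(scores) / len(scores), needs_another_read

print(triangulate([4, 5]))  # (4.5, False) -- readers agree closely
print(triangulate([2, 5]))  # (3.5, True)  -- disagreement, bring in another reader
```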
A number of researchers have also suggested including student input in developing the assessment tools (McLoughlin & Luca, 2001). The co-creation of assessment tools means students are not only constructing their learning, but they are aware as they do so of how they will be able to demonstrate their level of learning and stakeholders' expectations.
Some samples of alternative assessments that were developed by my students can be found on my web page, in the right-hand margin under Evaluation Rubrics. Over the next few posts I will include some examples of how I incorporate flexibility into my own assessments and the guidelines I use for my own assessment tools.
References
McLoughlin, C. & Luca, J. (2001). Quality in online delivery: What does it mean for assessment in e-learning environments? Available at: http://www.ascilite.org.au/conferences/melbourne01/pdf/papers/mcloughlinc2.pdf
Phillips, R. (2004). The design dimensions of e-learning. Presented December 2004 at the ASCILITE Conference, Perth, Australia. Available at: http://www.ascilite.org.au/conferences/perth04/procs/pdf/phillips.pdf
Sims, R. & Jones, D. (2002). Continuous improvement through shared understanding: Reconceptualising instructional design for online learning. Proceedings of the 2002 ASCILITE Conference: Winds of Change in the Sea of Learning: Charting the Course of Digital Education. Available at: http://www.ascilite.org.au/conferences/auckland02/proceedings/papers/162.pdf
In fact, when I teach introduction to distance learning, this is an area that I work at developing with my students. The following is a compilation of the notes I give my students with some additional updates based on my own experience:
The assessment-learning connection
Sometimes, students need to be pushed to go beyond their comfort zone. If we want our students to succeed in the real world, at some point they will need to learn how to take initiative and risks. Those are the employees that are rewarded with promotions and greater responsibilities. However, there will be some students (for a variety of reasons including lack of confidence, maturity, background, personality) that will always avoid risk of failure. Constructivist activities, especially those designed for distance learning, generally allow students to step out of these fears into a supportive environment that allows them to risk failure as long as there are supports in place to help them if they do fail. These supports include instructional support (either preparing them for the experience or using the experience as a stepping stone for further targeted instruction), emotional support (empowering students when they make mistakes rather than leaving them with a sense of humiliation, building confidence by finding success in making mistakes or errors and learning from them or working through them), community support (making sure members know what their role is and creating an atmosphere that will build trust), and support in setting goals and processes to achieve them (helping students to make choices that are presented to them, helping them to develop time management skills with a timeline or structure within which to work, giving clear expectations for any activity and being flexible--or not--depending on the student needs).
In looking at assessment, there often is little attention given to trust and trust-building activities. Often assessment and evaluation leave students feeling "set-up" initially, feeling betrayed. This is something that can happen much easier in distance learning than in a traditional class, as there are no social cues as can be found in a traditional class room. It is important, therefore, that any constructivist learning instructional design with an analysis of the current level of trust between the various "presences"--teacher, students, community, and discipline (Phillips). In all instructional designs, the designer should analyze (either implicitly or explicitly) how trust will be impacted, supported, maintained, and developed. You may need to explicitly identify assessments and activities that will undermine trust (e.g. a reflective blog on identifying weaknesses and solutions for a group that can be seen by a supervisor and used in promotional considerations).
Looking at the variety of activities that can be used in a flexible instructional design like the models proposed by Phillips and Sims & Jones (2002), there needs to be a variety of ways to assess learning. The differences in activities help to provide different types of learning for different learning needs. Likewise, different types of assessment will address the different ways that people apply their learning and demonstrate their knowledge.
Assessment Tools
We all know the traditional ways that we assess learning in the classroom: quizzes (multiple choice), joint projects, etc... In the classroom, especially at the elementary and middle school levels, we can see the class dynamics; we have social cues to indicate to us when a student "gets it". But this is not always true in distributed, distance, or adult learning. Technology can distort feedback (both receiving and giving). In addition, the time delay for feedback can be both positive (giving time for students to reflect) and negative (frustrating students when they are stuck with no visible means of support). So, for distance learning, not only is it important to determine the best way to assess student learning (what did they really learn and how well did they learn it) but also the best way to provide evaluation so administrators, parents, employers, other faculty, and the students themselves understand how well they met the expected outcomes and those areas in which there needs to be improvement. As assessment tools are chosen and created, try to think of these questions: how can the learning activity be justified to all stakeholders (e.g. the $500 spent on a video conference with the Baseball Hall of Fame)? What type of feedback will help focus student learning? What did students really learn (e.g. in addition to the content, perhaps communication and technology skills, problem solving and group skills, and understanding of the world outside of the classroom) and how can this be documented? How will the skills students have learned benefit them in their future? Did the learning activity allow them to use what they have learned immediately, or will there be a delay (need to practice or reflection, for example) before they can demonstrate their understanding? Will assessment be a part of the learning or is it used to measure learning for outside stakeholders?
If we follow the advice given by constructivist researchers, the best activities for learning are ill-defined, action based, with multiple possible outcomes. As such, will a multiple choice test be appropriate to measure student learning and outcomes? The reality is that our current educational system requires that we test in a time efficient way, which is multiple choice or "objective" testing. As a result, as teachers, we feel compelled to use these tests to prepare our students. So how do we resolve these competing forces?
In a traditional classroom, where there are pressures for standardized test scores, therefore, a combination of assessment tools can be used. Follow-up classroom activities that lead to objective testing with student presentations on what has been learned, application of constructivist concepts to a traditional classroom activity, and application of constructivist skills (critical thinking, situating learning, perspective taking) in other contexts that fit a traditional classroom assessment requirement.
In non-traditional classrooms (i.e. online learning, self-directed learning), on the other hand, assessment tools must match the teaching approach: reflective journals, online discussions, group work and presentations, projects, papers. Grading rubrics help to guide students in terms of expectations. Written feedback from both instructors and peers also help to establish expectations and perceived learning outcomes. I use written feedback and a holistic approach to grading (which is used on your SAT, GRE, and GMAT written sections). Others feel more comfortable using a rubric, especially when there are multiple instructors for the same class or course. More and more, I have seen departments that work together in training and developing group standards through feedback and triangulating grading among evaluators.
A number of researchers have also suggested including student input in developing the assessment tools (McLoughlin & Luca, 2001). The co-creation of assessment tools means students are not only constructing their learning, but they are aware as they do so of how they will be able to demonstrate their level of learning and stakeholders' expectations.
Some samples of alternative assessment that was developed by my students can be found on my web page in the right hand margin under Evaluation Rubrics. Over the next few posts I will include some examples of how I incorporate flexibility into my own assessments and the guidelines I use for my own assessment tools.
References
McLoughlin, C., & Luca, J. (2001). Quality in online delivery: What does it mean for assessment in e-learning environments? Available at: http://www.ascilite.org.au/conferences/melbourne01/pdf/papers/mcloughlinc2.pdf
Phillips, R. (2004). The design dimensions of e-learning. Paper presented at the ASCILITE Conference, December 2004, Perth, Australia. Available at: http://www.ascilite.org.au/conferences/perth04/procs/pdf/phillips.pdf
Sims, R., & Jones, D. (2002). Continuous improvement through shared understanding: Reconceptualising instructional design for online learning. Proceedings of the 2002 ASCILITE Conference: Winds of Change in the Sea of Learning. Available at: http://www.ascilite.org.au/conferences/auckland02/proceedings/papers/162.pdf
Wednesday, June 3, 2009
Creating standards and assessing learning
Results of the New York State Math test, a standardized test given to children in grades 3-8 (8-14 year olds), were announced yesterday. What was amazing was the analysis of what these test scores meant and why students in the New York City area had done so well. Theories ranged from the test being too predictable to the fact that New York City schools were now under the control of the Mayor. Surprisingly, the new formula for calculating state aid and the greater investment in urban schools were never mentioned, even though scores increased in all urban areas!
What this demonstrates is that it is difficult to assess learning solely on the basis of standardized test results. Even more important, there was no analysis of which standards students were meeting and the impact that would have on higher ed and workplace readiness. Instead, Merryl Tisch, chancellor of the New York State Board of Regents, announced that perhaps the passing score should be raised.
What do standardized assessments really measure?
A standardized test, if designed correctly, measures the retrieval of information set out by the curriculum. In some cases, it may measure performance of a standard (e.g. the lab section of the NY state science test). However, it often measures knowledge of processes rather than a level of understanding in performing a task.
For a standardized test to be successful, it should:
1) Be aligned with the stated standards
2) Have some sort of mechanism to ensure consistent evaluation and grading
3) Ignore other learning or knowledge that has not been defined in the standards
Standardized tests do not always have to be objective tests. For example, the GRE has a critical writing component in which evaluators are trained and each written answer is evaluated by 3 different people to ensure consistency in grading.
Standardized tests therefore do not measure a student's learning so much as whether the student is familiar with the intended material, content, or curriculum the standards reflect. Nor do they measure whether the standards will prepare a student for college or the workplace if the standards do not reflect the skills needed. (See the related article on the White House push to improve curriculum to prepare students for the 21st century.)
Finally, I would think that the goal of assessment would be a consistently high passing rate, since that demonstrates that more and more students are achieving the stated standards. However, from Chancellor Tisch's comments it would appear that either the standards are inadequate (which is why students need remedial classes) or the test is being used to rank students and exclude a certain percentage (which is why the passing score would be raised). Standards should be changed when they are no longer aligned with the needs of the learning outcomes, not because "too many passed the test."
Choosing a better way to assess
It seems that there is a disconnect between
1) learning standards and learning outcome needs
2) methods of measuring learning and instructional design
3) knowledge and application of that knowledge in multiple contexts
4) having content and understanding the content
The first step in any instructional design should be to establish learning goals and develop measures to assess learning outcomes. In developing these learning goals, it is not enough to identify what someone should "know" or be "familiar with" unless those goals are aligned with how the learning will be used.
For example, training for new regulations in the insurance industry can use standards such as "know" or "be familiar with" because it is knowledge of the regulations that is important to regulatory agencies.
However, FEMA personnel being trained in first-responder procedures for a natural disaster will need an understanding of the context and environments in which they will be working. In addition, there is a level of tacit knowledge that will need to be used (they need to know the practice of disaster relief, not just the process).
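To make that alignment step concrete, here is a minimal sketch (in Python, with invented goal texts, categories, and measures; an illustration of the idea, not an actual curriculum or training plan) that flags a learning goal as misaligned when its assessment measure does not fit how the learning will be used.

# Hypothetical goals and measures; "use" distinguishes recall of content from
# performance in context, echoing the insurance vs. disaster-response examples.
GOALS = [
    {"goal": "Know the new insurance regulations", "use": "recall",
     "measure": "objective test"},
    {"goal": "Coordinate first-responder logistics in a flood", "use": "performance",
     "measure": "objective test"},
]

# Which measure types plausibly align with each kind of use (illustrative only).
ALIGNED = {
    "recall": {"objective test", "short written summary"},
    "performance": {"scenario-based simulation with debrief",
                    "observed field exercise", "reflective incident report"},
}

for g in GOALS:
    status = "aligned" if g["measure"] in ALIGNED[g["use"]] else "MISALIGNED"
    print(f'{g["goal"]}: {status} ({g["measure"]})')

Running it flags the disaster-response goal as misaligned, which is exactly the mismatch described above: a test of the process does not capture the practice.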
A good example of melding these two types of assessment can be found in the driver's test. The first stage, getting the learner's permit, requires a written test about the laws of driving. The road test, taken after practical training, looks at a driver's ability to assess driving conditions, react to changes in the driving environment, and show judgment in applying certain driving skills based on environmental conditions. A student who is following too closely and has to slam on the brakes when the car in front makes a quick stop is assessed differently from someone who slams on the brakes whenever they are startled. Both are slamming on the brakes, but in the second case, when there is no environmental reason, there would be a perception of less driving skill and knowledge.
Thursday, December 11, 2008
A Call to Reassess Assessment
Okay, so my top 10 list will be delayed. But the topic of assessment is screaming at me and just needs to be put down on this blog!
Two things happened over the last week that got my blood pressure up. First, my son got his PSAT scores back. For those not familiar with the PSATs, these are the preparatory tests given a year or so before you are expected to take the SAT. Now, my son did well, not great, but well. What really upset me was his impression that he wouldn't get into school, or perhaps the school he wants to go to, because of one test. He asked the question I have been asking for the past 40 years, ever since I took my own PSATs: how can doing the work FASTER mean that you are SMARTER? As he said, "They didn't test if I was a good reader, only faster."
Meanwhile, a project I have been following is finally ready for implementation. The developers were very proud of how the pre-test and post-test looked, with explanations given when a user gets a multiple choice question wrong. However, the project's goal is to change attitudes and professional culture. So how can a pre-test/post-test of multiple choice questions with "correct" answers assess changes in attitudes or cultural beliefs? More importantly, how can we expect changes over the course of a week? Doesn't that take time?
I keep reading about changes in Instructional Design, changes in the tools we use, even changes in the theoretical basis of instruction. Yet we still fall back on the old measures. This is like using a magnifying glass to verify nanotechnology research: we just can't see everything that is going on.
A New Way to Assess Complex Learning
I have written previously about the need to change assessment. I especially liked Ewa's expansion of my original idea:
test those who were not present in the training but worked on a project / assignment, with the ones who were. Assessment after 3-6 months of the project and the methodology used should be team-based: teams assess each other in front of the other teams that participated in the training. This way you'd see the advantages of networking and flow of information as well as assessment on how useful your training was: if people didn't use the methods you might want to reconsider its adaptation for the organization. I think this should be the model for business school or any other applied science education. Project-based companies that want to practice continuous learning could also apply this model.
In terms of the SATs, why must the tests be timed? If it is so important that students be ranked by timing, then students need only punch in when they finish the test and a time can be generated. Is the timing just there to create a false sense of pressure so schools can see who does well under exam pressure? And shouldn't the SATs include measures of the use of tools? I don't mean to imply that those who do well on the SAT will not do well in college. But many of my "best students" come ill prepared for the college classroom because they are used to doing only what is given to them to do. They are also paralyzed when they have to deviate from the written material and give their own insights. Education is becoming much more complex, and the assessment tools should be too. Why isn't there a movement towards standardized portfolio assessment as a means of measuring high school student work (as NAEP does)? I wonder if this would be a better measure of student preparedness.
The Future
I hope that students realize that there are alternatives for getting into good colleges. How many students are encouraged to enter national academic competitions (often this is reserved only for the "top students," those who take tests well)? Certainly demonstrating your ability in a national writing competition, a science fair, or even having an article published on a topic the student has an interest in counts for more than a timed multiple choice test.
My hope is that the schools my son is interested in look at the overall picture. But from my own personal experience, I know this doesn't happen. It is easier to cull the pool of applicants using a superficial mechanism when you have 10-20,000 of them. However, I am proof that there are many ways to achieve the same goal. Although it has taken many years and many rejection letters, I have straight A's and am at the end of my Ph.D. program, with only the data analysis and my committee standing between me and my degree.
I also would like organizations to stop looking at employees as nothing more than computers that just need the correct "programming" (training) to spit out the desired output (testing). There is a lot of untapped potential, and until companies learn how to measure it, they will waste resources on busy work. They need only look at Coke's use of this old "numbers" model in developing New Coke, without really looking at the complexity of customer behavior.
And for all of you out there who say this is too much work: I have never chosen to use a multiple choice test in my 17 years of teaching at the university level, and before that I used multiple choice tests only when they were imposed on me by the organization. I have had up to 50 students in a class where I was the sole teacher, and still I used essay exams or projects. It can be done.
Wednesday, October 15, 2008
Ways of Assessing Organizational Learning
A comment Ewa left on my blog, and my response to it, have left me wondering how we can assess worker learning over a period of time, through both formal and informal learning.
I keep coming back to a portfolio of work over a year's time. I think that an electronic portfolio of the worker's self-selected best work would give trainers and management an idea of what workers feel is important in their work, the skills they have developed, and perhaps how individual assessments match up to management and organizational assessments/goals.
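As a thought experiment, here is a minimal sketch of what one entry in such an electronic portfolio might look like as a simple data record, plus a roll-up of which organizational goals the self-selected work touches (all field names, entries, and goals below are invented for illustration).

# A hypothetical e-portfolio entry and a roll-up of the organizational goals
# the worker's self-selected work maps to (names and goals are invented).
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class PortfolioEntry:
    title: str
    artifact_url: str
    skills: list = field(default_factory=list)     # skills the worker says it shows
    org_goals: list = field(default_factory=list)  # organizational goals it maps to

entries = [
    PortfolioEntry("Reflection on a revised lesson plan", "http://example.com/post1",
                   skills=["reflection", "writing"], org_goals=["knowledge sharing"]),
    PortfolioEntry("Student conference Pageflake", "http://example.com/pageflake",
                   skills=["curation", "web tools"], org_goals=["technology literacy"]),
]

# Which organizational goals does the portfolio currently document?
coverage = Counter(goal for entry in entries for goal in entry.org_goals)
print(coverage)

Even something this small would let a trainer or manager see at a glance where an individual's self-selected work lines up with organizational goals, and where the gaps are.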
With that in mind, I began to consider what my own portfolio would look like. I might include some of the blog postings I have written as a result of analyzing my teaching, such as "What are we doing as teachers to make ourselves literate in the workplace?" or "Lessons learned in New Communication Technologies in Organizational Life". The first is an example of how I am learning (a structure of what I think is important) and the second is an example of how I have learned from my work.
I would also include some of my students' work, which is a by-product of my own work of teaching, such as the Pageflake for my students' conference on nanotechnology or the technology blog they put together. I also thought that the lists of resources I have collected for classes on delicious would be a good example of the types of resources I have found on a given topic, such as those for my ACOM 203 class (speech writing and presentation) or for work literacy (professional resources). Finally, I have web pages (which actually need to be updated) for each of the courses I have taught over the last 3 years: Global Communication, New Communication Technologies in Organizational Life, Introduction to Distance Learning, Computer Supported Writing Across the Curriculum, and Speech Composition and Presentation.
There are two areas in which I am stuck, however. How do I show "learning" or evidence of my work that is either 1) in a non-electronic format or 2) protected because of privacy? In the second case, I would need to give those evaluating my portfolio access to my work (such as Blackboard). In my case, however, Blackboard and university wikis are erased at the end of the semester because of FERPA (student privacy laws). This might also be a problem in areas such as healthcare, financial services, or insurance. I am not sure how to document work that, by law, is limited access.
The second area has to do with tacit learning. For example, I share an office with another communication professor, and we often discuss problems and solutions for similar issues we are having with our students. While I might not use her ideas verbatim, the discussion gives me a different insight into the situation, which might result in more effective teaching on my part.
Likewise, my sister, a speech pathologist, will often consult with me on cases of bilingual students. Both of us leave the discussion with new insight into language development (sometimes with the aid of outside sources, other times just through the discussions/question and answers). How can this type of activity be documented and counted? One possible way would be to blog about it.
I would be interested in hearing about any other suggestions you might have for assessing learning in more non-traditional ways.
Thursday, May 29, 2008
Measuring learning and the time factor
Clark Quinn has developed a chart representing a Performance Ecosystem. The main shortcoming of most ePerformance frameworks I have seen is the focus on short-term results. In workplace training especially, the skills we learn may not be useful until the student has reached a certain level. As a result, most assessments of ePerformance come down to asking trainees whether the training was useful and whether they could use it in their work, which might not be possible right after training.
For example, micro-economics made no sense until I had to use it as an auditor. Remarkably, I remembered much of what I had learned in class and put it to use (despite the fact that I had taken the class 8 years previously). Even now, some of the basic principles of computer programming and economics come to mind after 20+ years.