Michael Hanley's 15th and 16th installments of his instructional design series had me thinking about assessment. In these two installments, Michael introduces a flexible instructional design model that fits well with constructivism. However, as I commented on his blog, there is little discussion of what assessment tools can be used that will maintain that level of flexible learning. He asked if I had any suggestions.
In fact, when I teach introduction to distance learning, this is an area I work on developing with my students. What follows is a compilation of the notes I give my students, with some updates based on my own experience:
The assessment-learning connection
Sometimes, students need to be pushed beyond their comfort zone. If we want our students to succeed in the real world, at some point they will need to learn how to take initiative and risks; those are the employees who are rewarded with promotions and greater responsibilities. However, some students (for a variety of reasons, including lack of confidence, maturity, background, or personality) will always avoid the risk of failure. Constructivist activities, especially those designed for distance learning, generally allow students to step past these fears into a supportive environment where they can risk failure, as long as supports are in place to help them if they do fail. These supports include: instructional support (either preparing students for the experience or using the experience as a stepping stone for further targeted instruction); emotional support (empowering students when they make mistakes rather than leaving them with a sense of humiliation, and building confidence by finding success in making errors and learning from or working through them); community support (making sure members know what their role is and creating an atmosphere that builds trust); and support in setting goals and processes to achieve them (helping students make the choices presented to them, helping them develop time management skills with a timeline or structure within which to work, and giving clear expectations for any activity while being flexible--or not--depending on student needs).
In looking at assessment, little attention is often given to trust and trust-building activities. Assessment and evaluation often leave students feeling "set up" and, ultimately, betrayed. This can happen much more easily in distance learning than in a traditional class, as there are none of the social cues found in a traditional classroom. It is important, therefore, that any constructivist instructional design begin with an analysis of the current level of trust between the various "presences"--teacher, students, community, and discipline (Phillips, 2004). In all instructional designs, the designer should analyze (either implicitly or explicitly) how trust will be impacted, supported, maintained, and developed. You may need to explicitly identify assessments and activities that could undermine trust (e.g. a reflective blog identifying weaknesses and solutions for a group that can be seen by a supervisor and used in promotion considerations).
Looking at the variety of activities that can be used in a flexible instructional design like the models proposed by Phillips (2004) and Sims & Jones (2002), there needs to be a corresponding variety of ways to assess learning. Different activities provide different types of learning for different learning needs. Likewise, different types of assessment address the different ways that people apply their learning and demonstrate their knowledge.
We all know the traditional ways we assess learning in the classroom: quizzes (multiple choice), joint projects, and so on. In the classroom, especially at the elementary and middle school levels, we can see the class dynamics; we have social cues to indicate when a student "gets it." But this is not always true in distributed, distance, or adult learning. Technology can distort feedback (both receiving and giving). In addition, the time delay for feedback can be both positive (giving students time to reflect) and negative (frustrating students when they are stuck with no visible means of support). So, for distance learning, it is important to determine not only the best way to assess student learning (what did they really learn, and how well did they learn it?) but also the best way to provide evaluation so administrators, parents, employers, other faculty, and the students themselves understand how well they met the expected outcomes and where they need to improve. As assessment tools are chosen and created, try to think through these questions: How can the learning activity be justified to all stakeholders (e.g. the $500 spent on a video conference with the Baseball Hall of Fame)? What type of feedback will help focus student learning? What did students really learn (e.g. in addition to the content, perhaps communication and technology skills, problem solving and group skills, and an understanding of the world outside the classroom), and how can this be documented? How will the skills students have learned benefit them in the future? Did the learning activity allow them to use what they learned immediately, or will there be a delay (a need for practice or reflection, for example) before they can demonstrate their understanding? Will assessment be a part of the learning, or is it used to measure learning for outside stakeholders?
If we follow the advice given by constructivist researchers, the best activities for learning are ill-defined and action-based, with multiple possible outcomes. As such, will a multiple choice test be appropriate for measuring student learning and outcomes? The reality is that our current educational system requires that we test in a time-efficient way, which means multiple choice or "objective" testing. As a result, as teachers, we feel compelled to use these tests to prepare our students. So how do we resolve these competing forces?
In a traditional classroom, where there are pressures for standardized test scores, a combination of assessment tools can be used: follow-up classroom activities that lead to objective testing, student presentations on what has been learned, application of constructivist concepts to a traditional classroom activity, and application of constructivist skills (critical thinking, situating learning, perspective taking) in other contexts that fit a traditional classroom assessment requirement.
In non-traditional classrooms (i.e. online learning, self-directed learning), on the other hand, assessment tools must match the teaching approach: reflective journals, online discussions, group work and presentations, projects, and papers. Grading rubrics help guide students in terms of expectations. Written feedback from both instructors and peers also helps establish expectations and perceived learning outcomes. I use written feedback and a holistic approach to grading (the approach used on the SAT, GRE, and GMAT written sections). Others feel more comfortable using a rubric, especially when there are multiple instructors for the same class or course. More and more, I have seen departments work together to train evaluators, develop group standards through feedback, and triangulate grading among evaluators.
A number of researchers have also suggested including student input in developing the assessment tools (McLoughlin & Luca, 2001). The co-creation of assessment tools means students are not only constructing their learning but are also aware, as they do so, of how they will be able to demonstrate their level of learning and meet stakeholders' expectations.
Some samples of alternative assessments developed by my students can be found on my web page in the right-hand margin under Evaluation Rubrics. Over the next few posts I will include some examples of how I incorporate flexibility into my own assessments and the guidelines I use for my own assessment tools.
McLoughlin, C. & Luca, J. (2001). Quality in online delivery: What does it mean for assessment in e-learning environments? Available at: http://www.ascilite.org.au/conferences/melbourne01/pdf/papers/mcloughlinc2.pdf
Phillips, R. (2004). The design dimensions of e-learning. Presented December 2004 at the ASCILITE Conference, Perth, Australia. Available at: http://www.ascilite.org.au/conferences/perth04/procs/pdf/phillips.pdf
Sims, R., & Jones, D. (2002). Continuous improvement through shared understanding: Reconceptualising instructional design for online learning. Proceedings of the 2002 ASCILITE Conference: Winds of Change in the Sea of Learning. Available at: http://www.ascilite.org.au/conferences/auckland02/proceedings/papers/162.pdf
- V Yonkers
- Education, the knowledge society, and the global market, all connected through technology and cross-cultural communication skills, are what I am all about. I hope through this blog both to guide others and to travel myself across disciplines, borders, theories, languages, and cultures in order to create connections to knowledge around the world. I teach at the university level in the areas of Business, Language, Communication, and Technology.