About Me

Education, the knowledge society, and the global market, all connected through technology and cross-cultural communication skills, are what I am all about. I hope through this blog both to guide others and to travel myself across disciplines, borders, theories, languages, and cultures in order to create connections to knowledge around the world. I teach at the university level in the areas of Business, Language, Communication, and Technology.

Sunday, June 21, 2009

Incorporating assessment into a constructivism based instructional design

Michael Hanley's 15th and 16th installments on instructional design had me thinking about assessment. In these two installments, Michael introduces a flexible instructional design model that fits well with constructivism. However, as I commented on his blog, there is little discussion of assessment tools that would maintain that level of flexibility in learning. He asked if I had any suggestions.

In fact, when I teach an introduction to distance learning, this is an area I work on developing with my students. The following is a compilation of the notes I give my students, with some updates based on my own experience:

The assessment-learning connection

Sometimes students need to be pushed beyond their comfort zone. If we want our students to succeed in the real world, at some point they will need to learn how to take initiative and risks; those are the employees who are rewarded with promotions and greater responsibilities. However, some students (for a variety of reasons, including lack of confidence, maturity, background, or personality) will always avoid the risk of failure. Constructivist activities, especially those designed for distance learning, generally allow students to step past these fears into a supportive environment where they can risk failure, as long as there are supports in place to help them if they do fail. These supports include instructional support (either preparing them for the experience or using the experience as a stepping stone for further targeted instruction); emotional support (empowering students when they make mistakes rather than leaving them with a sense of humiliation, and building confidence by finding success in making errors and learning from or working through them); community support (making sure members know what their role is and creating an atmosphere that builds trust); and support in setting goals and processes to achieve them (helping students make the choices presented to them, helping them develop time management skills with a timeline or structure within which to work, and giving clear expectations for any activity while being flexible--or not--depending on student needs).

In looking at assessment, little attention is often given to trust and trust-building activities. Assessment and evaluation often leave students feeling "set up" and betrayed. This can happen much more easily in distance learning than in a traditional class, since there are none of the social cues found in a traditional classroom. It is important, therefore, that any constructivist instructional design begin with an analysis of the current level of trust between the various "presences"--teacher, students, community, and discipline (Phillips, 2004). In all instructional designs, the designer should analyze (either implicitly or explicitly) how trust will be impacted, supported, maintained, and developed. You may need to explicitly identify assessments and activities that would undermine trust (e.g., a reflective blog identifying weaknesses and solutions for a group that can be seen by a supervisor and used in promotion considerations).

Looking at the variety of activities that can be used in a flexible instructional design, such as the models proposed by Phillips (2004) and Sims and Jones (2002), there needs to be a corresponding variety of ways to assess learning. Different activities provide different types of learning for different learning needs. Likewise, different types of assessment address the different ways that people apply their learning and demonstrate their knowledge.

Assessment Tools

We all know the traditional ways we assess learning in the classroom: quizzes, multiple choice tests, joint projects, and so on. In the classroom, especially at the elementary and middle school levels, we can see the class dynamics; we have social cues that indicate when a student "gets it." But this is not always true in distributed, distance, or adult learning. Technology can distort feedback (both giving and receiving it). In addition, the time delay in feedback can be both positive (giving students time to reflect) and negative (frustrating students when they are stuck with no visible means of support). So for distance learning, it is important to determine not only the best way to assess student learning (what did they really learn, and how well did they learn it?) but also the best way to provide evaluation so that administrators, parents, employers, other faculty, and the students themselves understand how well they met the expected outcomes and where there needs to be improvement.

As assessment tools are chosen and created, try to think through these questions: How can the learning activity be justified to all stakeholders (e.g., the $500 spent on a video conference with the Baseball Hall of Fame)? What type of feedback will help focus student learning? What did students really learn (in addition to the content, perhaps communication and technology skills, problem solving and group skills, and an understanding of the world outside the classroom), and how can this be documented? How will the skills students have learned benefit them in their future? Did the learning activity allow them to use what they learned immediately, or will there be a delay (a need for practice or reflection, for example) before they can demonstrate their understanding? Will assessment be part of the learning, or is it used to measure learning for outside stakeholders?

If we follow the advice of constructivist researchers, the best activities for learning are ill-defined and action-based, with multiple possible outcomes. As such, will a multiple choice test be appropriate to measure student learning and outcomes? The reality is that our current educational system requires that we test in a time-efficient way, which means multiple choice or "objective" testing. As a result, as teachers, we feel compelled to use these tests to prepare our students. So how do we resolve these competing forces?

In a traditional classroom, where there is pressure to produce standardized test scores, a combination of assessment tools can therefore be used: follow-up classroom activities that lead into objective testing, student presentations on what has been learned, application of constructivist concepts to a traditional classroom activity, and application of constructivist skills (critical thinking, situating learning, perspective taking) in other contexts that fit traditional classroom assessment requirements.

In non-traditional classrooms (i.e., online learning, self-directed learning), on the other hand, assessment tools must match the teaching approach: reflective journals, online discussions, group work and presentations, projects, and papers. Grading rubrics help guide students in terms of expectations. Written feedback from both instructors and peers also helps establish expectations and perceived learning outcomes. I use written feedback and a holistic approach to grading (the approach used on the written sections of the SAT, GRE, and GMAT). Others feel more comfortable using a rubric, especially when there are multiple instructors for the same course. More and more, I have seen departments work together to train evaluators and develop group standards through feedback and by triangulating grading among evaluators.

A number of researchers have also suggested including student input in developing assessment tools (McLoughlin & Luca, 2001). The co-creation of assessment tools means students are not only constructing their learning but are also aware, as they do so, of how they will be able to demonstrate their level of learning and of stakeholders' expectations.

Some samples of alternative assessments that were developed by my students can be found on my web page, in the right-hand margin under Evaluation Rubrics. Over the next few posts I will include some examples of how I incorporate flexibility into my own assessments and the guidelines I use for my own assessment tools.


McLoughlin, C., & Luca, J. (2001). Quality in online delivery: What does it mean for assessment in e-learning environments? Available at: http://www.ascilite.org.au/conferences/melbourne01/pdf/papers/mcloughlinc2.pdf

Phillips, R. (2004). The design dimensions of e-learning. Paper presented at the ASCILITE Conference, Perth, Australia, December 2004. Available at: http://www.ascilite.org.au/conferences/perth04/procs/pdf/phillips.pdf

Sims, R., & Jones, D. (2002). Continuous improvement through shared understanding: Reconceptualising instructional design for online learning. Proceedings of the 2002 ASCILITE Conference: Winds of Change in the Sea of Learning: Charting the Course of Digital Education. Available at: http://www.ascilite.org.au/conferences/auckland02/proceedings/papers/162.pdf


Michael Hanley said...

Great post - really comprehensive, considered, and well-balanced. If you don't mind, I'd like to link to it in the next part of my series on instructional design.
Best regards,

V Yonkers said...

No problem.

Kathreen said...

Hi V,

Another key component to constructivist learning and sustained student engagement is the integration of peer based assessment strategies. What kind of peer based assessment tools are available for the distance/online learner?

V Yonkers said...

Great point Kathreen. As you see from the rubrics my students developed, I have them work as a group in developing assessment tools. I also do a lot with students posting drafts of projects for peer review and comments. For myself, I try to place students in different groups throughout the semester but with similar interests and backgrounds.

For example, the rubrics were developed by students who taught at each of the grade levels. These rubrics were then posted for comments from other groups, with guided questions. In addition to the group work, I have each individual assess the relevance of classmates' work to their own situation as part of their journals. For example, when I covered technology, I placed students in groups according to their interest in a specific technology. They posted their findings in an online presentation to the class, who then commented and/or asked questions about the use of the technology in their own situations. Then students wrote a journal entry analyzing each of the groups' presentations and how the technology might be used in their own situation.

This allowed me to assess the impact of the presentations, how prepared the students were to apply their group research to a real-life context, and individual as well as group learning.

Michael Hanley said...

Hi Virginia and Kathreen; have either of you undertaken any performance-based assessment in a constructivist context? I'm thinking specifically about using Read/Write Web tools to support students' portfolios, individual and collaborative essays, and mind maps to demonstrate knowledge and skill acquisition at the higher levels of Bloom's Taxonomy in the cognitive domain (analysis/synthesis/evaluation).

Similarly, have you any views on instruments to measure learners using criterion-referencing, norm-referencing, and ipsative rubrics?

BTW - I'm really enjoying reading this thread!

V Yonkers said...

I have used electronic portfolios, blogs/journals, and wikis. I have also had students work collaboratively on primary research, presenting their findings in a format of their choice, which can include podcasts, wikis, social networks (such as a Ning), video (e.g., on YouTube), web pages, or even an interactive game (especially effective for my education students). Truthfully, my teaching style tends to focus on performance assessment rather than content assessment, especially in distance or online learning, as I believe content will change depending on a student's situation (situated learning). My online students come into the course with such a large variety of backgrounds, learning goals, experience, fields of study, age/demographics, and even majors (courses of study) that I need to focus on skills and on their use of content that is relevant to their own situation.

In terms of any rubric, I think the way a rubric is written is very important (more so than the reason for the rubric, whether it be norm-referenced, criterion-referenced, or ipsative). Many of my students come into my class with a certain rubric in mind but soon find it is important to create their own based on the learning objectives. If you look at the rubrics my students created for the class (see the link above to my home page, where the rubrics are located), there are some good electronic tools based on the level students are at. However, as the students worked in groups to develop a "generic" tool for each level of education, they found shortcomings in the standard rubrics each had used and realized that the tool was not as important as knowing what you really want to measure. They were surprised to discover that they often had a rubric that was much too narrow (usually based on content rather than performance). The primary education group had the easiest time creating a rubric, because the curriculum in New York State uses both criterion-based and normative learning goals, which made it easy to populate the rubric.

What I have discovered over the years is that the process for developing the rubric is what matters most. For my own part, I use a version of Rogers and Rymer's rubric.

Blogger In Middle-earth said...

Kia ora Virginia!

You certainly have gone the extra light year here. I concur with what you believe about the distance learner having no social cues around times and occasions for assessment.

One real area of concern for me with assessing distance learners for qualifications is that there is little or no opportunity for teacher monitoring close to assessment.

The onus is on the learner to check all the tick-boxes. If they don't do that thoroughly, time can be wasted and the learner can feel betrayed, as you say.

I have a strategy that can sometimes be difficult to implement, but I use it nevertheless. It involves explicit written instruction (in several places) as well as phoning and interviewing the learner beforehand. Of course, this relies on successfully contacting the learner at the appropriate times, but I've found that when all that happens, it is worthwhile.

Catchya later