About Me

Education, the knowledge society, and the global market, all connected through technology and cross-cultural communication skills, are what I am all about. I hope through this blog to both guide others and travel myself across disciplines, borders, theories, languages, and cultures in order to create connections to knowledge around the world. I teach at the university level in the areas of Business, Language, Communication, and Technology.

Wednesday, May 27, 2009

Training, Knowledge, and Assessment

As I do research on knowledge creation, organizational learning, and culture, I am also in the midst of my children taking their year-end standardized tests. Putting the two together, I began to realize that very little has been written on the perception of knowledge and assessment measures within a business learning context.

Current Forms of Assessment

Most training programs use some form of standardized test to measure individual and organizational learning. Cook and Brown (1999) point out that for the last three centuries the basis for our belief about what knowledge is has been the Cartesian view, in which "knowledge, particularly anything that might pass as rigorous knowledge, is something that is held in the head of an individual and is acquired, modeled, and expressed most accurately in the most objective and explicit terms possible" (p. 384).

The implication of this, as Cook and Brown point out, is that most in our culture believe that a person possesses knowledge and that knowledge exists only in the individual. However, as organizations try to harness individual knowledge, isn't there some level of collective knowledge that may exist outside of the individual?

Types of knowledge

In addition to the traditional view of knowledge (the scientific method), many researchers have divided knowledge into tacit and explicit (Cook and Brown, 1999), individual and collective (Ashton, 2004; Yakhlef, 2002), and information and know-how (Conceicao et al., 1998; Conceicao et al., 2003; Yakhlef, 2002).

Kolb (1984) also distinguished between two types of knowledge: apprehension and comprehension. Apprehensive knowledge is the intuitive process that happens as we experience the world; it makes us aware of what we are experiencing and how we perceive our world, although it may not yet have meaning. Comprehensive knowledge is the abstract ideas and understanding we create based on our experience.

The problem, then, is that if knowledge exists outside of the individual and/or can exist in a "non-coded" manner (as in tacit or apprehensive knowledge), how can we measure the level of knowledge that an individual or group might have? How do we assess "learning," or the acquisition of new knowledge, when that knowledge is tacit, know-how, or a level of "knowing" that reflects a deeper understanding of the knowledge an individual might possess?

Learning in Action

Cook and Brown, Kolb, and Dewey all addressed the issue of being able to put knowledge into action. This is what Cook and Brown call "knowing". A person does not "possess" all of the knowledge needed for action, although they might be able to access some of the knowledge they have in order to put what they know into action.

For example, my kids are currently preparing to take their state tests. My son does well on these tests because he "knows" how to take them: he can read a question and "know" what it is asking for. My daughter will have the knowledge of the subject, but often gets hung up on what "they" are asking for. She can read each multiple-choice response and, depending on her focus within the wording of the question, explain why each response is appropriate. While she possesses the knowledge to answer the questions, she doesn't know the answer. These standardized multiple-choice assessments are not testing her knowledge, but rather whether she knows how to take the test.

Likewise, my son can apply the mathematical processes needed to get the correct answer in trigonometry, but he doesn't understand the mathematical concepts behind the process. This means he has a difficult time when he needs to use problem-solving skills and determine which tool to use in a math problem. For the most part, he has learned to use certain tools when there are correlating words. But when those words are missing, or when he needs to determine which tools to use during a science lab, for example, he is lost. Multiple-choice or even essay questions don't measure the level of understanding, tacit knowledge, and/or "knowing" something.

This is an especially important distinction to make in the work environment.

Assessing Learning

So how do we assess this aspect of learning that is difficult to measure? This is an area that medicine and aeronautics have been working on over the last decade. Simulations, portfolios of work, practicums, and a required number of hours of "practice" are all means of assessing individual and group levels of "knowing."

Likewise, the bar exam and the CPA certification exam use a more complex method of assessing both knowledge and knowing. It is now important that "standardized" tests become more complex to capture the true state of learning. Training organizations need to spend time developing different ways to measure learning to report to management. Management needs to realize that numbers alone are not going to capture the real level of learning, knowledge, and knowing within the organization in the 21st century. To achieve this, new management theories need to be developed that include the organization, individuals, groups, and distributed work groups.


Ashton, D. (2004). The impact of organisational structure and practices on learning in the workplace. International Journal of Training and Development, 8(1), 43-53.

Conceicao, P., Heitor, M., Gibson, D., & Shariq, S. (1998). The emerging importance of knowledge for development: Implications for technology policy and innovation. Technological Forecasting and Social Change, 58, 181-202.

Conceicao, P., Heitor, M., & Veloso, F. (2003). Infrastructures, incentives, and institutions: Fostering distributed knowledge bases for the learning society. Technological Forecasting and Social Change, 70, 583-617.

Cook, S., & Brown, J. (1999). Bridging epistemologies: The generative dance between organizational knowledge and organizational knowing. Organization Science, 10(4), 381-400.

Kolb, D. (1984). Experiential learning: Experience as the source of learning and development. Englewood Cliffs, NJ: Prentice-Hall, Inc.

Yakhlef, A. (2002). Towards a discursive approach to organisational knowledge formation. Scandinavian Journal of Management, 18, 319-339.


Blogger In Middle-earth said...

Kia ora Virginia!

Often it is not necessary to examine minutely the knowledge or skill that is possessed by a person. Take the ability to play tennis, for example.

If we think of how this can be assessed, it will perhaps throw some light on how other skills should/could be assessed. True, assessment in some disciplines cannot be done as explicitly as in 'knowing' how to play tennis. BUT one would never set a written test to determine someone's achievement in the game of tennis, any more than marksmanship could be assessed by similar means. The test (whatever it contained) would determine something other than what was intended.

Most written tests are inappropriate, especially in a work-skill context. Even interview-type examinations would not necessarily be appropriate, unless for a very specific skill such as a journalist's/broadcaster's ability to conduct an interview.

There are some disciplines, such as in Performing Arts and in Creative Art, where the assessment method has been specifically tailored to suit the discipline.

But these are also difficult to assess in a comparative way, whereas ability in tennis, at least on a continuum scale, appears to be easier, as it is assessed by a process of comparison.

I think a lot of assessment problems arise purely because attempts are made to standardise the way assessments (in general) are conducted, such as by giving a written test. While some inappropriate instances of this are easy to spot, others are not.

Catchya later

V Yonkers said...

Your comment brings up something that people always complain about in assessing skills: not being objective.

Actually, this reminds me of something that has happened with my son's lacrosse team. He began the year very strong, with others looking up to him. However, soon the coaches began to "measure" skills by counting the number of goals, the number of passes made and caught, etc. What this did was pit player against player: his teammates would refuse to pass to those who might drop the ball, took risks at making goals (which counted more), and stopped playing as a team, with no one playing defense (not one of the skills measured).

This is the group knowledge that could not be counted, so the team lost its coherence. But lacrosse is a team sport, and the best teams don't necessarily have the best players, but the best team players.

Blogger In Middle-earth said...

Kia ora Virginia!

There is a range of disciplines in which one can observe the different strengths you have described: best players vs. best team players. One of them is music, where a solo singer who may be the best in his or her field is not necessarily the best group singer when it comes to singing with others. Similarly, musicians can find their different strengths this way. Having empathy with others seems to be a common factor that tends to link all team members, whether they be singers or footballers, sprinters or dancers.

Catchya later