About Me

Education, the knowledge society, and the global market, all connected through technology and cross-cultural communication skills, are what I am all about. Through this blog I hope both to guide others and to travel myself across disciplines, borders, theories, languages, and cultures in order to create connections to knowledge around the world. I teach at the university level in the areas of Business, Language, Communication, and Technology.

Thursday, July 5, 2012

Managing the data

Now that I have completed my dissertation, there are some aspects of the research process I want to write about. One topic is related to questions I received at my defense and issues that have been brought up on #phdchat.

One of the first issues was whether to use software like NVIVO or come up with my own process/technology. In the end, after downloading NVIVO and trying it out, I felt the effort outweighed the benefit. I have trouble using software that does not fit my own style, writing and thinking styles included.

So I opted for index cards and word processing software. The index cards allowed me to move ideas around, making connections between ideas and reorganizing so related ideas could be reused. I also did a lot of hand diagrams. As with NVIVO, I have yet to find diagramming software that matches the way I like to think.

I was doing a qualitative study, so there was a lot of data that needed to be coded. I ended up writing the emerging codes (in pencil) on index cards, then discarding or revising the codes (by hand) as I coded electronically. I used the comment function to code the data and highlighted key words (which I eventually began to write down so I could locate the data later on).
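For anyone who would rather keep this step digital, here is a minimal sketch (in Python, purely as an illustration and not the tool I used) of the same bookkeeping: each emerging code paired with the key words that flag it, with codes revised as the analysis moves along. The code names and key words are made-up examples.

```python
# Illustrative only: a simple stand-in for the pencil-and-index-card codes.
# The code names and key words below are invented examples, not my actual codes.
emerging_codes = {
    "peer feedback": ["feedback", "comments from classmates"],
    "tool frustration": ["confusing", "couldn't figure out"],
}

def revise_code(codes, old_name, new_name):
    """Rename a code while keeping its key words, like erasing and rewriting a card."""
    codes[new_name] = codes.pop(old_name)

revise_code(emerging_codes, "tool frustration", "technology barriers")
print(emerging_codes)
```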

Using Charmaz's constructivist grounded theory process, I answered each research question using the index cards with each of the codes. Not all of the index cards/codes were used, as some did not answer the research questions. I laid out the codes and began to build answers to the research questions, moving the index cards around when they appeared to be related. Once a "category" of codes emerged, I would fill out an index card with the theme and then write out each code that fell under that theme.
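If you wanted to mirror that card-sorting step in software, a rough sketch might look like the following; the research questions, themes, and codes are invented placeholders, and the grouping itself still has to come from your own reading of the data.

```python
# Illustrative placeholders: the analyst decides which codes belong together;
# the script only records the result of that sorting.
themes_by_question = {
    "RQ1: How did participants use the tools?": {
        "experimentation": ["trial and error", "playing with features"],
        "technology barriers": ["confusing", "couldn't figure out"],
    },
    "RQ2: How did collaboration change?": {
        "peer support": ["peer feedback", "shared drafts"],
    },
}

for question, themes in themes_by_question.items():
    print(question)
    for theme, codes in themes.items():
        print(f"  Theme: {theme} <- codes: {', '.join(codes)}")
```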

I'm not sure I could have achieved this part of the process if I had used a computer. The advantage of the index cards was that I could see many ideas at one time and manipulate the cards to reflect the various relationships between the ideas.

Once I had the themes, I went back into the data and identified passages that supported the emerging themes, using the key words I had identified (on the coded index cards) and highlighting the text in different colors. I then cut and pasted each of those passages, identifying the source and page number (e.g., Ronda, interview 1 as the section title, p. 4 {passage}). I also noted when there was no data from participants for a specific theme. At the end of this I had about 40 pages of data arranged by theme and further arranged by participant and/or document.
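The passage-collection step could also be scripted. As a hedged sketch (the transcript records, key words, and record layout are assumptions, not what I actually used), something like this would pull out key-word matches and keep the source and page with each passage so it can be cited later:

```python
# Illustrative sketch: collect passages containing a theme's key words,
# tagging each with its source and page number for later citation.
# The transcript entries and key words are made-up examples.
transcripts = [
    {"source": "Ronda, interview 1", "page": 4,
     "text": "The peer feedback on my draft changed how I revised it."},
    {"source": "Ronda, interview 2", "page": 2,
     "text": "I mostly worked alone that week."},
]

def passages_for_theme(transcripts, key_words):
    """Return (source, page, text) for every passage mentioning a key word."""
    hits = []
    for t in transcripts:
        if any(k.lower() in t["text"].lower() for k in key_words):
            hits.append((t["source"], t["page"], t["text"]))
    return hits

for source, page, text in passages_for_theme(transcripts, ["peer feedback"]):
    print(f"{source}, p. {page}: {text}")
```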

Finally, I documented the relationships in the data over the study period, developing a visual of how the data changed over the course of the study.

Once I had this, I began to write up my findings by answering the research questions. I would then go into the data and develop my conclusions/findings. I kept going back to the master list of "passages," looking for examples that would support my conclusions or reading through all of the passages to help fine-tune my findings. This was especially important during my revisions and my preparation for the defense. It also helped when I was asked to give more detail on my research process, as I still had all the cards, hand-created models, and printouts of the passages I used.

I have written this post in the hope that those with limited resources can use this approach to help expedite their own research. It was a relatively low-tech, yet thorough, process that anyone could use.
