Sunday, June 9, 2013

Traditional, Blended, E-Learning, Results and Digital Natives

The study "Comparing Effectiveness of Traditional versus Blended Teaching Methods: Efforts to Meet the Demand of Students in a Blend 2.0" illustrates that, when designed appropriately for the learning situation, a blended learning environment is as effective as a traditional one. This study asserts that students are "craving technology in the classroom." While this may generally be the case, it is clear from this study's results that there were no significant differences in overall effectiveness between a blended environment and a traditional one.

Perhaps this conclusion illustrates that a well-designed course will be effective regardless of the approach. In fact, the study acknowledges that the instructor was dedicated to making "the two sections of the class as comparable as possible," and the instructional design for both classes included a "media rich environment." Thus, it is important to note that technology played a significant role in both learning environments; even though one environment was called "traditional," it did not exclude the use of technology.

However, another implication is that some courses may work better in a blended environment than others. This study examined a single course taught by the same instructor, using the same material, with two classes of similar composition: one class experienced a traditional setting, while the other learned the course material through a blended environment. Because the study addresses only one course, the results are worth considering, but one should be careful not to generalize them across the board.

I was interested in the study's observation that while blogs and wikis are commonly used in classrooms, many students do not use them for informal communication. In contrast, the use of social networking sites is "widespread." This was not a surprise to me, nor do I think it would be a surprise to many readers. When I ask students whether they have used a blog before, few say yes, but most say yes to having a Facebook account.

One result in this study is the positive response that students had to the blended environment for "pragmatic reasons." Students liked the fact that they could work from home at any time of day. Such evidence illustrates that the use of technology can enhance learning approaches such as Project-Based Learning and flipped classrooms. Many approaches, particularly ones that include constructivist activities, require learners to take more responsibility for their learning. Thus, the ability to learn anywhere at any time can be both empowering and engaging.

However, as the study indicates, it is crucial to ensure that learners are guided as they become familiar with the technology and software, and time is also required for learners to adapt and grow comfortable with online experiences with which they have little to no familiarity, such as online discussions via audio chat. Students in this study sometimes felt discouraged or uncomfortable in the virtual learning environment. As the study points out, this finding goes against much of the discussion in educational circles about "digital natives" and how they are inherently technologically inclined. This finding did not surprise me. From my own experience and from some articles I have read, I am not convinced the term "digital natives" accurately labels younger learners. While there is some argument that they may be more accepting of technology for a wide range of purposes, my own experiences in the classroom have led me to believe that the term can be a very misleading generalization.

I was particularly surprised by some of the feedback from students in regard to online communication. It is common to hear the argument that an online discussion can allow introverted or shy students the opportunity to speak out when they may be uncomfortable doing so in a traditional classroom discussion. The study acknowledges this, but one student in the study stated that "Talking and not seeing faces really makes me feel uncomfortable because of my shyness." While this is only one comment out of many, perhaps it does indicate that, as educators, we cannot assume that the use of technology will encourage shy students to participate. We still need to foster support and encouragement, and not see technology as a quick fix.

The article "Evaluating E-Learning Effectiveness Using Kirkpatrick’s Four-Level Model" discusses four steps that can be used to determine the effectiveness of an e-learning experience. While this approach has mainly been used by corporations that employ e-learning for training purposes, I think it also has benefits for e-learning experiences created for other educational purposes.

The first step is Reactions. As stated in the article, this implies "measuring satisfaction." This is not unlike the previous study, in which the authors highlight feedback from students based on their experiences in either a traditional or blended learning environment. The first step is easy to apply, as it is really a matter of data collection and observation. Further, as the article stresses, honesty is promoted by allowing anonymous responses.
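To make this concrete, here is a minimal sketch of what collecting Reactions data could look like: aggregating anonymous Likert-scale ratings by survey item. The items and scores below are entirely hypothetical, not taken from either article.

```python
# Sketch of Kirkpatrick Level 1 (Reactions): aggregating anonymous
# satisfaction ratings. Survey items and ratings are hypothetical.
from statistics import mean

# Anonymous responses on a 1-5 Likert scale (no student identifiers kept)
responses = {
    "course_overall":   [4, 5, 3, 4, 4],
    "online_materials": [3, 4, 4, 2, 5],
    "instructor":       [5, 5, 4, 4, 3],
}

for item, ratings in responses.items():
    print(f"{item}: mean satisfaction = {mean(ratings):.2f}")
```

Keeping the responses anonymous, as the article stresses, is what makes the averages worth trusting.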

Step two is Learning, and like the first step, this too was evident in the previous study. It is normally completed through a quantitative evaluation that can include a pretest-posttest approach and a comparison with a control group when possible. Although the previous study was testing the effectiveness of a blended environment rather than e-learning, it essentially followed this step by using the students in the traditional environment as the control group.
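The pretest-posttest comparison described above can be sketched in a few lines: compute each group's average gain and compare them. All scores below are invented for illustration; a real evaluation would also apply a significance test.

```python
# Sketch of Kirkpatrick Level 2 (Learning): pretest-posttest gains for a
# treatment group versus a control group. All scores are hypothetical.
from statistics import mean

blended_pre  = [55, 60, 48, 70, 62]   # blended section, before the course
blended_post = [72, 78, 65, 85, 80]   # blended section, after the course
trad_pre     = [54, 61, 50, 68, 63]   # traditional section as control group
trad_post    = [70, 75, 66, 82, 79]

blended_gain = mean(blended_post) - mean(blended_pre)
trad_gain    = mean(trad_post) - mean(trad_pre)

print(f"Blended mean gain:     {blended_gain:.1f}")
print(f"Traditional mean gain: {trad_gain:.1f}")
print(f"Difference in gains:   {blended_gain - trad_gain:.1f}")
```

A difference in gains near zero would echo the first study's finding of no significant difference between the two environments.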

Step three is Behaviour. In this step the focus is on "the translation of the acquired knowledge into performance in the workplace." This step may not be as easy to apply when analyzing learning environments in schools. The article advises waiting around three months before taking the measurements; in many school environments this is not realistic. In the study discussed earlier, the only measurement of acquired knowledge was the students' marks.

The last step is Results, and this step is particularly difficult not only in the corporate world, but also in the education world. Essentially, this step addresses whether the training had an impact on business results. To measure this in the corporate world, the article suggests weighing the cost of evaluation against the potential benefits. This can be a difficult measurement because of the challenge of quantifying potential benefits, particularly in education. The article acknowledges this challenge and points out that most evaluations using this method on university programs have focused only on the first two steps (or levels) of the model.
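The cost-versus-benefit weighing in this step is often expressed as a simple return-on-investment ratio. The figures below are hypothetical, and putting a dollar value on "benefit" is exactly the hard part the article notes.

```python
# Sketch of Kirkpatrick Level 4 (Results) as a return-on-investment
# calculation: ROI = (benefit - cost) / cost. All figures are hypothetical.
program_cost      = 25_000.0  # e.g. licensing, hardware, training time
estimated_benefit = 32_000.0  # e.g. an estimated value placed on outcomes

roi = (estimated_benefit - program_cost) / program_cost
print(f"ROI: {roi:.0%}")  # positive means the estimated benefit exceeded cost
```

The arithmetic is trivial; the subjective part is the estimated_benefit figure, which is why this level so often goes unmeasured.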

I know from my own experience that trying to determine whether the cost of a learning program is worth maintaining can become a very subjective journey. In one situation, some teachers argued for continued investment in a costly program because of improvements in provincial assessment results. On the other side of the debate were teachers who did not accept that the results were linked to this particular program, since other interventions had also occurred that they felt could account for the improved assessment results.

I think the discussion of this last step in the article is important, because as educators put more time and effort into technology and schools invest money, more time should also be invested in how we measure the results. Of course, this road can lead to dangerous consequences, where students are judged or graded on rigid standardized testing, and decisions could be made about the use of technology and software without feedback from teachers and students. While some type of quantitative data may need to be collected to ensure time and money are invested wisely, it should certainly be balanced with qualitative data as well. Step four should never outweigh the other three steps, but rather encourage educators and other stakeholders to consider new approaches that could help improve results.

Below is a summary of the results from the article when this four stage model was applied to e-learning:
  • Reactions: Students expressed higher levels of satisfaction in e-learning, and the factor with the highest impact on satisfaction was interaction, followed by instruction, administrative processes and technological functionality.
  • Knowledge: Students are mixed on how well they feel they learn in an e-learning environment. One study did show that students improved their academic performance, yet some studies indicate that students feel they learn less in e-learning, and they learn better through classroom methods versus online activities.
  • Behaviour: The use of e-learning does appear to improve work behaviours and many report applying the skills and knowledge they learned through e-learning opportunities in their work environments. 
  • Results (Return on Investment): Very little evidence is available, which may be because of the difficulty in completing the necessary measurements.
