Sunday, June 13, 2010

Assessing e-learning platforms

In this post I'm sharing some points I gathered from a paper that debates whether e-learning is actually a value-adding process that challenges the way we teach. The paper, titled "E-Learning: Just a Waste of Time", describes why some e-learning courses really are a waste of time, but it also proposes an innovative method for assessing e-learning platforms, discussed later on as the MDL cube.

At first glance, lecture-based e-learning courses often fail to promote this 'value-creating process' for either students or teachers, and realising such a course requires enormous effort. Some people regard implementing e-learning as being as simple as publishing a presentation online, while others promote themselves as e-learning specialists without any scientific basis. High dropout rates from e-learning courses are often due to insufficient effort in creating value in e-learning.

The authors argue that the vast majority of e-learning applications, such as Blackboard, fail to establish a unique learning experience. Utilising such systems exposes their inflexible knowledge management processes, and e-learning is mainly about the management of knowledge. Compared with traditional teaching, e-learning requires much more effort to achieve equal or improved learning outcomes. High acceptance of e-learning depends on joint efforts combining teachers, students, technology and learning processes.

The majority of e-learning platforms base their design on simulating the traditional way of teaching. A number of essential questions emerge, such as the following:
1. How does e-learning differ from traditional learning?
2. Can we define concrete ways to enrich content in a virtual environment?

The common practice is to buy an e-learning platform, adopt or buy content, and deliver the material to learners. This is an easy way to claim a presence in e-learning, regardless of whether any value is actually passed on to the learners.

The authors introduce the concept of Multidimensional Dynamic e-Learning (MDL), which stems from their research into expanding the traditional view of what makes e-learning important. The model is based on three dimensions:

  1. The Knowledge Management dimension
  2. The e-Learning dimension
  3. The application integration dimension

1. Knowledge management sophistication is the ability of the e-learning platform to manage learning content in various formats, for instance by representing content in a common, cross-platform scheme such as XML (a minimal sketch follows below).
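
As a purely illustrative sketch (the element names and fields below are my own assumptions, not something defined in the paper), the following Python snippet shows how heterogeneous learning content could be normalised into a simple, cross-platform XML scheme:

    import xml.etree.ElementTree as ET

    def content_to_xml(item):
        """Wrap a learning-content record (a plain dict) in a simple,
        platform-neutral XML element. The tag and field names are hypothetical."""
        node = ET.Element("learningObject", id=str(item["id"]))
        ET.SubElement(node, "title").text = item["title"]
        ET.SubElement(node, "format").text = item["format"]  # e.g. "video", "slides", "quiz"
        ET.SubElement(node, "body").text = item["body"]
        return node

    root = ET.Element("course", name="Example Course")
    root.append(content_to_xml({"id": 1, "title": "Lecture 1",
                                "format": "slides", "body": "..."}))
    print(ET.tostring(root, encoding="unicode"))

The point is not these particular tags, but that content stored in one neutral scheme can be exchanged between platforms far more easily than content locked into a proprietary format.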

2. The e-Learning dimension summarises the ability of the e-learning system to construct effective learning mechanisms and processes that support different educational goals, taking into account learning styles, learning needs and learning templates.

3. The application integration dimension refers to the ability to collaborate with other applications. For example, this could relate to how the e-learning platform connects to a school management system such as e-sims (a generic integration sketch follows below).
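
To make that idea a bit more concrete, here is a deliberately generic sketch of such an integration. The endpoint, authentication scheme and payload fields are hypothetical placeholders, not the actual e-sims interface:

    import json
    import urllib.request

    def push_grade(base_url, api_token, student_id, course_id, grade):
        """Send one grade record from the e-learning platform to a
        (hypothetical) school-management-system REST endpoint."""
        payload = json.dumps({
            "studentId": student_id,
            "courseId": course_id,
            "grade": grade,
        }).encode("utf-8")
        request = urllib.request.Request(
            url=base_url + "/api/grades",  # placeholder endpoint
            data=payload,
            headers={
                "Content-Type": "application/json",
                "Authorization": "Bearer " + api_token,  # placeholder auth scheme
            },
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            return response.status  # e.g. 201 if the record was created

    # Example call with made-up values:
    # push_grade("https://sms.example.edu", "TOKEN", "s1234", "MATH101", 85)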

With this model, every e-learning platform can be positioned somewhere in the MDL cube shown above. This provides a method for evaluating any e-learning platform: a system that realises the upper-right region of the cube, scoring highly on all three dimensions, would be the ideal.
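
As a toy illustration of that positioning idea (the 0-1 scale and the example scores are my own, not taken from the paper), each platform could be rated per dimension and treated as a point in the cube:

    def mdl_position(knowledge_mgmt, e_learning, integration):
        """Return a platform's position in the MDL cube as a
        (knowledge management, e-learning, integration) triple, each on a 0-1 scale."""
        for score in (knowledge_mgmt, e_learning, integration):
            if not 0.0 <= score <= 1.0:
                raise ValueError("each dimension score must be between 0 and 1")
        return (knowledge_mgmt, e_learning, integration)

    platform_a = mdl_position(0.8, 0.6, 0.3)  # strong content handling, weak integration
    platform_b = mdl_position(0.5, 0.7, 0.9)  # well integrated with other systems
    # The closer a platform sits to (1, 1, 1), the nearer it is to the ideal corner of the cube.
    print(platform_a, platform_b)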

Find the full text of the paper by Lytras and Pouloudi, "E-Learning: Just a Waste of Time"; it is quite an interesting read, especially for anyone looking for methods and criteria with which to assess e-learning systems.

1 comment:

  1. In a paper I read on a similar topic, the authors state that one must assess an e-learning system based on its usability, which should consider the following:
    • Learnability, which concerns how well novice users are able to deal with the system in order to work effectively within an appropriate time frame.
    • Efficiency, which relates to how efficiently the system enables a user to achieve his or her objective in a few mouse clicks or keyboard entries.
    • Memorability, which is key for users who work with the system infrequently. The system should help such users remember its basic functions after a long time has elapsed.
    • Errors, which refer to the number of failures and the ability of the system to offer adequate help in correcting mistakes as they occur.
    • Satisfaction, which reflects the user’s subjective impression of the system and is closely connected to the above four constituents of usability.
    A variety of evaluation methods exist, and these can be broadly classified into two groups: analytical methods and empirical usability evaluation methods. Analytical methods are conducted by usability experts who put themselves in the position of users to assess the system, and they are suited to early evaluations during the development phase. Empirical usability evaluation methods consist of usability tests and questionnaires, for which interaction with real users of the system is mandatory.
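    Purely as an illustration of the questionnaire-based, empirical route (the rating scale and the numbers below are invented), aggregating respondents' ratings per usability constituent could look like this:
        # Hypothetical questionnaire data: each respondent rates the five
        # usability constituents from 1 (poor) to 5 (excellent).
        CONSTITUENTS = ["learnability", "efficiency", "memorability", "errors", "satisfaction"]
        responses = [
            {"learnability": 4, "efficiency": 3, "memorability": 4, "errors": 2, "satisfaction": 4},
            {"learnability": 5, "efficiency": 4, "memorability": 3, "errors": 3, "satisfaction": 4},
        ]
        averages = {c: sum(r[c] for r in responses) / len(responses) for c in CONSTITUENTS}
        print(averages)  # e.g. {'learnability': 4.5, 'efficiency': 3.5, ...}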
    References:
    Blecken, A., Bruggemann, D., & Marx, W. (2010). Usability Evaluation of a Learning Management System.
