The open educational resources (OER) movement is roughly ten years old (Wiley & Gurell, 2009). Since that time, thousands of resources have been produced. Though these resources have been used both in classroom instruction and by autodidacts, the development of OER is not without problems. One possible explanation for the limited reuse observed is the barrier of technology. Tools to help educators find appropriate educational resources are necessary but still nascent, and the technical difficulty of revising some formats is another possible reason that OER is only being minimally reused (Wiley, 2009). This presentation will examine the results of a Delphi study and an experiment in rating a sample of OER to measure the technical difficulty of reuse.
Some OER is easier to view, revise, and remix than others. For example, a PDF cannot be easily altered but is widely viewable. In contrast, a file saved in the XCF format of the GIMP photo editor is in a fully open format but cannot be easily viewed. Hilton, Wiley, Stein, and Johnson (2010) created the ALMS analysis framework to assess the technical openness of an open educational resource. Unlike Geser (2007), Hilton et al. (2010) suggest that openness is a continuum rather than a binary concept.
Although the ALMS framework allows for an assessment of open educational resources, no actual measurement is reported in Hilton et al. (2010). The framework has not been tested because there is no known rubric with which such measurement can occur. Consequently, Hilton et al.'s framework needs to be tested against a range of open educational resources. The ALMS analysis framework consists of the following elements:
- Access to editing tools
- Level of expertise required to revise or remix
- Meaningfully editable
- Source-file access
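As an illustration only (building an actual rubric is precisely what the study sets out to do), the four ALMS dimensions could be encoded as per-dimension ratings that combine into a single continuum score rather than an open/closed flag. The dimension names below come from Hilton et al. (2010); the 0–4 scale, the averaging, and the example ratings are hypothetical assumptions, not the study's instrument:

```python
# Illustrative sketch of an ALMS-style rating. The dimension names follow
# Hilton et al. (2010); the 0-4 scale and the averaging rule are assumptions
# made for this example, not the rubric developed in the study.

ALMS_DIMENSIONS = (
    "access_to_editing_tools",
    "level_of_expertise_required",
    "meaningfully_editable",
    "source_file_access",
)

def alms_score(ratings):
    """Average per-dimension ratings (0 = closed, 4 = fully open) into a
    single score, treating openness as a continuum rather than a binary."""
    missing = set(ALMS_DIMENSIONS) - set(ratings)
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    return sum(ratings[d] for d in ALMS_DIMENSIONS) / len(ALMS_DIMENSIONS)

# Hypothetical example: a PDF is widely viewable but hard to revise or remix.
pdf_ratings = {
    "access_to_editing_tools": 1,
    "level_of_expertise_required": 2,
    "meaningfully_editable": 1,
    "source_file_access": 0,
}
print(alms_score(pdf_ratings))  # 1.0
```

A scheme along these lines would let two raters score the same resource independently, which is what makes inter-rater reliability measurable at all.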
This presentation will explore an attempt to take the concept of an ALMS analysis to a concrete framework with sufficient detail and documentation to allow inter-rater reliability to be measured and comparisons to be made across OER.