Monday 15 October 2007

Evaluation Workshop 10th October 2007

Last week members of the WebPA team attended an Evaluation Workshop which was run by a consultancy company called Glenaffric. The audience was a mixture of JISC representatives, consultants and other project teams like ours.

The day revolved around working through the Glenaffric six-stage model for effective evaluation, which can be found at: http://www.jisc.ac.uk/media/documents/programmes/elearningcapital/sixsteps.pdf

There were two types of evaluation that we could carry out within our project: formative and summative. Formative evaluation is carried out during the life of the project and is for 'improving' what you are doing. Summative evaluation is carried out at the end and is to 'prove' how the project has improved something.


The first phase in the six-stage model is to identify the numerous stakeholders who could benefit from and be involved with both formative and summative evaluation, including academics, students, JISC, the Open Source Community and L&T support staff, to name a handful.

Once we had identified stakeholders, we had to identify methods for gathering data and evidence to answer our overall evaluation questions (the second phase). This made us think about what we wanted to evaluate. What did we want to prove or show to others that we had done, and how would we do this? It was suggested that tried and tested simple methods should be used; for example, using existing contacts (or brokers) to be involved with the evaluation would be a good idea.

We felt that the community could be used to evaluate different aspects of the WebPA software and the documents we produce, giving us feedback and telling other members of the community how useful these are.

The third step was to design our evaluation. However, because this would take longer than the day session, we instead discussed more ways in which we could carry out the evaluation and what our evaluation questions were. Some example summative evaluation questions were:

  • Do students have a more positive experience of group work activities linked to assessment? Is it fair, compared to other methods?
  • What evidence is there to suggest that using WebPA can save academics’ time?
  • What evidence is there to show that WebPA has been adopted and become sustainable in any other institution?
Some formative evaluation questions were:
  • Has the feedback from end users and the community informed the development, implementation and associated practices?
  • Have the project’s outputs been fed back into the community? Are the project outputs reaching the right people?
Step four was to talk about gathering evidence. It was agreed that this may be the hardest part of the evaluation. For example, it may be difficult to evaluate how WebPA has been embedded within institutions and what impact it has had. Proving this could be difficult, especially as embedding WebPA within an institution may take much longer than the lifetime of the project.

The penultimate step was to analyse the results. We again thought that the community could be involved with this and do some of the analysis, either with their own data or with feedback gathered by the project. This stage is where we use the data in a way that is meaningful to the project evaluation questions; there is no point in gathering data if it isn't going to be used meaningfully. It was agreed that the revised plan should show our intended actions for steps four and five, as these are the most difficult steps.

The final step is to produce an evaluation report for JISC and the wider community. This should clearly highlight the evaluation questions and the answers to these.

Overall the workshop was well run and very useful. It focused heavily on how best to evaluate the experiences of stakeholders, e.g. how the project has benefited academics or students, and less on the technical evaluation, which is paramount for this project. We hope to involve the Open Source community in the technical evaluation where possible.

One major outcome of the workshop was that we should start evaluation early and take advantage of the many opportunities for carrying out formative evaluation. Therefore, we are going to revise our evaluation plan to include some of the issues that we discussed and identified at the evaluation workshop. When the evaluation plan and report are complete I will let you know.
