Tuesday, 13 May 2008

CATME Peer assessment system

A peer assessment tool named ‘Comprehensive Assessment of Team Member Effectiveness’ (CATME) was developed in the US to facilitate self and peer assessment of group work. Like WebPA, CATME was developed through extensive research into peer assessment and the methods for facilitating such tasks; this research was funded by the National Science Foundation. CATME is an online system, and a unique aspect of the software is that tutors are able to see the overall rating for each team member both with and without the self-assessment scores included. It therefore provides results for ‘self and peer’ and ‘peer only’ assessment at the same time. CATME’s main limitation concerns the assessment criteria, which are fixed for each assessment. Research carried out by the CATME team identified five ‘instrument measures’ against which students should be assessed:

1. Contributing to the team’s work
2. Interacting with team mates
3. Keeping the team on track
4. Expecting quality
5. Having relevant knowledge, skills and abilities.

Academics are therefore unable to change the criteria within the CATME system. It is unclear how widely used CATME is or how long it has been in development. CATME is similar to WebPA in that students carry out their assessments confidentially and in that extensive research has informed the development of the system.
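To make the ‘self and peer’ versus ‘peer only’ distinction concrete, the short sketch below shows one way such ratings could be calculated. It is only a minimal illustration in Python, using made-up scores and a simple average; it is an assumption for illustration, not CATME’s actual calculation.

# Minimal sketch: a team member's overall rating with and without their
# self-assessment score. The ratings data and the plain averaging used
# here are illustrative assumptions, not CATME's published method.

def overall_ratings(ratings, member):
    """ratings: dict mapping (rater, ratee) -> score (e.g. 1-5)."""
    peer_scores = [s for (rater, ratee), s in ratings.items()
                   if ratee == member and rater != member]
    self_scores = [s for (rater, ratee), s in ratings.items()
                   if ratee == member and rater == member]
    peer_only = sum(peer_scores) / len(peer_scores)
    self_and_peer = sum(peer_scores + self_scores) / len(peer_scores + self_scores)
    return self_and_peer, peer_only

# Example: a four-person team rating 'alice' on one criterion.
ratings = {
    ('alice', 'alice'): 5,   # self-assessment
    ('bob', 'alice'): 3,
    ('carol', 'alice'): 4,
    ('dave', 'alice'): 3,
}
with_self, peer_only = overall_ratings(ratings, 'alice')
print(f"self and peer: {with_self:.2f}, peer only: {peer_only:.2f}")

In this made-up example the tutor would see a rating of 3.75 for ‘self and peer’ and 3.33 for ‘peer only’, making it easy to spot where a generous self-assessment inflates a member’s overall score.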

More information can be found at https://engineering.purdue.edu/CATME/

Friday, 26 October 2007

"Inherently frail - the verdict on marking"

During an idle moment this lunchtime I had a five-minute look at THES (26.10.07). The front-page headline calls for debate, reporting that marking suffers from a lack of consistency and is 'inherently frail'. Although the article focuses mainly on assessment in general, with little reference to peer or self assessment, it is worth noting that this lack of consistency is a problem for academics. The article reports that students are increasingly litigious about marking, and that current practice may find it difficult to demonstrate that student marks are entirely accurate. It raises interesting points about current styles and practices and how accuracy might be improved.

If in the future we are seeking new ways in which students are assessed, specifically in relation to assessing group work, it is reasonable to think that the WebPA software may become more common. As part of this project we are attempting to identify why someone would use WebPA, and reports and articles such as this may prompt academics to seek out and explore new working practices. We have already identified that using a self and/or peer assessment system involves students in the process, so it might help them to feel that they receive more accurate marks and provide them with more genuine feedback. This may reassure students that the mark they receive is actually what they deserve.

The more people seek new methods, the more lessons can be learnt. We intend to involve students when evaluating the WebPA system because we want to understand the student experience. We can then ask students whether they prefer using WebPA to other methods. We could also ask students about their concerns regarding accuracy, and whether WebPA helps to ease any apprehensions about the accuracy of the marks/grades they receive.

If you get a chance to read THES (26.10.07), the headline article is well worth a read. Please do get in touch, either through our JISCmail list (http://www.jiscmail.ac.uk/archives/webpa.html) or directly to me by replying to this post or via email (s.p.loddington@lboro.ac.uk).

Monday, 15 October 2007

Evaluation Workshop 10th October 2007

Last week members of the WebPA team attended an Evaluation Workshop which was run by a consultancy company called Glenaffric. The audience was a mixture of JISC representatives, consultants and other project teams like ours.

The day revolved around working through the Glenaffric six-step model for effective evaluation, which can be found at: http://www.jisc.ac.uk/media/documents/programmes/elearningcapital/sixsteps.pdf

There are two types of evaluation that we could carry out within our project: formative and summative. The former is carried out during the life of the project and the latter at the end. Formative evaluation is about 'improving' what you are doing; summative evaluation is about 'proving' how the project has improved something.


The first step in the six-step model is to identify the stakeholders who could benefit from, and be involved with, both formative and summative evaluation, including academics, students, JISC, the Open Source community and L&T support staff, to name a handful.

Once we had identified stakeholders, we had to identify methods for gathering data and evidence to answer our overall evaluation questions (the second step). This made us think about what we wanted to evaluate. What did we want to prove or show to others that we have done, and how would we do this? It was suggested that tried-and-tested simple methods should be used; for example, involving existing contacts (or brokers) in the evaluation would be a good idea.

We felt that the community could evaluate different aspects of the WebPA software and the documents we produce, giving us feedback and telling other members of the community how useful these are.

The third step was to design our evaluation; however, because this would take longer than the day's session, we instead discussed ways in which we could carry out the evaluation and what our evaluation questions were. Some example summative evaluation questions were:

  • Do students have a more positive experience of group work activities linked to assessment? Is it fair, compared to other methods?
  • What evidence is there to suggest that using WebPA can save academics’ time?
  • What evidence is there to show that WebPA has been adopted and become sustainable in other institutions?
Some formative evaluation questions were:
  • Has the feedback from end users and the community informed the development, implementation and associated practices?
  • Have the project’s outputs been fed back into the community? Are the project outputs reaching the right people?

Step four was about gathering evidence. It was agreed that this may be the hardest part of the evaluation. For example, it may be difficult to evaluate how WebPA has been embedded within institutions, or what impact it has had. It could be difficult to prove this, especially as embedding WebPA within an institution may take a period of time, quite possibly a lot longer than the lifetime of the project.

The penultimate step was to analyse the results. We again thought that the community could be involved with this and do some of the analysis, either with their own data or with feedback gathered by the project. This stage is where we use the data in a way that is meaningful to the project's evaluation questions; there is no point in gathering data if it is not going to be used meaningfully. It was agreed that the revised plan should show our intended actions for both steps four and five, as these are the most difficult steps.

The final step is to produce an evaluation report for JISC and the wider community. This should clearly highlight the evaluation questions and the answers to these.

Overall the workshop was well run and very useful. It focused heavily on how best to evaluate the experiences of stakeholders, e.g. how the project benefits academics or students, and less on the technical evaluation, which is paramount for this project. We hope to involve the Open Source community in the technical evaluation, where possible.

One major outcome of the workshop was that we should start evaluation early and take advantage of the many opportunities for carrying out formative evaluation. Therefore, we are going to revise our evaluation plan to include some of the issues that we discussed and identified at the evaluation workshop. When the evaluation plan and report are complete I will let you know.

Friday, 28 September 2007

Welcome!

Welcome to the blog for Web Peer Assessment - researching for development. As part of the JISC-funded Web Peer Assessment Project, over the next 18 months we shall be carrying out research into online Peer Assessment, working closely with academics, students and the wider community. Our aim is to research, develop and enhance an existing online Peer Assessment system which has been used at Loughborough University since 1998. Our research is paramount to enhancing the use and understanding of online Peer Assessment throughout UK institutions and beyond. Whilst we have a number of institutions using our software, we are actively seeking opportunities for collaboration with other universities to trial our existing online Peer Assessment software.

This blog will provide, and give links to, a variety of information: thoughts, experiences and findings from our fundamental research.

The project website can be found at: http://webpaproject.lboro.ac.uk/