C4/D4 Workshop summary and outcomes
Reviewing and evaluating skills training
Paul Kearns, Director, PWL and Elaine Walsh, Senior Lecturer, Imperial College London
Evaluation has always been regarded as a problematic subject: outcomes cannot always be anticipated or accurately predicted. This session explored how evaluating and reviewing researcher training and development can be kept very simple. Part of the answer lies in knowing who the stakeholders are and helping them to manage their own expectations. Research is only ever as good as the questions that are being asked of it. ‘How will my work be evaluated?' should be the first question a researcher poses, not the last.
Within the context set by Paul Kearns, Elaine Walsh provided an HE institution perspective, describing the ‘Skills Perception Inventory' (SKIPI) and SKIPIED (Skills Perception Inventory of End-stage Doctoral students) tools developed by Imperial College.
Reviewing and evaluating skills training: Paul Kearns
Participants were introduced to key principles relating to the review and evaluation of research training programmes, and considered how they might change the way they review and evaluate their own programmes. Paul's presentation, ‘Evaluating the return on investment from learning: how to develop value-based training', discussed how different forms of training and development could be meaningfully selected and evaluated within a value-based training model. Features included:
- asking critical questions
- calculating added value
- adopting the baseline evaluation model
- producing a ‘business case' using the Return on Investment (ROI) formula (sketched below)
- creating a performance curve
- using a three-box system for budgets and priorities.
View Paul Kearns' presentation http://www.paulkearns.co.uk/
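Paul's exact ROI formula is not reproduced in this summary; as an illustration only, the sketch below applies the generic training-ROI calculation (net benefit over cost, expressed as a percentage) against a measured baseline. The function name and figures are hypothetical.

```python
def training_roi(baseline_value, post_training_value, training_cost):
    """Generic training ROI (illustrative, not Kearns' exact formula).

    baseline_value:      value of the activity measured before training
    post_training_value: value measured after training
    training_cost:       total cost of design, delivery and attendance
    """
    added_value = post_training_value - baseline_value
    return 100.0 * (added_value - training_cost) / training_cost

# Hypothetical figures: a 20,000 course that lifts the measured value
# of output from 500,000 to 530,000.
print(f"ROI: {training_roi(500_000, 530_000, 20_000):.0f}%")  # ROI: 50%
```

The baseline is what makes the calculation possible: without a pre-training measure there is no added value to feed into the formula, which is why the baseline evaluation model precedes the business case.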
Measuring impact - evaluating student skills and development: Elaine Walsh
SKIPI was designed to evaluate ‘The Research Skills Development Course', aimed at researchers early in their doctorate. The course has an important role in promoting workshops and other development opportunities. SKIPI:
- provides a direct measure of the impact of the course, with quantitative feedback on specific areas of benefit
- gauges any attitudinal shifts in the perceived value and benefits of skills training in general
- enables preliminary investigation of variations due to gender, discipline or residential status
- raises student awareness of the kinds of skills that are valued in the research environment.
SKIPI has proved successful as a quantitative measure of course impact. As a validated tool, it has also helped to win academic hearts and minds for the course and the programme. It also prompted further research into background differences (e.g. gender and residential status) and a follow-up study, SKIPIED.
SKIPIED comprises the original SKIPI questionnaire plus additional items exploring the ‘distance travelled' in skills development since the start of the doctorate, and the factors affecting that development.
Elaine described SKIPI, SKIPIED and their findings in detail. SKIPI found a clear and wide-ranging impact from the residential development course. On the ‘distance travelled in transferable skills' scales, creativity appeared as the weakest area, while ‘working independently' and ‘defending own research' produced the highest scores. Analysis by researcher origin (UK, EU, international) pointed to interesting differences in training needs and impact.
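The SKIPI items and scoring scheme are not reproduced here; as an illustration of the kind of quantitative pre/post measure such an inventory supports, the sketch below computes the shift in mean self-rated score per skill. The skill names and responses are invented.

```python
from statistics import mean

# Invented Likert-scale (1-5) self-ratings before and after a course;
# the real SKIPI items and data are not reproduced here.
pre  = {"networking": [2, 3, 2, 3],
        "creativity": [3, 3, 2, 3],
        "working independently": [3, 2, 3, 3]}
post = {"networking": [4, 4, 3, 4],
        "creativity": [3, 3, 3, 3],
        "working independently": [4, 4, 4, 5]}

# 'Distance travelled' per skill: shift in the mean self-rated score.
for skill in pre:
    shift = mean(post[skill]) - mean(pre[skill])
    print(f"{skill:>22}: {shift:+.2f}")
```

A large shift on one scale and a flat result on another (as with creativity in the invented data above) is the kind of pattern such an instrument can surface.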
View Elaine Walsh's presentation.
Key messages presented and discussed were:
- the key to evaluation is to establish a baseline and set up a loop/cycle for checking progress against the original objectives (sketched below)
- trying to attribute a specific impact to a specific piece of training is a red herring
- ‘Roberts' stipulations do not lend themselves to added value training.
Roberts takes a conventional perspective on the importance of researcher training and its funding criteria. This appears to be based on an ‘input' model, rather than one that seeks clear evidence of output from the outset (i.e. researcher training that starts from a statement of the output expected from the research).
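One way to read the ‘baseline and loop' message above is as a simple cycle: record a baseline, state the expected output before any training is chosen, train, then re-measure against that objective. The sketch below is a minimal illustration under those assumptions; it is not drawn from Roberts or from either presentation.

```python
def evaluation_cycle(measure, objective, train, max_cycles=3):
    """Minimal baseline-and-loop evaluation (illustrative).

    measure:   callable returning the current performance level
    objective: target level stated *before* any training runs
    train:     callable delivering one round of training
    """
    baseline = measure()
    for cycle in range(1, max_cycles + 1):
        train()
        level = measure()
        print(f"cycle {cycle}: baseline={baseline}, now={level}, target={objective}")
        if level >= objective:
            return True   # objective met; added value = level - baseline
    return False          # objective not met; review the training itself

# Hypothetical demo: a self-rated skill starts at 4, the stated
# objective is 6, and each round of training adds one point.
score = [4]
def train_once():
    score[0] += 1

evaluation_cycle(measure=lambda: score[0], objective=6, train=train_once)
```

The contrast with an ‘input' model is that here the objective exists before any training is delivered; an input model runs the training first and looks for benefits afterwards.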