Developing the framework
An impact framework that works as both an evaluation methodology and a national framework against which all evaluations can be mapped
The Rugby Team Impact Framework was developed by the Impact and Evaluation Group (formerly the Rugby Team), a sector working group set up following the Policy Forum of January 2005 (held in Rugby). The group includes representatives from all the key stakeholders in researcher development: researchers, academics, higher education institutions, funders of research and government. The group consulted widely during the development phase, presenting a draft Rugby Team Impact Framework (RTIF) to the sector and other stakeholders at the 2008 Roberts Policy Forum. Following revision after further consultation, the final Rugby Team Impact Framework was presented at the national Vitae Conference in September 2008.
The Rugby Team Impact Framework is:
‘an evaluation model for training and development activity specifically tailored to the context of training and development of researchers in higher education (HE)'
The purpose of the Rugby Team Impact Framework is to:
- foster, support and potentially guide existing and new ways of effective evaluation
- encourage further engagement in the evaluation agenda by HEIs
- aid the HE sector in building a more comprehensive evidence base
At the heart of the impact framework is a ‘logic diagram' (see below) which draws upon and develops the ideas of Kirkpatrick in particular, together with critiques of Kirkpatrick such as Kearns (see footnotes). You are encouraged to read the Rugby Team Impact Framework publication, as it develops these ideas to a much greater depth than is possible on this website, with specific contextualisation for the researcher development sector.
The RTIF can be used both as an evaluation methodology in itself and as a national framework against which evaluations using any methodology can be mapped. Regardless of the evaluation methodology chosen, outcomes can be reported with reference to the five identified ‘impact levels' of the framework.
[Logic diagram: the five impact levels of the Rugby Team Impact Framework]
Explanation of levels
The logic diagram progression addresses the question of what might be expected to happen following an input of resource (e.g. Roberts funding, and the funding that higher education institutions themselves invest in the training and development of researchers). The logic diagram provides five levels at which impact can be measured, as follows:
Impact level 0: Foundations
This level relates to investment that leads to development of the infrastructure for training and development activity, such as the employment of additional staff, a larger programme of training workshops and other activities being offered, or training facilities being refurbished. Metrics such as the number of training opportunities offered, the number of researchers participating, or a more specific example such as the number of researcher interactions with industry resulting from a particular training activity, are examples of level 0 impact measures; that is, this level primarily measures inputs and throughputs. From a different perspective, that of a researcher as a participant in training and development activity, level 0 would be a ‘baseline' assessment of skills and training needs.
Impact level 1: Reaction
This level indicates the reaction of participants to training and development activities. For example, at the end of a workshop participants may be asked for their views of the experience, and of the training programme as a whole.
Impact level 2: Learning
This level reflects ‘the extent to which participants change attitudes, improve knowledge, and/or increase skill as a result of attending the programme' (see footnote 2a). For example, does a researcher have a better understanding of how to work effectively within a team as a result of participating in a development opportunity?
Impact level 3: Behaviour
This level reflects ‘the extent to which change in behaviour has occurred because the participant attended the training programme' (see footnote 2b). Is the researcher now managing their project and time better as a result of the development activity? How has the researcher applied what they have learnt?
Impact level 4: Outcomes
This level measures the final results of the training and development activity. Have changes in behaviour resulted in different outcomes? Has the quality of research improved? Is there a more highly skilled researcher workforce?
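The five levels above form a simple ordered scale against which evaluation measures can be mapped. A minimal sketch in Python illustrates one way this mapping might be represented; the level names follow the framework, but the example measures and the function are hypothetical, for illustration only:

```python
from enum import IntEnum

class ImpactLevel(IntEnum):
    """The five impact levels of the RTIF logic diagram."""
    FOUNDATIONS = 0  # inputs and throughputs, e.g. number of workshops offered
    REACTION = 1     # participants' views of the activity
    LEARNING = 2     # change in attitudes, knowledge or skills
    BEHAVIOUR = 3    # applied change in working practice
    OUTCOMES = 4     # final results, e.g. improved research quality

# Hypothetical evaluation measures mapped to RTIF levels.
measures = {
    "number of workshops offered": ImpactLevel.FOUNDATIONS,
    "end-of-workshop feedback scores": ImpactLevel.REACTION,
    "pre/post assessment of teamworking knowledge": ImpactLevel.LEARNING,
    "supervisor reports of improved time management": ImpactLevel.BEHAVIOUR,
    "improvement in research output quality": ImpactLevel.OUTCOMES,
}

def report_by_level(mapped):
    """Group measure names by RTIF impact level for reporting."""
    report = {level: [] for level in ImpactLevel}
    for name, level in mapped.items():
        report[level].append(name)
    return report
```

Grouping measures this way mirrors how outcomes from any evaluation methodology could be reported against the framework's five levels.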
1. The basis of the logic progression is the work of Kirkpatrick. Critiques of Kirkpatrick, for example Kearns, are also reflected.
2a. Kirkpatrick, D.L. and Kirkpatrick, J.D. (2006) Evaluating Training Programs, 3rd edn, Berrett-Koehler Publishers Inc. ISBN-10: 1-57675-384-4; ISBN-13: 978-1-57675-384-4
2b. Kearns, P. and Miller, T. (1997) Measuring the Impact of Training and Development on the Bottom Line, Pitman Publishing. ISBN 0-273-63187-X