Ideally, evaluation should be integral to the process of planning and designing a researcher development activity.

Choosing the start and duration of an evaluation study

Decisions on when to evaluate, and for how long, should be agreed at the design stage. It is also important to identify clearly whether the focus of the evaluation will be formative (carried out before or during the activity, to inform its design and delivery), summative (after the activity has taken place, to judge its outcomes) or longitudinal (pre and post development, to track change over time). The purpose and aims of the evaluation study will provide a good basis for choosing its timing and duration.

Should I collect quantitative or qualitative data?

Choosing the type of data you want to collect is an important step in developing a good evaluation plan. You will need to decide in view of your evaluation goals, the resources available and the time you have to carry out the evaluation. Common practice suggests that a mixed qualitative and quantitative approach will help you reach a balanced view and cater to a range of preferences.
Combining approaches will help you to:

  • add value to your study by using qualitative work to identify issues or to obtain information on variables not obtained by quantitative surveys
  • generate hypotheses from qualitative work to be tested using a quantitative approach
  • provide in-depth explanation by using qualitative data to understand unanticipated results from quantitative data
  • verify or reject results by comparing quantitative and qualitative data

Ensuring that findings are valid

When designing your evaluation study, it is important to consider the advantages and limitations of the evaluation methods and approaches you will use. It is also good practice to speak with a range of stakeholders in your institution to gain support and help in choosing the most appropriate solution.

Selection bias arises when participants in an evaluation study are not a random sample of the population. Participants are often selected because they have taken part in a development programme, or through self-selection. Those choosing to provide feedback may have different characteristics from those who do not choose to engage with the evaluation study (for instance, participants who have enjoyed the programme may be more likely to share their feedback than those who are neutral or disengaged). In an ideal scenario, evaluators would also gain feedback from a comparable group who have not yet benefited from the development programme.
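The self-selection effect described above can be illustrated with a small simulation. This is a hypothetical sketch, not drawn from the text: the ratings, group size and response probabilities are all assumed numbers, chosen so that more satisfied participants are more likely to return a feedback form.

```python
import random

random.seed(0)

# Assumed scenario: 1,000 participants rate a programme from 1 (poor) to 5
# (excellent), but satisfied participants are more likely to respond.
ratings = [random.randint(1, 5) for _ in range(1000)]

def responds(rating):
    # Illustrative response probabilities: a rating of 1 is returned 10% of
    # the time, rising in steps of 20% to 90% for a rating of 5.
    return random.random() < 0.1 + 0.2 * (rating - 1)

feedback = [r for r in ratings if responds(r)]

true_mean = sum(ratings) / len(ratings)
observed_mean = sum(feedback) / len(feedback)
print(f"True mean rating:     {true_mean:.2f}")
print(f"Observed mean rating: {observed_mean:.2f}")
```

Because positive ratings are over-represented among respondents, the observed mean sits noticeably above the true mean, which is exactly the distortion selection bias introduces into evaluation findings.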

Randomised controlled trials (RCTs) are a type of impact evaluation that aims to limit bias and generate an internally valid impact estimate. An RCT compares outcomes between randomly assigned groups, such as those who have participated in a training intervention and those who have not. This comparison gives an indication of the impact of the intervention. However, randomised experiments may not be the most feasible option for a range of reasons (resources, access to data, time).
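The comparison an RCT makes can be sketched in a few lines. This is an illustrative simulation under assumed numbers (200 participants, a true training effect of 5 points on a skill score): participants are randomly assigned, and the impact estimate is simply the difference in mean outcomes between the two groups.

```python
import random

random.seed(42)

# Assumed setup: 200 participants, half randomly assigned to training.
participants = list(range(200))
random.shuffle(participants)
treatment = set(participants[:100])

def outcome(person_id):
    # Simulated skill score: noisy base score plus an assumed +5 effect
    # for those who received the training.
    base = random.gauss(50, 10)
    return base + (5 if person_id in treatment else 0)

scores = {p: outcome(p) for p in participants}
treated = [scores[p] for p in participants if p in treatment]
control = [scores[p] for p in participants if p not in treatment]

# Difference in group means: the RCT impact estimate.
impact_estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"Estimated impact: {impact_estimate:.2f}")  # varies around the true effect of 5
```

Because assignment is random, the two groups are comparable in expectation, so the difference in means estimates the training effect without the selection bias discussed above.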