Set the stage for a culture of learning, change and improvement
Evaluation helps our clients to better understand how their programmes or policies are operating on the ground. onEvidence works collaboratively with organisations and institutions to deliver actionable insights that inform the development of a policy, programme or service, so that it more effectively changes the knowledge or behaviour of its service users, stakeholders, clients or the public.
Taking a theory-based, bottom-up approach, we test the relevance, effectiveness and efficiency of your programmes and policies. We assess the extent to which an intervention, or cluster of interventions, has produced or influenced observed results, and objectively examine what role the intervention(s) played in producing them. Understanding contribution, rather than proving attribution, is our key objective.
To achieve this, we start by engaging stakeholders, and the people who are impacted by or will benefit from the intervention, at the evaluation planning stage. This collaborative method leads to quick identification and diagnosis of the problem or need that the programme aims to address. Regular consultation with funders, those impacted, those trying to address the problem or need and, where necessary, subject-matter experts, plays a critical part in our research strategy throughout the evaluation process.
Benefits of evaluation
Better understand how your intervention is working and why:
onEvidence helps programme managers to identify areas of success and areas for improvement.
Make better informed strategic decisions:
Our evaluations help shape decision-making regarding current and future projects.
Save resources and time:
onEvidence identifies where resources are being used inefficiently, and makes concise recommendations on how your organisation can deliver more efficient and effective interventions.
How we help
- Needs assessment
- Logic model assessment
- Implementation assessment
- Impact assessment
- Cost assessment
Needs assessment
We research and assess the extent to which the intervention continues to address a demonstrable need, and is responsive to the needs and/or priorities of the organisation. Is it still needed? Does it still make sense? Is the programme targeting the right people in the right ways? Does the design still work?
Logic model assessment
Also known as ‘programme theory’ or ‘theory of change’. Our evaluator will work closely with your programme staff to identify assumptions, risks and external factors, and to provide a neutral, evidence-based assessment of the value for money (i.e., relevance and effectiveness) of the intervention. Where programme staff have not developed their own logic model, we will retroactively develop one, in consultation with stakeholders and subject-matter experts, to support our evaluation effort. In terms of targets, reach and design, we will objectively assess expected, immediate, intermediate and ultimate outcomes, to provide information about which aspects of the intervention worked and which did not.
Implementation assessment
Our implementation research aims to provide insights into how programmes work, and valuable information about the reasons for their success or failure. Developing an understanding of whether a programme was implemented as originally planned and, where applicable, to what extent (programme integrity), allows our researchers to interpret the relationship between the intervention and the observed outcomes more accurately. This means more opportunities for making programme improvements, and increased validity of outcome findings.
Impact assessment
Theory-based evaluation of what works, for whom, how, to what extent, and in what circumstances. This involves developing a quantitative and qualitative picture of the programme in action through either a realistic evaluation (outcome = mechanism + context) or a theory of change approach (contribution claim = verified theory of change + other key influencing factors accounted for).
Cost assessment
Did the intervention achieve the expected outcomes at an acceptable cost? We analyse cost-benefit or cost-efficiency ratios to assess the efficiency of a programme.
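For illustration, the core of a cost-benefit ratio is a simple calculation: total monetised benefits divided by total costs, where a result above 1.0 indicates the programme returned more than it cost. The function name and figures below are hypothetical, not drawn from any onEvidence evaluation:

```python
# Illustrative sketch of a benefit-cost ratio, the headline figure
# in a simple cost-benefit analysis. All numbers are hypothetical.

def benefit_cost_ratio(total_benefits: float, total_costs: float) -> float:
    """Return monetised benefits per unit of cost (> 1.0 means net benefit)."""
    if total_costs <= 0:
        raise ValueError("total_costs must be positive")
    return total_benefits / total_costs

# A programme costing 200,000 that yields 260,000 in monetised benefits:
print(round(benefit_cost_ratio(260_000, 200_000), 2))  # 1.3
```

In practice an evaluation would also discount future costs and benefits to present values and test the ratio's sensitivity to the assumptions behind the monetised figures.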
Evaluation methods and approach
Our systematic methods for collecting and analysing information to answer questions about a programme can involve both quantitative and qualitative research. The evaluation process, in particular the instruments used to collect data (e.g. questionnaires, interviews, focus groups, workshops, case studies and creative methods), is carefully planned to be sensitive to differences in the target populations.
onEvidence adopts a ‘bottom-up’, democratic, participatory approach to programme evaluation that reflects the diversity and exigencies of programme contexts and enhances organisational support, as well as the utilisation of evaluation findings and process, without compromising technical quality or credibility (Cousins, 1996).
Get in touch to find out more about our independent evaluation services.