In the public sector, the goal is not just to run programmes for their own sake, but to achieve demonstrable social impact. This requires moving beyond simple activity tracking to developing a sophisticated evaluation framework: a strategic roadmap that guides the systematic assessment of an intervention's relevance, efficiency and effectiveness.
Aspiring managers must master this process if they want to transition into senior leadership roles. The fully online Postgraduate Diploma in Public Management in the field of Monitoring and Evaluation from the Wits School of Governance is structured to provide the deep, practical knowledge required to design, implement and utilise these frameworks to drive evidence-based policy.
Here is a high-level, step-by-step overview of how to build an effective public sector evaluation framework, and how the Wits specialisation supports you at each turn.
Before you can evaluate something, you must clearly define what it is intended to achieve. This is the stage of conceptualisation and is anchored in the Theory of Change or Programme Logic Model.
The goal here is to establish the causal link between your:
- inputs (the resources invested in the programme)
- activities (what the programme actually does)
- outputs (the direct products of those activities)
- outcomes (the short- to medium-term changes among beneficiaries)
- impact (the long-term social change the intervention exists to produce)
The framework begins with rigorous strategic planning. The Wits PDPM starts with modules such as Public Policy and Governance, Leadership and Public Value, which provide the essential conceptual grounding to analyse the complex social contexts and governance challenges that influence how a policy is conceived and how change is expected to occur. You will learn to articulate the policy problem before attempting to measure its solution.
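For illustration only, the logic-model chain described above can be sketched as an ordered structure that an M&E team might use to check its programme logic is complete. The stage entries below (a hypothetical school-feeding programme) are assumptions, not part of the Wits material:

```python
# Minimal sketch of a programme logic model as an ordered causal chain.
# Stage names follow the standard inputs-to-impact sequence; the entries
# for this hypothetical school-feeding programme are illustrative only.
logic_model = {
    "inputs": ["budget allocation", "catering staff"],
    "activities": ["prepare and serve daily meals"],
    "outputs": ["meals served per school day"],
    "outcomes": ["improved learner attendance"],
    "impact": ["better long-term educational attainment"],
}

# The causal logic only holds if no stage in the chain is left empty.
for stage, entries in logic_model.items():
    assert entries, f"Logic model is broken at the '{stage}' stage"

print(" -> ".join(logic_model))  # inputs -> activities -> outputs -> outcomes -> impact
```

Writing the chain down explicitly, even this simply, forces the question the Theory of Change stage exists to answer: how is each stage expected to cause the next?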
With the programme logic defined, the next step is to design the measurement plan. This requires identifying the key evaluation questions (KEQs) and developing performance indicators.
An evaluation can answer questions about:
- relevance (is the intervention addressing the right problem for the right people?)
- efficiency (are resources being converted into results economically?)
- effectiveness (is the intervention achieving its intended outcomes?)
For each expected outcome, you must develop SMART (specific, measurable, achievable, relevant and time-bound) indicators.
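As a rough sketch of what recording a SMART indicator can look like in practice, the structure below captures each criterion as a field so that progress against the target is computable. The fields and the example youth-employment indicator are hypothetical, not drawn from the Wits curriculum:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One performance indicator, with fields mirroring the SMART criteria."""
    name: str        # specific: what exactly is measured
    unit: str        # measurable: the unit of measurement
    baseline: float  # the starting value against which progress is judged
    target: float    # achievable: the agreed target value
    outcome: str     # relevant: the programme outcome the indicator tracks
    deadline: str    # time-bound: when the target must be met

# Hypothetical indicator for a youth employment programme
placements = Indicator(
    name="Graduates placed in formal employment within six months",
    unit="percent of cohort",
    baseline=12.0,
    target=35.0,
    outcome="Improved employability of programme graduates",
    deadline="2026-12-31",
)

def progress(current: float, ind: Indicator) -> float:
    """Share of the baseline-to-target distance covered so far."""
    return (current - ind.baseline) / (ind.target - ind.baseline)

print(f"{progress(23.0, placements):.0%}")  # prints 48%
```

The point of the exercise is that every field must be filled in: an indicator that cannot state its baseline, target or deadline is not yet SMART.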
The dedicated Evaluation module within the Wits specialisation ensures you master various forms and approaches to evaluation, enabling you to select the appropriate type of evaluation (such as process vs impact) that is proportionate to the intervention’s size and complexity. This training moves you beyond simple data collection to strategic evaluation design.
This is the technical core of the framework: collecting, managing and analysing the data needed to answer the KEQs.
A robust framework specifies the data sources (such as administrative records, surveys and interviews) and the data collection methods (such as quantitative and qualitative approaches). Crucially, it must outline protocols for ensuring data quality and integrity.
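One way to make such a measurement plan concrete, sketched here purely for illustration, is a simple register that maps each key evaluation question to its sources, method and quality protocol and can be checked mechanically. The KEQs, sources and quality rules below are hypothetical:

```python
# Illustrative measurement plan: each key evaluation question (KEQ) is mapped
# to its data sources, collection method and a basic quality-assurance rule.
# All entries are hypothetical examples.
plan = [
    {
        "keq": "Did the programme reach its intended beneficiaries?",
        "sources": ["administrative records", "beneficiary survey"],
        "method": "quantitative",
        "quality_check": "cross-check survey sample against the admin register",
    },
    {
        "keq": "Why did uptake differ between districts?",
        "sources": ["key-informant interviews"],
        "method": "qualitative",
        "quality_check": "double-code transcripts and reconcile differences",
    },
]

# A framework is only complete if every KEQ has at least one data source
# and an explicit quality protocol.
for item in plan:
    assert item["sources"], f"No data source for: {item['keq']}"
    assert item["quality_check"], f"No quality protocol for: {item['keq']}"

print(f"{len(plan)} KEQs fully specified")  # prints: 2 KEQs fully specified
```

The design choice matters more than the tooling: forcing every KEQ to declare its sources and quality checks up front is what separates a measurement plan from a wish list.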
The Wits curriculum provides the essential technical toolkit: the Analytical Methods module, in particular, builds the statistical skills needed to analyse and interpret the quantitative data generated by monitoring systems.
The evaluation framework is not complete until it includes a clear plan for utilisation, ensuring that the findings of the evaluation are used to inform policy and action. An evaluation that sits unused on a shelf is a failure, regardless of its rigour.
This step involves:
- reporting findings in clear, accessible formats for decision-makers
- disseminating results to stakeholders, including beneficiaries, managers, funders and policymakers
- feeding the evidence back into policy formulation, implementation and refinement
The final core M&E module, Managing Monitoring and Evaluation Practices, focuses precisely on this feedback loop. It addresses crucial questions about the use of research and evidence in public policy formulation, implementation and refinement. Addressing these questions will strengthen your ability to translate technical evaluation results into real-world public value.
By mastering these four steps through the Wits PDPM (M&E), you acquire the proven methodology to design and lead evaluations that move the public sector forward, ensuring efficiency and genuine social impact.
A logical framework (logframe) is a planning and management tool that outlines the project logic (inputs to impact). An evaluation framework is a strategic roadmap that uses the logframe as a starting point, defining how and when the programme will be assessed (such as setting the KEQs, methodology, data sources and utilisation plan).
An indicator is SMART if it is specific, measurable, achievable, relevant and time-bound. Using SMART indicators is critical because they prevent vague measurement, ensuring that data collection is focused and that the results reported are objective and comparable against defined targets.
The three main types are:
- process evaluations, which assess how the intervention is being implemented and delivered
- outcome evaluations, which measure the short- to medium-term changes the intervention produces
- impact evaluations, which determine the long-term effects that can be attributed to the intervention
Stakeholders (beneficiaries, managers, funders and policymakers) must be engaged first to ensure the evaluation addresses the issues of greatest concern to them. This collaboration establishes buy-in, clarifies the programme's purpose and increases the likelihood that the evaluation findings will be used to inform decisions.
The Analytical Methods module is essential for step 3 described above. It equips students with the skills to correctly analyse and interpret quantitative data, often using statistical approaches. This ensures that the evidence gathered from the monitoring systems is correctly processed and translated into credible, fact-based conclusions for the final report.