Accountability Without Effectiveness

Thoughts on transforming what governments measure.

By Nick Scott

Metrics, metrics, metrics. Governments track a lot of information - mostly quantitative - about what they do and hope to achieve. In theory, this helps government actors track progress against work plans, action plans, project plans, strategies, and priority projects, and communicate that progress to their managers, executives, cabinet, and the public.

Metrics are instrumental for this tracking. There are, however, a number of traps we fall into when using metrics, and they unintentionally generate friction in our efforts to create value, innovate, or change the status quo. The point of collecting data and measuring progress is to help us learn. Working in complexity and trying to create new forms of value for the public especially requires us to try different things and quickly find out what works and what doesn't. This feedback helps us figure out whether we're doing the right things, get rid of what isn't working, and measure the value being created. But if we only use data and numbers for accounting purposes, we get caught in traps.

[Image: The four purposes of data collection and analysis. Learning is fundamental to driving continuous improvement and performance excellence.]

Ten Metric Traps

  1. Using metrics mainly for accountability, not for learning, understanding, or advocating. Learning should be the prime directive, from which the rest follows.

  2. Deciding on metrics before we even know what the solution is, which constrains discovery.

  3. Measuring something just because we can, not because it's important. Not knowing how to measure something doesn't mean it's impossible to measure.

  4. Prioritizing quantitative over qualitative methods because we think quantitative metrics are objective and bias-free. While quantitative data is great for telling us what is happening, qualitative data is needed to help us understand why something is happening and its human impact.

  5. Micro-managing through metrics — "your performance will be measured by the number of widgets you produce; how you produce them is totally up to you" isn't really giving autonomy or creative freedom to teams.

  6. Picking metrics just because they're easy to measure, not because they will provide the most value.

  7. Choosing metrics without talking to others or thinking them through. In the rush to meet deadlines, we haphazardly decide on key performance indicators without much thought.

  8. Measuring outputs and not outcomes — we end up counting beans and determining success based on outputs without tracking quality, effectiveness, or how the work affects people.

  9. Focusing on short-term measures over long-term impact. Building follow-up evaluation into projects and programs is critically important, especially when dealing with complex problems.

  10. Measuring process efficiency and not experience or effectiveness. If we only measure the number of transactions and transaction time, then improvement efforts will inevitably be based on optimizing processes. Measuring experience and effectiveness, while more challenging, will help us determine how well processes work for people.

We collect, analyze, and share data for four main reasons: to keep track, to tell others, to show why something is important, and to learn and improve. Yet many organizations don't use data well. For nearly a decade I have been advocating for, advising, and training organizations to be more data-driven in their work. When I teach this work, accountability is only a quarter of the value that data collection provides. If I were to emphasize only one of the four reasons, it would be learning: learning is fundamental to driving continuous improvement and performance excellence. Too often, though, we focus on keeping track and not enough on learning.

Data paves the road to the bottom. It is the lazy way to figure out what to do next. It's obsessed with the short-term. Data gets us the Kardashians. — Seth Godin

We might be doing everything we're supposed to do and hitting every goal we set, yet the effectiveness of our work isn't changing: confidence and trust in government are low, employee satisfaction is low, and the public's needs are not being met. Could this be because we're too focused on counting transactions and other output metrics? These metrics should only be part of the answer, there to help us learn and improve.


We’re too busy measuring the number of passes the team in white makes to notice the moonwalking bear — you know, the thing that will surprise and delight our customers.

What happens when we are hitting all the wrong targets but only measuring the number of targets we hit? The result: government is highly accountable, but not very effective. Every solution for improvement then becomes about making things faster and cheaper rather than more effective or enjoyable.

How might we start measuring what makes up the relationship the public has with government?
