I don’t try to be controversial, but this whole metrics and SLA thing is problematic.

To be blunt: so many organizations measure and report on a great many things, most of which just don’t matter. In this article, I’m going to share what I’ve learned about measurement and reporting in my years in Service Management.

How can it be green when…

Early in my career, I was fortunate enough to be the Network Operations manager responsible for North American network operational metrics and reporting for a Fortune 10 company.

The reporting was a work of art – a matrix of site-to-site network performance parameters. Things like round trip times, latency, dropped packets. Stuff that network engineers go deep on.

Each combination of company sites was color coded green/yellow/red according to how those parameters stacked up against the committed service levels.
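To make that color coding concrete, here’s a minimal sketch of the kind of logic behind a single cell of that matrix. It’s in Python (my choice here, not what we used then), and the metric names and thresholds are purely illustrative, not the actual committed service levels.

```python
# A minimal sketch of RAG (red/yellow/green) classification for one
# site-to-site metric. Thresholds below are illustrative only.

SLA_THRESHOLDS = {
    # metric: (green_max, yellow_max) -- hypothetical values
    "round_trip_ms": (80, 120),
    "packet_loss_pct": (0.1, 0.5),
}

def rag_status(metric: str, value: float) -> str:
    """Return 'green', 'yellow', or 'red' for one metric on one site pair."""
    green_max, yellow_max = SLA_THRESHOLDS[metric]
    if value <= green_max:
        return "green"
    if value <= yellow_max:
        return "yellow"
    return "red"

# One cell of the site-to-site matrix:
print(rag_status("round_trip_ms", 95))      # yellow
print(rag_status("packet_loss_pct", 0.05))  # green
```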

One particular month, the report was a wall-to-wall flood of green, indicating a very strong month, with no downtime or availability incidents that added up to an SLA breach.

Mind you, I was fairly young in my role at the time. But it was so spectacularly beautiful that I was proud. Not just for myself, but for the dedicated team that had worked tirelessly to make sure the networks were performing as expected.

At the monthly service review, I may have teetered a bit toward ‘smug’ as I presented such a stellar report. I was happily running through all the measurements when I was interrupted by a non-IT business leader.

He was tactful, but direct: “Greg, how can your report be all green, when my team was unable to access <application name> for THREE DAYS!?”

To say I was taken aback would be an understatement. I had no answer. I’m sure I made some feeble commitment to get back to him.

But the truth of it is, that’s the day my education in Customer Experience began.

What matters?

As I went to work on what he’d said, one thing was supremely clear: what I reported didn’t matter to him. That was a hard pill to swallow. I’d put a lot of time and energy into a report that, from one vantage point (mine), was very good news. What’s not to like about an all-green SLA report?

But, if this didn’t matter, I asked, then what did?

The answer wasn’t hard. He told me flat out what mattered to him: The ability for him and his team to access their application to do their jobs.

Simple, right?

Only, my report had no ability to measure THAT.

I started thinking about how I could combine my report with other reports – server/application reporting, perhaps. Keep in mind, this was the 1990s, and we had very limited access to sophisticated tools. The amount of manual data analysis and manipulation required for a monthly report made it prohibitive. (And doing it only once to prove a point felt defensive.)

What I needed was a way to measure what matters.

This is where the idea came in of PCs strategically located at various points on the network, each running a script to emulate things users actually do: send an email, open a file, access an application.

If we were able to measure the performance of these higher order activities (supported by other services, including networks), that would come much closer to measuring what I was told mattered.

That’s the path we went down.
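Today we’d call this synthetic monitoring. Here’s a minimal sketch of such a probe, in Python rather than whatever scripting we used back then; the application URL and file path are placeholders, and a real tool does far more, but the principle – time the things users actually do and record whether they worked – is the same.

```python
# A rough sketch of a synthetic probe that times user-like actions.
# The URL and file path below are placeholders, not real endpoints.

import time
import urllib.request

CHECKS = {
    "access_application": lambda: urllib.request.urlopen(
        "https://app.example.com/login", timeout=10).read(),
    "open_shared_file": lambda: open("/mnt/shared/report.txt", "rb").read(),
}

def run_probe() -> dict:
    """Run each user-like action once; record success and elapsed time."""
    results = {}
    for name, action in CHECKS.items():
        start = time.monotonic()
        try:
            action()
            results[name] = {"ok": True,
                             "seconds": round(time.monotonic() - start, 2)}
        except Exception as exc:
            results[name] = {"ok": False, "error": str(exc)}
    return results

if __name__ == "__main__":
    print(run_probe())
```

Aggregated over a month, results like these come far closer to answering “could people actually do their jobs?” than a site-to-site latency matrix ever will.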

To whom?

I’ve been known to advise “measure what matters” ever since. This often triggers the question: “to whom?”. To which I always respond: “exactly”.

Not only must you understand what matters, you must also know for whom it matters.

These days, we know this as Customer Experience (and experience management), which includes the idea of personas. Who are the stakeholders, what matters to them, how do they define value, and how can we know if we’re delivering on it?

This has been the core of my experience message for years.

In real life

I often hear that SLAs don’t measure experience – that they don’t address what really matters to customers. To this, my response is: then they’re the wrong measures.

It certainly requires some different thinking. For some (perhaps many) organizations, SLAs are little more than legal documents memorializing a dysfunctional relationship between two parties that, in most cases, are actually the same party (the organization in which they both work).

To understand what matters, there must be frequent and effective communication with stakeholders. IT leaders must spend time building relationships with their non-IT colleagues, seeking to understand what matters to them.

If the culture doesn’t allow you to change the SLAs and reporting, then supplement the current reporting with “what matters” measures. These should reflect what you’ve learned through those relationships. Nothing wrong with reporting the standard metrics and adding “here’s something I’ve been working on from some conversations I’ve been having. I understand that these metrics aren’t all that meaningful to your day-to-day business, so I’m trying to find ways to report on what you really care about”.

You’ll undoubtedly get helpful feedback. But you’ll also demonstrate that you’re listening and acting on what you’re hearing.

Detailed operational metrics have their place, and my network team was rightfully proud of their efforts. Stable, reliable networks DO matter.

But, there’s a major difference between operational metrics and measuring the net result in terms of business impact.

Measure what matters, and know for whom it matters.