Design system work throughput

Design system work involves lots of people. How do you measure your team’s productivity in this chaos?

As you begin measuring the design system team’s productivity, the velocity and throughput metrics that work for a product team no longer behave the same way. Growing a design system through inbound contributions is a great way to leverage the rest of the organization while keeping the core team small, but if you keep applying the usual metrics without adjusting for the “system” part, you may end up disappointed more often than not.

That’s because you neither control nor can confidently predict what comes next in the contribution pipeline. One day, you have three fully baked requests, implementation included, that fit the system perfectly, just waiting for approval. The next day, crickets. The week after, twenty bug reports about seven different tokens and components. It’s hard to plan ahead.

You go a level higher, and it is the beginning of the first quarter. Teams state their asks clearly, dependency management is fully in place, the overall plan makes sense. And yet, by mid-February, one-off requests and last-minute changes start coming your way. Your team starts digging deeper with curiosity and diligence, and while they’re doing what’s right for the design system, the rest of the org watches impatiently. “We only need use case X to work, it doesn’t seem that hard,” a product manager casually comments. “Do you need to hire more people?”, your boss asks with an empathetic smile. “We should be more agile and do things iteratively!”, says an engineer whose pull request just got 23 new comments on functionality changes and test coverage.

And you want this information, but you want less of it over time. Less of it would mean the design system team is improving its processes, and the rest of the organization is learning to consider the second-order effects of decisions that land in Figma or in code that all other teams depend on.

Two things you can do, continuously, to achieve that: a) educate people, so they reconcile what they expect from the design system with what’s important for the system itself, and b) adjust (and occasionally reinvent) the architecture of the design system, deciding which parts are independent vs. interdependent, so that iterating on, say, individual tokens or components is safe and encouraged.

After that, what do you do with throughput?

Measure two things:

  • the design system team’s own throughput for work that is fully isolated from the rest of the org, where the team calls all the shots,
  • and the full-line throughput: the time from the moment someone on a product team sends a request to the moment that request is fulfilled.

Improving the former feels like the team as a whole is on track towards mastery. And improving the latter is going to help with adoption and usage of the design system that your team is working on. Make time to balance the two and find a proportion that works best for your team and organization.
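To make the two metrics concrete, here is a minimal sketch of how they could be computed from request records. The `Request` shape, field names, and the choice of a trailing window and a median are all assumptions for illustration, not a prescribed tooling setup.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median

@dataclass
class Request:
    opened: datetime     # when a product team filed the request
    fulfilled: datetime  # when the change shipped

def weekly_throughput(fulfilled_dates: list[datetime], window: timedelta) -> float:
    """The team's own throughput: items completed per week over a trailing window."""
    cutoff = max(fulfilled_dates) - window
    recent = [d for d in fulfilled_dates if d >= cutoff]
    return len(recent) / (window.days / 7)

def median_lead_time(requests: list[Request]) -> timedelta:
    """Full-line throughput: median time from request to fulfillment."""
    return median(r.fulfilled - r.opened for r in requests)
```

Tracking the first number on work the team fully controls, and the second across everything inbound, gives you one trend line for mastery and another for how the whole pipeline feels to the rest of the org.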
