Design Authority

May 12, 2019 · 9 min read

This post covers how to drive quality in your software products by defining and using a “Design Authority”. It calls out what the problem is, and how you can use a self-assessment to set the standard within your organisation for what an acceptable level of quality in your product looks like.

There are many ways to assess software quality, be it automation tools, top-down assessments from a governance board, or peer-reviewed assessments. As with most things in life, the best solution is probably a blend of several methodologies to get the desired outcome. This post will focus on peer-reviewed assessments.

The problem

Quality, in a broad sense, is subjective.

Take a Formula 1 car, for example. The tyres last for about 20 laps (very roughly, 80 miles) and are deemed to be of high quality. If I had to change the tyres on my car every 80 miles, I would not be happy. Whilst tyres are required for both Formula 1 cars and road cars, their use cases are vastly different, and so are the quality levels needed. A team of engineers will have decided what quality controls are in place for each situation.

The software industry is no different, so you need to try to define the quality metrics for your software in your business.

If you have multiple software engineering teams building features and fixing bugs, you need to have quality baked in. You cannot have a platform going down, data loss, or other issues that stop your users doing what they need to do. The quality defined in your organisation is going to be relative to your industry. If you are in a clinical environment, where a software issue could have fatal consequences, your quality bar should be very high. If you are a research and development team within a company, you may accept certain issues, or defer certain quality gates if they add no value to your situation or objective.

Another aspect to all of this is the human side. If you are training up junior Technical Leads, how can you help them succeed and showcase what quality is? If something is not up to scratch, how do you quantify that and let the engineers know? How do you define quality gates before software can be released to customers?

The solution

The solution is multi-faceted, from the helicopter view down to the low-level view of the code base.

If you are reading this post, I suspect you already have code reviews as part of your day to day process. If you don’t, it may be worth reading up on that first.

For the teams I have worked in, a pull request or code review normally has some guidance as to what the team are looking for when reviewing the code. This is a checklist [1] that not only helps the developers raising the pull request, but also helps the reviewers be consistent. Consistency is key. I generally have the code review guidance up whilst reviewing the pull request so I remember all the things to check.
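For illustration only (the exact items will differ from team to team), such guidance might include questions like: does the change include tests that cover the new behaviour, are breaking changes to public interfaces documented, and does the pull request description explain the why as well as the what?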

So if you scale this a little more, and zoom out to a wider view, teams should be doing the same for their products. This is where a Design Authority can be helpful. This section covers how my current employer is using a Design Authority to help quantify what quality is, and to help all engineering teams reach that level.

First off, you need to define what Camille Fournier calls “True North” in The Manager’s Path. Camille talks about technical leaders defining what is acceptable for software quality, this being the “True North” for your organisation. The senior technical leaders in your organisation need to come together and collaboratively define what quality looks like for your business. Much like the Well-Architected programme from AWS provides guidance, support, and a marker for quality within the AWS ecosystem, a Design Authority can serve the same purpose within your organisation.

In simple terms, a Design Authority is an assessment that asks probing questions about each of the engineering disciplines your organisation cares about.

Alongside the assessment, we have corresponding documentation that breaks down each discipline, section, and question, explaining what is being asked of the team, and why.

Assessment

Much like the AWS programme mentioned above has five pillars, our programme has six disciplines, which fortunately provides a nifty little acronym to remember.

Each discipline within our Design Authority is a tab on a spreadsheet (don’t worry, it will hopefully evolve past a spreadsheet [2]). Each discipline is then broken down into sections, and each section into questions. A discipline and section has a reference number which then corresponds to the documentation outlining all the details.

To provide a concrete example, in our Maintainable discipline we have a pull request section.

M2.1 Status checks are conducted before a merge

In GitHub (where our documentation is stored, and versioned!) we go on to explain what we mean by the statement in the assessment. We try to provide useful examples from within our business as a benchmark.
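To make the shape of the assessment concrete in code, here is a minimal sketch in Python of how disciplines, sections, and questions could be modelled; the class and field names are my own illustration, not our actual tooling.

from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Requirement(Enum):
    MUST = "MUST"
    SHOULD = "SHOULD"

@dataclass
class Question:
    reference: str            # e.g. "M2.1", which links to the documentation
    statement: str
    requirement: Requirement

@dataclass
class Section:
    reference: str            # e.g. "M2"
    name: str
    questions: List[Question] = field(default_factory=list)

@dataclass
class Discipline:
    code: str                 # e.g. "M" for Maintainable
    name: str
    sections: List[Section] = field(default_factory=list)

# A fragment of the Maintainable discipline, using the example above.
maintainable = Discipline(
    code="M",
    name="Maintainable",
    sections=[
        Section(
            reference="M2",
            name="Pull requests",
            questions=[
                Question(
                    reference="M2.1",
                    statement="Status checks are conducted before a merge",
                    requirement=Requirement.MUST,  # assumed MUST for illustration
                ),
            ],
        ),
    ],
)

Each reference number then maps back to the documentation in GitHub that explains the question in detail.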

Each software engineering team is expected to complete this assessment each month, and for each statement/question, you can answer “Met” or “Not met” and you have to provide evidence.
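As an example of the kind of evidence that could be gathered automatically for M2.1, the sketch below queries the GitHub REST API for a branch’s required status checks. The repository details are placeholders, and this is just one way a team might evidence the statement.

import os
import requests

# Placeholder repository details for illustration; substitute your own.
OWNER = "example-org"
REPO = "example-service"
BRANCH = "main"

def required_status_checks(owner: str, repo: str, branch: str) -> list:
    """Return the required status check contexts configured for a branch."""
    url = (
        f"https://api.github.com/repos/{owner}/{repo}"
        f"/branches/{branch}/protection/required_status_checks"
    )
    response = requests.get(
        url,
        headers={
            "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("contexts", [])

if __name__ == "__main__":
    contexts = required_status_checks(OWNER, REPO, BRANCH)
    print("M2.1 evidence, required status checks:", contexts or "none configured")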

Each statement/question is defined as a MUST or a SHOULD, and quality gates are defined at increasing levels of compliance, starting with every MUST being met.

We have badges that are awarded based on the quality gates above. The benchmark to even release software is the “All MUSTs are met” gate, but that is not the level the senior leadership team are after, and quite rightly so.
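To show how a gate could be computed mechanically from an assessment, here is a rough sketch; the answer format and gate names are my own illustration rather than the exact ones we use.

from typing import Dict, Tuple

# Answers keyed by question reference, e.g. {"M2.1": ("MUST", True)},
# where the bool records whether the statement was answered "Met".
Answers = Dict[str, Tuple[str, bool]]

def quality_gate(answers: Answers) -> str:
    """Work out which quality gate an assessment reaches."""
    musts_met = all(met for req, met in answers.values() if req == "MUST")
    shoulds_met = all(met for req, met in answers.values() if req == "SHOULD")

    if musts_met and shoulds_met:
        return "All MUSTs and SHOULDs are met"
    if musts_met:
        return "All MUSTs are met"        # the benchmark for releasing software
    return "Below the release benchmark"

example = {
    "M2.1": ("MUST", True),
    "M2.2": ("SHOULD", False),   # hypothetical question for illustration
}
print(quality_gate(example))     # prints "All MUSTs are met"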

All assessments are published publicly, which helps drive quality and collaboration. If your team is struggling in a certain area, you can look at teams who have scored higher and engage with them on how to achieve the same standard. The data produced during the assessment is genuinely useful information for other teams to learn from. It’s not simply a tick-box exercise with a score at the end; the benefits from the output are tangible.

Standards

So where do you start?

We started with a group of senior engineers and talked through what we believed to be the definition of “quality” within our software products. This was broken down into backend/infrastructure and front end/application structure, which helped focus the meetings and engage the relevant technical authorities at the right level, allowing many voices to be heard. This was not a top-down management decision, but was led from within the engineering community.

The output from those sessions was then whittled down into the disciplines mentioned above. We iterated on the assessment a couple of times, removing specific, targeted questions in favour of more open-ended, generic ones. This gives teams the freedom to “meet” the objective in their own way.

For example, we went from

Are you conducting mutation testing?

to

How are you proving the quality of your unit tests?

There is a subtle difference there. We, as a team of engineers setting standards, care that unit test quality is in the mind of the teams. Some teams will provide mutation testing reports to explain how their unit tests are of a certain quality, other teams may just produce branch coverage reports. There is merit in both, and issues in both. The aim is to drive the behaviour, not be dogmatic in the implementation.

Another thing to call out is that the standard is really high. The expectation from everyone involved was that no team would pass their first assessment, and that if one did, the standard would need to be raised. When you lift your head above the parapet and look at what other teams are doing, you soon realise there is a plethora of techniques and tools you can deploy to make your software of higher quality. By taking the best of breed from all teams, you raise the bar across the business.

Process

Some important call outs from what we have learnt.

When it comes to process, the first step is to start, then iterate. What works for one team, department, or organisation may not work for another. You won’t really know until you start, get feedback, and then iterate.

We have started to look into whether we can automate some of the checks on the Design Authority Assessment, and use something like shields.io to add badges to the code bases in GitHub. Some of the “Engineering ready” items can be checked with automation, whereas the vast majority will need evidence and explanations.
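For example, shields.io can render a static badge from a URL of the form https://img.shields.io/badge/<label>-<message>-<colour>, so a sketch of the badge generation might look like the following; the gate-to-colour mapping is something I have invented for illustration.

from urllib.parse import quote

def badge_url(gate: str) -> str:
    """Build a shields.io static badge URL for a Design Authority gate."""
    colours = {
        "All MUSTs and SHOULDs are met": "brightgreen",
        "All MUSTs are met": "green",
        "Below the release benchmark": "red",
    }
    colour = colours.get(gate, "lightgrey")
    label = quote("Design Authority")
    message = quote(gate)
    return f"https://img.shields.io/badge/{label}-{message}-{colour}"

print(badge_url("All MUSTs are met"))
# https://img.shields.io/badge/Design%20Authority-All%20MUSTs%20are%20met-green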

Conclusion

The Design Authority process we have implemented is yet another tool in the box to help drive quality, just as code reviews, unit tests, and linting are tools and techniques you use daily within your development processes.

I have certainly found it useful for driving engagement in another project I am part of - The Technology Radar - as you have to answer “Met” in the “Supportable” discipline when we ask whether you are using technology defined on the radar. This mechanism holds teams accountable for engaging with the Technology Radar project.

Footnotes
