In my most recent blog posts I’ve been describing a variety of ways that I’ve seen governance initiatives fail, particularly as governance applies to architecture, and most especially to enterprise architecture. I’ve talked about how insufficient stakeholder management, the lack of a communication plan, and failure to line up actual allies can all set things up for failure. In this post, I’m going to look at another pitfall that I’ve seen organizations fall into: unclear criteria for use during reviews. Put more simply, under what circumstances could a project fail architecture compliance review? Or under what circumstances can a project be given actions as a condition of passing? If this is unclear, the program has a high chance of failure.
Why is this? The reason lies in the very nature of organizations. First of all, the people performing the review are colleagues of the people being reviewed. They have to work with them on an ongoing basis, and it’s natural for most people to look for ways to avoid conflict. Second, people within an organization generally want that organization to succeed in its goals, which leads to a natural tendency to want projects to go forward. Taken together, these two factors mean that when review criteria are unclear, projects get the benefit of the doubt. The problem is that in practice this means no project ever fails review – which in turn renders the whole review exercise a pointless ritual.
So – review criteria need to be clear. What this means is that it should be clear when a project should fail the review. But how to accomplish this? As is often the case, TOGAF provides a starting point: Chapter 48 provides a number of checklists of areas to examine. However, as is usually the case with TOGAF, the checklists are designed to be appropriate to all organizations, so it’s not enough to use them as-is – it’s necessary to tailor them to the environment and policies of the organization.
To pick an example at random, the checklist item “What Database Management Systems (DBMSs) have been implemented?” (from section 48.5.4.4 of the TOGAF specification) is not something a project can pass or fail – it’s simply a question to consider. A mandate to use only Oracle or SQL Server, on the other hand, is a criterion that a project can pass or fail.
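To make the distinction concrete, here is a minimal sketch of what such a tailored, pass/fail criterion might look like if it were written down as an automated check. The approved-product list and the project record structure are hypothetical examples for illustration only – they are not mandated by TOGAF or by any particular organization.

```python
# A minimal sketch of a tailored, pass/fail review criterion. The checklist
# question comes from TOGAF section 48.5.4.4; the approved-product list and
# the project record structure below are hypothetical, for illustration only.

APPROVED_DBMS = {"Oracle", "SQL Server"}  # assumed organizational standard


def dbms_criterion(project: dict) -> tuple[bool, str]:
    """Fail the item if the project uses any DBMS outside the approved list."""
    used = set(project.get("dbms", []))
    unapproved = used - APPROVED_DBMS
    if unapproved:
        return False, "Unapproved DBMS in use: " + ", ".join(sorted(unapproved))
    return True, "All DBMSs are on the approved list"


# Hypothetical project record going through review
project = {"name": "Customer Portal", "dbms": ["Oracle", "PostgreSQL"]}
passed, detail = dbms_criterion(project)
print(("PASS" if passed else "FAIL") + " - " + detail)
```

The point is not that criteria must be automated – most will be judged by people – but that each one should be specific enough that it could, in principle, be evaluated this mechanically.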
It may seem that insisting on ways a project can fail a review is an unnecessarily negative approach to review criteria. But the goal of review is to ensure the success of projects and of the organization, and to accomplish this the review criteria have to have real effects. The best way to ensure this is to ask of each criterion: could a project actually fail this test, or could a sympathetic panel always find a way past it? Without this, reviews are simply going through the motions.