Rolling out a formal architecture modeling initiative involves many different strands: questions about governance, about access permissions, about importing existing architecture models… and about importing information that might already reside in other repositories. It’s a natural question to ask – if other models exist that might overlap with the proposed effort, only a fool would fail to wonder how those models might be used to avoid replicating work. Most architecture tools offer some form of import mechanism with precisely this kind of scenario in mind.
Now, one of the most common cases in this regard is integration with a Configuration Management Database, or CMDB. For various reasons, such as the widespread influence of ITIL, I find that a CMDB is one of the most common of the ‘other’ repositories of information an organization might have. This can be made doubly attractive by the existence of automated discovery tools that can populate the CMDB automatically. These tools are sometimes a module within the CMDB – or they might be separate.
But if it sounds too good to be true, that’s because it is – there’s potential value in such an integration, but the devil, as so often, is in the details.
The first problem with a CMDB is that, in any organization complex enough to need architecture modeling, it contains thousands – more likely tens of thousands – of items. Tracking all of them in an architecture model is simply not feasible. At the same time, even a single CI can easily have more than a dozen attributes (especially in an environment with automated discovery).
So any integration with a CMDB is going to involve a couple of choices. The first and most important of these is: what will you import? Servers? Applications? Switches? To address this, the trick is to step back and consider why we model architecture at all – to gain understanding and insight, and to derive recommendations. So the rule of thumb is to look at the value of the information. It might be worth importing servers… but perhaps there is only value in modeling those servers that support business-critical applications (at least in the early stages of the practice).
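That filtering step can be sketched in a few lines. This is purely illustrative – the field names (`ci_type`, `supports`, `criticality` of the applications) are assumptions, since every CMDB export has its own schema – but it shows the shape of the decision: keep only the CIs whose information carries modeling value.

```python
# Sketch: filter a CMDB export down to the CIs worth importing into the
# architecture model. Field names ("ci_type", "supports") are illustrative;
# a real CMDB export will have its own schema.

def select_for_import(cis, critical_apps):
    """Keep only server CIs that support at least one business-critical app."""
    selected = []
    for ci in cis:
        if ci["ci_type"] != "server":
            continue  # skip switches, printers, etc. for now
        if any(app in critical_apps for app in ci.get("supports", [])):
            selected.append(ci)
    return selected

cmdb_export = [
    {"name": "srv-001", "ci_type": "server", "supports": ["payments"]},
    {"name": "srv-002", "ci_type": "server", "supports": ["wiki"]},
    {"name": "sw-017",  "ci_type": "switch", "supports": []},
]

critical = {"payments"}
print([ci["name"] for ci in select_for_import(cmdb_export, critical)])
# → ['srv-001']
```

The same gate can later be widened – adding switches, say, once the practice matures – without changing anything downstream of the import.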
Next comes the question of how the import should take place. It’s tempting to set up some kind of automatic import and refresh, but this faces certain problems. For example, it might be that each deployment of Oracle is a separate CI in the CMDB, but in architecture modeling terms – to drive recommendations, governance and analysis – I’m going to argue that a given version of Oracle only needs to be one item in the model (even if it runs on multiple servers). Again, usage, i.e. what analysis and insights are desired, has to be the guide here. What this means in practice is that an automatic refresh is rather dangerous, and it would be far wiser to import into a holding area where a member of the architecture practice can use the imported data to inform the model rather than adopting it wholesale.
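The Oracle example above amounts to a grouping step on the way into the holding area. A minimal sketch, again with assumed field names: many per-deployment CIs collapse into one candidate model element per product and version, with the deployment detail kept as an attribute for the reviewing architect rather than as separate elements.

```python
# Sketch: collapse per-deployment CIs into one candidate model element per
# (product, version) pair, landing in a "holding area" for an architect to
# review rather than being merged straight into the live model.
# Field names are illustrative.
from collections import defaultdict

def to_holding_area(deployment_cis):
    """Group deployment CIs by product and version; record where each runs."""
    holding = defaultdict(lambda: {"deployed_on": []})
    for ci in deployment_cis:
        key = (ci["product"], ci["version"])
        holding[key]["deployed_on"].append(ci["host"])
    return dict(holding)

deployments = [
    {"product": "Oracle", "version": "19c", "host": "srv-001"},
    {"product": "Oracle", "version": "19c", "host": "srv-002"},
    {"product": "Oracle", "version": "12c", "host": "srv-009"},
]

staged = to_holding_area(deployments)
for (product, version), info in sorted(staged.items()):
    print(product, version, "->", len(info["deployed_on"]), "hosts")
```

Three CMDB deployment CIs become two candidate elements (Oracle 12c and Oracle 19c), each carrying its host list – which is exactly the kind of judgment call an automatic refresh would silently get wrong.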
The last question arises from the fact that the infrastructure is not static; the CMDB will keep being updated as the infrastructure changes. So a decision needs to be taken on how regularly refreshes take place. That decision depends on two factors: a) how much change activity takes place – what is the velocity of change in the organization? And b) how often does the discovery tool, if one is used, perform a refresh?
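Those two factors combine in a simple way, which can be sketched as a heuristic. The thresholds below are illustrative assumptions, not recommendations; the one hard constraint is the second factor – re-importing more often than the discovery tool repopulates the CMDB yields no new data.

```python
# Sketch of a refresh-cadence heuristic based on the two factors above.
# The velocity thresholds are illustrative, not recommendations.

def refresh_interval_days(changes_per_week, discovery_interval_days):
    """Pick an import cadence from change velocity, floored by discovery."""
    if changes_per_week > 50:        # high velocity: refresh weekly
        wanted = 7
    elif changes_per_week > 10:      # moderate velocity: fortnightly
        wanted = 14
    else:                            # low velocity: monthly is plenty
        wanted = 30
    # No point refreshing more often than discovery repopulates the CMDB.
    return max(wanted, discovery_interval_days)

print(refresh_interval_days(changes_per_week=60, discovery_interval_days=14))
# → 14
```

Note the output: even a high-velocity organization is capped at the discovery tool’s fortnightly cadence, because the extra imports would only re-read stale data.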
The integration capabilities of modeling tools are attractive, and useful. But as we’ve seen while looking at what’s involved in integrating with a CMDB, such an integration is the kind of thing that needs to be carefully thought through – indeed designed – if true value is to be drawn from it.