Peter Harrad outlines a recommended approach to evaluating a modeling tool.
There are two things that I find striking about the modeling tools arena. The first is the number of organizations that spend a 5- or 6-figure sum on a tool and then don't seriously use it (rarely because of any deficiency in the tool itself). The second is how many organizations approach the purchase of a tool in a disorganized fashion. There are already enough recommendations out there about what a tool should or should not offer that it seems pointless to add to them.
Instead, in this paper I'm going to outline a recommended approach to evaluating a modeling tool, based on what I've seen work and what I've seen fail. It may seem to be stating the obvious that evaluating a tool is a project in its own right, but too many organizations don't act as if this were the case. The evaluation approach I will outline consists of the following 5 stages:
Define the Stakeholders
Before you even start to talk about what your tool should do, you need to ask yourself who needs to be involved.
This stage might seem obvious: the stakeholders are the budget holder and the people who will be using the tool. But in practice, once you start looking at a modeling tool, questions arise. First of all, what other initiatives might be interested? Most modeling tools don't support just one area of modeling; their vendors would be fools to limit them that way. So the natural question that arises again and again is 'what other teams might be able to use this, now or in the future?'