Gartner’s annual Magic Quadrant for eDiscovery software was published last week. I am not sure that I am authorised to publish it, but you can find several copies of last year’s version on the web. I wouldn’t worry too much about the differences if I were you.
There are various reasons why I usually avoid writing about the Gartner eDiscovery Magic Quadrant. One is the fact that many of those who sponsor the eDisclosure Information Project appear in it somewhere; it would be invidious for me and rather dull for you if I were to pass on each of their press releases – if I do that with one, I must do it with all. Does a recital of names interest anyone without the detail which you can find in the report or by doing a quick Google search for the report’s name? Quite apart from anything else, I have a policy of avoiding lists, mainly because it is too easy to leave somebody out; I broke that rule last week and it took a little over ten minutes before a (mock-)hurt reaction came in from the person I had failed to mention.
David Horrigan of 451 Research has written about the Magic Quadrant for Law Technology News and he asked me for a few words. His article is headed Stagnant Magic Quadrant for 2014 E-Discovery, and I am quoted as being “sceptical” with this:
I am no great enthusiast for lists which purport to rank e-discovery software providers, feeling that even Gartner’s sophisticated model does not do justice to the range of factors which contribute — or which ought to contribute — to the decision-making.
I gave David Horrigan twice as many words as he wanted, and it is worth giving here the explanation which I added in the notes which I sent him. I said also:
I see a risk that the informed judgement of those who undertake the analysis might be overruled by a board of directors or procurement department whose relatively uninformed opinion is overly affected by these rankings.
That does not diminish my regard for those who do well in such lists but nor does it adversely affect my view of some of the very good players who “only” (the quotation marks are important) appear elsewhere.
What do I mean when I say that there is a range of factors which will contribute to buyers’ decision-making? One factor which Gartner’s methodology cannot account for is personal relationships – users (and I mean the users, not the procurement people) will stay loyal because they like the people they are dealing with and are happy working with them and their product.
Those users also know the difference – from direct experience and from talking to others with similar requirements – between a product description and a project.
Many of them will also be steeped in the murky world of eDiscovery bar-room gossip and back-channel rumour which will have no place in Gartner’s formal analysis or in formal recommendations to purchasing committees but which should form part of every buyer’s information-gathering.
It is this sort of thing which gets lost when ultimate decisions are made by some overriding procurement authority whose judgement too often is based on price and on whether everyone else has made that choice.
Let me tell you a story about how decisions get made in large organisations. Years ago I was one of those responsible for identifying computer solutions for my firm. My job was to identify candidates and to present priced alternatives for others to make final decisions.
I strongly favoured a particular network operating system which was neither the cheapest nor the best-known of the options, and I was the only one who supported it. One of my fellow partners, a man with no interest in computers but with an acute sense of partnership politics, took me out to lunch and urged me to abandon my choice; this was not because he knew anything about it, but because he could foresee that I would get all the blame if the choice was wrong and no credit if it was right. I thanked him for his kind advice but persisted with the recommendation; the computer committee came round to it and the cheque-writers wrote a cheque for it. It was still doing its job years after I had left the firm.
The point of the story is the insight it gives into how big organisations make decisions. The safe course is to look at a list and pick something from the top. That old saying that “No-one got fired for buying IBM” looks a bit dated now, but still expresses a truism about corporate purchasing decisions (incidentally, and almost as if to support my point, IBM itself appears in the bottom right corner of the Gartner MQ; would you ignore them on that ground?).
When George Socha and Tom Gelbmann abandoned their annual rankings, one of the reasons they gave was the fact that buyers were making decisions based on the Socha-Gelbmann report without proper analysis of their own needs. That does not mean they made the wrong decisions, but it narrowed their vision.
To come back to Gartner’s Magic Quadrant, there probably is some value in having a third-party ranking system; it is an achievement to appear in the top right-hand corner of it (the submissions process is non-trivial) and any short-list should include players from there. But not only from there – that encourages a purchasing vision limited to last year’s favoured ones. Nor, crucially, does it allow for the possibility that Gartner’s methodology has got it wrong – that double-edged word “sophisticated” which appears in my quotation above was deliberately chosen and, as I have said, there are important factors which do not form part of a formal rankings list.
The young thrusters and the others not in the top-right corner have to earn their place on the buyer’s shortlist of course, and there are usually reasons why market leaders lead the market. Buyers do themselves no favours, however, by limiting their choices to last year’s winners on this year’s lists.