It is probably a grave dereliction of duty to disappear on holiday just as George Socha and Tom Gelbmann publish their annual Electronic Discovery Survey provider rankings, but that is no reflection on the performance of those of my sponsors whose names appear in it.
Anacomp, Autonomy Zantaz, Epiq Systems, Guidance Software, LexisNexis and Trilantic all appear, some of them more than once, in a survey regarded as the most objective and authoritative report on an industry whose 2007 revenues, at $2.794 billion, were up 46% from 2006.
Appearance in the survey rankings, especially in the headline categories as most of these names are, is a high accolade, and the results are eagerly awaited by providers, investors and would-be buyers alike. Why then have George Socha and Tom Gelbmann announced that the 2008 survey is to be their last in the present form?
Their reasoning may seem paradoxical, but it is entirely right – the survey has come to be more influential than its methodology (or any similar methodology) can justify as a means of assessing providers and their capabilities for the benefit of would-be users.
Over six years, the rankings have grown from a simple list of the top five electronic discovery providers to a multi-class set of tables based on more than 350 separate categories of information. The paradox is that the more granular the sources and the more various and specific the tables, the more remote the results get from their original purpose – “to help guide consumers entering an inchoate market”.
In an article on EDD Update, Socha and Gelbmann say:
Why are we killing the rankings? We believe our survey rankings have reached the point where they no longer serve their original purpose. When they are announced, we are told, they can affect the share prices of publicly held companies. They can have an impact on the ability of providers to obtain financing. They can be a key factor, sometimes the most important factor, in determining which provider is selected to take on a project or deliver a software program.
More succinctly, they say that “anyone who makes buying decisions primarily on these rankings is a fool.”
The main difficulty with the rankings, I think, is not who is on the lists but who never will be: companies offering sound solutions that lack the user base, and hence the user support, needed to out-rank the big established players. The market is no longer “inchoate”, and the lists which originally served to identify players which were, put at its lowest, safe bets, have become a self-perpetuating closed group.
It should come as no surprise that (as the authors put it), “consumers are using [the rankings] as a substitute for the work they should be doing themselves — analyzing consumer needs and assessing what and whose services and software might meet those needs”. In a risk-averse world, it takes a brave man to recommend a provider whose name does not feature on tables provided by a source perceived to be authoritative.
We wait to see what George and Tom come up with as a more effective way of matching users’ needs to suppliers’ offerings. Be clear, however, that their determination to find a better way of surveying and reporting on the e-discovery market does not detract from the value of the 2008 rankings. The analysis which Socha and Gelbmann urge users to make will bring these companies to the fore anyway, with or without reliance on the survey. I am extremely proud to fly the flags of service and software providers who are rated so highly in the Socha-Gelbmann survey.