Ever since the news of a possible TAM data blackout broke, we have been told that the key reason for suspension is ‘instability’. Peoplemeter-based measurement of TV consumption is a science governed by the rules of statistical sampling theory. It is not a matter of heuristics and sentiment.
However, in all the noise, the real scientific reasons why the data has to be suspended have not come across at all. Instead, the media and advertiser community has been subjected to the nebulous word ‘instability’. This is nothing but a Machiavellian management of perception. Given the recent controversy over the NDTV lawsuit, ‘instability’ is a word that can strike terror in the hearts of media practitioners. What is this instability that is being bandied about? What are the statistical reasons that merit suspension of the data? How much does the sampling error rise across different ranges of TVRs? Can we as an industry have an intelligent, productive discussion centred on real data and numbers?
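The question about sampling error and TVRs has a standard textbook answer. Treating a TVR as a proportion estimated from a simple random sample, the standard error is √(p(1−p)/n), so the *relative* error balloons as the TVR shrinks. A minimal sketch, assuming a hypothetical panel of 8,000 peoplemeter homes (an illustrative figure, not TAM's actual panel size) and ignoring design effects from stratification and weighting:

```python
import math

def tvr_sampling_error(tvr_pct: float, n: int) -> tuple[float, float]:
    """Standard error (in TVR points) and relative error (%) of a TVR,
    modelled as a simple random-sample proportion."""
    p = tvr_pct / 100.0
    se = math.sqrt(p * (1 - p) / n)   # standard error of the proportion
    rel = 100.0 * se / p              # relative error as a % of the TVR itself
    return se * 100.0, rel

# Hypothetical panel size of 8,000 homes -- an assumption for illustration only
for tvr in (5.0, 1.0, 0.5, 0.1):
    se_pts, rel = tvr_sampling_error(tvr, 8000)
    print(f"TVR {tvr:4.1f}: SE ±{se_pts:.3f} pts, relative error {rel:.0f}%")
```

The pattern, not the exact figures, is the point: a 5.0 TVR carries a relative error of a few per cent, while a 0.1 TVR carries a relative error of over 30 per cent on the same panel. This is exactly the kind of number the industry discussion should be built on.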
Specifically, the 1 November deadline for digitisation of households has long been known. So one would expect the measurement system to stay in sync in its sampling, ensuring adequate representation of DTH, STB and cable homes. The main cause of instability would be inadequate sampling of these household types, which changes the sampling weight of each respondent and can thus destabilise the estimates. But given that everyone knew the digitisation timeline, there is no reason why the measurement system could not have kept in step and ensured continuity of data. Suspending the data without a rational, meaningful, scientific discussion is akin to going back to the dark ages.
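The weighting mechanism described above can be sketched in a few lines. Under post-stratification, each household type's weight is its share of the universe divided by its share of the panel; the shares below are purely hypothetical illustrative figures, not TAM's actual data:

```python
# Post-stratification: weight = population share / panel share per household type.
# All shares below are assumed for illustration, not actual TAM figures.
universe = {"cable": 0.55, "DTH": 0.35, "STB": 0.10}  # assumed post-digitisation mix
panel    = {"cable": 0.70, "DTH": 0.22, "STB": 0.08}  # assumed lagging panel mix

weights = {h: universe[h] / panel[h] for h in universe}
for h, w in weights.items():
    print(f"{h}: weight {w:.2f}")
```

When the universe shifts faster than the panel recruits (as during rapid digitisation), under-represented types carry weights well above 1, and small changes in those few homes swing the headline numbers: that is the mechanical content of ‘instability’.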
Mallikarjunadas CR, chief executive officer, Starcom MediaVest India