
NRS: The importance of sample

Katherine Page

Katherine Page, technical consultant for the National Readership Survey, says that if a sample doesn’t do a good job of representing the universe, the audience data are likely to be flawed…

The foundation of any media measurement survey is the quality of its sample.  No matter how well-designed the questionnaire and how well-executed the fieldwork, if the sample doesn’t do a good job of representing the universe then the audience data are likely to be flawed.  If those data are being used as the basis of allocating advertising spend, or other key commercial decisions, that’s a problem.

Media consumption isn’t explained by simple demographics, and that’s the root of the issue.  Lifestyle, attitude and outlook all play their part in determining media preferences.  It isn’t possible to make a poor sample representative simply by weighting it back to the demographic profile of the population it represents.  There have been considerable developments in weighting techniques over and beyond demographics, but no holy grail so far.  The quality of what you start with is still important.
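To make the limitation concrete, here is a minimal sketch of demographic post-stratification in Python. The figures and the single age-band variable are invented for illustration; this is not the NRS weighting scheme.

```python
# Illustrative post-stratification: weight an achieved sample back to
# known population proportions on one demographic (age band).
# All figures are invented for the example.

sample_counts = {"18-34": 200, "35-54": 300, "55+": 500}        # achieved sample
population_props = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # known universe

n = sum(sample_counts.values())
weights = {
    band: (population_props[band] * n) / sample_counts[band]
    for band in sample_counts
}

for band, w in weights.items():
    print(f"{band}: weight {w:.2f}")
# Under-represented bands receive weights above 1 and over-represented
# bands weights below 1 -- but the adjustment says nothing about
# non-demographic skews such as lifestyle or attitude.
```

The arithmetic corrects the demographic profile, which is exactly why it cannot rescue a sample whose bias lies outside the weighting variables.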

In order to achieve the best sample possible, media measurement surveys, including the National Readership Survey (NRS) and the BARB establishment survey, are among the last bastions of probability sampling, along with the government’s surveys into social trends.

The National Readership Survey is based on a probability sample of 36,000 adults per year, interviewed in-home.  Essentially what this means is that everyone within the universe has a random chance of being selected as a respondent.  Furthermore, only those selected respondents can be interviewed – the interviewer cannot choose who to interview and there are no quotas or substitutions.
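The selection rule described above can be sketched in a few lines of Python: every unit on the frame has a known chance of selection, and only the selected units may be approached. The frame and assignment size below are hypothetical, not NRS figures.

```python
# Minimal sketch of probability sampling without substitution: each
# address on the frame has a known, non-zero chance of selection, and
# interviewers may approach only the selected addresses.
# Frame size and assignment size are invented for the example.

import random

frame = [f"address_{i}" for i in range(100_000)]  # hypothetical sampling frame
rng = random.Random(42)                           # seeded for reproducibility

selected = rng.sample(frame, k=360)               # e.g. one assignment batch
# A refusal or non-contact at a selected address is recorded as such;
# it is never replaced with a more convenient neighbouring address.
print(len(selected))
```

The absence of quotas and substitutions is what makes the later response-rate calculation meaningful: the denominator is fixed at selection time.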

Once a potential respondent is selected many attempts will be made to secure their cooperation, with interviewers making up to 10 visits.

A random sample allows a straightforward calculation of response rate, i.e. what proportion of selected respondents give an interview.  The figure for NRS is 52% and has been held constant for the last eight years through the efforts of Ipsos MORI, the NRS research contractor.
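The calculation really is as simple as the text suggests, which is the point: the denominator is the full set of selected respondents, not a self-selected pool. The counts below are invented to illustrate the arithmetic, not actual NRS figures.

```python
# Response rate for a probability sample: completed interviews divided
# by selected (eligible) respondents. Counts are hypothetical.

selected_respondents = 69_000   # invented: eligible selected respondents
interviews = 35_880             # invented: completed interviews

response_rate = interviews / selected_respondents
print(f"Response rate: {response_rate:.0%}")  # -> Response rate: 52%
```

An internet panel's completion percentage has a different denominator, the panel itself, so it cannot be compared with this figure.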

It is harder to calculate response rates for other forms of sample.  For instance, the percentage of internet panellists completing a survey is not a response rate in the same way, as it doesn’t take account of how representative those panellists are in the first place.

The key to success is the personal contact that interviewers make with respondents on the doorstep.   Ipsos MORI’s interviewers aim to establish a rapport with the respondents within the first minute of conversation on the doorstep.  This is a major asset in obtaining the respondent’s co-operation, and is very difficult to replicate by any other methodology.

Of course, the interviewer has to find the respondent at home before they can persuade them to co-operate.   This is a more difficult proposition than it once was, particularly in urban areas, due to long working hours, frenetic lifestyles and single person households.  Entry phones and gated developments are another potential barrier.

In London, the NRS introduced a respondent incentive of £25 per interview in 2006 to address the declining response rate.  This immediately raised the response rate, and sample sizes in London increased by around 50%.  The incentive has been particularly helpful in increasing the sample of AB men.  Some wear-out of the incentive’s effect was expected, but this has not so far proved to be the case.

Despite these successes, it is reasonable to ask whether the selected respondents who aren’t interviewed might have different reading habits from those who are – and the answer is that there will, of course, be differences.

There are also those who question the on-going value of probability samples in an age of declining response rates.   However, while no sample is perfect, in relative terms, a well-executed probability sample still offers significant advantages.  This is why probability samples are still the sample of choice for providing data to adjust and weight other surveys – NRS plays this role in respect of TGI data and the UKOM establishment survey.

The NRS is also asked why it doesn’t use an internet panel for the sample, as this would be less expensive and more convenient.  For the time being, the view is that it is not possible to obtain a ‘stand-alone’ internet panel sample that is representative enough to provide a media currency.  A number of experiments in other countries, such as the US and Canada, support this view.

Typically, internet panels are based on “opt-in” samples, drawn from a small percentage of the population.  They can be skewed towards particular sorts of respondent, especially heavy internet users, and cannot represent non-internet users.  While weighting can help correct some of these differences, it is not enough for NRS purposes.  Indeed, some internet panels, such as YouGov, use NRS readership data to weight their own findings.

However, some readership surveys in the Netherlands and now France for instance do offer their respondents the opportunity to complete the survey online.  While this requires careful management, it does offer an opportunity for the future and is an option the NRS will be watching closely.

Metrics will be just one of the topics discussed at MediaTel Group’s Future of National Newspapers seminar on Friday. The panel includes Adam Freeman, Guardian News & Media; Alan Brydon, MPG Media Contacts; Dominic Carter, News International; Claire Enders, Enders Analysis and Raymond Snoddy.