
Agency campaign reporting is a propaganda tool

Opinion

These reports are a waste of time and energy, with no actual learnings to be found, all designed to make the agency look busy. But it doesn’t have to be this way.


Like almost anyone who has worked in digital media for a good number of years, I've written, read and ignored many weekly campaign reports.

These reports come with good intentions — supposedly for those managing media to share insights and information from the abundance of data points they have access to.

But, in practice, these reports are often a complete waste of time and effort. Many recipients don't read them. And, if we're being honest, those who do rarely find anything useful inside. They are certainly not the place to communicate powerful learnings. They have become something of a box-ticking exercise.

As long as the report gets shared with some written words and pretty-looking graphs, job’s a good’un.

This is because, in most media agencies, the weekly campaign report is, in effect, a propaganda tool. A tool to make the agency look busy and productive, and to show that it is actively working towards making the campaign deliver better business results.

The reality is often very different.

With an abundance of data, you can concoct any performance narrative that you like. There’s always a metric that improves week on week or a data point where the advertising exceeds some vague industry benchmark.

Reports vs reality

I had this realisation when I stepped outside the media agency bubble for the first time in my career. Tasked with driving real-world marketing effectiveness for my clients inside a creative agency, I got exposure to how other media agencies share reports. And, crucially, how clients respond to them.

At the time, I was working with one of the UK’s biggest brands, deploying one of the largest eight-digit media budgets in the industry. At that moment, business results were down and the brand was rapidly losing market share to competitors.

But you would never have guessed this reality had you only been exposed to the weekly performance report. In the report, everything was looking rosy. Indeed, I suspect the authors of these reports had no idea that business results were looking so bad. They were lost in a sea of fairly meaningless channel metrics.

I’d sit with the client as these reports were talked through by the large multi-channel media agency team. “Another great week for <insert literally any channel that is being run>,” their commentary would begin. I could see and feel the client’s eyes starting to roll even before the sentence was finished.

And why was it a great week? Click-through rate was up. Or viewability. Sometimes, it was the conversion rate. Their holy grail was return on advertising spend (ROAS) going up. No-one would complain if ROAS was going up, even if the wider business was dying a death by a thousand cuts.

Agencies love a new metric: engagements, view-through rate, attention. All of them can be pointed at to show that the agency is doing a good job.

Lies, damned lies and statistics

This might sound like an enormous waste of time and energy, because that's precisely what it is. The agency expends significant resources producing insights and commentary each week, even though the job invariably falls to one of the most junior members of the team. That person is instructed to make the client feel like the agency is doing a good job, rather than to highlight areas that can be improved.

Not only is this process a waste of time and effort, it is also dangerous. What if someone actually reads the reports and mistakes the propaganda for an actual learning? What if the reader believes that some spurious agency action really did cause a bunch of important metrics to spike? When every week is a good week, what are we actually learning?

Sadly, it feels like the spreading of mistruth is only getting worse. We have more data points than ever to support the lies, damned lies and statistics. In this era of APIs and connected dashboards, much of the data collection and graphic-building can be done without humans. Agencies sell the time saved as a win for the client, promising that staff can dedicate it to more useful activities in support of the client.

But in reality? They are just given more work and even less time is spent trying to create real and useful campaign insights.

Actual business results

I’ve painted a gloomy and cynical picture here, because that is my lived experience. But it doesn’t have to be this way.

Sharing deep insights at regular intervals is a good idea, but the focus should be on the things that actually affect business results. At Bicycle's performance marketing arm Blade, we try to offer richer insights by highlighting things that went well and things that did not. And we most definitely look at data points beyond just those relating to the ads we are running.

Most marketing stakeholders would find it enormously useful to know if category demand is up or down. Or when competitors change their pricing or headline offers. Or when brands or the category itself are hot on social media and what the sentiment is. This is useful information — information that adds context and helps with decision-making.

But sharing information like this can also be embarrassing for an agency. You can't point to other brands doing better, for fear of it reflecting badly on your own efforts.

So, instead, as an industry, it seems we'll continue to report on the things that are vaguely within our control, despite making no meaningful effort to actively manage them. We'll just fudge the commentary and hope that no-one notices the constant good metrics haven't led to any meaningful business success in top- or bottom-line numbers.


Aidan Mark is media science and strategy director at Bicycle

James Robinson, Managing Director, Hello Starling, on 04 Apr 2024
“While the article highlights the potential disconnect between reported metrics and the client's actual business outcomes, it also paints a one-sided picture that fails to acknowledge certain complexities and nuances within the industry. Not least because advertising is not a guaranteed road to riches and agencies often have very little ability to impact a client's overall business outcomes. There is so much outside of our control and our scope of work only extends so far.

It is important to recognise that not all campaign reports are created equal. While the article portrays them as uniformly propagandistic tools, this isn't universally true. Many agencies work hard to provide valuable insights for their clients that are derived from data analysis, although admittedly, there are instances where reports may fall short of this ideal.

Interpretation of data can be subjective. Metrics such as click-through rates or viewability are undoubtedly important, albeit not exhaustive indicators of campaign success. Agencies may genuinely believe that improvements in these metrics signify progress, even if they don't directly translate into tangible business outcomes. Therefore, labelling such efforts as mere propaganda overlooks the possibility of genuine attempts to optimise campaigns.

The author rightly advocates for a shift towards more meaningful reporting that encompasses broader business context and outcomes, but I refer to my earlier point: this is often out of our control and very often, clients don't want to or can't share this information. All too often we are doing the job with one hand tied behind our back (that's a whole other subject matter!).

I think that whilst some of the points are valid, the article does oversimplify a complex issue. Rather than dismissing all reports as propaganda, a more nuanced perspective could acknowledge both the challenges and opportunities inherent in data-driven marketing practices.”
