The Best Tool for Real Content Attribution? The Fermi Problem

Erik Dietrich - Aug 7 - Dev Community

(Editorial note: I originally wrote this post over on the Hit Subscribe blog. I'll be cross-posting anything I think this audience might find interesting, and I've also started a Substack to which I'll syndicate marketing-related content.)

I was recently scrolling through LinkedIn, admiring the art of the single-line hook followed by emoji bullets, when I stumbled on a really interesting question from Fio Dossetto.  The question is as follows:

As someone who self-identifies as neither a marketer nor "smarter than" anyone, I figured I'd leave the comments to the thought leaders and call it a day.  But I couldn't get the question out of my head, especially since some variant comes up so frequently in discussions with clients.

So in this post, I'll offer the two cents nobody asked for on the challenge of content attribution.

Marketers and Measurement: The Quest for Attribution

When I encounter marketers talking about attribution, I generally see an evolution of opinion in two snapshots.

You can't really measure the value that creating high quality content brings.  You just have to put it out into the world, do good work, make people happy, and it pays off in the end.

As someone with erstwhile childhood aspirations of being a novelist, I like this take, even if I don't especially agree with it.  But it's just a matter of time before some executive pours cold water on it, directly or indirectly, and crushes the marketer's spirit with business-ese.

The marketer's take then evolves to a bit of white whaling.

I know high quality content brings value.  Now I just need a tool or methodology that shows that I'm right.

To be clear, I'm not saying that Fio is expressing either of these sentiments -- I don't have enough context to know that.  I'm now generalizing my own anecdotal experience to say that content marketers often seem to be on a treasure hunt for validation more than antiseptic measurement.  And understandably so, since they're probably responding to an equally unsubstantiated assertion that their work isn't bringing the vaunted ROI to the business.

But the problem with this dynamic, other than the human one of a content creator's broken spirit, is the binary: can't possibly be measured vs. must be precisely measured.

The answer lies somewhere unsatisfyingly in between.

Precision, Measurements, and Decisions

Before getting into the specifics of measurement and approach, I'd like to establish a few business-decision principles.

Measurement only needs as much precision as necessary to make a decision.

You're probably used to waking up and looking at your phone or asking Alexa for the weather.  And when you do this, your weather bot of choice conventionally responds with a measurement to the nearest degree.  If it didn't and instead said, "it's cold," this would likely annoy you and you'd find a different weather bot.

But do you really need this measurement to the nearest degree?

Or, conversely, do you find yourself upset that it doesn't tell you the temperature to the nearest one hundredth of a degree?

You're probably just asking to try to decide whether to wear shorts or pants.  So the only level of precision you actually need, when you ask, might be a personally tuned device with four readout values: shorts, pants, sweatshirt, coat.

"Alexa, what's the weather?"  "Wear jeans today, Erik."  "Cool, that works."

Back in the real world, the question that attribution is likely meant to address is "was this video campaign worth the spend?"  And measuring the entire odyssey of any single lead is asking for hundredths of a degree when all you need is "wear jeans."

Decision data only means anything in representative patterns.

For the second principle, I'll meander into a different metaphor.  Imagine that your great, great Aunt Bertha passed away, and that you were never aware of her existence until some probate institution mailed you a check for $400.

How should you categorize that income in your budget?

Is it a gift?  Is it some kind of negative ledger entry for "taxes" or something like that?  Should you create an entirely new category in your budgeting software to tag this with?

The answer is "who cares?"

How likely is this type of event to occur again?  You categorize things in a budget so that you can analyze income and spending and make future decisions based on trends.  The universe randomly burping a new microwave's worth of money into your checking account is an anomaly worth a shrug and some gratitude for your luck.  It's not a trend worth analyzing.

I would argue that the same thing is true from a first-hand anecdotal experience of interacting with a brand in a very specific way.  Unless there's reason to think that Lavender's video campaign routinely results in internal cross-departmental Slack recommendations, Lavender should just take the W and get on with its life.

Some things are not directly measurable.

And, speaking of someone else's Slack, my last point doesn't require a metaphor.  You have to accept that you simply can't measure some things directly.

One of those things is how people at other companies interact in their private collaboration tools.  Absent enough money to bribe someone at Slack into sharing it, Lavender would simply have to accept opacity on this front, understanding that there's a black box of a world out there where brand-aware people interact with one another in mysterious ways.

But that's okay.  For the rest of the post, I'm going to address how you can reason about these mysterious ways, even if you can't prove them in the simplest, most direct sense of the word.

Introducing the Fermi Problem

Let's switch gears now.  I just want to see how you think.

How many piano tuners are there in New York?  Wait, wait, no, how many golf balls can fit in a 747?

If you've ever heard of a question like this, it's quite likely in the context of a company being cute with its interview process.  (Or, more accurately, according to an internal Google study, interviewers making themselves feel smart).

The reason these questions work so well as a vehicle for asymmetrical smugness is that candidates are likely used to a simple relationship between questions and measurements.  If you want to know how many golf balls fit in a jet, stuff them in there and count.  But that's not the only way to go about answering the question.

These types of questions likely found their way into job interviews because of their Rubik's-Cube-esque tendency to sample incorrectly for genius.  Because, like the Rubik's Cube, solving a Fermi problem isn't a mark of genius.  It's just a skill that any mortal can practice and master.

And that skill is simply one of breaking a complex-seeming problem down into simpler ones that are easy to reason about.  How many piano tuners in New York (yikes!) becomes:

  • How many people are in New York?
  • What percentage of people own pianos?
  • How many pianos can the average tuner cover in a service area?

You probably don't know exact answers to those questions, either.  But I bet you feel way more equipped to take a guess at each.
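
Here's what that deconstruction looks like as arithmetic.  Every figure below is a guess, and I've added guesses for tuning frequency and a tuner's daily workload that aren't in the bullets above; the goal is the right order of magnitude, not the right answer:

```python
# Back-of-the-napkin Fermi estimate for "how many piano tuners in New York?"
# Every number here is a guess you could defend in a hallway conversation, nothing more.

nyc_population = 8_500_000
people_per_household = 2.5
households_with_piano = 0.02      # guess: 2% of households own a piano
tunings_per_piano_per_year = 1    # guess: one tuning a year
pianos_tuned_per_day = 2          # guess: travel time eats most of a tuner's day
working_days_per_year = 250

pianos = (nyc_population / people_per_household) * households_with_piano
tunings_needed = pianos * tunings_per_piano_per_year
tuner_capacity = pianos_tuned_per_day * working_days_per_year

print(round(tunings_needed / tuner_capacity))  # ~136 tuners -- right order of magnitude
```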

Deconstructing the Value Proposition of the Video Campaign

So let's disappoint smug interviewers everywhere and turn Lavender's video campaign into a Fermi problem.  The real question, taking some liberties with Fio's original post, is probably "how does Lavender prove the value (likely leads generated) of its LinkedIn video campaign?"

Deconstructing that:

  • How many people saw the video campaign?
  • Of the people that saw the campaign, how many of them became durably brand aware?
  • Of the durably brand aware, how many became (or produced) leads?

Each of these individual components presents its own measuring challenges, of course.  But each of these components is also a lot easier to reason about.  This helps both with modeling and eventual measurement, or at least approximation, tactics.

Modeling the Video Campaign

First, let's do some modeling.  Before even thinking about attribution, which is really a lagging indicator of campaign success, we should have a decent working hypothesis that this campaign will pay off.

Something I've probably bored most of our clients with is an exercise where we document assumptions as variables.  If you've followed this blog long enough, I've also bored you with it in the past.  This is crucial because it lets you move forward with imperfect precision.

For instance, I don't know what kind of qualification rate Lavender had from impressions to durably brand aware, nor do I know how many durably brand aware people become leads.  So I guess.  Let's say, I dunno, 1% and 2% respectively.

I also don't know what video campaigns go for on LinkedIn, but I can do one better than guessing.  I'll have ChatGPT make something up.

(Screenshot: ChatGPT's ballpark figures for LinkedIn video campaign costs.)

Alright, works for me.  And that's all we need to get started.  Here's an initial model.

(I didn't model in video production costs because I don't want to get too in the weeds.)

Anything in purple is an assumption.  Anything in that other, different purple is a measurement (I'm no graphic designer, sue me).  We don't have any of those yet, but we will later.
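
For the sake of illustration, here's a rough Python sketch of a model along these lines.  The 1% and 2% rates are the guesses above; the impression count, CPM, and value per lead are placeholders I invented, so treat the outputs as shape, not substance:

```python
# A rough, back-of-the-napkin version of the model. The 1% and 2% rates are the
# guesses from the post; the CPM, impression count, and value per lead are
# placeholder numbers made up purely for illustration.

assumptions = {
    "impressions": 500_000,          # placeholder campaign reach
    "cpm_usd": 30.0,                 # placeholder cost per 1,000 impressions
    "brand_aware_rate": 0.01,        # 1% of viewers become durably brand aware
    "lead_rate": 0.02,               # 2% of the brand aware become (or produce) leads
    "value_per_lead_usd": 1_000.0,   # placeholder value of a lead to the business
}

measurements = {}  # real figures (e.g., actual impressions, actual spend) go here later

def run_model(inputs: dict) -> dict:
    spend = inputs["impressions"] / 1_000 * inputs["cpm_usd"]
    brand_aware = inputs["impressions"] * inputs["brand_aware_rate"]
    leads = brand_aware * inputs["lead_rate"]
    value = leads * inputs["value_per_lead_usd"]
    return {"spend": spend, "brand_aware": brand_aware, "leads": leads,
            "value": value, "roi": (value - spend) / spend}

# Measurements override assumptions as they arrive.
print(run_model({**assumptions, **measurements}))
```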

Tuning and Using the Model

Let's revisit the earlier Fermi deconstruction.

  • How many people saw the video campaign?
  • Of the people that saw the campaign, how many of them became durably brand aware?
  • Of the durably brand aware, how many became (or produced) leads?

What I've done here is to hypothesize reasonable answers to these questions, using a combination of LLM wisdom and simply making things up, which I'm pretty sure is how 95% of business is conducted these days anyway.  I'm being snarky and self-deprecating here to drive home the point that a guess based on any kind of field experience is actually a pretty good seed for this type of model.

The next thing I would do is bring this to people with relevant experience for a discussion to poke holes in my uninformed assumptions.  I've done a lot of ROI math on marketing campaigns, but someone more experienced in this particular medium than me might have past data on the subject.  They might say, "ChatGPT is totally wrong -- it's way more than that for CPM -- but you can also expect way more than 1% of people to become brand aware."

Perfect, thanks!  I want someone like that to poke holes in my assumptions and drive toward a better hypothesis.

Now, no matter how savvy someone's educated guesses are, they're of course still guesses.  But this back-of-the-napkin exercise can help with very broad strokes evaluation of a campaign.

If, for instance, leads are only worth $50 to Lavender, given the relatively light figure I made up for LTV, then this might not be the campaign for them.  If, on the other hand, leads are worth $5K, then we should stop futzing with a spreadsheet and start making videos yesterday.
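
To see what that check looks like with numbers, here's a quick sensitivity pass over lead value alone, reusing the same placeholder figures as the earlier sketch:

```python
# Sensitivity of the hypothetical model to lead value alone.
# Placeholder figures: 500,000 impressions at a $30 CPM, 1% become durably
# brand aware, 2% of those become leads.
impressions = 500_000
spend = impressions / 1_000 * 30.0   # $15,000 of ad spend (placeholder CPM)
leads = impressions * 0.01 * 0.02    # 100 leads under the guessed rates

for value_per_lead in (50, 1_000, 5_000):
    roi = (leads * value_per_lead - spend) / spend
    verdict = "pull the plug" if roi < 0 else "keep going"
    print(f"${value_per_lead:>5,}/lead -> ROI {roi:+.0%} ({verdict})")
```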

Turning Hypothesis Into Approximation and Measurement

Having formed a reasonable ROI hypothesis via modeling, let's assume we now unleash this campaign on the world and prepare to bask in the pipeline.  As we do that, we can right out of the gate turn impressions and ad spend into measurements in the model.  Those are easy and precise figures.

Where things get trickier is reasoning about "durably brand aware" and "qualified leads" from the campaign.

You really can't measure the number of impressions that turn into durably brand aware people.  But you can ask yourself whether your initial assumption and model continues to seem reasonable.

To understand what I mean, ask yourself what you might expect to happen if you relatively quickly generated 5,000 durably brand aware people.

  • You'd probably gain followers on LinkedIn itself and at a faster clip than historically.
  • Branded searches would likely increase.
  • Your YouTube channel would likely see new followers, and at a faster rate than it had accumulated them before.

And those are all things you can directly measure.  Attributing them to the campaign is a challenge, sure.  But if your channel followers increase at twice the pace that you used to pick them up, I'd say it's quite reasonable to chalk the surplus up to the campaign for evaluation purposes.
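
As a toy example of that "surplus over baseline" reasoning, with entirely made-up follower counts:

```python
# Toy arithmetic for attributing the surplus over your historical baseline.
# Both rates below are hypothetical numbers, not anything from a real campaign.

baseline_followers_per_month = 200   # hypothetical historical run rate
observed_followers_per_month = 450   # hypothetical rate during the campaign
campaign_months = 2

surplus = (observed_followers_per_month - baseline_followers_per_month) * campaign_months
print(f"Followers plausibly attributable to the campaign: {surplus}")  # 500
```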

You can do similar brainstorming around qualifying leads.

Evaluating the Campaign As You Go

I want to close this out by fundamentally reframing the idea of attribution.  Or at least reframing the question that you're trying to answer.

It's tempting to try to answer the question, "how many (and which) leads can I attribute to this campaign?"  And if you have the killer toolset that can actually, precisely answer this, then I envy you.  But a more realistic question to answer on an ongoing basis is this:

How wrong would our model need to be for us to pull the plug, and is there any reason to suspect we're that wrong?
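
Sticking with the placeholder numbers from the earlier sketch, you can make that question concrete by solving for the assumption value at which the campaign merely breaks even:

```python
# Making "how wrong would we need to be?" concrete: given the same placeholder
# numbers as before, solve for the brand-awareness rate at which the campaign
# only breaks even, then compare it to the rate we assumed.
impressions = 500_000
spend = impressions / 1_000 * 30.0     # placeholder $30 CPM
lead_rate = 0.02                       # 2% of brand-aware people become leads
value_per_lead = 1_000.0               # placeholder lead value
assumed_brand_aware_rate = 0.01        # the 1% guess from the model

break_even_rate = spend / (impressions * lead_rate * value_per_lead)
margin = assumed_brand_aware_rate / break_even_rate

print(f"Break-even brand-aware rate: {break_even_rate:.2%}")   # 0.15%
print(f"The 1% guess could be ~{margin:.1f}x too optimistic before the math breaks")
```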

Attribution tooling can help you measure some things, and it can reduce the uncertainty surrounding others.  But it can't get you all the way there, into people's Slacks, across their browsers, over the river and through the woods.  And it can't tell you when you've run the campaign long enough to have sufficient data or leading indicators of success.

To evaluate campaigns, you're going to need to look away from precision and into the realm of uncertainty and probabilities.  That might seem daunting.  But the good news is that the skill of solving Fermi problems is tool-agnostic.

And beyond that, it shifts the burden of proof for content folks.  You can flip "prove this works" around into "here's my model; you prove it doesn't."

And either way, the business wins.  If the skeptics can't poke holes in the model, a successful campaign keeps running; if they can, you improve the model and fix (or stop) the campaign.
