
Compete on Insights

Founder of CX-AI.com and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.
Published on: November 15, 2021 · 12 min read

The success of strategic initiatives relies on three things: the insights from analysis, a strategy built on those insights, and the implementation of that strategy. It can fail at any point in this cascade. But the insights are the Achilles’ heel.

There is a twofold irony: First, all resulting investments are wasted if the insights are wrong. Second, you can see if an implementation fails and if a strategy lacks consistency and rigor. But you can NOT “see” whether an insight is valid. Any insight can be turned into a plausible story.

Most CEOs and CMOs are not aware of this. It is my observation that this irony is the main reason for stagnation and the bottleneck for growth.

Let me give you some examples of typical fails I see enterprises make.

FAIL #1 – Believing What Customers Say

👉 EXAMPLE: CUSTOMER EXPERIENCE INSIGHTS

Customer Experience research has a long tradition, and the latest trend is simplification via the NPS framework: one rating and one open-ended question are supposed to measure the level of loyalty and the reasons behind it.

Really? 

It turns out that the most frequently mentioned topics in open-ended, unstructured feedback are, in most cases, not the most important ones.

Actually, frequency does not correlate with importance AT ALL.

But this violates the fundamental assumptions of 91% of all CX measurement programs.

The consequence is that companies prioritize topics that they should not.

Even worse, such feedback is forwarded to customer-facing employees. Reading it, those employees come to believe that the most frequent topics are the most important ones.

Wrong knowledge percolates throughout the company.

How to fix this? More on that below.

FAIL #2 – Focusing on Outcomes

👉 EXAMPLE: CREATIVE INSIGHTS

Advertising research has two parts. Part one is how to spend ad budgets. Part two is how to optimize the creative.

Independent meta-studies from ARF have shown that at least 70% of ad impact can be attributed to creative quality.

Nearly all efforts by advertisers to assess creatives rely on some kind of measurement technique.

There is the classic copy test that asks for responses and purchase intent after exposure. And there are highly elaborate neuroscientific and biofeedback procedures. All of this can make a lot of sense and can be useful.

But what the industry fails to appreciate is that FACT does not equal TRUTH.

Measurement just produces facts. What brands, however, need to know is WHAT TO DO to achieve a particular outcome. This if-then link is a question about causality between actions and results. It can NOT be measured. 

It can only be inferred. The science behind this is called “causal analysis”.

The hunt for success strategies is handed to storytellers and creative “geniuses” instead of proper analysis. 

How to fix this? More on that below.

FAIL #3 – Over-Simplification

👉 EXAMPLE: NEW PRODUCT INSIGHTS

95% of CPG (FMCG) product launches do not survive the first year. This is despite brands investing billions in customer research. How can this be?

Yes, it isn’t easy. A product needs to appeal to customers before and after buying it. It needs distribution, shopper marketing, a strong brand to be recognized, and a fair price to get a yes.

We looked at all CPG grocery product launches of one year with data on all those components. Each component hardly correlates with success. The correlation of purchase intent with survival is nearly ZERO. The same holds for brand, product-use scores, and even pricing.

Proper causal machine learning revealed that the long-tailed distribution of launch success could only be explained by one phenomenon: the success components do not add up to success; they multiply, because they depend on each other.

One failure in one discipline, and you are out.
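To make the difference between “adding up” and “multiplying” concrete, here is a minimal Python sketch. The component names and scores are made up for illustration; the point is only that one weak link drags a multiplicative score toward zero while an additive average hides it.

```python
import numpy as np

# Hypothetical component scores for two product launches, scaled 0..1
# (purchase intent, distribution, brand strength, shopper marketing, price acceptance)
launch_a = np.array([0.9, 0.8, 0.9, 0.8, 0.9])   # strong across the board
launch_b = np.array([0.9, 0.2, 0.9, 0.8, 0.9])   # one weak link: distribution

def additive_score(components):
    # Naive view: components add up, so an average summarizes success potential
    return components.mean()

def multiplicative_score(components):
    # View from the text: components depend on each other and multiply,
    # so a single failure drags the whole launch down
    return components.prod()

for name, c in [("A", launch_a), ("B", launch_b)]:
    print(f"Launch {name}: additive={additive_score(c):.2f}, "
          f"multiplicative={multiplicative_score(c):.2f}")

# Prints: Launch A: additive=0.86, multiplicative=0.47
#         Launch B: additive=0.74, multiplicative=0.12
# The additive score barely separates A and B; the multiplicative one does.
```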

With the data available today, future blockbuster products can be predicted with an 80 to 90 percent hit rate.

Not only that. Causal machine learning can give hints on how to architect those blockbusters.

How exactly? More on that below.

FAIL #4 – Gut Over Rigor

👉 FAILS IN PRICING

Pricing today relies on either gut or rigor. But it takes both to make a difference.

Most common is an explicit question to the customer (namely the Van Westendorp or Gabor-Granger approach). It delivers a plausible indicator of an optimal price range. But plausibility is not a good validation of truth.

It neither provides a price-demand curve nor considers margins. As such, it is useless. It speaks to the gut of the decision-maker but fails to deliver rigor and validity.

The opposite is true for Conjoint Measurement. Based on multiple complex choice tasks, an algorithm can derive the utility of each product feature. Based on a market simulation, it then produces a price demand curve.

The approach falls flat for very different reasons. Mostly, the modeling assumptions turn out to be unrealistic. Consumers often choose a “wait & see” option that conjoint models typically leave out, and the models therefore vastly overestimate market demand.

The other downside: conjoint is so complex and costly that it is only applied to handpicked products.

Every sizable car manufacturer makes billions in revenue from parts that have never been empirically priced. The same is true for most consumer brands with many SKUs.

How to fix this? More below on a solution called Price.AI.


FAIL #5 – Linear Mindset Instead Of Causal Thinking

👉 EXAMPLE: SALES, MARKETING, MEDIA MIX MODELING

Large brands spend huge marketing budgets and so want to know where to invest.

This question focuses mostly on the short-term impact of advertising, but the bulk of the impact is due to brand building and can only be found in the long run.

Ads of strong brands show larger short-term impacts, no matter where and how you advertise. Ads of weak brands can show no short- or medium-term impact but a huge long-term impact through building brand equity.

In truth, long and short-term effects must be modeled in one go to avoid misattribution. 

On top of this, the world is even more complicated. Complexity is not managed by simply hiring the market-leading MMM vendor; it is managed by involving the vendors with the best technology.

An example? A drug brand had product sampling as one promotion channel. Every legacy model (even those that capture nonlinearity) found “the more, the better”. Only a properly flexible method could see that, of course, too many free samples end up substituting for prescriptions of the product.

Channels also amplify each other; some only complement each other. Mathematicians call these effects “interactions”. As those interactions are unknown and invisible, it takes flexible learning machines to unearth them.

How to fix this? More on that below.

 

👉 EXAMPLE: PRICE PROMOTION EFFECT

It is a long-known trap. Still, most companies fall for it.

Price promotions clearly show sales uplifts. That’s a fact.

This sales uplift is ALWAYS a composition of:

  1. Customers who bought because of this promotion
  2. Customers who would have purchased anyway, but later and at a higher price
  3. Customers who would have bought earlier but waited because they knew the promotion was coming (the Black Friday effect)

If you do not quantify all three, you do NOT understand whether or not the price promotion made sense.

How to fix this? More on that below.

 

👉 EXAMPLE: BRAND POSITIONING INSIGHTS

My last example of the negative impact of a linear mindset, as opposed to causal thinking, is T-Mobile USA.

In 2013, the brand had a huge relaunch in the USA, attacking its much larger competitors AT&T and Verizon. It worked, but T-Mobile did not know why.

Each feature they introduced could even be easily copied.

A revolutionary methodology found something hidden: the features were not the direct causal reason for the success; they were the perfect supporting evidence for T-Mobile’s Robin Hood story. This story (being the Uncarrier) was what attracted people.

Fast forward: over time, further features were implemented to nurture this winning positioning.

The impact has become world-famous. Today, T-Mobile is on par with AT&T and Verizon, with exceptional profitability and a market valuation up more than 600%, while AT&T’s declined.

How do you extract your winning market factors? More on that below.

The Solution: A Causal Insight Mindset

The solution to all those examples is not simply “better tech”. It takes awareness of the problem to see what “better” even means.

With this, the ultimate challenge in enterprises is cultivating an ongoing discussion on causality and how to read the truth from data.

Everyone believes they can read the truth by looking at data. Yet our gut fools us most of the time, and we do nothing about it.

A company that cultivates a mindset of humility and awareness of the art it takes to read the truth from data will be able to single out the best tech.

It takes leadership and education to make such a culture happen.

The education piece is obvious. Every manager needs to learn the 101 of extracting the truth from data. Such training builds on simple insights:

Every business decision builds on this CAUSAL assumption: Action X will lead to Outcome Y. 

As such, we MUST apply “causal analysis”: either controlled experiments or causal modeling.

Period.

Causal Modeling In Action

Let’s review the approaches for learning about causal impacts:

  • Comparing facts (e.g., males earn 20% more than females) – Is gender truly driving the income difference? Maybe, maybe not. This is a binary correlation analysis and has the same drawback as standard correlation analysis: spurious correlations.

  • Correlation – neglects all other factors that could be the reason for the outcome

  • Regression – now considers other factors but fails to model indirect, nonlinear, and interaction effects (see the sketch after this list)

  • SEM/PLS – now also considers indirect effects but fails to model nonlinear and interaction effects. On top of this, it fails to provide exploration features, something elementary for business applications.

  • Bayesian Nets – now explores causal directions too, but fails to model nonlinear and interaction effects.

  • USM (Causal Machine Learning) – now is the most complete framework for business applications (available as the software NEUSREL)
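To make the last two distinctions tangible, here is a minimal sketch using generic scikit-learn tools (not USM or NEUSREL). Two hypothetical drivers only create sales in combination; a linear regression cannot represent that interaction unless it is specified by hand, while a flexible learner picks it up from the data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 5000

# Hypothetical drivers: ad pressure and shelf presence (distribution), each scaled 0..1
ad = rng.uniform(0, 1, n)
distribution = rng.uniform(0, 1, n)

# Toy ground truth with an interaction: advertising only pays off
# where the product is actually on the shelf
sales = 10 * ad * (distribution > 0.5) + rng.normal(0, 0.5, n)

X = np.column_stack([ad, distribution])
X_tr, X_te, y_tr, y_te = train_test_split(X, sales, random_state=0)

linear = LinearRegression().fit(X_tr, y_tr)
flexible = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("Linear regression R^2:", round(r2_score(y_te, linear.predict(X_te)), 2))
print("Flexible learner  R^2:", round(r2_score(y_te, flexible.predict(X_te)), 2))
# The linear model misses a sizable share of the variance because it cannot
# represent the ad x distribution interaction unless someone specifies it by hand.
```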

Here is how USM and Causal Machine Learning can help your business to compete on insights.

Causal Machine Learning In Action

👉 EXAMPLE: CUSTOMER EXPERIENCE INSIGHTS

Every company has it – customer feedback like an NPS rating or stars on Amazon. Most then ask an open-ended question about the why. That’s all you need.

First, make sure to categorize the feedback into the topics mentioned, as granularly as possible. NLP deep-learning systems can help to scale this.

Then Causal Machine Learning can unfold its magic. The categorized feedback enters the model as binary variables. Text AI also produces sentiment information that captures the overall tone of the language. In addition, context information can serve as a further set of predictors.

Causal Machine Learning can also take care of so-called intermediary variables. Besides the sentiment, a category like “great service” is such an intermediary variable, as it is driven by more specific ones like “friendliness”.

The model can then find out that friendliness is the key behind “great service”. A conventional driver analysis would have completely missed the importance of friendliness because the categories are not independent.
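Here is a minimal sketch of the intermediary-variable logic, using plain linear models instead of USM/NEUSREL. The variables (“friendliness”, “great service”, NPS) and effect sizes are invented; the point is that a flat driver analysis credits the broad category while a structured, two-stage view recovers the total effect of the specific driver behind it.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 3000

# Hypothetical causal chain: friendliness -> "great service" mentions -> NPS
friendliness = rng.binomial(1, 0.4, n)                        # specific topic mentioned?
great_service = ((0.8 * friendliness                          # mostly driven by friendliness
                  + rng.normal(0, 0.3, n)) > 0.5).astype(float)
nps = 6 + 3 * great_service + rng.normal(0, 1, n)             # NPS reacts to the broad category

# Flat driver analysis: both topics thrown into one regression
flat = LinearRegression().fit(np.column_stack([friendliness, great_service]), nps)
print("Flat model weights (friendliness, great service):", flat.coef_.round(2))

# Structured view: effect of friendliness on the intermediary, times its effect on NPS
stage1 = LinearRegression().fit(friendliness.reshape(-1, 1), great_service)
stage2 = LinearRegression().fit(great_service.reshape(-1, 1), nps)
total_effect = stage1.coef_[0] * stage2.coef_[0]
print("Total effect of friendliness via 'great service':", round(total_effect, 2))
# The flat model assigns friendliness a near-zero weight because the intermediary
# already absorbs it; the two-stage view shows friendliness is the real lever.
```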

On average, Causal Machine Learning doubles the explanatory power of conventional driver analysis. This means it reduces the risks of wrong decisions by 50%.

CX-AI.com is a solution that leverages Causal Machine Learning and provides CX and financial impact predictions as well as an ROI decision-making framework.

 

👉 EXAMPLE: CREATIVE INSIGHTS

To understand how a commercial will succeed, it is not enough to measure how well it performs (this is the focus of copy testing today). You also need to measure what it does.

In a large syndicated study, we annotated (categorized) over 600 spots across 6 product categories to describe what the TV spots are actually doing.

Do they use a spokesperson? Do they use a problem-solution framework? Do they use a song that corresponds to the message? We coded the technical properties of each spot.

Then we coded the emotional message each spot was sending. Each spot can be categorized into one of dozens of topics like “it tastes good”, “it can be trusted”, “good for the family”, etc.

This data is then merged with copy-testing data. With this combined data, Causal Machine Learning can understand which tactics and which emotional messages work in your category.

We called the approach Causal.AI. It cannot invent the actual creative concept. But it gives clear guardrails about which strategies will work and which won’t.
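As a rough sketch of the data layout this works on (made-up column names, a generic gradient-boosting model, and only a handful of illustrative rows instead of the 600+ annotated spots):

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical, hand-made rows: one row per annotated spot, already merged
# with its copy-test score. A real table would hold hundreds of spots.
spots = pd.DataFrame({
    "spokesperson":     [1, 0, 1, 0, 1, 0],
    "problem_solution": [1, 1, 0, 0, 1, 0],
    "message_topic":    ["tastes_good", "trust", "family", "trust", "family", "tastes_good"],
    "persuasion_score": [62, 55, 71, 50, 74, 58],
})

# One-hot encode the message topic so tactics and messages sit in one feature table
X = pd.get_dummies(spots.drop(columns="persuasion_score"), columns=["message_topic"])
y = spots["persuasion_score"]

# With a realistic sample, a flexible model can then show which tactics and
# messages move persuasion, and how they differ by product category
model = GradientBoostingRegressor(random_state=0).fit(X, y)
print(dict(zip(X.columns, model.feature_importances_.round(2))))
```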

 

👉 EXAMPLE: NEW PRODUCT INSIGHTS

When launching a new product, much can go wrong. Distribution, brand, packaging, promotion, first product experience, pricing – all of this needs to be good enough. It’s a success chain in which the weakest link determines winning or losing.

Each step on its own, as well as the chain as a whole, is an application for causal machine learning.

Before this, typically, you want to test a product concept and learn WHY it is not crushing the crowd. 

Test the concept with an implicit response measure, and then get feedback on the classical eight dimensions of product adoption. This tells you what consumers think about the product but not (yet) why they don’t buy it.

It takes a causal machine learning model to measure how important each of those dimensions is.

We ran the process for a new speaker concept. We learned that the most crucial thing for marketers to focus on was communicating why it was different from the competition (the uniqueness dimension).

Each product has its topics. It could be ease of use, appeal, utility, certainty, trust, or compatibility with the consumers’ lives.

Applying USM (causal machine learning) is essential to translate data into predictive insights that work.

 

👉 EXAMPLE: PRICING

Price.AI is a methodology that borrows methods from psychology to measure unconscious attitudes at lightning speed.

It bypasses the conscious mind by measuring reaction times on whether the shown price is fair, risky, or attractive, or whether the respondent “wants to buy”, and so forth. An AI is then trained to predict the willingness to buy.

This AI helps to capture each attribute’s nonlinear link to purchase and lowers the required sample size.

In the end, the method delivers an accurate price-demand function. It can be obtained in an automated process with as few as 50 respondents. As such, pricing becomes not only precise but also scalable.
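A minimal sketch of the final step: fitting a price-demand function from binary “would buy” responses and reading off a margin-optimal price. It uses a plain logistic regression instead of the reaction-time AI described above, and all numbers, including the unit cost, are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Hypothetical responses: each respondent sees a price and indicates willingness to buy
prices = rng.uniform(5, 25, 300)
true_prob = 1 / (1 + np.exp(0.45 * (prices - 14)))        # hidden ground truth for the demo
would_buy = rng.binomial(1, true_prob)

# Fit a smooth price-demand function from the binary answers
model = LogisticRegression().fit(prices.reshape(-1, 1), would_buy)

grid = np.linspace(5, 25, 201)
demand = model.predict_proba(grid.reshape(-1, 1))[:, 1]   # share willing to buy at each price

unit_cost = 8.0                                           # assumed cost, needed for margins
expected_margin = (grid - unit_cost) * demand
best = grid[np.argmax(expected_margin)]
print(f"Margin-optimal price in this toy setup: {best:.2f}")
```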

 

👉 EXAMPLE: SALES, MARKETING, MEDIA MIX MODELING

An MMM model based on causal machine learning solves all the problems mentioned above.

It automatically models channel interactions and nonlinear effects, especially those nobody is aware of.

Most importantly, it considers indirect effects. The brand-building effect is an indirect causal effect, which is why any MMM model should include indicators of brand strength.

It also considers the biggest context and confounding factor: the creative quality. There is no ad impact if the ad is bad, no matter how much money you pour into the channel.

You don’t have data for that? If you do copy testing, you do. Nowadays, you can even buy such information or train a deep-learning AI to predict it.
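Here is a minimal sketch of what such a model’s input table and fit could look like. The column names, the weekly granularity, and the generic gradient-boosting learner are all illustrative assumptions, not the setup of any particular vendor.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
weeks = 156  # three years of weekly data (illustrative)

# Hypothetical weekly inputs the text argues an MMM needs
data = pd.DataFrame({
    "tv_spend":        rng.gamma(2.0, 50, weeks),
    "digital_spend":   rng.gamma(2.0, 30, weeks),
    "sampling_volume": rng.gamma(2.0, 10, weeks),
    "brand_strength":  rng.normal(50, 5, weeks),   # tracking-based brand indicator
    "creative_score":  rng.uniform(0, 1, weeks),   # from copy testing or a predictive AI
})

# Toy ground truth with the patterns described above: diminishing returns,
# a channel-brand interaction, creative quality gating TV impact, and sampling
# that eventually cannibalizes sales
sales = (
    200
    + 8 * np.sqrt(data["tv_spend"]) * data["creative_score"]
    + 5 * np.sqrt(data["digital_spend"]) * (1 + 0.01 * data["brand_strength"])
    + 4 * data["sampling_volume"] - 0.15 * data["sampling_volume"] ** 2
    + rng.normal(0, 20, weeks)
)

# A flexible learner can pick these shapes up without them being specified by hand;
# channel contributions are then read off via what-if simulations on the fitted model.
model = GradientBoostingRegressor(random_state=0).fit(data, sales)
baseline = model.predict(data).sum()
no_tv = data.assign(tv_spend=0.0)
print("Estimated TV contribution over the period:",
      round(baseline - model.predict(no_tv).sum(), 1))
```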

These are 5 questions you should challenge your MMM vendor with:

https://www.success-drivers.com/what-can-marketing-mix-modelling-provide/

 

👉 EXAMPLE: PRICE PROMOTION EFFECT

Understanding the impact of price promotion is a natural outcome of a holistic sales model. 

Causal machine learning makes holistic models easy while adding predictive power at the same time.

Conceptually, sales must be modeled as an outcome of the current price (= price effect), past prices (= early-purchase effect), anticipated future prices (= promotion-anticipation effect), and all other circumstances.

If the price of the future or of the past predicts today’s sales, we have proof that purchases were merely shifted in time due to pricing.
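A minimal sketch of that idea: include the current, past, and (anticipated) future price as predictors of weekly sales, and read the early-purchase and promotion-anticipation effects from the lag and lead terms. The data and coefficients are synthetic, and a plain regression stands in for the flexible causal learner discussed above.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
weeks = 208

# Hypothetical weekly price series with occasional promotions
base_price = 10.0
promo = rng.binomial(1, 0.15, weeks)
price = base_price - 2.0 * promo

df = pd.DataFrame({"price": price})
df["price_lag1"]  = df["price"].shift(1)    # last week's price  -> early-purchase effect
df["price_lead1"] = df["price"].shift(-1)   # next week's price  -> promotion-anticipation effect

# Toy ground truth: sales react to today's price, but shoppers also stocked up
# during last week's promotion and hold back if next week's promotion is anticipated
df["sales"] = (
    100
    - 6 * df["price"]
    + 3 * df["price_lag1"]     # cheap last week -> fewer sales now (pantry loading)
    + 2 * df["price_lead1"]    # cheap next week -> fewer sales now (waiting)
    + rng.normal(0, 3, weeks)
)
df = df.dropna()

X = df[["price", "price_lag1", "price_lead1"]]
model = LinearRegression().fit(X, df["sales"])
print(dict(zip(X.columns, model.coef_.round(2))))
# Sizable lag/lead coefficients are the evidence that part of the promotion
# uplift is merely demand shifted in time, not incremental volume.
```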

 

👉 EXAMPLE: BRAND POSITIONING INSIGHTS

Brand positioning is a vast field. Depending on the approach, you may actually measure different things. 

No matter what you are measuring, these data can be grouped into final outcomes (e.g., purchase intention), intermediate outcomes (e.g., consideration, awareness, liking, etc.), drivers (image items, features, feature perceptions, etc.), and context (demographics, product usage, psychography, etc.).

Based on marketing science, the causal directions between variables are known for 95% of the paths. Causal-direction tests can settle the rest. This structure guides the model building.

Causal Machine Learning then does the legwork. 
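As a sketch of how such a structure can be written down before any estimation happens, here is a hypothetical specification of variable groups and permitted causal directions; the layer and variable names are illustrative and not taken from the T-Mobile model.

```python
# Hypothetical model specification: which variable plays which role, and which
# causal directions are allowed. A causal machine learning tool then estimates
# the (possibly nonlinear, interacting) strength of each permitted path.
model_spec = {
    "context":       ["age", "usage_intensity", "price_sensitivity"],
    "drivers":       ["perceived_fairness", "network_quality", "brand_story_fit"],
    "intermediate":  ["awareness", "consideration", "liking"],
    "final_outcome": ["purchase_intention"],
}

# Allowed edges: context and drivers feed the intermediate outcomes, intermediates
# feed the final outcome; the few unclear directions would be settled by
# causal-direction tests before estimation.
allowed_edges = [
    (src, dst)
    for src_group, dst_group in [
        ("context", "intermediate"),
        ("drivers", "intermediate"),
        ("intermediate", "final_outcome"),
        ("drivers", "final_outcome"),   # direct effects are also permitted
    ]
    for src in model_spec[src_group]
    for dst in model_spec[dst_group]
]

print(f"{len(allowed_edges)} permitted causal paths to estimate")
```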

The full details of the T-Mobile case can be found here.

Winning With Better Insights

No matter what you do in marketing and sales, if your assumptions and insights about the customers are biased ….

….all your work, strategies, tactics, implementation work, and ad spending will be wasted.

This is why there is nothing more important than getting insights right from the start.

The most common misconceptions and misbeliefs are these 5 fails:

 

FAIL #1 – BELIEVING WHAT CUSTOMERS SAY 

FAIL #2 – FOCUSING ON OUTCOMES

FAIL #3 – OVER-SIMPLIFICATION 

FAIL #4 – GUT OVER RIGOR 

FAIL #5 – LINEAR MINDSET INSTEAD OF CAUSAL THINKING 

 

There is an emerging technology that is readily available and already intensively tested. It provides a solution to those challenges: Causal Machine Learning, Causal AI, or USM.

It requires a causal mindset. It requires you to understand that everything relevant that decision-makers are looking for is a causal insight.

What are your thoughts on this? 

Do you want to engage in an exchange? Reach out, and let’s meet on a virtual coffee chat: book your spot here.

Cheers, 

Frank