
Chapter 1

The Paradigm That Holds Us Back

Dr. Frank Buckler

Founder, Success Drivers

It was the summer of 1998, and I was about to graduate with a degree in electrical engineering and marketing. I had received a coveted invitation letter from McKinsey, and it made my heart leap: a one-day talent event where we would get to know the company through case studies. At the last minute, I bought myself an expensive yellow tie to go with my green suit. It was probably that color choice that disqualified me as unsuitable for McKinsey. But what I remember from the event are the case studies.

The room was full of “high performers” from all disciplines. Everyone was handed a case study description, and we had 15 minutes to solve it. Specifically, the case described how many “MJ” the consultancy had available to help a company. “MJ” was a German abbreviation consultants used for “man-years” of work.

Then came the bombshell. Mark was the first to present his result. He was a budding physicist and opened with his solution: “According to my calculations, we have gained 23 million watts.” We all sat there open-mouthed, including the McKinsey consultants. “What is he talking about?” hung in the air above our heads. It turned out that the case study abbreviated “man-years” as MJ, and that MJ stands for megajoules in physics. The eager Mark had taken the DATA for what he thought it was … and produced nonsense. He himself apparently found the result plausible; after all, he was the first to present it.

“Data is gold” is a frequently used metaphor. Mark couldn’t do anything with the “gold”. Why not? Because the metaphor is misleading.

Data is more like atoms. Humans consist of 99 percent carbon, hydrogen and oxygen atoms. If we were to place these atoms, neatly separated, in three piles, it would be impossible to build a human being from them.

Data are building blocks, but the magic is not in the blocks themselves. The magic is in the MEANING of the data. That is what this book is about, and what it implies for the marketing management of companies.

The magic of data analysis lies in the context that gives the data meaning. The prevailing understanding in marketing management, however, looks more like this:

  • Collect lots of data – based on the belief that the more “data” you have, the more “knowledge” you can gain from it.

  • Hand the data to the analysis nerds and the PhDs – they’re the ones with the glasses. Down in the basement where they sit, they will run lots of calculations on it and produce insights.

  • Then you look at the results, and if they are not plausible, you send the geeks back to the basement.

This understanding is not only one-sided, but dangerous. With this book, I would like to contribute to a more enlightened and effective use of data for business decisions.

Architects, Not Just Construction Workers

Let me illustrate this with a specific example. Many years ago, I was invited by a coffee brand that sells its products through direct sales. They were looking for a data analytics consultant to help them with customer churn. The Head of Customer Relationship Management got straight to the point with his first slide: “Here’s our data. Your job is to find out for us who will be churning.”

Over five steps, it became clear that the task had been formulated somewhat shortsightedly.

The very first question, one the data alone could not answer, was: “What is a churned customer?” The brand did not have a subscription model; its customers merely showed irregular ordering behavior. We therefore decided to predict the time until the next order rather than “churn”, as sketched below.
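
To make this concrete, here is a minimal sketch of how such a regression target can be built from a transaction log. The table and column names (customer_id, order_date) are hypothetical placeholders, not taken from the engagement itself:

```python
import pandas as pd

# Hypothetical transaction log: one row per order.
orders = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2],
    "order_date": pd.to_datetime(
        ["2023-01-05", "2023-02-10", "2023-04-01", "2023-01-20", "2023-03-15"]
    ),
})

# Target: days until the customer's next order - a continuous value,
# instead of a binary "churned yes/no" label that is never observed directly.
orders = orders.sort_values(["customer_id", "order_date"])
orders["days_to_next_order"] = (
    orders.groupby("customer_id")["order_date"].shift(-1) - orders["order_date"]
).dt.days

print(orders)  # the last order per customer stays NaN: censored, not "churned"
```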

The next question to be answered was which data should be used as input. Of course, you could simply take “all” the data that belongs to a customer. But a customer sometimes has a long history.

How relevant is purchase data from three years ago for today’s customer churn? It turns out that a “common sense” preselection gets you better results faster.

Then there is data that describes the context or situation and does not belong directly to the customer. Seasonality, weather, advertising activities and much more can influence customer behavior. Much of this data can be obtained with limited effort, but it is not contained in the data set described as “huge”. Again, it is the domain understanding, not the analytical expertise, that makes the analysis successful.

But the story goes even further. With the “right” input data and the appropriate target variable (time until the next order), we were now able to train a state-of-the-art machine learning model that estimated the order time with astonishing accuracy.

Nevertheless, it became clear that this approach was still largely missing the mark. It was, of course, far more important for the coffee brand not to lose its high-revenue customers. In this respect, the objective “predict the next order” was not aligned with the actual goal of “avoiding lost sales”.

One way of dealing with this is to weight the data records by customer value during the machine learning process; this requires a machine learning method that supports such weighting, as the sketch below shows.
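
Here is a minimal sketch of this idea, assuming scikit-learn and synthetic placeholder data; the real engagement used its own feature set:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                              # placeholder customer features
y = rng.exponential(scale=30, size=500)                    # days until next order
customer_value = rng.lognormal(mean=4, sigma=1, size=500)  # e.g. yearly revenue in euros

model = GradientBoostingRegressor()
# Weighting each record by customer value makes the training loss care more
# about errors on high-revenue customers - aligning the algorithm with the
# business goal of avoiding *revenue* loss, not just any churn.
model.fit(X, y, sample_weight=customer_value)
```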

Here, too, an understanding of the content of the actual objectives is necessary in order to align the analysis method with them. This understanding is not present in the data.

But the case study does not end there.

The machine learning model now predicts the time until the next order. Because customers with higher revenue carry greater weight, the forecasts for low-revenue customers have a higher error than those for high-revenue customers.

What do we do with this forecast? We can filter out the customers with a very high “time to next order” value and reward them with customer loyalty measures. But what threshold should we use? Again, a question that the data does not answer.

Looking at the two errors (the so-called alpha and beta errors) quickly makes this clear; Figure 1 illustrates it. It will happen that, based on the threshold value, the forecast flags a customer as a “churner” who is not actually going to churn. This is a classic “false alarm”. The cost of this error is a customer retention campaign that was not necessary for this person. In the case of the coffee brand, that cost was 20 euros.

The second error is that a customer is not selected by the forecast, even though he will churn. The cost of this error is the customer value of the lost customer multiplied by the probability of success of the loyalty measure. For the coffee brand, this was an average of 240 euros multiplied by a success probability of 30 percent, i.e. around 72 euros.

When I talk to marketing managers about churn, I often hear statements like: “We have a hit rate of 90%. That’s not bad, is it?” In fact, the hit rate is a meaningless figure.

Why is that? It’s not about how many churners and non-churners are recognized. What matters is that the expensive errors are avoided. In our case, failing to detect a churner was roughly four times as expensive as a false alarm.

We therefore chose the threshold value so that the total expected cost of both errors – and thus the opportunity cost – is minimized, as the sketch below illustrates.
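
A minimal sketch of this threshold choice, using the cost figures from above on simulated predictions (all data here is synthetic; in practice, the real model’s outputs would take its place):

```python
import numpy as np

COST_FALSE_ALARM = 20             # euros: unnecessary retention campaign (alpha error)
COST_MISSED_CHURNER = 240 * 0.30  # euros: lost customer value x campaign success rate (beta error)

rng = np.random.default_rng(1)
# Simulated model output: predicted days until the next order, plus the
# (in practice unknown) truth of whether the customer really lapses.
pred_days = rng.exponential(scale=60, size=2000)
truly_lapsed = rng.random(2000) < np.clip(pred_days / 200, 0, 1)

def expected_cost(threshold):
    flagged = pred_days > threshold  # customers who receive a loyalty measure
    false_alarms = np.sum(flagged & ~truly_lapsed)
    missed = np.sum(~flagged & truly_lapsed)
    return false_alarms * COST_FALSE_ALARM + missed * COST_MISSED_CHURNER

# Pick the threshold that minimizes expected cost - not the one
# that maximizes the hit rate.
best = min(np.arange(10, 200, 5), key=expected_cost)
print(f"cost-optimal threshold: {best} days")
```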

Let me summarize. It was essential to incorporate management’s contextual knowledge in at least five places.

  1. The task itself was wrong. The task should not be to predict which customer will churn. Rather, it is to manage customer retention measures in an ROI-optimized way. That is a huge difference.

  2. Which customer counts as churned was a question of domain substance. Here we opted for a non-binary target value; in other words, we decided against categorizing customers as “churned”, since this is never actually observed.

  3. Even the selection of data, made necessary by the long customer histories, was a question of expert judgment. It turned out that central situational information influencing success still had to be added.

  4. It turned out to be necessary to weight the analysis by customer value to ensure that the machine learning algorithm pursues the same goals as marketing management.

  5. Finally, the threshold for the churn classification and the customer retention campaign was chosen to optimize ROI and not the hit rate.

Essentially, none of this expert knowledge, which made the results successful, required data science skills.

Sometimes a picture is worth a thousand words. Perhaps you know the following situation. You go to the garage because something is squeaking on your car. The mechanic thinks the shock absorbers need replacing. “The tires should also be replaced,” he says, “and the wiper fluid needs antifreeze too.”

None of what he recommends is wrong per se. But you are only concerned about driving safety, not about the squeaking. You don’t need antifreeze in your region because you don’t want to drive in the mountains. You only drive your car occasionally, and the tires would certainly last another two years.

It’s your car, and only you know what you use it for and what you want from it. No mechanic can make those trade-offs for you.

A more controversial but all the more fitting image is that of a physician. Some people go to the doctor and blindly follow his recommendations. They take symptom-relieving medication without realizing that it is usually counterproductive for the healing process.

Every medical treatment has an alpha and a beta error. Every medical treatment has the risk of side effects (analogous to the cost of a churn measure) and the risk of disease consequences if not treated.

Who should weigh up these two errors? The doctor or the patient?

Experts tend to put up smokescreens with their profound specialist knowledge. They have an unconscious interest in underpinning their raison d’être. They know a thousand problems and reasons why simple strategies are problematic.

Data scientists are no different. That is why marketing management that takes responsibility is crucial for success.

What do you think? Would an engineer 100 years ago have come up with the idea of offering a Ford Model T that the customer couldn’t configure?

“You can have any color as long as it’s black,” Henry Ford is quoted as saying.

Only through standardization was it possible to produce a car that everyone could afford. I am sure that the engineers must have dismissed this crazy idea as absurd at the time.

Your data science engineers need your guidance, just as Henry Ford’s engineers needed his. Data is not gold. Data are building blocks, and architects need construction workers to build a beautiful house out of them.

Marketing Managers Must Take Responsibility

In addition to the myth that “data is gold”, I keep encountering another unhelpful school of thought at board level.

The “management by plausibility” principle works like this: look at the results of the data scientists, and if they are not plausible, send the “geeks” back to the basement.

I had my own personal aha moment when I was in charge of a sales team in Corporate America almost twenty years ago. Every month, I sat with the sales managers to look at the figures in the “Performance Review”.

“Why has revenue in this region slumped here?” I asked Joachim. He elaborated, offering three very plausible details about customers who had probably been the main contributors. Suddenly I realized that the data filter had accidentally been set to the previous year. I changed it, and suddenly the figures showed an increase in revenue. Joachim took a quick breath and promptly offered three very clear developments that justified these numbers.

I realized that the main part of leadership work consisted of listening to the stories of subordinates, checking them for plausibility and providing further impetus for action.

The more I thought about it, the more I realized that plausibility is not a particularly good indicator of truth. It only says something about whether a story fits in with previous beliefs. Plausible stories can be true. But there is a considerable probability that they are “absolute nonsense”.

Implausible stories can also be true. In fact, the most groundbreaking discoveries are “implausible” because they contradict previous false beliefs.

The same applies to data science. Marketing management should not see itself as a reactive controller. Rather, the expertise of the marketing department is the decisive input for successful data analysis, not just a supervisory instance.

Data alone is “nothing”. Without data, everything is nothing. Artificial intelligence can create value from data. But only if it is steered in the right direction by human expertise.

To use a metaphor, “management by plausibility” is like a mute restaurant customer. Someone chooses the food for them, and all they can do is spit it out again if they don’t like it.

The new model is the restaurant guest who chooses their own food. They do not have to be able to cook themselves, but knowledge of the ingredients is helpful. This way, the guest can not only be sure that the food tastes good; they can also ensure that it is healthy and therefore meets their expectations.

Of course, an uneducated restaurant guest who only knows fast food will hardly be able to intervene. But that’s life. You don’t have to be an expert in everything. But it helps to take responsibility for your life, your body and your relationships and to acquire a little wisdom about everything.

Marketing experts should behave in the same way. Take responsibility and ensure that data science moves in the right direction.

Data Scientists Are Not The Problem

This is not to say that data science per se is going in the wrong direction or that data scientists are “simple-minded technical idiots”. Quite the opposite. Many do a great job. Many salvage what management fails to do.

But relying on that is negligent. It’s like getting into a cab and trusting that the driver already knows where you want to go. If in doubt, the cab driver will go where he wants to go.

From Correlation To Causality

The scandals surrounding discriminatory machine learning models are another example of how crucial the framework is that you, as management, set.

A prominent example of discrimination through machine learning is a large bank’s lending system. Research showed that this system systematically disadvantaged applicants from certain ethnic minorities by offering them poorer credit terms or rejecting their applications more frequently, even if their financial situation was comparable to that of preferred groups.

Another case relates to criminal recidivism prediction software used in the US justice system. Investigations revealed that this system falsely classified Black defendants as future recidivists significantly more often than white defendants with otherwise identical criminal histories.

The reason for this discrimination is a misunderstanding that even many experienced data scientists still fall prey to. They train a machine learning model with as many descriptive features as possible (e.g. skin color) in order to predict a target feature (e.g. credit default). The problem is that the descriptive features are often interrelated and therefore correlated. For example, people with white skin color tend to have a higher salary due to other background variables.

Although salary is the actual cause of creditworthiness, many machine learning algorithms also use information about skin color. “As long as the prediction error in the data is small” is the algorithms’ motto.

However, the motto should be: “The main thing is that the information used is causal”. This motto will guide us throughout this book. It is the guiding principle of all causal AI methods. We will see how it leads not only to non-discriminatory models, but also to more stable and better models.
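
The mechanism can be shown in a few lines. The following is a purely synthetic sketch, not the bank’s actual system: default risk is caused by salary alone, but because the model only sees a noisy salary measurement, it still puts weight on the correlated group attribute.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 20_000
group = rng.integers(0, 2, n)                        # protected attribute (0/1)
salary = 3.0 + 1.5 * group + rng.normal(0, 0.8, n)   # true cause, in k euros
p_default = 1 / (1 + np.exp((salary - 3.5) / 0.4))   # default is caused by salary ONLY
default = rng.random(n) < p_default

# The model never sees true salary, only a noisy proxy (as in real credit data).
observed_salary = salary + rng.normal(0, 0.8, n)

clf = LogisticRegression().fit(np.column_stack([observed_salary, group]), default)
print("weight on observed salary:", clf.coef_[0][0])  # negative: higher salary, less default
print("weight on group:          ", clf.coef_[0][1])  # non-zero despite no causal role
```

Minimizing prediction error rewards the model for leaning on the proxy; only a causal framing rules this out.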

Lottery companies are usually convinced, based on their data analysis, that their target group is older people. This conclusion is drawn because we tend to equate correlation with causation. Conventional machine learning approaches do this too – only in a multidimensional space.

Smart managers question this. The application of Causal AI to data from a lottery company revealed a different picture. All other things being equal, older people are less likely to start playing the lottery. Nevertheless, it is a fact that older people play the lottery more.

Causal AI resolved the paradox. Over the course of the years, people get used to playing the lottery. The more you play, the more likely you are to experience the occasional win. This winning experience in turn increases customer loyalty. As with the discriminatory models, it is not a personal characteristic (age) but certain customer experiences that determine behavior.
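
A small simulation illustrates how such a paradox arises; the numbers are invented for illustration, not taken from the lottery data:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
age = rng.uniform(18, 80, n)
# Habit grows with age: years of playing accumulate over a lifetime.
years_playing = (age - 18) * rng.uniform(0, 0.6, n)
# Playing intensity: habit increases it, while age itself DEcreases it.
plays = 2.0 * years_playing - 0.5 * age + rng.normal(0, 5, n)

# Naive view: age and playing are positively correlated ...
print("correlation age vs plays:", np.corrcoef(age, plays)[0, 1])

# ... but controlling for the habit variable reveals the negative direct effect.
X = np.column_stack([np.ones(n), age, years_playing])
beta, *_ = np.linalg.lstsq(X, plays, rcond=None)
print("direct age effect:", beta[1])   # approx. -0.5
print("habit effect:     ", beta[2])   # approx. +2.0
```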

The consequences of this realization for management could not be greater. The target group is the young, not the old. The aim is to habituate customers, not to boost casual play by promoting jackpots. Frequent winning experiences keep players engaged, while jackpot promotions lead customers to play less often and wait until the pot has filled up again.

The other day I wanted to play soccer outside with my two children. I looked out onto the terrace. It was wet. “Boys, we can’t play, it’s raining,” I said. My boys were not happy with my announcement and stormed out onto the terrace. “It’s not raining,” they said.

Indeed. I had behaved like many data scientists and many managers: I had confused correlation with causality. The terrace was still wet from the morning’s rain. But by now, not only had it stopped raining; the cloud cover had even cleared.

Similarly, conventional machine learning algorithms like to use surrogate information to predict target values. In practice, this approach works only by chance.

What do we learn from this?

On my way to the office in the morning, I usually stop at an Italian coffee cart and strike up a conversation with other guests. When asked “What do you do for a living?”, I usually answer “My company extracts insights from data using artificial intelligence”. Most people then say that I work in the “IT industry”, which always confuses me.

The situation is similar with data analysis in companies. When it comes to analyzing data, you either think of computer scientists or data scientists. But let’s be honest. What job these days doesn’t involve data? What job can do without computers? If my grandfather were still alive, he would describe my job as “something to do with computers”.

My appeal in this book is to understand the profession of the marketing manager in such a way that it includes responsibility for analyzing the data and managing its outcomes. To that end, I would like to show you what to think about in order to make effective, data-based decisions.

You don’t need to have studied medicine to become an empowered patient. It’s enough to internalize a few guidelines and ask the right questions. For example, when I meet my physician, I ask her:

  • Does this medication combat the cause or the symptom?

  • How do the side effects (alpha error) compare with the possible consequences of not taking the medication (beta error)?

  • What are the consequences if I postpone the decision (wait-and-see strategy)?

  • How would you decide if you were me?

I inform my doctors about my goals (e.g. avoiding suffering, or accepting suffering if it benefits my health in the long term). It turns out that many doctors deal well with empowered patients.

The same can happen with data scientists. “When all you have is a hammer, every problem looks like a nail.” If your requirements don’t fit into the in-house data scientist’s toolbox, there may be a need to talk.

Some doctors use a lot of technical terms and a lot of Latin. A data scientist could try to get rid of you just as eloquently. That’s why rule number one applies: if something is not understandable, just ask “stupid” questions.

Another analogy from the world of medicine should encourage you to do so. In medical school and in medical practice, 99 percent of the focus is on curing acute and serious illnesses. That is what the discipline’s great expertise is built for. When it comes to maintaining health and physical resilience, however, it has blind spots.

So don’t assume that a data scientist will gain the best possible insights from your data just because they have a doctorate in this field.

I therefore repeat: for you as a marketing manager, data analysis is YOUR job. Data analysis today means nothing other than “learning from experience”. What you learn in your field should be important to you. You should not leave it to a black box.

Key Learning

  • Marketing expertise is the key design component of data analysis.
  • Equating correlation and causality is the cardinal error that unites management and data science to this day.