
Great, We Have Improved … or Not?

Many companies spend considerable amounts of money on customer surveys every year. Customer survey results are used to amend strategies, design new products and services, focus improvement activities and … to celebrate success. Since the impact of customer survey results can be substantial, the data driving important decisions should be trustworthy. The question is: Can we always rely on what we see?
A life insurance company – let us call them MyInsurance – with world-wide market reach was celebrating its success in improving customer satisfaction in 2006. They proudly presented the results: “In Thailand we have achieved 58% satisfied customers, compared to only 54% in 2005.” This sounds good, right? In a market with millions of consumers, an increase in satisfaction of four percentage points would mean that the number of customers who would happily buy from MyInsurance again has grown by tens of thousands.
Such a conclusion may be premature. Why? For obvious reasons, MyInsurance did not actually ask millions of customers for their opinion; they gathered the opinions of only 280 customers. This is called sampling, and it is applied in every kind of company, in every industry, many times a day.

Sampling

Sampling is based on a comparatively small number of customers, called the “Sample”, which is used to draw conclusions about the “Population” – in this case, the entire pool of customers whose opinion we are interested in. Sampling has a huge advantage: it saves money and time, and it is especially useful when it is nearly impossible to collect data from the whole population, or when testing destroys the object, as in drop testing of mobile phones. This advantage comes at a price: a “Margin of Error”, expressed as a “Confidence Interval”.

Margin of Error – Confidence Interval

The Confidence Interval is the range in which we expect the population value to lie. Since we take a sample, we can only estimate what the “real” value is – in sampling, we never know it exactly. This Confidence Interval cannot be avoided, even with a perfectly representative sample under “ideal conditions”. It can, however, be narrowed by increasing the sample size or by decreasing the variance in the population. The latter is usually not possible. Hence, the only practical choice is to determine the minimum sample size for the Confidence Interval one is willing to accept.
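
To make this concrete: for a proportion p observed in a sample of size n, the 95% margin of error is commonly approximated as 1.96 · sqrt(p(1 − p)/n). The short Python sketch below – my own illustration, not part of the original survey analysis – shows how the interval narrows as the sample grows:

    import math

    def margin_of_error(p, n, z=1.96):
        # Half-width of the confidence interval for a proportion p observed
        # in a sample of size n; z = 1.96 corresponds to 95% confidence.
        return z * math.sqrt(p * (1 - p) / n)

    for n in (100, 280, 1000, 2500):
        print(f"n = {n:4d}: 54% +/- {margin_of_error(0.54, n):.1%}")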

Sampling M&Ms

A very simple experiment will help you understand what sampling means:
Buy one 200g package of chocolate M&Ms. Open the package and count the total number of M&Ms – the population. In my case this was 233. Now count the number of yellow M&Ms; in my experiment this was 43, which means 18.5% of my population is yellow.
Sampling means taking a small number of M&Ms out of the population in a representative way. I poured my M&Ms into a bowl. After some shaking and stirring, I looked away and blindly counted out a sample of 20 M&Ms. The first sample gave me no yellow at all. I put the sample back into the population and drew a new sample of 20, which revealed 4 yellow M&Ms. Eight more samples gave me 2, 3, 3, 6, 3, 5, 4 and 3 yellow M&Ms, respectively.
Doing the math, my samples suggest that the population has 0%, 20%, 10%, 15%, 15%, 30%, 15%, 25%, 20% and 15% yellow, respectively. Which sample is correct? None of them. Each sample gives only an indication of the real percentage of yellow in the population.
Sampling results vary even though the population is untouched. Drawing conclusions based on this variation may result in expensive mistakes.
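
If you do not want to buy chocolate, you can simulate the experiment. The Python sketch below – a simulation of my package, with its counts hard-coded – draws ten blind samples of 20 from a population of 233 M&Ms containing 43 yellow ones:

    import random

    random.seed(1)                     # fix the seed so the run is reproducible
    population = [1] * 43 + [0] * 190  # 1 = yellow, 0 = other colour; 233 in total

    for i in range(10):
        sample = random.sample(population, 20)  # draw 20 blindly, without replacement
        yellow = sum(sample)
        print(f"sample {i + 1:2d}: {yellow} yellow -> {yellow / 20:.0%}")

Every run produces a different sequence of counts, just as the physical experiment does.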

What does this mean in the case of MyInsurance? With some simple statistics, we can calculate the Confidence Interval for our samples based on the sample size of 280:
In 2005, the “real” customer satisfaction level was somewhere between 48% and 60%. In 2006, it was somewhere between 52% and 64%. So, can we still conclude that we have improved? We cannot!
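
These intervals follow from the margin-of-error approximation shown earlier; the article does not state a confidence level, so I assume the usual 95%. A quick check in Python:

    import math

    for year, p in ((2005, 0.54), (2006, 0.58)):
        e = 1.96 * math.sqrt(p * (1 - p) / 280)  # margin of error for n = 280
        print(f"{year}: {p:.0%} +/- {e:.1%} -> {p - e:.0%} to {p + e:.0%}")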
If MyInsurance wishes to distinguish between a customer satisfaction level of 54% and one of 58%, the confidence intervals around 54% and 58% must not overlap. If they overlap, the two results cannot be told apart. Hence, both intervals need to be +/- 2% or narrower.
Estimating the sample size for this requirement shows that nearly 2,500 customers would have to be included in the satisfaction survey each year. With the 280 customers actually sampled, it may well be that there was no change at all – or, even worse, a decrease in customer satisfaction. We will not know until we have more data to give us a better result.
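
For reference, the minimum sample size for a desired margin of error e can be estimated by solving the same approximation for n, giving n = z² · p(1 − p)/e². Under my 95% assumption this yields roughly 2,400 respondents, the same order of magnitude as the figure above; the exact number depends on the confidence level and the proportion assumed:

    import math

    def min_sample_size(e, p=0.5, z=1.96):
        # Smallest n for which the margin of error is at most e;
        # p = 0.5 is the conservative worst case for a proportion.
        return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

    print(min_sample_size(0.02))           # worst case p = 0.5 -> 2401
    print(min_sample_size(0.02, p=0.54))   # using the 2005 level -> 2386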
Unfortunately, in our example MyInsurance has no sound reason to celebrate an increase in customer satisfaction. The assumption that satisfaction has improved could be entirely wrong.

Conclusion

Very often, important decisions are based on means derived from small samples of data. Sometimes these small samples are poorly collected or show large variation. In our daily professional and personal lives we usually do not care much about variation; what seems to matter most is the average, the mean. The mean is easy to calculate and everyone understands what it stands for. Yet every mean coming out of a sample is only correct for that sample – it is “wrong” for the population we are trying to make a decision about.

Management would take a great leap forward in decision making by changing the way they look at data: Don’t trust the yield figure reported for your production line – ask for its confidence interval. Don’t make an investment decision based on a small sample of data – ask for the minimum improvement the investment will deliver.
Don’t trust means; they are lies.