The classic, best-known information-gathering technique: the satisfaction survey. We’ve all done them – we’ve bought or used something and then immediately afterwards received an e-mail or a call from the product’s creator, inviting us to share how pleased (or displeased) we are. The satisfaction survey will likely remain the heart and soul of customer care – but what can we really get out of it?
One of Europe’s premier vacation firms came to us with their satisfaction survey: a classic setup asking customers to rate their holiday experiences and evaluate the different services they used. They rely on this type of survey to keep tabs on global customer satisfaction and to evaluate the effects of their marketing campaigns and other customer-facing actions. In the past, we’ve helped them develop predictive models for their data, enabling them to do some target-setting. For them, it is quite a well-mined survey.
From the beginning, however, they had noticed a few issues with their satisfaction survey. The data fluctuated from wave to wave, with some metrics rising and others falling, as is expected with any survey. But while some of these fluctuations made immediate sense, others were difficult to interpret. Wave after wave, they continued to track the same metrics and continued to see mysterious changes.
Feeling that they were missing out on key elements of their customer satisfaction picture, the client approached us wanting to understand what was influencing the survey results. Which other elements were related to the tracked metrics? What factors could help explain their variations over time? Based on their questions, we came up with an approach that integrated other (non-market-research) data into their analysis.
The first data source came directly from the client. Without realizing it, the client had tracked large amounts of respondent data through their CRM system and other internal data collection processes. Because of confidentiality agreements, we can’t disclose which additional data they had. Suffice it to say that the results of the analysis showed our client how respondent demographics and other captured data helped to explain a large portion of the effect.
The second source of data considered involved the weather conditions experienced during holiday periods. The client wanted to know which aspects of the weather relate to satisfaction – is it shining sun, temperature, or something else? In our analysis, we used information about actual weather conditions as well as data relating to ‘normal’ and expected weather for the destination being rated. We were able to retrieve this information for the majority of the destinations offered by our client.
Before running the analysis, we needed to transform the data into a usable form. Each respondent stayed at a different destination on different dates and for different periods. This fragmented data had to be linked to the weather information, which was available at the destination level by the hour over a period of two years. And even before we could start transforming the weather data, some desk research was needed to ensure that our understanding of weather conditions was correct – since none of us here are meteorologists.
Based on this research, we were able to convert the hourly data into day-based data, which we then linked to the respondent data.
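The linking step described above can be sketched in pandas. Everything here is illustrative – the column names, the toy weather data, and the choice of daily summaries (mean temperature, sunshine hours) are assumptions for the sketch, not the client’s actual variables:

```python
import pandas as pd

# Hypothetical hourly weather records for one destination (illustrative data).
hourly = pd.DataFrame({
    "destination": ["Mallorca"] * 48,
    "timestamp": pd.date_range("2023-07-01", periods=48, freq="h"),
    "temp_c": [24 + (i % 24) * 0.3 for i in range(48)],
    "sunshine": [1 if 8 <= (i % 24) <= 18 else 0 for i in range(48)],
})

# Step 1: collapse hourly readings into one row per destination-day.
hourly["date"] = hourly["timestamp"].dt.date
daily = (
    hourly.groupby(["destination", "date"])
    .agg(mean_temp=("temp_c", "mean"),
         sunshine_hours=("sunshine", "sum"))
    .reset_index()
)

# Hypothetical respondent stays: one row per respondent per night stayed.
stays = pd.DataFrame({
    "respondent_id": [101, 101, 102],
    "destination": ["Mallorca"] * 3,
    "date": [pd.Timestamp("2023-07-01").date(),
             pd.Timestamp("2023-07-02").date(),
             pd.Timestamp("2023-07-01").date()],
})

# Step 2: join day-level weather onto each stay-night, then aggregate over
# each respondent's stay to get one weather profile per respondent.
merged = stays.merge(daily, on=["destination", "date"], how="left")
per_respondent = (
    merged.groupby("respondent_id")
    .agg(avg_temp=("mean_temp", "mean"),
         total_sunshine_hours=("sunshine_hours", "sum"))
    .reset_index()
)
print(per_respondent)
```

The key design point is the two-stage aggregation: weather is first summarised per destination-day, and only then matched to each respondent’s actual stay dates, so respondents who visited the same destination on different dates get different weather profiles.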
Without revealing any details, we can say that the results were eye-opening to say the least – both for us and for the client. And by using this information, the client now has a better understanding of how all the tracked factors interact – and thus, a better idea of how to take action.
To sum up, by using data already in the hands of the client, we were able to help them shed light on the underlying variables driving the survey’s changes. By drawing on more than one data source, we could colour in some of the white spaces in the client’s customer satisfaction picture, making the results more understandable and more actionable.