With the rise of artificial intelligence and machine learning, the one word that comes to mind is “automation”. In marketing especially, algorithms such as next-best-offer, customer-churn and cross-sell prediction are taking much of the guesswork out of targeting, moving us towards a day when shotgun advertising is a thing of the past and we engage with organisations on a one-to-one level.
John Wanamaker once said, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half”. Fortunately, these days we can say with reasonable accuracy which half is wasted, and rectify the problem, resulting in a stronger return on marketing spend than ever before.
These innovations have given more power and credibility to marketers within an organisation; however, there is a catch. All machine-learning algorithms are what are known as “interpolation methods”, i.e. they are good at predicting data similar to what they have seen before, but they can become wildly inaccurate when attempting to predict data that falls outside that realm. One such example is time: machine-learning algorithms are often used for forecasting, but using the date itself as a predictor is fundamentally flawed, because future dates lie outside the range of the training data by definition.
The same issue arises with extraneous factors. Consumer behaviour, for example, is vastly different from what it was 10 or 20 years ago: we used to treat the salesperson in a store as the source of truth for product information; now we turn to the internet. Fully automating such algorithms can therefore have fundamental issues.
Initial cross-validation can only take things so far. Cross-validation does one thing well, and that is identify when an algorithm over-fits the training data. It does not identify poor research design, nor does it always recognise when extrapolation becomes inappropriate.
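To make that concrete, here is a minimal sketch using scikit-learn (the data, model choice and parameters are all my own illustrative assumptions, not a prescription). A random forest is cross-validated on a noisy upward trend and the scores look excellent, because shuffled cross-validation only ever tests the model inside the time range it has already seen. Asked to forecast beyond that range, the tree-based model flatlines near the last observed values while the true trend keeps rising, and nothing in the cross-validation scores warned us.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(42)

# Illustrative "sales" data: a steady upward trend plus noise,
# observed over the first 100 time periods.
t_train = np.arange(100).reshape(-1, 1)
y_train = 2.0 * t_train.ravel() + rng.normal(0, 5, 100)

model = RandomForestRegressor(n_estimators=200, random_state=0)

# Shuffled cross-validation looks excellent: every held-out point is
# surrounded by training points, so the model only ever interpolates.
cv_r2 = cross_val_score(
    model, t_train, y_train,
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
    scoring="r2",
)
print(f"Mean CV R^2: {cv_r2.mean():.3f}")

# Now forecast 50 periods ahead. A tree-based model cannot predict
# values outside the range it has seen, so its forecast plateaus
# around the last observed level while the true trend keeps climbing.
model.fit(t_train, y_train)
pred_149 = model.predict([[149]])[0]
print("Predicted at t=149:", round(pred_149, 1))
print("True trend at t=149:", 2.0 * 149)
```

The cross-validation score is honest about interpolation, and silent about everything else; that is the gap the rest of this piece is about.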
The solution is to rinse and repeat. The paradox of automation is that it should never be set and forget; rather, it should only remove the heavy lifting. Machine learning may be seen as the industry’s shiny new toy, but it still has the same limitations as traditional methods. Algorithms should be benchmarked, reviewed and retrained where required.
See the example above, where the data has been generated from a known function. I have split the data into a training and a test set and fitted a model that appears to be a good fit. The model is quite simple, so it does not appear to over-fit the training data, and there is adequate test data to give us confidence that the model works. However, see what happens when you look at the future data (click on the legend).
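For readers without the interactive chart to hand, the same idea can be sketched in a few lines. The generating function, model and split below are my own stand-ins (I have used a sine wave and a cubic polynomial; the original example's function may differ): held-out test data from within the observed window scores almost perfectly, yet the same model diverges badly the moment we step outside that window.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)

def truth(x):
    # Known generating function (illustrative choice): a sine wave.
    return np.sin(x)

# Observe one half-period, with a little noise.
x_obs = np.linspace(0, np.pi, 200)
y_obs = truth(x_obs) + rng.normal(0, 0.05, x_obs.size)

# Random train/test split within the observed window.
x_tr, x_te, y_tr, y_te = train_test_split(
    x_obs, y_obs, test_size=0.3, random_state=0
)

# A simple cubic polynomial: not flexible enough to over-fit here.
model = make_pipeline(PolynomialFeatures(3), LinearRegression())
model.fit(x_tr.reshape(-1, 1), y_tr)

# Held-out test data from the same window looks fine...
test_r2 = r2_score(y_te, model.predict(x_te.reshape(-1, 1)))
print(f"Test R^2: {test_r2:.3f}")

# ...but extrapolating into the "future" half-period diverges badly.
x_future = np.linspace(np.pi, 2 * np.pi, 100)
future_err = np.abs(model.predict(x_future.reshape(-1, 1)) - truth(x_future))
print(f"Max absolute error on future data: {future_err.max():.2f}")
```

The test set passes because it was drawn from the same window as the training data; it tells us nothing about the regions the model has never seen.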
There is no free lunch in statistics, and there is no one algorithm to rule them all. Machines are not rising up to be smarter than their creators (although some of us wish it were true). Machine learning is not new, nor does it mean we can ignore the underlying statistical principles. Remember that a fool with a tool is still a fool: just because you can write a fancy machine-learning algorithm does not mean you can get away with ignoring the potential causal relationships underlying the data.
“Raw data is both an oxymoron and a bad idea; to the contrary, data should be cooked with care”
– Bowker, 2005