To market better, make 2019 “The Year of Predictions”

By Dr. Andy Lewis

My prediction for 2019 is that, for many companies, this year will be “The Year of Predictions.” Companies will discover that there is a more cost-efficient method of building “mountains of models” and making huge volumes of executable predictions, and they’ll use this streamlined predictive analytics capability to gain an advantage over their competitors.

Making predictions is a valuable activity.

Predictions allow you to anticipate your customers’ intents and needs. For example, predictions allow companies to focus investment on valuable customers who may be about to cancel a contract, or to ensure a web page is populated with the content most likely to create engagement. Predictions ensure you only initiate outbound communications with customers who would actually welcome that contact at that moment in time.

In short, predictions allow you to have the most relevant conversations at the right time and in the right channel – preventing time-wasting, annoying, irrelevant, and value-destroying interactions.

If you can predict lots of aspects of what your customers want and how they will behave, you can create experiences and interactions that increase the value of the customer, rather than detract from a customer’s expectations and destroy that value. Predictions that are tightly embedded into executable interaction strategies inform business decisions.

Driving down the cost of making predictions is essential if you are to be a prediction-based company.

There is no point making a prediction if you do not act on it. Historically, cost has stopped companies from making and using predictions to improve engagement. Imagine a travel company that has built predictive models to rank different destinations for a family’s next summer holiday. Would the family prefer to visit Italy or Florida? Predicting Italy as their most likely destination is not enough to create an engaging experience. The travel company might also want to predict what is more appealing: food, history, beaches, or villas? They probably ought to know when someone is likely to book, over what channels, and whether the family also wants to hire a car (and what class of car). Regardless of the scenario, the number of predictions required to tailor a highly personal and engaging experience can quickly escalate.

Instead of making predictive models to get the Italy experience right, the travel company might consider lots of crude, but cheaper, substitutes, like A/B testing various images of Italy, or some rough segmentation of customers. But to create a truly personalized, one-to-one experience, you need predictive models for every element of that experience. The cost of doing so often stops this from happening.
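To make the ranking idea concrete, here is a minimal, hypothetical sketch in Python. The themes and response counts are invented for illustration; in a real system each propensity would come from its own predictive model, but the point stands that a crude, “good enough” estimate is all the ranking needs.

```python
# Hypothetical sketch: ranking competing holiday themes by a smoothed
# response rate -- a crude but stable stand-in for a full propensity model.

def propensity(positives, impressions, prior=0.5, prior_weight=10):
    """Smoothed response rate, so low-volume themes aren't over-ranked."""
    return (positives + prior * prior_weight) / (impressions + prior_weight)

# Invented engagement counts with past Italy content: (positives, impressions).
observed = {
    "food":    (120, 1000),
    "history": (90, 1000),
    "beaches": (200, 1000),
    "villas":  (60, 1000),
}

scores = {theme: propensity(p, n) for theme, (p, n) in observed.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # the ordering, not the exact score, drives the decision
```

Note that the decision only depends on which theme comes first; a more sophisticated model that produced slightly different scores but the same ordering would change nothing.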

Why has it cost so much to create predictions?

There are many reasons. First, lots of data is needed. Huge volumes of data may be available, but often it is in the wrong place, is old, and may be missing much of the required context. Worse, data about your new products and services may simply not have been generated yet.

Second, building models takes time and requires scarce skills. These skills are in high demand and are often best deployed on business-critical predictive models, such as fraud and risk. It can be hard to justify using such skills to decide whether to place a Facebook ad with a picture of Rome or a picture of Venice. Companies are accustomed to extracting more value from predictive models by building better models with advanced techniques, not by building huge numbers of “good enough” models. In the travel company example, most models you’d want to deploy need only be good enough to rank competing options on how to present that Italian holiday. Building best-in-class models here is unlikely to change that ranking. You do not need to be “correct,” you need to be effective.

Third, companies need to demonstrate that the models they create are compliant with corporate model standards. They need to be transparent about how data was used to make predictions, and be able to demonstrate that transparency.

Fourth, these models need to be operationally deployed to inform decisions, strategies, and real-time engagements. An un-deployed model generates zero value. Deployment demands significant IT resources to code, test, and integrate models across channels, applications, and marketing tools. Plus, models need to be refreshed often, as they quickly become stale once deployed.

Fifth, such models need continuous reporting, monitoring, and governance set up around them in the enterprise system architecture.

New digital tools are changing the economics of predictive modeling.

Modern customer decisioning platforms are able to “close the loop” of making real-time prediction-based decisions, capturing behavioral responses, and training predictive models based on these responses. This allows business users to control the automated creation, deployment, and refreshing of potentially thousands of predictive models fully embedded within a real-time decision execution engine.

Companies are already putting this technology to use. For example, the Commonwealth Bank of Australia uses Pega’s Customer Decision Hub™ to solve many of these cost issues and deliver “mountains of models,” scaling up the number of predictions they make by orders of magnitude when engaging with customers. In the first week, CBA was able to automatically build and deploy 250 predictive models. This increased lead volumes tenfold and trebled conversion rates. According to their Chief Analytics Officer, Andrew McMullan, the effort would normally have taken 25 of their data scientists more than three years to complete using traditional methods.

CBA was able to drastically reduce the cost of predictions by employing the full-cycle automation capabilities of the Customer Decision Hub. Predictive models are built, deployed, and executed all within the hub, solving many of the cost issues discussed above. Because the models are built in context, the training data sets can be taken directly from contextual data available at the time of decision making and immediately tied to the outcomes of those decisions – like a click, a purchase, or an accept. Expensive data collection and transformation is not required.

The models are built using a templated approach that can be validated and tested before deployment. Efficient modeling techniques that produce “good enough” predictive models can be scaled to produce potentially thousands of models simultaneously. The Customer Decision Hub takes care of periodically refreshing the models deployed in the real-time decision strategy engine and auditing this process. Various levels of reporting and visualization allow data scientists to monitor the models, understand how data is being used, and work with business owners to improve messages and propositions.

This approach effectively creates a high-performance production line for predictions and predictive models.

This approach will help collapse costs and make 2019 the Year of Predictions, as enterprises move from handcrafted models built by skilled practitioners to a fully automated process: creating training data sets, continuous and autonomous model building, seamless deployment, and closed-loop feedback.

About the Author

Dr. Andy Lewis, a principal solutions consultant for AI and decisioning, is changing the world for the better through real-time, contextual, AI-driven next-best-action conversations.