Churn Prediction in Retail Finance and Asset Management (Part 2)

October 8, 2014 Niels Kasch


Joint work performed by Niels Kasch and Mariann Micsinai of Pivotal’s Data Science Labs.

Financial firms collect large volumes of data from all realms of our daily lives. These data assets are used to build predictive models for many purposes, such as understanding and predicting customer behavior. In our previous blog post, we detailed the challenges of establishing the dependent variable given varying definitions of “customer” and “churn”. Here, we look at predictive and explanatory modeling, including data assets, analytics algorithms, and resulting customer applications.

Is your data enough? Integrating internal and external data assets

Integrating internal structured (e.g., transactional data) and unstructured data assets (e.g., weblog and call log data) can inform a churn model by adding multiple new dimensions to the analysis. Using weblog data, data scientists can find the specific order of actions taken by customers on a bank’s websites and extrapolate clickstreams for customers likely to churn. Similarly, with call log data, a specific group of customers prone to churning can be flagged given the timing and the topic of their calls.
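The clickstream idea above can be sketched in a few lines. This is an illustrative example, not the production pipeline; the field names (`customer_id`, `timestamp`, `page`) are hypothetical stand-ins for a bank's actual weblog schema.

```python
# Minimal sketch: deriving per-customer clickstreams from raw weblog events.
# Field names (customer_id, timestamp, page) are illustrative, not an
# actual bank schema.
from collections import defaultdict

def build_clickstreams(events):
    """Group weblog events by customer and order each group chronologically."""
    streams = defaultdict(list)
    for e in events:
        streams[e["customer_id"]].append((e["timestamp"], e["page"]))
    # Sort each customer's events by timestamp, keep only the page sequence
    return {cid: [page for _, page in sorted(evts)]
            for cid, evts in streams.items()}

events = [
    {"customer_id": "c1", "timestamp": 3, "page": "close_account"},
    {"customer_id": "c1", "timestamp": 1, "page": "login"},
    {"customer_id": "c1", "timestamp": 2, "page": "fee_schedule"},
    {"customer_id": "c2", "timestamp": 1, "page": "login"},
]
streams = build_clickstreams(events)
# streams["c1"] == ["login", "fee_schedule", "close_account"]
```

A sequence like `login → fee_schedule → close_account` is exactly the kind of ordered action pattern a churn model can learn to flag.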

Although many financial firms consider only internal data assets when solving their use cases, these data represent only a fraction of what is available. Third-party data sets can be highly informative for a given modeling task. Consumer data sets can be purchased from data vendors, and a growing number of data liberation efforts under open data initiatives are making useful data assets available to the public. An example of such an initiative is the US government’s Data.gov portal, which includes some 90,000 datasets covering varied topics such as finance, labor markets, and weather.

In our data science engagements, integrating internal and external data assets often leads to novel insights. For example, combining historical transaction data with economic data links market conditions with individuals’ investing behavior.

When considering an external data source for augmentation, you must ensure that the internal and external data assets share a common identifier and cover compatible time horizons. If these conditions are not met, gaps may render the integration effort counterproductive.
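Both preconditions can be checked mechanically before any merge. The sketch below assumes two small record lists joined on a shared key; the datasets, field names, and values are hypothetical.

```python
# Sketch of the two augmentation preconditions: overlapping time horizons
# and a shared join key. All data shown is hypothetical.
def horizons_overlap(a_start, a_end, b_start, b_end):
    """True if the two date ranges share at least one period."""
    return a_start <= b_end and b_start <= a_end

def join_on_key(internal, external, key):
    """Inner-join two record lists on a common identifier."""
    ext_index = {row[key]: row for row in external}
    return [{**row, **ext_index[row[key]]}
            for row in internal if row[key] in ext_index]

# Internal transaction summary joined with an external economic indicator
internal = [{"zip": "21201", "avg_balance": 5200.0}]
external = [{"zip": "21201", "unemployment_rate": 0.061}]

assert horizons_overlap(2010, 2014, 2012, 2016)  # horizons compatible
merged = join_on_key(internal, external, "zip")
# merged[0] now carries both internal and external attributes
```

In practice this join would be a SQL statement on the data platform rather than in-memory Python, but the gate-keeping logic is the same.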

How do we identify customers who are likely to churn? Building predictive and explanatory models

Raw compute power does not guarantee a successful, relevant, and precise model. The data scientist is responsible for turning a financial firm’s data assets into features, or explanatory variables. A feature could describe an individual’s life stage, such as whether they are single, married, or divorced. For example, newlyweds tend to consolidate their bank accounts. This presents an opportunity not only to prevent churn (i.e., to keep one partner from closing their account and joining the other at a different institution), but also to increase assets under management by actively engaging those customers through marketing campaigns.
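The life-stage example above translates directly into feature engineering. The sketch below is illustrative only; the attribute names and the one-year newlywed cutoff are assumptions, not the firm's actual feature definitions.

```python
# Hedged sketch: turning a raw customer record into life-stage features.
# Attribute names and the newlywed cutoff are illustrative assumptions.
def life_stage_features(customer):
    """One-hot encode marital status and flag a recent marriage."""
    status = customer.get("marital_status", "unknown")
    return {
        "is_single": int(status == "single"),
        "is_married": int(status == "married"),
        "is_divorced": int(status == "divorced"),
        # Newlyweds often consolidate accounts -- a churn-relevant signal
        "recently_married": int(status == "married"
                                and customer.get("years_married", 99) < 1),
    }

feats = life_stage_features({"marital_status": "married", "years_married": 0})
# feats["recently_married"] == 1 -- this customer would be flagged for
# proactive engagement rather than left to churn.
```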

The parallel nature of our technology platform, the Pivotal Big Data Suite, enables us to quickly explore ever-increasing data volumes. More importantly, the platform facilitates the incorporation of hundreds of thousands of features for a given modeling task. This means we can experiment with more features and iterate over more models than is possible with traditional architectures. Once features are developed, the data scientist has to choose an algorithm for the model. This choice, all else being equal, can have a profound impact on model performance, accuracy, and explanatory power. We have modeled churn and retention use cases with a host of different algorithms, e.g., mixed-effects modeling, Support Vector Machines (SVMs), regression, survival analysis, and decision trees.

Our platform makes parallel implementations of these algorithms available via MADlib or PL/R. Each of these algorithms has its strengths and weaknesses. SVMs scale well with large feature sets, but can be hard to interpret, especially if the business wants to know why a particular individual is likely to churn. Decision trees rank high on interpretability, making them a natural choice when business users need to trace the reasons (i.e., examine the branch points in the tree) why an individual is prone to churn. Mixed-effects models are ideal for time-series data, controlling for effects such as market cycles and seasonality. The combination of features and advanced machine learning algorithms makes up the model that allows us to identify customers who are likely to churn.
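The interpretability point about decision trees can be made concrete with a toy example. The one-level "stump" below is a deliberately minimal stand-in (not the MADlib implementation), and the feature name and threshold are hypothetical.

```python
# Toy sketch (not the MADlib decision tree): a one-level stump whose single
# branch point doubles as the human-readable reason for a churn flag.
def fit_stump(rows, feature, threshold):
    """Split on one feature; predict the majority churn label per branch."""
    left = [r["churned"] for r in rows if r[feature] <= threshold]
    right = [r["churned"] for r in rows if r[feature] > threshold]
    majority = lambda ys: int(sum(ys) * 2 >= len(ys)) if ys else 0
    return {"feature": feature, "threshold": threshold,
            "left": majority(left), "right": majority(right)}

def predict(stump, row):
    branch = "left" if row[stump["feature"]] <= stump["threshold"] else "right"
    return stump[branch]

rows = [
    {"logins_per_month": 1, "churned": 1},
    {"logins_per_month": 2, "churned": 1},
    {"logins_per_month": 9, "churned": 0},
    {"logins_per_month": 12, "churned": 0},
]
stump = fit_stump(rows, "logins_per_month", 5)
# A flagged customer comes with a traceable reason:
# "logins_per_month <= 5" is the branch point the business can inspect.
```

An SVM would score the same customers without exposing any such branch point, which is the trade-off the paragraph above describes.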


What’s the value of a model to the business? Bringing the models alive

It is not enough to build a predictive model, hand it to the bank to interpret model coefficients and decision boundaries, and call it a day. We show our partners how these models lead to specific actions the business can take.

In general, there are four ways that models can generate value:

  1. From the model, we quantify the relative importance of causal factors in a mathematically precise and repeatable manner. We examine features such as demographics, website usage, and life events and show their impact on churn. The model can then be used in an explanatory fashion to answer questions such as “Which demographic attributes have the biggest effect on customer churn?” and “Why do married customers churn?”. These results enable a bank to target specific groups of customers and use this knowledge to develop customized financial products for these groups.
  2. We use the model to predict events of interest. A churn model can be applied to classify individuals according to their likelihood of churning in the next week, month, or quarter. This model can be deployed to prioritize call center interactions for people with the highest churn risk, or to inject promotional offers into targeted marketing campaigns.
  3. We use these models to predict the outcome of hypothetical what-if scenarios. A financial institution may be interested in the impact of market conditions on customer churn behavior, i.e., what happens to assets under management during a financial crash or a growth period. Scenario analysis also allows a bank to assess the likely success of marketing campaigns, e.g., how often and when to contact customers to prevent churn.
  4. We operationalize these models. During operationalization, the models are moved into a live data lake where data sources are updated and refreshed at intervals ranging from real time to batch. The models serve as a digital brain, learning from new data and scoring new events. Our platform enables flexible use of each model, with model updates and scoring running automatically on a real-time or periodic schedule.
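Use (2) above, prioritizing call center outreach, reduces to ranking customers by predicted churn probability. The sketch below assumes scores already produced by a churn model; the customer IDs, scores, and call-center capacity are hypothetical.

```python
# Sketch of churn-score triage: contact the riskiest customers first.
# Scores are assumed outputs of a churn model; all values are illustrative.
def prioritize(scores, capacity):
    """Return the `capacity` customers with the highest churn risk."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [cid for cid, _ in ranked[:capacity]]

scores = {"c1": 0.91, "c2": 0.15, "c3": 0.78, "c4": 0.40}
call_list = prioritize(scores, capacity=2)
# call_list == ["c1", "c3"] -- today's outreach queue
```

In an operationalized setting this ranking would be refreshed on each scoring run, so the queue always reflects the latest data.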

The insights from these models lead to direct actions by the business. In many instances, the models inform the design of marketing campaigns for specific customer groups. For example, one of our churn models showed that a financial institution’s website arrangement had a negative impact on retention for a specific demographic group. This insight prompted the business not only to adjust its website to remedy the flaw, but also to actively target this demographic group with customized promotions.

A churn model can also alter a bank’s product design choices. What fee structure minimizes churn? Which account features should be offered to maximize retention? From the model, a financial institution may learn that an increase in fees does not affect retention in one group of customers, while a decrease in fees may stop churn in another.
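This kind of fee-structure question is a what-if scenario: perturb the fee feature and compare the model's output before and after. The linear scoring function below is a deliberately simple stand-in for the real churn model, and all weights and customer values are hypothetical.

```python
# What-if sketch: re-score churn risk under a proposed fee change.
# The linear score is a stand-in for the real model; all numbers are
# illustrative assumptions.
def churn_score(features, weights):
    """Weighted sum of features as a toy churn-risk score."""
    return sum(weights[k] * v for k, v in features.items())

weights = {"monthly_fee": 0.02, "tenure_years": -0.05}
customer = {"monthly_fee": 10.0, "tenure_years": 4.0}

baseline = churn_score(customer, weights)
scenario = churn_score({**customer, "monthly_fee": 15.0}, weights)
# A positive (scenario - baseline) difference means the proposed fee
# increase raises this customer's churn risk.
```

Running the same perturbation across customer segments would surface exactly the asymmetry described above: fee changes that are neutral for one group and decisive for another.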

With these insights, a bank can develop individualized products for each customer. A model allows such customization of product offerings on an individual basis, with an eye toward lifelong retention.
