Credit Decisioning using Open Data

13th August 2019

Thinking back 15 years, how many ‘Chief Innovation Officers’ do you think there were in the financial services sector? The answer, outside of Palo Alto, is probably very few. But now, if you don’t have at least one person in the bank with ‘innovation’ in their title, well you’re clearly not innovative!

I’m kidding of course. But my point is that times have changed and over the last 15 years the financial services sector has undergone something of a revolution. Yet, throughout this turbulent period, one hugely significant and specialist cog in the wheel has remained relatively unchanged – credit decisioning!

Traditional Credit Decisioning

The early days of my career were spent as a credit risk analyst within a credit bureau analytics team. I had the option to join a big shiny bank in the city, but at the time I thought the bureau gave me a better opportunity to explore a much wider range of sectors including banking, debt collection, telecoms and all the other weird and wonderful ‘inbetweeners’ who relied upon data and analytics to lend money to people and businesses. Given the number of projects we worked on as a team, this turned out to be a pretty good choice and I spent an amazing decade running risk analytics, building credit scorecards and leading some of the most ground-breaking consultancy projects in the data-driven lending space.

Source: https://www.thebalance.com/who-are-the-three-major-credit-bureaus-960416

Credit decisioning back then looked much the same as it does today within the large banks. Huge amounts of historical data running through retrospective analysis to determine attributes that indicated risk. The clever word for this – Logistic Regression. This is ‘easy’ I thought (my boss will tell you that I wasn’t the most proficient coder). We’ve got millions of data points, readily available to run through an algorithm that churns out a score which predicts when someone might not pay back a loan. All it took was the tick of a box by the customer during a credit application for banks and lenders to pull back all this information on their customers and, hey presto, instant decision, instant credit, instant satisfied customer! Well, almost…
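The idea is simple enough to sketch in a few lines. The following is a toy illustration only: fabricated data, hand-picked attribute names and a hand-rolled gradient-descent fit stand in for the millions of real bureau records and proper modelling tooling a bank would actually use.

```python
import math
import random

# Toy, fabricated data: a normalised "months since arrears" attribute and
# credit utilisation, with a made-up underlying risk relationship.
# Illustrative only -- not real bureau data or a production scorecard.
random.seed(1)
data = []
for _ in range(2000):
    months = random.random()          # 0..1 (scaled months since last arrears)
    util = random.random()            # 0..1 (credit utilisation)
    true_logit = -1.0 - 3.0 * months + 2.0 * util
    p = 1 / (1 + math.exp(-true_logit))
    data.append(((months, util), 1 if random.random() < p else 0))

# Fit weights and intercept by stochastic gradient descent on the log-loss --
# i.e. the logistic regression that churns out the score.
w, b = [0.0, 0.0], 0.0
lr = 0.1
for _ in range(100):
    for (x0, x1), y in data:
        p = 1 / (1 + math.exp(-(w[0] * x0 + w[1] * x1 + b)))
        err = p - y
        w[0] -= lr * err * x0
        w[1] -= lr * err * x1
        b -= lr * err

# Score a new applicant: probability of default, mapped onto a simple
# points-style scale (higher score = lower risk).
pd_new = 1 / (1 + math.exp(-(w[0] * 0.4 + w[1] * 0.35 + b)))
score = round(600 - 50 * math.log(pd_new / (1 - pd_new)))
print(f"estimated PD: {pd_new:.2%}, score: {score}")
```

The fitted weights recover the expected directions – longer since arrears lowers risk, higher utilisation raises it – which is exactly the kind of retrospective attribute analysis described above.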


And then came Open Banking…

After nearly a decade in the credit risk world, I started to hear murmurings about something called Open Banking, a new piece of government legislation which, if implemented, would force the banks to make financial data available to third parties – with explicit consent from their customers. At the very same time, the industry was struggling to grapple with the potential fallout of GDPR on existing data sharing models. This started to get me thinking. What would data sharing look like in the future and what knock-on effects might this have on the industry? If the credit referencing world had a “you’re not in Kansas anymore, Dorothy” moment, then this was certainly it! After decades of reinforcing moves by the regulators to promote, and mandate in some cases, data sharing with credit bureaux, along came this thing that threw most of that out of the window and put financial data firmly back in the hands of the customer! Needless to say, change doesn’t happen overnight in the banking world and the same is true for a credit bureau, but the touchpaper had been lit! The customer journeys we helped to orchestrate, the risk attributes we used, the credit scores we modelled were all about to enter a new era. The Open Finance era.

Data is the new oil. Blah blah blah.

In varying forms, the credit referencing industry has been around for hundreds of years. Data aggregation for financial risk assessment is certainly not a new concept but search the web for ‘open banking’ and you’ll find a plethora of new unicorns (£1bn+ companies) across the globe whose valuations are a direct result of this new commodity and a reflection of the changing global landscape for data sharing. Rather than the traditional data exchange model which relies upon customer consent given under ‘legitimate interest’ and a central data storage model, these new players aggregate data in real-time with ‘explicit consent’ from the customer and in many cases, no data repository is created at all. The most well-known is probably Plaid, the US-based financial aggregator whose valuation in 2018 reached the heady heights of $2.65bn after only 5 years in operation. However, the largest data aggregators are by far the big tech companies who continue to explore ways in which they can leverage this data to enter the financial services sector.

Source: https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data

Most of the industry focus is currently on consumer data. But the same approach can also be seen in the aggregation of all sorts of business data including accounting, director and staff behavioural data, logistics, trade data and the list goes on. There is big money in financial data aggregation as companies scramble to cure the single biggest problem in financial services today – data asymmetry! Exclusion of sectors of the consumer population and smaller owner-operated businesses continues to be an issue because of the differences in their data and risk profiles.

New types of data can be particularly useful when it comes to credit risk decisioning, but data aggregation alone, especially of open data, is simply a tool. If considered in isolation by a bank or lender for credit risk, fraud prevention or any other type of use case, it can actually be problematic – which is why banks typically don’t jump into new types of data-driven lending quickly.

For years, credit risk teams have strived to build predictive models on readily available historical data from internal systems or from the credit bureaux. Then one day the innovation team comes along and says:

“Hey guys! Awesome news – we’ve partnered with an awesome data aggregator and you’re going to get all this really cool new data coming your way.

“But sorry…. there’s no historical data on it, you may or may not get it depending on whether the customer trusts us at that point in time, the data may or may not arrive dependent on whether the third parties’ connection works and make sure you speak to the product, pricing, legal & compliance teams as they want it too – you guys speak right?

“Oh, and one final thing, make sure you speak to IT about system integration – should be a breeze as they’ve got APIs…” 😅 

Collaborating with Open Data

So, don’t get me wrong, open data is powerful – really powerful! Many have already suggested that it has the potential to supplement or indeed usurp some of the risk data from traditional sources. But gathering explicit consent from customers, working with third-party APIs and integrating these into a usable technology architecture for lending or any other use case is not a simple task, and certainly not one which can be resolved by the credit risk team alone. Integrating open data into your risk processes and blending it with existing data requires an entirely new level of collaboration across the bank and, to date, very few have managed to crack it!

Specific to credit decisioning, siloed projects in big data, blockchain, machine learning and artificial intelligence have dominated bank investments in the last few years, but most have failed to materialise despite significant outlays by the banks. These are a great example of what we call ‘innovation silos’ where significant R&D projects are initiated in very specific areas of the bank with a very specific problem to solve.

Blockchain in corporate banking is a great example of this siloed approach to innovation, with huge amounts of money being pumped into researching new forms of trade finance over recent years. Yet surveys like the one by the World Economic Forum have found that, on average, banks are recognising a return on only 10% of these investments, and almost none have moved beyond trials.

Supporting this trend, we’ve found that most banks are treating credit decisioning with open data in the same way, with experimental projects led by innovation evangelists looking to create new digital lending products and customer experiences. Typically, these projects result in fantastic design concepts but face huge challenges when it comes to actual implementation. New technology platforms like Trade Ledger offer banks a very different approach when it comes to open data innovation and the actual implementation of these new capabilities in credit decisioning.

Value, Trust and Customer experience

Today’s banks & financial services companies should feel very privileged indeed. They have plenty of something that people need (money) and the means to distribute this to their huge customer bases. The only thing that stands in their way is data. Now you might think that banks must have plenty of data, and you’d be right, but in most cases, due to their ‘spaghetti-like’ IT systems, accessing it is nigh-on impossible! It’s these same legacy systems that are holding the banks back in terms of creating value, inspiring trust and building beautiful new customer experiences. So, where to start with something like open data?

Well in some ways, open data offers banks a fresh start. Technology providers like Trade Ledger make it easier for banks to create new experiences for their customers with differentiated data, replacing their legacy IT systems and creating incredible lending experiences for their customers. But how does this translate into better credit decisioning you might ask?

Credit Decisioning with Open Data

Well… have you ever wondered where your bank’s multi-billion-pound digital transformation budget went? You thought you were investing in a new state-of-the-art technology ‘platform’, but 3 years later you actually ended up with yet another rigid legacy IT stack, business application or decision engine which now needs to be replaced in order for you to upgrade your credit decisioning capabilities.


These days, in the fintech world, every company claims to have the next best ‘platform’ but how do you cut through the noise to determine which platform is right for your lending transformation and how can it empower you to leverage open data within your credit decisioning over the next 5-10 years? We’ve pulled together a simple checklist that might help:

  • Is it an end-to-end solution? This might seem like a simple question, but ask yourself and ask your teams (all of your teams!) – does the technology work for all parts of the lending operation (risk, ops, tech, sales, legal, etc.)? Selecting specific business applications for specific problems creates instant legacy. Your platform must service all parts of the end-to-end lending operation and be interoperable with other specialist service providers.
  • When was this technology built? If the ‘platform’ was built more than 3 years ago then it’s probably not going to be fit for purpose in 5-10 years’ time. Such is the speed of technological change at the current market inflexion point that choosing an extensible architecture is the only way to future-proof your product innovation capabilities. Your lending platform should ultimately empower your credit risk team in this new open data ecosystem, not limit them.
  • Is it cloud-native and delivered ‘as-a-Service’? Recent statistics suggest that less than 5% of banking systems have been migrated to the cloud, yet most people recognise that migration to the cloud is inevitable given the huge cost-efficiencies. Your lending platform needs to be cloud-native in order to realise these cost savings and associated benefits. Consuming this ‘as-a-Service’ from providers like Trade Ledger is the simplest, quickest and most effective way for banks to migrate their lending to the cloud.
  • What is the data model? If it’s a true platform then it should have a ‘common data model’ across all of its components. This should be fairly standard practice for those wanting to create data-driven solutions. Common or unified data models provide a base for more advanced credit decisioning like ML/AI and prevent the creation of unusable data silos.
  • Is the architecture component-based? Component or API-based architectures allow for the native integration of internal or external services. This is essential for the integration of your credit services (and decisioning) into other channels like digital brokers, price comparison websites or point-of-sale financing. Want to provide a ‘decision in principle’ to a digital broker platform or your website? Forget it if your architecture is not component-based and API-enabled.
  • Is the decision engine events-based? Most decision engines & lending platforms offer some form of interface to customise credit risk logic and scores, but most do not have an event-triggered approach. Integrated workflow engines, messaging, notifications & credit decisioning on a single platform allow you to trigger new data aggregation requests, credit risk analysis and adaptive modelling at any point in the credit lifecycle, not simply during the ‘credit assessment stage’.
  • What security features are included? Trust is everything in the world of open data sharing so make sure the platform you’re using is highly secure. A data breach could destroy customer confidence and limit the flow of open data into your open data risk models.
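The event-triggered idea in the checklist can be sketched as a tiny publish/subscribe dispatcher: lifecycle events fire handlers that re-aggregate data or re-score at any point, not just at the initial assessment. The event names and handlers below are hypothetical illustrations only, not Trade Ledger’s (or any vendor’s) actual API.

```python
from collections import defaultdict

# Map each event name to the list of handlers registered for it.
handlers = defaultdict(list)

def on(event):
    """Register a handler for a credit-lifecycle event."""
    def register(fn):
        handlers[event].append(fn)
        return fn
    return register

def emit(event, payload):
    """Dispatch an event to every registered handler, collecting results."""
    return [fn(payload) for fn in handlers[event]]

@on("bank_feed.updated")
def refresh_risk_model(payload):
    # Fresh open data has arrived: trigger re-aggregation and re-scoring,
    # rather than waiting for the next scheduled credit assessment.
    return f"re-scoring application {payload['application_id']}"

@on("repayment.missed")
def flag_for_review(payload):
    return f"flagging application {payload['application_id']} for review"

print(emit("bank_feed.updated", {"application_id": "A-123"}))
```

The point of the sketch is that scoring becomes a handler like any other: wire the same dispatcher into workflow, messaging and notification events and the decision engine reacts across the whole credit lifecycle.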

To summarise, using open data within your credit decisions could have a significant impact on the overall performance of your lending operation. It can help to create better data-driven customer experiences, reduce operational costs and significantly reduce impairments. But successfully integrating open data into your decisioning processes requires collaboration on an entirely new level. Platform technology like Trade Ledger can help to provide a basis for this collaboration. The rest is up to the willingness and commitment of the bank to make long-lasting and truly transformational changes as we move into the Open Finance era.

Category: Open Banking