Bankers’ Data-Driven Delusions

An American Banker article titled Bank CEOs Fear the Data-Driven Decision reported that:

“A recent study found that analytics are underused at banks and that senior executives are cold to the technology: a scant 20% said that if it were up to them their organization would be highly data driven.”

My take: As Paul Newman might have said, what we have here is a failure to communicate.

***

If you don’t think banks (in general, or the one that you work for) are “data-driven,” then try the following:

1. Ask for a mortgage, but refuse to provide any information that would enable the bank to figure out your credit score or credit history. Ask the bank to decide on your loan-worthiness based on their “gut” reaction. Do this especially if you belong to a group considered to be a “minority.”

2. When trying to decide which bank branches to close, suggest giving each branch a number, then writing that number down on a piece of paper. Put all the pieces of paper together in a bowl, mix them up, and have someone pull out a piece of paper. Whatever branch corresponds to the number on the piece of paper gets closed.

3. Ask for $1 million to launch a new marketing campaign. Tell the CEO he or she must make the decision to invest in the campaign without any ROI estimates, and to make the decision based on his or her “gut.”

***

The notion that banks aren’t data-driven is nonsense. From lending decisions to funding decisions, data is used all the time. In fact, I hear a lot of marketers complain that senior execs rely too much on data (i.e., ROI projections). And come to think of it, my suggestion in #2 above, regarding branch closings, involves numbers, so, in a way, it is data-driven.

Granted, the data banks use to make decisions might suck, but that’s a different issue.

***

There’s a deeper issue running through the AB article, however. It’s an issue of “definitions.” While analytics relies heavily on the availability of data, not every use of data qualifies as analytics. If you’re an analytics professional, you probably know what I mean. If you’re not, you might have no clue what I’m talking about.

Can I explain it better? No. I’m an analytics person, so I’m not very good at speaking in the language of the quantitatively-challenged. (In reality, I’m ten times better at it than most analytics people. So you can imagine how bad the problem is).

***

There’s another problem here. The notion that “data-driven” and “gut-driven” are at opposite ends of the spectrum is fallacious. How do experienced executives develop a “gut feel” for the market? Often, it’s after years and years of experience dealing with the data. (I wrote about this a while back).

***

There are countless opportunities for banks to become more analytical, or analytics-driven (although sometimes “analytics” might be overkill). But being analytically-challenged does not mean “not data-driven.” In fact, as Deva Annamalai (aka @bornonjuly4) points out in the AB article, the bigger challenge to becoming more analytical isn’t necessarily access to data, but organizational barriers and lack of business processes.

Bottom line: I’m not buying that bank CEOs “fear the data-driven decision.”


Trouble For Small Credit Unions?

When it comes to blog fodder, at one end of the spectrum are sources like Forbes blogs, Fast Company, and Motley Fool, all of which occasionally publish stuff so crappy it just begs for Snarketing treatment.

At the other end of the spectrum is Callahan Associates, who I have a ton of respect for, and who I believe does great work. That view wasn’t changed last week when I sat in on Callahan’s quarterly review of credit union industry performance. Great data, great analysis.

Overall, Callahan painted a very optimistic picture of the credit union landscape–loan volume is growing, and market share is increasing in many areas and markets.

There were, however, a couple of slides that warrant further analysis and questioning–specifically those that related to the performance of credit unions by asset size.

***

Before we get into those slides, I’d like to state for the record that I’m not here to comment on the viability of small credit unions. I’m simply commenting on some data that Callahan presented and (respectfully) challenging their interpretation of the data.

***

The first slide that caught my eye contained data regarding credit unions’ 12-month loan growth, broken out by asset category. According to Callahan, the credit union industry, as a whole, grew loans by 10.3% in the 12 months ending September 2014. I’m assuming the data refers to dollar, and not unit, volume growth.

For the six asset categories of credit unions with less than $1b in assets, loan growth was below the 10.3% average. The largest credit unions–those with more than $1b in assets–grew their loan volume by 10.7%.

I may be missing something here, but I can’t understand how the tail–the small number of credit unions with more than $1b in assets–is wagging the dog. If the–what 150? 200?–$1b+ credit unions grew lending volume by 10.7%, and the thousands of <$500m CUs grew at 5.1% or less, then the $1b+ CUs must have an incredibly large percentage of the overall volume of loans.
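To put rough numbers behind that, here’s a back-of-the-envelope calculation. It’s a two-segment simplification of my own, not anything from Callahan’s deck:

```python
# Back-of-the-envelope check of the "tail wagging the dog" math.
# Assumption (mine, not Callahan's): collapse the industry into just two
# segments -- $1B+ CUs growing 10.7% and everyone else growing 5.1% --
# and solve for the loan share the $1B+ group would need to hold for the
# blended growth rate to come out at 10.3%.

overall_growth = 0.103   # industry-wide 12-month loan growth
large_growth = 0.107     # $1B+ credit unions
small_growth = 0.051     # upper bound reported for the sub-$500M groups

# blended = w * large + (1 - w) * small  =>  solve for w
implied_large_share = (overall_growth - small_growth) / (large_growth - small_growth)
print(f"Implied $1B+ share of loan balances: {implied_large_share:.0%}")  # ~93%
```

Under that crude simplification, the $1b+ CUs would have to hold something like 93% of loan balances, which is exactly the “incredibly large percentage” I’m questioning.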

***

Now, if it is true that the largest CUs have an incredibly large share of CU loan volume, then the next slide in Callahan’s deck–which was titled “Smaller credit unions are posting some extraordinary growth rates”–is misleading.

While it may be true that some small credit unions are posting “extraordinary” growth rates–like the one with less than $20m that grew loan volume by 104.7%–the reality is that the actual dollar volume must be incredibly small. And the credit unions in each of the smaller peer groups posting “extraordinary” gains are likely few and far between.

***

If I’m reading the Callahan data correctly, then, although the credit union industry as a whole may be doing well (in terms of loan volume and market share), that good health is not evenly distributed across asset groups.

And I just can’t buy in to Callahan’s attempt to spin the results as positive for smaller credit unions.

***

Is this evidence that small credit unions are doomed, and will disappear in the next few years? I don’t know. That’s for someone else to argue. I’m simply trying to make sense of Callahan’s data.

Can We Trust The Trust Numbers?

Each year, there seems to be no shortage of well-publicized surveys showing how much trust consumers have in banks, and how that trust has changed since the previous survey. What is it about the banking industry that makes bankers obsess over whether or not their customers trust them? You’d think that brain surgeons would be as concerned with their patients’ level of trust in them, but noooo.

A recent Harris poll (as reported in The Financial Brand) found that:

“Local credit unions and local/community banks are the most trusted institutions, with over three-quarters of Americans having some or a great deal of trust in them. Big national banks rank second to last, having the trust of only 50% of Americans….while 42% state they have no trust at all or very little trust in these institutions. Online-only banks are seen as the least trustworthy, with only 39% of Americans having at least some trust and 47% having no or very little trust in them.”

***

With these data points in hand, credit unions go into full-on Sally Field mode: “They like us! They really like us!”

A lot of good it does, though. According to BusinessWeek’s estimates, the top 3 banks in the country have 33% market share, with the top 10 banks holding nearly 50%. And those market shares appear to have grown over the past few years. Despite low levels of consumer trust.

One community bank CMO used the trust data to support his contention, in an editorial in American Banker, that community banks and credit unions should go “negative” against big banks in marketing campaigns. Like community banks and credit unions haven’t already been doing that for the past five years, and to practically no avail.

Here’s what I have trouble reconciling: On one hand (as reported by Harris), credit unions are the most trusted financial institutions. On the other hand, it would appear to be common knowledge that credit unions are the industry’s “best kept secret” (you can peruse the 16.2 million results to a Google search for “credit unions best kept secret”).

How can credit unions be the most trusted if nobody knows about them?

***

Harris also found that online-only banks are the least trustworthy. REALLY? How would consumers know that? What percentage of the population has actually done business with an online-only bank? (I don’t know. It’s a serious question).

I can’t imagine that it’s a particularly large percentage. Yet some researcher thinks it’s OK to ask consumers about their trust in something those consumers have no experience with, and to publish the results as if it were gospel.

Interestingly, a day or so after The Financial Brand reported the Harris trust results, Jim Marous published a piece there titled Is It Time For Digital-Only Banking? citing a Javelin Research study which found that “71% of those who use mobile banking say that online and mobile banking is sufficient for their needs.”

Marous is asking a great question, and I don’t dispute Javelin’s findings in any way. But how do you reconcile that with Harris’ findings that consumers have little (or no) trust in online-only banks?

***

According to the Harris poll, half of US adults say their trust in banks has declined over the past few years.

Edelman would beg to differ. Or, at least, they could beg to differ.

According to Edelman’s trust survey, which they conduct annually, 38% of Americans trusted banks in 2009. In 2014, that percentage increased to 46%. By the way, as a point of comparison, 48% of Americans said they trusted “businesses in general,” so the level of trust in banks isn’t too far out of line.

So, is trust in banks increasing or declining? The answer is: Whatever you want it to be.

***

I might not be comparing apples to apples with the Harris and Edelman studies.

Harris appears to capture the “change in a consumer’s level of trust” while Edelman is reporting the change in the “overall percentage of consumers” that trust banks. It’s conceivable that there are a lot of people who a few years ago said they had “very little” trust in banks, and today said they have “no” trust, which, of course, would be a declining level of trust.
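To show that the two findings aren’t necessarily contradictory, here’s a toy example with made-up numbers (mine, not Harris’ or Edelman’s):

```python
# Hypothetical panel of exactly 100 consumers, so counts double as percentages.
# Trust scale: 0 = none, 1 = very little, 2 = some, 3 = a great deal.
# Each tuple is (trust level a few years ago, trust level today) for one person.

panel = ([(1, 0)] * 50   # slipped from "very little" to "none" -- a decline,
                         # but not counted as trusting banks in either year
         + [(1, 2)] * 8   # moved from "very little" to "some"
         + [(2, 2)] * 38  # trusted banks in both years
         + [(1, 1)] * 4)  # unchanged at "very little"

declined = sum(1 for before, after in panel if after < before)
trust_before = sum(1 for before, _ in panel if before >= 2)
trust_now = sum(1 for _, after in panel if after >= 2)

print(f"Say their trust declined: {declined}%")     # 50%
print(f"Trusted banks back then:  {trust_before}%") # 38%
print(f"Trust banks today:        {trust_now}%")    # 46%
```

Both headline findings come out of the same hundred (hypothetical) people.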

This raises questions I don’t hear a lot of people asking: What does it really mean to have a “great deal,” “some,” “very little,” and “no” level of trust? Is your definition of a “great deal” of trust the same as mine? Is trust a “bucket” whose fill level we can accurately measure?

***

Bottom line: If the trust survey data tell a story that supports your financial institution, please don’t let my comments–or any modicum of common sense–get in your way of using the data to your advantage. That’s what quantipulation is all about!

But please don’t deceive yourself into thinking that the findings from these studies have any correlation to who consumers bank with, or how they make their decisions about who to do business with.

 

Durbin Delusions

If you follow me on Twitter, you know that I’m fond of responding to delusionary and BS tweets by saying “don’t bogart that joint–pass it over to me.” It’s a reference to a line from a Little Feat song (if you’re not familiar with the song–or God forbid, the band–please leave this site now, and go back to listening to your little hippity hop Pandora station).

I’m reminded of that line–once again–by a press release (dated September 30, 2014) from the Merchants Payment Coalition which made the following claim:

“Debit card reform has helped consumers save almost $18 billion and supported 100,000 new jobs in three years.”

My take: Total BS.

***

The date of the press release is important because the savings claims come from a 2012 study which claimed that, after its first year of implementation, the Durbin Amendment saved consumers $6 billion, and that merchants added 37,500 jobs as a result of the regulation.

The claim of $18 billion in savings and more than 100,000 jobs created comes from (as the press release puts it) “extrapolating those findings.”

Unfortunately (for the MPC), these claims don’t hold water. Below is a chart showing retail employment trends from the Bureau of Labor Statistics. What it shows is that:

  1. Between January 2004 and January 2008, when pre-Durbin interchange rates were in effect, retail employment grew by half a million jobs. In other words, despite so-called “unfair swipe fees,” retailers added jobs to their payrolls.
  2. Between January 2008 and January 2010, more than a million retail jobs were lost. You could argue this, but I’d say the overall economy caused that, more so than high interchange fees.
  3. Between January 2010 and January 2012, about half a million retail jobs were added back. Since the Durbin Amendment didn’t go into effect until October 2011, it’s a stretch to attribute too many of those jobs to the regulatory changes.

Retail Trade Employment (source: Bureau of Labor Statistics)

The following chart shows retail spending in the US over the same period of time (2004 to 2014).

What do you notice? The change in employment pretty much tracks the change in retail spending. Reality: Changes in merchant employment levels have little to do with the Durbin Amendment.

***

What about those $18 billion in savings we consumers have supposedly seen? The MPC conveniently ignores other studies, like one published in the Loyola Consumer Law Review, titled Misguided Regulation of Interchange Fees, which found:

“Gas retailers received over $1 billion in annual savings due to reduced interchange fees. While this should mean savings of roughly $0.03 per gallon, no savings have been passed on to consumers. This is especially disconcerting as debit cards account for one third of all transactions and over half of non-cash payments for gas retailers. If retailers that receive such a significant portion of their payments from debit cards are not passing along the saving to consumers, it is likely most retailers would refrain from doing so as well.”

The Loyola paper cited an Electronic Payment Coalition study that found that “consumer prices one year after the implementation of the Durbin Amendment actually rose 1.5%.” The Loyola paper, however, does point out that the EPC study “failed to hold certain factors such as inflation constant, [so] it is unclear the actual effect the Durbin Amendment had on consumer prices.” In addition, the Loyola study points out that “small business owners were forced to raise prices for consumers because of the interchange regulations.”

An academic study titled Recent Trends and Emerging Practices in Retailer Pricing points out that “retail consolidation, changing manufacture practices, and advances in technology directly affect both retailer cost and prices. In addition to the medium or channel (Internet vs offline), other marketing mix variables, such as advertising and promotion, customer factors, positioning of the retailer, and competition within and across channels or store formats, influence retailer prices.”

In other words, the MPC’s claims of consumer savings resulting from the Durbin Amendment are without merit.

Redefining Analytics In Banking

A decent-sized bank recently held what it called a “Digital Summit,” bringing together senior execs from various business units and functions to discuss the bank’s opportunities regarding digital technologies.

Predictably, a good portion of the discussion involved customer data and analytics. Well, to be more specific, predictive analytics. Actually, to be even more precise, next best action models.

A rather large fintech vendor–who I bet most of you have heard of–presented on the topics of analytics and next best action (not “offers,” mind you) models. The presentation contained more BS than a field of cows.

This particular vendor could not refrain from spewing numbers regarding the improvement in response and conversion rates its clients have seen from deploying its technology–60% lift here, 54% improvement there, 2x this and 4x that. Absolutely no substantiation of the claims.

***

Fortunately, I don’t think many of the execs bought into it. At least, that’s the sense I got, when the president of the retail banking group asked me at the end of the day to sum up my thoughts. I told them that I was afraid that they were placing way too much faith in just one type of analytical model–next best action–at the expense of other models, at the expense of more fundamental data management capabilities, and more importantly, at the expense of higher business priorities.

The look on his face–as well as on a lot of others’–told me my fear was actually unfounded. They were looking for confirmation of what they silently believed.

***

Why was I down on next best action? The vendor’s pitch was basically: Get a 360 degree view of the customer, and then you’ll have the data you need to determine where to go with the relationship, and that, at any interaction, you’ll be able to recommend what the customer’s next best action–not just offer–is.

My take: Nonsense. Total freaking nonsense. First of all, if you integrate all the data you have about your customers across lines of businesses and channels, what you end up with is all the data you have about the customer. What about the rest of their financial relationships? What about their past history? What about their future goals? What about…a thousand other things that you really need to know in order to make the right recommendation to a customer?

The vendor here demonstrated absolutely no understanding that, when a next best action model is developed, the recommended actions must be pre-defined. What that means–and I had to explain this to the head of the retail banking division–is that there’s a pretty good likelihood that many ancillary services that the bank provides could be left out of the model because of an insufficient volume of data about them.

Reality is that there can be an infinite number of “best next actions” a customer could take. A bank lacking a well-honed, well-developed, long history of analytics deployment isn’t a good bet to get it right on the first try. Especially not working with a vendor that provides technology, and not marketing analytics services.
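For the quantitatively-challenged, here’s a stripped-down illustration of what “pre-defined actions” means in practice. The action names and the data threshold are hypothetical, not the vendor’s:

```python
# A minimal sketch (my own illustration, not the vendor's product) of why the
# action catalog in a next-best-action model has to be pre-defined. Any
# candidate action without enough historical outcome data can't get a
# reliable propensity score, so it tends to fall out of the model entirely.

MIN_OBSERVATIONS = 5_000  # arbitrary threshold, for illustration only

# Hypothetical candidate actions and the number of historical outcomes
# the bank has on file for each.
candidate_actions = {
    "offer_credit_card": 120_000,
    "offer_heloc": 45_000,
    "suggest_budgeting_tool": 9_000,
    "refer_to_trust_services": 600,      # ancillary service, thin data
    "refer_to_small_biz_banker": 1_100,  # ditto
}

modeled = {a: n for a, n in candidate_actions.items() if n >= MIN_OBSERVATIONS}
dropped = sorted(set(candidate_actions) - set(modeled))

print("Actions the model can actually score:", sorted(modeled))
print("Ancillary actions likely left out:   ", dropped)
```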

***

The underlying problem here is…well, there are probably a bunch of problems at play here. Let’s focus on one of them: The definition of analytics.

At one of the breaks during the day, I was chatting with one of the attendees who hit the nail on the head when he said: “I don’t think we’re on the same page regarding what analytics is.” Bingo. He went on to say: “We’re limiting the discussion to just advanced statistical techniques.” Jackpot.

***

In his book Competing on Analytics, Tom Davenport included what he called an Analytical Maturity Model.

I wrote about this three years ago, pointing out that the model is wrong. There is no shortage of organizations that do predictive modeling and have optimization models that do not “compete on analytics” or have anything that comes close to a “competitive advantage.”

But there is one aspect of this model that should be quite useful to banks–and specifically to the one I’ve alluded to above. It’s the spectrum of elements that comprise analytics.

What Davenport got wrong (sorry, Tom)–and what bankers need to understand–is that you can’t place a value judgement on the various elements. That is, standard reports are not “less valuable” or “less important” than ad hoc reports, which, in turn, are not less valuable or important than queries, alerts, or advanced statistical techniques.

If running an ad hoc report gives you the information you need to make a great decision, taking six months (and who knows how much cost) to build a predictive model that helps you come to the same decision is not better.

***

Source: SAS Institute

What Davenport got right (gotta give the man a little credit, no?) was that, in building an analytics capability, you don’t start in the upper right quadrant of his model. The lack of understanding on this point on the part of the FinTech vendor was just another thing that ticked me off about them.

If they’re so good at analytics, I would expect them to understand two things: 1) That there’s a path to go on (from the left side of Davenport’s model to the right), and 2) That the key to analytics success is not in integrating the data or deploying technology that enables sophisticated analytical techniques, but in using the data in the context of the organization’s sales and support processes.

This is why I told the bank I was fearful they were putting too much faith in next best action models: I didn’t think they knew how to use them. Will their branch tellers really be willing to make product suggestions? Will their branch sales people feel comfortable doing what the model suggests if their gut tells them otherwise? Is there any consideration of what the contact cadence should be? That is, if an offer is shown when somebody logs into their account on Monday and isn’t accepted, should we repeat the offer when she comes into the branch on Tuesday?  Should we move to a different offer–oops, I mean action?
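To make the cadence question concrete, here’s a sketch of the kind of cross-channel suppression rule a bank would have to define and test. The cooling-off window is arbitrary and purely illustrative, not anything the vendor showed:

```python
from datetime import date, timedelta

# Sketch of a cross-channel suppression rule -- one possible answer to the
# cadence questions above, not the bank's or the vendor's actual logic.
# If an offer was presented in any channel and not accepted, hold it back
# everywhere for a cooling-off period before pitching it again.

COOLING_OFF = timedelta(days=14)  # arbitrary; a real rule needs testing

def next_action(recommended, last_presented, today=None):
    """Suppress a recommendation that was presented recently and not accepted.

    recommended    -- action the model wants to show (e.g. "offer_heloc")
    last_presented -- dict of action -> date it was last shown, in any channel
    """
    today = today or date.today()
    shown = last_presented.get(recommended)
    if shown is not None and today - shown < COOLING_OFF:
        return None  # stay quiet rather than repeat the same pitch
    return recommended

# Shown online on Monday, customer walks into the branch on Tuesday:
history = {"offer_heloc": date(2014, 11, 17)}
print(next_action("offer_heloc", history, today=date(2014, 11, 18)))  # None
```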

Bottom line: Successful deployment of analytics means business process change. Some bankers–and a lot of vendors–need to stop thinking of analytics as some kind of panacea that’s going to magically improve marketing performance.

Redefining analytics in banking means thinking about it as “the effective use of data” and not narrowly as just advanced statistical techniques.

Making Analytics Strategic

Yeah, I know. I’m supposed to think through all the issues, formulate my thoughts, compile my wisdom, and then publish it all in a blog post. Afraid that’s not the case this time. I’m breaking the rules because I’m hoping a few of you will be able to help me with the issue identification/thought formulation process.

At the upcoming 2014 Customer Analytics in Financial Services conference, I’ll be presenting on the topic Making Analytics Strategic. My premise is that, although the topic of analytics has become more popular over the past few years (a trend further fueled by the emergence of the Big Data craze), the analytics groups in many banks struggle to be seen as significant contributors to the strategic direction (both from a marketing and business perspective).

Even in FIs with a strong analytics presence, and years of experience in data-driven marketing techniques, the analytics group itself is viewed as a bunch of statistical nerds, relegated to some remote section of a floor in headquarters that none of the executive team has ever been on–or would want to be seen on.

***

Why is this? That is, why are analytics teams–even those with strong quantitative modeling and measurement skills–seen as not having (or deserving of) a spot at the strategy table? I have some theories:

1) Communication breakdown. I’ve worked with a lot of analytics folks (once upon a time I considered myself one). Hard truth is that there are a lot of them I wouldn’t dare put in front of a senior exec in a large organization. Honestly, I’d be worried that they’d go off on some technical, mathematical discussion, oblivious to the fact that the exec in front of them has no clue what they’re talking about.

2) The data remains the same. Despite the proliferation of new data types and sources over the past few years, some analytics groups resist using these new data elements. Maybe they don’t know how to use them, maybe they don’t want to use them, maybe they can’t get the data in the first place. Whatever the reason, the result is that the rest of the organization sees the analytics group as either missing out on the Big Data movement or not helping the organization capitalize on it.

3) Marketing campaign focus. The work that many analytics groups do is focused at the individual marketing campaign level, and they have neither the exposure nor the ability to see above and beyond those campaigns. Many analytics groups are very good at what they do–modeling and measurement of marketing campaigns–but lack the skills to contribute at a strategic level.

4) Trampled under foot. Lastly, some analytics groups just don’t have the leadership to steer them out of the backwaters. They do what they’re asked to do (create models and reports), but never dare ask “what else can we do?” or “what could we be doing?”

***

My challenge is compiling a list of recommendations for what an analytics group can do to raise its SQ (strategic quotient). Some of the new things I know I will include:

1) New metrics. Analytics groups shouldn’t be sitting around waiting for some smarmy, self-serving consultant to come up with a new metric like Net Promoter Score. They need to help the rest of the business find new ways of measuring and evaluating marketing and business results.

2) New language. Analytics groups need to speak the language of the business. On one hand, sadly, this language is full of buzzwords and platitudes (like “omnichannel” and “customer experience”). On the other hand, it gives the analytics group an opportunity to change the way the business understands and addresses those buzzwords, and how analytics can contribute to realizing real business results from them.

3) New connections. In my humble opinion, the key to a functional group’s success in an organization–and whether or not they (or their leader) has a seat at the strategy table–is the group’s personal connections throughout the organization. Analytics groups must find opportunities and ways to work across organizational and departmental lines to integrate analytics.

Think “analytics inside” or “analytics everywhere.” Making that a reality is no easy feat, but critical to making analytics strategic.

***

What do you think? Do you have stories, ideas, suggestions for how to make analytics strategic? Not only would I greatly appreciate any input you have, but if I use it in my presentation I will certainly cite you as the source.

And if you can make it to New York on September 15th and 16th, I hope to see you at the conference.

Competing On Performance: Marquis’ Member Value Statements

 

A recent Financial Brand article titled Proving The Value of Credit Union Membership highlighted Marquis Software’s Member Value Statements (MVS), a new service that “calculates specific dollar amounts for each member showing the relative value of their credit union’s products compared to similar products offered by nearby FIs.” The article quotes Marquis’ GM/Creative Director Tony Rizzo:

“We wanted Member Value Statements to make personal, specific and relevant comparisons for people, rather than produce pithy stats that are good soundbites for a newsletter. We decided to make ours so that it would show members all their relationships with their credit union—all of their deposit and loan products—then compare them with the top 15 or 20 banks in that credit union’s specific footprint.”

My take: That’s what I’m talking about!

When I talk about “competing on performance,” that is. In a previous post here titled Competing On Performance, I wrote:

“The next wave of banking competition is competing on performance. That is, who best helps the customer manage and improve their financial lives—and not who has the best rates or fees, or who claims to have the best service.”

Although I still believe we need a single score–a Finscore–to capture our financial health, Marquis’ MVS is a great step towards changing how FIs approach marketing. It isn’t enough for FIs to tell consumers “we will be the best FI for you”–they have to show consumers that they have been, and will continue to be, the best FI for them.

In other words, marketing doesn’t end at the sale. That might seem obvious, since every FI cross-sells their customers and members (to death). But pushing additional products and services does little to help customers confirm that they made the right decision when they selected an account from that FI.

***

While I love the MVS concept, there are some elements of Marquis’ approach that I would change (so what else is new?):

1) Comparison points. Marquis recommends against including other credit unions in the comparison, because, according to Rizzo, “when I look at market share and penetration numbers across product categories, credit unions—while important—are not a significant force in most markets. And besides, credit unions don’t usually turn on one another.”

I disagree. This is about the members (or, if a bank was doing this, its customers). Not about the “let’s all hold hands and bash the bank” attitude too prevalent in credit union land. If a credit union’s members had to choose between other credit unions when selecting an account from the chosen CU, those credit unions should be included in the comparison.

2) Channel deployment. Marquis says that “with data and postage and printing, you should figure about $1.5 per member, per project” to send out MVS. As long as we’re talking averages, the “average” credit union has somewhere around 15,000 members, so sending out MVS would cost Average CU less than $25k.

I’m not looking to take away any revenue opportunities from Marquis, but FIs should be able to avoid the mailing cost by posting a quarterly MVS to the online and mobile banking platforms. Nothing wrong with sending a paper MVS, but given the cost of postage and printing, FIs would get more bang for their buck with more frequent communication of the value statements.

3) Macro- vs. micro- deployment. Member value statements help each individual member see how they’re doing, and the statements may very well be effective at driving additional product sales, as Marquis claims.

But FIs should use this concept to create MemberSHIP value statements. That is, at the macro- level, how have all members of the CU performed? This would likely have to be presented in percentages and ranges, as in, XX% of the CU’s members saved between $Y and $Z in 2013, versus what they would’ve spent/saved/earned if they had chosen a different FI.

Granted, this would be harder to compute, but that’s what “competing on performance” is all about.

4) ROI. Marquis says that credit unions that have executed the program have received a minimum of three times their investment back in terms of net profit. Smart Financial CU says they “can’t say they’ve received that kind of ROI quite yet” and Wright-Patt says it was able to attribute approximately $50 million in new relationships as a result of the project.

I’m having trouble with some of these numbers. Three times their investment in terms of net profit? How do you know what the net profit of a particular account really is? $50 million in new relationships? I thought these statements went out to existing members? Where did the new relationships come from? And what does the $50 million number mean? Is that 10 $5 million mortgages?

I understand the need to demonstrate ROI on marketing investments. But if the CMO has to justify every friggin’ marketing expense in terms of ROI, that CMO has a bigger problem (namely, no confidence among the executive team in marketing). 

MVS represents a new approach to marketing an FI’s products and services, and how it communicates value to its members/customers. My suggestion to FIs looking to deploy MVS is to run a test. Define two sets of members: a test group and a control group. The two groups should look as similar as possible in as many dimensions as possible, but especially tenure, number of products currently owned, balances, and demographics.

Then, send out MVS to the test group–both paper statements as well as online delivery–but suppress other marketing messages. Continue to market as usual to the control group. I don’t know how the CUs currently sending out MVS define their tracking period, but I think you need to give this test at least a year.
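For what it’s worth, here’s a rough sketch of how that matched split might be done. The field names and bucketing are mine, purely illustrative:

```python
import random
from collections import defaultdict

# Rough sketch of the test/control split described above (field names and
# buckets are hypothetical). Members are stratified on a few key dimensions --
# tenure, products owned, balance tier -- and each stratum is split randomly,
# so the two groups start out looking as similar as possible.

def stratum(member):
    return (
        member["tenure_years"] // 5,     # 0-4, 5-9, 10-14, ... years
        min(member["num_products"], 5),  # cap at 5+ products
        member["balance"] // 25_000,     # $25k balance tiers
    )

def split_members(members, test_share=0.5, seed=42):
    rng = random.Random(seed)
    strata = defaultdict(list)
    for m in members:
        strata[stratum(m)].append(m)

    test, control = [], []
    for group in strata.values():
        rng.shuffle(group)
        cut = int(len(group) * test_share)
        test.extend(group[:cut])
        control.extend(group[cut:])
    return test, control

# The test group gets the quarterly MVS (paper plus online) with other
# marketing suppressed; the control group gets marketing as usual.
```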

After four quarters of tracking the impact of MVS, the true return on the investment–as well as the return on the new approach to marketing–should be more apparent.