Why Content Marketing Falls Short

In a Harvard Business Review blog post titled The Content Marketing Revolution, the author writes:

“We are in the midst of a historic transformation for brands and companies everywhere — and it centers on content. Nine out of ten organizations are now marketing with content – that is, going beyond the traditional sales pitches and instead enhancing brands by publishing relevant information, ideas, and entertainment that customers will value. The phenomenon of content marketing and brand publishing has unfolded rapidly because it responds to consumer preference. According to the Content Marketing Institute, 70% of people would rather learn about a company via an article than an ad.”

There is at least one point of evidence supporting this “phenomenon.” A Marketing Daily article reports that:

“At the end of a Procter & Gamble quarterly earnings call, P&G’s chief financial officer, Jon Moeller, revealed a striking shift in a core business strategy. Moeller announced that the company ‘thinks it can impress consumers who buy household products such as Head & Shoulders shampoo more effectively by ramping up marketing via social media, mobile, search engines and digital content on the Internet.'”

As the article notes, this is as remarkable for the fact that it was the CFO, not the CMO, talking as it is for the shift in marketing philosophy.

My take: Content marketing isn’t what it’s cracked up to be. The real revolution in marketing–for products with any degree of consumer consideration–will be activity-based marketing.

***

First, let’s dismiss the “finding” that “70% of people would rather learn about a company via an article than an ad.”

Of course people are going to say that. No one wants to admit to being influenced by advertising. And a majority of consumers will always say that they make rational, well-informed decisions. Yet when we look at consumers’ actual purchase decisions, on everything from soap to Presidents, we realize that’s hardly true.

***

The HBR post claims:

“Brands are no longer merely peddling products; they’re producing, unearthing, and distributing information. And because they do, the corporation becomes not just economically important to society, but intellectually essential as well.”

Intellectually essential? Give me a break. Exactly how does “content” about toilet paper and laundry detergent qualify as “intellectually essential”? If I’ve learned anything about marketing, it’s that some marketers will glorify the latest trend as the be-all and end-all, and content marketing fits the mold nicely.

Another thing I’ve learned about marketing is that marketers–unlike accountants who have a nice neat little thing called a chart of accounts–have no standard definitions for the terms they throw about. Review the past seven years’ history of “customer engagement” to help prove my assertion.

And so what, exactly, qualifies as content? Is there an agreed-upon definition regarding how much brand-related information is allowed in something called content? No, of course not.

“Content marketing” is merely the evolution of advertising from messages (1960s through 1990s) to stories (1990s to present) to content. No matter what you call it, it’s still about persuasion–getting someone to buy your product, or at least, be more favorably disposed to your company and/or product, so they will buy one day.

***

A study from Demand Metrics goes to great lengths to prove that “interactive” content is more effective than “passive” content. The levels of content interactivity (and the percentage of marketers whose content fits each level) were as follows:

  • Very passive (content produces little or no engagement with the buyer): 13%
  • Somewhat passive (content produces slight engagement with the buyer): 23%
  • Moderate (content produces some level of engagement with the buyer): 39%
  • Slightly interactive (content produces measurable engagement with the buyer): 19%
  • Very interactive (content produces a highly engaging experience with the buyer): 6%

Source: Demand Metrics, June 2014

The study goes on to show that 70% of marketers consider interactive content to perform “moderately well” or “very well” at producing conversions, versus 36% who rated passive content in those two categories.

A couple of problems here. First, the study found that 17% of participants have no content effectiveness measurements in place, and 49% only use basic metrics such as clicks or downloads. So how could they know how effective interactive or passive content really is?

Second, if content is “intellectually essential,” as the HBR post claims, then there’s no reason why passive content–which could stimulate thought processes as easily as interactive content–needs to be inferior to so-called interactive content.

This line of reasoning on the part of Demand Metrics does nothing except take us back to the argument over what “engagement” means.

***

The lack of a common definition of what content is (and isn’t) is not the only thing holding content marketing back from being the “revolution” that HBR claims. The Demand Metrics study also found that:

“During their journey from need to purchase, only 25% of buyers reveal their interest to vendors early. 90% of study participants believe that it is somewhat or very important to engage buyers earlier in their journey.”

And therein lies the real reason why content marketing falls short: In and of itself, it does nothing to change the timing of interacting with customers and prospects; it only changes the nature (or maybe I should say “form”) of the message.

***

The real revolution in marketing (for considered products) is activity-based marketing:

Marketing within the context of an activity being performed by a customer or prospect.

Activity-based marketing doesn’t simply change the nature of messaging, it changes (and, in some cases, creates) the buying or decision journey or process.

USAA’s Auto Circle mobile app is a great example of activity-based marketing. It’s an app that creates, guides, and simplifies not just the car buying experience, but the associated purchases and decisions like car financing and car insurance.

Sites like Edmunds can provide content to help car shoppers understand the differences between Car A and Car B. But an activity-based marketing app like Auto Circle creates and tracks the entire buying/decision process. Activity-based marketing has a number of benefits. It:

1) Helps marketers understand where a prospective buyer is in the decision journey and what type of message (whether it’s content or something else) is most appropriate. By tracking activities and actions over time through an activity-based marketing app, marketers can collect the (big) data needed to better understand the effectiveness of certain marketing actions at the various points in the consumer journey. (A rough sketch of what that kind of event data might look like follows this list.)

2) Gets marketers involved in the customer’s decision process far earlier than they ever have been in the past. What gets a USAA (in the case of Auto Circle) involved early in the process isn’t content–it’s process. It’s the promise of creating a better, simplified, and more convenient process that gets a prospective car buyer to download the app, and use it to manage their car (and financing and insurance) shopping process.
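To make the data-collection point in #1 a little more concrete, here’s a minimal sketch, purely my own illustration (not USAA’s implementation or anyone’s actual schema), of the kind of journey events an activity-based marketing app could log, and how a marketer might tally actions by journey stage:

```python
from collections import Counter, defaultdict
from dataclasses import dataclass
from datetime import datetime

# Hypothetical journey event logged by an activity-based marketing app.
# Field names, stages, and actions are illustrative assumptions, not a real schema.
@dataclass
class JourneyEvent:
    user_id: str
    stage: str        # e.g., "research", "financing", "insurance", "purchase"
    action: str       # e.g., "compared_models", "requested_loan_quote"
    timestamp: datetime

def actions_by_stage(events):
    """Tally which actions happen at each stage of the buying journey."""
    counts = defaultdict(Counter)
    for e in events:
        counts[e.stage][e.action] += 1
    return counts

# Toy usage with made-up events.
events = [
    JourneyEvent("u1", "research", "compared_models", datetime(2014, 6, 1)),
    JourneyEvent("u1", "financing", "requested_loan_quote", datetime(2014, 6, 3)),
    JourneyEvent("u2", "research", "compared_models", datetime(2014, 6, 2)),
]
print({stage: dict(c) for stage, c in actions_by_stage(events).items()})
```

Even a toy structure like this makes the point: the app captures where the buyer is in the journey, not just whether they clicked on a piece of content.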

***

Bottom line: Marketers can fawn over content marketing all they want. It’s just another stop in the delusionary journey of advertising that makes them believe they’ve got something new to offer. They don’t. Not until they develop activity-based marketing apps, that is.


BigDataDoofus Hunting

In my personal life, I’m not a hunter, but I have nothing against them. As long as you’re not the Vice President of the United States, and not shooting your friends on hunting trips, I don’t care what you do.

But in my business life, I do like to go hunting. For fakes, phonies, scam artists, and generally anybody else with serious delusions about the way business works. Over the past few years, I’ve found a new target to hunt for: The (not so) elusive BigDataDoofus.

BigDataDoofuses have multiplied like rabbits in the past few years (I think that many of them could be the offspring of a species of animals known as SocialMediaMorons). They roam the blogosphere and social media channels, dropping little BigDataPoops all over the place. Like:

“Advances in capturing and analyzing big data allow us to decode human DNA in minutes, find cures for cancer, accurately predict human behavior, foil terrorist attacks, pinpoint marketing efforts, prevent diseases and so much more.” (source)

Kumbaya.

Or how about that article, published on LinkedIn, titled The Story of the Way Big Data Shaped World War II? Man, that was a stinky one!

—————

A white paper from 1010 Data, titled 2013 Big Data in Business Study, caught my attention. White papers like these are great hunting grounds for BigDataDoofuses. I think I found me some there.

The author(s) of the study surveyed 158 executives and asked them: “Which industries, if any, will see the most success with Big Data by the end of 2013?”

The industry most-frequently cited, by nearly half of respondents, was Finance (I’m assuming that’s financial services). Retail was second, mentioned by 42% of respondents, followed by Consumer Products (what’s the difference between Retail and Consumer Products?), which was listed by 39% of respondents.


Were the respondents right? I have no idea how anybody could possibly say yes or no. What does “Big Data success” mean? 

—————

The next question asked of the survey respondents is even more ludicrous: “When, if at all, do you think Big Data will reach its potential in the business world?”

As the white paper puts it:

“An overwhelming majority of technology executives (79%) believe that Big Data will reach its potential in the next one to five years. A smaller proportion (12%) believe Big Data has already reached its full potential.”


What, exactly, pray tell, is Big Data’s “potential”?

How can anyone answer the question posed in the survey without clarifying what Big Data’s potential is?

To the 12% that believe Big Data has already reached its full potential: What exactly has been achieved that makes you conclude this?

Personally, I don’t think the 12% necessarily have it wrong. It’s quite possible that Big Data’s “potential” is so limited that that potential has already been reached. 

More than a third–35% to be exact–of respondents said Big Data’s potential will be reached by the end of 2014.

If Big Data’s potential is, as described in the quote above, “find cures for cancer, accurately predict human behavior, foil terrorist attacks, prevent diseases,” then the 35% of executives who think this potential will be reached by the end of 2014 are BigDataDoofuses. 

—————

Roughly half of the respondents to this survey said that the reasons companies are holding out on using Big Data are that they “need better time to value for Big Data” and “need simplified solutions for Big Data with fewer moving parts.”

I don’t even understand what that first answer means. “Better time to value”? What is that?

And to the 35% that think Big Data will reach its potential by the end of 2014: If companies are holding out on using Big Data, how can Big Data’s potential be reached so soon?

The only reasonable answer to that is that Big Data’s potential is very limited. 

—————

The great thing about being a BigDataDoofus hunter is that, for now at least, they’re anything but a rare breed. If we’re lucky, though, they will become an extinct species. 

Bigadatabytes

At the recent Consumer Insights and Analytics in Banking conference (which was very good, BTW), no fewer than 5 speakers made it a point to tell the audience:

“<Fill-in-a-number> of <fill-in-a-data-measurement-metric> are created every <fill-in-a-time-period>.”

Needless to say, there was no consistency in the metric. Two petabytes a day, five terabytes a week, the numbers were all over the map.

Sadly, this is what it has come down to: A made up statistic is used to justify a made up management concept.

—————

I think I’m safe in assuming that, in business conferences everywhere and across industries, the same claim about the amount of data being created every day is being made by speakers.

It’s sad when you realize what uncritical, unthinking sloths we’ve become to allow this to happen. For two reasons:

1) The claim has no emotional impact on people. I’m willing to bet that the conference speakers who throw out this new-data-every-day stat do so in an attempt to impress upon the audience the vast quantity of new data being created every day. Just one problem: 99.9% of us have no clue what a terabyte, petabyte, zettabyte or yottabyte is. (And why wouldn’t they go in alphabetical order when naming this stuff? What kind of idiots are in charge of this?)

If you tell someone that something weighs a ton, we get it. A ton is 2,000 pounds. We typically weigh anywhere from 100 to 300 pounds. There’s a sense of context and relativity.

What the hell is a terabyte or petabyte? Is that a lot? I bet most of us have no idea how many gigabytes of information we have on our hard disks. There’s no sense of context or relativity when it comes to measures of data for the vast majority of us.

So when a conference speaker proclaims “We create 2 petabytes every day!” it means nothing — absolutely fricking nothing — to pretty much everybody out there.
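For what it’s worth, giving the number some context takes one line of arithmetic; the problem is that nobody bothers. A back-of-the-envelope sketch (the 500 GB laptop drive is an assumed reference point, nothing more):

```python
# Back-of-the-envelope conversion of "2 petabytes a day" into something relatable.
# The 500 GB laptop drive is an assumed reference point, not a claim about anyone's data.
BYTES_PER_GB = 10**9
BYTES_PER_PB = 10**15

daily_data_pb = 2        # the kind of number conference speakers throw around
laptop_drive_gb = 500    # assumed "relatable" unit

laptop_drives_per_day = (daily_data_pb * BYTES_PER_PB) / (laptop_drive_gb * BYTES_PER_GB)
print(f"{daily_data_pb} PB/day is about {laptop_drives_per_day:,.0f} laptop hard drives per day")
# -> 2 PB/day is about 4,000 laptop hard drives per day
```

Four thousand laptop drives’ worth of data a day is at least something an audience can picture. “Two petabytes” isn’t.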

2) There’s no way anybody can estimate how much data is created. In the movie The Usual Suspects (one of my favorites) Verbal Kint says “The greatest trick the Devil ever pulled was convincing the world he didn’t exist. And like that, poof. He’s gone.” The greatest trick the consulting devils ever pulled was convincing the world that 2.5 quintillion bytes of data are created every day. And like that, poof. It’s true.

There are three unknowable things in life: 1) How we got here; 2) How much data is created on a daily basis; and 3) What my wife will spend our money on today. 

I shouldn’t have to explain this to you. There is simply NO WAY IN HELL anybody can even begin to estimate the amount of data that is created. The reasons start with the fact that “data” in and of itself is not a commonly defined concept.

—————

But debunking the claims of the amount of data that is created every day doesn’t get us around the problem here: Conference speakers need a statistic they can throw out there to impress the audience and make Big Data appear to be something that it’s not (i.e., real). 

So I would like to propose a standard here: Can we all agree that from here on out we claim that “42 bigadatabytes of data are created every day”?

Why 42? As my friend @GeoffIDC tweeted “42, right? It’s always 42.” Yep. 42 it is. 

—————

Big thanks and apologies to @gilesnelson and @GeoffIDC who kind of came up with the concept for this post. I have no problem stealing ideas from other people. But I do want to give them credit for it. 

Marketing Analytics Malpractice

Marketing Charts reports on a study from Duke University regarding marketers’ use of marketing analytics:

“According to the latest CMO Survey from Duke University’s Fuqua School of Business, the percentage of projects that use marketing analytics has dropped to 29% from 35% at this time last year, while two-thirds of CMOs now say their companies don’t formally evaluate the quality of marketing analytics, up from 53.2% last year.”

Other findings from the study show that:

  • CMOs’ rating of marketing analytics’ contribution to overall marketing results (1-7 scale where 1 = Not at all and 7 = Very highly) dropped from an average of 3.9 in August 2012 to 3.5 in August 2013 (64% rated it 4 or less). Financial services CMOs rated marketing analytics contribution an average of 3.5, significantly lower than Communication/Media CMOs’ rating of 4.9, Transportation CMOs’ rating of 4.6, and CPG CMOs’ rating of 4.4.
  • Just 15% of CMOs said they’ve quantitatively demonstrated the impact of social media on their business, yet, on average, they expect social media spending to increase from 6.6% of the marketing spend to 15.8% in the next five years. And when asked how well integrated social media is with overall marketing strategy (on a 1-7 scale where 1=not integrated and 7=very integrated), 56% rated it 4 or less.
  • Two-thirds of CMOs say they’re under pressure from the CEO or Board to show the impact of marketing (and 60% say that pressure is increasing). This shouldn’t be surprising, since only 36% said that their company can show the short-term impact of marketing spend on their business and 32% said they can show the long-term impact.

My take: This all adds up to marketing (analytics) malpractice.

—————

Overall, these findings are very consistent with the research I’ve done (in conjunction with The Financial Brand) specific to the world of financial services. They point to some very significant issues regarding marketing management in banking:

Branding is winning the marketing civil war. I’ve long held the belief that there is a civil war going on in marketing between branding-oriented marketers and quantitatively-oriented marketers. The branding-oriented marketers (who, in moments of weakness, I affectionately refer to as “brand bozos”) focus on advertising campaigns to generate marketing results. The quantitatively-oriented marketers, often referred to as database marketers (but feel free to call them geeks or nerds), like to develop quantitative models and run marketing campaigns. With the percentage of projects incorporating analytics on the decline, and the percentage of companies formally evaluating the quality of their analytics also on the decline, I can only conclude that quantitatively-oriented marketers are [still] losing the war.

Too many CMOs just don’t understand analytics. One of the reasons that branding is continuing to win the war is that many CMOs come from the branding (vs. quantitative) side of the marketing house. So naturally, their efforts and priorities will be focused on branding. But that’s no excuse to completely ignore the analytics side, yet that must be what they’re doing. How else can you explain why so many CMOs can’t demonstrate the value of marketing or the impact of social media in the face of increasing pressure from the CEO and Board to do so?

Big Data is stillborn. If I had a nickel for every article I’ve seen proclaiming the transformative impact of Big Data on…well, everything…I wouldn’t freaking be here! The results of the Duke study show me that way too few businesses are capable of doing anything with Big Data (even if Big Data were actually something other than a buzzword). The value of Big Data isn’t in the data — it’s in the ability to do something with it. If the results of the Duke study suggest to you that there are a lot of marketers improving their ability to do something with it, please show me the data you think suggests that. Fewer than half of respondents (41%, to be exact) said that they use customer data for targeting purposes. What the hell good is throwing more data — and more types of data — at them going to do if they’re not even using the customer data they have?

—————

Bottom line: There’s a helluva lot of marketing analytics malpractice going on out there. The reality of marketing analytics capabilities is nowhere near the hype and promise being put forth by the Big Data gurus.

The Shelf-Life Of Marketing Data

The MIT Sloan Management Review recently asked Does Data Have a Shelf Life? According to the article:

Creating insights from data is an important, and costly, issue for many companies. They spend time and effort collecting data, cleaning it, and using resources to find something meaningful from it. But what about after the insights have been generated? Do insights have a shelf life? If so, when should knowledge gleaned from old data be refreshed with new data?

[Researchers] suggest that for real-world Knowledge Discovery in Databases (KDD) — applications like customer purchase patterns or public health surveillance — new data is imperative. “It could bring in new knowledge and invalidate part or even all of earlier discovered knowledge. As a result, knowledge discovered using KDD becomes obsolete over time. To support effective decision making, knowledge discovered using KDD needs to be kept current with its dynamic data source.”

What do the researchers suggest companies do? “Model an optimal knowledge refresh policy.”

The author of the article wisely asked the researcher to explain what this is, in lay terms, and was told:

“The model itself aims at deciding when to run KDD to refresh knowledge such that the combined cost of knowledge loss and running KDD is minimized,” wrote Fang in an email. He explained that knowledge loss refers to the phenomenon that knowledge discovered by a previous run of KDD becomes obsolete gradually, as new data are continuously added after the KDD run. Knowledge loss has impacts on several levels: if KDD is run too infrequently, for instance, customers may not respond to promotions that are based on obsolete customer purchase patterns; yet there is a personal cost of managing the KDD process, and there are computation costs of running KDD, regardless of whether it’s run in-house or in the cloud, so running it frequently isn’t the answer.”

My take: I love it when the geniuses at MIT create stuff too complicated for any Harvard MBA to understand.
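To be fair, stripped of the jargon the model is just a cost trade-off: refresh too often and you pay for the KDD runs; refresh too rarely and you pay for acting on stale knowledge. Here’s a toy sketch of that trade-off, with made-up numbers of my own (a simplification for illustration, not the researchers’ actual model):

```python
# Toy version of an "optimal knowledge refresh policy": pick the refresh interval
# that minimizes average daily cost. All numbers below are made up for illustration.
REFRESH_COST = 500.0          # assumed cost of one KDD run (people + compute)
STALENESS_COST_PER_DAY = 2.0  # assumed cost of acting on knowledge that is d days old

def avg_daily_cost(interval_days: int) -> float:
    """Average cost per day if we refresh every `interval_days` days."""
    # Staleness cost accumulates between refreshes: 0 + 1 + ... + (interval_days - 1) days old.
    staleness = STALENESS_COST_PER_DAY * sum(range(interval_days))
    return (REFRESH_COST + staleness) / interval_days

best = min(range(1, 366), key=avg_daily_cost)
print(f"Refresh roughly every {best} days (toy numbers only)")
```

Whether any marketing department would actually build and maintain something like this is another matter, which I’ll get to below.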

—————

But seriously…

I can’t speak to data or “knowledge” regarding public health surveillance. I can tell you, though, that the use of the word “surveillance” by the researchers was not the smartest choice of words right now.

I can speak, however, to using data about customer purchase patterns to generate “knowledge.”

—————

Why is “knowledge” in quotes? Because I have no idea what the researchers are talking about when they use the word. Ask marketers what their current “knowledge” regarding customer purchase patterns is, and 999 out of 1000 will say “Huh?” (The other one will cite his firm’s Net Promoter Score).

Using data in marketing doesn’t go through some neat and orderly process (e.g., Data -> Insights -> Knowledge) like some academics would like to think.

Roughly speaking, there are two paths data does go through: 1) Data -> Model -> Action, and 2) Data -> Human Intervention -> Decision.

—————

The first path describes database marketing efforts, where data is input into (and used to develop) a marketing model, and after the model is run, action (contact/no contact) is taken (I could have called this third step “decision”, but it might be worth distinguishing an automated decision from a human decision).

This might sound like a straightforward process, but choosing the data elements that go into any model is a messy business that involves testing and is subject to a cost/benefit analysis of acquiring the data.
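For what it’s worth, here’s a bare-bones sketch of that first path, using a generic propensity model purely for illustration (the data, features, and threshold are made up; no particular marketer’s approach is implied):

```python
# Minimal sketch of the Data -> Model -> Action path: score customers with a
# simple propensity model and decide contact / no contact. Illustration only.
from sklearn.linear_model import LogisticRegression

# Made-up training data: [recency_days, purchases_last_year], did_respond
X = [[10, 5], [200, 0], [30, 3], [400, 1], [15, 4], [365, 0]]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)          # Data -> Model

prospects = [[20, 4], [300, 0]]
scores = model.predict_proba(prospects)[:, 1]   # probability of responding

for prospect, score in zip(prospects, scores):
    action = "contact" if score >= 0.5 else "no contact"   # Model -> Action
    print(prospect, round(float(score), 2), action)
```

Everything messy about the real world — which data elements to use, how to test them, whether they’re worth acquiring — happens before and around a snippet like this.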

The second path describes the other trillion ways in which marketing decisions get made. It’s messy. Lots of data, some elements more relevant (and/or timely) than others. But lots of human intervention. And lots of iterations.

But nowhere in these paths do marketing decision-makers stop and think about what “knowledge” they’ve gained.

In the context of the first path, the model could be thought of as “knowledge.” Since I don’t know of any marketer who would argue that the relevancy and accuracy of any marketing model is constant over an infinite period of time, you could say that that knowledge has a shelf-life.

Many marketers evaluate the effectiveness of their models at various stages in the life cycle of the model. A well-performing model isn’t likely to get messed with. As a result, a model to predict the shelf-life of the model isn’t something I see too many marketing departments adopting.

In the context of the second path, good luck identifying the “knowledge.”

Marketing practitioners just don’t think in terms of “knowledge.”

—————

What drew my attention to the Sloan article was the title: Does Data Have a Shelf Life? The article, however, isn’t really about data; it’s about identifying the shelf-life of knowledge.

Too bad, because the shelf-life of data is the more interesting topic.

The question, as stated, however, is a no-brainer. Of course, data has a shelf-life. The challenge isn’t figuring out whether or not data has a shelf-life, it’s figuring out what that shelf-life is. Reducing the problem down to a formula or model just isn’t realistic. Why not? Because of religion and politics.

—————

If you don’t think there’s religion in marketing, you’re a naive fool. There are countless marketers who believe something about marketing that can’t be empirically proved. And if you believe something on faith alone, that’s called religion.

As for the politics of data, assume for a moment that I have data that proves the marketing channel you manage produces superior results compared to other channels. Would you care if that data is three years old? You wouldn’t. But the managers heading up the other channels (looking to increase their budget) would care.

—————

Bottom line: There’s no question that marketing data has a shelf-life. But determining what that shelf-life is remains subjective, and I can’t imagine any marketing department relying on a model to figure it out.

As more data sources become available and are used by marketers — and the need to act on that data on a more real-time basis grows — figuring out the shelf-life of marketing data will become a bigger issue for marketers.

It may turn into an advantage for data providers, however. Those that can demonstrate the shelf-life of their data (as well as the shelf-life of competitive data sets) — and successfully defend the determination of that shelf-life — may gain competitive advantages. 

Data Storyology

It’s conventional wisdom by now that, with all the data we have to analyze, we have to find the “story.” Experts like Tufte have done wonders to improve our capabilities regarding data visualization and presentation — but that’s different from understanding the story that the data is telling.

A recent HBR blog post titled How to Tell a Story with Data offers the following points of advice: 1) Find the compelling narrative; 2) Think about your audience; 3) Be objective and offer balance; 4) Don’t censor; and 5) Edit, edit, edit.

My take: My points of advice differ. And I think we need more rigor (dare I say methodology) regarding data storytelling.

—————

I don’t have an issue with “find the compelling narrative” and “think about your audience” but these points are actually part of a broader process that the article doesn’t define.

Think of data storyology — the art and science of telling stories with data — as having two broad components: 1) Finding the story in the data, and 2) Telling the data story.

If I were to draw a picture, it would look like a yin/yang diagram, not a flow.

Finding the story in the data is an iterative process that involves utilizing data management and statistical tools to cut and analyze data. But it also involves applying human judgment and experience to figure out what the “story” is.

The HBR blog author describes “finding the compelling narrative” as:

“Giving an account of the facts and establishing the connections between them. The narrative has a hook, momentum, or a captivating purpose. Finding the narrative structure will help you decide whether you actually have a story to tell.”

I wish he had left that last sentence off. If you find a narrative structure, you have a story. Whether or not that story is worth telling is a different issue.

Finding the narrative structure is more than “giving an account of the facts and establishing a connection,” however. In fact, the “account of the facts” is probably the least important part of the story because it’s the part that many people either already know or think that they know.

The interesting part of the narrative is the why, who, and when (more so than the what).  The “what” is the plot, but the “why” is what gives the plot some depth. And just as poor character development in a book diminishes the quality of the book, leaving out the “who” in a data story produces an incomplete (and potentially boring) story.

—————

Finding the story is just the Yin part of the equation. Telling the story is the Yang.

This is where the “think about the audience” part comes in. Good data storyologists (or data artists) often define or uncover multiple stories in the data. Those stories likely have different levels of appeal to different audiences. Telling the story starts with defining who the audience is for the data story, and which of the data stories that were defined is most relevant, or how those stories tie together.

—————

At this point, however, my opinions veer from the blog author’s.

Telling the data story is anything BUT being objective and balanced. Data storyology is about educating, influencing, and motivating people. As a data artist, the last thing you want to do is be objective and balanced. You want to draw upon your insights, opinions, and experience — which are all subjective — to tell the best story. The article says that “a visualization should be devoid of bias.” Perhaps a point for future discussion, but I think that this is simply impossible.

The article also says that “Balance can come from alternative representations (multiple clusterings; confidence intervals instead of lines; changing timelines; alternative color palettes and assignments; variable scaling) of the data in the same visualization.”

First off, this is a very narrow interpretation of “balance,” in that it relates only to visualization. Data storyology is about more than just data visualization. Visualization is not the story.

In addition, I would encourage any budding data storyologist to “censor like hell.” The absence of censorship equals data dump.

—————

With a story and an intended audience, there’s still the art of telling the story.

A number of years ago, the analyst firm I worked for brought someone in to train us on the art of storytelling. It’s still one of the best training sessions I’ve ever had.

The story trainer told us to think about the development of a story in terms of its impact on the audience’s mood, and to strive for the following mood pattern:

1) Stuff is happening (neutral mood); 2) things are going to get worse, or the things that are happening will cause problems, doom, and despair; 3) stuff happens, or will happen, to make it all better.

Story example: Little Red Riding Hood is walking in the woods (#1), she gets captured by the Big Bad Wolf (#2), and she gets saved by the woodsman (#3).

Data story example (in financial services): Consumers are fed up with paying the high cost of checking accounts (#1), new providers are coming into the market to steal banks’ customers and drive profitability even lower (#2), banks can deploy new technologies and marketing analytical techniques to provide new forms of value to consumers to retain them and make them more profitable (#3).

—————

All the talk about the rise of data scientists misses the boat, in my book. We need people who can take the data and not just find the story in the data, but tell the story in a way that educates, influences, and motivates people. That’s not science — it’s art. It’s data storyology.

Gonna Manage Big Data Like It’s 1999

I started working at Forrester Research in 1997. In retrospect, I think I got hired not because I demonstrated great potential to be an analyst, but because my boss and colleague needed a sucker to join the team and write a report that nobody else wanted to write.

So I joined Forrester and wrote my first report on the hot topic of the day: Knowledge Management (it was a terrible, terrible report).

I interviewed executives from about 50 companies about what they were doing about knowledge management. What I heard was confusing. For the most part, what these companies were doing with IT and data was pretty much what they had been doing for the prior 10 years.

What was different in 1997 was that these initiatives were now called “knowledge management initiatives.”

There were two factors critical to the success of knowledge management initiatives (or barriers to it): 1) employees with the right knowledge management skills, and 2) management support and commitment.

After all, sucking the “knowledge” out of people’s heads and making it available to everyone else in the organization wasn’t easy, and wouldn’t be successful if management didn’t sufficiently invest in it.

—————

Roll the clock ahead 16 years and you’ll find that nothing has changed. Except the labels.

In a creditunions.com article titled Big Data At A Growing Credit Union, an interview with a credit union executive went like this:

Q: Can you define Big Data?

A: Not really. But in a way, Big Data is what people have been doing all along — looking at and analyzing data. I don’t know the tipping point where a credit union moves from generally looking at data and is suddenly in Big Data.

Q: Can’t data also overwhelm and slow decisions?

A: It can unless you achieve the balance of talent and training. If you put the right data in the wrong hands you can be swimming in that data forever. You’ve got to get people to the point where they understand what’s relevant and what’s not, and that takes time.

Q: What do you feel is critical to success with Big Data?

A: You have to have directors and senior managers who are supportive and understand there are revelations this data can provide.

You’ll pardon me if I can’t help but think that this all sounds vaguely familiar.

—————

If you can’t define Big Data — other than saying it’s what “people have been doing all along” — you are not going to get management support for the efforts.

If you think Big Data represents a different way of managing your business, but you can’t articulate that difference to your employees, you will not get broad employee support for the efforts.

Management is usually willing to fund some initiatives to try something promising. Employees, on the other hand, are generally loath to change unless the pain of the status quo is too much to bear. You might argue that they’re willing to change if the potential upside is appealing enough, but I’m not so sure about that.

Jumping on the management fad bandwagon is a prescription for failure. It trains employees to put everything they want to get funding for under the fad banner, and diminishes whatever potential value really lies in the core of the new concept.

—————

My take: You won’t find anyone more supportive of using data to make business decisions than me. But the path to becoming more data-driven doesn’t mean managing like it’s 1999 and jumping on the fad bandwagon.