Tuesday, November 22, 2016
In Building Loyalty, the Details Matter!
Dale Carnegie, the self-help expert whose book How to Win Friends and Influence People was one of the world's most phenomenal bestsellers, said, “a person’s
name is to that person the sweetest and most important sound in any language.” He explained, “Respect and acceptance stem from simple acts such as remembering a person’s name and using it.”
11:29 am est
Carnegie's Words Should
Be Remembered by All Marketers
No matter how long ago he wrote them, they remain pretty good words of wisdom for any company
trying to develop loyal customers and advocates. If you don’t believe him, think about the flipside and the damage of
failing to make that positive impression. As college football season begins, consider the aggressive world of college
football recruiting, where coaches and assistants work to establish and keep contact with 16- to 18-year-old high schoolers and
their parents. It’s a high-stakes game facilitated by events, visits, phone calls, letters, and text messages, all aimed
at gaining a commitment from potential star players.
Data Entry and How to 'Shoot Yourself in the Foot'
Consider all that hard work and the price paid by the University of Michigan for apparently not attending to how it entered
information in its recruitment database. Reportedly, Michigan sent a nice note thanking four-star commit Aubrey Solomon,
and his mom, for attending a recent BBQ. But there was just one problem (well, maybe two). First, neither Aubrey
nor his mom had attended the event! Second, the school had also managed to misspell both his first and last names in its communications!
Aubrey’s reaction as he decided to de-commit and open himself up to recruitment from other
schools: “I guess they don't have tabs on me.”
Folks in Ann Arbor are pretty unhappy today, and their condition can only be blamed on sloppy data handling.
Though Not As Newsworthy,
Our Own Anecdotal Example...
Following a doctor’s visit last month we received an invitation to participate in a patient satisfaction survey.
In opening the email we immediately saw how little the hospital system knew about our experience or, apparently, cared.
They had listed our doctor’s last name first in a sentence clearly structured for first name/last name. (In this
case there could be no mistaking which were the doctor’s first and last names.) While the juxtaposition may have
worked fine with some entries, our guess is that the rules for how names were to be entered into the physician
database weren’t clear, or somebody wasn’t careful enough, or the cover letter writer and the survey programmer didn’t communicate
with one another. They had spelled our name correctly, but the butchering of the doctor’s name demeaned the whole communication.
We felt like a number in a system that was just chugging out impersonal junk emails that no one on their side had even glanced at.
How Good is the Quality of Your Prospect/Customer Database?
Relationship building with customers depends on leveraging that "sweetest and most important sound". And doing so both
substantially and correctly. Making a regularly scheduled check of your customer database quality part of your
relationship-building process, along with regular “data hygiene” work, will go a long way toward winning prospects to
your brand and better retaining the current customers you most want to keep.
Friday, November 11, 2016
Presidential Polls - How Could They Be So Wrong?
The trouble with polls and attitude surveys is that they can frequently
be wrong. In 2012 an average of the leading polls predicted an Obama-Romney tie, yet President Obama won by a 51.1%
to 47.2% margin. The inaccuracy of the 1948 prediction of a Dewey win over Truman put egg on the face of pollster George
Gallup. But it’s not just an American problem: pollsters dramatically misread the desire of Scots to leave the
United Kingdom and of Brits to exit the EU. Though you may feel that political polling is somewhat different from the research
surveys marketers use to gauge the potential of a new product concept or the appeal of a competitive positioning strategy,
both forms of opinion-monitoring are subject to the same evolving difficulties.
2:51 pm est
The Main Problems
Confronting Opinion Polling Today
The flurry of post-mortems which will be conducted as a result of yesterday's
Presidential race will likely identify the following as key reasons polling is getting more difficult and less trustworthy.
The Presidential Race Predictions of 2016
- Willingness to participate in a survey –
response rates have been on a dive for the last 30 years. In the '70s and '80s it was still possible, with
call-backs, to achieve a 30%-50% response rate. Today rates are reportedly as low as 5%! (This raises the issue
of ‘non-response bias’: if one doesn’t attempt to sample non-respondents, there’s the possibility
that they’re systematically different from respondents and their position never gets acknowledged.)
- Accessibility to survey respondents –
gaining access to potential respondents has become increasingly difficult. Over the last fifty years, contact
methods have necessarily evolved because of changing technology, concerns for personal safety, and cost considerations.
The resulting progression has seen interviewing migrate from personal interviews conducted in homes, to outbound telephone
interviews, to inbound telephone interviews, to invitation-only and open online surveys. With each step, pollsters have
lost some degree of control.
- Displacement of the landline telephone – Random digit
dialing was once able to create a credibly representative sample of a population. However, as the landline has declined
from being the only telephone device to one favored by fewer than 40% of the population, access
to a representative sample of the population has become increasingly difficult. That’s because federal law prevents
robodialing to cell phones.
- Difficulty in modeling the voting population – to determine how representative a responding sample
is, one requires knowledge of the various factions in the population that might influence response to the issue in question.
New groups – like the apparent ‘Trump Democrats’, ‘newly registered voters’, or any unrecognized
group – increase the complexity of adequately profiling a population within a sample.
- Problems arising from sample weighting –
when a population segment is underrepresented in a sample, the opinions of that segment can be weighted to bring its proportion
of the sample closer to its presence in the population. However, if the weighted attitudes aren’t representative
of the larger segment, then the weighting can severely bias the survey results.
- Failure to recognize bias from social undesirability –
sometimes a measured issue is clouded by social undesirability. It’s been suggested that many American women
may have felt uncomfortable admitting to their decision to vote for Donald Trump and consequently may have misreported
their voting intentions. (In marketing research we also face 'demand biases' - the desire of a respondent to tell the
interviewer what he or she thinks the interviewer wants to hear. This often leads to the apparent acceptance of lackluster product concepts.)
- Assumption that
respondents truly know what decision they're going to make - it's important to make respondents
comfortable in responding 'I don't know' or 'I haven't made up my mind'. Neutrality or indecision must be acceptable
answers if a survey is to be accurate.
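The sample-weighting pitfall listed above can be made concrete with a minimal Python sketch. All numbers here are hypothetical, purely for illustration: suppose a segment that makes up 30% of the population supplies only 10% of a 1,000-person sample. Weighting scales its opinions up to the right proportion, but if those few respondents don't resemble the rest of their segment, the weighted estimate simply inherits their bias.

```python
# Hypothetical example: a segment is 30% of the population but only
# 10% of our 1,000-person sample.
population_share = 0.30
sample = {"segment": {"n": 100, "pct_favorable": 0.40},
          "rest":    {"n": 900, "pct_favorable": 0.60}}

total_n = sum(g["n"] for g in sample.values())

# The unweighted estimate pools everyone as-is, underrepresenting the segment.
unweighted = sum(g["n"] * g["pct_favorable"] for g in sample.values()) / total_n

# The weighted estimate scales each group to its share of the population.
weights = {"segment": population_share, "rest": 1 - population_share}
weighted = sum(weights[k] * sample[k]["pct_favorable"] for k in sample)

print(f"unweighted: {unweighted:.1%}")  # 58.0%
print(f"weighted:   {weighted:.1%}")    # 54.0%
```

Note that the weighting shifts the estimate four points, but only correctly so if the 100 sampled segment members' 40% favorability really holds for the whole segment; if not, the error is amplified right along with their voice.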
How much each or any of these problems contaminated the polling surrounding the Clinton-Trump
Presidential race will no doubt be debated for some time to come. Attempting to control for each is what keeps many
of us interested in our field. After all, no one said predicting human behavior should be easy....
Tuesday, November 8, 2016
What % of Customer Complaints Are You Hearing?
10:19 pm est
Before social media mushroomed, and while the Internet was just becoming popular, the Office of Consumer Affairs investigated
American corporations’ responses to customer complaints. The study found an information flow that mimicked an
iceberg: 95% of all customer questions and complaints were buried beneath the surface,
never reaching corporate management!
But That's All 'Ancient History', Right?
- In 50% of the cases customers didn't attempt to complain, because they: 1) simply didn’t care enough to pursue the
issue; 2) felt the process of writing letters or wading through call center phone menus was too cumbersome; or 3) assumed
they (and their complaint) would just be ignored.
- In 45% of the cases, customers attempted to communicate their problem, complaint or question, but they only communicated it to a ‘line employee’ (a clerk, a
teller, a cashier, a service rep, etc.). These folks either lacked a process to push the issue upstairs or had learned that
there were few rewards in their corporate structure for the bearer of new problems and negative comments.
- Only 5% of the problems ever “saw the light of day”,
reaching the attention of corporate management.
That 50%/45%/5% “iceberg” was based on data from a time before the public voice was amplified through the wide-scale
adoption of blogs, review sites, Facebook and Twitter, and so much more. We haven’t seen any fresher data, but we thought
it would be instructive to speculate about how those proportions may have changed.
Yes, No, Maybe....
Nothing leads us to believe that the 50% of customers with a problem or complaint have suddenly become more active; it's likely they're still
taking no action (beyond perhaps giving up on the brand, at least temporarily, and switching to a competitor).
As to the 45% who unsuccessfully attempted to voice a complaint face-to-face through front-line employees, our guess is
that those numbers have declined. All those years of trying to reach out to companies and organizations while seeing little
or no action must have taken a toll. And, of course, the newer online options for being heard have surely had an impact. The
new online environment offers unhappy customers alternative channels: if not for resolution, then at least for voicing their
dissatisfaction and thereby exacting some revenge.
So a lot more than 5% of problems and complaints must be reaching
corporate management and as a result customer experience is improving, right?
For the brands that are listening with open minds,
continually monitoring social media, responding to customer concerns and problems, and tracking and analyzing what customers
are telling them, yes. For such brands it's likely that more than 5% of problems and complaints are being heard by management
and the customer experience is likely improving. But the situation could be improved even more if management
tracked not just the public social media “word of mouth”, but
also found a way to understand what’s being spoken and written about them in the private social
media (emails, text messages, phone calls and even face-to-face conversations).
Unfortunately, some brands don’t even respond to comments left by customers on their own websites. They fail to monitor social
media and they continue to refuse to listen with open minds. These companies are likely hearing even less than 5% of
their customers' complaints and, in the process, are probably increasing their customers' frustration.
How well are you hearing and responding to your customers' problems and complaints? Is your answer instinctual or based
on objective information?
Tuesday, November 1, 2016
Are You Reporting Misleading NPS Results?
4:57 pm edt
We're not suggesting you stop using NPS. On the contrary, we believe the Net Promoter Score (NPS) key question (“How
likely is it that you would recommend…”) is an important one to ask of all customers. After all, we're
committed to the value of word of mouth in generating new customers. And we think NPS has done a phenomenal job focusing
the C-suite's attention on the importance of improving the customer experience.
There's A Need for a More Descriptive Picture
Despite its compelling message, there is something that can
be very deceiving about NPS scores as they are currently reported within most companies. Take, for example, an NPS
score of +8. The score has been calculated by subtracting the percentage of Detractors (those
customers awarding the brand a 0-6 score) from the percentage of Promoters (those awarding
the brand a 9-10), while setting aside the Passives (those awarding the brand a 7-8).
You see the ambiguity? A +8 NPS could mean that a small percentage of your customers would be willing to recommend your
brand, say 12%, while 4% would not (with 84% of your customers scored as Passives, with no strong feelings in
either direction). In that case we would describe this NPS distribution as an example of a "Passive-Majority".
Helping to Establish the Right Priorities
But that same +8 NPS could have resulted from 50% of your customers scored as Promoters who
feel very positive about their experience and would likely recommend your brand, while at the same time 42% of your customers
are potential Detractors who would be unlikely to recommend and could well be about to
churn away from your brand. We'd characterize that distribution as "Opposing-Extremes".
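The two hypothetical distributions above can be checked with a few lines of Python. The `nps_breakdown` helper is simply an illustration of the standard Promoter-minus-Detractor arithmetic, applied to two made-up 100-customer samples:

```python
def nps_breakdown(scores):
    """Compute NPS plus its underlying distribution from raw 0-10 ratings.

    Promoters score 9-10, Detractors 0-6, Passives 7-8.
    Returns (nps, pct_promoters, pct_detractors) in percentage points.
    """
    n = len(scores)
    promoters = 100 * sum(s >= 9 for s in scores) / n
    detractors = 100 * sum(s <= 6 for s in scores) / n
    return promoters - detractors, promoters, detractors

# Two hypothetical 100-customer samples that both yield an NPS of +8:
passive_majority = [10] * 12 + [8] * 84 + [3] * 4    # 12% Promoters, 4% Detractors
opposing_extremes = [10] * 50 + [8] * 8 + [3] * 42   # 50% Promoters, 42% Detractors

print(nps_breakdown(passive_majority))   # (8.0, 12.0, 4.0)
print(nps_breakdown(opposing_extremes))  # (8.0, 50.0, 42.0)
```

Both samples print the identical headline NPS of +8; only the accompanying Promoter and Detractor percentages reveal which scenario a brand actually faces.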
The problem is that, with the way NPS is currently reported, such fundamentally different underlying scenarios can be
completely masked from senior management. By not revealing the distribution of scores, management doesn't have all
of the information it needs to formulate the correct remediation strategy. The first distribution described above (with
just 12% Promoters) desperately calls for a strategy that ignites the passion of customers for the brand, advancing them from Passives to Promoters.
In contrast, the second scenario above cries for some reconciliation of the 42% Detractors who are ready
to do considerable damage to the brand.
So, What Are Our Recommendations?
Clearly knowledge of the distribution of NPS ratings underlying the NPS
score would help management formulate more realistic and effective strategies. So...
- We recommend that all brands
provide a distribution of the NPS data that underlies the NPS score. That is, along with the reporting of the overall NPS,
also report the percentage of Promoters and the percentage of Detractors that produced that NPS. As an example: an NPS
of +8 (with 12% Promoters and 4% Detractors).
- For those with a Detractor percentage
of over 25%, consider whether you could afford to lose 25% of existing customers and whether you can afford to have such a
significant portion of your existing customers out 'poisoning the well' of future customers with their "recommendations"
against your brand. If not, conduct an analysis to identify the greatest points of dissatisfaction driving those Detractor attitudes
and make fixing those your top priority (before trying to generate any more Promoters).
- For those with
a high percentage of Passives (7 and 8 scores being awarded by customers), don't ignore
them just because they aren't included in the NPS calculation. Recognize that emotional attachment to a brand is a big part
of customer loyalty. Assuming that the Detractor percentage is not seriously
impacting the total NPS, conduct a key driver analysis to identify the factors having the greatest impact on satisfying customers
and formulate tactics to convert Passives into Promoters.
It's all about focus and prioritization!
But for both, you really need more than just the NPS score.