
Posts tagged with ‘customer satisfaction ratings’

In Defense of Anecdotal Evidence

During the last two decades, traditional retail has experienced an earthquake-like disruption delivered by the proliferation of ecommerce. That earthquake caused tsunami-like floods of online customer reviews describing personal experiences with specific products. The retailers who embraced this wave of untamed customer feedback surfed it to higher visit-to-conversion rates, growth and profitability.

Today millions of customers share their experiences online about a wide variety of products and services, both personal and business related. Multiple studies show that the trust consumers place in these reviews increases from year to year.

While the flood of experiential information provided by customers, and its influence, continue to grow, many marketing researchers still question its value to business. To be fair, some interesting studies have found correlations between the quantitative aspect of customer reviews (star ratings) and restaurants’ revenue. However, qualitative research of the actual reviews is sneered at and labeled “anecdotal”.

“The expression anecdotal evidence refers to evidence from anecdotes. Because of the small sample, there is a larger chance that it may be unreliable due to cherry-picked or otherwise non-representative samples of typical cases. Anecdotal evidence is considered dubious support of a claim; it is accepted only in lieu of more solid evidence. This is true regardless of the veracity of individual claims.” The emphasis is mine.

Interestingly, the above quote comes from Wikipedia, which itself has been attacked by defenders of the status quo as “inaccurate”. Yet this quote is the best definition I could find online, after checking more “official” sources like Oxford and Merriam-Webster.

Since Customer Experience is a perception, there is no more meaningful evidence to communicate it than an anecdote. Based on the definition, the two primary reasons for not using anecdotes to inform strategic decisions are the size of the samples and their quality in terms of representation. When it comes to customer reviews, the available volume (sample size) often exceeds the size of samples collected by most quantitative marketing research projects. Mining these anecdotes produces very meaningful insights, with a real business return on investment, that quantitative methods are not capable of discovering. Such techniques allow:

  • discovery of patterns and trends within the multitude of “anecdotes”,
  • measurement of their relative importance to customers,
  • measurement of the sentiments associated with these patterns.
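As a minimal sketch of what such mining might look like (the reviews and keyword lists below are entirely hypothetical, and real tools use far more sophisticated text analytics than keyword counting), recurring themes can be tallied for relative importance and scored for sentiment:

```python
from collections import Counter

# Hypothetical mini-corpus of review "anecdotes"
reviews = [
    "battery life is great, screen is sharp",
    "battery life is poor and the screen cracked",
    "great screen, fast delivery",
    "delivery was slow, battery life is great",
]

# Very naive sentiment lexicon (illustrative only)
positive = {"great", "sharp", "fast"}
negative = {"poor", "cracked", "slow"}

themes = Counter()     # how often a theme is mentioned (relative importance)
sentiment = Counter()  # net sentiment of reviews mentioning the theme

for text in reviews:
    words = text.replace(",", "").split()
    # Score the whole review: positive hits minus negative hits
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    for theme in ("battery", "screen", "delivery"):
        if theme in words:
            themes[theme] += 1
            sentiment[theme] += score

# Print each theme with its mention count and net sentiment
for theme, count in themes.most_common():
    print(theme, count, sentiment[theme])
```

Frequency approximates relative importance to customers, while the signed score approximates sentiment; as the text notes, both can subsequently be validated with traditional quantitative methods.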

The cross-sectional representation of these findings may subsequently be validated via traditional quantitative methods.

The internet has democratized many aspects of our lives, and not everyone likes it. The selection of sampling strategies for research used to be the prerogative of professional researchers, who often act like high priests of elusive cross-sectional representation and probability standards. In reality, very few of them actually practice any probability sampling methods beyond relatively basic demographics. The proliferation of inexpensive online survey tools enables people without special training to conduct marketing research. Most marketing executives, the recipients of this research, have neither the background to venerate these methods nor experience of any measurable advantage from using them. On the other hand, customer reviews can often be sampled by gender, geography, age, time published, etc. to improve the probability of a fuller representation of the customer base.

Those who continue to belittle the value of untamed customer feedback to business will fall victim to their own elitism and become even less relevant than they are now. Change before you have to.

Message to CX profession – Transparency begets trust

I get requests to complete surveys quite often. They come from my bank after in-branch transactions, from websites I have visited, and from the customer service departments of my credit card and cable providers. They all want to know how I would score whatever is important to them, and they leave a little space for my comments. Some of these surveys are just 2 or 3 questions long, but others expect me to answer pages of seemingly repetitive and circular questions.

I have never seen a survey request that explains coherently why my opinion is so important to them. In other words, they never indicate what is going to happen after I have completed the survey, carefully answered all the questions, and provided very detailed comments. Presumably, if the tabulated scores are high enough, whoever created or sponsored these surveys will high-five each other and cash their bonuses. But what about my needs? Would my contribution help anybody make a better selection? How would I know if my responses contributed to a better product or service? Sometimes a company proudly advertises its customer satisfaction success, but I wonder if such claims can be taken seriously when there is no way for a consumer to validate them. For these reasons I stopped answering survey requests a long time ago.

Amazon is considered by many to be the poster child of customer centricity. I have done business with Amazon for over 10 years and have made hundreds of purchases over that time. I cannot recall a single survey request from them, ever. Could it be that customer-centric Amazon does not care about the customer experience they provide? I think they don’t survey their customers because they understand the power of authenticity, which is growing fast with the advance of the social consumer. Amazon understood that consumers will never trust a brand more than they trust each other. A long time ago, instead of collecting self-serving survey ratings, they decided to enable their customers to share their experiences with each other in an open forum. Yes, over the years there have been incidents of attempted manipulation. Yes, the Likert stars are not particularly informative. However, overall the customer reviews are extremely valuable to consumers, who have learned how to use them to reduce the uncertainty of their purchasing decisions.

“Amazon does not make money selling goods. Amazon makes money helping customers make good purchasing decisions.”

According to Keller Fay Group research, the two primary reasons customers write reviews and publish them online are:

  1. (90%) To help other consumers make the right choice for them – a kind of “pay it forward”
  2. (70%) To help brands improve their performance. Consumers rely on the transparency of their input to motivate brands to act

I can only guess that since Amazon does not survey its customers, it probably uses the content of reviews posted on its properties to measure customer satisfaction with doing business with Amazon. There are plenty of very informative references in product reviews that indicate how customers regard their experience with Amazon. The explosive and continuous growth of the company is also a pretty good indicator of consumer affinity.

So why do so many companies still shy away from exploring the content their customers provide without solicitation? The answers I have been given by Voice of Customer practitioners over the years have a common thread:

  • Lack of control over the process
  • Doubts in authenticity of reviews
  • Fear of negative sentiments

In other words, it seems these companies do not trust consumers, who provide their feedback transparently. Yet these very companies expect consumers to trust them with their feedback without any transparency at all. How reasonable is such an expectation?

Fake ROI and Customer Experience

Many of us are familiar with a request to justify any project from the return on investment perspective. Corporate management’s fiduciary obligation is to control the use of financial resources in the best interest of the company’s stakeholders. I have no quarrel with this notion. I do have a quarrel with how it is frequently practiced.

There are two major points of contention:

  • The practical definition of who a company’s stakeholders are – actions speak louder than words. Choices of internal funding that favor short-term returns at the expense of the long-term sustainability of the business indicate that the leadership does not consider employees and customers to be stakeholders.
  • The departmental (silo) approach to ROI analysis – reducing a single department’s operational cost without evaluating the effect it may have on the performance of other departments.

There are often inescapable Customer Experience consequences associated with efficiency initiatives that show fake ROI:

  • Replacement of an inside sales force with automated phone bots may look like an excellent ROI initiative. However, every recorded call received by a potential or existing customer chips away at the brand value. What additional investment in marketing would it take to at least balance the negative effect?
  • Reduction of the training budget for customer service representatives can jumpstart quarterly earnings and may inch up the share price for a week or two to please hedge fund managers. How will it affect the customer churn rate, and what is the expense of replenishing the lost customers?
  • Implementation of community-driven customer support seems like a sure winner until its effect on the conversion rate from free to paid subscriptions is examined. From that perspective, the higher cost of company-run customer support looks like a much better investment.
  • Streamlining the cost of the customer intelligence acquisition process is a no-brainer: just lay off the market researchers, delegate a DIY survey process to product managers, and offshore the tabulation and interpretation of results. How would that affect your product success rate? What would just a 5% drop do to the company’s bottom line?

The point I am trying to make is that the silo approach to the optimization of business processes often amounts to “penny wise, pound foolish” tactics. Excessive focus on efficiency, i.e. cost reduction, may cause a disproportionate increase in the expense of attracting and keeping customers, and that destroys an enterprise’s effectiveness.

“The purpose of business is to create and keep a customer.” – Peter F. Drucker

Do not confuse Customer Experience with Customer Service

Too many people use the terms customer experience and customer service/support interchangeably. Even well-respected authors and customer centricity consultants, like Don Peppers, occasionally slip into this ambiguous trap. Here are some basic definitions found on the web with a simple query:


“Customer experience (CX) is the sum of all experiences a customer has with a supplier of goods and/or services, over the duration of their relationship with that supplier. This can include awareness, discovery, attraction, interaction, purchase, use, cultivation and advocacy.”


“Customer Service is the assistance and advice provided by a company to those people who buy or use its products or services. “


Customer service is just one of the attributes that comprise customer experience, but it is most definitely not the same thing. For some businesses it could be the most important ingredient, and for others it could be a completely inconsequential one.


Here are some examples to make the distinction a little clearer:


• You can have a great customer experience without any participation of the customer service department, but sometimes even the best customer support efforts cannot salvage the overall customer experience:

o The most attentive waiter can’t improve a poorly cooked dish, but a scrumptious meal can be a remarkable experience even in a self-service establishment.

o Expertly installed TV cable service does not guarantee quality entertainment.

o Customer Success Managers can only help to retain customers for a short period of time if the software does not perform as expected.


• A product plays the leading role in delivering customer experience, not the efforts of customer-facing employees. If a product sucks, no heroics of the front-line personnel can deliver an excellent customer experience. From this perspective it is difficult to understand how product managers, and even more so product marketing managers, manage to avoid the customer experience responsibility spotlight. These are the people who interpret customer needs and wants into a product design. It is a best practice to have them handle customer support lines on a regular basis, to learn firsthand how accurate their interpretations were.


• Marketing is the group that creates customer expectations, and when these expectations do not meet the reality of a product, customer experience suffers. Classical marketing is supposed to “learn” what customers need and translate this learning into product designs and advertising messages that attract the “right” customers to the “right” product. Instead, marketing is too often focused on “pimping” products designed by engineers overseas without any connection to actual consumers. Focus groups and surveys are designed to figure out how to sell what the company already has, rather than to make what customers want. No wonder the distinction between “market research” and “marketing research” is so blurry. Customer service can be very helpful in facilitating the return of an unwanted product and can deliver a great product return experience, but it cannot deliver a great customer experience.


Confusing customer service/support with customer experience puts an unfair and unbearable load on the shoulders of an organization that is already the second most stressed group in the company, after sales. Even though its performance has relatively limited ability to influence the delivery of customer experience, it is measured, dissected and optimized completely out of proportion. When you see that happen, it is the first sign that the company is focused on financial engineering – not on its customers.


Why doesn’t an abundance of analytics translate into actions?

Most companies have heard about the value of data-driven decisions, and every company has accumulated more data than it knows what to do with. Some enterprises invested in technology tools sold to capture transactions and mine oceans of data, in hopes of increasing revenues, reducing expenses and improving every other KPI in between. While dashboards and interactive reports present us with an abundance of analytics results in real time, I would like to ask you Dr. Phil’s question – “How’s that working for ya?”

For many people, distilling insights from analytics is challenging enough; translating those insights into specific actions is often impossible.

“A perfection of means, and confusion of aims, seems to be our main problem.” Albert Einstein

The good news is that you cannot buy another tool or technology to solve this problem for you. If you could, our unemployment rate would be even higher. The job of setting the right aims is still best performed by humans. Setting aims should not be confused with creating hypotheses or forming assumptions. If you do that, you limit potential actions to the validation or rejection of those hypotheses, and the most valuable potential actions may never be discovered.

I’ve seen the best results when:

1. The scope is focused on 3 to 5 potential targets directly associated with specific business goals at the time:

a. “Is there an opportunity to disrupt the XXX market segment?” or

b. “Is it possible to improve our customers’ experience by linking it to a performance and compensation model of employees from departments other than sales and support?”, or

c. “What is the confidence of executive management in our sales forecasting process?”

2. The targets are potentially achievable, i.e. specific actions could be taken if the insights guiding such actions were known. In other words, you have the authority or influence to act upon insights to reach the selected targets. If you don’t, the best insights are not likely to result in decisive action, because that requires changing a mind that is already made up. The insight that the burgeoning MP3 player market segment could be disrupted by redesigning the user interface and offering a source of high-quality digital content would not have launched the iPod revolution without somebody who had the authority to act on it.

3. Appropriate data sources can be identified and triangulated. The intersection of signals from multiple sources provides better guidance for action. Think of the GPS requirement for signals: a minimum of three satellites is required to produce relatively accurate position information. The minimum accuracy (confidence level) and volume (statistical representation) factors have to be assessed before the aim is set. There is no point in producing an insight that does not give you the confidence to act.

The short-term “ready, fire, aim” approach is not likely to produce meaningful ROI from a technology investment in analytics.

Apple at the crossroads

As many products and services become more agile by design, even the best-designed products enjoy superior profit margins for a shorter and shorter time before the competition starts to catch up. Patent protection and brand recognition do help to extend this time, but the clock keeps accelerating.

Apple’s iPhone provides a good case study of this phenomenon. Ever since its initial introduction by Steve Jobs, the iPhone has been the gold standard of the smartphone category. It became a status symbol of the Silicon Valley technorati, and every new model was greeted by millions of fans lining up. Apple bestowed the honor of selling it on a chosen few channels and demanded handsome subsidies in return. The reason for this success was the iPhone’s superior customer experience, designed into the product, which exceeded every other in the category by a wide margin. Well, that margin has finally shrunk.

Last week’s financial news shows that the hegemony of the iPhone may be over. Make no mistake, it is still a great product, but the competition has caught up in creating a customer experience as good as or better than Apple’s. People bought 31.2 million iPhones last quarter, but their experiences were rather underwhelming. Not because there is anything wrong with their iPhones, but because their expectations were too high, as many customers came to Apple for the first time. As Samsung, HTC and Nokia customers have “upgraded” to the iPhone, they have not found the overwhelming difference they expected.

These scores are not likely to match company-sponsored survey results, as they are extracted from the sentiments customers express when recounting their experiences in unsolicited reviews shared online, often anonymously. What these scores say is that only 6% of iPhone 5 customers (Net Promoters) would actively put out Word of Mouth to promote the phone, compared to 18% for the Galaxy S4.
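For reference, an NPS-style score is simply the share of promoters minus the share of detractors. A small sketch with invented ratings (not the actual data behind the 6% and 18% figures quoted above):

```python
def social_nps(ratings):
    """Net Promoter-style score from a list of 0-10 scores:
    % promoters (9-10) minus % detractors (0-6)."""
    n = len(ratings)
    promoters = sum(r >= 9 for r in ratings) / n
    detractors = sum(r <= 6 for r in ratings) / n
    return round(100 * (promoters - detractors))

# Hypothetical scores inferred from review sentiment
print(social_nps([10, 9, 7, 8, 6, 5, 9, 10, 3, 8]))  # → 10
```

The same arithmetic applies whether the 0-10 score comes from a survey question or from a sentiment classifier applied to unsolicited reviews; only the input source differs.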

The differences are even more pronounced when the iPhone is compared to the Blackberry Z10, HTC One or Lumia 928.

Out of the 25 attributes of customer experience most important to customers of these smartphones, Apple still dominates in one – design – which customers rate 28% above the group average. Details are available on request.

It appears that the marginal improvements introduced in the last two iPhone models – the 4S and the 5 – failed to separate the brand from the pack. In addition, US carriers have started to experiment with unbundling handsets from services, and the subsidies to manufacturers like Apple are threatened. At this point the category shows signs of maturity. In the past, Apple’s market researchers were able to discover and exploit the latent needs of increasingly demanding consumers. Is today’s Apple capable of inventing a new category of products to march on as the industry leader?

Social Consumer challenge to Traditional Brand Management

As most marketers are well aware, when consumers trust a brand, the products associated with that brand can deliver higher margins than the competition and withstand adverse economic conditions without loss of market share. Unless consumers consider a product or service to be a commodity, or choices are limited by regulatory authorities, brand reputation often outweighs price considerations.

Brand Reputation over price

Historically, brands managed their reputation mostly by means of PR and advertising efforts. However, with the advent of the Social Customer, these efforts have become much less effective.

As more customers become publicly vocal about their experiences, many brands have started to see their reputation as threatened. Since publicly expressed sentiments often do not match the customer satisfaction data a company collects, it is easy to adopt a defensive attitude toward social media word of mouth. Some marketers feel that social customers are there to rant and that most online feedback is negative. The data does not support this theory – of over 49 million product reviews on Amazon, the median Likert score is 3.78 (out of 5). Analysis of customer reviews posted on Yelp and TripAdvisor yields similar results.
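The median quoted above is straightforward to reproduce on any corpus of star ratings. A sketch with invented ratings, also showing the share of negative (1-2 star) reviews, which is the figure that tests the "most feedback is negative" theory directly:

```python
from statistics import median

# Hypothetical 1-5 star ratings sampled from scraped reviews
stars = [5, 4, 1, 5, 3, 4, 5, 2, 4, 5]

# Central tendency of the corpus
print(median(stars))  # → 4.0

# Share of reviews that are actually negative (1 or 2 stars)
negative_share = sum(s <= 2 for s in stars) / len(stars)
print(negative_share)  # → 0.2
```

On real data the median is preferable to the mean here because star distributions are typically skewed toward 4-5, so a handful of angry 1-star reviews pulls the mean down without representing the typical experience.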

Here are a few ideas for managing brand reputation in the Age of the Social Consumer:

  1. Accept that you cannot control customer behavior. Word of Mouth has been around for a very long time. In the past, very few people could hear it, and it was easy to drown out with paid PR. You cannot control it anymore; the more you try, the more your brand reputation suffers. If, or should I say when, you get caught manipulating Word of Mouth, it will damage your brand reputation a lot more than a bad review.
  2. The best way to improve the social reputation of a brand is not to hunt and attempt to destroy the negative comments of customers who were disappointed with their experience. The best practice is to understand the root cause of their disappointment and correct underlying problems with your product or service. If you do that and let them know publicly, the social reputation of your brand will soar.
  3. Most negative reviews point businesses to ways of improving their offers. They also help consumers decide whether negative comments resonate with their own expectations. Consumers are smart enough to understand the difference between a legitimate grievance and an angry rant.
  4. Public sentiment, regardless of the measurement scale, may not match your internal Customer Satisfaction or NPS® scores, but they often correlate. Most importantly, consumers trust social sentiment more than a brand’s internal metrics. Resistance is futile and amounts to the Ostrich Strategy.
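Whether internal scores and social sentiment actually correlate can be checked directly. This sketch computes a Pearson correlation over invented monthly figures (both series are hypothetical):

```python
# Hypothetical monthly scores: internal NPS survey vs. a social sentiment index
internal_nps = [32, 35, 30, 28, 40, 38]
social_score = [12, 10, 14, 9, 18, 16]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson(internal_nps, social_score), 2))  # → 0.75
```

A strong positive coefficient on real data would support the point above: the two measurements track the same underlying customer perception even when their absolute levels differ.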

Customer Satisfaction—the Ultimate Vanity Metric?

Almost every company measures Customer Satisfaction or its variations at considerable expense and effort.

Some companies attempt to use the metric in advertising. The metric is supposed to convince a shopper to join the ranks of the company’s customers, who are supposedly 97% satisfied. These numbers are impossible for a consumer to validate, methodologically or anecdotally. Besides, there may be information floating around social media that disputes the company’s customer satisfaction claims, however unfairly. In my opinion, brandishing customer satisfaction scores without complete transparency is more likely to erode trust than to increase sales. Social Customers trust each other’s experiences more than they trust brand claims.

Many companies use the customer satisfaction metric to judge departmental performance while their customers keep churning, for reasons the measured business unit may have no control over whatsoever. The unit does well if the last calendar period’s metric scores higher than the previous one. If the last score is lower, bonuses are not paid and changes to the status quo are demanded. Sometimes the change is a switch to a different methodology for measuring customer satisfaction.

A disconnect between the Customer Satisfaction score, regardless of methodology, and specific, systematic action that targets improvement of customer experience makes the score the ultimate vanity metric.

Richard H. Levey wrote that:

“True customer insight requires first knowing (discovering) which attributes matter to the customer and then determining how the firm is performing on those attributes. If a customer’s experience occurs over multiple interactions and various media, then each needs to be measured to drive insight precision.” Stop Measuring Customer Satisfaction and Start Understanding It (emphasis mine).

I would add: stop counting the clicks, “likes” and re-tweets, and start understanding WHY customers do what they do. Stop tabulating survey scores and start reading the comments; you may learn something that would help promote positive change.

Customer Satisfaction Is A Relative Term

Customer perceptions of products and services, or companies and brands, are measured using different scales and methodologies. Regardless of any ambiguity of definitions and sophistication of methodology, any scale you choose reflects a fundamental consideration: how does the product (service/brand/company) experience compare to customer expectations? Expectations are formed by a company’s marketing communications and advertising, by other consumers’ word of mouth and (in this age of the Social Customer) by pundits and existing customer reviews published online. There are many well-documented “purchasing journey” maps produced by respected researchers. Here is one example.

Most of the studies agree that the choice a customer makes is based on the expectation that the selected product will be more satisfying than most other products in the segment. Yet many businesses measure the Customer Satisfaction of their offerings without comparing the results to their market averages. Considering that these sentiments are very dynamic, competitive comparisons make the process even more volatile and difficult to measure. However, the results are often well worth the effort, as they generate ideas for differentiation, marcom optimization and operational improvements that can produce significant financial gains.

The example below shows Nokia Lumia products exceeding their customers’ expectations by a much wider margin than their top competitors and the smartphone segment average. If you are involved with Customer Experience Management, a deeper look into the reasons behind the trend may help to improve your customer journey.

CSAT is a Relative Term

The following example measures aggregated Customer Satisfaction with Small (Kitchen) Appliance brands against the average satisfaction level within that category. It is based on content analysis of 65,379 customer reviews published online over a one-year period.

Kitchen App Brands CSI vs Average

Such measurements can be produced using the most popular scales (such as NPS or CSAT), performed for any market segment with Social Customer engagement, and the results can be aggregated by brand and/or broken out by channel.
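A sketch of this kind of brand-versus-category aggregation, with invented brand names and scores (real analyses would mine tens of thousands of reviews):

```python
from collections import defaultdict

# Hypothetical (brand, satisfaction score) pairs mined from one category's reviews
scores = [("BrandA", 4.5), ("BrandB", 3.0), ("BrandA", 4.0),
          ("BrandB", 3.5), ("BrandC", 4.0), ("BrandC", 4.5)]

# Group scores by brand
by_brand = defaultdict(list)
for brand, score in scores:
    by_brand[brand].append(score)

# Category average across all reviews, used as the baseline
category_avg = sum(s for _, s in scores) / len(scores)

# Express each brand relative to the category average, as a percentage
for brand, vals in by_brand.items():
    brand_avg = sum(vals) / len(vals)
    delta = 100 * (brand_avg - category_avg) / category_avg
    print(brand, round(delta, 1))
```

Reporting brands as a percentage above or below the category baseline, rather than as raw scores, is what makes the comparison meaningful across categories with different typical satisfaction levels.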

Valuable insights into channel performance

Knowledge of the customer satisfaction and experience delivered by a specific channel can be very illuminating from a brand manager’s perspective. It could be even more enlightening if customer satisfaction metrics were analyzed alongside units sold and units returned by each channel. When these streams of data consistently correlate and/or trend negatively together, it is likely to indicate systemic channel performance problems.

From the channel perspective, customer satisfaction with specific brands – and even more importantly, with specific products – can help optimize shelf space for maximum profitability.

The detailed analysis of customer feedback (reviews) and customer support communications associated with a troubled channel or brand can provide root cause(s) and ideas for corrective actions.

Below is an example of a report on customer satisfaction with smartphones by channel/carrier. The information was mined from 142,369 online customer reviews published prior to March 30, 2013. AT&T customers who use Nokia smartphones reported customer satisfaction 21% above average across all major carriers* and all major brands.

Social Customer Satisfaction per Channel

* Sprint did not offer Nokia smartphones during the reported period.

Deeper analysis may reveal customer satisfaction by model, time period, customer gender, age group, other personal characteristics, or geographic region. Please contact us to discuss methodology for mining intelligence in your market segment.