Almost every company measures Customer Satisfaction or its variations at considerable expense and effort.
Some companies attempt to use the metric for advertising. The metric is supposed to convince a shopper to join the ranks of the company’s customers because they are supposedly 97% satisfied. These numbers are impossible for a consumer to validate, methodologically or anecdotally. Besides, there may be information floating around Social Media that disputes the company’s customer satisfaction claims, however unfairly. In my opinion, brandishing customer satisfaction scores without complete transparency will more likely lead to an erosion of trust than to an increase in sales. Social Customers trust each other’s experiences more than they do brand claims.
Many companies use the customer satisfaction metric to judge departmental performance while their customers keep churning, for reasons the measured business unit may have no control over whatsoever. The unit does well if the metric for the last calendar period scores higher than the previous one. If the last score is lower, bonuses are not paid and changes to the status quo are demanded. Sometimes the change is a switch to a different methodology of measuring customer satisfaction.
Disconnecting the Customer Satisfaction score, regardless of methodology, from specific and systematic action that targets improvement of the customer experience makes the score the ultimate vanity metric.
Richard H. Levey wrote in Stop Measuring Customer Satisfaction and Start Understanding It (emphasis mine):
“True customer insight requires first knowing (discovering) which attributes matter to the customer and then determining how the firm is performing on those attributes. If a customer’s experience occurs over multiple interactions and various media, then each needs to be measured to drive insight precision.”
I would add: stop counting the clicks, “likes” and re-tweets, and start understanding WHY customers do what they do. Stop tabulating survey scores and start reading the comments—you may learn something that helps drive positive change.
Customer perceptions of products and services, or companies and brands, are measured using different scales and methodologies. Regardless of any ambiguity of definitions or sophistication of methodology, any scale you choose reflects a fundamental consideration: how does the product (service/brand/company) experience compare to customer expectations? Those expectations are formed by a company’s marketing communications and advertising, other consumers’ word of mouth and (in this age of the Social Customer) pundits’ and existing customers’ reviews published online. There are many well-documented “purchasing journey” maps produced by respected researchers. Here is one example.
Most of the studies agree that the choice a customer makes is based on the expectation that the selected product will be more satisfying than most other products within the segment. Yet many businesses measure the Customer Satisfaction of their offerings without comparing the results to their market averages. Considering that these sentiments are very dynamic, competitive comparisons make the process even more volatile and difficult to measure. However, the results are often well worth the effort, as they generate ideas for differentiation, marcom optimization and operational improvements that could produce significant financial gains.
The example below shows Nokia Lumia products exceeding their customers’ expectations by a much wider margin than their top competitors and the smartphone segment average. If you are involved with Customer Experience Management, a deeper look into the reasons behind the trend may help to improve your customer journey.
The following example measures aggregated Customer Satisfaction with Small (Kitchen) Appliance Brands against the average satisfaction level within that Category. It is based on content analysis of 65,379 customer reviews published online over a one-year period.
Knowledge of the customer satisfaction and experience delivered by a specific channel can be very illuminating from a brand manager’s perspective. It could be even more enlightening if customer satisfaction metrics were analyzed alongside units sold and units returned by each channel. When these streams of data consistently correlate and/or trend together negatively, it is likely to indicate systemic channel performance problems.
From the channel perspective, customer satisfaction with specific brands – and even more importantly, with specific products – can help optimize shelf space for maximum profitability.
The detailed analysis of customer feedback (reviews) and customer support communications associated with a troubled channel or brand can provide root cause(s) and ideas for corrective actions.
Below is an example of a report on customer satisfaction with smartphones by channel/carrier. The information was mined from 142,369 online customer reviews published prior to March 30, 2013. AT&T customers who use Nokia smartphones reported customer satisfaction 21% above average across all major carriers* and all major brands.
* Sprint did not offer Nokia smartphones during the reported period.
Deeper analysis may reveal customer satisfaction by model, time period, customer gender, age group, other personal characteristics, or geographic region. Please contact us to discuss a methodology for mining intelligence in your market segment.
October is here, and that signals the arrival of Piplzchoice quarterly smartphone Customer Experience report. The past reports are available upon request.
Here are a few words of explanation of the methodology used to produce this report:
We interpret and measure “Customer Experience” according to a definition and understanding articulated by Forrester Research analysts as “how customers perceive their interactions with your company.” However, we expand it further to “your brand” and “your product” to make the measurements more actionable by branding and product management.
We do not conduct any surveys, pose any questions, or contact any customers in any shape or form. That also means that no assumptions or keywords are constructed by Amplified Analytics personnel to produce this report. The information published here is based on proprietary, automated Opinion Mining of unsolicited, customer-generated descriptions of their experience with specific smartphone models.
We start with a view of a “universe” of 355 smartphone models described by 108,963 customers. We then focus on smartphones that have been reviewed during the last 3 months (39 models).
This is not a “buzz” sentiment monitoring exercise, as we ignore any content that cannot be reasonably attributed to experience of a paying customer. More on the methodology can be found here.
Spotlight on Brand
A brand’s share of customer reviews illustrates the level of customer engagement with that brand and correlates with the dynamics of its market share performance. The chart below shows the share of customer engagement with smartphone brands during the third quarter of 2012.
As predicted in the last two reports, HTC’s share of engagement has finally come down as Thunderbolt customers stopped describing their experiences with the long-obsolete model. Apple’s share of attention will likely rise substantially in the fourth quarter, since the slow delivery of the iPhone 5 has just started to produce a trickle of customer reviews.
The big winner of customer share of attention this quarter is Samsung at 39%. This is not a surprise, as its engagement with customers has grown consistently in every period we have reported on. The only other brand that shows a consistent increase, albeit on a much smaller scale, is Nokia (6%).
The Average Customer Satisfaction per Brand chart paints a picture that is quite different from the results of the most popular surveys published in the recent past. We calculated the average Customer Satisfaction of a Brand by averaging the Customer Satisfaction scores of each model that belongs to the Brand.
This approach highlights how specific models can impact the overall Customer Perception of a Brand. The wide range of Customer Satisfaction scores across Samsung models, from Galaxy Note=1.47 to Intercept=0.88, is responsible for lowering the Brand average. I think our approach provides better guidance for proactive Category/Brand decision management.
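As an illustration, the brand-level calculation can be sketched in a few lines of Python. The two model scores are the Samsung figures quoted above; equal weighting of models is an assumption of this sketch, not a statement of the published methodology:

```python
# Sketch: brand-level Customer Satisfaction as the simple (unweighted)
# average of per-model CSAT scores on the 0-2 scale used in this report.
from statistics import mean

# Per-model scores quoted in the text; a real run would include
# every model that belongs to the brand.
samsung_model_csat = {
    "Galaxy Note": 1.47,
    "Intercept": 0.88,
}

brand_csat = mean(samsung_model_csat.values())
print(round(brand_csat, 3))  # the low-scoring model drags the brand average down
```

A review-count-weighted average is an equally defensible choice; the point is that one weak model visibly pulls the whole brand down.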
Spotlight on Operating Systems
Android keeps dominating the share of Customer Engagement, but Windows OS’ slice of the pie continues to grow.
The trending picture below shows a surprisingly consistent increase in customer engagement with Windows OS from quarter to quarter. This is the first period in which WP Customers out-reviewed Apple Customers, by 83%.
The rate of engagement correlates with a high level of Customer Satisfaction: Windows-powered smartphones are locked in a statistical tie with Apple iPhones, and five of the ten most popular models are Windows phones from different manufacturers.
It will be interesting to see how the inflow of iPhone 5 customer reviews impacts that battle.
It is worth repeating that these scores reflect aggregate, average satisfaction with the phones and not with their operating systems. Let me know if you need a detailed view of Customer Satisfaction with the operating systems themselves.
Spotlight on Smartphone Models
Samsung Droid Charge customers generated the largest number of reviews (2,084) of all smartphones reviewed during this quarter. The latest arrival, the Apple iPhone 5, understandably has the smallest number at 56. I expect that to change dramatically during the last quarter of 2012.
The complete list of all models included in this report can be found here.
This quarter, the HTC Radar smartphone earned the top general satisfaction score of 1.78, exceeding its customers’ expectations by 78% (N=406). Another HTC Windows phone, the Titan II (CSAT=1.69/N=83), and the HTC Amaze 4G (CSAT=1.56/N=208) were the closest contenders. That is a sweep for HTC. The cellar is occupied by the Blackberry Bold 9900 (CSAT=0.69/N=84), LG Cosmos (CSAT=0.76/N=644), and Samsung Intercept (CSAT=0.87/N=441). Both the Intercept and the Cosmos have been at the bottom for the last three quarters, and I wonder why the brands’ managers continue to allow the erosion of overall equity to perpetuate.
In previous installments of this report, I have presented a detailed account of Customer Experience measurements for selected models. Since the volume of information has become too large for this format, and the level of readers’ interest in the details is not clear, I will conclude the analysis here. A detailed comparison of specific smartphone models for personal use can be found by following this link. If you are interested in a SmartPhone, or other market’s, Customer Experience Measurement (CXm) dashboard implementation, please email me directly.
I finally found time to correlate these numbers with our “qualitative to quantitative” Customer Satisfaction research numbers from the Customer Satisfaction with Windows smartphones rise by 18% report. We extended the original IDC table with Customer Satisfaction columns and dropped the Operating Systems for which we do not track Customer Satisfaction.
The Customer Satisfaction scores are expressed on a 0-2 scale of measurement, where any score above 1 represents the percentage by which the Customer Experience exceeded Customer Expectations.
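In code, that mapping from the 0-2 score to an expectations delta looks like this. This is a minimal sketch, and the function name is mine, not part of any published methodology:

```python
# Sketch: convert a 0-2 CSAT score into the percentage by which
# customer experience exceeded (positive) or fell short of
# (negative) customer expectations.
def expectations_delta_pct(score: float) -> float:
    if not 0.0 <= score <= 2.0:
        raise ValueError("score must be on the 0-2 scale")
    return (score - 1.0) * 100.0

print(round(expectations_delta_pct(1.78)))  # 78 -> expectations exceeded by 78%
print(round(expectations_delta_pct(0.88)))  # -12 -> expectations missed by 12%
```

A score of exactly 1.0 therefore means “I experienced what I expected,” matching the interpretation used throughout these reports.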
Additional Year-over-Year Change columns for Customer Satisfaction and Market Share provide further insights into this very interesting and dynamic market segment.
Click on the table image to see larger version.
It is easy to see a strong correlation between the dramatic increase in Windows Phone Customer Satisfaction and the explosive rise in its Market Share. However, the drop in iOS Market Share, despite a double-digit increase in Satisfaction, reminds us of the distinctive difference between correlation and causation. We often tend to forget it, particularly when it suits our biases.
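For readers who want to check such relationships on their own data, here is a minimal Pearson-correlation sketch in Python. The quarterly series below are hypothetical, not figures from this report, and a high correlation by itself proves nothing about causation:

```python
# Sketch: Pearson correlation between two quarterly series.
# A high r says the series move together; it does NOT say that
# one causes the other.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical quarter-over-quarter values (0-2 CSAT scale, % share)
satisfaction = [1.10, 1.25, 1.40, 1.62]
market_share = [1.5, 2.4, 3.3, 4.0]

r = pearson(satisfaction, market_share)
print(round(r, 2))
```

Any third factor, such as carrier subsidies or flagship launch timing, could be driving both series at once, which is exactly the trap described above.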
Regardless of size or industry, there is hardly a company that does not measure Customer Satisfaction. It seems clear to most people that Customer Satisfaction is a predictor of business success, because customers who have a choice will not stick around if they are consistently disappointed with the products or services provided to them. Regulated monopolies are excluded, but even they measure Customer Satisfaction for reasons that defy explanation.
The correlation (or causation) between customer satisfaction and profitability, revenue growth, and equity share performance is relatively well documented. I did not provide any links to these studies because each one makes it sound as if the customer satisfaction measurement methodology is the most important factor in the success of the study’s subject. I happen to believe that improvement of any business metric is driven by improvement of the customer experience, properly measured as customer satisfaction. In other words, how you measure it is less important than what you are measuring, and most importantly, whether the company is committed to action based on these measurements.
If the previous paragraph seems obvious and self-explanatory to you, it is perhaps because you are not aware that many companies measure customer satisfaction without a clear definition of the metric or a plan for action. There are a few reasons for this unfortunate state of affairs:
Regardless of the scale one selects, there is a lot of ambiguity about what exactly the results are “telling” people outside of a Market Research department. The only exception is the NPS® (Net Promoter Score) methodology, which explains its popularity in executive suites.
Absolute measurements of Customer Satisfaction are meaningless. Whether your customers are 100% satisfied with your product, rate your service at 4.25 stars on a Likert scale, or profess their Net Promoter intentions at .35 NPS – it makes you feel good only as long as you don’t know that a competing product scores 25% higher.
The score itself is just the tip of the proverbial iceberg. Scores without root cause analysis cannot provide actionable intelligence. Considering that many companies in reality “listen” to and score brand affinity/sentiment, as opposed to customer satisfaction with a specific product or service, no specific action is even possible.
Here is the Q2 2012 update. These links – Q1 2012 and 2011 – will take you to the previous reports. This edition includes 353 smart phone models and the aggregation of 104,691 unsolicited customer reviews. We have used the Opinion Miner® software to extract specific attributes of customer experiences with these smart phones and to measure the customers’ sentiments for each attribute. In the interest of consistency, we again filtered for smart phones that have been reviewed during the last 3 months to focus on currently sold models.
Disclaimer – Nokia is our client, however neither Nokia, nor any other company or organization, have sponsored or influenced this study.
Customer Reviews per Model
Only the 51 models that were actually shipped to paying customers during the past quarter are included in this study, as we are interested only in actual customer experience, not in marketing accolades.
If you are a new reader, I would like to stress that no survey or focus groups methods were used to collect data for this study. Please check our methodology if you are interested.
The most customer-reviewed smart phones that made it through the filters for Q2 2012 are the same models as last quarter, but their content contributions have slowed down dramatically during the last couple of months – HTC Thunderbolt (5,892), Samsung Droid Charge (2,009), and Motorola Droid Bionic (1,633). Obviously, the timing of a phone’s introduction impacts the total, aggregated number of customer reviews published for that device. Click on any image below to make it larger.
Customer Reviews per Brand
The enormous number of customer reviews of the Thunderbolt, which is almost obsolete by now, keeps HTC as the most reviewed brand. We anticipate that the next report will reflect this change.
It is worth mentioning the trends of a dramatic decline for LG in the number of customer reviews and the steady rise in popularity of Nokia (see figure below). We have previously studied the correlation between the number of customer reviews published online and the number of units shipped, and therefore found it important to use for comparison. These trends suggest similar dynamics in customer purchases.
Customer Reviews per Operating System
The reviews of Android phones absolutely dwarf those of ALL other operating systems, and the trend does not suggest any major change. One thing worth noting is the growth in the number of reviews of Windows phones, from 1% at the year-end to 4% at the end of Q2 2012. It will be interesting to see if this trend continues.
iPhone aficionados are a very finicky crowd, and even though the earlier models continue to sell, virtually no reviews of them are still published by their purchasers. This fact explains the precipitous drop in iOS’ share of the reviews after the end of 2011.
Customer Satisfaction per Operating System
Customer satisfaction with Android phones continues to improve from the previous reports, but Windows phones’ satisfaction has seen the most dramatic increase since the previous quarter – 18% – while customer satisfaction with Blackberry phones keeps sliding. In fact, five of the ten smart phones with the highest customer satisfaction scores are Windows models.
It is worth repeating that these scores are the aggregate, average satisfaction with the phones and not with their operating systems. We will look at Customer Satisfaction with an operating system later in this post.
This time, the Samsung Galaxy S Blaze smart phone earned the highest general satisfaction score of 1.81, exceeding its customers’ expectations by 81% (N=106). The Motorola Atrix 2 (CSAT=1.78/N=186) and Nokia Lumia 710 (CSAT=1.72/N=318) were the closest contenders, while the LG Cosmos (0.79/590), Samsung Intercept (0.88/1,186) and Blackberry Curve 3G 9330 (0.91/410) disappointed their customers the most. Two of the bottom three were on this list last quarter, and the LG Cosmos’ reputation has continued to sink.
Attributes of Customer Experience by Importance
Our Market Intelligence Analysis of the segment indicates that the following Attributes of Customer Experience are most important to the customers:
This 2-minute video explains the methodology for this chart in greater detail -
Gaining more specific insights into customer perception required applying additional filters to select models representing different Operating Systems in significant numbers. The following models were selected:
The graph below shows Customer Satisfaction scores for the smart phones’ Reliability experience. The Blackberry Bold 9930 is the only model from this sample that disappointed its customers.
Not surprisingly, the iPhone continues to dazzle its customers with the quality of its Display. However, all models included in this analysis earned very high marks. A score of 1.0 represents the “Satisfied” value as defined in our methodology and is interpreted by our algorithms as an equivalent of the statement, “I experienced what I expected.”
In the interest of space, more details and customer feedback verbatims are available on request via access to the dynamic dashboard for this segment. I will be happy to provide the link free of charge.
In this post I will describe how to use Market Intelligence for reducing perception of risk in the mind of a retail buyer. This knowledge will help you to make your negotiation processes shorter and your promotional subsidies lower.
So what are the challenges of getting your product on the shelf?
It should not be that difficult and expensive. After all, you are offering them an opportunity to make money, right? Right, but when you ask someone to sell your product, you also ask them to make an investment in:
Shelf space in their stores. That space produces revenue only if the product is selling well. They do not yet know how well your product will sell. Your sales forecasts are based on your assumptions and your biases, but you are asking them to accept the risk of losing revenue and to bear the cost of their shelf space.
Promotional cost. Channels need to bring traffic potentially interested in your product into their “space.” That is not cheap, particularly if your product is relatively new. Your sales forecast cannot be based just on early adopters’ success.
Cost of transaction. Even if your products fly off their shelves, there is no guarantee of profit. The cost of returns and exchanges can ruin the margins very quickly if customer expectations are not met. Overpromising by your marcomm may create expectations in the minds of your customers that cannot be met by the product experience. Substandard QA can result in a Reliability crisis that will tax the channel’s margin and negatively impact their Brand value.
Product training cost. The channels’ employees have to understand the benefits of your product to recommend it to their customers. Training takes time and money. If there are too many products on the shelves and a proper training investment is not made, your product is not likely to sell well. A good example is Microsoft’s complaint that mobile retailers’ sales forces ignore Windows phones, which it blames for their low market share. Another example is described in the Customer Intelligence Analysis of Best Buy Downfall.
Remember – the channels have options to carry other products that compete in the same space and present lesser risk in their estimation. So here is the challenge: What can you do to help them reduce their risk assessment of your product?
You can start by watching this video:
Like this video? Share this tweetable -
“Advertising can help you sell good products, but only your customers can help you build a great Brand!” – Click to Tweet
Bring the third party, higher authority to this negotiation
That is right! The buyers do not believe your sales forecast or the market research you have paid for. They also know how easy it is to solicit a desirable response by posing cleverly loaded questions in your surveys and focus groups. The buyers want to know why consumers will choose your product over those already available in the stores. And they want to hear this directly from consumers, without solicitation or influence over their opinions – from consumers whose opinions matter because they have spent their own money to experience products like yours.
1. Identify the products you want to displace.
2. Leverage their customers’ experiences to find your product competitive purchase drivers.
3. Quantify the impact of this information on your sales projections, and share the evidence with your channel partner.
4. Celebrate and watch the product roll off their shelves, but don’t forget to monitor customer feedback to make sure they still think your product is worth buying over its competitors.
Let me know if you want to ask any specific question privately about how this strategy could be implemented in your situation, or share your thoughts about this Marketing Intelligence advice by leaving a comment below.
A few years ago, while I was looking to buy my first smartphone, I noticed that everyone was trying to sell one to me based on device specifications. No doubt, the specifications, features and functions are important, but it is not always clear how they translate into customer satisfaction with the smartphone after the purchase.
I realized that customer reviews are much better sources of information to help me choose the right smartphone for me. They allowed me to understand better how the phones perform in specific circumstances and to decide which one is the best for my personal priorities.
The only problem with this approach is the time and effort it takes to find customer reviews for the smartphones I am interested in, read through many stories and distill the critical information that helps me make the right decision. I have had really good experiences with every product I chose using this approach. Here is an example: Five years ago, I was shopping for a small but powerful notebook for business travel. My technologically superior friend was shopping for a similar device at the time, so I asked for his recommendation. He chose a very popular brand with great specifications and a reasonable price. I decided to check the reputation of the recommended notebook and found it to be spotty at best. I spent 58 hours researching customer reviews before making my choice, and my notebook still works great for me, 5 years later. My friend had all sorts of reliability problems with his, and is on his third notebook now.
Now to the good news. You may have heard about the “Big Data” technology that helps governments and large companies to decipher huge volumes of data to produce meaningful and useful information. They use the technology for national security, market trading and marketing research applications, to name a few. Well, now this technology can help consumers to make better smartphone selections based on their personal priorities and massive volumes of customer reviews in seconds.
Here is the site where you can find unbiased smartphone recommendation based on Opinion Mining of customer reviews and your personal priorities – http://www.what-is-the-best-phone-for-me.com/
“Consumers are overwhelmed by the volume of choice and information they’re exposed to, and marketers’ relentless efforts to “engage” with them.”
Simplicity is NOT a limitation of choices. In fact, the complexity of offers often masks the fact that multiple offerings are not really different from each other. Marketing noise overloads consumers’ cognitive capabilities and drives them to alternative sources of information to help them make purchasing decisions. That undermines the purpose of marketing communications and diminishes the quality of the overall Customer Experience. Customer satisfaction scores often reflect that.
We all read the news earlier this year of Samsung overtaking Apple as the largest manufacturer of smart phones; however, the latest American Customer Satisfaction Index survey still finds Apple iPhone customers to be much more satisfied than any other smart phone customers. It is interesting to note that our own Customer Intelligence research shows that
“Apple 4S satisfaction has really jumped 12% with retirement of iPhone 4.”
while only one Samsung model out of the dozens marketed in that period (Q1 2012) earned similar accolades from its purchasers. How many of you have heard of the Infuse?
The question is: why do Samsung (and other companies) make it so complicated for consumers to choose their products? If differences in product specifications are so important, why are there thousands of consumers searching the Internet with inquiries like “which smart phone should I get, iPhone or XYZ?”
One may say that these questions are no longer relevant since Apple’s lock on the smart phone market was broken. I would like to suggest that the number of units sold does not automatically translate into profitability. Customer satisfaction correlates much more closely with higher margins. Samsung has to subsidize carriers to sell its phones, while the same carriers have to subsidize Apple for the privilege of selling iPhones.
“It is unrealistic for any company, even Apple, to hold 100% of any market. The iPhone will never have majority share. It does not have to. What matters is profit share, and that is where Apple is winning. Apple earns 73% of the cell phone industry’s profits with just 8.8% market share.”
This is just one example of simple choice vs. the traditional approach to marketing, and it is not limited to the smart handset market. Consumers do not buy technical specifications, feature sets or functions – that is what your engineers sell.