
Posts tagged with ‘Voice of Customer’

Experience of Customers Helps to Forge Shoppers’ Expectations

If you believe, as I do, that happiness is a matter of expectations management, then customer reviews are your best bet for selecting your next car, smartphone or restaurant, because your selection will likely deliver an experience that matches your expectations.

“The big advantage of a major brand over a small competitor is a residual expectation, in the consumer’s mind, of a reduced probability of disappointment. When quality is hard to predict, a brand serves as a proxy for the likelihood of a good experience. The detailed, product-specific experiences shared by actual customers help you decide whether this product is for you. Surely, this information is not perfect, but when it is available in statistically representative volume, it is the best way to shrink the gap between your expectations and your experience.”

Skeptics often argue that because reviewers may not be like you – people have very different attitudes and product adoption skills – the usefulness of their reviews is highly questionable. While this is indisputable, the large number of reviews and the filtering options available allow for a reasonably easy match between a shopper and the reviewers’ profiles. The personality and attitudes of a customer shine through the language of a review and help a shopper “try on” the experience of people like her. The absence of a “story” is one of the key reasons why ecommerce sites that substitute score cards for actual reviews experience lower traffic and visit-to-purchase conversion rates than competitors who publish complete reviews.

Most of the content generated by customers is fact based. There is no sugar coating or attempt to manipulate your emotions. The language of reviews tends to be specific, matter-of-fact and focused on the personal experience the writer had with the product in question. Warm and fuzzy is much less effective when it faces meaningful competition from more “rational” sources.

The language also betrays fakers and the dumb marketers who sometimes try to manipulate the market. Faking reviews effectively is not as easy as people may think. The language used, vague descriptions of details and the lack of firsthand experience are easily noticed not only by an attentive reader, but even by algorithmic filters that consistently give such reviews a low confidence score. In addition, it is impossible to tip the scale with an occasional fake review, and a sufficient volume of them can be easily spotted and traced to the source. The financial penalties imposed by the FTC for publishing fake reviews have run into hundreds of thousands of dollars, but that fades compared with the damage to the reputation of a company that commissions such activities.
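To make the “algorithmic filters” point concrete, here is a toy heuristic in Python – a purely illustrative sketch, not any vendor’s actual detector, with invented word lists – that scores a review’s credibility by concrete, experience-specific vocabulary versus piles of vague superlatives:

```python
# Purely illustrative heuristic: reward concrete, personal-experience
# vocabulary and penalize stacked superlatives. Word lists are invented.
VAGUE = {"amazing", "awesome", "best", "great", "perfect", "excellent"}
CONCRETE = {"battery", "returned", "months", "weeks", "installed",
            "shipping", "size", "manual", "warranty", "charger"}

def confidence_score(review: str) -> float:
    """Return a 0..1 credibility score; low scores flag likely fakes."""
    words = [w.strip(".,!?") for w in review.lower().split()]
    if not words:
        return 0.0
    vague = sum(w in VAGUE for w in words)
    concrete = sum(w in CONCRETE for w in words)
    # Specific details raise confidence; superlative pile-ups lower it.
    return max(0.0, min(1.0, 0.5 + 0.1 * concrete - 0.1 * vague))

print(confidence_score("Amazing! Best product ever, awesome, perfect!"))   # low
print(confidence_score("The battery lasted two weeks; I returned the charger."))  # high
```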

It is surprising how few marketers consider customer reviews a valuable source of marketing intelligence, simply because they cannot control and/or manipulate them. Instead they prefer to rely on “big data” acquired without customers’ consent and often against their wishes. Marketers who do hear what real customers want to tell them quickly discover what, specifically, makes one product more valuable than the alternatives to their best customers and prospects. Actually using this intelligence to support their product marketing processes helps them consistently outsell their competitors by a wide margin, without price discounting.

You cannot eliminate uncertainty, but experiential information provided by customers helps to resolve it much faster and much more specifically than any brand advertising or company-centered survey.


Lies and big data

The other week someone brought to my attention an article titled “Lies Data Tell Us” by Steven J. Thompson, CEO at Johns Hopkins Medicine International. The title took me aback, but as I read it I realized the article was really about the better practices required to make data more useful. The provocative and somewhat misleading title resulted in nearly 12K views, dozens of comments and hundreds of shares in social media. When I went looking for the article again, the search returned a number of links that associate data, big data, etc. with “lies”. Most of the authors blame data, or unscrupulous mining and analysis technology vendors, for all sorts of business problems resulting from “data lies”. It seems some of these authors use the following definition:

 Data Scientist (n): A machine for turning data you don’t have into infographics you don’t care about.

I would like to examine a process people often follow when they deal with data.

Since the term “big data” is thrown around a lot, I would like to define it in the context of this article. The mere volume and velocity of data do not constitute “big data”, but a multiplicity of data sources and data formats does. From that perspective, the term “big data” describes enterprise data aggregated from multiple departments and multiple databases (i.e., the data warehouse model), linked with data from sources external to the company, in structured and/or unstructured formats – a sketch of such aggregation follows the list below. Mining such a set of “right data” may produce very valuable intelligence. However, it can also result in a waste of money, effort and opportunity if

  • The mining process does not produce relevant new intelligence, or
  • The intelligence is not used for action.
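As a minimal sketch of the aggregation just described – every file, column and value here is invented for illustration – linking warehouse-style internal data with an unstructured external source might look like this:

```python
import pandas as pd

# Structured internal sources (the warehouse model): sales and returns.
crm = pd.DataFrame({"product_id": [1, 2], "region": ["US", "EU"],
                    "units_sold": [1200, 640]})
returns = pd.DataFrame({"product_id": [1, 2], "return_rate": [0.04, 0.11]})
# Unstructured external source: customer reviews keyed by the same product.
reviews = pd.DataFrame({"product_id": [1, 1, 2],
                        "review_text": ["easy setup", "flimsy lid",
                                        "stopped working"]})

internal = crm.merge(returns, on="product_id")      # structured-to-structured join
big_data = internal.merge(reviews, on="product_id") # then link the external source
print(big_data)
```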

We act when we believe the action will result in a desirable outcome. We never know for sure, but we estimate the probability based on our experiences in similar circumstances. These dynamics influence how we select, search and interpret data into intelligence, or the lack thereof. Subconsciously, we select data that would likely confirm our existing beliefs. This usually means that we rely heavily on internally generated (controlled) data and heavily discount externally generated data.

We like to use terms such as “unbiased” and “objective”, but the very process of selecting a data set introduces bias and subjectivity. It is unavoidable. A much better practice is to embrace and understand the bias pragmatically, and to define the purpose of the inquiry. You don’t see people mining a mountain to find “whatever” is there. They carefully select and test an area for indications of a high concentration of the desired mineral before exploration and mining start.

If the purpose of your inquiry is improvement of customer experience, assemble a data set from the most relevant internal and external data sources available. If you limit your data set to company-controlled data, you introduce a company bias. In that case the likelihood of discovering any new intelligence for improving your customers’ experience is quite low – forget about data mining and just continue your archaic “guess and validate” surveying exercises. If you include data generated by customers without solicitation or control, you will introduce a customer bias. Adding channel-generated return data and customer service data allows the biases to be balanced. Correlating trends across controlled and external data sources helps to discover potential gaps between your beliefs and the emerging evidence. However, even the best evidence cannot automatically make people abandon their beliefs and start acting differently – but that is the subject of another article.
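A minimal sketch of that trend correlation, with invented quarterly numbers: when the company-controlled satisfaction trend and the unsolicited-review sentiment trend diverge, the correlation flags a gap between belief and evidence.

```python
import pandas as pd

# Invented quarterly series: internal CSAT surveys vs. sentiment extracted
# from unsolicited external reviews. A weak or negative correlation suggests
# the controlled data is not telling the whole story.
trends = pd.DataFrame({
    "quarter": ["Q1", "Q2", "Q3", "Q4"],
    "internal_csat": [4.5, 4.6, 4.6, 4.7],           # company-controlled
    "external_sentiment": [0.62, 0.55, 0.48, 0.41],  # unsolicited reviews
})
r = trends["internal_csat"].corr(trends["external_sentiment"])
print(f"correlation: {r:.2f}")  # strongly negative here: investigate the gap
```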

The point is – data cannot lie to us; we have to do it ourselves by not mining it honestly and competently.

B2B Customer Experience Management – a story from the trenches (Part 2)

In last week’s post I wrote about the reasons why examples of B2B Customer Experience Management successes and failures are not as widely available as B2C ones. I also started describing a specific example of the pitfalls on a journey of CX discovery. This post is about negotiating those pitfalls and translating findings into actions.

The suggestion was made to take another stab at the problem, but from a different perspective. Until then, all inquiries had focused on the product. Perhaps focusing on why the visitors downloaded the free version, or even why they came to the website in the first place, would provide insights that would help to increase the conversion rate. It was clear that the free version was not sufficiently meeting customer expectations, as only 18% were still using it for “minor” projects six months after the download. What was not clear was what kind of projects they had hoped to tackle with the product. This is not an easy question to tabulate answers for. However, a clear and statistically representative answer would help us understand whether our website attracted the “wrong” customers, i.e., whether there was a mismatch between the problems they had and the solutions we offered.

Interestingly enough, the “survey” (with open-ended, free-format questions that focused on the customer’s experience instead of the company’s problems) generated over a 13% response rate. The automated analysis of this feedback exonerated the digital marketing team – they had been attracting the “right” customers – but the customers’ actual experience with the product did not live up to the expectations created by the marcom. The detailed analysis of the collected customer feedback produced a list of CX attributes in order of their importance to the customers, as well as a measurement of the delta between their expectations and their experience (example). Two issues stood out, causing customer disappointment of 25% and 38% respectively: customer support and usability. The first finding incensed the customer support team, who stormed in armed with an NPS of 87. However, the customer support team was serving only paying customers, while the freemium ones were supported by other customers (the community) and an automated knowledge system. That caused us to separate the customer feedback data into paying and freemium customers’ contributions. Indeed, paying customers did not experience any disappointment with customer service. Usability was a problem for them as well, although with a much lower impact.
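A hedged reconstruction of that segmentation step – the attribute scores below are invented, and “disappointment” is modeled simply as importance minus satisfaction:

```python
import pandas as pd

# Invented scores: importance = share of mentions of an attribute,
# satisfaction = average sentiment toward it, per customer segment.
fb = pd.DataFrame({
    "segment":      ["paying", "paying", "freemium", "freemium"],
    "attribute":    ["support", "usability", "support", "usability"],
    "importance":   [0.80, 0.90, 0.80, 0.90],
    "satisfaction": [0.78, 0.70, 0.50, 0.62],
})
fb["disappointment"] = fb["importance"] - fb["satisfaction"]
# Splitting by segment shows support is a freemium-side problem only,
# while usability disappoints both segments.
print(fb.pivot(index="attribute", columns="segment", values="disappointment"))
```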

A deeper dive into their comments produced the following insight: the product has sufficient functionality to meet target customer expectations, but only the savviest and/or most technical users are capable of figuring out how to draw on this functionality to achieve the goals they expected the product to deliver.

Plan of action:

  1. Extend full customer support (2nd tier/priority) to freemium customers, and offer 1st tier as a paid option without conversion to a paid license.
  2. Launch a process-driven, i.e. customer-centered, UX study to learn how to simplify the use of the product.
  3. Re-design the product front end based on the UX study findings.

Afterword

Implementation of the first step of the plan resulted in 4% additional revenue from an increased conversion rate after the very first quarter. Learning from Zappos’ experience, management shifted 20% of the marketing budget to customer support, which is now considered a revenue-generating department. The subsequent steps, as they were gradually implemented into production, reduced the customer support load to below its original cost per customer.

Most of us are very focused on what we think we do – product people are product-centric, customer service people are support-centric, etc. – but we are all in the business of delivering the best customer experience, and we should excel in our part of it without losing focus on the big picture.


B2B Customer Experience Management – a story from the trenches


You may have noticed that most publicly available research into Customer Experience is focused on consumer products and companies. There are a few good reasons why this happens:


  1. We are all consumers, and it is easier both to write about, and to relate to as readers, examples and ideas that involve consumer issues and experiences.
  2. The evolution of the social customer affords greater transparency – consumers of goods and services are rarely limited in their capacity to share their experiences (customer feedback) publicly. Corporate customers are severely restricted from doing so, which limits the opportunity for open analysis and discussion of specific examples and practices. We humans learn best from stories.
  3. B2B Customer Experience Management practitioners often limit their ambitions to the Customer Satisfaction and User Experience areas of the discipline. Tim Carrigan explored this set of beliefs in his excellent article – B2B versus B2C – Debunking Five Customer Experience Myths.

It is important to design the research from an outside-in perspective, because a poorly focused B2B CX inquiry can miss business targets entirely and discredit a CEM initiative.

Outside-in perspective

Here is an example.

A couple of years ago we were working with a medium-sized B2B software company that was relatively well known to the business community in its market segment. The company engaged with most of its sales prospects via its website, where visitors could learn about the products and download a free copy for evaluation. While the site traffic and the rate of downloads were reasonably healthy, conversion from the freemium version to a paid license was miserably low. Two hypotheses were developed to explain this problem:

  1. Download and installation complexity may have prevented users from experiencing the value of the products. We could measure the number of downloads, which was reasonably good, but not how the products were installed, configured or used.
  2. The paid product lacked functions and features valuable enough over the free version to give users sufficient motivation to convert.

Marketing launched a survey initiative to validate the first hypothesis and designed a five-question form that was emailed to visitors a few days after they downloaded the free version of a product. Despite a very low participation rate (below 1%), the survey responses overwhelmingly rejected the first hypothesis, as 89% of respondents reported no negative experience with download and installation.

The second supposition proved much more difficult to tackle. Survey questions about product functionality yielded an even lower response rate and provided no clear guidance. A focus group was presented with a list of functions and features considered for future development, which participants were asked to prioritize. They were asked whether inclusion of these high-priority functions in the paid version of the product would justify conversion from the free copy, and the majority answered positively. However, upon the new version’s release the conversion rate did not improve at all. Corporate management came down hard on Marketing, who pointed a finger at Engineering, who pointed it right back – the blame games began!


I will continue with the conclusion of this story next week.

Ode to Customer Feedback from Social Media

There are five reasons why the Voice of the Social Customer is more valuable than traditional customer feedback programs:


1. Social Media Voice of Customer is unsolicited – customers share their experiences online motivated primarily by one of the following desires:

  • to help other consumers make a good purchasing selection
  • to get the attention of providers by making their grievances public
  • to assert themselves as consumer mavens

Solicited feedback, in the form of survey responses or focus group results, is colored by participants’ consideration for the emotions of the researcher or moderator. More than a few times I have seen consumers make enthusiastic promises to buy and recommend a food product they had just tasted, then spit it out in disgust once out of sight of the tasting booth.

2. Social Media Voice of Customer is customer centric – customers describe their own experience rather than answer somebody else’s closed-ended questions. They describe what is important to them, in their own words. Sure, that is not easy to tabulate, but “easy” is not what makes it valuable.

“The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily really isn’t important. This is blindness. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.”

Daniel Yankelovich. “Corporate Priorities: A continuing study of the new demands on business.” (1972)

3. Social Media Voice of Customer is voluminous – it often provides much more representative data sets for analysis than traditional, company-controlled methods.

4. Social Media Voice of Customer is inclusive – customers describe experiences that are not limited to your products or brand. It offers the opportunity to learn how the customer experience provided by competitive products measures up to yours.

5. Social Media Voice of Customer is authentic and transparent – everybody can see who said what, where and when about a product. Consumers can relate to how the product was sold and used. They can decide whether its limitations and benefits apply to their circumstances. Consumers are smart enough to distinguish the genuine experiences of their peers from idiotic and illegal attempts to fool them into purchasing a product that does not fit their needs. They are also capable of understanding the difference between a legitimate grievance and an angry rant. Fostering social media customer feedback builds your brand and improves sales results, in addition to providing customer experience intelligence. Traditional, company-controlled Voice of Customer is only meaningful for internal consumption, and even then it often serves only as a self-serving pat on the back.

A study shows that 93% of people who conduct research on review sites typically make purchases at the businesses they look up.

I am not arguing that we should abandon traditional methods – they can be very valuable for hypothesis validation. However, social media Voice of Customer can provide much richer market intelligence, second only to ethnographic research, but without its cost and statistical representation limitations.


Customer Experience: Easy to Measure, Hard to Change (Part 2)

This sequel was inspired by comments and questions posted in multiple LinkedIn groups where the first part was published. Special thanks to Richard Hatheway.

One of the first reasons people give to explain the difficulty of implementing customer-centric change is a lack of leadership support. This fuels a debate in customer experience management communities about the need for – some say the rise of – the Chief Customer Officer. Personally, I always thought that was the role of the Chief Marketing Officer of an organization, but apparently CMOs, as a group, do not live up to that expectation – i.e., they focus on priorities other than making their companies more customer centric.

In my experience, the addition of another title without P&L accountability has never magically created the leadership presence the CX communities are yearning for. In most instances, a CxO’s political power directly relates to their proximity to revenue generation (in fast-growing companies) or spending control (in the rest).

I am not an expert in leadership theories, but I would like to suggest that if you really want to see a change in the way your organization relates to its customers, you have to take the risk of doing what needs to be done to enable such change. Here are a few challenges to consider:

Common knowledge is often a myth


It is a common misconception among executives and their subordinates that the Customer Service organization is responsible for Customer Experience. If you are reading this article, I assume you know better. If you need convincing, read this before you go further. The best way to debunk this convenient myth is to use evidence in the form of data that comes directly from customers’ mouths.


Holistic experience is the synthesis of many attributes


The use of aggregate and derivative metrics will not be very effective if the people who need to change their thinking cannot see the connection between their actions and the effect of those actions on customer experience. Louis Brandeis offers a guiding principle:

“If the broad light of day could be let in upon men’s actions, it would purify them as the sun disinfects.”

You have to isolate specific, measurable attributes of your customers’ experience and link them to the operational and financial metrics of the appropriate department and/or process within your company, or your channel, to expose cause and effect.
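As a toy illustration of such a linkage – the metrics and monthly numbers are invented – pairing a CX attribute with the operational metric of the department that owns it can make the cause-and-effect relationship visible:

```python
import pandas as pd

# Invented monthly data: warehouse pick-to-ship time (operational metric)
# paired with delivery-speed complaints extracted from customer feedback
# (a specific, measurable CX attribute owned by the same department).
df = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr", "May", "Jun"],
    "pick_to_ship_hours":  [18, 22, 30, 27, 21, 17],
    "delivery_complaints": [11, 15, 26, 22, 14, 10],
})
# A high correlation shows the owning team how its process drives the CX score.
print(df["pick_to_ship_hours"].corr(df["delivery_complaints"]))
```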

Linkage adoption is the most critical condition for actionability


In the Customer Experience business we are often dealing not with facts but with evidence-based opinions. Using the survey method – best suited for validating a hypothesis – as a primary source of evidence will inevitably taint these opinions with the bias of the survey designer. The best practice for overcoming the bias and representation arguments is to provide multiple sources of evidence that point toward a coherent opinion. That will not eliminate argument, but it will make it much more constructive. A fact is often nothing more than a commonly accepted opinion supported by commonly believed evidence. It is therefore paramount to socialize proposed linkages very extensively – to gain acceptance of the concept and subsequent adoption – before publishing any measurements.


Measure and benchmark change


People are motivated to act in order to catch up to a leader or to maintain a leadership position. When people accept your metrics, they surely are inspired to do the right thing, particularly if they can see that their actions produce desirable and measurable results.

“Static” measurements do not motivate action nearly as much as measuring how CX, and the linked operational metrics, have improved since a change in a process was implemented. These measurements are particularly effective when relative competitive information is available for comparison. For example, a company we worked with measured the social reputation of its consumer product a quarter after launch. It was lower than expected, and about 6% below the average reputation of products in its segment. Detailed CX analysis discovered that some customers had figured out how to program the product to control home appliances (not an intended use). Marketing was very hesitant to do anything with the finding, but eventually decided to publish programming instructions and communicate them in social media. The quarter after, the NPS jumped 22% and the product took the lead in its segment. That change alone was credited with a 17% increase in the sell-through rate.

The suggestions above are obviously not a comprehensive guide to the organizational change management of customer experience, but they focus entirely on what CEM professionals can do to provide operational leadership in the spirit of the words of Mahatma Gandhi:

You must be the change you wish to see in the world.

Apple at the crossroads

As many products and services become more agile by design, even the best-designed products have a shorter and shorter time to enjoy superior profit margins before the competition starts to catch up. Patent protection and brand recognition do help to extend this time, but the clock keeps accelerating.

Apple’s iPhone provides a good case study of this phenomenon. Ever since its initial introduction by Steve Jobs, the iPhone has been the gold standard for the smartphone category. It became a status symbol of the Silicon Valley technocrati, and every new model was greeted by millions of fanboys and fangirls lining up. Apple bestowed the honor of selling it on a chosen few channels and demanded handsome subsidies in return. The reason for this success was the iPhone’s superior customer experience, designed into the product, which exceeded every other in the category by a wide margin. Well, this margin has finally shrunk.

[Chart: Social NPS – Apple iPhone vs. Samsung Galaxy]

Last week’s financial news shows that the hegemony of the iPhone may be over. Make no mistake, it is still a great product, but the competition has caught up in creating a customer experience as good as or better than Apple’s. People bought 31.2 million iPhones last quarter, but their experiences were rather underwhelming – not because there is anything wrong with their iPhones, but because their expectations were too high, as many customers came to Apple for the first time. Samsung, HTC and Nokia customers who have “upgraded” to the iPhone do not find the overwhelming difference they expected.

These scores will likely not match company-sponsored survey results, as they are extracted from the sentiments customers express when recounting their experience in unsolicited reviews shared online, often anonymously. What these scores say, however, is that only 6% of iPhone 5 customers (Net Promoters) would actively put out Word of Mouth to promote the phone, compared to 18% for the Galaxy S4.
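For readers unfamiliar with the metric, a Net Promoter-style score is simply the share of promoters minus the share of detractors. The sketch below derives one from star ratings; the 1–5 star mapping to promoter/detractor buckets is an assumption for illustration, not the methodology behind the scores quoted above.

```python
# Illustrative Net Promoter-style score from review star ratings.
# Assumed mapping: 5 stars -> promoter, 4 stars -> passive, <=3 -> detractor.
def social_nps(ratings: list[int]) -> float:
    """NPS = %promoters - %detractors."""
    promoters = sum(r == 5 for r in ratings)
    detractors = sum(r <= 3 for r in ratings)
    return 100.0 * (promoters - detractors) / len(ratings)

print(social_nps([5, 4, 4, 3, 5, 2, 4, 4, 3, 4]))  # 20% - 30% = -10.0
```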

The differences are even more pronounced when the iPhone is compared to the BlackBerry Z10, HTC One or Lumia 928.

Out of the 25 attributes of customer experience most important to customers of these smartphones, Apple still dominates in one – Design – which customers rate 28% above the group average. Details are available on request.

It appears that the marginal improvements introduced in the last two iPhone models – the 4S and 5 – failed to separate the brand from the pack. In addition, US carriers have started to experiment with unbundling handsets from services, and the subsidies to manufacturers like Apple are threatened. At this point the category shows signs of maturity. In the past, Apple’s market researchers were able to discover and exploit the latent needs of increasingly demanding consumers. Is today’s Apple capable of inventing a new category of products to march on as the industry leader?

[Chart: NPS trend by iPhone model]

Social Levers for Effective Brand Management


This article is a sequel to The Essence of Brand and Customer Experience post that I wrote a few months ago. It explores further how to use Social Reputation metrics as levers for proactive brand management.

Over five years ago, Charlene Li and Josh Bernoff wrote in “Groundswell”:

“Marketers tell us they define and manage brands. […] Bull. Your brand is whatever your customers say it is. And in the groundswell where they communicate with each other, they decide.”

These words inspired software entrepreneurs to create hundreds of social media listening tools and sent brand managers to shop for them in droves. However, these words did not motivate many marketers to abandon futile attempts to “direct” a groundswell and to start “surfing” it instead.

Tools (i.e., technology) cannot change the way people conduct business. A change in business process, leveraged by technology, is needed to achieve that. As long as business practices are focused on inside-out “shouting,” no “listening” technology can help you ride the groundswell to a larger market share and better margins.

Here are a few ideas for integrating social levers into the brand management process:

  1. Customer Satisfaction or Loyalty metrics are much more meaningful if they come transparently from Social Customers, as opposed to company-controlled surveys. Internally generated metrics may make your management feel good, but they do not help to increase your brand value in the minds of consumers, who learn about your brand’s reputation in social media you cannot control.
  2. A single point of reference – your brand’s CSAT or Loyalty metric – is meaningless in terms of a business process. You need to understand the competitive position of your brand to chart a course for improving that position.
  3. Brand sentiment alone is not sufficient for understanding which products associated with your brand are detrimental to its social reputation. Optimization of a product mix by channel, based on social customer feedback and operational metrics, can substantially improve the overall brand reputation.
  4. Linking operational and financial metrics with social media signals produces the most reliable estimates of a suggested action’s impact.

The entire process description, along with specific examples, can be downloaded from Amplified Analytics website.


Social Customer and the Quest for Better Margins

It is no secret that most new products taken to market do not perform to management expectations. While there may be a myriad of reasons to explain the high rate of failure, I would like to focus on the fundamental inadequacy of commonly used market segmentation methods. Don Peppers wrote that

“Many, if not most, corporate CRM efforts have floundered or failed because they were oriented exclusively around segmenting customers by their value to the company – diamond, platinum, gold, whatever. But if you want to be smart about your customers, then you have to know how customers differ from each other in terms of what they need from you, not just what you want from them.”

Following this line of logic, most product developers fail because they segment their customers by demographic characteristics (age, gender, education, income, etc.) rather than by their needs. The assumption these marketers make is that a similar demographic segment will have similar needs. The introduction of customer “personas” allows marketers to assign a proposed product’s functions and features to specific customer groups within the demographic segments. Subsequently, all these assumptions are validated by surveying consumers who belong to the chosen segments and correspond to a “persona” as defined by the product developers.

I see three fundamental flaws in the described methodology:

  1. Multiple layers of assumptions are constructed before the validation of an entire construct;
  2. The validation methods are highly subjective and often produce low integrity results;
  3. The method uses an inside-out view of potential customers.

These flaws lead to the introduction of products that are very difficult for consumers to differentiate, and as a result to lower profit margins.

The advent of the Social Customer provides product developers with an alternative segmentation method – an approach that zeroes in on a market segment composed of customers (not consumers) who have purchased the existing products that the planned product intends to challenge. In particular, it analyzes actual customer experience with those existing products. The relative frequency with which customers mention certain attributes tends to indicate those attributes’ importance to the customers. Customers’ satisfaction or dissatisfaction with a product in terms of those attributes, on the other hand, tends to denote particular strengths or deficiencies. Accordingly, the white spaces where existing products fail to meet key customer expectations present potentially lucrative opportunities for new product differentiation.
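A schematic sketch of that frequency-and-sentiment analysis, with simple keyword matching standing in for real attribute extraction and invented reviews:

```python
from collections import Counter

# Mention frequency proxies an attribute's importance; mention polarity
# proxies satisfaction. Attributes, keywords and reviews are invented.
ATTRS = {"battery": "battery life", "screen": "display", "setup": "ease of setup"}
reviews = [("the battery dies fast", -1), ("battery barely lasts a day", -1),
           ("gorgeous screen", +1), ("setup was painless", +1),
           ("battery is weak but screen is great", -1)]

mentions, sentiment = Counter(), Counter()
for text, polarity in reviews:
    for keyword, attr in ATTRS.items():
        if keyword in text:
            mentions[attr] += 1
            sentiment[attr] += polarity

for attr, n in mentions.most_common():
    # Frequently mentioned + negative = a white space to target.
    print(f"{attr}: importance={n}, satisfaction={sentiment[attr] / n:+.2f}")
```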

This outside-in approach may be reminiscent of ethnographic market research, but instead of direct observation of customers it relies on the analysis of online customer reviews, generating much more statistically representative results much faster.

Customer Satisfaction—the Ultimate Vanity Metric?

Almost every company measures Customer Satisfaction or its variations at considerable expense and effort.

Some companies attempt to use the metric for advertising. The metric is supposed to convince a shopper to join the ranks of the company’s customers, because those customers are supposedly 97% satisfied. These numbers are impossible for a consumer to validate, methodologically or anecdotally. Besides, there may be information floating around social media that disputes the company’s customer satisfaction claims, however unfairly. In my opinion, brandishing customer satisfaction scores without complete transparency will more likely lead to an erosion of trust than to an increase in sales. Social Customers trust each other’s experiences more than they trust brand claims.

Many companies use the customer satisfaction metric to judge departmental performance while their customers keep churning, for reasons the measured business unit may have no control over whatsoever. The departments do well if the metric for the last calendar period scores higher than the previous one. If the last score is lower, bonuses are not paid and changes to the status quo are demanded. Sometimes the change is a switch to a different methodology for measuring customer satisfaction.

A Customer Satisfaction score, regardless of methodology, that is disconnected from specific and systematic action targeting the improvement of customer experience is the ultimate vanity metric.

Richard H. Levey wrote that

“True customer insight requires first knowing (discovering) which attributes matter to the customer and then determining how the firm is performing on those attributes. If a customer’s experience occurs over multiple interactions and various media, then each needs to be measured to drive insight precision.” – Stop Measuring Customer Satisfaction and Start Understanding It (emphasis mine).

I would add: stop counting the clicks, “likes” and re-tweets, and start understanding WHY customers do what they do. Stop tabulating survey scores and start reading the comments – you may learn something that helps promote an action of positive change.