
Archive for October, 2009

How it all started

I wrote this story a while ago, before this blog was started, intending to eventually publish it here.

It all started in August 2007, when I had to travel across the continent almost every week and started shopping for a lightweight laptop computer suitable for business travel.

While selecting the parameters for my planned purchase in terms of specifications was not too difficult, if you know what you want, predicting the quality of the ownership experience proved much more complicated. I have been an online shopper for a long time, but the rising cost of shipping and the hassle of dealing with less and less responsive customer service reps started to outweigh the original savings and conveniences. As a result I became much less impulsive with my online purchases: in this economy there is less money to spend and less time to waste, so I figured a small investment in initial research would be a good thing.

I found three major sources of information that were quite useful to assist me in making the choice:

  1. Product specifications provided by the Original Equipment Manufacturer that are part of advertising and marketing collateral and designed to create our expectations of functionality and performance, but provide little help to gauge the probability of these expectations being met;
  2. Editorial Reviews provided by magazine and online publishers that offer us a glimpse of potential user experiences. These are quite valuable, but substantially removed from the regular consumer environment: the editors test carefully selected and tuned equipment provided at no cost to them by the OEM, and don’t have to deal with fulfillment, delivery and customer service issues. Unfortunately, Consumer Reports did not review the laptops I was interested in, so their recommendations were not available. A relatively quick tour of a few popular web sites helped me create a short list of the two laptops that met my requirements;
  3. Consumers’ (Users’) Reviews provided by actual purchasers of the product who share their personal experience and rate their satisfaction with the product. There is a relatively high probability that one will be very satisfied with a product that has been rated very highly by most reviewers. Conversely, you will do well to avoid products that are rated very low by most reviewers.

Two laptops that I had short-listed for purchase, based on product specifications and editorial reviews, had very similar reputation ratings of 3.5 stars out of 5, which is not perfect, but acceptable. So what is the next step? Toss a coin? Is it safe to assume that these two products have the same reputation and would be equally satisfying purchases?

It turns out that this would have been a very bad assumption.

I am not usually known for my patience (those who know me, please stop laughing: “Good men know their limitations”), but this time I decided to assess whether the reasons that prevented these laptop users from giving the highest satisfaction rating of 5 were “showstoppers” for me or not; we all have our own limits of tolerance for different experiences. I ended up hunting for and reading through dozens, sometimes hundreds, of reviews, and found that the most negative experiences with laptop #1 centered on overheating issues and the resulting customer service hassles, while the negative reports on laptop #2 focused on order processing and fulfillment problems.
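
The trap of identical averages is easy to see with a toy example. The rating samples below are invented for illustration (not actual review data), assuming a simple 1-to-5 star scale:

```python
# Two hypothetical products with the SAME 3.5-star average rating,
# but very different rating distributions (all numbers made up).
from statistics import mean, stdev

laptop_1 = [5, 5, 5, 5, 1, 1, 5, 1]   # polarized: great unit, or it overheats
laptop_2 = [4, 3, 4, 3, 4, 3, 4, 3]   # consistently "good, not great"

for name, ratings in [("laptop #1", laptop_1), ("laptop #2", laptop_2)]:
    # Same mean, but the spread tells two very different stories.
    print(f"{name}: mean={mean(ratings):.1f}, stdev={stdev(ratings):.2f}")
```

The averages match, but the spread shows that laptop #1 is a gamble while laptop #2 is merely unexciting, which is exactly the nuance that reading the individual reviews revealed.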

I invested 8-10 hours of my time in research and spent $120 more than originally anticipated to purchase laptop #2 from an online retailer that had it in inventory, bypassing the fulfillment problem, and I have been enjoying my laptop without any reliability problems. I hear the laptop #1 OEM has finally found workarounds for the overheating problems and is now working to pacify many of its very vocal and unhappy customers.

As smug as I am about this experience, such research requires too much time and patience, and I wondered how much easier it would be to have more meaningful product reputation ratings. So I looked, but could not find anything unbiased, consistent and verifiable that would work for me. That is how my new project, Amplified Analytics, was born. Please look around the site and let me know if it makes any sense to you. I would love to hear your experiences related to product reputations and user reviews.

Commentary on Desired Customer Outcome

I have encountered mixed emotions among some Market Research and Customer Experience Management practitioners about the usefulness of Customer Reviews as a source of real business intelligence, as opposed to their use as a marketing gimmick. I do not fancy myself a true professional in these fields, as I lack hands-on, hard-core operational experience; however, I question those mixed emotions and remain determined to develop technology that “listens” to the stories of customers to “learn” and measure how a product experience meets customers’ expectations. I ran across this post today from ClearAction that addresses some of these doubts:

What’s the difference between the way customers volunteer feedback versus the way they’re requested to give feedback? One revolves around outcomes in the customer’s world, whereas the other revolves around customer satisfaction enablers in the company’s world. True customer-centricity requires primary focus and decision motivations be centered on the customer’s world, rather than the company’s.

It is easy to imagine that politics, real or perceived loyalties, and conflicts of interest can skew the results of customer satisfaction research. However, biases, mistakes and algorithmic imperfections can also result in low-quality output. The method is less important than the intent.

Customers “hire” a product or service to get something done for them. When we understand the circumstances motivating the customer to hire a product or service, then we gain insight into the customer’s jobs-to-be-done. A great way to identify customers’ desired outcomes throughout the customer experience is to scan customer-generated inputs on your brand category. Good sources of customer-generated inputs include contact center and sales call logs and social media.

Ethnography, or observation research, is also instrumental in understanding outcomes in the customer’s world. What value does your organization place on these customer outcomes sources relative to your formal research that is typically organized from a customer satisfaction enabler viewpoint? Why not consider revising formal research to focus on customer outcomes rather than enablers? By really understanding customers’ jobs-to-be-done, constraints, work-arounds, hassles, and other elements of their world, new insights emerge for superior alignment with customers. Adopt the customers’ jargon — don’t make them adopt yours. Cater to the customers’ world — don’t make them cater to yours. Your jargon and world are customer satisfaction enablers, or a means-to-an-end toward customers’ desired outcomes. The outcomes are the direct link to re-purchase behavior and propensity to recommend a brand. In the end, it’s only the outcomes that matter.

The important point is that no single source of data, nor any single method by which such data is acquired, produces viable knowledge. At this point I need to channel Chance the Gardener from “Being There” by relying on my sailing experience: you cannot navigate by fewer than 3 points of reference; that is why the word “triangulation” was introduced. Our technological approach does not change this any more than the invention of GPS did.

From Data to Wisdom

A customer review produces one point of data relevant to a product. A statistically representative number of customer reviews of the same product produces a much higher-quality single data point for that product. It is a very good start, but it is still just a single point of data: CSI, CSAT, NPS, etc., depending on the methodology used to collect it. So what is the value of this point of data? Apparently it is quite significant when used for marketing, as people do pay attention to the recommendations of their “peers” or “influencers”. Online retailers know that conversion from visit to purchase is much higher for products that have a significant number of relatively positive reviews, and that is why they invest in collecting and managing access to these reviews. The value of this data for a product manufacturer varies from industry to industry.

A couple of weeks ago I attended a presentation by Munjal Shah, the CEO of Like.com and one of his slides really made me think.

Path 2 Wisdom

In other words, data itself is not actionable. Consider the actions a marketing product manager can take based on the data that their product ABC has a low satisfaction score. I cannot think of any action other than to learn more, i.e. to discover more data. Information is created when our marketing product manager (or product marketing manager) compares ABC’s satisfaction score with that of a competing product; a comparison of two points produces information, i.e. higher value.

Correlating the information produced by tracking these two data points over time with sales numbers can create knowledge: “a product with an inferior reputation tends to undersell its competition by X% when sold at a competitive (i.e. similar) price”. Now this is an actionable piece of knowledge, as our manager can attempt to discount product ABC to stimulate sales, or attempt to improve customers’ opinion of it.

I already wrote that most CE Marketing Product managers (the area of my focus) do not think they can actively manage the reputation of a product once it is released into the field. However, this belief is not based on any wisdom, empirical knowledge, or current data; it is based on the experience of working with traditional tools in an archaic (pre-social-media) market environment. The availability of customer feedback about product experiences, combined with modern tools capable of extracting actionable knowledge, enables organizations to establish the causal claim that “proactive product reputation management produces higher profitability than price discounting and defensive advertising”.

Re “Marketers Ignoring Customer Feedback from Social Media”

Very interesting results of the survey:

A Social Media Survey conducted on behalf of PRWeek and MS&L by CA Walker found that marketers don’t make changes to their products based on customer feedback, despite monitoring feedback being one of the most common business uses of social media in the first place.

The survey found that 70% of marketers say they’ve never made a change to a product or marketing efforts based on feedback from consumers on social media sites.

I have to second Larry Malloy’s comment.

I believe there are two reasons for this. First, we are still in the early stages of social media as a marketing tool. I believe this will change as the technology matures, potentials are stretched, metrics are determined, and processes are developed.

Second, there could be a disconnect between marketing and product management (you said the survey polled senior level marketers). As a product manager, I often used social media throughout the product lifecycle, and the executives I reported to often did not know where the new product ideas came from. And, what I learned through social media, I often further tested through more traditional marketing technologies like surveys, customer visits, interviews, etc.

Most Product Management and Marketing executives I have talked to are interested in listening, but have no strategy, processes, methodologies or best practices for acting on customer feedback. Most tools available today do not provide particularly actionable data either. I am not sure what should come first, but without these elements you cannot produce any ROI. I attempted to come up with a “calculator” to measure the impact of customer feedback on product profitability; it is just a rudimentary attempt for discussion, and anybody who wants a copy can find it here.

Recent Site Problems…

Yesterday it was brought to our attention that our Review Display Module was acting rather strangely. Users were experiencing issues with the wrong product’s reviews being displayed while using the V2P Accelerator; some complained that no reviews were showing up at all! After some investigation on our end, we discovered that a recent update we had made to our website’s analytics was the source of these problems.

There are two convenient ways to reach us when you experience issues or problems: our Contact Us page, or the Feedback Tab located on the left side of every page of our website.

Feedback Mechanism

We apologize for the inconvenience and frustration this problem may have caused. Thank you to all of you who reported these issues, allowing us to resolve them as quickly as possible.

Great conversation about “Social CRM – Lipstick on CRM or Transformational Business Model?”

This great question was posed in Jason Breed’s #socialmedia discussion, moderated by Social Marketer Aaron Strout. The post raised some very good and specific questions, but the comments really got me thinking, particularly this one from Paul Greenberg:

The social CRM strategy needs to be consistently based on the best means to engage the customer, rather than just manage them. But it doesn’t mean DON’T manage the processes and data. It means use them differently to help you deal with this newly empowered customer.

I have always had a problem with interpreting CrM as managing customers, as opposed to managing relationships (cRM), but in practice a lot of strategies were built on that unfortunate approach, and Paul is right on the money again, as he usually is. Attempts to utilize existing (legacy) business processes to manage social customer engagements will be ineffective and uneconomical.

A more fundamental change may come from re-evaluating the reasons for engaging the customer: companies have always wanted to market and sell more effectively (both push engagement models) and to service more efficiently (a push/pull engagement model). These approaches are already being explored by a number of vendors and companies, with varying degrees of success.

The evolution of the customer’s role in a Social Media context also offers an opportunity to engage customers as  co-creators of value in product or service development and management processes on a larger and more meaningful scale than traditional Market Research and Competitive Intelligence practices.  There are a number of vendors offering tools for “listening” to Social Media, but I would like to learn more about methodologies, best practices and business processes, that show real returns on “hearing” what is said.

Commentary on “A future vision of CRM”

I read a very interesting post on the Wikinomics blog today called “A future vision of CRM”

I’ve heard the argument that traditional CRM “is dead,” but this is far from the truth. In fact, as Brian notes, Social CRM does not replace transactional CRM systems, rather it augments them. What CRM is in desperate need of is new data sources and tools that help integrate and analyze this data. The future vision of CRM also requires that companies get involved in new channels and cede a certain amount of control to the customer – it’s less about management and more about engagement.

and left a comment I hope you find interesting:

One of the challenges of Social Media and CRM integration is the fact that they “speak” different languages: SM mostly communicates in unstructured text, while CRM uses formalized data structures.

There is potential for tremendous benefits and cost savings in Marketing, but scalability, the transformation of data into knowledge, and new processes for translating this knowledge into measurable actions still need to be developed.

Your examples of corporations adopting SM channels, while sexy and newsworthy, may prove uneconomical in the long run as a Customer Service mechanism unless these processes and workflows can be automated.

Let me know if you agree.

Financial Impact of Product Reputation on the bottom line

I’ve been following a good number of discussions, on blogs and Twitter, about ROI in Social Media. While many of them debate issues of advertising, public relations and marketing, the most interesting to me are those about Social CRM, the extension of CRM functionality into Social Media.

Within that area, I find the most exciting discussions to be those surrounding Customer Loyalty value, because it is so hard to define and to measure. While some CRM thought leaders, like Esteban Kolsky (@ekolsky), have flat-out declared that Customer Loyalty does not exist, others, like Kevin Stirtz of AmazingServiceGuy.com, attempt to come up with a methodology to estimate it.

The Customer Loyalty Value Calculator does not provide the ultimate answer for every business, but it does identify the factors and logic that illustrate the impact customer loyalty makes on the bottom line.
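
For readers who want the flavor of such a calculator without the spreadsheet, here is a minimal sketch; the formula and every input are my own simplifying assumptions, not Kevin’s actual model:

```python
# Minimal customer-loyalty-value sketch (assumed formula, not Kevin
# Stirtz's actual calculator): average customer lifetime is estimated
# as 1 / (1 - retention_rate) years.
def customer_lifetime_value(avg_purchase, purchases_per_year,
                            retention_rate, gross_margin):
    expected_years = 1 / (1 - retention_rate)
    return avg_purchase * purchases_per_year * gross_margin * expected_years

loyal  = customer_lifetime_value(100, 4, retention_rate=0.75, gross_margin=0.25)
casual = customer_lifetime_value(100, 4, retention_rate=0.50, gross_margin=0.25)
print(f"loyal: ${loyal:,.0f}, casual: ${casual:,.0f}")  # loyal: $400, casual: $200
```

Even this toy version makes the point: raising retention from 50% to 75% doubles the expected lifetime value, which is the kind of leverage that makes loyalty worth measuring at all.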


“For any business, Top line is vanity, Bottom line is sanity, Cash flow is reality”

Product Reputation is another term that is difficult to define, measure and manage. It is often misunderstood as a measure of Customer Satisfaction with a product; the two are certainly related, but the methodologies for measuring them are quite different, and that makes measures such as CSI (Customer Satisfaction Index) or NPS (Net Promoter Score) not as actionable, in my opinion, as Product Reputation scores. All of these do, arguably, influence financial results and the overall brand value of the associated products.

I define Brand Reputation as an aggregation of the Reputations that the Products associated with the Brand enjoy with their Customers. Deterioration of Product Reputation can eventually erode the value of the Brand. We see Product Reputation as the delta between customer expectations and actual experiences, and this delta can be measured using semantic analysis and opinion mining tools.

Inspired by Kevin, I decided to build a similar simple calculator, which I called Product Reputation Impact calculator.


While it is a simplistic and rudimentary model, it is useful for understanding how even small declines in Product Reputation can result in a sizable financial shortfall. The good news is that it also shows there are opportunities to defend your Product Reputation, if you know the cause(s) of the problem. In the above example the Product Functionality Reputation is under pressure, and Customer Feedback verbatim analysis may indicate that the marketing messages creating these inflated expectations can easily be adjusted to bring the Product Reputation into balance.
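
To make the mechanics concrete, here is a stripped-down sketch of the idea; the elasticity linking a reputation decline to lost sales is a made-up assumption, not the model inside the actual Calculator:

```python
# Toy Product Reputation Impact estimate (assumed linear elasticity;
# not the actual Calculator's model).
def revenue_shortfall(baseline_revenue, reputation_drop_pct,
                      sales_sensitivity=2.0):
    """Lost revenue when reputation declines by reputation_drop_pct,
    assuming each 1% drop costs sales_sensitivity% of sales."""
    return baseline_revenue * reputation_drop_pct * sales_sensitivity / 100

loss = revenue_shortfall(baseline_revenue=10_000_000, reputation_drop_pct=3)
print(f"estimated shortfall: ${loss:,.0f}")  # estimated shortfall: $600,000
```

Under these invented numbers, a 3% slide in reputation costs $600,000 on a $10M line, which is why even small declines deserve a defensive response.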

I would love to receive your feedback about this approach and if you would like to take a closer look at the Calculator, I will happily send you this spreadsheet.

Authenticity of Consumer Reviews

I started to use and contribute to customer reviews almost 10 years ago, when Amazon introduced them in their book store. Even before that, I participated in user discussions about this-or-that product in various user groups, but the lack of structure in those venues severely limited their value.

The October 5, 2009 article in the Wall Street Journal, “On the Internet, Everyone’s a Critic But They’re Not Very Critical” by Geoffrey A. Fowler and Joseph De Avila (sorry, the WSJ will ask you to subscribe), reopens an old but never-ending discussion on the authenticity of consumer reviews. There are actually two claims that need to be addressed in this framework:

1. Retail sites that sport customer reviews allegedly manipulate/mitigate/obstruct the visibility of the true reputations of the products they sell.

2. The overall value of the reviews is questionable for a consumer who looks to reduce the uncertainty of their purchase.

I have very little personal experience with breaches in the integrity of the reviews management process over the years. However, I am well aware of private and public accounts of such practices, and I did read Bazaarvoice’s claims that they employ “mitigators” for the reviews they manage. It is not clear from the published writings what exactly their role and responsibility in the matter is, but here are some examples of such claims:

Mechanist.tm writes “I recently purchased a NAS from a well-known online computer component shop. I have purchased several items from the website and have never had much trouble before. That was until I realized what I had bought was a terrible NAS. All the reviews on the site from users seemed very good. After a little research, it became clear that the product in question was indeed terrible. After finding the product pretty much useless for its intended purpose, I proceeded to write a review for it on the website to inform other would-be buyers. After about a week, I noticed that the review never made it up there, so I wrote another one just in case. After several attempts to leave a negative review for the product, I realized that the website was screening reviews and only posting the ones that made the products look good. All the reviews on the website are positive; I’ve only found one at less than 3 out of 5 stars. Is this legal? Ethically speaking, it’s wrong, and it’s intentionally misleading to the customer. Is there a good place to report behavior like this? How common is this among online retailers who provide user reviews?”

A lively and mostly informative discussion of the legality and ethics of review manipulation follows ad nauseam.

Various claims about Yelp’s handling of restaurant reviews were widely publicized in the press and the blogosphere, and the trustworthiness of mixing reviews with an advertising sponsorship business model was questioned.

I am aware of quite a few well-documented instances of attempts to manipulate public trust by overzealous, not-too-smart and surely unethical marketers. You can find references to some of them in my Evolution of BPR blog and on the WSJ Blog, as well as in many other places. This practice is illegal, and the article in the New York Times describes a precedent-setting case. The remarkable transparency afforded by the Internet has also exposed less egregious attempts by others to compromise public trust using the most popular customer review sites, attempts that succeeded in eroding the reputations of the companies involved. However, we are citing a dozen-or-so known examples involving hundreds of actual reviews, while tens of millions of customer-generated reviews have been published over the years. Is it reasonable to dismiss the public service of a multitude of socially-minded individuals for the sins of a few corrupt or misguided ones?

Lastly, I would like to address the argument about bias that opens the WSJ article that inspired this writing:

“The Web can be a mean-spirited place. But when it comes to online reviews, the Internet is a village where the books are strong, YouTube clips are good-looking and the dog food is above average.

One of the Web’s little secrets is that when consumers write online reviews, they tend to leave positive ratings: The average grade for things online is about 4.3 stars out of five.”

I assert that this average grade is meaningless and indicates nothing but poor adoption of a scoring methodology. As a co-founder of a start-up focused on the extraction of Product Reputation analytics from Customer Reviews, I often hear diametrically opposed and emotionally charged opinions that reviews are too negative or too positive. These opinions are based on their owners’ beliefs and experiences, which are anecdotal, and on the excessively ambiguous use of 5 stars by the social media sites mentioned in the article. The Likert scale (5 stars) was invented at the beginning of the last century for market research. Its use and interpretation are bound by a rigorous methodology that is completely ignored in customer review entry processes.

When you review a product and give it 5 stars, does it mean you are satisfied or delighted? Did your experience match your expectations or exceed them? When you find the experience with a product unacceptable and end up returning it, why are you “forced” to rate it with at least 1 star (a rating of 0 stars is usually not available)? Does it mean that you are 20% satisfied with the product?

Since there is no rhyme or consistent reason in the collection of these reviews, the only way to put this data to practical use is to read and analyze each and every review. That would be very time consuming, which is why we process this data with opinion mining/sentiment analysis software to produce accurate and consistent scores. As a result, I can assure you that the averages come out very different and a lot more useful.
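
As a toy illustration of the opinion mining idea (the lexicon and reviews below are tiny made-up examples, nothing like our production pipeline), a score can be derived from the review text itself rather than from the star rating:

```python
# Naive lexicon-based sentiment scoring: score the review text instead
# of trusting an ambiguous star rating (toy example, not our software).
POSITIVE = {"great", "reliable", "fast", "love"}
NEGATIVE = {"overheats", "slow", "broken", "return"}

def sentiment_score(review: str) -> float:
    """Return a score in [-1, +1] from sentiment-bearing words."""
    words = review.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("fast and reliable i love it"))  # 1.0
print(sentiment_score("great screen but it overheats and i want to return it"))
```

Unlike the raw stars, scores produced this way are computed by one consistent rule across every review, which is what makes their averages comparable in the first place.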

Social Media and ROI

Humans are social creatures which means that socializing is a natural, integral part of maintaining our emotional well-being. We do have hierarchies of intensity in our social connections that are based on our personal evaluation of dependencies and reciprocity experiences.

Joel Rubinson, Chief Research Officer of the ARF, in his post “To understand social media, marketers must drop the C word”, quotes

“Lynne d Johnson, the ARF’s head of social media (formerly head of communities at Fast Company mag.) asked our expert panel about ROI, it was clear that we don’t have all the answers yet.  However, the answer I liked best was Heather’s; “What is the ROI for you to send your mother a mother’s day card?”  So I might add, what is the ROI for authenticity?  Who cares?  What kind of company do you want to be? “

I have an answer to Ms. Heather Maxwell of General Mills’s rhetorical question: I am sure that, among other things, General Mills wants to be a profitable company. Perennially unprofitable companies have a nasty habit of disappearing from the business landscape, along with the employment they provide to social media visionaries.

I am not arguing against authenticity, transparency and the other virtues mentioned in Joel’s post; in fact I strongly support them. However, dismissing core business value just because we cannot figure out a sound methodology to formalize it is disingenuous and bad advice.

ROI has nothing to do with authenticity – it is a tool to help us make choices. The choices we make can be authentic or not.

Next time you have to choose between an invitation to a friend’s pool party and helping your uncle with a move, think of ROI and be authentic.