
Posts tagged with ‘Metrics’

The Contextual Side of Customer Experience Analytics

Statistical vs Insight

Can you tell by the menu prices that a restaurant will provide you with a great experience? Will reading the final score substitute for the experience of a game you could not attend? Some analytics practitioners would answer “yes” to these questions, especially practitioners who specialize in the field of Social Media Research. They say that transactional data, such as the number of re-tweets, Facebook “likes” or link clicks, provide meaningful insights into human behavior or intentions.

Wikipedia defines analytics as

“… the discovery and communication of meaningful patterns in data. Especially valuable in areas rich with recorded information, analytics relies on the simultaneous application of statistics, computer programming and operations research to quantify performance.”

Which data patterns should be considered “meaningful” depends entirely on the definition of “performance.” Complaints about the practical usefulness of research arise when business process owners shy away from or fail to define the performance they need to quantify. In other words, it is best to first figure out the actions you could take if you had the kind of information Social Media research can deliver. A successful example of this approach is extending Customer Service processes by using Social Media listening and monitoring to combine transactional data with rudimentary sentiment analysis. This approach quickly gained a lot of traction with many companies.

Yet truly actionable research applications are harder to find in marketing business processes, which rely more on contextual than transactional data. Such contextual information is usually provided by traditional market research methodologies that solicit feedback from consumers and customers. However, these techniques are much better suited to confirming existing hypotheses than to discovering meaningful new patterns, so they are not the same as contextual analytics. Since meaningful contextual analysis of Social Media is relatively new, very few marketing organizations have existing business processes that are immediately extendable.

Below is just one example of the distinction between transactional analytics and contextual analytics (borrowed from a previously published article):

Transactional vs Contextual

New processes are evolving that use contextual analytics of Social Media to focus on improving market segmentation, marketing communications effectiveness and customer-driven new product development. These processes depend on the aggregation of transactional, contextual and operational analytics to produce substantially higher revenue growth.


Customer Experience is Everybody’s Business – Connecting the Dots

CX is everybody’s business

Most company executives don’t think that their accounting department is in the Customer Experience business. True, very few members of financial management teams normally have a reason or opportunity to communicate directly with their company’s customers unless they have to chase accounts receivable problems.

The new CFO of a start-up I was working for took pride in maximizing operational cash flow velocity. One of the minor tactical tools used was a few days’ increase in the window for reimbursing employees’ expense reports.  This change handsomely improved the short term cash flow statement. However, over the next few quarters, a noticeable trend in the growth of outstanding accounts receivable started to raise red flags and call for analysis.

You may ask what this has to do with Customer Experience Management. Interestingly enough, all measurements of customer satisfaction and loyalty, both objective and subjective, started to move in the opposite direction from the operational cash flow velocity metric within the first two months of the change in reimbursement policy. The Customer Experience Manager had been reporting this troubling trend for months, but nobody thought of a connection. In fact, nobody ever looked at financial and loyalty metrics together at all, and that is why it took so long to link the cause and effect.

Cash flow velocity vs. NPS and CEM metrics

It turns out that the technically complex product the company sells routinely requires professional services personnel to visit customers’ premises to help ensure successful implementation and operation. The company’s engineers spent a lot of time making customers happy, and the company was paid well and on time for their efforts. However, improving the velocity of the company’s cash flow negatively impacted the personal cash flow of the front-line employees, as they had to wait for sizable expenses to be reimbursed and had less cash for their personal expenses. They started to avoid and delay the projects that required travel, and customers fell victim to the financial efficiency effort.

Lessons Learned:

  • Customer Experience is a holistic matter – every single function of the company affects how customers perceive the entire enterprise. Of course, some functions affect it more than others, but they all do.
  • Customer Experience measurements are predictive of the growth or demise of a company (product or brand). The trends are critical indicators of trouble, particularly if they are gauged against market averages.
  • Monitoring the correlations of trends between Customer Experience, Operational and Financial metrics allows for the fast diagnosis of potential threats to the health of your business.


Unlocking the Value of CRM, Part 1

CRM initiatives used to be rightly considered the most challenging Enterprise Software undertaking, yet many companies have dipped their proverbial “toes” into this dangerous “water.” The reason is that when these initiatives do succeed, the return on investment is fast and spectacular. During the last few years, rates of CRM failure have declined substantially, but only because expectations of returns declined even faster. So if your idea of a CRM implementation is glorified Contact Management with rudimentary Pipeline, Help Desk and Marketing Automation, this read is not for you. However, if you are on the hunt for a sound return on CRM investment, you may find this writing of interest.

It is important to understand why so many enterprises continue to sink money into implementing one CRM system after another in search of adoption and ROI, without experiencing either. I will explore the reasons in this and subsequent posts, based on my own experience as well as what I have learned from discussions with other practitioners.

1. Clear definition of objectives.


Objectives are often articulated in terms that are not specific or measurable – “Improve Customer satisfaction”, “Obtain 360-degree Customer visibility”, “Increase time available for Sales calls”, “Improve Sales efficiency by 13%” – all quite common and absolutely useless, if not harmful.

First and foremost, we have to manage better communication….Project failure and success seem to depend on saying, “Are you able to accurately articulate and collect what the requirements are?” and “Are you able to express the right estimates?”…. Too many times, the collection process is weak, because the customer is not easily able to articulate [his needs] in language the people [on the project] understand. [S]oftware estimation is not a trivial exercise; it is still an art rather than a science.

JP Rangaswami is Managing Director of Service Design at BT Design

There is either no measurable target or no agreement on the baseline and measuring methodology. Agreement is the crucial word in this sentence, because we deal with an open economic system that can be influenced by multiple market factors outside the sphere and control of the initiative. It is therefore difficult to come up with clear metrics that allow for measurement, i.e. accountability, and people are not often motivated to do difficult things without effective leadership. The result is a list of features and functions collected without ever asking: how would this feature, function or process affect the GOAL?

2. Effective leadership.

 

After the initial, very loud announcements, leadership is very often outsourced or delegated to IT:

In American culture, we tend to equate leadership with yapping. There is no correlation.

Lead Well and Prosper

Nick McCormick.

I would like to thank Michael Krigsman for posting these and some other excellent references and analysis in his ZDNet blog.

That is why effective leadership is critical:

Management is doing things right; leadership is doing the right things.

Peter Drucker

It is much easier to be a critic, so I will try to be constructive for a change; ranting can get very tiresome. From my Best Practices notebook, I can suggest the following targets as examples:

“reduce selling cycle by 5% first year, and 7.5% during subsequent 2 years without decrease of average deal size, normalized to our industry market condition”;

“decrease deal discount rates by 10% from the current (pre-initiative go live) levels, normalized to industry market condition”.

These examples do not cover the whole CRM footprint; I find the SFA part the most challenging, which is why I used it here. I hope they are illustrative enough to extrapolate to Marketing, Support, etc.
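To make the “normalized to our industry market condition” clause in the targets above concrete, here is a minimal sketch. The numbers and the normalization rule (subtracting the market’s own percent change over the same period) are illustrative assumptions, not part of any established methodology.

```python
def normalized_change(before, after, market_before, market_after):
    """Percent change in our metric, net of the industry-wide change.

    Assumes 'normalized to industry market condition' means subtracting
    the market's own percent change over the same period."""
    ours = (after - before) / before
    market = (market_after - market_before) / market_before
    return ours - market

# Hypothetical: our selling cycle went from 90 to 84 days (-6.7%),
# while the industry average only improved from 90 to 88 days (-2.2%).
delta = normalized_change(90, 84, 90, 88)
print(f"Normalized selling-cycle change: {delta:.1%}")
```

With an agreed rule like this, the target becomes measurable even when market conditions shift for everyone at once.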

Oh, I would love to use this new SFA system!

It is business leadership that needs to step up to the plate and articulate WHAT they want to achieve and WHY these are the most critical targets to aim for. It is also very important that achieving the agreed targets does not become an IT challenge, but remains a business challenge through training, adoption, and compliance management. IT is an enabler, not a deliverer of economic results.

I will continue later with the rest of the W’s.

I keep six honest serving-men
(They taught me all I knew);
Their names are What and Why and When
And How and Where and Who.
- Rudyard Kipling

Musing on Metrics, Marketing and Innovation

How come there often seems to be no direct connection between the things we choose to measure and the goals we are hoping to achieve? Here are a few examples:

  • If company management’s goal is sustainable long-term growth, why do they measure their decisions based on IRR (Internal Rate of Return)? The metric is useful for measuring a transaction, but it can ultimately destroy an enterprise’s vitality if applied to strategic decision making.
  • If a Customer Service organization’s goal is Customer Satisfaction, why do we measure performance of the employees based on how quickly they complete a call with a customer? Driving down the cost of customer interaction is a meaningful operational metric, but there is no profitability if customers abandon your operation.
  • If an ultimate goal for Product Marketing is demand generation, wouldn’t it be critical to measure why customers buy your product? “The customer rarely buys what the company thinks it is selling him,” as Peter Drucker said.
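The IRR point in the first bullet can be illustrated with a small, self-contained sketch. Both projects and the 10% hurdle rate are hypothetical, and the bisection solver assumes a single sign change in the cash flows.

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] occurs at t=0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0):
    """Internal rate of return via bisection (assumes one sign change)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Hypothetical projects: a quick flip vs. a long-term platform investment.
quick_flip = [-100, 150]                  # one-year payback
platform   = [-100, 20, 40, 60, 80, 100]  # five years of growing returns

print(irr(quick_flip))  # 0.50 exactly: wins on IRR
print(irr(platform))    # lower IRR...
print(npv(0.10, quick_flip), npv(0.10, platform))  # ...but far more value at a 10% hurdle
```

Ranking by IRR picks the quick flip; ranking by value created at a realistic hurdle rate picks the platform. A management team steered purely by IRR will systematically starve its long-term growth options.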

According to Clayton Christensen, a professor at Harvard Business School and brilliant scholar of Innovation, the root of this problem is the quality of education offered in our business schools. He makes a great point illustrating how the wrong choice of key metrics leads to the destruction of enterprises and entire industries. Christensen is famous for his efforts to re-focus marketing on “the job customers hire products to do,” as opposed to a product’s specs.

As consumers, we all know that our experience with “products” depends on many factors that are not connected to or even correlated with its specifications, functions and features. Quite often customers are more influenced by how easy it is to deal with the supplier or how reliably a product performs, or how simply and consistently it delivers the outcome we require. Yet when we try to measure customer satisfaction, we ask them to score their opinions about characteristics of the product itself. I do appreciate the elegant simplicity of NPS (Net Promoter Score) methodology and its well-documented correlation with profitability, but what specific action can it suggest to a product manager whose product earns a low score?

Steve Blank, Silicon Valley entrepreneurial marketing genius and author of The Four Steps to the Epiphany, seconds Christensen’s opinion about the quality of our business schools and is working on an alternative curriculum focused on customer development as opposed to financial engineering. Blank preaches the importance of customer involvement in product development, which appears to be a no-brainer to me but, according to Kristin Zhivago, is a relatively challenging concept for most marketing professionals.

The choice of measurements we make has a dramatic influence on the probability of a startup’s success, according to Eric Ries, creator of the Lean Startup movement, who has very interesting thoughts on creativity and innovation. Eric thinks that we prefer “vanity” metrics that make us feel good over metrics that help us make quality decisions.

So it appears that, according to the experts, institutional indoctrination and lack of intellectual honesty are two major reasons for the gap between organizational goals and performance measurements that negatively affects our probability of succeeding in business.

I would like to suggest that our compensation system methodology is the third leg of this proverbial stool. Since the majority of the workforce is not compensated for producing results aligned with the long-term goals of the organizations they work for, we instead end up measuring what is easy to measure and what makes us look good.

Why People Dislike Metrics

I was talking to one of my customers about her experience trying to introduce metrics into the business processes she manages. Janet is in the gourmet food marketing business and was hoping to use analytics to discover patterns in shoppers’ consumption of her products by time of day, as well as the impact of promotional events on sales results. The food business, in her words, is a very fragmented environment, and even the simplest business process tends to involve a number of companies.

A clear understanding and measurement of the metrics Janet is interested in would bring substantial financial benefits to all of the participants in this process, and yet they passively resist any attempt at implementation. Her frustration level was rising as Janet described the excuses she was getting from her customers and partners. They were not saying no to her proposal and even promised to make some information available, but ultimately no progress was ever made. To be clear, cost is not a factor, as Janet’s company offered to underwrite the implementation.

“So why do ‘go-get-them’ people usually become so passive-aggressive when analytics are involved?” Janet asked me. This question made me look back on my own experiences, and it occurred to me that they are invariably similar to Janet’s. Over the decades of my business career, I was charged with the development and implementation of KPIs many times, in large and small companies across different industries, but the outcome was always the same – passive resistance.

There are a few business processes that universally accept and practice metrics. The most common examples are Sales and Call Center processes, but anyone who has managed sales forecasting will tell you that the effort required to drive it is very substantial.

The recent explosion in web analytics technology has brought us a myriad of products that capture, measure and present dashboards of transactional data that may correlate to specific business process performance, but are very far removed from the actionable KPI metrics most of us need to manage a business. Even a marginal improvement in measuring the performance of advertising investment disrupted an entire industry and created new multi-billion dollar players like Google. Imagine what could be done if we could measure the actual impact of a given decision on bottom-line results. However, that is not likely to happen anytime soon because of a fundamental characteristic of human behavior – we will go to extraordinary lengths to avoid personal accountability.

The numbers can shine a light on our performance and the quality of our decisions that is too bright and harsh. Our organizational structures and compensation systems are, with few exceptions, too binary to compensate for actual performance. Too commonly, we get and keep our jobs not for delivering exceptional results, but for “fitting in” and showing up on time; for being efficient and working long hours, but not necessarily being effective in producing the “right” results.

The key to the successful, productive adoption of analytics into an organization’s fabric is the careful selection of only those metrics that measure elements of a process the parties involved can proactively manage to their performance benefit. A few relevant, actionable KPIs that help you take meaningful action are much better than dashboards full of charts and numbers you have no control over. Relevancy beats ease of generation and drives user adoption.

Apple iPad 2 camera is eroding its reputation

The tablet market segment is fun to watch. While there is no doubt that Apple “owns” the segment, it is interesting to note that the iPad 2 has the lowest satisfaction score compared to the competition.

Given the enthusiasm of Apple fans and the popularity of the original iPad, I wonder if the bar was set too high for many iPad 2 purchasers. Digging deeper into the details of Customer Experiences, we can see that a lot of negative comments focus on the quality of the camera embedded in the tablet.

Indeed, “focused” listening provides specific metrics that show the difference between customer expectations and actual experience with this attribute of Customer Experience.

The analysis shows that the Blackberry Playbook is a clear leader when it comes to Customer Experience with picture quality. Considering the widely held belief (which I do not share) that Apple does not do market research, it will be interesting to see if they address the camera/picture quality issue in the next edition of this popular product.

We used Opinion Miner® software to analyze 1,103 customer-generated reviews of the tablets listed above, published online before May 29, 2011, to generate these findings. The scores are calculated on a two-point scale from 0 (unacceptable) to 2 (delighted), with 1 = 100% satisfied (i.e. experience matches expectations).
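Opinion Miner’s internals are proprietary, so the sketch below only illustrates the published scale: hypothetical per-review scores on the 0–2 range, averaged into an overall experience score for one attribute.

```python
def experience_score(review_scores):
    """Mean of per-review scores on the 0-2 scale described above;
    1.0 means experience matched expectations, >1.0 means delight."""
    return sum(review_scores) / len(review_scores)

# Made-up per-review scores for one attribute (e.g. camera quality);
# these are NOT the article's actual data.
camera_reviews = [0.4, 0.7, 1.1, 0.5, 0.9, 0.6]
score = experience_score(camera_reviews)
print(f"Camera experience score: {score:.2f} (1.00 = matched expectations)")
```

A score well below 1.0 for a single attribute, as in this made-up sample, is exactly the kind of gap between expectation and experience the post describes.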


Musing on challenges of measuring

My hero, Peter Drucker, is often quoted as saying (I paraphrase here) “What you cannot measure, you cannot manage,” and this idea has inspired many analytic initiatives by large companies as well as budding startups, like this one. There are hundreds of companies that monitor, listen and analyze every aspect of web traffic, the impact of media messages, both digital and analog, and just about anything else under the sun. There is surely no shortage of technology and tools, and current interest from businesses and consumers is quite high, but there is still not enough conclusive evidence that measuring and managing to a specific parameter can produce a measurable result. It is often still a challenge to translate measurements into predictive models that produce or support specific actions or decisions. Perhaps it is just my personal, limited experience, and I look forward to being proven wrong in your comments, but for now I would like to propose a few potential reasons for these disappointing experiences.

Is it possible that we often measure the wrong things? Many people would argue that NPS (Net Promoter Score) is a meaningless thing to measure, and that Social Media influence, as measured by Klout and others, does not translate into any specific action. We often measure what is easy to measure and listen to what is easy to hear, without the difficult effort of understanding and interpreting it into an action that can produce measurable improvement. Some people find it easy to identify metrics that measure the worth of their work:

salespeople have sales targets, and production managers track whether inventory is delivered on time and under budget. But for most of us, it is very difficult to associate and measure our direct contribution to the desired outcome.

Perhaps the most actionable metrics are derivative – a combination of a signal, statistics, interpretation and analysis. Measurements of atmospheric temperature and pressure, compared with historic observations and combined with predictive algorithms, do produce relatively reliable weather forecasts. Perhaps measuring multiple aspects of customer experience, comparing them with competitive alternatives and combining them with predictive algorithms, can produce more accurate sales forecasts.
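A “derivative” metric of this kind can be sketched as a simple regression: fit historical sales against a customer-experience index, then forecast from the current index reading. The data, the index itself, and the linear model are all illustrative assumptions.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (one predictor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

# Hypothetical history: customer-experience index vs. next-quarter sales ($M).
cx_index = [0.8, 0.9, 1.0, 1.1, 1.2]
sales    = [4.1, 4.4, 5.0, 5.3, 5.8]

a, b = fit_line(cx_index, sales)
forecast = a + b * 1.3  # forecast from the current (hypothetical) index reading
print(f"Forecast next-quarter sales: ${forecast:.1f}M")
```

The raw index is just a signal; combined with history and a model, it becomes something a sales leader could actually act on, which is the whole argument of this paragraph.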

Is it possible that we have unreasonable expectations? We often expect direct causation while operating in an open-system environment. A business environment is not a scientific experiment, and the unpredictability of market conditions cannot be isolated to prove the validity of specific measurement methodologies. We can only improve the odds, but we often expect certainty. Uncertainty is the reason for any important measurement effort. Measurement improves confidence in the quality of the decision it supports, but it cannot guarantee an outcome. After all, according to Warren Buffett, “It is better to be approximately right than precisely wrong.”