A Methodology for Crafting Awesome Experiences – Part 7

Part 1 – Introduction
Part 2 – Strategic Measurement Framework
Part 3 – Design
Part 4 – Validation
Part 5 – Implement
Part 6 – Measure

If you have not been following the series – or for a refresher – please read the previous entries.

I was always bothered by the shallow measurements in two critical areas of experience management: at the moment-of-truth (when customer and organization interact) and for the overall experience. Using this methodology we measure each moment-of-truth by integrating feedback events into it as part of the experience design. That feedback is collected when the interaction happens, is focused on effectiveness, and uses escalation if necessary.
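As an illustration only, here is a minimal sketch of such a moment-of-truth feedback event in Python; the class, threshold, and scores are hypothetical, not part of the methodology.

```python
from dataclasses import dataclass

ESCALATION_THRESHOLD = 4.0  # hypothetical: effectiveness scores below this trigger escalation

@dataclass
class FeedbackEvent:
    moment_of_truth: str   # the interaction being measured
    effectiveness: float   # numeric effectiveness value supplied by the customer

def record_feedback(event: FeedbackEvent, log: list) -> None:
    """Capture feedback at the moment it happens and escalate if needed."""
    log.append(event)                               # collected when it happens
    if event.effectiveness < ESCALATION_THRESHOLD:  # focused on effectiveness
        print(f"Escalating: '{event.moment_of_truth}' scored {event.effectiveness}")

log: list[FeedbackEvent] = []
record_feedback(FeedbackEvent("confirm old address", 3.0), log)  # escalates
record_feedback(FeedbackEvent("enter new address", 8.0), log)    # recorded, no escalation
```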

The measurement of the overall experience is usually addressed poorly. Most organizations simply deliver a satisfaction survey and ask a question about overall satisfaction with the entire experience. However, it is very hard for a customer to place a value on overall satisfaction for an experience that may have had both good and bad elements (and we won't get into the discussion of using customer satisfaction as a metric). That question also does not measure the effectiveness of the experience.

I want to introduce you to the end-to-end efficiency and effectiveness index (EEX), which accomplishes two things:

  1. It provides a method to measure and monitor the overall experience.
  2. It consolidates all the moment-of-truth feedback measurements into an index that can be adapted to specific goals of measurement.

There are two stages to composing this index. First, each moment-of-truth is measured:

Feedback Event

The three vertical ovals in the chart above represent the three elements present in each interaction between customers and the organization. Each moment-of-truth must have a feedback event, and the smaller bubbles in the chart represent potential feedback events (which one is chosen depends greatly on the process, what we are trying to measure, and the purpose of the moment-of-truth), each represented with a numeric value (or a qualitative value transformed to a numeric one). If there is more than one feedback event in a moment-of-truth, those values must be aggregated into a common index (the big bubble at the bottom of the chart), or you have to choose – and document the choice – which one you will incorporate into the EEX.
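As an illustration only, here is a minimal sketch of those two options for a single moment-of-truth; the event names and values are hypothetical.

```python
# Hypothetical: two feedback events captured within one moment-of-truth,
# both already expressed as (or transformed to) numeric values on the same scale
feedback_events = {"ease_rating": 7.0, "agent_helpfulness": 8.5}

# Option 1: aggregate them into a common index for the moment-of-truth
# (a simple average here; whatever rule you choose should be documented)
mot_value = sum(feedback_events.values()) / len(feedback_events)   # 7.75

# Option 2: choose one event to feed into the EEX and document that choice
mot_value = feedback_events["ease_rating"]   # documented choice: use the ease rating
```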

Second, we measure all of these moments-of-truth across the experience:

Experience Measurement

Of course, the individual events will still be reported and followed separately, and will likely have workflows associated with individual values and thresholds.  The index does not override the business’ need to focus on each element of the process to understand weak spots and potential improvement.

Finally, we insert all the values we collected from all those feedback events into a formula to calculate the EEX:

EEX Formula

There are two variables in this formula. First, the feedback (F) refers to the numeric value of each feedback event created and deployed in the experience we designed. Each will be assigned a weight (W), which specifies how important that specific part of the experience is to the overall value of it. For example, if your strategy right now is to focus on the value of online interactions (web, email, chat, SMS, social media), you can assign a higher weight to functions that happen online. The score will then be higher (if you are doing a good job) or lower (if you are doing a poor job).
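The original formula chart is not reproduced here, but based on the description above a plausible reading is a weighted sum of the feedback values, with the weights summing to one:

```latex
\mathrm{EEX} = \sum_{i=1}^{n} W_i F_i , \qquad \sum_{i=1}^{n} W_i = 1
```

With equal weights (each W equal to 1/n) this reduces to the simple average of the feedback values, which is what the benchmark in the example below uses.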

By using weights you can quickly compare any specific portion of the experience against the overall benchmark (the benchmark is created by assigning equal weights to each measured interaction). Let's use an example.

  • You are measuring the experience for a change of address.
  • There are six steps in your web site to change an address, and four of them are moments-of-truth with a feedback event associated with them.
  • If you insert those four values (F1, F2, F3, F4) into the above formula and assign an equal weight (W1, W2, W3, W4) of .25 to each, you get the benchmark value for the experience (the average effectiveness according to customers' feedback).
  • If you want to highlight one of the steps over the others (let's say you have been having problems with the confirmation of the old address before allowing the customer to enter the new one), you can increase the weight for that feedback event to .40 and decrease the weight of each of the other three events to .20.
  • The resulting value can then be compared to the benchmark to determine that step's effectiveness relative to the overall experience (see the sketch after this list).
    • If the reweighted value is higher than the benchmark, that portion is performing above the overall experience.
    • If it is lower, it is performing below it.
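As an illustration only, here is a minimal sketch in Python of the benchmark and reweighted calculations just described; the feedback values are made up, and mapping the confirmation step to F2 is my assumption.

```python
def eex(feedback, weights):
    """End-to-end efficiency and effectiveness index as a weighted sum."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(f * w for f, w in zip(feedback, weights))

# Hypothetical feedback values for the four moments-of-truth (F1..F4);
# F2 stands in for the old-address confirmation step
feedback = [8.0, 6.0, 7.0, 9.0]

# Benchmark: equal weights of .25 each, i.e. the simple average
benchmark = eex(feedback, [0.25, 0.25, 0.25, 0.25])    # 7.50

# Highlight the confirmation step: weight .40 for F2, .20 for the rest
reweighted = eex(feedback, [0.20, 0.40, 0.20, 0.20])   # 7.20

print(f"benchmark:  {benchmark:.2f}")
print(f"reweighted: {reweighted:.2f}")  # lower than the benchmark: that step lags the overall experience
```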

Finally, a word on how to use this index. I know you will be tempted to use it as a single metric to represent the effectiveness or efficiency of the experience.

Don’t do it.

I wrote before about how using a single metric for measurement is wrong: it does not represent the true value of what it intends to measure. Instead, use this index as an internal metric to let you know when an experience is performing below par, or when a specific channel or portion of it is lagging.

You can use it in dashboards, scorecards, or reports as you wish – but make sure it is part of a larger reporting structure. At the very least, when you use the index (as you are supposed to do with any index), make sure you include all the values that led to it – in this case, the different feedback values you gathered at the different moments-of-truth.

We have reached the end of this methodology, and I am anxious to hear your comments. What do you think? Is this something you could use? Can you see the value of using it to craft your own awesome experiences? Am I missing something? Is there something I should change, add, or delete? Please leave me your comments here or send me an email with your questions, comments, or concerns.

Thanks for reading.

How Do You Achieve Success?

Failure is far easier to achieve than success.

Gartner concluded in a recent study that ninety percent (90%) of all new enterprise initiatives fail to achieve measurable success. They either expect inordinate returns, aim for unreachable goals, allocate far less time than necessary, fail to calculate costs beyond technology, or improperly staff the project (wrong skills or insufficient people).

It all points to the lack of a strategy.

Failure does not happen when your strategy correctly identifies the goals and metrics to measure, the expected results, and the methods to achieve those results.  “Black Box” processing (where something magic or mysterious happens to achieve the expected results) never works.  You may get lucky once, but luck is not a measure of success.

Success is planned – same as failure is planned.

I wrote in a recent post that the secret to success is to effectively deliver a solution at 80% of perfection and work on the other 20% through iterations over time. I was derided as calling for mediocre solutions to be released. In reality, aiming for a solution that solves 80% of the problems initially and continues to improve over time makes for a far easier way to measure success.

Programs like Six Sigma, Total Quality Management, and Just-in-Time Management know that you cannot implement a 100% perfect solution on the first try – that is why they improve with time.

Could you succeed with a solution that addresses 80% of the problem at first?

The 90% failure rate should not deter you from starting your initiative. It should instead propel you to find the best practices available, create a reasonable strategy, set realistic goals (around 80% of your first-intended goals), determine the metrics to reach those goals, and plan towards them. In other words, create a strategy for your initiative before it becomes doomed to failure.

Plan for success and you will succeed, plan for failure and you will also succeed.

What are your best practices for success? Could you deliver an 80%-perfect solution? Have you? What do you think?

You, The Community Manager

It’s time to formalize your role as community manager.

You already have several communities in your life. The different aspects of your offline life have now become online communities. In my case, my professional network has moved to LinkedIn, my family and friends to Facebook, and my daily routines to Twitter. I also frequent blogs, newsfeeds, and trade journals, and I read, learn, and contribute to them. I am not only a member of those communities, I am the community manager.

I decide the information to contribute to each and the information I retrieve from each.  I decide who to bring in and who to leave out.  I am the super-user in my communities, and the driver for their growth.  I decide the level of involvement in each. I decide how to cross-pollinate these communities to keep them healthy.

If you struggle to manage and balance the different facets of your life offline, online is even harder. My favorite factoid is that throwing technology at a bad process simply makes it bad-and-fast, not better. That is what is happening to us: we are falling behind in managing our communities because we lack a strategy, which results in more time spent online than necessary.

You need a community management strategy.

You need to create a Vision, Mission, Goals, and Objectives for each of your communities. Decide the type and quality of information to contribute to each, the people you want to bring in and leave out of each, and how much time and effort you need to spend in each. If you want to bring super-users into any of those communities, decide how, and what the benefits are for them and for you. Make a cross-pollination chart so you know which information from which community should seep into which others. Make sure these policies are known to the participants in each of your communities, and that you follow them.

Although it is hard to think of your Facebook friends and family as a community, it is far easier and healthier to balance that role than to continue to complain about the time you “waste” on it. Time is only wasted if you have no purpose for the community. Make the time and effort you put into each community count.

What do you think?  What techniques do you use to manage your communities? Any recommendations?

For Questions on Twitter in Customer Service, Press 1

IVR systems get no respect.

An IVR could be considered a great addition to a call center. It handles all incoming calls, resolves simple requests for service through interactive applications, routes calls to the most appropriate agent, and captures identifying information to pass along. It is the perfect attendant and can scale to hundreds of simultaneous incoming calls.

It should be considered a success story.  Users don’t want to use it.

When IVRs were first introduced – functioning only as phone trees with routing functions – people liked the novelty. Then the novelty wore off and the system showed its true colors: an automated routing mechanism with little forethought put into it at deployment time, no integration or interactive abilities, and few tales of success.

Vendors quickly began to improve their offerings, create better programming interfaces, provide more interactive functions, and add voice recognition. Today the IVR is far better than it was initially, and is actually useful for automatically solving around 30% of calls.

Twitter is moving along the same path.

I have referenced in the past the problems with Twitter and how you should consider it just another channel for customer service. Research showed companies used it mostly as an escalation tool to the call center, or to create tickets; few cases are actually solved via Twitter. In that sense it is closely related to the IVR. Focus on what Twitter can do well and stop thinking it can do more than it can.

We will see innovations in the use of Twitter for Customer Service that will make it a better tool. For now, use it as you use your IVR: automating large volumes of interactions, routing and creating tickets, and providing a listening ear for feedback.

Anything else simply cannot be done right now. Do you agree?

A Methodology for Crafting Awesome Experiences – Part 6

Part 1 – Introduction
Part 2 – Strategic Measurement Framework
Part 3 – Design
Part 4 – Validation
Part 5 – Implement
If you have not read the previous five parts, please do so to understand the context.

And so we reach the last of the four phases of crafting awesome experiences: measurement.

Whenever I talk about this methodology, this is the part where the debate begins. One of the most cherished best practices of Customer Experience Management and Feedback Management is consistent measurement – not simply at the end of an experience or function, but throughout. Especially for feedback management (an integral part of CEM), it is crucial to collect feedback at different points in the experience to understand which parts may need improvement, versus simply measuring the entire experience when it is over. We even discussed measurement, and inserting feedback events into the middle of the experiences, when we designed them in Part 3 of this series.

So why, then, am I talking about measurement at the end?

I am totally in favor of measuring feedback and other metrics at different moments in the experience. As a matter of fact, when developing the Customer Experience Map (in the design phase), you will see that you must find and map the moments of truth – those specific instances in each experience where a customer is likely to make a determination as to their satisfaction and loyalty. Loyalty is earned or lost at those moments.

What is the difference between that constant, consistent, on-going measurement and what this post covers?

The measurement we do at the end is to make sure that the experience we created hits its goals and objectives. The way to measure once the experience is completed and deployed is by comparing metrics to older metrics – how things work now versus how they used to work (a short sketch after the list below illustrates this before-and-after comparison). There are three types of measurements you must do at the end:

  1. Effectiveness.  This is the one you cannot miss.  Did the change reach its goal?  If you were changing an experience because it took too long to complete before – what is the time to complete now?  What was your goal when you started?  Is the new process better than the old one, even if the goal was not attained?  The bottom line is that you must be able to reach a goal you set beforehand to consider the new experience better.  Still, you can improve an experience even if you don't reach the goal.  Even if your goal is not achieved at first – it may take two or three iterations to reach a specific number – improvements must be documented as successful results.
  2. Regression.  Did you break something else in the process?  Is another process taking longer because of what you did?  Did you generate a new process that is not well documented or measured?  The bottom line on regression testing is to make sure that the rest of the experiences work the same as – or better than – before.  While we usually focus on what did not work when doing regression measuring, sometimes you may find something else that – unintentionally – works better than before.  That should also be documented, as any improvement can be used to justify the existence of the experience management program.
  3. Incremental.  Is there anything new that must be changed or measured because of the results of this new experience?  This is a very interesting question to ask, and the reason why experience management is an iterative process.  As you make changes to one process, another one may now need to be evaluated.  There are some dependencies that are not easily addressed, or even found, at first.  The purpose of incremental measurement is to understand and prioritize the next phase of experience crafting.
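As an illustration only, here is a minimal sketch of that before-and-after comparison; the metric names, values, and goal are hypothetical, and it assumes lower values are better for every metric.

```python
# Hypothetical metrics: baseline (old experience) vs. current (new experience)
baseline = {"time_to_complete_min": 12.0, "escalations_pct": 9.0, "related_process_min": 4.0}
current  = {"time_to_complete_min":  8.5, "escalations_pct": 7.5, "related_process_min": 5.5}
goals    = {"time_to_complete_min":  8.0}   # the goal set before the redesign

for metric, old in baseline.items():
    new = current[metric]
    if metric in goals:                       # effectiveness: did we reach (or move toward) the goal?
        if new <= goals[metric]:
            status = "goal reached"
        elif new < old:
            status = "improved, goal not yet reached"   # still document it as progress
        else:
            status = "no improvement"
    elif new > old:                           # regression: something else got worse
        status = "REGRESSION - investigate"
    else:                                     # no regression (or an unintended improvement)
        status = "same or better"
    print(f"{metric}: {old} -> {new} ({status})")
```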

A couple more things on the end-result measurement.  First, you are measuring at a higher level than a specific function or even the experience.  You are not measuring any of the three layers in the Feedback Model (effectiveness, satisfaction, or loyalty); you are measuring the overall effect on the existing and new experiences, whether the goals and objectives you set early on were achieved, and whether or not the design of the experience can be called a success.

You are still going to have to measure effectiveness and satisfaction directly from customers as they use this new or improved experience – and that is going to happen separately and with different expectations.  I just want to make sure you understand that this has nothing to do with the experience itself, but rather with the design process and whether it reached its goals.

Second, you must remember that crafting awesome experiences is an iterative process; it takes time and several repetitions to make an experience perfect (OK, better and far more improved – there are no perfect experiences).  You will have to measure the changes as you go along, and make sure improvements happen after each iteration – regardless of how customers feel.  Your goals for improving or creating new experiences are different from what customers expect to get.

They want better, easier, simpler, faster.  You want better integration, more automation, easier processing, and less time to complete.  It may sound the same, but the main difference is that what you measure is back-office processes that clients cannot see, while what they measure is a perception and an impression of how it works.

After you prove that the experience improved, achieved its goals (or at least moved in the right direction), and works without breaking anything else or creating new problems, you are ready for deployment.

And ready to start measuring the perceptions and impressions of customers in the form of effectiveness and satisfaction.

And for that, as I said in the first two parts, you will need a measurement framework (part 2 of this series), and an understanding of what you are measuring.

Which is the next and last part of this series.

What do you think so far?  Interesting? Any Comments? Thoughts?