
ICYMM: Gartner Prophecy is Wrong (Says Someone Else)

ICYMM: Pointers to articles and posts elsewhere in the world where my work is featured.

This is a good article that quotes me, though it is not based on an interview.  I really like it when someone takes my work and expands on it; it is what I always try to do.

In this case, Chris Ward from MyCustomer.com took a little bit of the work I did with IntelliResponse last year on Knowledge Management and cross-referenced it with some work that Gartner did on self-service.

In his opinion, and I kind-of agree, the present state of Knowledge Management as reflected in my research will prevent Gartner’s prophecy about self-service solutions from becoming reality.

Quote

“Doing self-service right means making the self-service experience available in a multitude of channels,” he states in the report. “This appeals to a customer’s need for consistency of experience. Today’s best digital self-service technologies are channel agnostic, so that the customer can select the interaction channel of their preference and expect a consistent answer.”

He says that even though self-service has high adoption rates, the trends and data points will prevent it from reaching the 85% adoption that Gartner predicts by 2017.

Quote

The good news for self-service technology providers is that 64% of companies plan to invest more in self-service and extend it to other channels in the next 1 to 3 years.

He cross-references data from other studies as well, and makes a good, solid case for why the goal will not be reached.  Where do Knowledge Management and Self-Service adoption stand right now?

Read the article and find out…

What do you think? Is Gartner right? Short? Almost there?

Tell me your thoughts down below and we will chat…

ICYMM: Operationalization of IT (and others) – an interview

ICYMM: In case you missed me, where I point to other places in the world where my work shows up now and then.

I have been talking about the operationalization of Customer Service and other departments for about three years now; I covered a little bit of it in my last post on Business Transformation.

I had a very good discussion about it with Christine Wong, who later wrote a good article for ExpertIP.

Check it out; it has a lot of the data and data points that have shown me where IT and Customer Service are going in the near future.

Thoughts? Questions? You know… down below.

Thanks for reading.

The Future of Data – An Ultra Brief Summary

I wrote a lengthy report on the future of data… due to several reasons I never got around to publishing it (yet; I’d like to eventually, but it now needs some updating).

I would like to share some excerpts from this report, such as the one below.  Please let me know your thoughts – they might encourage me to publish it (might, might, might… might).

Data has been touted over the years as many things: the lifeblood of the corporation, the main asset, the most valuable part of IT’s responsibilities – and lately as “Big Data”: a vast collection of real-time data that comes from all walks of life and technology.

Regardless of how it is viewed, labeled, or categorized, data remains the most important component in how organizations find and define value in the interactions between stakeholders and customers.

There are, however, two aspects that merit notice and make this very different from any other time: analytics and experiences.

The Value of Analytics

Without processing, analysis, and an understanding of the hidden value inherent to it, data is actually nothing more than a collection of ones and zeroes on a storage device (or in transport, if it must be interpreted live from the network as opposed to picked up from storage).  The purpose of data is to be combined with other elements, mixed-and-mashed in myriad ways, and its secrets uncovered.

This is actually one of the most valuable advances of recent years: the speed of processing, the understanding of the data, the advances in storage and management, and the better knowledge of how to manage data have yielded amazing advances in real-time analytics and the use of predictive analytics (or, better yet, anticipatory analytics).

Now all we need are the tools to use it properly.

This is what most of the analytics vendors that get it are working on: data visualization, data-prep, data-scrubbing, and similar tools are losing their IT-centric, consultant-deployed models and embracing the rise of the citizen programmer.  Indeed, letting stakeholders and users directly access the data they need to conduct analytics, and manipulate that data via graphical interfaces, is removing the need for the “data scientist” and letting more and more organizations use their data better.

This is translating into better insights, better results, and better processes being deployed in organizations – and all this is the tip-of-the-spear for the digital transformation investments that will explode in the next few years.

All They Want is Experiences

The era of the customer is upon us.  Customers are more empowered and better prepared now than at any other time in history.  Armed with online communities, other customers’ reviews, and access to infinite resources via the internet, they are smarter, better educated, and more knowledgeable than organizations.  As a result, they are calling the shots in their interactions with organizations: deciding when and where they interact, and (more importantly) what they want to get from the interaction.

This last point is critical: most organizations are not prepared to deliver those outcomes (in the form of experiences) because they don’t have access to the right information.

Information management is shaping up to be the critical investment priority for organizations in the next decade, and the three elements that make up information (data, knowledge, and content) are shaping up to get generous budgets allocated to them.

Of course, it is not simply about having the right information, or knowing where to find it, but also about making sure the right data is matched with the proper content and complemented with the necessary knowledge to provide a complete, personalized, and optimized answer to the customer’s question or inquiry.
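
To make that matching a bit more concrete, here is a minimal sketch of the idea in Python.  Everything in it – the stores, the field names, the topic keys – is invented for illustration; it is not from any product or from my report.

```python
# Hypothetical stores standing in for the three elements of information.
# All names and fields here are invented for illustration.
DATA = {"cust-42": {"name": "Ana", "plan": "premium"}}                        # data
KNOWLEDGE = {"billing": "Premium plans are billed on the 1st of the month."}  # knowledge
CONTENT = {"billing": "https://example.com/help/premium-billing"}             # content

def answer(customer_id: str, topic: str) -> str:
    """Match the right data with the proper content and the necessary
    knowledge to produce one complete, personalized answer."""
    profile = DATA[customer_id]
    explanation = KNOWLEDGE.get(topic, "I will find out and get back to you.")
    article = CONTENT.get(topic, "")
    return (f"Hi {profile['name']}, about your {topic} question: "
            f"{explanation} More detail: {article}")

print(answer("cust-42", "billing"))
```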

The understanding of how information is created (how data is analyzed, knowledge generated, and content maintained, at the very least) is what is going to drive infrastructure investments for IT in the next 2-3 years (and longer) as it adjusts to the new reality of having to manage information using technology.

What do you think? Am I way off?

Fixing Predictive, Making Anticipatory Work

Thanks for coming along… to refresh your memory, this is the third post in a three-part series.

The concept? Predictive analytics was badly implemented, and new technologies and knowledge have made it obsolete.  There’s an interesting new model that can be used instead: anticipatory analytics.

This third and final post in the series will show you how to make it work – the first two were more theoretical.  I want to really explore how to make the art of the possible work now.

How is this possible? It involves five elements:

  • Fast data management – the technologies we use to manage data have changed dramatically in the past five years or so. From in-memory processing that does not require storage, to understanding of unstructured data, to faster throughput and better connections between systems, we are able to manage data orders of magnitude faster than in the past – leading us to real-time (or near-real-time) use of data to let computers automate decision making. This is what allows us to create, in near real time, customized offers that have a far greater likelihood of being accepted (see the sketch after this list).
  • Sharing of data – consumers, in the age of the customer, are sharing more and more data thanks to social channels, social networks, and online communities. Information that was once hard to obtain or to understand is today freely shared in public places for organizations to leverage and use to deliver mass personalization. These highly customized and attractive offers can only happen when data about sentiment is freely shared and analyzed.
  • Loads of data – there is no doubt that we are seeing more and more data being generated, but the flip side (organizations being able to manage that massive load of data) is more interesting. Long gone are the days when using more data meant buying more expensive storage to accumulate it; today either storage is so cheap that it is not an issue, or processing that happens in near real time makes storage obsolete and unnecessary. The volume that was once suspected of dragging down performance becomes a load of welcome data when properly filtered.
  • Outcome driven – as more and more consumers and customers push to co-create and collaborate, we see more organizations willing to focus on the outcome from the perspective of the consumer, not the company. After all, the outcome for the consumer also benefits the company, as it usually translates into more products and services being sold. An ideal outcome for a consumer would undoubtedly call for more of a product or service that the organization, or one of its partners in the ecosystem, can offer and benefit from selling. Focusing on the outcome from the consumer’s perspective is a win-win for both parties.
  • Experience based – not just a generational shift, but also a societal move that reflects the advances of the middle class around the world: more and more consumers are shifting away from single interactions to complete experiences. This move, and the demands it places on organizations, is making more and more brands and providers consider ecosystems and complete experiences. In the airline example from the previous post, the airline could offer complete experiences versus a simple trip to create a competitive advantage. Knowing what consumers want from those experiences requires a complete predictive model that leverages all data and sources from all partners to deliver better experiences to consumers.
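
To ground the “fast data management” element, here is a minimal sketch: events are processed as they arrive, with no batch-and-store step, and a customized offer is emitted the moment enough intent is visible.  The event shapes, the offer catalog, and the matching rule are all toy assumptions of mine, not anyone’s actual pipeline.

```python
# Toy event stream; in practice these would be live feeds from social
# channels, partners, and purchase systems, not an in-memory list.
events = [
    {"customer": "c1", "type": "booked_flight", "dest": "Cancun"},
    {"customer": "c1", "type": "shared_post", "topic": "scuba diving"},
]

# Hypothetical partner offers, tagged by interest.
OFFERS = {"scuba diving": "discounted reef dive", "golf": "tee-time package"}

def anticipate(stream):
    """Process each event as it arrives (no storage step) and yield a
    customized offer as soon as a trip and an interest line up."""
    state = {}
    for event in stream:
        s = state.setdefault(event["customer"], {})
        if event["type"] == "booked_flight":
            s["trip"] = event["dest"]
        elif event["type"] == "shared_post":
            s["interest"] = event["topic"]
        if "trip" in s and s.get("interest") in OFFERS:
            yield event["customer"], f"{OFFERS[s['interest']]} in {s['trip']}"

for customer, offer in anticipate(events):
    print(customer, "->", offer)   # c1 -> discounted reef dive in Cancun
```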

Of course, none of this happens automatically – organizations must undertake the process to understand what they have to do, set up the ecosystems, implement analytics, and optimize as they go. There are three steps that organizations must take to embrace anticipatory analytics:

  1. Understand Your Customers.  This goes past demographics and who they are, and more toward their expectations, wants, needs, and demands.  What are they really asking for? How do they expect to get it? When? What is their impression of your duties and responsibilities towards them? This type of understanding is what drives the analysis and decision making you will undertake.  If you don’t know what you need to deliver, you don’t know how the data can help you anticipate their needs (and, btw, make sure you have processes in place to update those needs and wants, as your customers are wont to change them).
  2. Tie to KPIs.  If you are going to invest time, money, and resources into making this work, you had better be prepared to show how it supports the ever-moving goals of the organization.  We are not talking about numbers or metrics that are relevant to the process, but about how those functions tie to the numbers that tell the story of how the organization is evolving.  The outcomes you are seeking had better have a tie-in with the numbers that show the health of the organization – if not, find one.
  3. Evolve. There is no simple way to get there from here.  There is no silver bullet or magic potion that will give you an understanding of your customers’ expectations and how they evolve, nor is there a shortcut to understanding how specific actions tie to KPIs.  Further, it is likely that you have already implemented the technology you need – no one will sell you an “anticipatory analytics engine in a box” to deploy.  This is about making commitments to see outcomes become realities, and about finding the best way to use the technology you already have to support the new goals you are setting.  Leverage, repeat, learn, and evolve.

Well, that’s it for this series – thanks for reading along… What do you think? Something you could do? Something you see your organization doing?

Would love to hear what you are doing or thinking…

ICYMM: Three Tips to Power Your Knowledge Management Initiatives

ICYMM: In case you missed me, linking to articles or other places in the web where you can find my research and content.

A few weeks ago I had the chance to talk to the lovely Lori Alcala about Knowledge Management.  She was interested in learning more about the research and findings from a report on Knowledge Management I ran, sponsored by IntelliResponse.

She wrote a great summary, and I am happy with how it came out – despite it being in a publication I don’t appreciate much – and wanted to link you to it and give you some quotes from it.

Quoted:

Kolsky noted that only 34 percent of companies have maintenance processes in place for their knowledge.

“Maintenance costs increase an average of 8 percent per year, but less than 3 percent of organizations would invest 5 percent or more on maintenance over the previous year.”

“Most people see knowledge management as a technology,” he concluded, “but in reality, it’s a lifelong initiative of an organization.”

Read the entire article including the three tips to “power your knowledge management” here.

Disclaimer: you know me and my model – vendors sponsor research but don’t own any of the planning, deployment, analysis, or delivery.  They get to use the findings, but I get approval of the final content and model, etc.  IntelliResponse was kind enough to sponsor my research on KM last year (together with Transversal and Parature) and they got a good report (I will provide more info in later installments) and data.  They created an infographic, and that is why Lori became interested and we talked.  None of that changes any of my opinions about any vendor – but you know that already.  Right?

A Model for Anticipatory Analytics

I hope you read my last post about what’s wrong with Predictive Analytics – that’s the basis for this post.

I would like to explain how Anticipatory Analytics should work – and give you an idea of its value.  This is, in essence, what predictive analytics should’ve been in generation 1.0, and how we evolve from that definition of predictive to today’s model.  As I mentioned before, I see predictive as badly implemented more than anything, and I am hoping this model can improve and replace those faulty implementations.

The outcome for this model: explore the art of the possible.  Let’s start with a theoretical example to illustrate it.

A consumer purchases an airline ticket to a tropical destination. In today’s world of traditional marketing, the email confirmation would contain links to typical tourist attractions with which the airline has established relationships. If the consumer buys, the airline gets a percentage of the purchase price – but the number of consumers who take those offers is extremely low.

Any incentive to purchase, or any special offer, will not be customized to that specific consumer to increase the chance of purchase.

Fast forward to predictive analytics, and the airline may do better than just offering a bunch of random attractions: they might filter by the age and gender of the consumer, or even look at past events to see if they opted for one of the previous offers, and then make the offer to the consumer. Since the confidence in these offers is greater, and the recipient is thought to be better known, the offers the consumer gets may be more customized and even addressed to their individual preferences (based on information the airline captured before).


In a world where everything is possible we see a different scenario. The airline would use data from its own repositories, but also from other sources. Compiling information from social channels, communities, partners and alliances, even accessing past credit card information, it can, close to real time, construct a very effective profile of which offers would or wouldn’t attract the interest of the consumer, and extend deeply discounted, but almost guaranteed, offers customized to the preferences of the consumer – not necessarily based on information that was stored before, but on analytics conducted ad hoc on the many data streams related to the consumer.

Each choice the customer makes would then change the potential outcomes of the many other options – which would then be recalculated and the most likely chosen: not the next best, but the next most likely.

In this example, instead of getting an email with 10-12 “opportunities,” the consumer would get a highly customized package of offers that are almost guaranteed to be interesting and appropriate. Further, if the airline could obtain financial information about the user from a partner, the offers could be of higher or lower value as appropriate to each user, further increasing the chances of adoption.

Even more interesting, any choice that the consumer makes would alter the calculations for the many other options – in real time resulting in better offers being tendered instantaneously.
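
Here is a minimal sketch of that recalculation loop, with made-up offers, scores, and affinities – no real data or vendor logic behind any of it, just the shape of the idea:

```python
# Toy offer pool with prior likelihood-of-acceptance scores; in a real
# system these would come from live analytics, not constants.
OPTIONS = {"reef dive": 0.40, "beach dinner": 0.35, "golf": 0.10, "spa day": 0.30}

# Hypothetical affinities: choosing one option shifts the likelihood of others.
AFFINITY = {"reef dive": {"beach dinner": 1.4, "spa day": 1.1, "golf": 0.5}}

def rescore_after(chosen: str, pool: dict) -> dict:
    """After each choice, recalculate every remaining option and rank by
    likelihood: not the next best, but the next most likely."""
    shifts = AFFINITY.get(chosen, {})
    rescored = {name: round(min(1.0, score * shifts.get(name, 1.0)), 2)
                for name, score in pool.items() if name != chosen}
    return dict(sorted(rescored.items(), key=lambda kv: kv[1], reverse=True))

print(rescore_after("reef dive", OPTIONS))
# {'beach dinner': 0.49, 'spa day': 0.33, 'golf': 0.05}
```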

This new model – moving from expecting a consumer to repeat someone else’s past behavior to foretelling what a consumer may do based on his or her individual data and needs, adapting along the way based on their choices and other data – is the art of the possible today.

Great, you say – so how do I make this work?  That’s next week’s third and final post on this series… stay tuned, once more.

Fixing The Suckiness of Predictive Analytics

You have responded so nicely to my publishing of “old” work that was never shown that I want to continue doing it.

What follows is the first of a three-part series (when you are breaking down a 3-4 page writeup into pieces, three parts seems to be about ideal).

I was tasked last year with focusing on how to make predictive better.  I was never a fan of how predictive analytics was implemented (I am fine with the concept, but I don’t think anyone cares about the concept – instead it deteriorated into a parrot act of repetitiveness with no good results).

In conducting the research I came upon long-forgotten concepts and ideas, and mulling them over gave me a new idea: the one you will read about in next week’s installment :-)

First, let me set the playing field.

Predictive analytics is finally changing.

An art form of sorts, revived by the interest in Big Data and analytics shown by global corporations in the past decade, predictive was never intended for its current uses.

The definition of predictive analytics puts it at odds with its current usage.

Predicting a behavior was not meant to serve as a harbinger of a customer’s intention to purchase, but rather as a lagging indicator of an occurrence or event, so that the knowledge could be used to build better analytics models.

The thought that an occurrence will repeat many times over because past data points indicate a similar setup is criticized by many analytics experts – even as it is adopted by most organizations.

The difference is the narrowness of what predictive can do today. We are simply focused on one path, one way to get from point A to point B. If last time we were at point A we took a bus to get to point B, we will do the same today. The complexity of today’s world makes those “guesses” just about impossible. What if, for example, it is raining heavily and I am in a rush? Could I take a taxi instead? Or, what if I have time and it is a beautiful day? Could I walk? Or, what if I am with someone who owns a motorcycle? As you can see, the many variables traditionally ignored by predictive (we look for a pattern, then try to repeat it when similar data points are recognized in the same sequence) are what make the new models far more interesting.

Keep in mind, this is not what predictive was intended to be – but what it became through the poor implementations along the way.

A successful bad implementation will be repeated.  A failed good implementation never sees the light of day again.  This is how Twitter came to be used for Customer Service (but I digress).

Instead of trying to predict behavior step-by-step as most predictive applications do, why not use the pattern as a loose guideline toward a sought outcome, break down the steps, and consider the many options available at each step? What would once have been a monumental feat of computation and analytics is very possible today thanks to advances in data capture, storage, management, and analysis.

The “Big Data” era brought the capability to analyze just about any data set in real time and to add many more variables to the analysis, yielding far more interesting insights.

And it is within this new approach that we find not predictive analytics, but anticipatory analytics: the ability to dynamically and actively generate insights at each step of the way, based on elements that were previously impossible to include – intent, real-time decision-making by users, and untold goals and objectives.

As a result, my phone may hail a taxi for me (and maybe offer me a discount) if it detects rain, or nudge me towards walking on a nice day – not because I did it before, but because I am about to do it. This is where predictive transforms to become the art of the possible.
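
To show the difference in miniature, here is a sketch of that commute decision as an anticipatory scorer rather than a repeated pattern.  The weights and context fields are entirely invented; the point is only that live context, not the last observed behavior, picks the next step.

```python
# A fixed predictive pattern would always repeat the past ("last time: bus").
# An anticipatory model scores every option against live context instead.
# All weights and context fields below are invented for illustration.

def next_step(context: dict) -> str:
    scores = {"bus": 0.5, "taxi": 0.3, "walk": 0.2}  # loose prior from past behavior
    if context.get("raining") and context.get("in_a_rush"):
        scores["taxi"] += 0.6        # heavy rain plus a rush favors a taxi
    if context.get("nice_day") and not context.get("in_a_rush"):
        scores["walk"] += 0.6        # time plus a beautiful day favors walking
    if context.get("friend_has_motorcycle"):
        scores["ride"] = 0.7         # new options can appear mid-journey
    return max(scores, key=scores.get)

print(next_step({"raining": True, "in_a_rush": True}))   # taxi
print(next_step({"nice_day": True}))                     # walk
print(next_step({"friend_has_motorcycle": True}))        # ride
```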

What does anticipatory analytics look like?  Come back next week to see…