Tag Archives: Communities

How Accuracy in Analytics Matters for Businesses

I wrote about using analytics to get more value out of communities and how to use analytics to get to know your customers better.  However, the more I think about it, the more I see the accuracy of the analysis as one of the most relevant issues for analytics.  Continuing the series on the topics that matter for analytics, I want to look at the issue of accuracy and its effect on business.

Accuracy is the rating that defines how true the analysis performed by the computer is without human intervention.  In other words, and talking about social analytics, how real is the computer’s perception that a tweet or blog post has a positive or negative inclination?  The only way to measure accuracy is by comparing the results of the computer analysis to a similar analysis done by humans.  In other words, did the computer pick the same sentiment a human would’ve picked?
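A minimal sketch of that measurement, with invented labels: accuracy is simply the fraction of posts where the computer's sentiment call matches the human coder's.

```python
# Toy illustration: accuracy is just agreement between the machine's
# labels and a human-coded "gold" set for the same posts.
machine_labels = ["positive", "negative", "positive", "neutral", "negative"]
human_labels   = ["positive", "negative", "negative", "neutral", "negative"]

# Count positions where the machine picked the same label a human did.
matches = sum(m == h for m, h in zip(machine_labels, human_labels))
accuracy = matches / len(human_labels)
print(f"Agreement with human coders: {accuracy:.0%}")  # 80%
```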

We only rate accuracy as the capacity of a computer to mimic a human brain, and we can only conceive that a computer is accurate when it copies how a person thinks.  Alas, we forget to consider the inherent bias built into computer calculations: the one provided by the humans programming those systems.

Computers only do what we tell them to do.  They have (almost) infinite computational power, and can apply any set of rules to any computational variables.  This means that if we tell computers that a specific word or combination of words means something positive, the computer cannot make it mean something negative.  In other words, we are not really rating the computer’s ability to determine a sentiment; we are rating whether humans did a good job, or not, of biasing the computer to pick that sentiment.  This means we can accurately predict the outcome the computer will select before the first variable is computed against the first rule.
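A toy rule-based scorer makes the point concrete.  The word lists below are hypothetical human choices, not from any real product; once they are fixed, the "sentiment" the computer reports is fully determined by them.

```python
# Hypothetical word lists a human analyst programmed in; the sentiment
# the computer reports is fully determined by these choices.
POSITIVE = {"love", "great", "awesome"}
NEGATIVE = {"broken", "hate", "awful"}

def score(text: str) -> str:
    """Return a sentiment label driven entirely by the human-chosen lists."""
    words = set(text.lower().split())
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

# The outcome is predictable before any tweet is analyzed: a tweet is
# "positive" exactly when it matches more POSITIVE than NEGATIVE words.
print(score("I love this great product"))  # positive
print(score("my unit arrived broken"))     # negative
```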

Accuracy is not a computer function but a human bias – and its “manipulation” can have great effects on the destiny of the business.  Learning to reduce the bias will yield better results for analytics, and that is where the efforts should be spent – so you can get real value out of the analysis and know where to take action, because taking action on inaccurate data can be a disaster.

Reducing the bias comes down to the two core elements of analytics: what we analyze, and what we compare it to.  What we analyze has two variables: data and rules.  What we compare it against has two variables as well: taxonomy (categories) and ontology (definitions).

Let’s examine how we add bias to these four variables:

  1. The data we choose to examine will determine the outcome of the analysis very quickly.  Whether we choose to analyze results from a survey, blog posts we picked up, tweets, or other expressions of human opinion, finding where the data to be analyzed resides is the most critical part of the analysis.  If we know that our customers frequent a specific community to discuss our products and services, analyzing Twitter will not yield good results.  We tend to focus our listening on a myriad of communities, a very large number of which don’t really matter.  And listening in the wrong place will bias the results by watering down the value of the overall positive and negative thinking.
  2. The rules are the place where the majority of organizations introduce their bias.  This is where you determine whether a specific input matters or not, and how it should be counted.  Domain expertise and knowledge of the business remove the bias in this stage.  There is nothing easier, for someone well versed in the event being analyzed, than to determine which rules matter and which don’t.  Some of the analytical engines have automated technology for creating these rules.
  3. The taxonomy is typically used to organize the insights found (sentiment, issues, etc.) into categories.  A few of the available products offer pre-defined taxonomies to which organizations add categories based on their product lines (e.g. cars, motorcycles, trucks) or business functions (e.g. sales, accounting, shipping).  Different customers have different categories or different ways to identify products and services.
  4. The ontology (sometimes also called a topic domain) is simply word definitions.  These overgrown dictionaries, which also include thesauri and contextual word functions, determine whether the sentiment analysis was performed between the right terms and concepts.  For example, the definition of the word “tire” in one industry might be different from another’s.  One of the things that improves the ontology is a capability around disambiguation, a computer algorithm used to better interpret the content and identify the sentiment and the issues it refers to.
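The four variables above can be pictured as plain data structures.  Everything below – posts, rules, categories, definitions – is invented for illustration, not taken from any particular product.

```python
# Sketch of the four bias points as plain data structures.
data = [  # 1. the data we chose to listen to
    "the truck tires wear out too fast",
    "love the new motorcycle, shipping was slow",
]
rules = {"love": +1, "slow": -1, "wear out": -1}  # 2. human-written scoring rules
taxonomy = {  # 3. categories the insights are filed under
    "trucks": ["truck", "tires"],
    "motorcycles": ["motorcycle"],
    "shipping": ["shipping"],
}
ontology = {  # 4. domain-specific definitions ("tire" means a wheel part here)
    "tire": "rubber wheel covering",
    "shipping": "order delivery, not maritime transport",
}

results = []
for post in data:
    # File the post under every matching category, then score it by the rules.
    cats = [c for c, terms in taxonomy.items() if any(t in post for t in terms)]
    sentiment = sum(w for phrase, w in rules.items() if phrase in post)
    results.append((cats, sentiment, post))
    print(cats, sentiment, post)
```

Change any one of the four structures and the "insight" changes with it – which is exactly where the human bias lives.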

Now let’s look at the implications of accuracy for the business.  Let’s look at one specific example to show “accurately” (pun badly intended) how the correct set of variables can lead to the right (or wrong) conclusions.

A car manufacturer had a problem: airbags in their cars were deploying under unusual conditions.  Even when the cars were not involved in accidents, without even a small hit to the sensors, the airbags were deploying.  Upon initial analysis of the data, the conclusion was that the mechanism that deployed the airbag (which costs around $30.00 to replace) may have been at fault.  A recall was prepared to exchange it for all affected customers.  Right before the recall was issued, the company decided to analyze in more detail the complaints received from customers (which explained in good detail how the incidents occurred), and cross-reference them with the detailed repair notes provided by the technicians fixing the airbags after deployment.

A higher degree of accuracy in the analysis was obtained by removing the bias that was introduced when the original customer complaints were translated to mean that the entire mechanism was at fault, and by deconstructing the problem further to focus on each individual component of said mechanism as detailed in the verbatim repair notes.  This allowed the car maker to determine that the problem was not the entire discharge mechanism for the airbag but a small spring that overheated and relaxed under specific circumstances – a spring that cost just $0.25 to replace.  The potential impact on the car company of issuing a recall for the entire launching mechanism for all affected vehicles versus the tiny spring – a savings of $29.75 per car – quickly shows how a more accurate analysis and cross-referencing of the data helps the organization.
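The back-of-the-envelope recall math is worth spelling out; the fleet size below is a made-up assumption, since the example doesn't give one.

```python
# Recall math from the airbag example; affected_cars is a hypothetical
# number, invented here for illustration.
mechanism_cost = 30.00   # replace the whole deployment mechanism
spring_cost = 0.25       # replace just the overheating spring
affected_cars = 200_000  # assumed fleet size, not from the original story

savings_per_car = mechanism_cost - spring_cost
fleet_savings = savings_per_car * affected_cars
print(f"Savings per car: ${savings_per_car:.2f}")    # $29.75
print(f"Fleet-wide savings: ${fleet_savings:,.0f}")  # $5,950,000
```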

Are you removing biases from your analytics? Improving accuracy? Are you looking at multiple customer data sources and cross referencing them? How are you doing that? Let me know in the comments below, would love to see more examples.

This is the third part of a series of six sponsored research reports I am writing for Attensity on how to better leverage analytics in a social business.

Forget Listening and Engaging, Managing Conversations is "Da Bomb"

You probably heard the many experts telling you to listen and engage with your customers.

Do it! Today!

Get on Twitter!

Get on Facebook!


If you just followed that advice, you already know that in the long run it just consumes resources for no gain.  Sure, you can engage with an upset customer and solve their problem – but how do you find and fix the faulty process that led to the problem?  Or, you can promote via Facebook and get lots of people to buy – but how do you know whether they did come from Facebook?  Did they only “fan” you (or “like” you, to use the more modern term) for the special offers or coupons?  How do you retain customers and grow loyalty via these interactions?

So, if listening and engaging don’t yield tangible, long-term results – then what does?

Managing conversations.  Yes, I did use the M word… after all, you are running a business and you need to get results you can track – right?

I talked to beRelevant about this problem and saw a demo of their platform for managing conversations.  These are my impressions.

You have probably heard me too many times talk about feedback being the fourth pillar of CRM, and how the sole purpose of doing SCRM is to obtain actionable insights that are then converted into process fixes, which in turn provide better experiences.  The biggest problem you will face when trying to do this is figuring out which conversations make sense to listen and engage in, not how to capture feedback.  Feedback shows up when not expected.  You can be having a conversation about a service issue, and a customer will give you feedback.  Or they could comment on a call-to-action in your marketing materials online, or provide a killer idea in a post about your product elsewhere.

Capturing the feedback is not that hard; we have lots and lots of tools to capture it from many different places these days, thanks to Social Media Monitoring tools and the fact that every vendor has figured out some way to capture it by now.  It is the doing-something-with-it that becomes tricky.  Quickly you realize that you need more details.  What seemed like perfect feedback during the service call (e.g. the product should also come in brown and green) is no longer so clear.  What parts should change colors?  Just the cover?  Maybe we need more colors, not just those.  Does the customer still feel the same way, or was it because they were on a “nature” week?  The lack of answers is what thwarts the efforts to make the feedback actionable.

Having that follow-up conversation, in a controlled environment — oh, snap; now I used the C word — and being able to make something of it is what having a conversation is all about.

You can do that with a focus group, or customer council, or whatever method you used before.  Or you could have those conversations online, using a platform that is geared to it.  That is where beRelevant comes in.  It takes the feedback, finds the author, and invites them to contribute and converse in an online, controlled, managed environment.  They make the platform that manages conversations.  And it works, from what I saw, pretty well.

This is an early release in a not-yet-born market.  As long as we continue to push the myth that management and control are bad words, we won’t advance the use of Social Media in the organization.  However, if we ever decide to actually do something with the money we are investing into social media, and we want to have meaningful conversations, a platform like beRelevant is something you need to consider.

Leveraging Communities through Analytic Engines

The driving force of the Social Customer era is participation in communities for both social and professional purposes.  From the structured social networks (e.g. Facebook and Twitter) to company-owned or company-sponsored communities used for support, sales prospecting, or research and development, to communities used internally for collaboration between workers – communities are showing up just about everywhere.

This change brings vast amounts of content generated by the communities.  In spite of the extensive experience organizations have gained in the past few years dealing with large data sets and knowledge, user-generated content remains untamed.  What to do with it, and how to leverage it for value, are almost as mysterious today as they were when we first began accumulating knowledge in the 1980s.  Organizations are struggling to understand how to utilize it and how to derive value from it.  Alas, Content Management Systems and similar enterprise tools can help manage the creation and processing of structured content – but the biggest problem remains the unstructured content produced in these communities.

Realizing Value from the New Large Volume of Content

Consider the size of some of these communities: Facebook is close to 500 million people, Twitter nears 100 million, and a few of the corporate-sponsored communities have over two million members.  The amount of content generated is bringing organizations that were already drowning in data from transactional CRM systems to desperate levels.  They are now saddled with massive volumes of knowledge and feedback that make finding the needle in the haystack look like child’s play.  In spite of the amazing volume, the storage and management of the content is not the problem – storage space is cheap these days, so virtually any amount of content and data can be stored for, well, as close to forever as we need.  The solution of cheap storage has given way to a bigger problem: what to do with it?

On one hand, an organization wants to capture and leverage critical information about its customers’ needs and wants to deliver better experiences and products.  On the other hand, customers fear that their feedback is not being heard and used.  To show customers they care about their opinion, companies must act on the feedback.  Alas, given the volume, and short of scanning each entry posted in every community for useful information or data, how can they capture and act on this feedback?

image by © Plumdesign | Dreamstime.com

Enter analytical engines.

There are two roles that an analytical engine can play in a community – they can either be used to monitor and report on usage, sentiment and trends, or they can be used to structure the unstructured.

Monitoring for the Sake of Monitoring

Social Media brought with it standard monitoring tools.  Whether from Social Media Monitoring (SMM) vendors like INgage, Radian6, ScoutLabs, and Visible Technologies, or embedded within the products of other vendors, these tools are quickly becoming the “first line of defense” for the barrage of data produced.  The ability to collect the raw data, summarize it and report on specific terms is valuable for organizations that are suddenly overwhelmed by these new channels.

These tools are used for monitoring specific words and phrases, brand mentions (or competitors’ brands), and people talking about industries or products.  For example, during the TV airing of Super Bowl XLIV, Radian6 and partners ran an analysis of brand mentions called BrandBowl 2010, which resulted in the naming of a winner by number of mentions and “positive” (like or dislike expressions) sentiment.  During the same event, another analysis done by MarketIQ contrasting Coke and Pepsi, aptly named the SodaBowl, also looked at mentions and sentiment for both drink makers.  Again, the conclusion was only which brand was more popular – they actually used the term “buzzworthy” – not who gained what from their different approaches to promoting themselves.

While certainly entertaining, these analyses yielded no value to the brands mentioned on the success or failure of their campaigns – just whether they were popular or not.
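A mention-and-sentiment tally of the BrandBowl/SodaBowl kind can be sketched in a few lines; the tweets and word lists below are invented, which also shows how shallow the resulting "insight" is.

```python
from collections import Counter

# Invented tweets and word lists, purely for illustration.
tweets = [
    "that coke ad was great",
    "pepsi spot felt flat",
    "loved the coke polar bears",
]
brands = {"coke", "pepsi"}
positive_words = {"great", "loved"}

mentions, positives = Counter(), Counter()
for tweet in tweets:
    words = tweet.lower().split()
    for brand in brands:
        if brand in words:
            mentions[brand] += 1  # count the brand mention
            if any(w in positive_words for w in words):
                positives[brand] += 1  # count it as "positive" sentiment

print(mentions)   # coke: 2, pepsi: 1
print(positives)  # coke: 2
```

The output names a "winner" by volume and tone – and says nothing about what either brand actually gained from its campaign.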

Although there is room for improvement in sentiment analysis, the near-real-time analysis of these events allows marketers to identify which communities are important to them, and which ones need further attention.  It also allows them, for the first time, to understand immediately what effects their actions have and adjust campaigns and plans in real time –invaluable to improve the message and ensure a good reception by the public.

However, monitoring for the sake of monitoring yields limited value to businesses on their way to becoming social.  Listening is the first step, but engaging with the customer and providing a return on their feedback is closer to becoming a social entity.  Organizations leveraging analytical engines to find and structure this feedback are on a more interesting path to assess.

Structuring the Unstructured

Among the contributions community members make there are very interesting nuggets of information, opinions, and suggestions that are often lost, since there are no tools that can extract them, organize them, and use them.  This information could be used to improve products, create better experiences, or better understand the needs of customers and prospects.  Customers are more open with their opinions among peers than when asked to complete surveys or participate in focus groups.  This candor and openness often results in very valuable data – which is not always leveraged.

image by © Carsten Reisinger | Dreamstime.com

Analytical engines can find that information and structure it (create a data record from it), distribute it to the specific system that can utilize it, and keep track of trends and patterns on the data they find.  Organizations use them to carry out actions like ideation (the creation of new products and services), feedback management (understanding how customers really feel beyond the surveys), social prospecting (finding more about their prospects and segments to target in sales), and virtual focus groups (leveraging customers’ opinions without formally convening a group).

Good analytical engines will automatically classify all the information collected (using an SMM – social media monitoring tool – is the best way to collect all this information) into different buckets, and analyze those buckets to generate insights.  This categorized information in its raw form is somewhat valuable, but the use of workflows and databases to store this data and process it further yield very powerful knowledge for the use cases mentioned above.
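Here is a rough sketch of that classification step, with hypothetical buckets and cue words: each raw post becomes a structured record that a downstream system (ideation, feedback management, and so on) could process further.

```python
# Hypothetical buckets and cue words; a real engine would use far more
# sophisticated classification than substring matching.
buckets = {
    "ideas": ["should", "wish", "would be nice"],
    "complaints": ["broken", "slow", "terrible"],
}

def to_record(post: str, source: str) -> dict:
    """Structure the unstructured: turn a raw post into a data record."""
    bucket = next(
        (b for b, cues in buckets.items() if any(c in post.lower() for c in cues)),
        "uncategorized",
    )
    return {"source": source, "bucket": bucket, "text": post}

records = [
    to_record("the app should export to CSV", "forum"),
    to_record("checkout is slow on mobile", "twitter"),
]
print(records)
```

Once the content is in record form, workflows and databases can route each bucket to the team that can act on it.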

Integration Rules the Analytics World

The most valuable output an analytical engine can produce is the ability to take different inputs, across channels and across functions, and use all that in search of insights.  Organizations receive communications via email, chat transactions, online comments, surveys with free-text boxes, and many other methods.  To focus the efforts only on the communities, because they are the “hot item”, leaves a lot of potentially valuable data un-examined.  This data must be merged and integrated with the community insights for further analysis.  Analytical engines cannot stop at simply producing a report for each community; they have to become a critical part of the platform used by the organizations to interact with and manage their customers.

This platform will then integrate the content generated by all channels and all methods the organization uses to communicate, and produce great insights that can be analyzed for different channels and segments, or altogether.  This analysis, and the subsequent insights, yield far more powerful customer profiles and help the organization identify needs and wants faster and better.
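A sketch of that integration, under an assumed (not vendor-specific) schema: feedback from several channels is normalized into one set of records that can be analyzed together or sliced per channel.

```python
from collections import Counter

# Invented per-channel records; field names are assumptions for illustration.
email = [{"from": "a@example.com", "body": "refund took weeks"}]
community = [{"member": "bea", "post": "love the new dashboard"}]
survey = [{"respondent": 17, "free_text": "docs are confusing"}]

# Normalize every channel into one shared record shape.
unified = (
    [{"channel": "email", "author": e["from"], "text": e["body"]} for e in email]
    + [{"channel": "community", "author": c["member"], "text": c["post"]} for c in community]
    + [{"channel": "survey", "author": str(s["respondent"]), "text": s["free_text"]} for s in survey]
)

print(Counter(r["channel"] for r in unified))  # one record per channel here
```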

Alas, the role of analytical engines for communities is not to analyze the community as a stand-alone channel, although there is some value in that as a starting point, but to integrate the valuable data from the communities with the rest of the data the organization collects and produce insights from this superset of feedback.

What do you think?

This is the first in a series of sponsored research posts I will be writing with Attensity (cross-posted to their blog as well) to look at the value and purpose of deeper analytics on communities (i.e. beyond simply mentions and sentiments-like words and phrases) and social channels.  Any ideas or areas I should explore further?

How Enterprise Applications Will Change in 2010

Back when I lived in Los Angeles I used to take one week at the end of the year to recover from the past and prepare for the new one.

I would drive into the desert (Las Vegas) embracing all it had to offer (mostly CHP officers pulling me over).  I would stay in a clean and modest hotel (hotels on the Strip were cheap and decent then), and spend a few days pondering (playing blackjack and craps) the fate (I almost always lost) of the year to come.  It allowed me to plan better (how long to eat mac & cheese and ramen dinners) and to set my goals (ask for a raise at work).

As I got older and more serious (married and with kids), the process changed slightly.  Alas, since I live in the desert now (for lack of better publishable words to describe Reno), the process is similar but I spend more time thinking about next year (married, two kids = no money; small town = nothing to do – might as well think).

This past week was my think week for 2009, and I am using the takeaways to frame my research for the next 12-18 months.

Four strategies are going to be critical for businesses to address starting in 2010; use this list to plan where to spend your hard-earned strategic budget dollars:

Business Functions. How much has the customer changed in the last two years, and how much will they change in the next two? We are not talking about customers any more (at least not as before).  Then why would you continue to use the same business functions as two, five – even ten years ago?  You have to embrace a new model, and you need new business functions for that.

Communities. The most critical element in dealing with “customers” (yes, in quotation marks) in 2010.  As the roles of business functions shift, they are finding communities to be the precipitant (I refuse to say catalyst) for those changes.  You will have to re-learn what you think about communities, and how to interact with them.  You will no longer build communities to control; you will participate in ad-hoc and impromptu communities.

Experience.  If you solely focus on delivering the best experiences during customer interactions (as you have done until now), you will miss out on the best savings and innovation.  Disney plans and executes flawless experiences from the moment you plan your vacation through the post-vacation memories.  Are you approaching experiences the same way? Or are you trying to do the “online experience” or the “brick-and-mortar experience”?  The disconnect is what’s causing you to fail.

Convergence. You will need to converge your Enterprise 2.0 (internal) and Social CRM (external) strategies (first), initiatives (second), and implementations (third).  This is THE sine-qua-non condition for your organization to succeed and become a Social Business.  If you cannot get your organization to collaborate internally and externally at the same time, you will be left behind by the competition — and that means in the next 6-12 months, not years.

The biggest problem organizations are going to face is not going to be strategy.  That is easy (well, not so complicated) to tackle.  The biggest problem is the technical architecture underlying these changes.  There is really only one technology focus area for organizations going forward:

The Cloud. I promise not to say private cloud anymore.  In reality the cloud has not even started yet (although clues are beginning to pop up here and there).  I am planning a series of posts through the year to explore the issues and items you must consider from the business side as you dive deeper into this vaporware (not metaphorically speaking anymore – yeah, bad joke).  If you have any doubts that the cloud will change your business in the next five to ten years, you won’t by the time we are done dissecting it.

I did say before that analytics was a critical component of 2010 – and I still believe it.  I am trying to fit it within the bigger picture and will bring it out as needed (my wild card for 2010).

This wraps up 2009 blogging.  I want to write a short sentence to say thanks for your support and commentary.


What I Learned from Your Twitter Discoveries

Last Friday @VenessaMiemis and I had the following exchange on Twitter:


We exchanged a few DMs offline to discuss a potential way to do it, and then she tweeted it out to the world.


I then created the #MonTwit hashtag, advertised it a few times, and “counseled” (coerced would probably be a better term) a few people to write about it – and the result was, well, almost overwhelming for what I was expecting, given only a day or two to advertise the experiment and get the word out.

First, some stats – as of the writing of this post there were 86 contributors (people using the hashtag) tweeting 164 times.  Twenty blog posts, and 14 opinions expressed via Twitter.  Check out the rest of the stats at WhatTheHashtag, or get a transcript from there if you prefer.
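For the curious, stats like these can be tallied from a tweet list in a few lines; the handles and tweets below are invented stand-ins, not the actual #MonTwit data.

```python
# Invented sample data; a real tally would run over the full archive.
tweets = [
    ("@alice", "learned so much today #MonTwit"),
    ("@bob", "my twitter discoveries #MonTwit"),
    ("@alice", "follow-up thoughts #MonTwit"),
]

# "Uses" are tweets carrying the hashtag; "contributors" are distinct authors.
uses = [t for t in tweets if "#montwit" in t[1].lower()]
contributors = {author for author, _ in uses}
print(f"{len(contributors)} contributors, {len(uses)} uses")  # 2 contributors, 3 uses
```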

Some people (14 – list below) just tweeted their discoveries (yes, Twitter is a microblog – so perfectly acceptable).  Some others (10 – list also below) wrote blog posts, Posterous entries, or similar pieces on their lessons learned and discoveries.

I read them all (as long as they were properly hashed and I could find them), commented on a few, and learned a lot of very interesting things in the process.  Here is my summary of lessons learned from the first iteration of MonTwit (Monday Twitter).

Will there be more?  Conversations are underway to produce it better, spread the word farther, and find better-focused, more concrete topics.  Short answer?  More than likely.  Stay tuned.

Lesson #1 – Tribal Knowledge Rocks – On Demand.  Asking people to talk about something they know, at a certain time and with proper structure, brings you a lot of different views.  This is good.  One of the biggest problems with crowdsourcing, or the wisdom of the crowds, is that the loudest voices influence the smaller ones (or the more powerful, or the more influential – pick your word).  If you read through the entries you will notice the influence that early ones begin to have on later ones.  Setting a specific timeframe for the answers takes away much of this “bully” effect inherent to the wisdom of the crowds, and provides very interesting, different perspectives.

Lesson #2 – Twitter is About People, not Technology or Content.  Yep, virtually everyone wrote about contact with people they did not know before, or met via Twitter, as the most critical part of what they discovered.  Twitter is a community, as I have always said, and knowledge sharing is inherent to the community model.  People want to connect to people, and that is what Twitter offers – the largest “brain phone book” in the world to find the people you want, to tap into brains and knowledge that you think must exist but are not sure how or where to find.  See @WimRampen’s entry for more on this, as his was the most retweeted one during this experiment (barely edging Venessa in reach and reads).

Lesson #3 – Know Your Purpose.  Twitter can suck the life out of you… yes, it is that addictive.  Close to 100 million people talking about – well, just about anything – can really cause you to lose track of time, even worse than spending time on YouTube.  Why you are on Twitter is the first and last question you should always ask yourself.  Sure, it works great as a time-killer, but even better as a community – and communities are about sharing knowledge.  What are you trying to learn today?

Lesson #4 – I Still Know Little.  I realized what I know and what I have yet to learn.  I like to say that I am constantly evolving and learning, and I did confirm some of my suspicions and best practices by reading the blogs today, but I also realized that there are so many aspects of any issue I am not considering, or am discarding too quickly.  Twitter is a great mind-expansion tool and you should always, always look at it for that: an unfiltered window into the tribal knowledge of the world.

Tweeted Entries (chronological order)
Blogged Entries
@ekolsky (me)

Now, it is your turn.  Did you read them all?  Some?  Most?  What did you learn?  What is new or different that you picked up from today’s experiment?  Do you have any ideas on how to do it better?  Would love to hear your thoughts…

Update (12/23/2009): The #MonTwit hashtag will be revived in 2010 for more experiments like this one.  If you are interested, keep a search column in your favorite client to stay updated.  Thanks for the persistent asking, everyone.

Late Update (01/02/2010): David Carr (@carr2n) wrote a compelling #MonTwit entry — without hashtag.

What I've discovered about Twitter

This is part of  the #MonTwit experiment; several bloggers are writing about the same topic on the same day, each adding their own perspective, so we can share our earned experiences about Twitter and learn more in the process.  I will update links at the end of this post as I find them, but feel free to follow hashtag #MonTwit in your Twitter client or browser to see where this is going.

I have been on Twitter since 2007 – well, almost.  I signed up with a bogus account in May 2007 to see what the buzz was about (there were so few people really doing it back then, it almost sounded like a porn place – I was not going to use my real name for that… there were also some privacy fears).  I used it for four days – though I am not sure “used” is the proper word to describe my behavior.  I posted some breakfast and lunch things and exchanged a few messages with some people I never knew – but most of the talk at that time was not about technology or business, rather between friends, with a little professional chatter mixed in.  I left it behind, thinking it was interesting, but not sure how it would actually make it to the next step.

Then in 2008 the noise was too loud to stay out, and I jumped in.  Still, I was clueless as to what it was (I think it was in May of 2008).  I read about it and learned as much as I could: you have to follow to be followed, you have to listen before engaging, you have to put interesting stuff out, you have to build your presence… you have read all the advice.  I turned into a generic Twitter user: no purpose, no reason, just following the “basic rules” that everyone was touting.

Since I could not see the value of being another voice in an ocean of millions, I started to experiment with it.

I followed people who were different – with lots of followers but something interesting to say – participated in events, followed links, RT’d different things to see the reaction, and many other things.  A picture began to emerge of what Twitter was, what it could do for me, and how to use it better.  I slowly started to change my follower/following ratio, using searches more and more to find the right people and the right content.  I began integrating Twitter with other social networks, with blogs and other places.  I started to admit I was a Twitter user at meetings, explaining to people what it does and watching their reactions.  It was all data that contributed to my learning about Twitter – to understanding what it was, how to use it, what it does.  It was the preamble to the three key things I discovered about Twitter:

1. Twitter is what you make of it. Twitter has no life, no purpose, no direction, and no idea of who you are.  Sorry, I hate to break the news like that to you – but that is it.  It is a platform that just sits there and waits for you to do something with it.  Quickly approaching 100 million people, it is actually a very large platform.  True, there may be only 20-25 million active users – but that is still something.  However, it won’t wait for you or guide you to accomplish something.  If you know how to get value out of communities, then you are going to enjoy Twitter.  If you enjoy listening to people talk about – well, anything – you are going to get value.  If you know what you want to do on Twitter, you can get it.  Twitter has nothing prepared for anyone; it is what you want to make of it.

2. Twitter is a community.  Shocking, I know.  There are no forums or ideas or structure (well, you could try hashtags — it worked very well for the #SCRM Accidental Community), but it is a community.  I wrote about this a couple of times.  The main difference, and the great part about it, is that each person gets to build and mold their own community – and change it at the drop of a hat if you want.  You can create and follow lists, groups, searches, hashtags, and people for The Red Hat Society today, and for Punk Rock tonight – without much effort.  You can create several IDs and follow people in different ways, have several personas here and still be you.  It is a great build-it-yourself, shape-it-as-you-go community.  Just be yourself in as many ways as you want.

3. No one is ever wrong about Twitter. There is no right and no wrong way to do Twitter, since it is what you make of it and what you build around it.  So, don't tell anyone how to do it right, or wrong, or better or worse.  What works for you, or your organization, may not (and probably won't) work for someone else.  Share your experiences and lessons, but make sure you understand that they are just that: yours.  As with any community, the ideal outcome is knowledge gained from tribal sharing, or power gained from aggregation.  The way you go about doing that is going to be different, so don't expect other people to do it the same way you do.  Share your knowledge in your community, learn from its members, and always look for new ways to use it and get value out of it; then you'll be right about it.

What do you think? What have you learned or discovered about Twitter? How was your experience different from mine?  Would love to hear your thoughts…

Other blogs participating in #MonTwit (constantly updating this section):

The Five Issues to Ponder Now

At the end of the year I work on my wrap-up for the year and prepare for the year ahead.

I go through my notes from conversations, oft-forgotten "blogs that I must read", books, and everything else that has a tangential effect on my research for next year.  I end up with my "predictions" for next year and the next five years, and a wrap-up of what mattered in the year past.

These are the five bullet points that are gaining more and more momentum as the key issues for the next few years:

1. Generational Shift – This is the one where I am reading more and more off-topic information: anything from Zogby's book "The Way We'll Be" to academic research dealing with the coming generational shift from Generation X and the Baby Boomers to Generation Y Digital Citizens.  This is the root cause of the "social business" coming of age.  Our responses to this are evolving, and it is becoming quite interesting.  It is not what we are thinking, but what we are doing about it.

2. Experience Continuum – I started to talk about the social experience and the change in the customer experience when I wrote "A Brief History of SCRM".  I started this blog to dig deeper into customer experiences and the coming changes in organizations, and it remains the focus of all my research.  A social business's goal is to co-create ever-improving experiences using feedback from customers; the biggest change brought on by the Social Evolution has been an increased and faster influx of data to co-create these great experiences. It is this faster change to experience management that becomes interesting.

3. Communities – I am not thinking about how to create better communities, or how to be a better community manager.  Plenty has been written (wrongly, I might add) about that.  My thought process on this is how to make better use of communities that already exist (Brent Leary wrote a great short post recently about what he considers communities – I agree with him), how to leverage the knowledge they create, and how to do it better.  Communities are not managed, nor created ad hoc – you can only leverage them.  It is leveraging community outcomes that will make a difference for organizations.

4. Analytics – I was recently asked what was the biggest change we experienced in the last five years, and what it will be for the next five.  The biggest change has been going from "drinking from a firehose" of data produced by CRM to "surfing the tsunami" of data produced by the social evolution.  And this is where analytics is critical.  The input from SCRM into the organization is actionable insights, and analytics is the only way to produce them.  It is about creating actionable insights in a timely manner.

5. Data Management – All the data we are capturing is becoming too much for our antiquated models of data management to handle.  There are three areas that matter: the speed of analytics (stream-flow analytics), the capacity of the store-and-retrieve models (theory that goes well beyond relational), and the actual storage medium (the hardware).  All three must work together for us to realize real-time (or near-real-time) benefits.  It is about using what we have, better.

What are your top-of-mind issues right now? How about for 2010? Did I miss something big in my thinking?