nGenera Antes up in Customer Interaction Management

The story of nGenera could’ve been similar to that of Divine, Consona, ChinaDotCom, and many others who bought up enterprise software applications, patched them together into a “winning” solution, only to find themselves later in life without vision or direction.  However, nGenera made two smart decisions and ended up with three interesting offerings: a Social Platform, the nGenera Collaboration Server, and the nGenera Customer Interaction Management (CIM) Suite.

To power CIM, nGenera acquired Talisma, which had in turn acquired eAssist and other complementary tools in the customer service world.  They then focused on the most important modules, keeping the original eAssist engine at the core of the new product; later they integrated social and community components from their Collaboration Server and Social Platform offerings, updated the eAssist components, and integrated it all with knowledge (yes, even the community-generated knowledge).

The end result?  Today they announced version 9 of their CIM Suite, a product that is a raring-to-go competitor in the Customer Interaction Management space.

I saw a demo last week and I like the complete set of features and functions in this release.  It is a finished product with the power and flexibility to let their customers span all the different channels and seamlessly deliver comparable experiences across all of them.  It integrates well with other CRM systems and legacy systems, and can even power processes that span multiple applications.  It also brings a feedback management module that is good enough to collect customer service feedback, although it certainly needs work to become an EFM tool.

The suite looks like a brand new release, even though it is based on proven components.  That is good; we needed some “fresh blood” in the market to provide an alternative to the established players and propel the market further.  I have only seen the demo, so I am not sure how it works in the real world, and I have not talked to their clients about their experiences (working on that).  I like the way it shows, and I am excited about what it says it can do.

However, this would not be news without the second very smart move from nGenera: they put the right people (the experienced eAssist and Talisma team) in charge of the life of the product (updated: John Ragsdale thinks likewise).  Very smart move, and I cannot wait for future releases to see where they take the product.

Oracle launches something cool for CRM

Remember CRM?

That stuff we used to do before Social CRM?  The stuff that most people still do and need to continue to improve?

Oracle does.  Today they announced three CRM things: Siebel OnDemand release 17 with some clever life sciences complements, additions to the Oracle eBusiness Suite, and the Social Services Suite for Governments (part of a Siebel 8.2 release).

I used to cover CRM and Government in a past life, and I know that Social Services delivery is very complicated.  As the incomparable Anthony Lye said in our briefing, the legislature writes the law in legal English, and the computers need to figure out how to derive processing rules from that legalese; it is quite complicated text, with many, many subtleties, interdependencies, and special cases to consider (Michael Maoz, a former colleague at Gartner and an extraordinary analyst, has written about this complexity before).  More often than not, these programs are run by hand so the “humans” can make sense of the laws (I don’t think this is the best way, and said so here).

Oracle, via an earlier acquisition, came into an engine that translates legal-speak into computer rules.  Kid you not.  And they have incorporated it into this release.  Thus, all levels of government can now automate the processing of the rules (yeah, I am certain it is not quite so black-and-white, but the idea is there) and provide better experiences, faster processing, and even self-service interfaces for citizens and constituents to get information and access to services.
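To make the idea concrete, here is a minimal sketch of what “legal-speak turned into computer rules” can look like: each clause becomes a declarative predicate paired with its citation, and an application is evaluated against all of them.  The rule names, thresholds, and fields below are entirely invented for illustration; they are not Oracle’s actual engine or rule format.

```python
# Hypothetical sketch: a social-services eligibility clause encoded as
# declarative rules so applications can be processed automatically.
# All rule text, thresholds, and fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class Applicant:
    age: int
    household_size: int
    monthly_income: float

# Each rule pairs a human-readable citation with a predicate,
# keeping a traceable link back to the (imaginary) legal text.
RULES = [
    ("Sec. 4(a): applicant must be 18 or older",
     lambda a: a.age >= 18),
    ("Sec. 4(b): income below $500 per household member",
     lambda a: a.monthly_income < 500 * a.household_size),
]

def evaluate(applicant: Applicant) -> tuple[bool, list[str]]:
    """Return eligibility plus the citations of any failed rules."""
    failures = [citation for citation, test in RULES if not test(applicant)]
    return (len(failures) == 0, failures)

eligible, failed = evaluate(Applicant(age=34, household_size=3, monthly_income=1200.0))
print(eligible, failed)  # → True []
```

The useful property is that a caseworker (or a self-service portal) gets back not just a yes/no, but the specific clauses that failed, which is what makes automated processing auditable.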

I have not seen it running, nor do I know the complications or implications of running it.  I am certain that some things would be more complicated than expected, and others not as complete as described.

However, from living in that world for a while — anything that helps with the biggest issues that governments face (and Social Services are right up there) is worth exploring.

noHold Launches Confederated Knowledge

I know what you are thinking: that the past two posts I did, on The Problem with Knowledge and How to Build a Federated Knowledge Base, were leading up to this announcement (I noted at the end of those posts that it was sponsored research).  To a certain extent, yes.  However, when I first started talking to noHold we decided to do something different: I was going to focus on Federated Knowledge as a research topic, and after I did so they were going to show me what they had been working on and hear what I had to say.  They wanted me to write a review (this post) comparing their product to the research I had done.

On March 16th, noHold introduced Confederated Knowledge, a product modeled after the concept of Federated Knowledge. I got a briefing and full product demo. Below are my notes comparing my research and knowledge of Federated Knowledge to this release.

One of the first things that surprised me when researching Federated Knowledge was that there are no implementations I could reference, and nothing more than research on the topic.  When I wrote the article on how to do it I saw the problems behind it: very complex management of the integration, and nearly impossible logistics in connecting the different repositories.

The idea of creating a conjoined knowledge base with input from many different places, or at least to display knowledge from different repositories into a single view is too complex for many people to think about.  As hard as it is to manage a single instance of a knowledge base – imagine having to do the same for two or more.  Worse yet, in addition to managing these repositories you also have to manage the integration points between them.  It would take very specific cases, and even then, the scope would have to be very limited for it to work.

Attempting to integrate two massive knowledge repositories, and find the common points between them, is impossible.  Well, nearly impossible.  I am sure you can find some integration points, but as soon as the knowledge begins to change, retaining those connections becomes very, very difficult (if not impossible).

What noHold did, and what I like, is that they found a specific case where this works.  This release of Confederated Knowledge is geared to a specific industry (internet service providers) and business function (technical support).  The target market is one where customers who don’t know where else to turn for help usually come first.

For example, if you have an anti-virus application installed and it stops working, who is responsible?  Is it the operating system vendor or the anti-virus vendor?  Or is the computer manufacturer to blame?  The answer, as usual, is it depends.  Troubleshooting this type of problem, which used to be handled by finger-pointing away from the organization, can now be done via Confederated Knowledge, leveraging the answers and knowledge from each of the partners.

Leveraging noHold’s virtual agent, an organization can help their customers troubleshoot across vendors and knowledge repositories from a single interface.  No longer do they have to open two, three, or more windows to find the right answer from the many knowledge repositories where it may exist, or navigate to several web sites or search engines to find their answer.

Now vendors can provide a single point of access to all the knowledge that fits their clients’ needs, raising customer satisfaction and vendor accountability while shortening the time to a solution; a win-win-win for customer, vendor, and partners (see the screen below for more details on how you cross knowledge domains with the same interface).

That is what I like about Confederated Knowledge: they deliver the value of federation without the hassle of the actual federation.

I was expecting a far more complex product, with more administrative features to let users and knowledge administrators integrate disparate knowledge bases, multiple methods for accessing them, and sufficient smarts to automate most of the integration of the knowledge bases.

What I saw was a good solution to a pre-determined problem, with the tools to create and manage it, administrative functions, and the ability to solve the problems it targets.  It is a troubleshooting and resolution tool for complex environments where two or more partners provide content and knowledge.

There are certain things I’d like to see done differently as it grows and evolves.

There is great value in using virtual agents, but being able to leverage the power of the Confederated Knowledge model via email, chat, and even mobile would make it more powerful and more useful for adopters.  I would also like to see a method to bring ANY data, whether from partners or not, into the confederated solution.  Truth be told, there may be some external issues, even legal ones, with accessing data on other vendors’ web sites; but if those can be worked out, Confederated Knowledge would be all the more powerful.

Finally, I would like to see more dialogs built for other industries and functions.  Yes, this will take time, but there are many untapped areas where this product could become very useful, and I would like to see it expand into them (troubleshooting is not exclusive to technical support; many other functions also require it).

All in all, I think the launch exceeded my initial expectations and gave me something to try in certain circumstances.  No, it is not for everyone; it requires a specific problem that fits the solution, and willing partners that collaborate on the knowledge to be distributed.  However, it is a good first step towards leveraging the Federated Knowledge model, bringing together disparate repositories of knowledge to connect people to solutions.

Disclaimer: noHold is a customer and I was retained to help them develop content for the launch of their Confederated Knowledge product release.

The Evolution of Customer Familiarity

Customer Familiarity can be defined in one of two ways: how much a customer knows about a company’s processes, or how well a company knows its customers.  Although it brings certain issues of loyalty, commitment, and retention with it, how well the customer knows how to do business with the organization only matters when a company wants to change a process; and even then, the companies that worry about changing their processes know how to deal with it (for the most part, they have probably done it already).

A company knowing its customers is a far more interesting, and complex, topic.  As the next step in my series on how to leverage analytics in an organization, I want to explore a little bit of how and what we learn about our customers.

At the beginning we had customers who simply bought our products or services.  And we did not care much about who they were; we simply produced our products in the best possible way and then sold them in the market.  At the very best, and not in all cases, we retained simple identifying information on customers: name, address, phone numbers, product they bought, and not much more.

We never used that information – OK, sometimes we used it for service and warranty purposes – and the value of knowing that Joe Smith bought an SNW-10 widget was not of great help for much beyond tracking inventory and figuring out which product sold better than the others.  Reporting, if any, was usually just a summary of products sold, or of some other identifying information, such as 20 people from Reno, NV bought the product last month.

Someone came up with the idea of building profiles for customers.  The idea was that if a 29-year-old, white male, Cadillac driver, golf player, who lived in Southern California liked SNW-20, then all other people like him were potential buyers for the same product.  The theory goes that buyers often buy what their peers buy – whether they know their peers or what they bought, or not.

We set out to capture as much profiling information as possible; we called it demographic data, or demographics, and we used it to segment customers in many different ways.  We then used those segments to create marketing and sales programs, to create service solutions, and to report.  Reporting was done by a process called cross-tabbing: selecting certain demographic data and cross-referencing it against something else (e.g. 29-year-olds from Saint Louis who rent an apartment).
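For readers who have not built one, a cross-tab is just a count of how two attributes intersect.  A minimal sketch, using invented customer records and segment names:

```python
# A minimal sketch of cross-tabbing: counting how demographic segments
# intersect with products bought. All data here is invented.
from collections import Counter

customers = [
    {"age_band": "25-34", "city": "Reno",        "product": "SNW-10"},
    {"age_band": "25-34", "city": "Saint Louis", "product": "SNW-20"},
    {"age_band": "35-44", "city": "Reno",        "product": "SNW-10"},
    {"age_band": "25-34", "city": "Saint Louis", "product": "SNW-20"},
    {"age_band": "35-44", "city": "Reno",        "product": "SNW-20"},
]

def crosstab(rows, row_key, col_key):
    """Count occurrences of each (row value, column value) pair."""
    return Counter((r[row_key], r[col_key]) for r in rows)

table = crosstab(customers, "age_band", "product")
print(table[("25-34", "SNW-20")])  # → 2
```

Statistical packages and BI tools do this in one call, of course; the point is that the output is still just segment-level counts, which is exactly why demographics alone could not tell us what an individual customer intended to do.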

CRM systems were born, and the idea was that the profile data we had could be mixed with the transactional information they collected to build what was then called the 360-degree view of the customer, or a total picture of them.  Using transactional data we could both learn what customers wanted and predict what they needed – in theory.

The era of CRM saw the birth of customer analytics, as we tried to discern, from all the information we collected, what was wheat and what was chaff (i.e. what was worth using for profiling and predictions, and what was – well, just stored).  Through that analysis we learned that transactional and demographic data could not provide a complete picture: it lacked data on the intentions of the customer.

Enter surveys and Enterprise Feedback Management.  Organizations began to use surveys, first focused only on customer satisfaction, then more on intentions, needs, and wants.  These feedback events, consolidated into an EFM system that integrated with the demographic and transactional data collected in CRM systems, added behavioral (what they did) and attitudinal (what they wanted to do) data from customers.

Reporting began to change.  Since we had analytics, we began to create insights instead of reports.  Analyzing the data and finding patterns and trends in it gave us the ability not only to better predict what customers may want and need (inferred from the insights), but also to better understand customer behaviors and model experiences around them.

Later, when open-edit or text-box questions were added to the surveys, we saw that the data collected was very valuable.  Now customers could express, freely, how they felt and what they wanted, even if they were not asked directly in the survey.  The amount of data collected, in unstructured form, became a treasure of sorts, where companies could read and learn more about their customers than they ever could before.  This data is still critical today to all efforts across the organization to learn more about the customer.

The last era of customer familiarity came with the social evolution we are currently experiencing.  Sentimental data, or sentiment analysis, began to crop up.  One thing we did learn from doing surveys is that customers tend to answer them the way they think the organization wants them answered.  I wrote, for example, about a method you can use to ensure 90% customer satisfaction that works in real life.

Analyzing the sentiments in addition to the behavioral and attitudinal data yields a true view into the customer’s mind.  Knowing their state of mind when they decide what to do, or what they will do, is an incredible insight that organizations can use to improve their processes and their business.  It does not replace any of the other data, it complements it.

There are two things that happened in this evolution, and they are very different:

  • The quantity and complexity of the data increases as organizations leverage analytics (see my previous post on how to do that well) to find more valuable insights, which they use to build better experiences, better products, better services, and – well, better businesses.

  • The detailed personalization of each customer diminishes in favor of a community of customers with similar likes and dislikes, needs and wants, and similar profiles.  Having a community makes it easier to assign specific attributes to it and leverage it for analytics.

So, now we are left with loads of useful, but unstructured, complex information, better profiling of less personalized customers, and some insights – what is the next step? More analytics.

Parsing the edit boxes and comments collected, creating structured data models from them, analyzing those models for behavioral, attitudinal, and sentimental data (structuring the unstructured), and using the new data models and insights to improve products and services.
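As a toy illustration of “structuring the unstructured,” here is a sketch that turns a free-text comment into a structured record with a sentiment and topic tags.  Real text-analytics products use far richer linguistic models than keyword matching; the word lists below are invented purely for the example.

```python
# A toy sketch of structuring unstructured feedback: tagging free-text
# survey comments with sentiment and topics via keyword matching.
# The word lists are invented; real tools use full linguistic analysis.
POSITIVE = {"love", "great", "easy", "fast"}
NEGATIVE = {"hate", "slow", "broken", "frustrating"}
TOPICS = {
    "billing": {"invoice", "charge", "billing"},
    "support": {"agent", "support", "help"},
}

def structure(comment: str) -> dict:
    """Turn one comment into a structured record."""
    words = set(comment.lower().replace(".", "").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    topics = [t for t, kw in TOPICS.items() if words & kw]
    return {"comment": comment, "sentiment": sentiment, "topics": topics}

record = structure("The support agent was great and fast.")
print(record["sentiment"], record["topics"])  # → positive ['support']
```

Once comments become records like this, they can be cross-tabbed, trended, and joined to the behavioral and attitudinal data, which is the whole point of the exercise.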

What do you think? What did I miss? Do you see this as the most interesting turn of events yet?  Let me know your thoughts…

This post is the second of a series of sponsored research posts I am doing with Attensity on the use of Analytics in CRM.

How to Build a Federated Knowledge Base

In my previous post I discussed one of the main problems of knowledge management: distributed knowledge.  I provided three options, of which I prefer the model for Federated Knowledge, to solve the problem.  Just like in a federal government, the independence of the states (or knowledge collaborators in this case) makes it easier to manage a much larger entity.

Building a federated knowledge base takes first and foremost collaboration.  All contributing parties need to agree to the basic rules and must follow a similar model: all of them must contribute equally to the endeavor for it to serve its purpose.  A governance body, or committee, composed of representatives of all affected parties, must collaborate to create the rules and regulations for its operations (yes, just like in congress).

Once the governing body is in place, the first step is to determine the strategy (four key components: Vision, Mission, Goals, and Objectives), the purpose, and set time commitments to make it work.  This document must be the same for all interested parties, and it must be agreed to by senior management, with an executive sponsor at each participating organization.

The fact that different organizations, each with their own challenges and problems, are coming together to work towards a common goal makes the joint infrastructure the key component of success or failure.

Of course, the implementation must have certain features that would make it valuable for the users.  The following list is the minimum set of requirements that any federated knowledge base must have:

Alerts – alerts work at the administrative level, notifying administrators of problem knowledge entries, answers with high scores but low readership, and other issues that may signal an incomplete or inconsistent knowledge base.  Given that knowledge is produced in different content management systems, a centralized alert system is critical to managing the value of the knowledge.

Subscriptions – for users of the data, being able to subscribe to be notified when it changes is critical since they are not accessing the source for the knowledge directly.  A critical distinction must be made between subscriptions for specific items and entire knowledge bases.

KB Performance – considering that the data is housed in different places and must come together at the time of need, comparable performance across each member’s platform and infrastructure is required to guarantee the execution of the joint knowledge base.  Consumers won’t just sit around waiting for one slow-performing component; they will abandon the self-service solution and become phone users.

Taxonomy – quite simply, a federated knowledge base whose members use different categories, classifications, meanings for terms, or even different synonyms cannot find the right information.  Parties in a federated knowledge base must adhere to the same taxonomy to classify and index their entries.

Autonomy – even though there is a joint purpose in building the federated knowledge base, each member of the federation must remain autonomous with respect to its internal policies, content management systems, and decisions on internal maintenance and governance.  Granted, there is a common agreement on how to operate the joint venture, but it must never supersede the internal organization and Standard Operating Procedures of any of the members.  That said, one of the unintended effects of being in a federated model is that the practices of any one organization can be improved by exposure to others in similar situations.

Collaborative – Working together towards the common goal of providing consumers an answer should make the different teams work together.  Collaboration between the teams could also be extended to collaboration with consumers, if community-generated content is part of the deployments at any of the partners.

Dynamic – as the needs of the consumers evolve over time, so will the need to populate, maintain, and improve the federated knowledge base.  Even something as simple as supporting community-generated content can wreak havoc in an inflexible system.  The rules and operating procedures for the federated knowledge base should be flexible enough to accommodate changes over time.

Local-Global – the old adage of “think global, act local” applies very well to these systems.  Anything that any of the partners or members does with their knowledge management and knowledge bases is bound to affect the other members of the federation.  All changes must be thought through at a global level, even though approval by all members is only necessary for those items affecting the federation, not the rest of the operations.
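Two of the requirements above, a shared taxonomy and change subscriptions, can be sketched in a few lines.  This is a minimal illustration with invented names, not a real knowledge management API: entries are rejected unless they classify against the federation’s taxonomy, and subscribers are notified when an entry changes.

```python
# A minimal sketch of two requirements from the list above: a shared
# taxonomy every member classifies against, and subscriptions that fire
# when an entry changes. All names and categories are illustrative.
SHARED_TAXONOMY = {"printing", "drivers", "connectivity"}

class FederatedEntry:
    def __init__(self, member, entry_id, category, body):
        # Enforce the shared taxonomy at creation time.
        if category not in SHARED_TAXONOMY:
            raise ValueError(f"{category!r} is not in the shared taxonomy")
        self.member, self.entry_id = member, entry_id
        self.category, self.body = category, body
        self.subscribers = []

    def subscribe(self, callback):
        """Register a callback to be invoked when this entry changes."""
        self.subscribers.append(callback)

    def update(self, new_body):
        self.body = new_body
        for notify in self.subscribers:  # fire subscription notifications
            notify(self.member, self.entry_id)

entry = FederatedEntry("printer_vendor", "KB-101", "printing", "Reset the printer.")
entry.subscribe(lambda member, eid: print(f"{member}/{eid} changed"))
entry.update("Reset the printer, then reinstall the driver.")
# → printer_vendor/KB-101 changed
```

The taxonomy check is the interesting part: because users of a federated knowledge base never access the source repositories directly, classification and notification are the only ways they can trust what they are reading is current and findable.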

Of course, these are just the core guidelines and components that will make your federated knowledge base work.  The item above about dynamic and flexible systems also applies to your specific needs – these norms can be changed and altered to suit your individual needs and those of the other members of the federation.

What do you think?  What am I missing?

This is the second post in a six-part series of sponsored research I am doing with noHold.  More goodies on federated knowledge bases coming soon!

The Problem with Knowledge

Let me paint you a typical problem in a home office scenario: You are working at home finishing a document you need for your meeting on Monday morning, click on Print – and nothing happens.  Try again, still nothing.  You go through your standard “repair” techniques: turn the printer off and on, unplug the connecting cable from the printer and the computer, reconnect it, save your work and restart the computer; still nothing.

It’s time to go to your favorite search engine. You type some keywords (printer, printer model, error message if any, words “not working”) and find 1,250,000+ links.  You click on some of them, and get pieces of the information you need. Your printer’s web site says you need to reset the printer, your computer’s website talks about upgrading drivers, and the operating system’s support site talks about parameters in the registry.

Who is right? What shall you do?

Welcome to the problem with knowledge: we have too much of it and it is too widely distributed.  Finding the right information is not easy, and even when you do it is generally not complete: either you are missing steps that a vendor assumes you know, or there is a link to a third-party web site that is broken so you never find what you need.  The level of frustration increases with each passing moment since all you want is to print, or at least to troubleshoot your problem.

According to the American Customer Satisfaction Index (ACSI, run by the University of Michigan), customer satisfaction with personal computer support departments has been in steady decline for the past 10-15 years.  This is the same period in which the interconnectivity between components escalated and self-service knowledge centers came online.  As it turns out, the cost of self-service may be pennies per transaction for the organization, but the cost to the customer is much higher in wasted time and frustration.

Pushing customers to support themselves via a self-service center might sound like a good move when you have a simple solution with no interdependent components; but when the problem could have multiple origins, letting the customer figure out the proper way to troubleshoot and solve the problem does not work.

The problem is even bigger for brands.  Beyond upset and frustrated customers taking cheap shots at their products in social networks, they also have to deal with customer service agents and their lack of access to information.  The number one reason for churn in a call center is that agents don’t have access to the right systems or information to do their jobs. When the customer cannot find what they want online, they reach for the contact center.  Alas, if the agents don’t have any more information than the customer has – there is nothing they can do but sit there and be yelled at.

What is the solution?

There are three models that could solve this problem:

Hybrid Knowledge Bases – To create a hybrid knowledge base, combine the content from two or more knowledge bases into one massive knowledge base.  The problem with hybrids is that they very quickly become so massive that finding anything is impossible.  End users read the first 2-3 entries and hope one is the answer – similar to doing a search on the open internet – and hybrids are not very easy to manage either.

Knowledge Management Partnerships – Two or more vendors work jointly to create solutions that are later propagated in their respective knowledge bases.  In the example above, the operating system vendor would work with the printer vendor to produce specific knowledge in the places where they intersect, and then put that in both knowledge bases.  The sheer complexity of coverage for all the possible combinations, and then being able to keep those up-to-date is where the model falls apart.

Federated Knowledge Bases – A federated knowledge base works in a similar model to a federated government: each vendor controls the knowledge specific to its products, and the vendors work together in the areas where they intersect.  Using the example above, the operating system vendor would create and maintain its own knowledge base for all the issues related to printers, and then jointly create and use knowledge for where it intersects with the specific printer in question.  Each vendor can create and manage its own knowledge base, maintain it as needed, and needs to keep only a little information regarding the other vendors.  This information does not even have to be the same as the other vendor’s in the same intersection; it just has to be accurate from their perspective (chances are they will be the same, or very close in nature).
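The federated lookup just described can be sketched very simply: each vendor keeps its own repository, the jointly maintained intersection entries live in a third, and a query merges all of them into one answer set.  All repository contents and names below are invented for illustration.

```python
# A hedged sketch of a federated lookup: per-vendor repositories plus a
# jointly authored repository for the intersections, merged at query
# time. All entries here are invented for illustration.
os_vendor_kb = {"print spooler stuck": "Restart the spooler service."}
printer_kb = {"printer offline": "Power-cycle the printer."}

# Jointly authored entries for where the two vendors intersect.
intersection_kb = {"driver mismatch": "Install driver v2.3 from the printer vendor."}

def federated_search(term, *repositories):
    """Return every answer whose issue title mentions the search term."""
    results = []
    for repo in repositories:
        results += [answer for issue, answer in repo.items() if term in issue]
    return results

answers = federated_search("driver", os_vendor_kb, printer_kb, intersection_kb)
print(answers)  # → ['Install driver v2.3 from the printer vendor.']
```

Note that neither vendor had to merge or even read the other’s repository; only the small intersection set is jointly owned, which is precisely what keeps the model manageable.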

Obviously there are scenarios that would work for each model, but the federated model works best when the partners have a similar commitment to the enrichment of their knowledge bases and rely equally on those knowledge bases for service.

How do you go about implementing a Federated Knowledge Base?  That is our next installment…

This is part one of a six-part sponsored research project I am doing with noHold.  Stay tuned for more on federated knowledge, a very cool topic indeed!

What's the Problem We Are Solving with Social X?

I had an excellent lunch with my friends at Simplybox (great company, killer idea, well-implemented enterprise collaboration; not a client) on Wednesday, and we had a sensational discussion on solving problems.  They are going through a growth phase and are trying to position their product better based on their customers’ feedback.  We discussed several things, but we ended up talking about the title of this post.  As we began to discuss the different things the product can and should do, an idea came to me, on which I’d like to get your input so I can understand it better and see if I am right or wrong.

Software is not a solution; we all know that.  Software is a tool, an aid to solving a problem.  The question that always arises is: what is the problem we are solving?  The best way to look at this is to say that there are two types of problems: pain points and inefficiencies.  Bear with me for some definitions; it is important to distinguish them.

Pain points are what we all think of when we think of using software: a very specific function or process that is not working well and is either costing more than it should or not yielding as much as necessary.  Taking five days to answer a customer service email, taking two weeks to process a database for a marketing campaign, or not being able to score leads – these are pain points, the problems that software is supposed to solve.  And, for the most part, it does.

Inefficiencies, on the other hand, are problems that exist within a process but don’t hinder the normal operations of the business.  For example, an employee cannot get all the information they need on one screen and instead has to go to three screens to collect it, or the phone system drops calls once in a while.  These issues will not consistently cost us business, and if we improve them it is likely we won’t even notice the improvement in the existing processes.

There are two more distinctions: buying centers and ROI.

The people who buy these solutions are different within the organization.  Inefficiencies are tackled by CIOs and IT.  Pain points are tackled by business units and stakeholders (some cross-over, but it does not last long — IT does not want to solve pain points, and business units are far from maintaining systems).

Then there is the issue of ROI.  It has been debated plenty, and I will leave you to Google or Bing your way to illumination on the matter.  My point is that business units must show ROI for their investments.  They have no other way, in a civilized company, to get their funds approved.  Some of them try to skirt the issue by going the SaaS way – but CFOs and procurement officers are coming around to that idea, so it won’t last much longer.  IT, on the other hand, works on the infrastructure.  Their initiatives are not ROI-driven (if you are going to call me stupid and tell me you are in IT and you have to do it, fine – lack of vision is rampant and your management has it); rather, they are driven by the needs of the business to leverage technology and data.  They don’t prove ROI, they prove need.

This is, to me, the most important part of placing Social Business as a priority in the business.  It is a strategy, thus driven by the business side, that leverages technologies (Social Media) to accomplish what needs to be done (customers’ jobs, co-creation, customer experience, socializing applications – your call).

The strategy part has to prove an ROI and solve a pain point.  The technology part needs to prove a need and solve an inefficiency.  You cannot sell them together as one, nor can you make a business unit buy Social Media or an IT department buy Social Business.  You are no longer moving one project forward, you have to move two — with different players and different business models.

You are either going to solve a pain-point or an inefficiency.  Or both.  However, you need to do it differently.

What problem are you solving? What are you doing to sell the need and the solution? Are you talking to the right people about it?

Oh, the Dilemma! People or Systems?

I had an interesting day today following the live-tweets from the SAS Inside Intelligence Analyst Event.  There were some very interesting tweets that came along, like this one from Ray Wang (Enterprise Analyst with the Altimeter Group, and the most prolific tweeter for the event with around 15% of the total tweets):

Finally, I thought, organizations are starting to understand the value of data, and we can begin to use it for strategic needs.  Then Dan Vesset (IDC Analyst and author of a terrific paper entitled Decision Management: A Strategy for Organizationwide Decision Support and Automation – you must be an IDC customer, or pay, to read it) tweeted what I consider the best news from the event:

Now I started to get excited — we are finally getting to the point where systems can make decisions: look at the data, make sense of it, and not only recommend or report on it but actually make the decision and maybe, just maybe, even act on it.  Ah, the possibilities — all those years of Star Trek and Star Wars finally coming to fruition!

As a big proponent of automation for organizations to truly leverage technology and data management, my head was spinning — could it be possible? Are we really that close to making something like this happen?

Later in the day, I caught a tweet from Venessa Miemis (Futurist, Student, Amazing Brain, and the writer behind the very famous and well-read emergent by design blog) that spoke to a different (yet similar) reality:

A different opinion indeed.

This got me thinking: do we need Sensemakers — people who can make sense of the data — or can we trust the systems to make sense of it and make the decisions for us?

I had a conversation with Venessa about this via Twitter, but there is only so much you can do 140 characters at a time.  I told her I would write this post to explain my position further.

Here we go.

I fully believe there are three factors standing in the way of the Sensemakers Venessa tweeted about:

  1. Scalability – There are around 6.5 billion of us on this planet, and we are growing towards 9 billion over the next ten years.  That is too much information to process for that many people.  Sure, the counter-argument goes, with that many more people you can have more Sensemakers — and thus feed the needs of more and more people.  That would be true if Sensemakers were easy to find, train, and deploy.  As was pointed out to me in discussions on my previous post, we still don’t know very well what type of people we need to analyze the information — how can we expect to have more of them?  To me the model is not scalable and thus does not fit the purpose.  To be fair, Venessa feels that this big-box thinking is what got us into trouble before — so why try again?  Well, for starters…
  2. Globalization – We are no longer limited to the information in our near-and-dear communities.  The local, small-town mentality most of us had (yes, even in corporations) has recently been replaced by a global perspective.  This is a big world (before you say Duh!, please read on), and to feed the knowledge needs of a global world you need a global mentality.  Human beings are nurtured in local groups and communities; we are not global in actions or thoughts.  The ability to think globally is not innate, and it is not easy for one person to develop.  Finding, training, and deploying that person — who must also be a Sensemaker — becomes an almost impossible task.  Now multiply that by 6.5 billion people or so.  Computer systems can handle the magnitude of this need; human beings can only say “Huh?”.  Further, globalization has also brought the issue of…
  3. Complexity and Volume of Information – Raise your hand if you don’t feel overwhelmed by the knowledge and information coming at you (OK, the funny person who raised their hand can put it down now).  The sheer magnitude of data, knowledge, and information is mind-blowing.  Add to that the complexity of the information we receive and you get an idea of why you feel so overwhelmed.  Now, you have to find the potential Sensemakers who can take that complex information, make sense of it, connect the dots, and then communicate and explain it to the people who need it.  Wanna apply for that job?  Me neither.

What I do want is to use computers and systems designed to handle very complex, very large data sets and put them to work the right way.  In the last few months we saw the launch of machines so fast and powerful that my old TI-99/4A seems like a — well, even my phone is hundreds of times faster and more powerful than my old computer.  Why not leverage those systems for what they are supposed to do?  Take large-to-gigantic data sets, organize them, make sense of them, and then act on them.

To me, this is where we are headed over the next five to ten years.  This is the reality I want to build towards, and what I see as our future.

Wanna join me?  Why not?  What do you think is a better way to handle these demands and needs?  Let me know your thoughts — I would love to know what you are thinking…