State of Social Customer Service (Twitter Custom Timeline)

I will admit, this would not make a post traditionally – but I wanted to test a Twitter Custom Timeline.

I built one after a conversation I had with Martin Hill-Wilson (one of the sharpest, if not the sharpest, minds at the forward frontier of Social Customer Service; read his book; follow him on Twitter).

Here is the custom timeline, comments welcome underneath as always.  BTW, remember that Twitter is read from bottom to top – wish they had a setting to change that – #LeSigh.

(now, back to digital transformation; another post coming tomorrow)

Edited post-publication

After posting I realized the potential use for this “tool” (custom timelines).

It is true you cannot reorganize the tweets in a specific way – but with proper planning, this can become a Twitter FAQ/KM tool for organizations.  Here is the first pass at a process to do so:

  1. Create a private Twitter Account just to build these lists
  2. Plan what your FAQ should be, and the process to update them as necessary (life changes, your customers will ask different questions over time). Nothing worse than an outdated FAQ.
  3. Write the tweets you need to direct your customers to existing resources that will answer their questions or concerns.
  4. Use your Customer Service solution to
    1. monitor for specific phrases, keywords, questions
    2. provide an automatic answer to those customers with a link to the specific timeline that answers them
    3. monitor responses to your timeline account to pick up follow ups
    4. follow up on new questions, if necessary, via your existing accounts – not the timeline account (need to keep the tweetstream clean for timelines – this is crucial for maintenance).
  5. Monitor, report, improve your FAQs

Congratulations, first step towards using Twitter properly for triage – want to try it?

Let me know what you think and how this can be improved — straw man idea right now…

The Foundation Components For Digital Transformation

(note: I have to give many thanks to my good friend Sameer Patel who took a first look at this and said “let me tell you what you got wrong” – thus helping me improve it immensely – you should thank him too when you see him; the first pass needed lots of TLC – not perfect yet, but good enough to share for wider commenting)

You’ve heard from many people that Digital Transformation is all the rage now, right?

Can you explain what it is?

How about what you have to do in your organization to be prepared (or even to be able to understand it enough to have a decent conversation)?

If you said no, or are not sure, you are not alone.  Virtually everywhere I go these days this is the conversation we are having: what it is, how it works, what do I need to know and do, and what is the timeframe to get it done.

I will try to address as many of these questions as possible, and this post will frame my thinking for the research I am conducting over the next 2-3 years.  This is the biggest thing to hit Enterprise Technology since I started; I am expecting at least a decade, if not longer, of expansion and excitement.

This post does not even begin to scratch the surface, and all these items will be covered in far more detail in future writings.  But it is a start – so let’s start at the beginning.

Why is this happening?


I could use the “perfect storm” analogy, but I prefer to call it a confluence of events.

Of course, y’all like perfect storm better.  Perfect storm it is – for now. 

There was a perfect storm that caused this:


Customers In Control.  We have been saying for some time now, at least five or six years, that customers have gained control of the conversation (good friend Paul Greenberg wrote very eloquently about this at the beginning of the “Social CRM Craze of 2008-2012”).

But what exactly does it mean?

It means that customers are more demanding of service times, of companies listening to them, and of making their voices heard.  It is no small coincidence, of course, that this happened at the same time that online communities (including social networks) saw an expansion – a case can be made for a chicken-or-egg situation actually…  In either case, the control of the conversation shifting to the customer was the first – er, cold front – in this storm.

Everything Went Digital.  I am not going to assume anything about you and how you work and live – but, as an example, this morning I had to print and mail a document (remember that?).  I have been in this office for over a year and a half – and never before had to use my printer — I know, because it was not connected to my wireless network.

I have gone so far digital that I can check my snail-mail once a month and throw away all the mailers and coupons without even checking (I do, don’t worry if you send checks).  I even sold and bought a house last year – and not a smidgen of paper in my archives.

Granted, I am not a business – but if you agree with the statement above about customers going online – they are producing all the information in digital form.  And the businesses that dealt with me throughout my two house transactions were indeed businesses – and they had also gone digital.

When was the last time your customers mailed you a survey (wait, I meant to ask – what is the return rate for paper surveys you mail out or ask customers to mail in? I know some still do, but rates are plummeting)?  A warranty card (remember those)?  A registration card?

Content, data, knowledge – all has gone digital.  And we are not even talking about the expectations and demands from digital natives and digital immigrants – the people who live in a digital world – that is another element altogether that influences the amount of data and content that has gone digital as well.

Add the oh-so-famous Internet of Things with connected devices and machinery producing more data, social networks, constant generation of blogs, community conversations, interactions between customers and web sites, web logs, navigation logs for customers – and that is just scratching the surface of how much digital information we are producing.

You get the idea.  Information is digital, and if not today – very soon.

Business Cycles Are Ending-Restarting.  Business is cyclical.  I know, shocker – everything we are doing today we did before (just faster, better, cheaper, easier now – supposedly).

Businesses evolve in cycles.  The last few cycles you can relate to:

  • ERP implementation (some 25-30 years ago) which was about automation and digitizing the work organizations did to stay alive;
  • CRM Implementation (some 15-20 years ago) which was about digitizing interaction of customers;
  • Internet Implementation (some 10-15 years ago) which was about bringing digital information from all over the universe to the organization; and
  • HR implementation (past 5-10 years and ongoing) which was about digitizing relationships with employees.

All these moves gave us more digital information and processes than we know what to do with.  And all these moves in business also share common characteristics – they were executive conversations started at the highest levels of the organizations, with no technology or software solutions yet available that did what they proposed.  They were conversations on how to improve / change / automate / speed up processing in different areas of the organization.

There are a lot of similarities between Tom Siebel talking to executive boards about providing visibility into their pipelines and interactions with customers two-to-three years before the first workable version of Siebel CRM came out (ditto for Dave Duffield from PeopleSoft and HCM, and Hasso Plattner from SAP and ERP) and the discussions we are having these days in executive boards about Digital Transformation.

Generational Shifts Giving Way to Paradigm Shifts.  I wrote this some time ago, but every fifteen years (give or take) we have (usually in concordance with business and / or technology cycles) a shift in the organization.

This is either a generational shift (a slow-progressing movement that organizations can react to in time) or a paradigm shift (a massive societal, workplace, and marketplace shift that organizations need to react to quickly).  It is not a sudden transition, where one ends and the next one begins – as with all ongoing entities there is an overlap of a certain time between them.

We are navigating the final stages of the generational shift that brought us the Social / Collaboration “Revolution” (more like an evolution to be honest).

This means that we are also starting the paradigm shift that is known as Digital Transformation (see picture below for a better explanation of these shifts in the world).

Paradigm shifts are characterized by breakneck speed of change, very similar to the conversations we are having today about Digital Transformation.


These are the four occluded / cold / warm fronts (I really hate this comparison to a perfect storm) that are all happening and aligning at the same time to create this perfect storm.

Sourcing The Vision

You are probably asking yourself how I know this, and where I get it from.

Among the many in-person, over the phone, and even email exchanges I had in recent months, I had this Twitter exchange with some smart folks and friends.  The question was “Where is the conversation about Digital Transformation happening?”

Before we move forward, and this is where Sameer helped me clarify this earlier, one caveat.

There is no purchaser – yet – for Digital Transformation.

This conversation on Twitter was clarified by an in-person conversation and we agreed that there are 1) no solutions available to purchase, 2) no purchasers.  There are conversations between the consulting firms that get it and their clients:

  • There are executive level and CEO level conversations about this;
  • The four trends above are being discussed in the context of changing the organization;
  • There are early steps taken by competitive-advantage driven early innovators;
  • There are some examples starting to see the light of day.

You’ve probably seen or heard of the early examples: Salesforce’s CEO Marc Benioff has mentioned and extolled the virtues of Burberry for the past three or four years, as well as some of their other customers.

The transformation at Burberry was driven by their CEO (Angela Ahrendts, now working at Apple to make the same change happen at their retail stores) who had undertaken a radical change to how they do business.  The realization that their customers did not wait at home for a catalog or mailer to come to them with the latest trends led to a change in how information is shared, interactions are captured, and recognition is given to customers’ voices.

And the fact that retail is seen as the next frontier for Digital Transformation is no surprise, it has been going on for a while.

My friend Paul Greenberg also talks about Karmaloop, one of the pioneers in e-tailing, in some of his presentations; a company change driven at the highest levels.  The company understood that their customers were either digital natives or immigrants and transformed their processes and KPIs to support and leverage digital channels and interactions.

The results were impressive: one percent of their community (created by digitally transforming their marketing efforts) drives fifteen percent of their business.

I have had these conversations around the world in the past six-to-twelve months with executives and directors of companies of all sizes, located anywhere – and they all agree.  This is the next change coming to business, this is going to be our next decade: adopting and implementing Digital Transformation.


Vision Definition

The confluence of events (sorry, perfect storm) above does the job of explaining at length how this transformation is coming of age, but it is a tad long to walk through.  In executive circles, sometimes the attention span is just not there to listen to the whole explanation.

We need a tweetable definition of Digital Transformation.

Finally, I was able to come up with one that I am quite comfortable with.

In case you cannot read the picture of the tweet above, it says:

The world went digital and biz must adapt. Not from being analog. From having little know-how for digital owned.

That is the best way to define what Digital Transformation means and how it becomes our next business cycle.



If you have not yet, get the book Christopher Morace (Chief Strategy Officer at Jive Software) co-authored.  It is not a how-to book for DT, but it is an amazing resource to understand this shift.

You can get it for Kindle or old-format at Amazon (click the picture, not an associate link, I don’t get anything out of this).

Back to work.

Thanks for hanging in for that first part, I could break this into many posts, but half of you will complain that it should’ve been one (and the other half stopped reading after the third paragraph anyways).  So, keeping it as one.

Besides, this is the best part coming up, see the picture right below.

(figure: foundation elements for digital transformation)

I know, I know.  Cray-cray as my 11-year-old daughter would say.

Let me ‘splain.

First, I am not a graphic designer – this is very crude, but it highlights what you need to know – the foundation elements for digital transformation and how they interact and relate to each other.  This is a good way to understand where everything fits, and why.

If you have any additions or comments, please lay them down in the comments section, contact me, or email me.

You will need an infrastructure layer, an information management layer, and an experience layer to make this happen.  In addition, you will surface all this via interfaces, and you will augment the power of your transformation by focusing on optimization, personalization, and automation as ideal outcomes (also called the Greek layer – get it? Greek…. OPA…. Greek…. oh, never mind; no more jokes).

But I am getting ahead of myself.

Let’s talk about each component first.

The Commoditized Cloud

To say the cloud is commoditized would be disingenuous.  The open, three-layer cloud has less than 10% adoption in organizations.  The SaaS-as-cloud, private-cloud-as-cloud, hosted-applications-as-cloud, and other-monstrosities-we-cannot-call-cloud-being-called-cloud have around forty percent adoption across all organizations (all sizes, all verticals, all geographies, etc.).  If you don’t like those numbers, feel free to insert your own – they still make my point.

Although we all talk about cloud as a given, commoditized concept – it has not yet reached mainstream adoption in the organization.  However, it is also not an item of differentiation where companies can say “because we are cloud, we are better”.  The fact that hosted applications that provide multi-tenancy solutions as a service can call themselves cloud gave every on-premises vendor the ability to call themselves cloud.  And thus, it is no longer a differentiator.

The reason I mention this is because the underlying infrastructure for digital transformation is an open cloud infrastructure (I don’t recognize private cloud as being cloud, nor hosted applications as being cloud – but they are good interim steps, stepping-stones towards adopting the cloud in larger, more complex, compliance-heavy organizations; they don’t have a long life ahead of them, but they are a good starting point).

There is not a single CIO or IT department in the world that has not undertaken, in the past two-to-three years, a migration project to embrace the open cloud.  Even the slow-to-move, compliance-heavy laggards of adoption.  They may not be there yet, but it is their goal to get there.  There are too many advantages to the model not to leverage it fully.

We will discuss the software layer of the three-tier model as we get deeper into the discussion of interfaces, but it is the platform layer that will make the most significant difference.  I wrote a bit about what an open platform can bring to an organization (and you also have more links in there as well as definitions) when I wrote about Salesforce1 – please use that for reference of what a platform is.

Indeed, adopting an open platform model is what is going to prepare the organization better for a digital transformation.  The ability to both quickly integrate with just about anything, and to create customized applications that deliver personalized performance via a multitude of interfaces will become critical – but this is not the place for that discussion – you will need to have a three-tier cloud infrastructure to make Digital Transformation happen.

The Information Layer

I have had many interesting discussions and strategy sessions in this past year or two where the discussion was whether knowledge or content or data were more important to deliver personalized experiences to customers.

I even presented at EBEDominicana earlier this year about this.  I was asked to talk about Social Knowledge and how organizations can prepare, but when I got to the event I discovered that the concept of Social Knowledge was nowhere near what attendees wanted to discuss.  I spent almost an entire day talking to attendees and finding out what they wanted to cover, and the answer was clear: content.

I went back to my research notes that night (after spending some time learning the basics of merengue dancing – another time) and found a lot of common topics between the work I had done around content and knowledge.  It turned out, after a long time of contrasting, that the issues, the topics, and even the lessons learned (at a high level) are about the same.  Out of curiosity I did the same analysis for data – since I had said many times in the past that there is no marked difference in how an organization must handle data and knowledge.

Lo and behold, the same principles can be applied to data (I don’t distinguish between big, small, or average data).  I have been making this argument for a long time, and finally got a small break: content, data, and knowledge are similar resources.  And it all can be called information (because, well — that’s what it is).

Think about it, any information you get from an organization or use in a business situation has all three: it has data (usually customer identifying, product identifying), knowledge (this is more like static data, things we know to be true and we use to make a point), and content (more like static knowledge if you want to define it – it is approved and usually has knowledge in a specific format).

The use of all three, or two, or one of these elements in any one interaction means that they should (at least at the strategic level) be handled and managed together.  We will discuss this and explore more as time goes by – but for now, think of all three elements as siblings: data, content, and knowledge are the Cerberus of the customer interaction.

They fiercely guard customer interactions to make sure they have the right answer.
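A sketch of how the three siblings might be modeled and managed together.  The class and field names here are mine, purely illustrative; the point is only that one interaction carries all three kinds of information under a single strategic umbrella.

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    DATA = "data"            # e.g. customer- or product-identifying values
    KNOWLEDGE = "knowledge"  # things we know to be true, used to make a point
    CONTENT = "content"      # approved knowledge in a specific format

@dataclass
class InformationItem:
    kind: Kind
    body: str

def assemble_interaction(items):
    """Group the pieces of a single customer interaction by kind, so all
    three elements can be handled and managed together."""
    grouped = {kind: [] for kind in Kind}
    for item in items:
        grouped[item.kind].append(item.body)
    return grouped
```

An interaction may use one, two, or all three kinds – the grouping just makes it explicit that they travel together rather than living in three unrelated silos.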

The Experience Layer

I wrote a series of blog posts over the summer that were published by my friends at Oracle.  The topic was Customer Experience, the first one had the ever-pressing question: “Who Is In Charge Of The Customer Experience” (others dealt with people, processes, and technology related to customer experience – it is a good series and likely you missed it — but fear not, available now by clicking on those links).

The question of customer experience has become all the rage lately.  These past two years we have seen an onrush from organizations to implement “customer experience — something”.  Whether it is management, or service design initiatives aimed at understanding customers better, or analytics software to better create customer experiences — or, well, too many different projects and initiatives to name them all.  Chances are that in the past three years or so your executives came down from the mountain with the mandate to implement customer experience.

And chances are that you have done something in this area.  In my latest survey of Customer Service practitioners we found that over 80% of organizations have a customer experience initiative under way.

The problem is that since Ed Thompson and I co-wrote the ultimate book of customer experience in 2004 when I was at Gartner (must be Gartner client to read, sorry), not much has changed (well, that’s not true – Ed’s gotten smarter about customer experience, but he was pretty smart to begin with).  It is not to say we don’t know more about it, we do – we had plenty of experiences and we learned a lot about how to do it — but we still continue to approach it as a single purpose project.

Experiences, not customer experiences only, are something that all organizations must embrace for all stakeholders.  Whether we are talking about customers, partners, allies, providers, employees – or any other constituency (citizens?) – they all need to work together.  We cannot design an experience for customers without considering that a) they are going to be part of an end-to-end process (and thus it must be an end-to-end experience), and b) it must accommodate all parties involved in this end-to-end delivery.

When talking about experiences, you must begin to think of them as end-to-end and encompassing many stakeholders along the way – and design and implement them that way.  That is what I have been pushing for years now using the Experience Continuum.  Indeed, experiences must be done as an all-or-nothing initiative that considers employees, partners, and all other concerned stakeholders – even if their systems and information are not controlled by your organization.

This means that as  you advance your digital transformation plans and begin to implement them you will need to interact and work together with many, many different people and use their information in many ways.

Aren’t you glad you decided to adopt an open cloud as I explained in the first section above?  Yes, you are – and now you get why that is necessary.

The Analytics Layer

An entire book could be written just to define and describe what is meant by Analytics.

I am not going to define it or try to convince you that it is necessary.  Bottom line, the middle layer of the model above needs to be analyzed.  Period.  Thus, you need analytics.

Without analysis, all you have is a series of structured ones-and-zeroes that really don’t mean much going forward.  Sure, they can tell you what happened, but they cannot prepare you for what MAY happen.

Now that we have defined the need, let’s debunk the two most common myths about it: that it is hard to do, and that it is magical.  Magical is what many users think it is – if you implement an analytics package, all you need to do is point it to your data and — voila! — it finds relationships and insights you did not even know were there.  Of course, this is neither true nor possible – no analytics package knows the relationships between your data points, or what they mean!

Simply knowing that a data field is called Sales_Total does not mean the computer knows what it is, how it is used, or what to do with it.  Even if you, as a user, can describe it and relate it to other data fields – you still don’t know what to do with the data — why on earth would you think the analytics package would?  This brings us to the second myth: it is hard to do and requires scientists to analyze.

Without a deep debate on the term or the concept: it does not.  If a stakeholder knows what the data means, where it comes from, and how to use it – the new tools and packages for analytics will handle the rest.  This applies equally to knowledge and content, by the way – not just data.  And this is why analytics is changing and is no longer the mysterious “thing” it was assumed to be, and why we can now focus on the outcomes, not the definition.

The most important aspect of analytics is the outcomes – which so far you’ve been told are insights.  We put so much emphasis into generating insights (and I will count myself as guilty, as I often encouraged clients to find actionable insights into what they do — without much explanation of what they are or how to get them) that we miss out on the applications of those insights.

That is what you need to do in the new digital world with the data / content / knowledge triumvirate of inputs: find the expected outcomes and aim to achieve them.

There are three outcomes you should be seeking via analytics:

  • Optimization (improving processes and functions, even innovating by finding new and different ways to do things)
  • Personalization (make sure that each user gets what they need, when they need it, as they need it – and no more or less), and
  • Automation (leverage the optimization and personalization to take some of the interactions away from users and traditional processes and allow them to happen automatically)

These outcomes are not in any order, nor are all three required from any single implementation (although eventually you will get to use all three as your strategy improves and grows).  They are the outcomes you should seek from data, content, and knowledge post-analysis, independent of the function that is using those inputs.
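A toy sketch of the three outcomes side by side: a dispatcher that routes a post-analysis insight to the outcome it feeds.  The insight dictionary shape is invented for the sketch; real analytics tooling would look nothing this simple.

```python
# Toy dispatcher: route a post-analysis "insight" to one of the three
# outcome types. The insight structure is invented for illustration.
def apply_outcome(insight):
    outcome = insight.get("outcome")
    if outcome == "optimization":
        return f"improve process: {insight['target']}"
    if outcome == "personalization":
        return f"tailor experience for: {insight['target']}"
    if outcome == "automation":
        return f"handle automatically: {insight['target']}"
    return "log for human review"  # no outcome identified yet
```

The fall-through case matters: an insight that maps to none of the three outcomes is exactly the kind that gets admired and then shelved – which is the trap described above.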

There is a lot more to cover on this, much more, but we will do so in research during the next 2-3 years.  For now, make sure you realize that analytics is not what you thought, and that it has a primordial role as the tool that will make things happen in the world of digital transformation; after all, it is the aggregate of the expected outcomes.

The Interfaces Layer

Thanks for hanging in there, almost done.

The final layer is the interfaces layer.  This layer serves two purposes, both incredibly important for digital transformation.  First, it is the connecting point to all things “legacy”.

A three-tier cloud architecture calls for the platform layer to serve as an integration brokerage house of sorts – it creates trusted, verified, secure links to other platforms and brings in the information from those platforms to complete the services it runs, and it also sends information to the other platforms so they can do the same.

This works great for three-tier cloud-to-cloud communication, but lacks some finesse when dealing with legacy applications and APIs.  Some of the older applications, and those with not-so-good APIs, require more work than the platform can do in a secure environment that requires token security to operate.  Some of the legacy applications and interfaces require a traditional point-to-point API call.  This is where the interfaces layer performs one of its key functions: it serves as the central integration point for all applications and information that cannot be accessed or serviced directly.
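The two integration paths just described can be sketched as adapters behind one uniform call.  The class and method names here are mine, not any vendor's API – the point is only that callers of the interfaces layer never see which style of integration sits behind it.

```python
# Sketch: the interfaces layer exposes one fetch() call and hides whether
# the source is a token-secured platform or a legacy point-to-point API.
class PlatformAdapter:
    """Cloud-to-cloud: brokered, token-secured platform integration."""
    def __init__(self, token):
        self.token = token

    def fetch(self, resource):
        # in reality: a brokered, token-secured call via the platform layer
        return {"source": "platform", "resource": resource}

class LegacyAdapter:
    """Older applications: a direct point-to-point API call."""
    def __init__(self, endpoint):
        self.endpoint = endpoint

    def fetch(self, resource):
        # in reality: a traditional point-to-point call to the legacy system
        return {"source": "legacy", "resource": resource}

class InterfacesLayer:
    def __init__(self):
        self.adapters = {}

    def register(self, name, adapter):
        self.adapters[name] = adapter

    def fetch(self, name, resource):
        # callers never know (or care) which integration style is behind it
        return self.adapters[name].fetch(resource)
```

This is essentially the EAI role mentioned below, restated in modern terms: one central point that normalizes access to everything it cannot reach directly.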

The other function it performs is to make sure that the outcomes of the analytics layer are properly displayed and used in any interface: mobile, desktop, internet-of-things, laptops, tablets, and just about anything else that may have access to the DT platform and needs information from it.

As simple as it may sound, the ability to interface with a three-tier cloud, all the layers in the DT architecture proposed in this post, and legacy applications at the same time – and make sure that information flows properly – is quite complex.  Think of what EAI (enterprise application integration) components used to do in the client-server world – but exponentially more complex due to the multitude of displays and application environments it has to cater to.

Alas, the infrastructure layer and the three-tier cloud model help a lot with this, especially the platform layer, which can serve any device, any interface, any need as long as the proper paths to find the service and/or application that can deliver the necessary information are known and documented.  This simplicity is what this layer promises – while delivering the outcomes delineated above.

Where to Now?

As I said earlier, this is an oversimplification of a concept that is likely to require an entire book to be explained properly (things that make you go hmmmmm).

But the concept is there, and three things will happen now:

  1. You will help me improve it.  All the content in this blog is licensed under the Creative Commons CC 3.0 license.  You are welcome to use it and improve upon it as long as you don’t use it for commercial reasons and you always credit the source (that’d be me).  Take it for a spin and let me know what you think.  Write your experiences down in the comments section, or contact me with more details.
  2. It will slowly be implemented and improved.  The one thing I learned from creating visions for the future and implementation models while at Gartner is that there is a modicum of my visions that is great, a sensible part that is useful, and the rest is between can-be-ignored and unusable.  As you begin to work on your digital transformation using this post as one of the data points in your journey – please let me know how it works.  I make the commitment to improve it with your feedback and — of course — give you full credit for your contributions.
  3. I will continue to research this.  This is my “research agenda” for the next few years.  I cannot even begin to see this being implemented in less than 2-3 years – and getting close to five is more likely.  I will continue to research and find information to substantiate and improve the model, while you continue to do the things you do – implement.  I will continue to do research on these layers by talking to users and practitioners, discussing it with analysts and consultants, and continuously writing about ways to get it done and make it better.

OK, just about 4,500 words and here we are – your turn.  Before I begin to post more and more research in relation to this model — what do you think?

Are you thinking this may work? Do you see the possibilities?  Or do you see it won’t work?  Both answers are likely to be correct – I just want to hear the rationale for either.

Help me improve this architecture of foundation elements for the DT world.

I appreciate it.

Microsoft Acquires Parature: The Analysis


On January 7th (officially, although the embargo was broken by several analysts and press before then) Microsoft announced it had entered a definitive agreement to acquire Parature.  The terms of the transaction were not disclosed (although wild speculation abounds) and the deal is expected to close during the first quarter of 2014.


Although the timing might make it look similar, this deal and the deal between Verint and KANA are very different.  Please read that post earlier in my blog for more details on that – I don’t want to make comparisons that are not appropriate for this analysis.

Microsoft has a glaring hole in their Dynamics CRM product line: customer service.  For as long as I can remember, this has been the case.  Even though they have – something – it is not competitive.  It does OK for small implementations, but as the complexity of the interaction grows it lacks the fortitude to deliver – and it is even worse at large scale.  This has been a problem as they try to grow into larger accounts and compete in the customer service market for position and presence.

As Microsoft tries to win larger accounts against the likes of Salesforce, Oracle (either with the Oracle Fusion product or RightNow’s version of Customer Service), and others, the lack of a structured customer service solution began to be noticed.  I have worked with Microsoft before, both formally and informally, on this.  There was no doubt there was a need to be filled; the only question was what and how.

On the other hand, Parature was on the mend.  Recovering from a near-disastrous bad-CEO stint, they spent the past three years or so re-architecting their product.  From being considered a solution that could only cater to small and mid-size businesses for simple customer service, they saw early on the need to redo their product with two guiding principles: a stronger, cloud-based architecture and a renewed focus on Knowledge Management.

I have written extensively on KM these past few months, and will continue to do so in the many years to come.  We are approaching a paradigm shift in knowledge like we have not experienced before, mostly driven by the rush to aggregate knowledge in online communities and social networks.  The shift from knowledge-in-storage to knowledge-in-use demands new models, new architectures, and new behaviors.

Whether consciously or through subliminal knowledge, Parature developed their new architecture in a way that can both support the traditional models for KM as well as the new paradigm described above.  At least in my perspective, since I have not yet seen it implemented that way (not only by them, it is a nascent model and not getting sufficient traction in the market – yet, wait a few more months).

Microsoft making this acquisition primarily for the knowledge management components, after looking in detail at this new architecture, brings the issue of knowledge-in-use and the paradigm shift to the enterprise level – and I could not be happier about that.  There are some very interesting new plays in KM that are leveraging this new model (Mindtouch, Transversal) and older companies cozying up to the concept (FuzeDigital, Jive, Moxie Software, and others) and this is the push that they will need to make it happen.

Of course, this only works if Microsoft realizes what they have acquired and puts resources and talent behind that effort.  I understand from my conversations that this is the case and this is what will happen.  Not only as an independent entity, which Parature will remain, but also as a way to bring the power of this architecture to the Dynamics CRM product and enhance their solution there.  A two-for-one special makes me happy.

Here is to hoping they keep on their current path.


I am bullish on the potential of this acquisition to accomplish two things:

1) Reinforce the concept of knowledge-in-use by putting additional resources behind the work done by Parature as it remains an independent entity.

2) Create a more complete enterprise player out of Dynamics CRM (especially around customer service, but eventually leveraging the KM power into other areas) and, with a two-tier customer service solution, be able to compete more fully with ServiceCloud.

While it does not fully complete the customer service product for Dynamics CRM, it does make it very competitive, and with the right partnerships they can deliver that extended value.  While there is more, much more, that can be discussed about Microsoft, this is about this deal – and the bottom line is that it has a lot of potential.

There will be changes coming down for Microsoft customers once the product is integrated, likely in the one-to-two-year timeframe considering the speed and efficiency with which Microsoft completed integration of prior acquisitions made for Dynamics CRM, and I don’t foresee significant changes for Parature clients.

It is a little early to see the changes for Microsoft customers, but adding functionality for Customer Service is definitely in the works.  As with any other vendor, I am sure Microsoft would want a higher price for the additional features they will provide – but the functions should remain sufficiently separate that it would not be a mandatory upgrade.

All customers, current and prospective, should understand the roadmap for the product – but don’t expect one for another six months or so.  The current plan calls for Parature remaining independent (which I don’t foresee changing – or I will let you know) and slowly integrating the valuable components.  We will know in six months or so if this remains the same.

Do you see this differently?

Would love your comments…

disclosure: parature is an active client and I have helped them some in positioning and strategy for their new architecture and product.  microsoft is going-to-be (if their verbal commitments hold) a client again and has been in the past going back many years – we also had discussions leading to this event.  both vendors fell out of favor a couple of times before, but now they are listening to me and doing much better.  most of the other vendors mentioned are active clients or will be this year.  may be too late to say this, but I know everyone in the (soon-to-be-extinct) eService market and they all are, were, or will be clients.  no better way to ensure there is no conflict of interest in my opinion. needless to say, but mandated by the ftc, there is no conflict of interest as they all are very nice to me and give me tons of free stuff (like time, access to executives, responses to my DM and emails, etc). i don’t favor one vendor over another, just call them as i see them.

Verint Acquires KANA Software: The Analysis


On January 6th, 2014 Verint announced their intentions to purchase KANA Software for a reported $514 million mix of cash and loan obligations.  The expected close date is first quarter of 2014.


I cannot say I am surprised that Verint finally got into the multi-channel customer service market.

I have been expecting this move since both NICE and Verint acquired EFM vendors in the middle of 2011 (frankly, I have been expecting this for longer than that – for both of them).  Of course, we should see NICE moving into the market within the next 4-6 months, and we shall see other vendors as well try to enter the market (vendors like Nuance come to mind initially, but also Aspect, Genesys, Avaya, and Nortel trying to expand their presence by acquiring more value and better knowledge management tools).  I don’t expect traditional customer service and / or CRM vendors to try to acquire any of the remaining vendors for customer service or sub-components like knowledge management – with a few exceptions (which would be addressed in commentary if it happens).

Verint has made a lot of acquisitions in the past.  Probably the most iconic for this market is their acquisition of Vovici in July of 2011 to bring Feedback Management into their suite.  They have a complete suite of agent management tools (things like workforce management, training, scheduling, as well as analytics, reporting, and performance management).  They have also made acquisitions of function-specific tools (like fraud detection and telephony management).

Throughout these acquisitions a consistent model emerged: the acquired vendor does not retain their independence, nor do they retain their product as it was when acquired.  The main reason they are acquired is a technology or tool that Verint needs to complete their suite.

Vovici is a good example: even though independent implementations of Vovici are still supported, the main message and go-to-market approach is to incorporate the feedback and analytics tools from Vovici into the Verint suite and enhance the value the product provides that way.  There is no remaining product from Vovici that resembles or continues the old product.  This translates into a migration for customers that remain on the product if they want to continue using it (usually within 2-3 years, not right away), which brings with it a higher price tag.

I cannot fault Verint, or any other vendor, for trying to make money – the problem is the inconvenience it brings to their customers.  So far they have promised to keep KANA as an independent solution – but the discussion around the long-term roadmap was not very clear on how long that would last, or if it would be changed later.  It is clear that the main driving force for this acquisition was the integration of the product lines – I’d be surprised to see KANA remain an independent entity in the long run.

I am also concerned about the nascent momentum KANA had started to experience in the market.  Coming off their semi-recent acquisition of Sword Ciboodle (barely a year ago) and subsequent re-launch in the August-September timeframe, their presence in the market was just beginning to solidify.  In addition, the acquisitions KANA made in the past four years since Accel-KKR acquired them (Lagan for Government, Trinicom for a mid-market service suite, and Overtone for social media analytics) were not yet fully embedded into their business model, with no (Overtone) to modest (Trinicom) to good traction (Lagan had an exceptional year in 2013) on their own.

It is not clear what the fate of these different solutions will be, but it is clear that this brings another layer of complexity to the planning of the long-term roadmap.

If you read the coverage and the press release for the acquisition you will see that it is presented as a marriage of Big Data and Analytics on the Verint side and Customer Experience on the KANA side.  You would not be faulted for thinking that this is a match made in heaven, with the ability to deliver the latest and “bestest” solution in the market right now.  After all, customers are asking for Customer Experience and Big Data.

In reality, at least from my perspective, these are two-to-three-year-old marketing messages for both companies.  Verint will tell you they were founded on the concept of analytics – but the vast majority of their customers (at least all the ones I talked to; I have not talked to all 10,000, of course) think of Verint as a provider of agent management tools (in other words, they make sure the agents are there, trained, and ready to work with the right tools).  I have yet to meet a Verint customer who talks about them in terms of analytics as a core differentiator (even though they have had speech and text analytics offerings for some time).  They also made more acquisitions in the past 2-3 years that shored up their solutions, but they are not known primarily as an analytics vendor.

KANA and Customer Experience suffered a similar fate: it was not until the past two years or so that they began to focus on this message and positioning.  KANA is known for their knowledge management and multi-channel service solutions, not for their focus on customer experience.

The positioning may describe what value they could bring to bear, but it belittles the value they do have to offer.  Both solutions are far better and more complete than their marketing positioning suggests, and together they deliver a complete customer service offering – which takes away one of the strong points of this acquisition if ignored for the benefit of marketing buzz.

About sixty percent of the customer service customers are laggards or late adopters for the technology to power their contact centers.  Partly due to refresh cycles that take too long, partly for amortization and ROI expectations, and partly for the fact that refresh cycles tend to fix what’s broken more than innovate – Customer Service is a laggard technological function.  In this context, more customers are asking for integrated suites like the one KANA and Verint are proposing (there is a healthy demand for an integrated solution among late adopters that are not as interested in the cloud, customer experience, and analytics as they are in delivering multi-channel solutions that are effective).

The up-to-the-minute marketing message they are positioning is taking away from the potential to deliver into that market.

See the following chart (I developed this with my friends at Moxie Software and am using it here with their permission) for a better understanding of how the two vendors come together:

CS Architecture

KANA’s value comes from offering a unified desktop, knowledge repositories, case management and channel management (which was extended by the acquisition of Sword Ciboodle).

Verint’s value comes from offering agent management tools and some analytics – with an additional set of predictive and proactive analytics for optimization, as well as more analytics tools added lately.

In spite of the wonderful marketing buzz of the new message that integrates optimized analytics and customer experience, customer service buyers would be more comfortable seeing a chart like this that addresses all their needs rather than listen to a marketing message that leverages timely buzz words.

I am very interested and hopeful in seeing this deal go through based on the former, not because they can master all the latest and greatest marketing words in their message.

One final item to focus on as they move the deal forward is the cloud.  No, I am not talking about hosted-apps-in-a-browser and calling it cloud; I am talking about the change in infrastructure that brings a three-tier, open, public cloud to bear for organizations.  Neither of the solutions is built for or supports that model (yes, they both could – but it is not the standard offering).  Both solutions are “cloud” in the old-fashioned sense of hosted applications running through a browser with API access, not in the open, three-tier model.

While this may not be an issue currently for virtually all of their clients, it will become an issue within the next 2-3 years as the open cloud infrastructure begins to take hold inside the organization and more and more organizations begin to migrate their contact center hardware and software to that model.


As with any acquisition or merger, some good and some bad in this deal.  Bad is the potential change imposed on customers – although it is not yet confirmed and Verint promised to offer a roadmap based on keeping KANA independent soon.  The roadmap past years 2-3 will be critical to squelch that criticism and show the long-term viability of this acquisition.

Good is the potential to fulfill the demands and needs of the majority of the market and position the product as an all-in-one suite to deliver to expectations from their customers.  If they avoid the cute marketing words, of course.

Existing customers should get a “certified” roadmap from Verint to understand their intentions and direction and match it to their strategy.

Customers considering bringing either one of these vendors into the organization should make sure that their needs will be filled today – but also that potential conflicts with other future needs or existing solutions don’t hamper the integration.

Other vendors in the market should understand that this signals the beginning of the final consolidation of the eService market and find the ecosystem that best fits their needs and / or potential acquiring partners in a relatively short term.

Anyone else should contact me for a more detailed discussion of where you are, what you need, and how we can make it work for you.

What are your thoughts?

Do you see something I missed in this deal?

Comments welcome, of course.

disclosure: KANA is and has been a wonderful client for a long time, dating back to my first days as an analyst almost fifteen years ago.  I cannot recall any year since then they were not a client.  It is with sadness I see them being acquired one last time (I am quite certain they won’t remain independent for a long time, see above), but looking forward to potentially working with Verint.  Verint was a client of Gartner’s in my past life, but other than a few briefings we never worked together.  They were never a client of thinkJar – although if they are smart they will pick up where KANA left off (I believe KANA has the contract for 2014 in their possession… but we can figure that out later).  As you read this you will realize that whether they were / are / or will be a client means not much as I will be fair in analyzing their situation and the potential for the deal.

Twitter for Customer Service? These Companies Get It Right

If you follow my blog and my writings (and rantings, and presentations, and panels — if you ever talked to me about this) you know that I am not a big fan of using Twitter for Customer Service.

It is not that it is not possible to do it well, but that the resolution times, close rates, escalation rates, and just about any other metric you can use are so horrible by comparison that doing it is almost a waste of time and resources.

This prompted me, about a year ago, to write a post advocating the use of a single channel strategy, and even before that to deliver a presentation on the failing metrics of social channels.

Although things have improved, somewhat, for smart organizations that have learnt along the way, my core statement remains as it was at the beginning: Twitter is no more than an appropriate triage tool for Customer Service (I think I called it an IVR back then, I still do today).

Amidst the poor performance and lack of understanding from organizations, though, a few glimmers of hope are emerging.

Here are two examples, in pictures, of companies that are getting the gist of using Twitter for triage and escalation when necessary – and have the right tools to do so (which is the hardest thing to do using Twitter for Customer Service BTW).

Example One: T-Mobile Escalates To Chat.

In the picture below you can see a customer asking for help with a billing issue.  Now, there are two bad ways to handle this: 1) ask the customer to call and give them a ticket number (after asking them to follow you and DMing back and forth), or 2) try to resolve the issue via Twitter (yes, even via DM) 140 characters at a time.

tmobile example


Alas, T-Mobile did it right – realizing it would take more than 140 (or 280, or 420 — yes, I did take math in college) characters to resolve it, they immediately escalate to chat.

Why is this better than calling or emailing?

The customer wants immediate resolution, more than likely, and they come to Twitter for that.  By escalating to a real-time channel (chat is one) that is easier to use, less expensive (on average) than phone, and can even be outsourced without major issues (as opposed to the telephone, where outsourcing draws customer complaints), they can control the SLAs, the privacy of the customer, and the wishes of the customer.  Even if the customer wants attention, and not real-time resolution, the offer is a good way to set expectations: we are here to help, in real time.

BTW, I clicked on the link, it worked – but it was time-sensitive and expired shortly after it was issued – even better.

I also imagine that the chat session would show up in the unified desktop that T-Mobile agents have, where they will get access to KB, customer history, etc.  Likely better than the tools they have for Twitter (educated guess based on what I know they do).

Example Two: Amazon Escalates To Web-Based Ticketing.

In this second example a customer complained about something that was not right with a product made by one of Amazon’s companies.  They quickly replied with a link to provide additional information.

amazon example 1 / amazon example 2

The interesting part here is that (if you notice, my name is at the top of the screen) by doing this Amazon can see the type of customer I am (I am a Prime member, and I use it very often), what products I purchased, when, and other information they need — in addition to being able to link my Twitter ID to my Amazon account (if not done before).

Social ID correlation is a huge, huge, huge problem for companies — and this is an easy solution to that problem if customers are logged in.

Bottom Line: Learn how to use each channel properly.  Social channels are horrible for resolution (even if you get past the 40% of unnoticed events, the 10-20% average close rate, and the 10x or more resolution times) and they are perfect for triage and escalation.

Do it well.

What do you think? Other examples of well done Customer Service via Twitter? That is scalable? Viable? Sustainable?

Would love to hear your thoughts…

A Customer State Vector? Great Idea – for Customer Experiences

Last week I had a very interesting briefing with my friends from the SAS Institute (disclaimer: not a client. I know, too short – but this is supposed to be a short post).

One of the things we conversed about was a new blog post written by their CEO, Dr. Goodnight.  If you don’t know him, he is truly one of the pioneers of the world of data (big, small, and medium), analytics (in-memory, on disk, or even on tape), and neighboring concepts.  I always look forward to my chats with him; he has an amazing talent for thought leadership in bringing complex concepts to a simple explanation.

The blog post (here is the link) is about an idea he had: customer state vectors.  He explains what vectors are quite well, so I will copy from the blog to use his explanation.

The customer state vector is based on an engineering concept that is popular in the science community. For example, NASA uses a state vector to control the space shuttle during operations. Variables in the shuttle state vector show the present position, velocity components and other factors of the orbital trajectory at snapshots in time. It analyzes where an orbiting vehicle has been, where it is now and projects where it is going. Vectors are an excellent prediction tool for launch, orbit and landing positions.

He also makes reference to state vectors as a modeling tool for analytics, and he says:

 In the process of building predictive models for fraud detection or marketing, you discover the underlying set of variables that are important for use in those predictive models. Once you’ve built the models, you get a good picture of the data and variables you need to collect

This is all very interesting, but also very complicated.  They are working on making the concept easier to grasp, and to use, via new products and interfaces (which is under NDA for a short time longer) and they are making progress.

Alas, that is not the reason I am writing this. You know me, I don’t write about briefings and press releases. I was thinking about the concept over the weekend (yes, in my time off) and started to draw some comparisons to customer experience.

While it is true that vectors are very valuable for predicting and forecasting — could the concept be used for managing in real time and reporting?

I think so, with some alterations… and it would be great to apply to customer experiences.

Here is the rub, the crux of this idea: a customer experience has many moving parts, and we are absolutely horrid at monitoring and working with all of them.  Sure, we can improve one aspect or portion of it by carefully monitoring it, but we fail to do that for all of them at the same time.  I wrote about creating indexes for monitoring experiences before (here is the link) and even came up with a formula to do it (well, it was an attempt).

The problem with my index is that, as much that is done in an enterprise, it is reactive: it only lets you know something is not right and lets you act on it post-facto.  Wouldn’t it be great if we could move an index calculation to real-time?

Enter vectors.

By monitoring many variables at the same time, the relationships between them (if one goes up, what happens to the others? what if two of them move in different directions?), and the repercussions of the moves in the variables (in space terms: are you going to crash into the space station when coming in for a landing?), vectors are the perfect concept to do this.

Best part: you can get the concept without an advanced degree in data and statistics, just by knowing how your business operates and what metrics matter.  Not only that, but if you have spent some time lately correlating your metrics to your KPIs, this is the perfect tool to test that and make sure you got the right model.
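To make the idea concrete, here is a toy "customer state vector": several experience metrics tracked as one snapshot, with a check for related metrics moving in opposite directions.  The metric names, values, and which pairs "should move together" are entirely made up for illustration; the mechanics (component deltas, sign comparison) are the point.

```python
# A toy customer state vector: monitor several metrics at once and flag
# when metrics that should move together diverge between snapshots.

METRICS = ["nps", "first_contact_resolution", "avg_handle_time"]

def delta(prev: dict, curr: dict) -> dict:
    """Change in each component of the state vector between two snapshots."""
    return {m: curr[m] - prev[m] for m in METRICS}

def diverging(prev: dict, curr: dict,
              pairs=(("nps", "first_contact_resolution"),)) -> list:
    """Pairs of metrics that were expected to move together but did not."""
    d = delta(prev, curr)
    return [(a, b) for a, b in pairs if d[a] * d[b] < 0]

# Illustrative snapshots (weekly here, but the same check works in real time).
last_week = {"nps": 42, "first_contact_resolution": 0.71, "avg_handle_time": 6.2}
this_week = {"nps": 47, "first_contact_resolution": 0.65, "avg_handle_time": 6.0}

# NPS went up while first-contact resolution fell: worth an alert now,
# not in next quarter's report.
alerts = diverging(last_week, this_week)
```

Run the same comparison on streaming snapshots instead of weekly ones and you have the real-time version of the index argued for above.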

If you think this is interesting, then go back to all your math teachers over the years and apologize for telling them you would never use any math in your career…

Vectors are a cool concept, definitely worth looking into and altering it for your organization.

Don’t you think?

A Brief History Of Salesforce1

(note: this is a similar format to my brief history of SCRM, which was widely successful at the time in explaining how SCRM came to be.  This is in no way related to Stephen Hawking’s masterpiece – but you likely already knew that. my official disclaimer of conflict of interests and such is at the bottom of this post)

I let some time go by after DreamForce 2013 so I could cool off from the heated discussions I had with plenty of people.

If I hear one more person tell me that Salesforce1 (S1) is a client-side app, a mobile client, the culmination of Touch (remember that launch? Not many do) or something similar I will scream (like I did the past few days).  And the problem is that Salesforce (SFDC) has such a loyal customer and fan base that they will repeat pretty much what they are told (as will some of the “influencers” and “analysts” out there – unfortunately analysis is no longer a required activity for analysts).  This dichotomy between what it is and what was presented at the show culminated (at least for me) in an exchange between Marc Benioff and myself on Twitter on Thanksgiving Day (Zachary Jeans did a good job of converting it to a Storify stream; you can find it here if you are interested).

That exchange prompted me to write this (I had only been talking to people about it before) as a way to preempt the question of what S1 is and why there is so much confusion.


You see, there is so much to what S1 is that is not being covered that it is almost an insult to the people who spent 3-5+ years working on getting it done.  To do justice to the journey, and explain it in more detail than you have probably seen, a little history is in order.

(another note: I have listed at the bottom a few other articles I considered worth including here that explain it quite well without the long-winded story; feel free to skip the story and read those)

The History

SFDC launched in 1999.  At the beginning their claim to fame was “No Software” (still hanging around today, ask TooSaasy) back in the days before we had cloud or SaaS.  In those days the rage was Hosted Applications (also known back then as ASP – which was Microsoft’s version, remember Microsoft? It was pretty big back then).

To put it into perspective, this was about the same time Siebel was dominating the “CRM” market (read: mostly sales force automation, with very bad versions of marketing and passable versions of customer service); by promising no software to be downloaded (and very low prices to boot) SFDC was able to take a nice piece of the market from Siebel.

Now, keep in mind this was not a cloud application, this was a hosted application.  That means there was a monolithic architecture (mono = single, lithos = stone – meaning an all-in-one comprehensive solution) that ran using browsers as interfaces.  To make this happen, things like multi-tenancy (many users running the same application) and multi-instances (many copies of the exact same application) were necessary.  I already covered why multi-tenancy is a horrible idea for real cloud applications (please read it, and extend it to multi-instances as well – it applies later in the story).

Bottom line: back then this was all we had.  Other vendors like RightNow Technologies, E.piphany, and many others were doing the same: hosted applications that provided a browser interface to monolithic solutions running in the background.

The problem with monolithic architectures is that they are client-server solutions through-and-through and cannot leverage the basic principles of distributed computing well (which is the basis for cloud computing as we know it).  Thus, running monolithic solutions (even with browsers for interfaces) means that innovation, security, integration, and even the ability to scale the solutions are very limited.  The cloud computing promises of infinite elasticity, easy integration via platforms, and vetted and tokenized security to ensure privacy and safety are not possible in hosted applications, just like they were not possible in client-server applications without significant investment and unsustainable methods.

Well, I should say not possible to do easily – but anyone can “pretend” they can do it by creating more and more complex code and solutions.  Removing the flexibility and elasticity of the cloud computing architecture gave us what we called, incorrectly, cloud for a long time: between 1995 and 2005 there were only hosted applications (with very few exceptions coming from smaller vendors) that could not leverage the promise of cloud computing.

SFDC, as well as most other Enterprise Software vendors, was in this camp.

This was very evident to the people who saw SFDC try to build ServiceCloud in the early days: version after version of a monolithic solution that could not integrate with or work like the other solutions deployed by SFDC, and could not compete with the then-reigning champion, RightNow Technologies (another hosted solution, still today), or even the smaller customer service vendors.

Sometime in 2006-2007 SFDC realized the problem they had and noticed that distributed computing and three-tier cloud computing was starting to be noticed in the Enterprise Software world (some of the early smaller vendors that were creating innovative solutions for Enterprise Software were beginning to leverage the cloud computing model and break their monolithic solutions into tiers, finding ways to deploy them and leveraging the recently launched AWS services from Amazon and Grid computing from other large vendors).

In 2007 SFDC launched Force.com – their first attempt at a platform.  While the migration of existing code bases was not in the initial plan, the idea was to build a platform layer as part of a three-tier cloud computing model and let developers use that to access SFDC applications.  This first attempt at delivering a platform had (and still today has some) many problems: proprietary languages, incomplete service directories, and limited integration into the existing applications of SFDC (SFA and the pretty bad customer service solution back then) were the most noticeable for users; a very complex architecture was the problem for whoever looked behind the browser.

Alas, it was a good first step and it was welcomed by the developers and customers as a way to extend the existing solutions SFDC offered back then.

The Evolution

What follows was a list of steps that helped them realize the potential and power of the platform (trying to shorten this post, which is going to be long anyway):

  • In 2008 SFDC acquired Instranet, a French customer service solution that was very strong in knowledge management but not so much in other areas.  The great part of that acquisition was getting Alex Dayon, a young technologist who understood the power and the concept of the cloud computing model and was willing to rebuild ServiceCloud as a platform-based solution.
  • In 2009 Vetrazzo, an SFDC customer, built ERP functions in Force.com in one-third the time and cost of buying an ERP solution to run their organization.  This was not done with SFDC’s help, but it was heavily advertised by them once it was done.  This proved that motivated customers with access to Force.com could do anything they wanted.
  • In 2009 FinancialForce launched, using Force.com as their underlying platform to offer accounting solutions to SFDC customers.
  • In 2010 Kenandy launched, using Force.com to create a standard ERP solution, still standing today and used by multiple SFDC customers.
  • In 2010 SFDC launched Chatter, initially a hybrid of platform solution and monolithic-architecture software.  Very important later, as it was another data point showing that platform-only solutions (see above) were better.  It also became the “guinea pig” for migrating SFDC applications to platform solutions.
  • In 2011 and 2012 SFDC acquired several “dot-com” properties, as well as Heroku (a platform focused on letting developers write smaller apps that could be deployed via web or mobile using different languages).

All these steps were essential to the development of S1, for different reasons:

  • Instranet, which later became a working model of ServiceCloud – the first-one ever to be honest – was a proof-of-concept that platform based solutions could be done.  The development of this solution as ServiceCloud was done (approximately) between 2009 and 2011.
  • Chatter was further proof that monolithic applications were a horrible idea if they were going to be used as a platform.  When Chatter was first launched it could only operate as a stand-alone solution – another entire solution separate from existing applications – and integration into the code bases of ServiceCloud and SalesCloud was nearly impossible; something a platform-based service could’ve done with little effort, if any.
  • The varied dot-com acquisitions, and the platform-based launches by their partners, were important to prove that (since they were real three-tier cloud solutions) platform-based services could be used and leveraged across different applications, and to prove that cloud computing was a far better model than monolithic solutions (I wrote about the acquisition of Assistly, now Desk.com, and covered some of these points).

Now it is 2011ish (not very precise – some of the points above became clearer as the development of S1 was underway – but it is the right timeframe going forward).  SFDC already knows that Force.com is not cutting it as a platform (proven when they could not launch Chatter as a platform service) and they need to do something.  What follows is one of the hardest decisions an Enterprise Software vendor can make – but one that will have proven incredibly beneficial for SFDC: they needed to re-architect Force.com.  This was the genesis of S1 (not the original name, and certainly not any name that was used during development).

SFDC makes the decision to re-architect the platform that was supposed to be the basis for everything they do – and to fully embrace the three-tier model of cloud computing.  The new platform will not only be extensible, secure, and elastic – it won’t have the interface code it had before (that becomes the true SaaS) and will have to separate the database and connectivity layers from the platform as well (this was the hardest thing to do, but that is another long story).
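To make the three-tier split concrete, here is a rough sketch of my own (hypothetical code, nothing to do with actual SFDC internals): data access, platform services, and interface code live in separate layers, so any one layer can be changed or scaled without dragging the others along.

```python
# Tier 1: data/connectivity layer -- the only code that touches storage
class DataLayer:
    def __init__(self):
        # Stand-in for a real database; hypothetical sample record
        self._rows = {"ACME": {"open_cases": 3}}

    def fetch(self, account: str) -> dict:
        return self._rows.get(account, {})


# Tier 2: platform layer -- business services; no UI, no storage details
class PlatformService:
    def __init__(self, data: DataLayer):
        self._data = data

    def case_summary(self, account: str) -> str:
        rec = self._data.fetch(account)
        return f"{account}: {rec.get('open_cases', 0)} open case(s)"


# Tier 3: interface layer -- thin client/SaaS code that only renders
def render(service: PlatformService, account: str) -> str:
    return service.case_summary(account)


print(render(PlatformService(DataLayer()), "ACME"))  # -> ACME: 3 open case(s)
```

The point of the separation: swapping the storage behind `DataLayer`, or adding a new interface on top of `render`, never requires touching the platform tier in the middle.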

Among the many things they realized once they got going was the power this change could confer.  Take Chatter, one of the best examples of it.  Chatter was a stand-alone solution that could, in the original implementation, bring some details from files and users into an activity stream.  It was not possible at the beginning to use Chatter for the new ideas that were emerging: make the stream part of all applications and functions, integrate it directly into groups and files to launch communities, and – hardest of all – make it the basis for the social enterprise model that SFDC had then espoused.

Chatter was redone – the second time as a full platform-based solution: a service-based application that can operate within the platform to serve functions to any other application. The re-launch of Chatter as a platform (done in 2012) was a showcase of the power of what the platform could (then) do: it quickly became part of everything that SFDC offered, and its functions were easily accessible not only by other applications but also by customers, partners, and even competitors (the back story on Chatter, the database licenses it required, and how that roadblock for its growth was solved in the deal reached in 2013 with Oracle is also another great story for later).

The Proposed Solution

And now we are at the end of 2012, beginning of 2013, with three incredibly important accomplishments:

  1. A newly re-architected platform (yet unnamed) that could change the Enterprise Software world
  2. A new version of Chatter that not only serves as the proof of concept for this platform, but is also the epitome of the many acquisitions and partnerships it took for SFDC to get to this point.
  3. The burning question of how to deploy and leverage this the best way possible.

Enter S1.

I am not yet sure of how and when S1 got its name, and it does not matter. What matters is what it can do: it has the power to change SFDC from a hosted-application vendor (fine, hybrid hosted-and-cloud application vendor if you prefer) to one of the few solutions in the market with the clout and power to change cloud computing and ensure its adoption by organizations (IBM and Microsoft are the closest – but that is a whole different story also; trying to stay focused).

Here comes DreamForce 2013 – the chance to introduce S1.

During the keynotes (there were many more than one) the emphasis for S1 was on its use as a mobile client (it’s an app you can download today in the app store – don’t wait!), as a platform and an app at the same time (as if you could be a car and a highway system at once), and as many more things than it actually is.  My blood pressure rose several points each time someone asked me what I thought of the “new mobile app: S1” or tried to convince me this was the culmination of Touch (a release SFDC did about two years ago to address HTML5 “clients” – a total failure that still exists somewhere in the Chatter mobile and other apps, but nowhere near what the expectations were at the time).

To be fair, most of the SFDC partners I talked to were very smart about it and are already working on very interesting modules that leverage S1. The three-tier model for cloud computing, and how SFDC is working to incorporate it into their solution, was not beyond the comprehension of partners – it was simply poorly explained by SFDC.

The fact that some SFDC employees were repeating this mantra of “mobile client” was what made it harder for me: knowing the effort and time it took to build it – why belittle it by calling it a mobile client?

S1 could be a mobile client – well, not really.  It can be displayed via a mobile client (mobile is an interface, not a client) as well as via a desktop, a laptop, a tablet, a smartphone, a partner application, a custom app, an embedded item, a connected machine, even a connected customer, and anything else in between.  Because it is a platform, any change you make to a service is IMMEDIATELY reflected in all clients and interfaces – that is the beauty of the three-tier cloud computing model.
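A tiny sketch of why that propagation happens (illustrative code only – the service and client names are invented, not a real SFDC API): the logic lives in one platform-side service, and every “client” is just a thin renderer of that service’s output, so changing the service changes what every interface shows with no per-client redeploy.

```python
def loan_check_service(score: int) -> dict:
    """Platform-side service: the single place the business rule lives.
    The 650 threshold is a made-up example value."""
    return {"approved": score >= 650, "score": score}


def mobile_interface(result: dict) -> str:
    # Mobile rendering: terse output for a small screen
    return "OK" if result["approved"] else "NO"


def desktop_interface(result: dict) -> str:
    # Desktop rendering: richer display of the identical payload
    status = "approved" if result["approved"] else "declined"
    return f"Application {status} (score {result['score']})"


result = loan_check_service(700)
print(mobile_interface(result))   # -> OK
print(desktop_interface(result))  # -> Application approved (score 700)
```

Change the threshold inside `loan_check_service` and the mobile, desktop, watch, or embedded rendering all reflect it immediately – none of them holds a copy of the rule.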

There are many benefits to S1 (the platform) that are not being discussed (and they will make the ending of this book-long post).  There are some I will miss in this short post (yes, short – I once wrote a 45+ page “simple” explanation of how cloud computing works).

Let me explain some of the things you can get when you stop calling it a mobile client and focus on the power of the platform (and we will extend that to ecosystems of platforms in another post):

  1. Any client, anywhere, can access any service offered by the platform.  This means that once you authenticate with a platform, anything else you want to access that is trusted by that platform is also accessible (PaaS-to-PaaS integration is far safer and more scalable than point-to-point integration or security as done today by monolithic solutions).
  2. Any interface (mobile, computer, watch, a “thing” in the internet of things) can access the functions and data (once properly secured and authenticated) and display them – you cannot have an “internet of things” or even an “internet of customers” without an ecosystem of trusted platforms (well, not a sustainable one at least).  This will lead to the rise of “atomized apps” (usually just called apps).  These apps are found in mobile devices and are single-function solutions that require no further logic (think about it this way: if your job involves checking people’s credit scores before approving an application for a loan – wouldn’t you prefer a simple app that does just that? If your smartphone or tablet can do it, why not your organization? It now can).
  3. The concept of multi-tenancy finally disappears (yay – I cannot tell you how happy I am), together with the concept of multi-instances.  We move to elastic single instances with single tenancy: each customer can have their own service (managed via systems management and metadata quite easily) and instantiate it as many times as they need – and make any changes they want in the process. No longer are customers constrained (either in data model or functionality) to what’s offered, as they can extend the functionality quite easily by making another service call (to any provider) without having to worry about changing the core service.  (Note: vendors will try to tell you how expensive this is compared to multi-tenancy, but ask yourself how “cheap” it was to use multi-tenancy and what benefits you as the customer derived from it – or read my previously linked post for that answer.)
  4. SFDC can create more modular “API calls”.  They introduced this at the same time as S1 but failed to mention why it was possible: any API is a library of many calls to different parts of a monolithic application, used to leverage its functions and data.  API calls require complex transactions for security, scalability, and even integration, which can reduce the granularity (read: complexity or simplicity, whichever you prefer) of how you can interact with it.  By using services that leverage tokenized security and inheritance (core benefits of cloud computing), the calls can be far simpler, and far more numerous, while performing at the same or better level.  Completing more service calls will use fewer resources and less time than doing the same via API calls.  Bottom line: you can use more granular functions with far fewer resources.
  5. Incorporating Enterprise Application Stores (EAS) into their cloud computing deployment will allow any organization to create as many atomized apps as they would like, thus reducing the complexity of the solutions used by customers, reducing training and support costs, and enabling and empowering customers to build and use their own “custom” version of the Enterprise Software they have running.  The device and operating system they run is irrelevant as long as they are supported by the EAS (SFDC announced, very quietly, the first version of their EAS at DreamForce).
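Points 1 and 2 above can be sketched together in a few lines (all names, the token scheme, and the sample data are hypothetical – real platforms would use OAuth-style flows, not this toy hash): an “atomized app” is nothing but one authenticated call to a single-purpose platform service, with the platform vouching for the caller.

```python
import hashlib

PLATFORM_SECRET = "demo-secret"  # stands in for a real credential store


def issue_token(user: str) -> str:
    # Authenticate once with the platform; every trusted service accepts the token
    return hashlib.sha256(f"{user}:{PLATFORM_SECRET}".encode()).hexdigest()


def verify_token(user: str, token: str) -> bool:
    return token == issue_token(user)


def credit_score_service(user: str, token: str, applicant: str) -> int:
    # Single-purpose platform service: the whole "atomized app" is this one call
    if not verify_token(user, token):
        raise PermissionError("untrusted caller")
    # Hypothetical scores; a real service would query platform data
    scores = {"alice": 712, "bob": 640}
    return scores.get(applicant, 0)


token = issue_token("loan_officer")
print(credit_score_service("loan_officer", token, "alice"))  # -> 712
```

The loan-officer app from point 2 carries no business logic of its own – it just renders the service’s answer – and because trust lives at the platform level, adding a second service behind the same token costs nothing on the client side.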

The $64,000.00 Question

If you followed all this so far, thank you.  I know it is a lot to consume.

You are probably asking by now: is that Salesforce1? Is the platform they launched and announced as a mobile client / platform / everything really what it is? Is it working yet?

Lots of questions, one simple answer. No.

Let me explain.

Salesforce1 at this point is a very well developed concept, an idea that has been partially implemented and (like I said above) has a lot of potential.  It has the potential to change how we do Enterprise Software and Cloud Computing forever.  It has the potential to change the way software vendors work with each other.  It has the potential to change how organizations think about ecosystems, about systems of engagement, and about everything from personalization to revenue models.

It has all that potential – but it needs to be realized.  By my estimates, it is about 60-70% complete right now.  Most of the basic APIs have been moved over to the new service-style granular API model, and a large number of customers have been running on the new platform without knowing it.

A development environment, an extension of the existing IDE, already exists, and service calls are working and available.  There are development manuals and directions, guides on how to do it, and even a “mobile client” (think Chatter mobile and you get a good picture) to allow anyone working with it to have a mobile interface to it.  The majority of the pieces are there, but it is not complete.

I talked to a few partners who have been working with it for some time.  They have been developing, and continue to develop, new solutions leveraging what Salesforce1 has to offer.  They have plans, new ideas, and the desire to build new models and new execution paths to fulfill the needs of their customers.  They are working on it and doing quite well from what I saw.

I also talked to a few, very few, early adopters that are beginning to explore and see what they can do with it.  Admins and Developers are getting a closer-to-the-ground look at the potential and power of the platform and creating very interesting apps and applications for it.

However, none of these are released (there are a few apps in the AppExchange that say “Salesforce1 ready” – but I have not found any customers using those versions yet).  By my estimates, we are at the very least six months away, but more likely nine to twelve months, from having some extraordinary solutions with momentum.  We are one to two years away from revolutionizing the way we use those apps, and three to five years away from changing revenue and business models to accommodate this new platform (including the core concepts of cloud computing).

Of course, timing will change from industry to industry, and from company size to company size.  There are no guarantees of how long it will take to get there, but this much is for sure: although Salesforce1 is not 100% ready today, it is excellent progress towards the realization of one of my visions – and the delivery of significant value to users, customers, and partners.

Time will tell.

This is a very, very brief summary of the many discussions I had over the past few days with different people.  There is a lot more than I can put in here; please contact me if you would like to discuss this in more detail.

I made the offer over Twitter before and I will make it again: I would be more than happy to invest the time and effort in helping anyone understand why S1 is far more than a mobile application or client.

The potential to change the game of Enterprise Software is phenomenal – let’s just hope SFDC does a good job of explaining it.

Benioff Tweet

Yeah, you better believe I will send them. I will share via this blog following…


Notable Posts (that means I agree with them)
Ben Kepes, amazing cloud dude
Brian Vellmure, analyst extraordinaire
Ken Yeung, interesting and smart reporter
disclaimer: These are my opinions; by no means are they official words from SFDC.  This is my understanding and it is not endorsed by anyone at SFDC.  I have not run this by them, nor have I sought approval.  Any errors, omissions, or mistakes are mine and mine only – Safe Harbor does not apply here; I am just telling a story as I see it.  Feel free to correct me in the comments or debate me as well.  I don’t censor comments – even when WP holds them for moderation, I always approve them.
disclaimer-2: I said this before: Salesforce is one of the smart companies that took me up on the offer to become my client.  I am very appreciative of the years we have worked together, the incredible access to information, the many debates we held (still hold, never ending) about cloud and software, and their friendship and support for my work.  They also pay for me to attend Dreamforce every year, including hotels, meals, the registration fee, and a few parties here and there, as well as a nice analyst swag bag.  I won’t deny it, they spoil me.  However, as you can see throughout the text, that does not mean I will be nice to them or not call them out on their mistakes (yes, many through history).  Like anybody else I chose to invite to be my client, they listen and sometimes work on their mistakes, sometimes they don’t (but all I can seriously ask is that they listen).  Everything they gave me to date has resulted in a stronger understanding of the potential that Salesforce1 has to change Enterprise Software (and not via more Marketing, as Marc Benioff mentioned in a tweet).  I just hope they realize it; it would make many of my long-held visions begin to come true (you know it is all about me, right?)
