Three Lessons Watson Taught Us to Improve Customer Service

Well, I had to do it. I had to rip this one off the headlines and apply it to our customer service problems.

In case you call a rock your comfortable abode (or have a life outside of the echo chamber of Twitter that does not include watching Jeopardy), the news is that IBM created a supercomputer that – what’s the best way to put it… trounced, annihilated, destroyed, humiliated… I know – summarily defeated two human opponents at Jeopardy (a trivia question-and-answer game that is very popular in the United States, and quite hard for most normal human beings to master – not me, of course). The computer played against the top two Jeopardy mega-champions: Ken Jennings (who won every game for almost six months straight) and Brad Rutter (who won the largest sum of money in the game’s history). Here is a wrap-up article by Ken Jennings that explains most of what you need to know about this event.

Why does it matter to Customer Service?

It took IBM 25 scientists and four years to program the computer to understand the language used in Jeopardy, to select and store the necessary knowledge (it could not be connected to the Internet – federal regulations; ever seen the movie “Quiz Show”?), and to have it learn the rules of the game, in addition to training it to play. Four years, 25 language scientists.

The problem to be solved is far larger than your customer service implementation, right? Right? Well, this is where the lessons learned come in – if you take the time to analyze the results…

Lesson One – Constrain. The scientists started with the premise that the game show could ask any question, about anything, from any time and any place. That is a lot of knowledge to condense and feed to a computer. They had to, somehow, constrain the knowledge base: define better what they had to understand, where the answers might be, and where to find them. Watson had 15TB of data available. Far more than your standard customer service setup, for sure, but a minuscule, tiny, insignificant amount compared to the 3.6 zettabytes (don’t try; you cannot even picture it) we consume each day, or the more than 20 petabytes that Google processes every day (just for your information, that number was 10 petabytes less than 12 months ago). As you can see, there was a lot of constraint shown in choosing the 15TB of data that Watson had available to generate answers. The same principle applies to your service and support solution. I am sure that the knowledge base with 120,000 articles is a source of joy for your organization, but keeping articles in there that tell you how to solve your Microsoft Bob, Newton, or CP/M problems only muddles the process of finding the right answer. Choose the knowledge you need to use wisely, and be very, very good at keeping that number small and manageable; trim the unnecessary and add the necessary swiftly. It is far worse to not find the one article you need than to have 119,999 you don’t.
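
To make “trim the unnecessary” concrete, here is a minimal sketch of a retirement rule for knowledge base articles. Everything in it – the Article record, the usage fields, the thresholds – is a hypothetical illustration, not any vendor’s product:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical knowledge-base record; real systems will have richer metadata.
@dataclass
class Article:
    title: str
    last_used: date      # last time an agent or customer opened it
    times_served: int    # how often it was served in the past year
    solved_cases: int    # how often it actually closed a case

def should_retire(a: Article, today: date) -> bool:
    """Flag articles that only muddle the search results.

    Assumed rule of thumb: anything untouched for a year, or served
    often but almost never solving anything, is a retirement candidate.
    """
    stale = today - a.last_used > timedelta(days=365)
    dead_weight = a.times_served >= 50 and a.solved_cases / a.times_served < 0.02
    return stale or dead_weight

kb = [
    Article("Reset your password", date(2011, 2, 10), 900, 610),
    Article("Troubleshooting CP/M disk errors", date(2003, 5, 1), 3, 0),
]
keep = [a for a in kb if not should_retire(a, date(2011, 2, 20))]
print([a.title for a in keep])  # the CP/M article gets trimmed
```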

Lesson Two – Simplify. Bells and whistles are awesome – you can do lots of things to call attention to something good you are doing, to try to make it more powerful, more attractive, and further reaching. Bells and whistles, however, are not a solution. The key to Watson winning was not only having the right knowledge, but understanding the process. Now, think about the many decisions you as a human would have to make if you were playing Jeopardy, and the speed at which you would have to execute those actions. If you could simplify the process, reduce the number of steps, and focus on the core of what you are doing, you would be far ahead of the game. You can do this with your customer service setup: simplify the process, and make sure that both customers and agents can get to THE answer faster and easier. The researchers at IBM sought out the best example of how to play Jeopardy (Ken Jennings in this case) and reduced the complex model they had built to accommodate his specific style of play – simplification at its best. Simplifying also makes a solution far easier to maintain, since you already know what is not necessary to have.

Lesson Three – Learn. What can I say about learning and training your systems that I have not said? The world of support is divided into two: those that learn from their operations, errors, and successes – and those that are no longer in business. A client of mine in the old days deployed a very costly knowledge management solution, but “forgot” to add the necessary routines to learn from its mistakes and grow the solution by trial and error. By the time they figured out they needed them, almost three months later, it was impossible to control the monster they had created and they had to go back to the starting blocks. Learning from the successes and failures of your solution, whether automatically or not, is what is going to make your certainty increase, the right answers show up more often, and your knowledge base remain simple and effective. Virtually everywhere you read about Watson, it says how it learned from playing, and how it became better the more it learned.
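
If the learning routines my client “forgot” sound abstract, here is a minimal sketch of one, assuming a made-up case-outcome feed and a simple moving-average update; real knowledge management products will differ:

```python
# Minimal sketch of a feedback loop: every resolved (or failed) case
# nudges the confidence of the article that was served. The field names
# and the update rule are illustrative assumptions, not a vendor's API.
confidence: dict[str, float] = {"reset-password": 0.5, "cpm-disk-errors": 0.5}

def record_outcome(article_id: str, solved: bool, rate: float = 0.1) -> None:
    """Exponential moving average toward 1.0 on success, 0.0 on failure."""
    target = 1.0 if solved else 0.0
    confidence[article_id] += rate * (target - confidence[article_id])

# Simulate a month of cases: the useful article keeps getting confirmed,
# the stale one keeps failing, and the ranking adjusts on its own.
for _ in range(20):
    record_outcome("reset-password", solved=True)
    record_outcome("cpm-disk-errors", solved=False)

for article_id, score in sorted(confidence.items(), key=lambda kv: -kv[1]):
    print(f"{article_id}: {score:.2f}")
```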

Finally, one word of warning: among the many interviews that IBM researchers and scientists gave during the three-day monster-computer-demolition event, one of them said that the real excitement was not that Watson could win at Jeopardy, or that they could program it to do so, but that they had real-life applications waiting to be taken on with the same technology. At the top of the list: Customer Support.

I don’t, for one, welcome our computer overlords (a twist on Ken Jennings’ closing phrase after being “p0wn3d” by the computer). Apparently, neither does Andy.

What do you think? What were your impressions of Watson versus the humans? Over-hyped and under-delivered? Over-delivered and under-hyped? Mixed? I would love to hear your thoughts, and whether or not you welcome your computer overlords.

8 Replies to “Three Lessons Watson Taught Us to Improve Customer Service”

  1. Watson also had an advantage in buzzing in. I’ve auditioned for Jeopardy and the hardest part is not knowing the answers – it’s timing when you push the fiddly button. You can’t buzz in until after Alex finishes reading the question AND a light goes out. Buzz in too early and you’re locked out. Watson was hard-wired into the system, giving the computer a huge physical advantage over its competitors. (I think there’s a metaphor for integration in there somewhere…!)


    1. Chris,

      I had the same feeling about it as I was watching – it was not an advantage of semantic search so much as of early buzzing (always the issue with Jeopardy, and a point that Ken Jennings has made on many occasions).

      Alas, I think that the semantic search that was used, and the answer display underneath “Watson” for each question as they were playing, were what made it interesting for me. There is something there, and the broadness of the topics chosen, the speed of computation, and the results show that we are advancing.

      Thanks for the read!


  2. I think the Watson idea was good for TV and created enough buzz to justify it. I like the parallel to customer service challenges, and you do a good job of spelling out what one needs to do. I am doing a lot of research lately on chatbots/virtual agents to help drive customer experience and reduce cost. The knowledge base component of virtual agent solutions parallels the Watson challenge. Seed it with enough of the “Universe of Questions,” and then you must review what customers are actually asking and alter it accordingly. But in the end it requires a continuing commitment, which I see as your underlying message.


    1. Kevin,

      I absolutely adore the idea of chatbots – I have been promoting them since the late 1990s. There are a few good implementations, not many unfortunately, and few offerings left standing. The first of them, ELIZA, was actually a therapist… and it worked OK, but the progress shown in the later models (what we are using today) is remarkable.

      However, they all have the same problem: they need to be trained, they have a very narrow view of the world, and they need constant, methodical maintenance. It becomes cumbersome for some organizations after a while – it requires a strong commitment to succeed (as with any other channel, it requires no commitment to fail – pun intended).

      As I was telling Chris above, to me the key here is that I saw a very broad subject matter, good results, and speed that I had not seen before (then again, none of the vendors that I followed used supercomputers as their core). I think there is something there if it can be productized and packaged (well, it is IBM that owns it after all – just look at their track record).

      Thanks for the read!


  3. If only people asked questions as consistently as a Jeopardy question! The problem is they don’t, and that’s why semantic search engines never get close to their marketing hype. In fact, I suspect that a really good semantic engine will not have a clue about the real intent of a question 70%–80% of the time, for anything but situations where the subject matter searched and the types of questions asked are pretty narrow. Generally not a good user experience for anything but the softball questions.

    Search engines that can intelligently leverage insights gained from others searching still have the best shot at getting the most people, the quickest, to the answers they seek. Google proves this every day and has to wade through a boatload of content people have created to game SEO.

    Answer: What is reality?


    1. Chuck,

      Indeed, intent will ruin anybody’s approach to – well, anything that is automated. Few, if any, vendors have managed to understand it, let alone include it in their products (yes, they do include it in their marketing materials; there is a gap, though, between those claims and what is delivered).

      I focused on and researched intent for a while back in the old country, and I can tell you that most people are amazed when you can really use it well and deliver better results. Heck, even using it to go from 20% confidence to 40% confidence surprises people.

      However, from what I saw, Watson was not using intent to answer questions; it seemed to be running parallel searches on similar items with different methods, then reconciling the answers into a possible topic. It was correct about 80% of the time (which is why the other two players were able to chime in now and then with some answers), which is a significant improvement over anything I have ever seen.
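
      In case that “parallel searches, then reconcile” idea is hard to picture, here is a toy sketch of it; the three stub methods and the voting rule are my own illustration, not how IBM actually built it:

      ```python
      from collections import Counter

      # Toy version of "run several methods in parallel, then reconcile":
      # each method returns its best guess, and agreement across methods
      # stands in for confidence. The methods are stubs for illustration.
      def keyword_search(question: str) -> str: return "1492"
      def ngram_match(question: str) -> str:    return "1492"
      def category_prior(question: str) -> str: return "1493"

      METHODS = [keyword_search, ngram_match, category_prior]

      def reconcile(question: str, threshold: float = 0.5) -> str | None:
          votes = Counter(method(question) for method in METHODS)
          answer, count = votes.most_common(1)[0]
          # Like Watson declining to buzz in, stay silent when agreement is low.
          return answer if count / len(METHODS) >= threshold else None

      print(reconcile("When did Columbus sail the ocean blue?"))  # 1492; 2 of 3 agree
      ```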

      The point I was driving at: organizations can learn to constrain and restrict their searches and findings to offer better customer service. It may not be 100%, or probably even 60% – but going from 20% to 40–60% is huge.

      Thanks for the read!

