Surveys Done Right – Part 1 – Point-of-Service Surveys

I am going to break some very old rules of mine to write this post.  Ever since I introduced the three-layer model for surveys while at Gartner (point-of-service, customer-satisfaction, planning) I have been getting requests for “sample questions”.  I have maintained, and continue to maintain, that I cannot provide sample questions: all questions need to be created according to the situation, the respondent base, the strategy and vision for your feedback initiative, and the standards and rules you set for your surveys.  Of course, they also have to be personalized to the respondent and the situation, and written to match the delivery and collection channel.  This is as basic as it gets when writing surveys.  My concern is that when someone gets “sample questions” they become “THE questions” without further tinkering, and that is just wrong.

So, the counterpoint is that I have seen point-of-service surveys implemented with some truly horrendous ideas.  I have experienced “short” surveys of 10 questions asking all sorts of things, and questions so badly written that they are almost impossible to answer.  Thus, as a public service (yes, I know I am a selfless philanthropist when it comes to surveys) I am going to break the rule and make this post about two things: a reminder of how point-of-service surveys should be done, and a set of sample questions (which I will regret for a long time, and possibly my grand-kids will as well).

First, how does this work?  Point-of-service surveys (also called point-of-delivery surveys) are SHORT (yes, that needs to be shouted), two-to-three-question surveys aimed at discovering the efficacy (not the efficiency) of the service interaction.  In other words: did we do a good job delivering service, and was it what you needed?  They are intended to spot any problems during delivery, and to fix them before they become customer service issues or lead to dissatisfied customers.  Simple, huh?

Now, the main point of doing this is preventing service issues from becoming problems.  Thus, the critical part is not running the survey, but actually having processes in place to reach out to customers and fix their problems when either of these questions returns a negative answer.  This is where most companies fail: they don’t have documented, specific processes in place to take care of negative answers quickly (yes, speed matters).  That is why I am bringing this up: even if you copy the questions from the bottom of this post, please, please, please make sure you have the necessary processes in place before running these surveys.
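If it helps to make that concrete, here is a minimal sketch (in Python, purely illustrative) of what such a process hook could look like: a survey response comes in, and any negative answer opens a follow-up case with a deadline.  The `create_case` function and the four-hour deadline are stand-ins I made up; wire in your own case system and your own speed target.

```python
from datetime import datetime, timedelta, timezone

FOLLOW_UP_SLA = timedelta(hours=4)  # placeholder: set your own target (speed matters)

def handle_response(response, create_case):
    """Route any negative point-of-service answer to a follow-up case.

    `response` is a dict like:
      {"customer_id": "...", "interaction_id": "...",
       "answers": {"q1": "no", "q2": "yes"}}
    `create_case` is whatever function opens a case in your
    service platform (hypothetical; substitute your own).
    """
    negatives = [q for q, a in response["answers"].items() if a == "no"]
    if not negatives:
        return  # all positive; nothing to fix, no case needed
    create_case(
        customer_id=response["customer_id"],
        interaction_id=response["interaction_id"],
        failed_questions=negatives,
        due_by=datetime.now(timezone.utc) + FOLLOW_UP_SLA,
    )
```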

Final point I want to make, then we move to the actual questions: the channel of delivery matters.  If at all possible, keep the survey in the same channel where the transaction took place, and run it immediately after the interaction.  If the customer called, make the survey an IVR-driven survey at the end of the call (no, don’t make another call to follow up… it does not work that way).  Email came in?  Email goes out (as quickly as possible, not 2-3 days later).  If you cannot keep the channel of service as the channel of delivery (or you cannot survey immediately after the transaction), then your best bet is email surveys.  No, not emails with links to online surveys: email surveys, where the questions are in the email itself and customers can answer simply and quickly.  OK, getting off the soapbox now (yes, I am passionate about this “stuff” being done well).
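The channel-matching rule is simple enough to write down as a lookup.  This is a hypothetical sketch with made-up channel names, not anyone’s real API; the point is only that the survey channel is derived from the interaction channel, with embedded email as the fallback.

```python
# Illustrative channel-matching rule: the survey follows the
# interaction's own channel, immediately after the transaction.
SURVEY_CHANNEL = {
    "phone": "post_call_ivr",   # IVR survey at the end of the same call
    "email": "reply_email",     # questions embedded in the outgoing reply
    "chat": "in_chat_prompt",   # quick prompt before the chat closes
}

def pick_survey_channel(interaction_channel):
    # Fallback when the service channel cannot carry the survey:
    # an email with the questions in the body, not a link.
    return SURVEY_CHANNEL.get(interaction_channel, "embedded_email")
```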

CAVEAT: I know I said this before, but please, please, pleaseeeeee customize these for your situation and personalize them. Please?

Question 1 (this one should never change): Did you receive the answer you needed?

Question 2 (choose from the four below, based on what else you need to measure):

Q2.1: Did we do a good job delivering the answer? (my favorite, but a little broad in meaning)

Q2.2: Was our service cordial and polite? (in other words, who needs some training or talking to)

Q2.3: Was our representative knowledgeable? (again, training or knowledge-management issues)

Q2.4: Was our representative prompt in answering your questions? (are we responsive, or do we keep customers waiting?)

You get the idea: depending on which part of the interaction is critical, you can change the second question.
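And if you are wiring these into a survey tool, the whole rule fits in a few lines.  Again, this is a sketch: the question wording is mine from above, and the focus keys are made up for illustration.

```python
# Illustrative encoding of the rule above: question 1 never
# changes; question 2 is picked for what you need to measure.
Q1 = "Did you receive the answer you needed?"

Q2_OPTIONS = {
    "delivery":   "Did we do a good job delivering the answer?",
    "courtesy":   "Was our service cordial and polite?",
    "knowledge":  "Was our representative knowledgeable?",
    "promptness": "Was our representative prompt in answering your questions?",
}

def build_survey(focus="delivery"):
    # Two questions, no more: that is the whole point.
    return [Q1, Q2_OPTIONS[focus]]
```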

So, please don’t let me down. Customize, personalize and (more importantly) let me know how it goes…
