Monday, August 8, 2011

Evaluating Training

The dilemma I'm facing when it comes to evaluating training is that I primarily train on software, and I also manage the support line for those same applications.  The question becomes: if I get more calls after doing training, has the training been effective?  Who are the people that call?  Those who have taken the training, or those who refuse to?  A hodgepodge of both?

Tomorrow I'm going to evaluate the effectiveness of some bi-monthly (as in every other month) calls I do with the key players in each office for one of our apps.  I noticed that about 25% of the offices didn't have a person represented on the last two calls.  Will these offices make fewer or more calls to the support line?  My guess is fewer.  If the key players show no interest in the bi-monthly calls, there would also be less interest in the firm as a whole.  But I may be proven wrong -- they may just have no direction as to where to go for training and where to get guidance, and call us even more.

There are other measures for evaluating just run-of-the-mill training, like user surveys.  They work OK, but they miss a key element in gauging how well people have learned the application.  If only I could record the mouse-clicks of users in the app before and after training and have some software tool analyze the differences.  Some computer nerd out there, do you hear me?
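For what it's worth, the analysis half of that wish isn't far-fetched.  Here's a minimal sketch of what a before/after click-log comparison might look like, assuming click events were captured as simple records of which control the user clicked (the event format and field names here are invented for illustration -- no such recording tool is implied):

```python
from collections import Counter

def click_profile(events):
    """Count how often each UI element was clicked in an event log."""
    return Counter(e["target"] for e in events)

def training_delta(before, after):
    """Compare per-element click counts before vs. after training.

    Positive values mean more clicks on that element after training;
    negative values mean fewer.
    """
    b, a = click_profile(before), click_profile(after)
    return {target: a[target] - b[target] for target in set(b) | set(a)}

# Hypothetical logs: each event records the control the user clicked.
before = [{"target": "Help"}, {"target": "Help"}, {"target": "Save"}]
after = [{"target": "Save"}, {"target": "Save"}]

print(training_delta(before, after))  # {'Help': -2, 'Save': 1}
```

A drop in clicks on things like the Help menu after training, with task-related clicks holding steady or rising, would be one rough signal that the training stuck.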