rurwin, I was making working assumptions, and of course this is better if there is low-hanging fruit. You just gave a 98% success rate, which would take millions to compete with. It actually takes away four pages of subjectivity and demonstrates that it's not worth putting the effort into improving. I normally work on tackling success rates way lower than that, and I'm not pushing any professional services. Maybe I got carried away with enthusiasm.
I still think it would be interesting to know the top 3 this and thats, and patterns. Oh, and how to do something quicker the second time around. I'm sorry if I came across as smug, but I've had the wind knocked out of me now.
I just realised it was rude of me not to answer your question, though. At the last place I worked (I was not the manager of the help desk, nor of the service desk, but the principles still apply) the target was 80% approval, measured by closed issues and a satisfaction survey. Since there was always a bow-wave of open "squeals", it was measured monthly by volume. Supporters gamed this by opening many issues for a single user. The satisfaction figure was skewed because people already avoided the supporters and wouldn't even spend the time to fill in a survey.

There was also a service level agreement, measured by how many times and how severely it was breached. Not applicable here, except that it is interesting for showing speed of response and accuracy: did they have to do things twice, how quickly could things be closed, and were there any things which regularly came back to haunt the group? The SLA breach target was <15%, but again that was gamed, because in a commercial organisation pay and bonuses relied on the figure. When you make a target the incentive, suddenly the target gets hit instead of the work. I always thought 15% was appalling, but it was defended to the death because the director's children's Christmas presents depended on it.

There were a ton of ready-made, and some more interesting, analyses which come down to critical success factors and KPIs, which nobody should be bothered with here. Once you get past the 90th percentile, the exponential effort becomes far more obvious, and it takes real sponsorship and determination to get better. My experience is shared with others: the willpower goes, people move on and entropy sets in.
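For anyone curious how those two figures were actually produced, here is a minimal sketch. The `Ticket` fields and the helper are entirely my own invention for illustration, not what that company ran, but the shape is the same: approval is computed only over closed issues whose survey came back, and SLA breach is a simple proportion of closed issues.

```python
# Hypothetical sketch of the two KPIs described above: monthly approval
# rate and SLA breach rate. All names here are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Ticket:
    closed: bool
    satisfied: Optional[bool]  # None = customer never returned the survey
    sla_breached: bool

def monthly_kpis(tickets):
    closed = [t for t in tickets if t.closed]
    surveyed = [t for t in closed if t.satisfied is not None]
    # Approval only counts returned surveys, so every avoided survey
    # silently drops out of the denominator - the skew described above.
    approval = (sum(t.satisfied for t in surveyed) / len(surveyed)
                if surveyed else 0.0)
    breach = (sum(t.sla_breached for t in closed) / len(closed)
              if closed else 0.0)
    return approval, breach

tickets = [Ticket(True, True, False), Ticket(True, None, False),
           Ticket(True, False, True), Ticket(False, None, False)]
approval, breach = monthly_kpis(tickets)
```

Note how easy both numbers are to game: opening several issues for one user inflates the closed-issue volume, and unreturned surveys vanish from the approval denominator entirely.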
Out of 3000 customers, there were 2000 squeals a month and about 500 service requests, rising to 4000 squeals when rolling out Win7. The types of issue really were everything which distracted from the main business, which was everything except buying Christmas presents. The first solution to be registered, and always the most popular, was to re-ghost the machine, the equivalent of downloading a new SD image. The most successful fix, but the one that left the customer unhappy, was replacing the machine. It takes a month for any user to get their personal computer back the way they liked it. That was the source of the most sad faces on the survey. It also led to avoidance of the support group and reliance on an underground favour-bartering system.
All this is moot, because what I was suggesting was a quicker route to the knowledge, based on a sample of one (me) and subjective observation of others. I really was not suggesting that any of the above should be adopted; just a way of getting out of the "jeepers, not this again" syndrome.
Staffing the solution side in a commercial venture is not difficult, since logically it should reduce repeated answers, and people were already freed up through that. Of course the work expanded, because people were free to inflate the numbers. Here, solutions are provided by volunteers, so I have no answer for that, except that it would be a challenge to overcome. But now there is no case for it.
One of the more interesting things, which I think has a parallel for the foundation, was when 3000 "customers" were migrated, in this case from Windows XP to Windows 7. I've since seen it go for 100 times that many. The project was late, and the challenge was to do it with as little disturbance as possible to the customer base, measured by the squeals. Well, the noise doubled: twice as many calls and complaints every month for the life of the roll-out, and then for a couple of months afterwards. Worldwide. <end of parallel here> But the satisfaction levels remained the same, and the service levels were rock steady at just under 15% breached, so we knew things were being gamed. The boss was too chicken to mention it, though, and the rollout team had a party, because a bigger throughput of issues meant more Christmas presents.
The previous company was similar: the more issues, the more powerful the support group. Forget trying to be smart and fix things before they hit the customer; the customer is part of the cycle. "80% functional is good enough, and we can handle the rest through support. We build our business that way." They had a specific tolerance for failure, sponsored by the CEO, all the way down. Some of the community will almost certainly use their software. The trouble is, nobody ever measured the 80%.
It's just a case of picking up some lessons, and knowing what it's useful to know. If it can get quicker and more accurate then great. If it can't ...<shrug>
For those wondering why I keep editing, it's because I keep seeing possible ambiguities in what I wrote, or I didn't answer a question. OK I'll stop now.
Last edited by Wattie on Fri Jan 22, 2016 4:42 pm, edited 5 times in total.