Monday, May 4, 2009

Guide to the Approval of Strategic Outcomes and Program Activity Architectures

I didn't think this document was available to the general public, but it is, and here is the link:

http://www.tbs-sct.gc.ca/pubs_pol/dcgpubs/mrrsp-psgrr/guide/guide01-eng.asp

This is the guide to the approval process for Strategic Outcomes (SOs) and Program Activity Architectures (PAAs). At least, it is the guide for the 2010-2011 PAA process.

For those who don't know, the SOs and PAA are the skeleton of the framework used by departments of the Government of Canada for public reporting.

Wednesday, April 29, 2009

Measuring Efficiency

As discussed in an earlier post, efficiency can generally be conceived of as the ratio of the output to the input of a system. So how do you measure that, and why would you want to do so?

Basically, the way to measure efficiency is to link resources to results, whether that result is an output or an outcome. Obviously, if you start linking final outcomes directly to resources, your model is going to be somewhat questionable, but what you may want to try to do is measure how many resources it takes to maintain a certain level of achievement of a lower-level outcome. What am I talking about?

Take, for example, the outcome of client satisfaction. A lower-level outcome could be customer satisfaction with telephone support, and an indicator of that could be call wait time. Now, say you set a target of 2 minutes (this could be a service standard): how many employees do you need on the lines to keep call wait times within that target? To link the result to the resources in this case, you'll need to measure at least 3 things for it to be meaningful: the call wait time, the number of employees answering calls, and the number of calls. At some point, by looking at historical data, finding trends and building forecasts, you should be able to get a pretty good idea of how many employees you need answering calls at different times of the year to stay within your target call wait time. That was a fairly involved example, and the analysis should go further, because there are also costs involved in adding and removing staff.
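To make this concrete, here is a minimal sketch, in Python, of one way to use historical data to estimate staffing needs. The figures, field names and the simple calls-per-agent rule are all made up for illustration; a real analysis would likely use a proper queuing model and account for the costs of adding and removing staff.

# Illustrative sketch: estimating how many agents are needed to keep average
# call wait time within a 2-minute target, using made-up historical data.

records = [
    # (month, calls_received, agents_on_lines, avg_wait_minutes)
    ("2008-10", 4200, 10, 1.6),
    ("2008-11", 5100, 11, 2.4),
    ("2008-12", 6800, 15, 1.9),
    ("2009-01", 4500, 10, 1.8),
]

TARGET_WAIT = 2.0  # minutes (the service standard)

# Find the highest calls-per-agent workload that still met the target.
# This is a crude proxy; real call centres behave like queues, not ratios.
sustainable_ratios = [
    calls / agents
    for _, calls, agents, wait in records
    if wait <= TARGET_WAIT
]
max_sustainable_ratio = max(sustainable_ratios)

# Forecast: how many agents for an expected 7,500 calls next month?
forecast_calls = 7500
agents_needed = -(-forecast_calls // int(max_sustainable_ratio))  # ceiling division
print(f"Roughly {agents_needed} agents needed to stay within the {TARGET_WAIT}-minute target")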

But here are some simpler examples of efficiency indicators:

  • Average number of hours per file
  • Average number of days to staff a position
  • Average cost of a staffing process
  • Average cost per unit (e.g., production lines)
  • Total value of sales per month per salesperson
  • Server up time
  • Number of units produced by machine 4 per week

As you can see, you can link different types of outputs or outcomes to different types of resources (time, employees, funds, etc.). However, one of the main weaknesses of efficiency measures is that they do not generally assess quality. For example, if it takes 5 hours on average to review a file, but a lot of mistakes are made, or steps are skipped to reduce the time required, then you have just created a perverse indicator, or a perverse incentive. The lesson? Balance efficiency measures with measures of quality.

Why would you want to measure efficiency? To optimize the use of limited resources.

Thursday, March 26, 2009

Service Standards and Year End Reporting

Let’s assume we are in April 2009 and take the following example of some applications:

Receive Date         Processing Finish Date   Number of Days to Process
February 1, 2009     February 15, 2009        14
February 15, 2009    February 28, 2009        13
February 15, 2009    March 1, 2009            14
February 15, 2009    March 15, 2009           28
March 1, 2009        March 15, 2009           14
March 15, 2009       March 31, 2009           16
March 31, 2009       TBD


Now, let’s assume that no applications were received in January and that the service standard for this type of application is 15 days (applications are to be processed within 15 days).

There are two ways of looking at this. The typical view is to go by the receive date: you measure the number of days it took to process each application, counted from the day on which it was received, and group applications by the period in which they were received. The other possibility is to group applications by the date on which their processing was finished. This second method is less common.

Here is a tricky question: “What was the average processing time for applications in February?” The question is tricky because it doesn’t tell you which date to use as the base: are we talking about the applications received in February, or the applications processed (finished) in February? Here are the options:

Average Processing Time of Applications by Received Date

Month       Average Processing Time (days)
February    17.25
March       TBD


Average Processing Time of Applications by Processing Finish Date

Month       Average Processing Time (days)
February    13.5
March       18


Going by the received date, 75% of applications received in February were processed within the service standard (15 days). But of the applications processed (finished) in February, 100% were processed within the service standard (given that no applications were received in January, as we assume in this example).
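For the curious, here is a minimal Python sketch of both calculation methods, using the example data from the table above (the structure and names are just one way of setting it up):

# Sketch of the two calculation methods. Note how the "by received date" view
# for March cannot be finalized while an application is still in process.
from datetime import date
from collections import defaultdict

applications = [
    (date(2009, 2, 1),  date(2009, 2, 15)),
    (date(2009, 2, 15), date(2009, 2, 28)),
    (date(2009, 2, 15), date(2009, 3, 1)),
    (date(2009, 2, 15), date(2009, 3, 15)),
    (date(2009, 3, 1),  date(2009, 3, 15)),
    (date(2009, 3, 15), date(2009, 3, 31)),
    (date(2009, 3, 31), None),  # still being processed (TBD)
]

STANDARD = 15  # service standard, in days

def report(group_by_received):
    """Print average processing time and share within the standard,
    grouping applications either by receive month or by finish month."""
    groups = defaultdict(list)   # month name -> processing times (days)
    pending = defaultdict(int)   # month name -> applications still in process
    for received, finished in applications:
        if finished is None:
            if group_by_received:
                pending[received.strftime("%B")] += 1
            continue  # no finish date yet, so nothing to measure
        month = (received if group_by_received else finished).strftime("%B")
        groups[month].append((finished - received).days)
    for month in ("February", "March"):
        if pending.get(month):
            print(f"  {month}: TBD ({pending[month]} application(s) still in process)")
        elif month in groups:
            days = groups[month]
            avg = sum(days) / len(days)
            pct = sum(d <= STANDARD for d in days) / len(days)
            print(f"  {month}: average {avg:.2f} days, {pct:.0%} within the 15-day standard")

print("By received date:")
report(group_by_received=True)
print("By processing finish date:")
report(group_by_received=False)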

Normally, I expect most organizations to use the first method, based on the date on which the application is received.

So the fiscal year is over (it ends March 31), and it’s now April 7, 2009. Can you produce accurate statements on your performance against the service standard for applications received in March, using the first method (based on the date the application is received)? The answer is no: your service standard is 15 days, and only 7 days have passed since the last day of March. April 15 is the last day on which an application received on March 31 can still be processed within the service standard, so you would only know how many of the applications received in March were processed within the service standard at the end of the business day on April 15. That assumes you have instant access to up-to-date information, which is not always the case. If there is a delay between the time an application is finished processing and the time you know about it, you also need to take that into account. This is often the case for electronic systems: there is frequently a delay between data entry into the application and the availability of the data in the data marts or cubes used for reporting.
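As a small illustration, here is the same arithmetic in Python, with a hypothetical two-day reporting data lag added on top of the service standard:

# Earliest date for a complete report on applications received in March,
# under the assumptions above (15-day standard, assumed 2-day data lag).
from datetime import date, timedelta

last_receive_date = date(2009, 3, 31)
service_standard = timedelta(days=15)
reporting_data_lag = timedelta(days=2)  # hypothetical delay before data reaches the reporting cube

earliest_complete_report = last_receive_date + service_standard + reporting_data_lag
print(earliest_complete_report)  # 2009-04-17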

In conclusion, be aware of your service standards, of how your performance is calculated, and of the delays in the availability of data when you are doing your year end reporting, or you could end up with inaccurate performance statements.

Tuesday, March 17, 2009

Efficiency and Effectiveness

Efficiency and effectiveness are two closely linked concepts that are, unfortunately, often misunderstood. So what are efficiency and effectiveness?

Efficiency

Efficiency relates to the amount of resources (input) used to achieve a goal (output). Efficiency can generally be conceived of as the ratio of the output to the input of any system. An efficient system would have a high output to input ratio, that is, it would produce a lot of the output for little of the input. There are different situations that can describe a gain in efficiency:

1. producing more output with a given amount of input
2. producing a given amount of output with a reduced amount of input

The other 2 situations representing possible efficiency gains,

3. producing more output with more input
4. producing less output with less input

depend on how the output-to-input ratio changes. In the first case (situation 3), an additional unit of input must lead to more additional units of output than the current value of the ratio for the change to represent a gain in efficiency. In the second (situation 4), a reduction of one unit of input must be accompanied by a reduction of fewer units of output than the current value of the ratio. In other words, the output-to-input ratio must increase.
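Here is a quick numeric check of the third situation (more output with more input), with made-up figures, just to illustrate the rule:

# Made-up figures: the change counts as an efficiency gain only if the
# marginal output per added unit of input exceeds the current ratio.
current_output, current_input = 100, 10   # current ratio = 10.0
added_output, added_input = 26, 2         # marginal ratio = 13.0

current_ratio = current_output / current_input
new_ratio = (current_output + added_output) / (current_input + added_input)

print(current_ratio, new_ratio)   # 10.0 10.5 -> the ratio increased
print(new_ratio > current_ratio)  # True, so this is an efficiency gain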

Effectiveness

Effectiveness relates to whether the means used lead to the end. In other words, whether the action has the intended result. In the business context, it most often refers to the extent to which a program or service is meeting its stated goals and objectives (or outcomes). Improving effectiveness usually means changing something (normally the action) that will increase the extent to which the goal is met. For example, improving the effectiveness of an anti-smoking program would mean changing something that would decrease the percentage of people who smoke (if that’s the indicator you decide to use to measure the achievement of the goal).

It should be noted that a program’s effectiveness can also be affected by changes outside the program’s scope of influence, both positively and negatively. That is part of the reason for environmental scanning. Effectiveness is one of those concepts where it is important to understand the difference between correlation and causality.

Tuesday, February 17, 2009

Tabling of the 2007-2008 Departmental Performance Reports and Canada's Performance Report

The 2007-2008 Departmental Performance Reports (DPRs) and the 2007-08 Canada's Performance Report were tabled on February 5, 2009.

Departmental Performance Reports are reports written by departments and agencies at the end of the fiscal year. They describe what the organization has achieved (its performance) and how it performed compared to its plans and goals.

Canada's Performance Report is basically a chapeau piece to the Departmental Performance Reports; it tries to combine the performance of all the departments and agencies to create a performance report for the government as a whole. Where DPRs are historically built mostly around what the department achieved, Canada's Performance Report gives a much more societal perspective on the results of government spending.

2007-2008 Departmental Performance Reports: http://www.tbs-sct.gc.ca/dpr-rmr/2007-2008/index-eng.asp

Canada's Performance Report 2007-08: http://www.tbs-sct.gc.ca/reports-rapports/cp-rc/2007-2008/cp-rctb-eng.asp

Wednesday, January 14, 2009

Updated MRRS Policy

There has been an update to the Policy on Management, Resources and Results Structures (MRRS). The updated policy took effect on December 20, 2008.

The updated policy is available here:
http://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=14252&section=text

The old policy is available here:
http://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=12412&section=text

Overall, I would say the update improves the policy. What I consider the core of the policy (the core requirements, which used to be under section 7.1 and are now under 6.1.1) hasn't changed in its essential meaning.

The update adds more responsibilities for Deputy Heads related to the implementation of the policy, keeping the MRRS up-to-date, following proper procedures for updates, etc.

The updated policy also has a new section, 7. Consequences, describing consequences for untimely or unsatisfactory implementation of the policy.

Wednesday, December 31, 2008

Aggregating Indicator Scores

To measure the performance of your organization in a certain area, you will typically use a set of indicators. These indicators may or may not cover the entire area you are trying to measure, may contain indicators of short-term or long-term progress, etc. Anyhow, you'll have a set of indicators that you have chosen to represent an area of management, a process, an activity, etc.

So indicators are a set of metrics. You may have something like this to measure client service:

Indicator                                                             Actual Value   Target
Percentage of pizzas delivered within 30 minutes                      90%            100%
Percentage of calls answered within 2 minutes of entering the queue   80%            100%


Now, to get an aggregate score for client service, you could just take the average of the 2 indicators, which would give you (90+80)/2=85. However, you may decide that the indicators don't all have the same importance, so they shouldn't all have the same weight. Let's say people hate waiting in a telephone queue, but won't notice if their pizza is 2 minutes late. In that case, the indicator for call wait time is more important, so we'll give it a weight of 70%, and we'll give a weight of 30% to the pizza delivery time. That would give us a score of (90*0.3)+(80*0.7)=27+56=83.
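For illustration, here is the same weighted calculation as a tiny Python sketch (the values and weights are the ones from the example):

# Weighted aggregation of the two client service indicators from the example.
indicators = [
    # (name, actual value in %, weight)
    ("Pizzas delivered within 30 minutes", 90, 0.3),
    ("Calls answered within 2 minutes",    80, 0.7),
]

aggregate_score = sum(value * weight for _, value, weight in indicators)
print(aggregate_score)  # 83.0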

A few notes on this:

Be careful of the units you use. In the example, we used 2 percentages with the same target, so we know they'll be fairly close and fairly comparable. But if you were measuring something like the number of units sold and the average call wait time in minutes, your units would be too different to be compared directly. What can you do? Use the target, and compare each result to its target. That will give you results in "percentage of target achieved", which can then be directly compared to one another. If you use that method, setting meaningful targets becomes essential if you want your aggregate indicator score to be meaningful and useful.
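Here is a sketch of that "percentage of target achieved" approach, with made-up figures. Note the assumption that, for indicators where lower is better (like call wait time), the ratio is inverted so that beating the target still scores above 100%; that is one possible convention, not the only one.

# Comparing unlike indicators by expressing each result as a percentage of
# its target. Figures are made up; the "lower is better" inversion is an
# assumption about how you might handle indicators like call wait time.

def percent_of_target(actual, target, lower_is_better=False):
    ratio = target / actual if lower_is_better else actual / target
    return ratio * 100

units_sold = percent_of_target(450, 500)                        # 90.0
call_wait  = percent_of_target(1.6, 2.0, lower_is_better=True)  # 125.0

aggregate = 0.5 * units_sold + 0.5 * call_wait
print(units_sold, call_wait, aggregate)  # 90.0 125.0 107.5

Notice that the call wait indicator scores above 100 because it beats its target, which ties into the next point.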

In the example, the weights used add up to 1. They don't necessarily have to, but having a score with an understandable maximum (100 in this case) makes it more intuitive. The resulting aggregate indicator score in the example is not in any particular unit: all we know is that its maximum is 100. There are times when, because of either your indicator or your target, your result may exceed 100. There is nothing wrong with that, but it highlights the importance of explaining how you go about measuring your performance and how your data should be interpreted.

Finally, defining weights is a tricky exercise, and some managers may abuse this system by assigning low weights to indicators on which they know they will perform poorly. Another aspect to consider is that you may want to assign low weights to indicators for which the results are not very reliable.