:
I would like to call the meeting to order at this time.
I want to welcome everyone here. Bienvenue à tous.
Colleagues, this meeting is called pursuant to the Standing Orders. We're dealing today with chapter 1, “Evaluating the Effectiveness of Programs”, from the fall 2009 report of the Auditor General of Canada.
We have a number of witnesses before the committee this morning.
From the Office of the Auditor General of Canada, of course, we have the Auditor General herself, Sheila Fraser. She is accompanied this morning by Assistant Auditor General Neil Maxwell and Principal Tom Wileman.
From the Treasury Board Secretariat, we have the secretary, Michelle d'Auray. She is accompanied by the assistant secretary, Mr. Alister Smith.
From the Department of Citizenship and Immigration, we have the deputy minister and accounting officer, Neil Yeates. He is accompanied by Elizabeth Ruddick, director general of research and evaluation.
Finally, from the Department of the Environment, we have the deputy minister and accounting officer, Ian Shugart. He is accompanied by William Blois, the associate director of the audit and evaluation branch.
On behalf of all members of the committee, I want to extend a very warm welcome to everyone.
We will now have opening statements.
Ms. Fraser, you're first. You have up to five minutes.
We thank you for this opportunity to present the results of an audit included in our November 2009 report on evaluating the effectiveness of programs in the federal government.
As you mentioned, I'm accompanied today by Neil Maxwell, Assistant Auditor General, and Tom Wileman, principal, who were responsible for this audit.
I would like to point out to the committee that the work for this audit was substantially completed on May 31, 2009.
Effectiveness evaluation is an established tool that uses systematic research methods to assess the extent to which a program is achieving its objectives. Over the past 40 years, the federal government has made repeated efforts to establish the evaluation of effectiveness as an essential part of program management.
One of the most important benefits of effectiveness evaluation is to help departments and agencies improve their programs. Departments also need to be able to demonstrate to Parliament and taxpayers that they are delivering results for Canadians with the money entrusted to them. Sound information on program effectiveness is particularly important in light of the recent budgetary measures to contain administrative costs and review government operations.
[Translation]
In this audit, we examined how evaluation units in six departments identified and responded to increasing needs for effectiveness evaluation. We also looked at whether they had built the required capacity to respond to those needs. In addition, we looked at the oversight and support role of the Treasury Board of Canada Secretariat in monitoring and improving the program evaluation function in the government, particularly with respect to the evaluation of program effectiveness. The period covered by our audit was 2004 to 2009.
Overall, we found that the six departments were not sufficiently meeting the needs for effectiveness evaluation. Annual coverage of departmental expenses by evaluation was low, ranging from 5% to 13% across the six departments.
In addition, the effective rate of coverage was even less because many of the evaluations we reviewed did not adequately evaluate effectiveness. Of the 23 evaluation reports we reviewed, 17 noted that the analysis was hampered by inadequate data, which limited the evaluation of program effectiveness. This lack of performance data is a longstanding problem as noted in my office's earlier audits of the evaluation function.
With respect to the six departments' capacity to meet the needs for effectiveness evaluation, department officials told us that they have not been able to hire enough experienced evaluation staff and have used contractors extensively to meet requirements.
[English]
Of the audited departments, Environment Canada had internal processes to systematically identify areas for improvement in effectiveness evaluation. For example, Environment Canada solicits client feedback through post-evaluation surveys. Such processes ensure that departments are following the management cycle for continuous improvement.
The situation with respect to program evaluation in the federal government is not unlike that of internal audit before the policy on internal audit was implemented. Lessons learned from the government's recent strengthening of internal audit could be applied usefully to program evaluation.
We believe that implementation of the new requirement to evaluate all direct program spending faces serious challenges. Earlier requirements for full coverage were never met, and current legal requirements for effectiveness evaluation of all grant and contribution programs have been difficult to meet. Department officials told us that they have concerns about their capacity to implement the expanded coverage requirement under the new policy on evaluation.
In our view, it will be important for the secretariat and departments to carry out effectiveness evaluation of programs that are susceptible to significant change because of shifting priorities and circumstances. These are programs where evaluations of relevance, impact, and the achievement of objectives can be put to best use. During the transition to full coverage, these programs may present the biggest opportunities for effectiveness evaluation. Taken together, the increasing needs for effectiveness evaluation and departments' limited capacity to meet those needs pose a serious challenge to the function. Concerted efforts by both the secretariat and departments will be needed to meet this challenge.
[Translation]
The secretariat and the audited departments have agreed with all of our recommendations. In several cases, they have made commitments for remedial action in their responses. In line with the committee's request for action plans and timelines, the committee may wish to explore the progress made to date in addressing the issues raised in the chapter.
Mr. Chair, this concludes my opening remarks and we would be pleased to answer the committee's questions.
Good morning.
[Translation]
Good morning. Thank you for this opportunity to speak about the evaluation function in the Government of Canada. As you mentioned, I'm here with my colleague Mr. Alister Smith, Assistant Secretary of the Expenditure Management Sector. Mr. Smith oversees the secretariat's Centre of Excellence for Evaluation, which is responsible for evaluation policies and works very closely with the government evaluation community.
As Ms. Fraser mentioned, evaluation is a longstanding management tool that is vital to the sound management of public spending. It involves the systematic collection and analysis of evidence on the outcomes of programs. This invaluable information is used to make judgments about the relevance, performance and value for money of programs. It is also used to examine alternative ways to deliver programs or achieve the same results.
Finally, it supports policy and program improvement, expenditure management, cabinet decision-making, and public reporting to Parliament and Canadians.
[English]
Given the increasingly important role evaluation plays in support of program effectiveness and expenditure management, we are in full agreement with the recommendations contained in the Auditor General's report. They mirror in large part what the Treasury Board Secretariat has learned through extensive consultations and monitoring activities. And they are reflected in and addressed by the actions we have taken as part of the implementation of the new policy on evaluation that was issued in April 2009. Unfortunately, the scope and timing of the Auditor General's report did not allow for recognition of these improvements, as it focused on the period up to the introduction of the new policy.
Our action plan in response to the Auditor General's report, which we provided to the committee, outlines what the secretariat has undertaken and delivered since the report's publication and what we will continue to do. Allow me to highlight some of our actions.
[Translation]
One of the Auditor General's concerns was that evaluations were not adequately assessing effectiveness, and that they lacked performance information. This concern has now been addressed under the new policy, which sets a clear standard for evaluation quality, as well as responsibilities for performance measurement. It also requires that all evaluations examine program effectiveness.
The new policy also requires each departmental head of evaluation to prepare an annual report to their deputy head on the state of performance measurement in their organization. This report will assist the deputy head in ensuring that the key data needs of program evaluations are met.
Finally, the policy has also expanded the evaluation coverage requirements to cover all direct program spending over a five-year cycle, after an initial transition period.
[English]
The Auditor General also recommended that the Treasury Board Secretariat do more to monitor and support departments to help them identify priorities for improvement. This is addressed under the new policy, which calls on the Treasury Board Secretariat to provide functional leadership for evaluation across the government. This includes regular monitoring and annual reporting to Treasury Board on the health of the evaluation function. Our first report will be issued before the end of 2010-11.
Much of our monitoring and support work is carried out through the annual management accountability framework assessment process, which assesses evaluation quality, neutrality, coverage, and use. It is also carried out through the advice and guidance we provide to departments on performance measurement frameworks, which are required under the management, resources, and results structure policy.
The secretariat has also allocated resources to our centre of excellence for evaluation to strengthen the evaluation expertise we provide to departments.
[Translation]
We appreciate that the new policy represents some important changes for departments, as the Auditor General noted in calling on the secretariat to help departments prepare to implement the new coverage requirements. This is why there will be a four-year phase-in period before departments are required to meet the comprehensive coverage requirement in their five-year evaluation plans, beginning with the 2013-2014 to 2017-2018 planning period.
[English]
I will turn now to the support we have provided to departments during the transition period, largely through the secretariat's centre of excellence for evaluation. For example, in November 2009 we issued a draft guide to developing departmental evaluation plans, which will be finalized and issued this summer. This provides guidance to departments with regard to evaluation timing, coverage, prioritization, and instruments.
We also issued, in November 2009, a draft guide to developing performance measurement strategies to support heads of evaluation in assessing the department's performance measures. This too will be finalized this fall, after integrating feedback and recommendations from departments.
We also set up, in June 2009, the evaluation community of practice with a website for exchanges of best practices. And we have held regular meetings to guide the capacity development of the evaluation community.
In addition, we provided preliminary guidance to departments on the possible merits of including external experts on departmental evaluation committees. The final document will be integrated this fall into a guide on the evaluation function, which will set out the secretariat's expectations in relation to the evaluation policy and directive.
We recently led a post-secondary recruitment initiative for graduates with evaluation-related backgrounds. This led to the establishment of two pools of pre-qualified evaluators at the entry and intermediate levels. We also continue to work with universities and the Canada School of Public Service to promote and develop the types of evaluation skills and knowledge we need.
[Translation]
All these improvements have addressed the Auditor General's concerns over the quality, capacity and program coverage of the evaluation function in the government.
In sum, much remains to be done, and the federal government has attempted many times before to improve the evaluation function. I am nevertheless of the opinion that with the new policy on evaluation, the guidelines and guides, and especially through our interactions with deputy ministers and evaluators, we are laying the foundation for a stronger, more competent and more productive evaluation function in the Government of Canada, one that will ensure better expenditure management.
Thank you.
:
Good morning, Mr. Chairman, ladies and gentlemen.
I'm Neil Yeates, Deputy Minister of Citizenship and Immigration, as the chair has noted. I'm joined by Elizabeth Ruddick, who is the director general of research and evaluation at CIC.
[Translation]
I would like to thank the committee for inviting me back to speak today. Today I will focus my brief remarks on chapter 1 of the Auditor General's report, and afterwards, we will be happy to answer any questions you have.
With its focus on results and accountability, the government has emphasized the importance of the evaluation function in assessing the effectiveness of federal policies, programs and services.
Our Evaluation Division leads the evaluation function, and has developed an action plan, tabled here today, to respond to the Auditor General's recommendations.
[English]
I'd like to highlight progress we've made over the five-year period between 2004 and 2009. CIC initiated significant changes to the evaluation function with the creation of the research and evaluation branch and the establishment of an evaluation committee, as well as the implementation of a formal evaluation policy. Funding for the function increased from about $650,000 in 2004-05 to about $2 million in 2008-09. That's along with an increase in the number of professional staff, from three full-time equivalents in that initial year to 13 in 2008-09.
In the past the focus was on evaluating grants and contributions to meet the requirements of the Treasury Board and the Federal Accountability Act. This growth in resources has allowed us to increase the coverage of departmental programs and the rigour and quality of the evaluations themselves. With more and better studies, the results and conclusions of CIC evaluations are increasingly used by senior managers to inform program and policy decisions. As a result, under the management accountability framework, the evaluation function--initially assessed as unacceptable in 2006--reached a rating of acceptable in 2008, showing steady improvement over a relatively short period of time.
The department agrees with the OAG's findings, Mr. Chairman. We have developed an action plan that includes renewing the comprehensive departmental performance measurement framework and program activity architecture, and further integrating the framework into our business planning process. This will improve the availability of performance information for evaluations.
CIC is also adding an external evaluation expert to the departmental evaluation committee. We are currently identifying potential candidates and developing terms of reference for such an expert.
A process is now also under way for soliciting client feedback at the end of evaluations. We are finalizing an internal client survey and we will carry out the survey this year. The survey will be administered to senior managers of programs that have recently been evaluated, as well as to members of the evaluation committee.
Mr. Chairman, the Auditor General's report observed that my department's coverage of spending was low, particularly for grants and contributions. This is due largely to the renewal cycle of CIC's grants and contributions and the fact that about 88% are concentrated in only two programs. I'm happy to report that between fiscal year 2009-10 and this fiscal year we will have evaluated these two large programs, accounting for this large proportion of our grants and contributions budget. The other programs are much smaller in comparison, but all will be evaluated over the five-year cycle.
Recognizing the need for more comprehensive evaluations and a broader coverage, CIC has increased the non-salary evaluation budget by $500,000 this fiscal year, 2010-11. We will add an additional $500,000 in 2011-12, for a total non-salary budget of $1.5 million in 2010-11 and $2 million in 2011-12 and ongoing. As well, by the end of this fiscal year the FTE complement will reach 20 persons devoted to this function.
Mr. Chairman, over the past several years some evaluations have had to be postponed or rescheduled for various reasons, including a lack of available performance data, which has created challenges for completing those and other evaluations in a timely manner.
[Translation]
To avoid similar problems in the future, the Evaluation Division is working closely with CIC staff to develop more robust data collection strategies and tools, to ensure that data collected through our administrative systems will be available in the format required at the time any program is being evaluated.
These are just some of the ways we are working to address the Auditor General's concerns in a timely fashion. We are ready for your questions now.
Thank you very much.
:
Thank you, Mr. Chairman.
I want to say right at the outset that we at Environment Canada concur with the Auditor General regarding the valuable contribution that effectiveness evaluations can make to decision-making, and we support the recommendations she has made.
At Environment Canada the evaluation function is an important contributor to our decision-making. It provides an essential source of information on the relevance and performance of the department's programs and has an important role to play in managing for results.
[Translation]
We use evaluations to make decisions ranging from program improvement, such as how an existing program should modify its activities or processes to better meet its objectives, to determining whether there is an ongoing need for intervention or an appropriate role for the federal government before renewing programs. The function provides me with the information I need to demonstrate accountability for the use of public funds.
Environment Canada's evaluation function also plays an active role in the review of memoranda to cabinet, Treasury Board submissions, the department's performance measurement framework as well as individual programs' performance measurement strategies and plans. This ensures that program managers are considering, planning for and collecting performance information that can be used in future evaluations.
[English]
Our evaluation function has had continuous improvement processes in place for several years, and we continue to look for ways to improve the value of our evaluations in supporting the department. Some of this was noted in the Auditor General's report, and we were pleased to see that. We've accepted the two recommendations that the report addressed to Environment Canada. In particular, we agree with the recommendation to develop and implement an action plan to ensure that ongoing program performance information is collected to support effectiveness evaluation.
Past evaluations have included recommendations for improvement in the area of performance measurement. We think we're now starting to see improvements in the department with respect to an increasing number of programs that are developing and implementing performance measurement strategies, and we will continue to work hard at this, because it is critical.
Environment Canada also accepts the Auditor General's recommendation to consider the merits of including external members on departmental evaluation committees. We recently received preliminary guidance from the centre of excellence for evaluation on this issue, and we're pursuing that examination.
Finally, I'd like to speak briefly to the concerns identified in the report regarding the capacity of departments to evaluate all direct program spending every five years. This policy was put in place to increase coverage, and this is an important goal, one we agree with. It recognizes the benefit of maintaining a broader base of effectiveness information for departmental activity. It is important to acknowledge that there are challenges inherent in striving for greater accountability. Increased evaluation coverage, in order to have more information on program performance, has to be balanced against the need to focus on program delivery and the realization of results.
The four-year implementation period for the policy gives us, we believe, the time to adapt our approach and to expand the scope of evaluation activity realistically, within the context of departmental priorities, resources, and program requirements. To increase coverage within current funding levels, Environment Canada will adopt a flexible, risk-based approach to planning the scope, approach, and level of effort of each evaluation. In so doing, we will ensure that our evaluation resources are focused on the areas of the department where evaluation information is most needed.
Finally, where appropriate, we will conduct evaluations that take a broader perspective on some areas of the program activity architecture, as opposed to conducting individual evaluations of each program within the PAA element.
These are some of the changes taking place at Environment Canada, motivated both by the Auditor General's report and by the new policy on evaluation. We look forward to, and believe we will see, positive impacts on our evaluation program from both.
Thank you, Mr. Chairman.
:
There were problems in more than two-thirds of the evaluations you reviewed; the other six presumably produced results. What is even more worrisome is that this represents only 5% to 13% of programs per year. The rest aren't evaluated at all. So very few programs are evaluated, and of those that are, the evaluations are unsatisfactory in more than two-thirds of cases. One would hope that at least the others would lead to results. That's what your report states.
However, six departments—we heard this again this morning—increased their resources in this area. I did the math. If one looks at exhibit 1.6, on page 22 in the English version and page 27 in the French version, from 2004-2005 to 2008-2009 there was a 38% increase in funding for program evaluations. With respect to staff, exhibit 1.7, on page 23 in the English version and page 28 in the French version, shows a 54% increase in evaluation staff. And still we end up with this kind of result. Obviously, one has to ask why we have such unsatisfactory results despite an increase in resources.
How much more should resources be increased, given that only 5% to 13% of programs were evaluated each year, and often badly evaluated, when the goal is to evaluate them all over five years? Full coverage over five years means evaluating 20% of programs per year, and evaluating them satisfactorily, at a time when you are having difficulty recruiting competent staff, which is another aggravating factor. You have used contractors, but we don't know whether the contractors will stay long enough to provide any memory or experience to the various departments, and the Treasury Board itself appears to be completely overwhelmed. According to your report, there is not enough staff, and you have not provided sufficient leadership. How will we get there? The government is increasing its requirements, and you are not able to meet the ones you already had. The gap is quite glaring: 5% to 13% of programs evaluated, when the goal requires adequate evaluations of at least 20% per year.
:
Thank you, Mr. Chairman.
You have raised several questions. I will try to respond as fully as possible.
The points raised by the audit had in fact been identified by the Treasury Board Secretariat. That is why, over the period covered by the audit, we reviewed and redrafted the evaluation policy. The policy now deals with evaluation in terms of program performance. I should point out the importance of data collection: we put considerable effort into establishing data for measuring performance, because that was one of the deficiencies noted in Ms. Fraser's report.
We also provided for greater flexibility in the policy in terms of the type of evaluation and the scope of coverage, as well as a risk-based approach, in other words one based on the nature and importance of programs. As Ms. Fraser pointed out, if these are programs that are going to change, then departments must be encouraged to focus on those questions.
We also started recruiting staff, as I pointed out in my opening remarks. We have created two recruitment pools, at the entry and intermediate levels; there are approximately 1,500 individuals in these pools. The evaluation function itself comprises more than 500 individuals throughout government. Last year, evaluation coverage averaged 15% of programs, and for grants and contributions programs we have achieved more than 65% coverage.
I would say therefore that although not everything is perfect, we have made progress. We are now working in a very concrete and practical fashion with the evaluation community and with departments.
:
That is what I was wondering, because I remember meetings with Ms. Ruddick on data for the immigration system.
Here's my point. It seems to me that every time you come here, we're talking about inadequate data and so on. I cannot understand why, after 40 years of evaluating programs, information such as the criteria on which programs are to be evaluated has still not been determined or assessed, and why it has not been a priority. Yet today, in 2010, all of a sudden this is a priority because the Auditor General has provided the background. One can see that this has not been resolved over the years.
Furthermore, in paragraphs 1.96 to 1.100, the Auditor General points out that during her exchanges with the various departments, the departments expressed their concerns over their ability to evaluate programs. The report goes on to say that they were not able to undertake the required improvements on a regular basis.
I have a question. When you identify a weakness or a need to make an improvement in departments, why do you not act on those improvements? How does your decision-making process rank the importance of resolving problems as soon as they arise or as soon as they are identified?
:
Mr. Chairman, perhaps I'll begin by answering and then I'll ask my colleagues to expand on their own programs and activities.
First, with respect to data, we asked that departments work with their program managers, at the establishment or renewal of programs, to identify performance measures specific to those programs. We also requested, through our policy, that the head of evaluation in each department draft an annual report on the quality and capture of data.
Those are the measures that we identified. The Auditor General of course identified them as well, at about the same time as we did, and this is what triggered our changes to the evaluation policy, precisely to address those deficiencies. That is why we agree with the recommendations: our own reflections, and the consultations we undertook with departments that led to the new policy, pointed to the same deficiencies and observations raised by Ms. Fraser in the audit.
That being said, we are increasingly using evaluations in program reviews, especially in strategic reviews. The same applies to program renewal under the transfer payments policy, where we formally require effectiveness evaluations, because we must focus on program performance. We are much more demanding with respect to evaluations than we were previously.
I'm not saying everything will be resolved by tomorrow morning, but I do think we are on the right path. The connections have been made between performance measures, program activity architecture, and expenditure reviews, and the full cycle is now integrated.
:
I would say that it did. It led us to undertake a fairly comprehensive review of the evaluation policy that we renewed and redid. It caused us to look at the performance measurement information framework. It led us to have a fairly extensive series of conversations with deputy ministers about how they used evaluation. And it led to the fairly extensive work plan and very pragmatic approach we are taking today.
As I said earlier, it's not perfect, but the gaps were very similar. The approaches we're taking are to address the biggest gaps and make the linkages between expenditures, program effectiveness, improvements, and decisions about whether or not the programs should be maintained, changed, or improved.
So I don't want to say this is all a perfect situation, but in parallel, as we were looking at and assessing departments in their capacities--and the quality, scope, and coverage--we came to the same conclusion the Auditor General did.
Just to drill down one more step, did the analysts in the line ministries pick this up too? Did they report to their supervisors that they were unable, from a professional perspective, to deliver the kinds of quality evaluations they would like to?
I'm just checking to see if the system worked all the way through, or did it take the AG to come in and trigger everything--because we have both. We get into all kinds of situations, so I'm just trying to get a sense of how well our systems were working underneath. Right at the beginning, were the line ministry analysts able to determine they had a problem, it went up, and then it got bumped up further to the Treasury Board, where things started to happen? Then did the AG come in, see some of this, and then do her further work?
:
Primarily, Chair, it's because the Auditor General in her report refers to the challenge that staff and officials within departments have recognized in implementing the policy set out by the board. So I anticipated that this is an issue that is germane to this whole debate, and I simply wanted to refer to it in that way.
You're absolutely right, it is characteristic of the kind of thing that we have to do all the time, and I would say that this is an excellent example of the kind of continuous improvement that we try to achieve, and it's relevant to the issue of data. As a deputy minister, even before the audit, before the policy, before the requirement of 100% coverage of grants and contributions spending, I would receive an evaluation report, and typically that report would indicate that we were able to answer these questions because we had data, but not those questions because there was no performance data. So even within the context of a particular evaluation, it's not a complete lack of data that we have; sometimes the data exists, but it has not been developed by program managers to support evaluation per se.
For example, we might know what the coverage of a particular program is, but we might not have data that relates to service standards. An evaluation, for which we in the department decide what program will be evaluated and what the scope and nature of the evaluation will be, would identify relevant questions; there might be data for some of those questions but not for others. And evaluation staff, in their interactions with the AG and her team, have identified performance management as one of the areas where we need improvement.
In Environment Canada we recently redid our program activity architecture. We already have a performance management framework, but it is by no means complete. So within the department we continue to work on our performance management framework and all of the data development that will support that PAA, and this will result in more data being available for evaluations in the future.
The last comment I would make is that I as a deputy don't welcome and yet I do welcome the policy on 100% coverage. I don't welcome it because it's another thing I have to meet the requirement for; it's another pressure, another obligation. I do welcome it because it is the right thing to do, and because it will add a discipline to everything we do, both in program delivery and in evaluation. It will force us to develop the performance data, and so on. Will it be perfect in three or five years? No, it won't, because there will still be relevant questions that should be asked, and we may not have all of the data, but we will be improving, and I'm confident of that because we're already on a trajectory of improvement.
I would say that there are benefits to policy-making if an evaluation shows that a program is not performing well in some of its objectives. That is relevant for ourselves and for central agencies in the process of determining whether a program is renewed. It may also tell us that part of the program is working very well but another part isn't.
[Translation]
Sometimes this may be due to capacity at the federal level vis-à-vis the provincial level. For example, a province may be able to do better than the federal government with respect to one aspect of a program.
[English]
Another area of benefit is in the management of programs. We published an evaluation on environmental emergencies, and one of the details that came out of it had to do with the roles of our environmental emergency officers. On the ground, in the case of an emergency where we're providing support, it is very important to have a clear delineation between the responsibilities of environmental emergency officers and those of enforcement officers. The evaluation showed that this was not entirely clear. That allowed us to issue policy guidance and an actual statement of responsibilities for our environmental emergency officers, as opposed to our enforcement personnel, so that on the ground there is no confusion and there are no legal problems about who is authorized to do what. It's a small point, but it allowed us to improve the delivery of the program on the ground.
These are a couple of examples where evaluations actually do make a difference, sometimes in the continuation of the whole program, at other times in the improvement of its delivery.
We did recognize that gap, and we are working on many fronts to fill it. One is the work we are doing to recruit. We have also set guidelines on what the core competencies are. There has been some increased funding in departments. We've seen a growth in the number of evaluators available in the government. We're at just over 500 right now.
The Auditor General made an interesting comparison to the work that started with the audit community. We are following, I would say, a similar pattern in terms of identifying what the competencies are, identifying what the governance structure should be, and identifying the nature of the work of the evaluators and the evaluation function.
I think that in departments there is a growing recognition of the need to manage and oversee the evaluations themselves and to sometimes, as necessary, bring in contracting expertise, because not all the evaluators will necessarily have all the expertise necessary to cover the work.
I know that my colleague could speak from departmental experience about what that means, if that's possible.
I may be raising a bit of a red herring, but I want to look at the cost-benefit of this whole process. I can recall being in business once, when an audit cost me $20,000 in professional fees. Goodness knows what it costs government. I know from my audit that there was an $11 discrepancy.
I'm asking for an opinion on the following. We have programs being run by the best people we have available in this country, by all of our program directors and ADMs. We also have an Auditor General who is doing a great job, and we're very, very pleased to see that. But we have auditors auditing the auditors of the department, who audit the internal auditors, who audit the evaluators. We're running down through....
I'm just wondering about the horrendous cost involved. Is the benefit of all of this comparable to the cost, or are we really just building another multi-levelled or layered bureaucracy? We have 350,000 civil servants in this country employed by the federal government alone. Is it necessary to audit the auditors, and to audit the other auditors who audit the auditors who audit the evaluators?
On my point, could you again just make a quick observation, Madam Fraser?
:
Thank you very much, Mr. Chair.
There's one last area I wanted to follow up on. The Auditor General's report, on page 22, starting at paragraph 1.52, discusses the shortage of experienced program evaluators in the federal government:
The shortage of experienced program evaluators is a long-standing concern. It has been noted in past Office of the Auditor General audits and in diagnostic studies by the Treasury Board of Canada Secretariat, and it was the subject of recent discussion within the federal evaluation community. A 2005 report by the Secretariat Centre of Excellence for Evaluation stated that “[t]he scarcity of evaluation personnel is probably the number one issue facing Heads of Evaluation.”
Two paragraphs later, it reads: “According to officials in the six departments, despite these”--Mr. Dion raised these earlier--“increases in both funding and staff, it remains a challenge to find experienced evaluators, particularly at the senior levels. In their view, the shortage of experienced evaluators has affected their ability to hire the people they need.”
This does get interesting: “For example, in one collective staffing process, the pool of experienced evaluators was depleted before the demand was met. They also indicated that the shortage of experienced evaluators has led to evaluators being hired away by other federal evaluation units.”
That may work well from a micro point of view, but from a macro perspective it solves nothing. It's a legitimate concern, and we face it. I was thinking, Madame Fraser, that we went through something similar with the Canada Revenue Agency; there it was analysts with expertise in international investment income. As a result of not having the experts, and through no fault of anyone, there were likely untold amounts of money not coming in. I see this as the same kind of problem.
Can you collectively give me a sense of how we're tackling this? Are we speaking with educational institutions and provinces about trying to make sure we're developing them?
There's my very long-winded question.
:
Thank you for those questions.
It is a challenge to find experienced evaluators. Clearly we're building up this community rapidly. We're building it up largely through intake from universities.
We have a consortium of universities working with us to offer evaluation courses and help with certification. They are from right across the country, from Carleton and Ottawa universities here through Quebec's Université Laval and l'Université de Montréal to the University of Saskatchewan, the University of Victoria, the University of Waterloo. It's right across the country. We're also working with the Canadian Evaluation Society in developing our standards.
We have a competency standard in circulation now that I think is quite good. We have a lot of courses being offered. We have a lot of support from the Canada School of Public Service. We're doing everything we can to build up this community.
It's true that when you start to get evaluators in one department, they could be poached by another department. We will keep trying to build up the community. In the end, as long as those evaluators remain within government, we're better off than we were without having them in government.
:
Mr. Chairman, if I may, I would add that one of the responsibilities I have as a deputy minister is to give full support to the evaluation group. Nothing motivates a team of public servants more than knowing their work matters and they have a platform for making a contribution.
I suspect that every deputy minister could say what I'd say about my own team. They are a very impressive group of young people and more mature people who enjoy their work.
As a manager, you have to get used to losing good people. I don't see my responsibility as being restricted to only Environment Canada, because it is a collective pool, but I certainly see my responsibility as meeting the obligations of the policy to hang on to and recruit as good a team as possible.
The data shows the numbers have increased. We use our people well, in conjunction with contractors. We've been known to recruit contractors when they have done a good job.
I would say that it is a terrific career. In an evaluation unit, there is probably no better way to learn what's going on in a department than to get right in there, from the vantage point of evaluation, work with the managers, and find things out.
These people become very good policy and program people and their careers often take a turn from evaluation into other areas. Frankly, we lose some people from evaluation because of that, but we recruit from other areas of the public service as a result. It's a very complicated situation.
:
Mr. Chairman, I offered it as an example of the type of question that might come up in an evaluation that could be relevant to performance.
It should be the case that when a program is developed and adopted, at the outset, the appropriate federal and provincial analysis is done. In principle, the Government of Canada shouldn't engage in programs for which it does not have a jurisdictional responsibility, but there are of course many shared jurisdictions in Canada.
Certainly in my field, our work with the provinces is absolutely critical. An evaluation can expose factors of the relationship that can be as simple as how well things are working. A smaller province may not have the capacity that a larger province has and it may rely on us to carry a bigger load. It may not be the case for all programs, but it could be the case for some programs. That type of thing could come to the surface in an evaluation.
On the other hand, there may be deeper questions. Over time, provinces may not have had the capacity or the interest, or it could be that the federal government has not had the capacity in a particular area. Something that was true 20 years ago for an older program may not be quite the case today.
Those are the types of adjustments to programs that sometimes involve the federal-provincial relationship. An evaluation can be very useful in analyzing and bringing that to the attention of senior managers.
:
That was an estimate. In the department there was a 100% increase in our capacity.
[English]
We did increase from about four evaluators in 2004-05 to where we have 12 or 12.5 FTEs in evaluations today. So in that period of time we more than doubled our evaluation capacity.
Going forward, maintaining generally the same level of resources to achieve 100% coverage over time is probably fair. With the increased supply of evaluators that we hope will be available, we think that is achievable. But it is absolutely true that we will have to manage this carefully and focus our evaluation resources on those areas where the greatest value from evaluation can be achieved.
As I mentioned, we may be flexible in how we implement this policy. We may cover a broad area of programs in one evaluation rather than each individual program, which would be an efficient way of doing it. And if we make wise selections and have good evaluation plans, then we will be able to achieve 100% coverage in a very efficient way. I would look to the professionals in the evaluation branch to help me with evaluation plans that are efficient, as well as meeting the goal.
So it will be a challenge, but it is a necessity, and we will proceed as best we can.
:
The management accountability framework was established in 2003 to assess departments' overall management performance across a broad range of areas. Evaluation is one of the 19 areas of management that we currently assess.
For example, we look at departments' capacity in terms of audit, financial management, and people management. We also look at the areas around performance management, and we assess the nature of the Treasury Board submissions that departments send to us.
A lot of these areas, including evaluation, are what we consider core to the management functions and to good management performance in departments and organizations. We assess against performance indicators; some of the measures are what we would consider objective, and some are self-assessments that departments and organizations provide to us.
We report back to departments. In fact we have published on our website the results of the management assessments. We work with departments on establishing what would be the management priorities for the coming year.
In some instances, for example, performance around evaluation has not been at the highest standard. Where we have identified that in a department, it becomes a priority for improvement and then a management priority for that organization for a given year, focusing its attention on improving its management capacity in that particular area, including, for example, evaluation.