
INST Committee Meeting

Notices of Meeting include information about the subject matter to be examined by the committee and date, time and place of the meeting, as well as a list of any witnesses scheduled to appear. The Evidence is the edited and revised transcript of what is said before a committee. The Minutes of Proceedings are the official record of the business conducted by the committee at a sitting.




37th PARLIAMENT, 1st SESSION

Standing Committee on Industry, Science and Technology


EVIDENCE

CONTENTS

Tuesday, June 4, 2002




À 1000
    The Chair (Mr. Walt Lastewka (St. Catharines, Lib.))
    Mr. Ronald Kostoff (Individual Presentation)

À 1005

À 1010

À 1015

À 1020

À 1025
    The Chair
    Mr. Rajotte
    Mr. Ronald Kostoff
    Mr. Rajotte
    Mr. Ronald Kostoff

À 1030
    Mr. James Rajotte
    Mr. Ronald Kostoff
    Mr. Rajotte
    Mr. Ronald Kostoff

À 1035
    Mr. Rajotte
    The Chair
    Mr. Larry Bagnell (Yukon, Lib.)
    Mr. Ronald Kostoff
    Mr. Larry Bagnell
    Mr. Ronald Kostoff

À 1040
    Mr. Larry Bagnell
    Mr. Ronald Kostoff
    Mr. Larry Bagnell
    Mr. Ronald Kostoff

À 1045
    The Chair
    Mr. Ronald Kostoff

À 1050
    Mr. Stéphane Bergeron
    Mr. Ronald Kostoff
    Mr. Stéphane Bergeron
    Mr. Ronald Kostoff
    Mr. Stéphane Bergeron
    Mr. Ronald Kostoff
    Mr. Stéphane Bergeron

À 1055
    Mr. Ronald Kostoff
    Mr. Stéphane Bergeron
    Mr. Ronald Kostoff
    The Chair
    Mr. Brent St. Denis (Algoma—Manitoulin, Lib.)

Á 1100
    Mr. Ronald Kostoff

Á 1105
    Mr. Brent St. Denis
    Mr. Ronald Kostoff

Á 1110
    Mr. Brent St. Denis
    The Chair
    Mrs. Cheryl Gallant (Renfrew—Nipissing—Pembroke, Canadian Alliance)
    Mr. Ronald Kostoff
    Mrs. Cheryl Gallant
    Mr. Ronald Kostoff

Á 1115
    Mrs. Cheryl Gallant

Á 1120
    The Chair
    Mr. Ronald Kostoff
    Mrs. Cheryl Gallant
    Mr. Ronald Kostoff

Á 1125
    The Chair
    Mr. Ronald Kostoff
    The Chair










CANADA

Standing Committee on Industry, Science and Technology


NUMBER 088  |  1st SESSION  |  37th PARLIAMENT

EVIDENCE

Tuesday, June 4, 2002

[Recorded by Electronic Apparatus]

À  (1000)

[English]

    The Chair (Mr. Walt Lastewka (St. Catharines, Lib.)): Order, please.

    On behalf of the Standing Committee on Industry, Science and Technology, I would like to thank you for taking the time to be with us this morning to share your experiences and your thoughts on peer review. I understand we've sent you a number of questions, and I'm sure the committee members will also have a number of questions.

    To bring you up to date, we have the members on the government side and we have members of the opposition from four different parties. When we're working on industry, science, and technology, we seem to be able to work more closely together for the good of science. We might have some differences from time to time, but hopefully, we stay on the topic of peer review.

    Mr. Kostoff, perhaps you would begin with your presentation, then we'll go to questions.


    Mr. Ronald Kostoff (Individual Presentation): Mr. Chairman and members of the committee, I appreciate the invitation to testify before your committee.

    As agreed with your staff, I am testifying as a private citizen and not as a representative of any federal agency of the United States government. The following presentation will summarize my views on the use of peer review for allocating research funds. It has three main presentation sections. The first is a brief biography, as requested by Dr. Acharya. The second is a very brief response to seven questions posed by Dr. Acharya. The third is a very brief discussion on my principles of high-quality peer review. I have provided a written copy of the presentation. I will only go over highlights of the written copy.

    I received a PhD in aerospace and mechanical sciences from Princeton in 1967. I was at Bell Laboratories for nine years. I performed technical studies in support of the NASA headquarters and economic and financial studies in support of AT&T headquarters. The next eight years were spent at the United States Department of Energy. I managed the nuclear technology development division, the diffusion systems studies program, and an advanced technology program covering all areas of energy production. Also during the time at the Department of Energy I conducted a number of large-scale peer review evaluations and assessments.

    I have been at the Office of Naval Research since 1983. I was director of technical assessment for a decade. For most of the 1980s I was responsible for the selection, resource allocation, and periodic review of accelerated research initiatives. These were large five-year multidisciplinary programs that constituted about 40% of ONR's budget at that time.

    In 1997 I established a new effort in textual data mining, which is the extraction of useful information from texts. The purpose was to improve the utilization of the global technical literature in the full science and technology development cycle. In October 2000 I gave the keynote presentation at the TTCP International Technology Watch Partnership conference in Farnborough. I identified the technological roadblocks that must be overcome before global technology watch can be implemented successfully.

    Now I'll switch to responding to Dr. Acharya's questions.

    First, is interdisciplinary research treated appropriately by the peer review process and the present system for allocating research funds? There are many problems associated with the selection, conduct, management, and review of interdisciplinary research. Only one of these is the use of peer review in the research selection and review processes. A recent paper addresses this broader issue of interdisciplinary research. I included the paper along with the e-mail transmission of my presentation. As the paper shows, there are far more disincentives for interdisciplinary research than incentives, but peer review can treat interdisciplinary research appropriately, if care is taken to include representatives from each of the constituent research disciplines as evaluators.

À  (1005)

    Should the granting agencies have programs directed to building research capacity at small institutions, so that their researchers can better compete in competitions for research grants with their large university counterparts? The U.S. sponsors some research programs that are directed towards supporting a number of under-represented entities, under-represented institutions, under-represented states, etc. The aims of these programs are to increase participation of the under-represented entities in the research enterprise. These goals are intrinsically political goals. Whether the funding for these types of programs leads to high-quality research as well remains to be seen. I personally have never examined data that rigorously compared the performance of these subsidized groups and institutions to the mainline institutions, so I really cannot comment further on the technical desirability of instituting such subsidies.

    Are enough funds directed to research in target areas of national importance? How is targeted funding dealt with by the U.S.? Most agencies tend to have advisory boards of prominent scientists and mission area specialists. These groups identify and prioritize strategic areas of national importance, usually in concert with agency management. The priorities are implemented through the establishment of special programs, if they're required, or through guidance provided to the proposed community. Some members of the U.S. research community have expressed their concern that strategic targets may be overly constrictive for the truly fundamental research, but there is not consensus on this particular issue.

    Are the outputs, outcomes, and impacts of federally funded research adequately measured and reported? For most agencies, performance metrics are grossly under-reported. Again, there are few incentives for reporting performance and many disincentives. The problems with reporting outcomes and impacts are of a different nature from the reporting of outputs. Because of the long-term nature of outcomes and impacts, they present different problems associated with data tracking and time. For science and technology, tracking output data over long time periods is very difficult. The other outcome problem associated with time derives from the observation that most research requires years or even decades before larger-scale outcomes or impacts can be realized. By that time the managers and performers who conducted the research may be long gone. What would be the practical use of such outcome data, especially in affecting the managers and the performers?

À  (1010)

    One witness before the committee argued that the peer review system is untested. Is this true? Are there any viable alternatives to peer review for allocating research funds? Untested may be an overly strong word. There has not been much effort relating the scores and recommendations of peer review proposal evaluators to long-range quality and impacts of programs funded. Most agencies have a prospective focus, not a retrospective focus. In addition, there is not consensus on the metrics for success. Quantitative metrics can have multiple interpretations and are subject to gaming. If peer review is also used to evaluate downstream quality and impact, one subjective method is being used to evaluate the efficacy of another subjective method. After years of experience using many hundreds of peer reviews in program selection, management, and evaluation, I am comfortable that peer review is a very useful aid to decision-making if its results are used properly and not just followed blindly.

    As to alternatives, I have The Handbook of Research Impact Assessment, written in 1997. In part it addresses alternatives to peer review. Two of the leading alternatives will now be summarized. The first is bicameral review.

    There's an association of Canadian scientists who have promulgated bicameral review. In this approach, grant applications are divided into a major retrospective part, which is the track record of the proposers, and a minor prospective part, that is, the work proposed, and they are reviewed separately. Only the retrospective part is subjected to peer review. The prospective part is subjected to in-house review by the agency solely with respect to budget justification. Funding is allocated on a sliding scale, replacing existing sharp fund/no-fund cutoffs. As the merit rating of a project decreases down the funding scale, the fraction of requested funds awarded decreases as well.
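The sliding-scale mechanism just described can be sketched in a few lines of code. This is a hypothetical illustration only: the bicameral-review proposal specifies neither the thresholds nor the exact shape of the scale, so the linear decline and the parameter values below are assumptions.

```python
def sliding_scale_award(merit, requested, full_threshold=0.8, floor=0.3):
    """Award a fraction of the requested funds based on a merit rating
    in [0, 1], replacing a sharp fund/no-fund cutoff.

    Proposals rated at or above full_threshold receive their full request;
    those rated below floor receive nothing; in between, the awarded
    fraction declines linearly with merit. All parameter values here
    are illustrative, not taken from the bicameral-review proposal.
    """
    if merit >= full_threshold:
        return requested
    if merit < floor:
        return 0.0
    return requested * (merit - floor) / (full_threshold - floor)
```

Under these assumed parameters, a proposal rated 0.55 would receive half of its request rather than being rejected outright at a sharp cutoff.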

    The other leading alternative, circa 1997, is a productivity-based formula. There the philosophy is that past success is the best predictor of future performance. This alternative proposes that researchers be funded essentially on their track records, and it provides an algorithm for allocating funds.

    These two alternatives place heavy emphasis on awards to established researchers with strong track records, although they differ in how the track records would be determined. Both minimize the use of true technical experts in the evaluation of the prospective portion of the proposed research. My bottom line is that while peer review has its imperfections and limitations, there is little evidence that the best researchers and ideas are going without funding, and far less evidence that the alternatives described above would improve the situation.

    What are the problems associated with peer review? What types of improvements could be made to the peer review process? The commonly acknowledged problems with peer review are those addressed in the next section by my principles of high-quality peer review. These include the bias and objectivity of the reviewers, the competence of the reviewers, normalization when the process compares very different disciplines, and reliability: would the substitution of another peer review panel for the initial one provide the same or similar results? Improvements in the process would be centred around closer adherence to the principles in the next section.
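The reliability question raised here, whether a substitute panel would reproduce the initial panel's results, is often examined by rank-correlating two panels' scores for the same set of proposals. A minimal sketch in plain Python, using Spearman's rank correlation on hypothetical score data (the panel scores below are invented for illustration):

```python
def rankdata(xs):
    """Assign ranks 1..n to the values in xs, averaging ranks for ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for a tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    ra, rb = rankdata(a), rankdata(b)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

# Two panels scoring the same six proposals (illustrative numbers):
panel_a = [8.5, 6.0, 9.1, 4.2, 7.3, 5.5]
panel_b = [8.0, 7.0, 8.8, 5.0, 6.5, 5.2]
```

A correlation near 1 would suggest the two panels ordered the proposals similarly; values well below 1 would lend weight to the reliability concern.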

    From my present perspective, probably the most serious problem with peer review now is how it treats high-risk research. Despite the many federal agency pronouncements on the importance of supporting high-risk, high-payoff research, in reality, there are few incentives and motivations for promoting truly high-risk research, and there are many disincentives. Program managers rarely, if ever, are rewarded for the failures characteristic of high-risk research. Use of committees for performing peer review, especially the large committees characteristic of many of the funding agencies, intrinsically leads to conservative judgments. Provision of incentives for funding high-risk projects, with their associated potential for high payoff, would be a major step forward.

À  (1015)

    Finally, should there be better and more regular external and/or internal evaluation of agency programs and practices? There needs to be a balance between review frequency and review cost-effectiveness. Since the passage of the Government Performance and Results Act there has been increasing pressure for agency performance evaluation with greater inclusion of metrics to supplement research peer review. At some point the sheer time and effort burden of preparing for and participating in reviews becomes counterproductive. I believe more thought and effort needs to be given to performing fewer research evaluations, including peer review, but performing them correctly.

    Now I will switch to the final section, the principles of high-quality peer review, and I will only go over a couple of the principles. There are two major components to high-quality peer review. First, it should be an integral part of the strategic management process. Second, each of its procedures and the elements should be of high quality. In this section I will describe the specific requirements for each component to be high quality.

    First come the implementation-related problems. There are three major implementation-related problems with any of the management decision aids, of which peer review is only one, both as they are implemented in practice and as they are described in the published literature. These problems are that the management support techniques tend to be treated as add-ons, the management support techniques tend to be treated independently, and there is a major mismatch between the developers of the management support techniques and the users of these techniques. I go into more supporting detail about these implementation problems in appendix 1 of the written submission.

    I'll now talk about a couple of the elements and procedures.

    Element number four is the role, the objectivity, and the competence of technical experts in any science and technology evaluation. Each of the experts should be technically competent in his or her subject area. The competence of the total evaluation team should cover the science and technology critically related to the science and technology area of present interest and the disciplines and technologies that have the potential to affect the overall evaluation's highest level goals and objectives. Therefore, an appropriately balanced team will assess both what I call the job-right aspects of the research (are the technical details being addressed properly?) and the right-job aspects (have the right programs been selected to address the higher-level objectives?).

    There is another aspect to ensuring that the panel contains adequate people to judge whether the main goals and objectives are being met. Many of the peer review evaluations I have conducted or in which I have participated, and this is across a number of federal government agencies, have examined large programs in addition to individual projects. Most of the presentations tend to focus on the technical details of the approach, with a very small amount, if any, on the investment strategy. However, for competitive research programs that draw on a wide pool of performers, there are relatively few criticisms of the specific details of the technical approach selected. That has been my experience for most of these reviews. Most of the performers know the correct equations to choose and the best techniques to use to solve the equations. Most of the performers know the best experimental equipment to be used and how to use it appropriately. But the major weakness that invariably occurs in almost every presentation I've heard in a program assessment or evaluation is in how the investment strategy is presented.

À  (1020)

    A full and credible exposition of the investment strategy should include both the tabulation of the allocated resources, where the money is being spent in the different programs, and more importantly, the rationale behind the priorities that established the allocation distribution. The presentations I have heard that address investment strategy, for the most part, tend to be heavy on the tabulations and light on the rationale.

    The last of the elements is global data awareness. This is the understanding of science and technology projects, developments, systems, operations, or events around the globe, and whether they are, in any way, supportive of, related to, or affected by the S and T programs under review. In other words, if you want to know the context of the S and T program you are reviewing, you need to understand what is happening globally in science and technology.

    At present, there are very serious deficiencies in obtaining global data awareness from the global technical literature in particular, and there are very serious deficiencies in the use of the technical literature in the full science and technology development cycle in general. These deficiencies stem from deficiencies in the literature databases and in the use made of these databases by the technical community. Because the databases involve the international community, any major progress must eventually involve that community. The reasons for these deficiencies are presented in more detail in appendix 2. The discussion contained in appendix 2 should be of particular interest to Canada, as Canada is an integral component of the TTCP's International Technology Watch Partnership, whose goal is to improve global data awareness.

    This concludes my presentation. I'm open to any questions you may have.

À  (1025)

    The Chair: Thank you very much. We really appreciate your presentation.

    When we ask questions, we normally go from the opposition to the government side. So we'll be going back and forth.

    I'd like to start off with the vice-chairman of the opposition, Mr. James Rajotte.


    Mr. James Rajotte (Edmonton Southwest, Canadian Alliance): Thank you, Mr. Chairman.

    Thank you, Dr. Kostoff, for appearing before us today. It was an excellent and very substantive presentation. I certainly want to commend you for it.

    You mentioned during your presentation that for peer review to be effective, the results must be used properly and not be followed blindly. I want you to expand on this and indicate how we should use these peer review processes properly and not follow them blindly.


    Mr. Ronald Kostoff: I believe peer review should be used to support the management decision process, not replace the management decision process. I have seen agencies where the peer review results were paramount and, to some degree, substituted for management decision-making. Scores were averaged and became the final metric. I have used peer review in the past, particularly when I was at the Department of Energy and running a number of large programs, but I always used the peer review results as inputs. I always felt that it was my responsibility, as the program manager, to make the final decision. There were times I would overrule even the consensus of the peer review, but I used the peer reviews for insights, rather than as hard recommendations that must be followed blindly. I guess that was the sense in which I made the statement in my presentation.


    Mr. James Rajotte: One of the concerns you mentioned about peer review was the question of bias. Suppose you, as a program director, felt that a consensus developed through the peer review process was not the right decision. How would you combat any perception of bias on your part?


    Mr. Ronald Kostoff: There are a number of ways of addressing bias, both on the part of the evaluators and on the part of the program manager. With the evaluators, I can tell you what I used to do, in fact, what a number of organizations do. During the selection process for reviewers they will have essentially a checklist of different ways in which the reviewer could be in conflict with the proposer or the program being proposed. The fact that a candidate reviewer checks off one or more of the boxes, in other words, has some sort of conflict, does not necessarily mean that person would be excluded from the panel. It depends on the seriousness, or the perceived seriousness, of the conflict. There are agencies, the national academies, for example, that perform peer review, and after the candidate evaluator passes this one hurdle and is selected for the panel, at the beginning of the panel meeting, the panel will meet in executive session. Each of the panel members will describe his or her conflicts as completely as they can, so at least every member of the panel knows the potential conflicts of every other member of the panel, and they take this into account in the discussion and in any final consensus that occurs.

    You raise an interesting question about the bias of the program manager. I have never personally experienced a problem with that. There were times, especially when I rejected a proposal, the proposer disagreed with my decision. That gets into the issue of an appeal process. What sort of appeal process exists? What sort of appeal process should be established? It's very difficult for any person to recognize their own bias. Everybody believes they are unbiased and it's the rest of the world that has the problem. That, I think, is why it is important to have some sort of appeal process. I had a couple of cases where people went to the next level of management and appealed my decision. The management, in those cases, supported me on the decision. I know there have been larger criticisms of the lack of appeal process in a number of organizations, and there have been different proposals for establishing such appeal processes as exist in the legal or medical professions.

À  (1030)

    Mr. James Rajotte: Thank you, Dr. Kostoff.

    I do want to make a point; I don't know if it's a question. I want to thank you very much for identifying alternatives to peer review. I thought your analysis here was excellent. I think you are right that the alternatives that are there, the bicameral review and the productivity-based formula, are similar to peer review. They don't differ as much as one might imagine. And you do seem to say peer review is the best of these three alternatives.


    Mr. Ronald Kostoff: I believe, when you look at them in detail, as I point out in the written submission, there's not as much difference as one might think, because the two alternatives place heavy emphasis on track record. When you look at peer review and what people believe is very important in the final score, in many of the cases we have examined team quality comes out to be the strongest criterion evaluators use in determining a bottom line score. So to that extent, both the alternatives and the mainline peer review do dovetail.

    If you are further interested in the alternatives, as I pointed out in the paper, there are a number of Canadian scientists in this organization, in particular Donald Forsdyke and Alex Berezin, and it might be useful to have them testify before your committee.


    Mr. James Rajotte: In this study we are certainly focusing on peer review. We are also talking about research and development in Canada in general. I'd like to pose a couple of the issues to you. Perhaps it's not specific to your presentation, but I'd certainly like your expertise on this.

    One of the issues we're discussing is the difference in funding between the natural sciences, what are commonly called the hard sciences, and the social sciences and humanities. The social sciences and humanities tend to get less proportionally and say they should be getting more in relation to the hard sciences. Is this an issue in the United States? How do you deal with it there?


    Mr. Ronald Kostoff: It is an issue. I personally have never had involvement with it. I guess I'm not really the best person to ask about it. It's not only the social sciences. It's really any aspect of research that does not seem to have an immediate, or even a long-term, commercial payoff. There are elements, for example, in the study of cosmology and the study of the outer planets that NASA conducts. What will be the commercial payoff? Really, the payoff is in increased understanding of the world around us. How much funding should be allocated to it? It seems to me it becomes almost an intuitive issue more than something that can be determined by quantitative analysis. The last three organizations I've worked in have all been mission-oriented. Whenever I've looked at the desirability of research, it's always related to various missions, comparing the importance of the missions, comparing the importance of different research proposals to each of the missions, and doing some sort of quantitative analysis based on the approach.

À  (1035)

    Mr. James Rajotte: Thank you.

    In Canada we have national agencies, like the National Research Council. We also fund research and development through regional development agencies. We tie it into the development of various regions. Does the United States fund research and development through regional development agencies, or do they simply do it through national agencies?


    Mr. Ronald Kostoff: It's an issue I'm really not familiar with. I'm familiar with the national program, I am not all that familiar with what is being done regionally. There are special programs some of the agencies might have. It gets to the question about the special entities. There may be some under-represented states, for example, and they can apply for special types of funds. If there are larger programs targeted to regions, I'm really not familiar with them.


    The Chair: Thank you, Mr. Rajotte.

    Mr. Bagnell.


    Mr. Larry Bagnell (Yukon, Lib.): Thank you.

    I like your idea about incentives for risk. One of the major problems of peer review is the fact that people are thinking inside the box: if something so dramatic had been a good idea, they would have done it or thought of it. The great innovations in history, great experiments, great discoveries would probably never have been made through peer review. I like the idea of risk. I disagree perhaps with the reason for conservativeness in the group. I think it's for the reason I said, as opposed to the fact that it's a large group. I think studies on decision-making have shown that people in groups actually make more risky decisions than individuals.

    Can you comment on it?


    Mr. Ronald Kostoff: I haven't seen that. I have seen that when a group of people get together, they reach a consensus towards the mean, not towards the very risky. My whole experience with it has been that the riskiest research will be supported by, typically, a single program manager who has a lot of flexibility in making his or her own decisions and is willing to take some chances. I have not seen committees, especially large committees, able and willing to support very risky work. I think it is the wrong way to go if you really want to support high-risk, high-payoff research.


    Mr. Larry Bagnell: What kinds of incentives are available in the United States for riskier research? Or is it up to the program manager to bring in the risk?


    Mr. Ronald Kostoff: In a sense, it's really up to the program manager. A lot of agencies, at least in their charters, support high-risk, high-payoff research. They talk a lot about it. In some real sense, that is one of the major roles of government-supported research. In my view, one thing government does, especially in today's economy, that industry, on average, does not do is support programs with a very high degree of risk. Industry tends to be risk-averse. If there's a role for government in partnership with industry, it is working at the front end to remove a large amount of risk that industry by itself would not take. This is one of the real problems when government agencies are not funding the high-risk work. They are basically doing what industry should be doing.

    In my view, there ought to be a clear demarcation between the role of government in research and the role of industry. I think one strong metric for this demarcation is the level of risk. Again, it's nice on paper and it's nice to make the pronouncements, but risk, especially high risk, means a large number of programs that are initiated will fail to meet their stated objectives. It doesn't mean the programs are worthless, because they may go down other paths, they may come up with useful information. They may fail, in whatever sense of the word, but at least some new information will have been obtained.

    The problem is, when you have large numbers of your programs fail, what program managers are going to be rewarded? Because that's what the program management, in a sense, gets down to. What types of rewards are going to accrue to the program managers? If a program manager has a number of failures, they typically do not get rewards for that. It's really the successes that give people the rewards. This is the real problem. I don't really have an answer to it, other than hiring people who, basically, are willing to take these risks and to accept the failures. It becomes a very personal issue. I don't see how one can, in a sense, legislate that organizations should be taking risk.

À  (1040)

    Mr. Larry Bagnell: I think the problem is even worse than you said. If you take it one step further to the political level, all these failures, by supporting risk, will obviously become political targets against whoever happens to be the government at the time: look at this ridiculous project you've funded. I don't know how you would suggest that be dealt with.


    Mr. Ronald Kostoff: That really depends on the types of people in the oversight organizations in government and whether or not they have political motivations or they're interested in advancing science and technology. That is a problem. The larger the program, with high risks, the larger the failure and the more visible it becomes. And it becomes a larger target. That is a major problem.


    Mr. Larry Bagnell: I come from a rural area, and we have smaller institutions that are less likely, therefore, to get funds. Are there any programs in the United States that assure that smaller, less capable, or less sophisticated colleges have access to research funds?

+-

    Mr. Ronald Kostoff: Yes. As I mentioned, there are a number of programs geared to supporting under-represented entities. I had a list with me and I left it back at my office. There's one the Department of Defense has; I believe it's called DEPSCoR. It focuses on states that are under-represented in the national research effort. Institutions from these states can then make proposals for the funds allocated in this program. There are other programs as well, including programs aimed toward minority institutions. They do have programs to help out these particular institutions.

    One interesting thing has happened because of the Internet revolution. Whereas before, especially in a place like Canada, where you have some very isolated territories, the researchers might have been truly isolated from the mainline institutions, perhaps in Toronto or Quebec, now, with the Internet connections, a lot of research people can work almost as though they are working in situ at the larger institution. I think the geographic remoteness of a lot of these more isolated institutions can be overcome somewhat by the better connections through the Internet.

À  +-(1045)  

+-

    The Chair: Thank you very much, Mr. Bagnell.

    Mr. Bergeron.

[Translation]

+-

    Mr. Stéphane Bergeron (Verchères--Les Patriotes, BQ): Thank you, Mr. Chair.

    Thank you very much, Mr. Kostoff, for having accepted our invitation. You have provided us with a great deal of information that will certainly give us food for thought on this issue. I must say that I was quite impressed by the very methodical fashion in which you answered the questions put by our researcher. We really appreciate it.

    That being said, I would like to return to the difference between the pure and natural sciences and the human and social sciences. There is in Canada a disproportion between the funding of research in the pure and natural sciences and the funding of research in the human and social sciences. That disproportion can be explained in a number of ways, including the fact that the cost of research in the pure and natural sciences is higher than that of research in the human and social sciences. But another explanation might be that there is a kind of judgment call on the part of society in general regarding the value of the pure and natural sciences as opposed to that of the human and social sciences. So there is in Canada a disproportion in funding, a disproportion that we are trying to deal with in some way.

    Is there a similar disproportion in the United States between the funding of research in pure and natural sciences and the funding of research in human and social sciences?

[English]

+-

    Mr. Ronald Kostoff: This is an issue I personally have not studied. I assume there are many more papers being generated in the natural sciences than in the social sciences. For the last five years I have been working in this area of text mining, and I go through a number of large databases on a regular basis. Among the databases I use regularly are the Science Citation Index and the Social Sciences Citation Index. They basically cover the major journals in the natural sciences and the social sciences respectively. Typically, the ratio of papers is maybe two or three to one in favour of the natural science journals. I assume that basically reflects the larger discrepancy in funding.

    The natural sciences, to a large extent but not completely, support the economy and the national defence, so a lot of commercial activity drives the need for research in these particular areas. A lot of the social sciences, and especially the humanities, don't have such motivations. They have motivations of personal satisfaction, personal knowledge, and social knowledge, but they don't have the same commercial and national security interests that drive the physical sciences. So I think there is a substantial discrepancy, and there will be as long as there are commercial and national defence drivers for this research.

À  +-(1050)  

[Translation]

+-

    Mr. Stéphane Bergeron: Regarding the financial support of smaller teaching and research institutions, there are two ways of dealing directly or indirectly with this problem. The first approach, which seems to be that of the United States, consists of creating special programs to fund research in smaller universities. In my view, the problem with this approach is that it does not necessarily focus on excellence, but rather on supporting a few well-targeted institutions.

    Regarding the other approach that we have been considering to support research, I come back to the issue of under-funding in the human and social sciences. We have found that, in a great many cases, smaller universities tend to specialize in human and social sciences research programs. Now, if we were to decide the allocation of funding for the human and social sciences at the national level, we could, through the back door, manage to give more support to research efforts in these smaller universities.

    In your view, is that a positive or relevant method to ensure that smaller universities receive proper funding?

[English]

+-

    Mr. Ronald Kostoff: So you're proposing to subsidize the social sciences in the small institutions as a way of helping them out?

[Translation]

+-

    Mr. Stéphane Bergeron: Not exactly. In fact, the idea is simply to give more funding to the human and social sciences, which, indirectly, should have the effect of allowing small teaching institutions, which generally specialize in the human and social sciences, to have easier access to funding. They could thus develop as research institutions.

[English]

+-

    Mr. Ronald Kostoff: That means you fundamentally have to make the decision that you want to increase the balance of social to natural sciences support. If you make that decision, you are saying, because the smaller institutions tend to have smaller facilities for doing natural science work, they would probably get more funding if more funding became available for the social sciences. That's probably a reasonable way of doing it, if you want to put more funding into social sciences. That's really a decision the legislature has to make. That is one way of getting more funding into these smaller institutions.

[Translation]

+-

    Mr. Stéphane Bergeron: Is this being done in the United States? Is this kind of trick being used to support smaller institutions in the United States?

[English]

+-

    Mr. Ronald Kostoff: Again, I am the wrong person to ask about support for social sciences. I'm really not familiar with it.

[Translation]

+-

    Mr. Stéphane Bergeron: I understand. Perhaps I could go on to another question.

    At the level of granting institutions in Canada, we have chosen not to fund the indirect costs of research. Do you believe that it is important for granting agencies also to fund the indirect costs associated with research? In the United States, do granting organizations at the government level fund part or all of the indirect costs associated with research?

À  +-(1055)  

[English]

+-

    Mr. Ronald Kostoff: As far as I understand, the indirect costs are part of the proposed costs. So when an award is made, it is made for the total, which includes whatever overhead costs are ascribed to the particular research. I don't think the councils really get into that.

[Translation]

+-

    Mr. Stéphane Bergeron: Just like Canada, the United States is a federal state made up of federated states; it is not a unitary state. How is cooperation organized between the various levels of government in the United States in the area of research and teaching? How is cooperation established at the level of research funding and priorities, for example? That issue is obviously of interest to us. We would like to have some knowledge of what is being done abroad, in other federal states, with regard to cooperation between the federal and state authorities in the area of research.

[English]

+-

    Mr. Ronald Kostoff: There are many different types of links, not just one particular one. There are special programs, for example the MURIs, the multidisciplinary university research initiatives. They tend to be multi-discipline, multi-institution types of programs. There are special dollars set aside for them and special competitions. Principal investigators from different institutions are free to make whatever linkages and joint proposals they feel are most profitable. So when I was running programs, I would have a number of joint proposals coming in from different organizations where people had basically established collaborations. Each organization complemented the strengths of the other collaborating organizations, and we funded the full project. So I think a lot of the linkages percolate up from the bottom, and they're not necessarily established by the government.

    In my introduction, when I talked about my background in ONR, I mentioned these accelerated research initiatives. These were large five-year multi-million dollar programs. They basically spanned a number of disciplines. They typically would involve a number of different organizations that had come together because they needed these multiple disciplines, which no one group intrinsically had, in order to address the problem. So that was another example of where the institutions, from the most elementary level, joined forces so they could address the problem completely with the different players.

    So to answer your question, there is a combination of formal programs that bring together different institutions, as well as different institutions who join up with members of other institutions on their own initiative to propose relatively large-scale programs.

+-

    The Chair: Thank you very much, Mr. Bergeron.

    Mr. St. Denis.

+-

    Mr. Brent St. Denis (Algoma—Manitoulin, Lib.): Thank you, Mr. Chair, and thank you very much, Dr. Kostoff, for taking this time with us.

    The peer review community, the people who help our governments, yours, ours, and elsewhere, to make the best choices on what projects should qualify for funds and move forward, can you talk a bit about that community as you understand it? Is it a community of people who work in isolation? I am going to assume none of my colleagues at the table here has ever been a peer reviewer; I know I haven't been. Do they work in isolation? Do they meet in conferences? Is there a standard that a peer reviewer must meet? Do you apply to be a reviewer? I assume you get paid to review by the paper, by the amount of the project funding requested. Can you talk a bit about the peer review community? Because this whole system is built upon the quality and nature of that community.

    The peer review community, the people who help our governments, yours, ours, and elsewhere, to make the best choices on what projects should qualify for funds and move forward, can you talk a bit about that community as you understand it? Is it a community of people who work in isolation? I am going to assume none of my colleagues at the table here has ever been a peer reviewer; I know I haven't been. Do they work in isolation? Do they meet in conferences? Is there a standard that a peer reviewer must meet? Do you apply to be a reviewer? I assume you get paid to review, perhaps by the paper or by the amount of the project funding requested. Can you talk a bit about the peer review community? Because this whole system is built upon the quality and nature of that community.
Á  +-(1100)  

+-

    Mr. Ronald Kostoff: There are different types of peer review and different applications. There is peer review of proposals, there is peer review of faculty people for tenure, there's peer review of dissertations, there's peer review of programs, and peer review of projects. There are different types of members of the community. There are the people who manage the peer reviews, and there are a number of organizations set up, the most prominent of which in the United States is the National Academy of Sciences. It's really the National Research Council, which is the administrative arm of the National Academy of Sciences. They have people who run various boards and commissions, which bring together panels. They perform reviews, typically of programs, sometimes of large-scale proposals.

    I don't believe the people are paid for their time. For example, in our particular department, we have an annual review that's conducted through the National Research Council and its naval studies board. They will assemble a group of experts and they will pay the experts for their travel costs and their per diem, but not their time. I used to run peer reviews at ONR and in the Department of Energy. I personally never paid a reviewer for their time. I paid them a per diem and I paid travel costs.

    So there are the people and the organizations that manage these reviews. There are people who do the studies on peer review, and one was a previous witness of yours, Fiona Woods. I've done a little bit myself. For the most part, they tend to be academics, and they will do studies of the efficacy of peer review and suggest improvements.

    The final group, which is the largest group, is the actual evaluators themselves. These people, for the most part, let's say for natural science reviews, tend to be technical experts in the technical area being reviewed. Are they a given community? They are well-known experts in their community. They are people who have stature in the community. That's where the “peer” comes from: they have a certain standing in their part of the technical community. There are different ways to define what a peer is. If you look in the dictionary, a peer is a person who has equal standing with another person. So a peer review is a review by people who have equal standing with other people. That doesn't necessarily mean they are in the same detailed technical discipline.

    When I conduct peer reviews of given technologies, I will use as evaluators not only experts in those technologies, but people who may be operational people, people who are expert in advanced technology development. If I'm looking at research, for example, I will bring in technologists as well as other researchers. Typically, I will bring in various groups of people who, in one way or another, will eventually be affected by that research. The way I interpret “peer” is that they're all of equal standing in their respective communities.

    There are a number of organizations that take a much narrower interpretation of “peer”. To them, a peer is a person who has detailed technical knowledge equal to that of the proposer. That's a very narrow interpretation, and the problem with bringing in just those types of peers, as I mentioned in my presentation, is that there is the job-right aspect of a review and the right-job aspect. If you bring in only these people who have technical background equal to that of the proposer, they will be very good at addressing the job-right aspect of the proposal, but they may not be the best people for addressing the right-job aspect, because they're too focused on that particular detail or that particular approach and may not be looking at the different approaches that are required to address the broader objectives.

Á  +-(1105)  

    That's a long way of answering your question. The peers are really well-known people in the technical community who can provide value-added in commenting on these proposals or programs.

+-

    Mr. Brent St. Denis: Thank you for that. This is maybe a strange comparison, but I'm thinking of the figure skating issue we saw at the Olympics. The figure skating judging is based on a volunteer system, and they're talking about moving to a paid system to try to remove some of the problems that were evident, at least as reported in the media. It may be that there are no problems within the peer review system, that what is more or less a volunteer system is working fairly well. I gather it must be working fairly well.

+-

    Mr. Ronald Kostoff: That's an interesting point. I did not really address the point of compensation. I haven't really seen a major problem in compensation in program reviews. For whatever reason, people, at least at the headquarters level, are willing to come and review for no salary. Some of our laboratories will conduct peer reviews. They will give people a certain stipend. They may give them a few hundred dollars for appearing for a day.

    When you talk about peer review of manuscripts, as far as I know--and I serve as a peer reviewer for a number of journals--there is no compensation for that. In some sense, at least hypothetically, there could be a problem associated with the lack of compensation. If somebody submits a paper for publication and it goes for peer review, you have a person who may have spent hundreds or thousands of hours of their own time writing that paper. They are intimately familiar with every detail. The peer reviewers, on the other hand, may spend an hour, or a few hours a day over a couple of days, doing the evaluation. You have a major imbalance in the amount of effort expended on generating this paper versus reviewing it, yet a negative review will counteract all the positive work that has gone into it, based on much less time being expended in thinking about the problems.

    One thing that compensation, at least for a manuscript review, would do is force the reviewers, hopefully, to spend more time on the paper. One of the problems you have in this whole business of review is that all the organizations, whether they're journals or federal agencies, want to get the best people to do the reviews. When they write up a report, they like to show they have brought in the top people. Consequently, you get a non-linear effect. You get a relatively small number of people being bombarded with requests from journals and federal agencies to participate in all these reviews. The good people spend a lot of their time turning down all sorts of requests, and when they participate, their time will be very limited.

    By their nature, what you are getting on the reviews is a relatively small amount of the time of good people. That's because you're not compensating these people. What compensation would give you is having the good people spend some more time on the review to make the time they're expending more compatible with the time the proposer or author has spent in generating the proposal or paper. There is that problem, which is really not addressed in the literature I've seen.

Á  +-(1110)  

+-

    Mr. Brent St. Denis: Thank you very much, sir.

+-

    The Chair: Thank you, Mr. St. Denis.

    Ms. Gallant.

+-

    Mrs. Cheryl Gallant (Renfrew—Nipissing—Pembroke, Canadian Alliance): When you were talking about funding to small institutions, you mentioned that the goals are intrinsically political. Would you please elaborate on that?

+-

    Mr. Ronald Kostoff: When your goals are purely technical, you really don't need any special programs or subsidies. The open competition itself will lead to the best technical results. Whenever I've seen any of these programs for special types of institutions or groups, they are typically subsidized, and somebody has made a political decision that these groups require more funding to get a better match with the mainline groups or institutions. So that was my interpretation of political, something other than a pure technical-based open competition.

+-

    Mrs. Cheryl Gallant: In talking about reporting the performance, you mentioned that there are many disincentives for doing that. Would you be able to itemize a few of those?

+-

    Mr. Ronald Kostoff: With high-risk research, and even with less risky research, there will be many instances where the original research objectives were not met. Some oversight organizations could view this as failure. In addition, bibliometric studies that both our organization and other organizations have done have shown that the seminal research is produced by relatively few performers, and that is independent of whether the metric is the number of papers you produce, the number of patents, the number of citations, or whatever. Especially for outputs, which are the quantification of the near-term products, why would organizations be motivated to show the concentration of productivity in a relatively small number of performers?

    When you get to outcomes and impacts, things a lot of people now seem to be focusing on, you get a different type of problem. The scope of outcomes and impacts is broader than that of outputs. They tend to aggregate over individual performers, and they lower the productivity differences among performers to some extent. Because of the long-term nature of outcomes and impacts, they present different problems that are associated with data tracking and with time. This is a critical problem that we face almost on a regular basis.

    For science and technology, tracking output data over long periods of time is difficult. When you do research, it gets conducted in a given organization. It evolves into technology development. That may be conducted in another organization. It may be sponsored by another sponsor. That will proceed and will be transformed into engineering development. It keeps going like that to eventual application. What you have is multiple performers over time. You have multiple organizations and multiple sponsors. Each of these stages, especially when you get to the higher developmental levels, is not documented very well. If you consider documentation that is widely available to a wide audience, it's almost non-existent. In addition, what I call the technical heritage in any of the documentation that does exist, the references to previous work, tends to emphasize the contributions of the documenting organization and to minimize the contributions of the external organizations. The point is, it is very difficult to track research that was sponsored and performed by organizations initially into eventual applications.

    There have been studies that have attempted to look at long-range outcomes resulting from science and technology. I list a few of them here: Project Hindsight, Project TRACES, and a study we did with the DoE in 1983. What we have found is that the main tool used to track these long-range outcomes is corporate memory. You find people who have been at a laboratory for 30 or 40 years. They were there at the genesis of a particular research or technology project and, for reasons of personal interest, have followed it through. They then become the tool that's used to track the evolution to the application. That's a very incomplete process, because it's based on only those people with corporate memory. It's a very skewed process and very incomplete.

    What you really need is to have some sort of database that collects the outputs at each of these developmental stages and is widely available. Until you have such a database, it's going to be almost impossible to track these outcomes.

Á  +-(1115)  

    There's another equally serious problem. It's associated with time. You will probably have people testify before your committee who will keep talking about how important outcomes are, as opposed to outputs. In some sense, that's true. In a sense, the outcome is the impact on the larger societal goal, whereas the output is really an intermediary. You don't fund research because you will get so many papers as a result; the papers are a means to the end. You do it because you want to improve health in a region, safety, transportation, or whatever. The problem is that most research requires years, even decades, before the larger-scale outcomes or impacts will be realized. By that time the managers and performers who conducted the research may be long gone. Then one has to ask what the practical use is of those data, especially in affecting the managers and the performers. Usually, there's a reason you do a review. It is done to correct the problems that exist in conducting the research. This includes the performance of both the performer and the manager. The long time delay that characterizes the measurement of the outcome obviates any real-time performance correction. So this lack of utility of the outcome analysis for improving an organization's operations is a major disincentive for performing such studies.

    These are really what I thought were the disincentives.

+-

    Mrs. Cheryl Gallant: Right.

    I was very interested in hearing that you used to work with the Department of Energy. I have just a couple of questions. My first has to do with fusion and the next one with fission. As you know, it will probably be 50 years before we have even a prototype for the fusion project. How does the U.S. government justify to the public the great expenditure that is required in this field? Is a formula applied to the allocation of funding, given that a payoff may not come for another hundred years, or is a percentage of GDP committed? How do the President and his cabinet arrive at a decision to fund this sort of project?

Á  +-(1120)  

+-

    The Chair: Ms. Gallant, I want to ensure that we respect the fact that Dr. Kostoff is here representing himself as an individual. I just wanted to put that back on the record.

+-

    Mr. Ronald Kostoff: I worked in both the controlled fusion program and in the fission program. Why did I come to the fusion program? I think that's a very important aspect of it. In 1973 there was a major energy crisis in the United States, probably in Canada as well. I don't remember the reasons for it, but there was a shortage of gasoline for a number of months. There were tremendous gas lines, and a lot of people, including myself, started to get concerned about the finiteness of the fossil energy supply. We looked for alternatives that could be supported. At the time I thought fusion had great potential as an alternative, and it still may, one never knows. It was not clear at the time how long it would take to develop fusion into a workable commercial energy source. I had done some papers on fusion. In 1954, Professor Lyman Spitzer of Princeton proposed to the Atomic Energy Commission a project called the stellarator, saying in five years they would have a demonstration, a prototype, of commercial fusion power. That was 48 years ago, and we still have to get to that point.

    The point with something like fusion is that if it works, the payoff will absolutely swamp any of the development costs, even though the development costs have been substantial. The amount of money that is put in has increased and decreased over time. As I remember, in the very early seventies it was $100 million or less per year. When I came to the program, it was of the order of $300 million to $350 million a year. It eventually rose to something like $500 million or so a year. I haven't kept track of it, but it dropped down to a few hundred million dollars a year. So it has cycled, and one of the problems tends to be that it is a long-term program and the support from the legislature waxes and wanes. But if it can be made to work, it does offer a very attractive alternative.

    There is no formula I know of. When the legislature takes a look at the budget, at the energy supply, how dependent we are on the Middle East for oil, and how many real alternatives there are, concepts like fusion are in the category of very high risk and very high payoff. I think it's the promise or the potential of the payoff that keeps the funding going.

+-

    Mrs. Cheryl Gallant: The benefits of research into improving the efficiency of energy generated through nuclear fission are much more readily obtained. To what extent does the U.S. still continue to fund the hard science behind nuclear fission, perhaps for next-generation reactors? Are they still involved, and if they are, do they work through the Department of Defense? How does it work?

+-

    Mr. Ronald Kostoff: I am not familiar with what they are doing today. They are funded through the Department of Energy. When I last tracked it in the mid-eighties, they were funding advanced nuclear fission concepts, and at that time one of the real motivators was safety. The problems at Three Mile Island had made a number of people concerned about fission, and there was work done on alternatives to the light water reactor and on variants of the light water reactor that would be intrinsically safer. That was the real emphasis when I left, and I suspect it's probably not all that much different today, that safety is still a major concern, as well as the cleanup of the nuclear waste. That's a problem that still, in my view, has not been resolved, and there are a lot of political problems associated with the resolution, as well as technical problems.

Á  -(1125)  

+-

    The Chair: We got a little off topic there, but it was very valuable information.

    Dr. Kostoff, we're slowly getting to the end of our time. Before we get cut off, I want to thank you very much on behalf of the industry, science, and technology committee for taking the time this morning to be with us and for giving us an insight on peer review and sharing with us all your experience. If there are any remarks you'd like to make, please do so.

+-

    Mr. Ronald Kostoff: I have no remarks. I appreciate, again, the invitation to testify. It was a very enlightening experience for me.

-

    The Chair: So, Dr. Kostoff, thank you very much for being with us today and have a good day. Goodbye.

    Okay, that ends the video portion of this meeting. We will now continue the rest of the meeting in camera.

    [Editor's Note: Proceedings continue in camera]