






Standing Committee on Industry and Technology


NUMBER 101 • 1st SESSION • 44th PARLIAMENT

EVIDENCE

Tuesday, December 5, 2023

[Recorded by Electronic Apparatus]

  (1550)  

[Translation]

    I call the meeting to order.
    Good afternoon, everyone, and welcome to meeting No. 101 of the House of Commons Standing Committee on Industry and Technology.
    Today’s meeting is taking place in a hybrid format, pursuant to the Standing Orders.
    I’d like to welcome our witnesses today. Mr. Jean-François Gagné, an AI strategic advisor, will be given an opportunity to give his opening address when he joins us a little later. We also have with us Ms. Erica Ifill, a journalist and founder of the podcast Not in My Colour, and, from AlayaCare, its founder and chief executive officer, Mr. Adrian Schauer.

[English]

     I want to thank you, Mr. Schauer, for making yourself available again today. I know we had some technical difficulties before, but the headset looks fine this afternoon. Thanks for being here again.
    Thank you, Madam Clerk, for the help, as well.
    We have, from AltaML Inc., Nicole Janssen, co-founder and chief executive officer; and from Gladstone AI, we have Jérémie Harris.

[Translation]

    And last, we will have Jennifer Quaid, associate professor and vice-dean of research, civil law section, Faculty of Law, University of Ottawa, along with Céline Castets-Renard, full professor of law, Faculty of Civil Law, University of Ottawa.
    As we have several witnesses, we will begin the discussion immediately. Each of you will have five minutes for an opening statement. Mr. Gagné, please begin.

[English]

     Madame Ifill, the floor is yours.
    Good afternoon to the industry and technology committee, to their assistants and to whoever may be in the room.
    I am here today to talk about part 3 of Bill C-27, an act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts. Part 3 is the Artificial Intelligence and Data Act.
    First, there are some issues and challenges with this bill, especially with respect to its societal and public effects.
    Number one, when this bill was crafted, there was very little public oversight. There were no public consultations, and there are no publicly accessible records accounting for how these meetings were conducted by the government's AI advisory council, nor which points were raised.
    Public consultations are important, as they allow a variety of stakeholders to exchange and develop innovative policy that reflects the needs and concerns of affected communities. As I raised in The Globe and Mail, the lack of meaningful public consultation, especially with Black, indigenous, people of colour, trans and non-binary, economically disadvantaged, disabled and other equity-deserving populations, is echoed by AIDA's failure to acknowledge AI's characteristic of systemic bias, including racism, sexism and heteronormativity.
    The second problem with AIDA is the need for proper public oversight.
    The proposed artificial intelligence and data commissioner is set to be a senior public servant designated by the Minister of Innovation, Science and Industry and, therefore, is not independent of the minister and cannot make independent public-facing decisions. Moreover, at the discretion of the minister, the commissioner may be delegated the “power, duty” and “function” to administer and enforce AIDA. In other words, the commissioner is not afforded the powers to enforce AIDA in an independent manner, as their powers depend on the minister's discretion.
    Number three is the human rights aspect of AIDA.
    First of all, how it defines “harm” is so specific, siloed and individualized that the legislation is effectively toothless. According to this bill:
harm means
(a) physical or psychological harm to an individual;
(b) damage to an individual's property; or
(c) economic loss to an individual.
     That's quite inadequate when talking about systemic harm that goes beyond the individual and affects some communities. I wrote the following in The Globe and Mail:
“While on the surface, the bill seems to include provisions for mitigating harm,” [as said by] Dr. Sava Saheli Singh, a research fellow in surveillance, society and technology at the University of Ottawa's Centre for Law, Technology and Society, “[that] language focuses [only] on individual harm. We must recognize the potential harms to broader populations, especially marginalized populations who have been shown to be negatively affected disproportionately by these kinds of...systems.”
    Racial bias is also a problem for artificial intelligence systems, especially those used in the criminal justice system, and racial bias is one of the greatest risks.
    A 2019 federal study in the United States showed that Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.

  (1555)  

     A study from the U.K. showed that the facial recognition technology the study tested performed the worst when recognizing Black faces, especially Black women's faces. These surveillance activities raise major human rights concerns when there is evidence that Black people are already disproportionately criminalized and targeted by the police. Facial recognition technology also disproportionately affects Black and indigenous protesters in many ways.
    From a privacy perspective, algorithmic systems raise issues from the moment they are constructed, because building them requires collecting and processing vast amounts of personal information, which can be highly invasive. The reidentification of anonymized information, which can occur through the triangulation of data points collected or processed by algorithmic systems, is another prominent privacy risk.
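[Illustrative sketch, not part of the testimony: a minimal Python example of the triangulation risk just described. All records, names and fields are invented; the linkage pattern follows well-known re-identification results in the privacy literature.]
```python
# Hypothetical illustration: re-identifying "anonymized" records by
# triangulating quasi-identifiers shared with a public dataset.
# Every record here is invented.

health_records = [  # names stripped, quasi-identifiers retained
    {"postal_code": "K1A 0A6", "birth_date": "1984-03-12", "sex": "F",
     "diagnosis": "diabetes"},
    {"postal_code": "K2P 1L4", "birth_date": "1990-07-02", "sex": "M",
     "diagnosis": "depression"},
]

public_records = [  # e.g., a voter roll with the same quasi-identifiers
    {"name": "A. Example", "postal_code": "K1A 0A6",
     "birth_date": "1984-03-12", "sex": "F"},
]

QUASI_IDENTIFIERS = ("postal_code", "birth_date", "sex")

def reidentify(anonymized, public):
    """Link anonymized rows to named people whenever the combination
    of quasi-identifiers is unique in the public dataset."""
    index = {}
    for person in public:
        key = tuple(person[q] for q in QUASI_IDENTIFIERS)
        index.setdefault(key, []).append(person["name"])
    for row in anonymized:
        names = index.get(tuple(row[q] for q in QUASI_IDENTIFIERS), [])
        if len(names) == 1:  # a unique match re-identifies the record
            print(f"{names[0]} -> {row['diagnosis']}")

reidentify(health_records, public_records)  # prints: A. Example -> diabetes
```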
    There are deleterious impacts or risks stemming from the use of technology concerning people's financial situations or physical and/or psychological well-being. The primary issue here is that a significant amount and type of personal information can be gathered that is used to surveil and socially sort, or profile, individuals and communities, as well as forecast and influence their behaviour. Predictive policing does this.
    In conclusion, algorithmic systems can also be used in the public sector context to assess a person's ability to receive social services, such as welfare or humanitarian aid, which can result in discriminatory impacts on the basis of socio-economic status, geographic location, as well as other data points analyzed.
    Thank you very much.

[Translation]

    Mr. Schauer, please begin.

[English]

    I think this will be an interesting perspective side-by-side with Erica's.
    I'm the founder and CEO of AlayaCare, a home care software company. We deliver our solutions both to private sector providers and to public sector health authorities.
    In the machine learning domain, we have all sorts of risk models we deliver. One of the things you can imagine our ultimately building up to is a model that, on the basis of an assessment and patient data, will help determine, at a population health level, where the health system's resources get optimally allocated. In that use case, it's definitely a high-impact system.
    I really like two things about the framework in this bill. One is that you're looking to adhere to international standards. As a developer of software looking to generate value in our society, we can't have a thousand fiefdoms, so let me start with thanks for that. The second thing I really appreciate is your segmentation of the actors: the people who generate the AI models, those who develop them into useful products and those who operate them in public. I think that's a very useful framework.
    On the question of bias, I think it raises some interesting questions. I think we have to be very careful about legislating against bias in the right way. In developing the model, really the only difference between a linear regression—think of what you might do in Excel—and an AI model is the black box aspect. Yes, if you're trying to figure out how to allocate health system resources, you probably don't want to put certain elements that could be bigoted into your model, because that's not how a society wants to be allocating health resources. With a machine learning model, you're going to feed a bunch of data into a black box and out comes a prediction or an optimization. Then you can imagine all sorts of biases creeping in. It might be, for example, that left-handed people can actually get by with a bit less home care and still stay out of the hospital.... That wouldn't be programmed into the algorithm, but it could certainly be an output of the algorithm.
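[Illustrative sketch, not part of the testimony: a minimal synthetic example of the kind of bias just described, in which a sampling artifact in the training data, not any programmed rule, produces a biased output. The scenario and all numbers are invented.]
```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
left_handed = (rng.random(n) < 0.1).astype(float)

# Unobserved true care need. The sampling artifact: in this synthetic
# cohort, left-handed clients happen to have been drawn from a
# healthier population. Handedness is causally irrelevant to need.
need = rng.normal(5.0, 1.0, n) - 0.8 * left_handed

# The model only sees handedness and a noisy assessment score.
assessment = need + rng.normal(0.0, 1.0, n)
X = np.column_stack([np.ones(n), assessment, left_handed])

hours_delivered = need + rng.normal(0.0, 0.3, n)
coef, *_ = np.linalg.lstsq(X, hours_delivered, rcond=None)
print(f"learned weight on left_handed: {coef[2]:.2f}")
# Clearly negative (about -0.4 under these settings): the fitted model
# recommends fewer hours for left-handed clients, purely because of
# who happened to be in the training sample.
```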
    I think what we need to be careful of is assigning the right accountability to the right actor in the framework. I think the model developers need to demonstrate a degree of care in the selection of the training data. To the previous example—and I can say this with some certainty—the reason the facial recognition model doesn't perform as well for indigenous communities is that it just wasn't fed enough training data from that particular group. When you're developing the AI model, you need to take care, and demonstrate that you've taken care, to use a representative training set that's not biased.
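[Illustrative sketch, not part of the testimony: a minimal representativeness check of the kind just described, comparing each group's share of the training set against its share of the population. Group names and counts are invented.]
```python
population_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
training_counts = {"group_a": 9_100, "group_b": 780, "group_c": 120}

total = sum(training_counts.values())
for group, pop_share in population_share.items():
    train_share = training_counts[group] / total
    ratio = train_share / pop_share  # 1.0 means perfectly proportional
    flag = "UNDER-REPRESENTED" if ratio < 0.5 else "ok"
    print(f"{group}: train {train_share:.1%} vs population {pop_share:.1%}"
          f" (ratio {ratio:.2f}) {flag}")
# group_b and group_c are badly under-sampled; a model trained on this
# set can be expected to perform worst on exactly those groups.
```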
    When you develop an algorithm and put it on the market, providing as much transparency as possible to the people who will use it is definitely something we should endeavour to do: show that it was built on a representative training set and attach the right caveats to its output. I think we have to be careful not to bring inappropriate accountability back to the model developers. That's my concern. Otherwise, you're going to be pitting usefulness against potential frameworks for bias.
    What I think we have to be careful about with this legislation is to not disproportionately shift societal concerns about how resources should be allocated—you name the use case—onto the tool developer, but to seat them appropriately with the user of the tool.
    That's my perspective on the bill.

  (1600)  

[Translation]

    Thank you very much, Mr. Schauer.
    I will now give the floor to Jérémie Harris, of Gladstone AI, for five minutes.

[English]

     Thank you and good afternoon, Mr. Chair and members of the committee.
    I'm here on behalf of Gladstone AI, which is an AI safety company that I co-founded. We collaborate with researchers at all the world's top AI labs, including OpenAI and partners in the U.S. national security community, to develop solutions to pressing problems in advanced AI safety.
    Today's AI systems can write software programs nearly autonomously, so they can write malware. They can generate voice clones of regular people using just a few seconds of recorded audio, so they can automate and scale unprecedented identity theft campaigns. They can guide inexperienced users through the process of synthesizing controlled chemical compounds. They can write human-like text and generate photorealistic images that can power, and have powered, unprecedented and large-scale election interference operations.
    These capabilities, by the way, have essentially emerged without warning over the last 24 months. Things have transformed in that time. In the process, they have invalidated key security assumptions baked into the strategies, policies and plans of governments around the world.
    This is going to get worse, and fast. If current techniques continue to work, the equation behind AI progress has become dead simple: Money goes in, in the form of computing power, and IQ points come out. There is no known way to predict what capabilities will emerge as AI systems are scaled up using more computing power. In fact, when OpenAI researchers used an unprecedented amount of computing power to build GPT-4, their latest system, even they had no idea it would develop the ability to deceive human beings or autonomously uncover cyber exploits, yet it did.
    We work with researchers at the world's top AI labs on problems in advanced AI safety. It's no exaggeration to say that the water cooler conversations of the frontier AI safety community frame near-future AI as a weapon of mass destruction. It's WMD-like and WMD-enabling technology. Public and private frontier AI labs are telling us to expect AI systems to be capable of carrying out catastrophic malware attacks and supporting bioweapon design, among many other alarming capabilities, in the next few years. Our own research suggests this is a reasonable assessment.
    Beyond weaponization, evidence also suggests that, as advanced AI approaches superhuman general capabilities, it may become uncontrollable and display what are known as “power-seeking behaviours”. These include AIs preventing themselves from being shut off, establishing control over their environment and even self-improving. Today's most advanced AI systems may already be displaying early signs of this behaviour. Power-seeking is a well-established risk class. It's backed by empirical and theoretical studies by leading AI researchers published at the world's top AI conferences. Most of the safety researchers I deal with on a day-to-day basis at frontier labs consider power-seeking by advanced AI to be a significant source of global catastrophic risk.
    All of which is to say that, if we anchor legislation on the risk profile of current AI systems, we will very likely fail what will turn out to be the single greatest test of technology governance we have ever faced. The challenge AIDA must take on is mitigating risk in a world where, if current trends simply continue, the average Canadian will have access to WMD-like tools, and in which the very development of AI systems may introduce catastrophic risks.
    By the time AIDA comes into force, the year will be 2026. Frontier AI systems will have been scaled hundreds to thousands of times beyond what we see today. I don't know what capabilities will exist. As I mentioned earlier, no one can know. However, when I talk to frontier AI researchers, the predictions I hear suggest that WMD-scale risk is absolutely on the table on that time horizon. AIDA needs to be designed with that level of risk in mind.
    To rise to this challenge, we believe AIDA should be amended. Our top three recommendations are as follows.
    First, AIDA must explicitly ban systems that introduce extreme risks. Because AI systems above a certain level of capability are likely to introduce WMD-level risks, there should exist a capability level, and therefore a level of computing power, above which model development is simply forbidden, unless and until developers can prove their models will not have certain dangerous capabilities.
    Second, AIDA must address open source development of dangerously powerful AI models. In its current form, on my reading, AIDA would allow me to train an AI model that can automatically design and execute crippling malware attacks and publish it for anyone to freely download. If it's illegal to publish instructions on how to make bioweapons or nuclear bombs, it should be illegal to publish AI models that can be downloaded and used by anyone to generate those same instructions for a few hundred bucks.
    Finally, AIDA should explicitly address the research and development phase of the AI life cycle. This is very important. From the moment the development process begins, powerful AI models become tempting targets for theft by nation-state and other actors. As models gain more capabilities and context awareness during the development process, loss of control and accidents become greater risks as well. Developers should bear responsibility for ensuring the safe development of their systems, as well as their safe deployment.

  (1605)  

     AIDA is an improvement over the status quo, but it requires significant amendments to meet the full challenge likely to come from near-future AI capabilities.
    Our full recommendations are included in my written submission, and I look forward to taking your questions. Thank you.
    Thank you very much, Mr. Harris.

[Translation]

    Over to you, Professor Quaid.
    Mr. Chair, vice-chairs and members of the Standing Committee on Industry and Technology, I am very pleased to be here once again, this time to talk about Bill C‑27.

  (1610)  

[English]

    I am grateful to be able to share my time with my colleague Céline Castets-Renard, who is online and who holds the university research chair in responsible AI in a global context. As one of the preeminent legal experts on artificial intelligence in Canada and in the world, she is very familiar with what is happening elsewhere, particularly in the EU and the U.S. She also leads an SSHRC-funded research project on AI governance in Canada, of which I am part. The project is directed squarely at the question you are grappling with today in considering this bill, which is how to create a system that is consistent with the broad strokes of what major peer jurisdictions, such as Europe, the U.K. and the U.S., are doing while nevertheless ensuring that we remain true to our values and to the foundations of our legal and institutional environment. In short, we have to create a bill that's going to work here, and our comments are directed at that; at least, my part is. Professor Castets-Renard will speak more specifically about the details of the bill as it relates to regulating artificial intelligence.
     Our joint message to you is simple. We believe firmly that Bill C-27 is an important and positive step in the process of developing solid governance to encourage and promote responsible AI. Moreover, it is vital and urgent that Canada establish a legal framework to support responsible AI governance. Ethical guidelines have their place, but they are complementary to and not a substitute for hard rules and binding enforceable norms.
     Thus, our goal is to provide you with constructive feedback and recommendations to help ready the bill for enactment. To that end, we have submitted a written brief, in English and in French, that highlights the areas that we think would benefit from clarification or greater precision prior to enactment.
     This does not mean that further improvements are not desirable. Indeed, we would say they are. It's only that we understand that time is of the essence, and we have to focus on what is achievable now, because delay is just not an option.
     In this opening statement, we will draw your attention to a subset of what we discuss in the brief. I will briefly touch on four items before I turn it over to my colleague, Professor Castets-Renard.
     First, it is important to identify who is responsible for what aspects of the development, deployment and putting on the market of AI systems. This matters for determining liability, especially of organizations and business entities. Done right, it can help enforcers gather evidence and assess facts. Done poorly, it may create structural immunity from accountability by making it impossible to find the evidence needed to prove violations of the law.
     I would also add that the current conception of accountability is based on state action only, and I wonder whether we should also consider private rights of action. Those are being explored in other areas, including, I might add, in Bill C-59, which has amendments to the Competition Act.
     Second, we need to use care in crafting the obligations and duties of those involved in the AI value chain. Regulations should be drafted with a view to what indicators can be used to measure and assess compliance. Especially in the context of regulatory liability and administrative sanctions, courts will look to what regulators demand of industry players as the baseline for deciding what qualifies as due diligence and what can be expected of a reasonably prudent person in the circumstances.
    While proof of regulatory compliance usually falls on the business that invokes it, it is important that investigators and prosecutors be able to scrutinize claims. This requires metrics and indicators that are independently verifiable and that are based on robust research. In the context of AI, its opacity and the difficulty for outsiders to understand the capability and risks of AI systems make it even more important that we establish norms.
     Third, reporting obligations should be mandatory and not ad hoc. At present, the act contemplates the power of the AI and data commissioner to demand information. Ad hoc requests to examine compliance are insufficient. Rather, the default should be regular reporting at regular intervals, with standard information requirements. The provision of information allows regulators to gain an understanding of what is happening at the research level and at the deployment and marketing level at a pace that is incremental, even if one can say that the development of AI is exponential.
     This builds institutional knowledge and capacity by enabling regulators and enforcers to distinguish between situations that require enforcement and those that do not. That seems to be the crux of the matter. Everyone wants to know when it's right to intervene and when we should let things evolve. It also allows for organic development of new regulations as new trends and developments occur.
     I would be happy to talk about some examples. We don't have to reinvent the wheel here.
     Finally, the enforcement and implementation of the AI act as well as the continual development of new regulations must be supported by an independent, robust institutional structure with sufficient resources.
    The proposed AI and data commissioner cannot accomplish this on their own. While not a perfect analogy—and I know some people here know that I'm the competition expert—I believe that the creation of an agency not unlike the Competition Bureau would be a model to consider. It's not perfect. The bureau is a good example because it combines enforcement of all types—criminal, regulatory, administrative and civil—with education, public outreach, policy development and now digital intelligence. It has a highly specialized workforce trained in the relevant disciplines it needs to draw on to discharge its mandate. It also represents Canada’s interests in multilateral fora and collaborates actively with peer jurisdictions. It matters, I think, to have that for AI.
    I am now going to turn it over for the remaining time to my colleague Professor Castets-Renard.
    Thank you.

  (1615)  

[Translation]

    Thank you very much, Mr. Chair, vice-chairs and members of the Standing Committee on Industry and Technology.
    I would also like to thank my colleague, Professor Jennifer Quaid, for sharing her time with me.
    I'm going to restrict my address to three general comments. I'll begin by saying that I believe artificial intelligence regulation is absolutely essential today, for three primary reasons. First of all, the significance and scope of the current risks are already well documented. Some of the witnesses here have already discussed current risks, such as discrimination, as well as future and existential risks. It's absolutely essential today to consider the impact of artificial intelligence, in particular its impact on fundamental rights, including privacy, non-discrimination, protecting the presumption of innocence and, of course, the observance of procedural guarantees for transparency and accountability, particularly in connection with public administration.
    Artificial intelligence regulation is also needed because the technologies are being deployed very quickly and the systems are being further developed and deployed in all facets of our professional and personal lives. Right now, they can be deployed without any restrictions because they are not specifically regulated. That became obvious when ChatGPT hit the marketplace.
    Canada has certainly developed a Canada-wide artificial intelligence strategy over a number of years now, and the time has now come to protect these investments and to provide legal protection for companies. That does not mean allowing things to run their course, but rather providing a straightforward and understandable framework for the obligations that would apply throughout the entire accountability chain.
    The second general comment I would like to make is that these regulations must be compatible with international law. Several initiatives are already under way in Canada, which is certainly not the only country to want to regulate artificial intelligence. I'm thinking in particular, internationally speaking, of the various initiatives being taken by the Organisation for Economic Co‑operation and Development, the Council of Europe and, in particular, the European Union and its artificial intelligence bill, which should be receiving political approval tomorrow as part of the inter-institutional trialogue negotiations between the Council of the European Union, the European Parliament and the European Commission. The negotiations have reached their final phase, after two years of discussion. President Biden's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence also needs to be given consideration, along with the technical standards developed by the National Institute of Standards and Technology and the International Organization for Standardization.
    My final general comment is about how to regulate artificial intelligence. The bill before us is not perfect, but the fact that it is risk-based is good, even though it needs strengthening. By this I mean taking into account risks that are now considered unacceptable, which are not necessarily existential risks but risks that we can already identify today, such as the widespread use of facial recognition. Also worth considering is a better definition of the risks posed by high-impact systems.
     We'd like to point out and praise the amendments made by the minister, Mr. Champagne, before your committee a few weeks ago. In fact, the following remarks, and our brief, are based on these amendments. It was pointed out earlier that not only individual risks have to be taken into account, but also collective risks to fundamental rights, including systemic risks.
    I'd like to add that it's absolutely essential, as the minister's amendments suggest, to consider the general use of artificial intelligence separately, whether in terms of systems or foundational models. We will return to this later.
    I believe that a compliance-based approach that reflects the recently introduced amendments should be adopted, and it is fully compatible with the approach adopted by the European Union.
    When all is said and done, the approach should be as comprehensive as possible, and I believe that the field of application of Bill C‑27 is too narrow at the moment and essentially focused on the private sector. It should be extended to the public sector and there should be discussions and collaboration with the provinces in their fields of expertise, along with a form of co‑operative federalism.
    Thank you for your attention. We'll be happy to discuss these matters with you.

  (1620)  

    Thank you very much.
    Mr. Gagné, you have the floor.
    I'm pleased to be here to testify as an individual.
    I'm a strategic advisor in artificial intelligence. I've spent my entire career using AI technology, which became available in the early 2000s. I worked in operational research, artificial intelligence and applied mathematics. I developed tools and software that have been used around the world. In 2016, I founded Element AI and was the company's president until it was sold to ServiceNow in 2021.
    I have frequently collaborated internationally. For two years, I was the co‑chair of the working group on innovation and marketing for the Global Partnership on Artificial Intelligence. I also represented Canada on the European Commission's high-level expert group on artificial intelligence. Canada was the only country to have participated that was not in the European Union. I co‑chaired the drafting of the main deliverable on regulation and investment for trustworthy artificial intelligence.
    I was involved in many events held by the Organisation for Economic Co‑operation and Development and the Institute of Electrical and Electronics Engineers, in addition to many other international contributions. I was also a member of federal sectoral economic strategy tables for digital industries.
    Despite Canada's track record in artificial intelligence research, and its undeniable contribution to basic research, it has gradually been losing its leadership role. It's important to be aware of the fact that we are no longer in the forefront. Our researchers now have limited resources. Conducting research and understanding what is happening in this field today is extremely expensive, and many innovations will emerge in the private sector. It's a fact. Much of the work being published by researchers has been done in collaboration with foreign firms, because that's how they can get access to the resources needed to train models and conduct tests, so that they can continue to publish and come up with new ideas.
    Canada has always been somewhat less competitive than the United States, and although things have not got worse, they haven't improved. For a technology as essential as artificial intelligence, which I like to compare literally to energy, we're talking about intelligence, know-how and capabilities. It's a technology that is already being deployed in every industry and every sphere of life. Absolutely no corner of society is unaffected by it.
    What I would like to underscore is the importance of not treating artificial intelligence homogeneously, just as the various regulations and statutes for oil, natural gas and electricity are not so treated. I could even start breaking it down into all the subsidiary aspects of production for each of these resources. It's very difficult to treat artificial intelligence in the same way for each of its applications. Everything is moving forward very quickly and it's highly complex, and when you put all the facts together, we feel overwhelmed. That, unfortunately, is what we hear all too often in the media. We've been here for quite a while and we've already heard words like "fear" and "advancement". There has also been talk of uncertainty about the future.
    So, to return to the subject at hand, yes, it's absolutely urgent to take action. I am in no way hinting that measures ought not to be taken, but they ought to be appropriate for the situation now facing us.

  (1625)  

    We are facing a rapidly evolving, complex situation that affects every sphere of society. It's important to avoid adopting a single, straightforward and overly forceful response. What would happen if we took that kind of approach? We would perhaps protect ourselves, but it would certainly prevent us from taking advantage of opportunities and promoting the kind of economic development and productivity growth that would enrich the whole country. That's simply a fact. We can't deal with every single potential situation, because it would be too complex.
    If we try to do everything and cover all aspects, our regulations will be too vague, ineffective and misunderstood. The economic outcome of vague regulation—you know this better than I do—will be that investments will not flow in. If consequences are unclear or definitions left until later, companies will simply invest elsewhere. It's a highly mobile digital field. Many Canadian workers compile and train models in the United States, beyond the reach of our own rules for our companies and our universities. It's important to be aware of that.
    I believe that these are the key elements. They are central to our deliberations about how to write the rules, and in particular the way that they will be fine-tuned. Not only that, but they will guide the effort required to do the work properly to come up with a clear and accurate regulatory framework that promotes investment. With a framework like that, we'll know exactly what we are going to get if we make such and such an investment, and would understand exactly what the costs will be to provide transparency, to be able to publish data and to check that they have been anonymized.
    That would enable organizations to invest as much as they and we want. If we are clear, organizations will be able to do the computations and decide whether or not to invest in Canada and deploy their services here. It will then be up to us to determine whether the bar has been set too high and whether the criteria are overly restrictive.
    Vague regulations would guarantee that nothing will happen. Companies will simply go elsewhere because it's too easy to do so. Various other elements are on my list, and I will summarize these. Please excuse me for not having done so prior to my presentation. I will send the committee all the details and recommendations with respect to the adjustments that should be made.
    In this regulatory framework, I believe that transparency will be very important if there is to be a climate of trust. It's important to ensure that users of the technology are aware that they are interacting with it. Some questions and subjects arise in all industries. It's important to be able to know what we are getting.
    I'm talking about the underlying principles: stating what services we can access, their parameters and their specifications. If a service changes or its model is updated, that would enable us to assess the repercussions of using it. There are also all the other principles that would ensure people are not being manipulated and that require compliance with ethical and other issues. These are fundamental principles that must be part of the regulatory framework.
    One of my most serious concerns is the lack of specificity and the possibility that the law would be too broad in scope. I learned a lesson from my participation in what led to the European Union's artificial intelligence law. Europe tried to come up with exhaustive legislative measures that attempted to include almost everything. However, many of the recommendations made by the committee at the time focused on the need to work with industry, the need for accuracy and avoiding a piece of legislation that tried to cover everything.
    Of course, something new always comes up. It could be generative artificial intelligence or the next generation of artificial intelligence as applied to cybersecurity, health and all aspects of the economy, services and our lives. There's always something that has to be amended or altered.

  (1630)  

    My view is that caution is needed in this respect, as well as an extremely surgical approach that would lead to the development of regulations specific to each and every industry sector, developed with their assistance (the automobile sector, for instance).
    Thank you very much, Mr. Gagné.
    That concludes the statements from the witnesses. We are now going to begin the first round of questions.
    Mr. Généreux, you have the floor for six minutes.
    Thanks to all the witnesses.
    Earlier, Mr. Harris, I had the impression I was in a movie in which a parliamentary committee was conducting a study on an artificial intelligence bill. You were telling the people on this committee that the third world war was about to arrive and that it would be technological, by which I mean that no weapons of any kind would be used. Listening to you today, I felt like swearing, but unfortunately, I couldn't.
    My greatest frustration, and I don't think I'm alone around this table in feeling this way, is that the bill before us includes a series of elements, underpinned by three principles: privacy, the courts and artificial intelligence. However, according to the testimony we heard today, artificial intelligence should have been dealt with in a separate bill.
    We are being told that there have already been major advances in artificial intelligence since the start of our study, including the signing of a memorandum of understanding in England. Some countries decided to introduce a voluntary code while awaiting the adoption of various bills.
    Ms. Castets-Renard, you spoke about a trialogue that would address certain issues. You are no doubt talking about Europe. Mr. Gagné, you also spoke earlier about measures that were proposed in reports you submitted to the European Union. Are you talking about the same thing? I'm not sure I've understood properly.
    I will let Ms. Castets-Renard take that one, because she's the expert in European law.
    I'm trying to understand whether there's a link between the work done by Mr. Gagné and the European Union, and the bill that could possibly be adopted tomorrow.
    I can't speak on behalf of Mr. Gagné because I'm not exactly sure what he was involved in, but I think he took part in the work on ethics done by the group of experts that preceded the proposed European Union regulations.
    What I'm talking about are proposed regulations disclosed in 2021 by the European Commission, and afterwards adopted by the Council of the European Union in December 2022, and by the European Parliament in June 2023. In Europe, law is decided by three partners or co‑legislators. In the case under discussion, the three partners have to agree on the same wording, because where things stand now, each has adopted different versions. Since the summer, and particularly since September, this trialogue has been under way, and there has indeed been debate among the representatives of these three institutions. There is going to be a very important meeting tomorrow—the fifth of its kind. It is therefore possible that there might be political agreement on the wording, which in any event must be adopted before the coming European elections prior to June 2024.
    So, Ms. Castets-Renard, a bill could be passed before 2024 if the text is adopted or agreed to by the three parties tomorrow.
    Mr. Gagné, you and Mr. Harris spoke about how quickly artificial intelligence, technology, and research and development were moving forward. Everyone is aware of that. Earlier, Ms. Castets-Renard referred to the amendments introduced by the minister, whose actual content we don't really know because we haven't yet had an opportunity to read them. What's your view of this bill compared to what is being done elsewhere in the world?
    This question is also for Ms. Quaid.

  (1635)  

    This bill seems to be on a path not unlike the one taken in Europe, by which I mean that there is an attempt being made to come up with legislation on artificial intelligence. However, in my address, I suggested that you think about the scope of this legislation and the effort required to get there.
    The United Kingdom also has a bill in the works and over 280 people are working full time on it, which indicates the scale of the task. As Canada is going through a process similar to the one in Europe, I believe it would be a good idea, in view of Canada's resources—
    Don't be afraid to say it plainly, Mr. Gagné.
    —and our role in this, to share the work. For example, when people talk about self-driving cars, that's artificial intelligence. Smart cities, that's artificial intelligence. All these sectors need to look into the impact of artificial intelligence on privacy. There are cameras in cars, and these vehicles involve risks; for example, how can you certify that a car is self-driving and completely automatic? How would that work in a parking lot for cars that interact? What will the rules of the game have to be? What data could be shared? I used cars as an example, but I could go on for quite a while.
    What I mean is that it's a good idea to come up with a framework and principles. There are certain basic principles for the protection of privacy and personal information, as well as data anonymization. Everything you've been working on in some parts of the bill is, I believe, extremely useful, because it's a specific subject.
    But artificial intelligence is not a specific subject. It's a technology that has many uses. That's my point of view.
    Thank you, Mr. Gagné.
    Ms. Quaid, do you have anything you'd like to add, briefly?
    I'd like to point out that it's not necessary to put sectoral regulations or frameworks in opposition to general regulations. I think that the danger is mixing too many things up when the emphasis should be on what's on the table, which is a general framework. That does not exclude other frameworks, at the provincial level for example.
    We are lagging behind. Insofar as it's something that is affecting every sector, there will have to be some legislation more specifically suited to certain sectors. However, this doesn't exclude a general framework or set it in opposition to such a general framework. Europe certainly has gone in that direction. It has overall regulations and sectoral regulations, including for transportation.
    The United Kingdom has decided not to introduce legislation and will continue with a voluntary framework. Without wishing to speak on behalf of Ms. Castets-Renard, who knows much more about it than I do, I can say that that's the wrong thing to do. I believe we need regulation and a framework.
    We signed it, at least.
    Excuse me?
    We signed that agreement.
    Yes, but it's a voluntary agreement.
    I don't want to use up your time, but if you want to talk about corporate compliance and how the voluntary rules work in comparison to the binding rules, I could go on forever.
    Yes, that's not surprising.
    Thank you very much, Ms. Quaid.
    I'd like to inform the members of the committee that the amendments on the portion of the bill pertaining to artificial intelligence were released last week. They are now accessible and have been distributed.
    Mr. Turnbull, you have the floor for six minutes.

[English]

    Thank you, Chair.
    Thanks to all the witnesses for being here today. This is a very challenging topic for even the smartest of legislators. I really value the expertise that all of you bring to this conversation. It's really helping inform our discussions.
    Professor Quaid, you said in your opening remarks that delay is “not an option”. You used the words “vital” and “urgent”. It sort of sounds like right now in Canada, AI development and the regulations around it are a bit of a Wild West where anything goes. Can you speak to the urgency you described and stress that a little bit more?
    I think the urgency comes from the fact that for a long time there has basically been an unregulated sphere. Perhaps everyone was a little bit asleep to how quickly things evolved. I think now we are late. I mean, everyone is late.
    I can't speak to the specifics of the development of the technology. I am not a scientist of artificial intelligence. But I do know a thing or two about law and about business law, and I can tell you that if you want businesses to modulate their behaviour as a function of the public interest, you need legislation. The profit motive and the structure of our corporate law are extremely permissive. Businesses will not make the choices you want them to make. We have to make those choices, or rather, you, as the representatives of Canadians, have to make those choices. What is most important? You put that down.
    That doesn't mean we don't fine-tune. That doesn't mean we don't adapt. But we have to start laying some rules down, because right now what's driving the choices is self-interest, and mostly that's economic.

  (1640)  

    Thanks. Sometimes we talk about perfection being the enemy of the good. It seems like this is one of those situations where we need to get legislation passed in order to have something, which, of course, as Mr. Harris has pointed out, with the rapid pace of the evolution of AI development, we're probably going to need to continue to update.
    Would you agree with that, Ms. Quaid?
    Yes. I would say there are some examples of other sectors that evolved very rapidly and that we have lots of experience regulating. We don't need to reinvent the wheel. We do need to be creative. We need to be more agile. We need to be prepared to bring new elements into the regulatory process.
    I think there are lots of smart people who have great ideas to help you with that. I don't think we can start by saying, oh, it's new and we don't know what to do. I think the time for that has long gone. We need to move forward. It's not perfect. I will never say that it's perfect—no law is perfect—but it is perfectible or improvable. We need to start somewhere.
     Thank you.
    Ms. Quaid, I'm going to you again.
    You mentioned something called “structural immunity” as being a risk in your opening remarks, I think.
    I understand the concept itself, but I'd like to have an example of where that might be a real risk for us, in terms of our work moving forward, and how we might be able to avoid that.
    I'm coming at this with my corporate criminal liability hat, and this statute is primarily criminal law. That was one of the astonishing things when I first read this bill. Relying on criminal enforcement comes with some costs in terms of how you prepare evidence and put things together.
    What I'm concerned about is when we don't have transparency about who's involved with what decisions in relation to this technology. I can't speak to how it's actually done. I think the experts here can say something about that. What we need to insist on is transparency about who does what, because you cannot convict a corporation or an organization in this country without knowing who did what, what their status is and what their decision-making power is in the organization. I will direct you to section 2 of the Criminal Code, if you want to read it.
    Even in the case of regulatory liability, where an employee can engage the liability of the organization, you don't need a status-based association, such as the employee being a senior officer. You still need to know who did what; otherwise you have no evidence. I think it's really important to make sure we create a regime that forces the information out so that we can then assess.
    That doesn't mean we're going to convict all the time or that we're going to prosecute all the time, but if everything is hidden, then this is just window dressing. You will never, ever get a prosecution, or even administrative liability, in my view.
    Thank you for that. It's very helpful testimony.
    Mr. Harris, I want to ask you a question similar to Mr. Généreux's.
    I similarly had the experience of listening to you and feeling like I was in a horror movie, a sci-fi novel or some intersection of those when I heard you talk. I know that you're bringing up these risks and potential harms as a very real thing, so I don't want to take that lightly, but it is quite scary to hear.
    I want to ask you a bit of an ethical or philosophical question. You talked about mitigating the risks. You talked about a blanket ban on, or explicitly forbidding, certain types of AI or advanced AI systems. One question that occurs to me when we're dealing with, essentially, advanced AI is whether it is surpassing human intelligence. I think that's what I'm hearing. You talked about the superhuman and the power-seeking behaviours as being a real risk.
    I'm interested in how we develop an ethical and/or legal framework. I think that is a core challenge in this work, which I'm grappling with. A lot of our ethical and our legal concepts rely on things like reasonably foreseeable futures. They rely on concepts of duty, etc., most of which rely on humans' ability to look at what the outcomes might be, given our past experience.
    You talked about how some of our national security assumptions had been invalidated. Are some of our ethical assumptions and our legal assumptions being invalidated by the advancement of AI? How do human beings create a system or a set of guidelines for something that is actually beyond our intelligence?
    It's a tough question.

  (1645)  

    I think those are excellent questions.
    I think, fortunately, we're not without tools for dealing with them. To piggyback off the testimony that Jennifer just gave, I think it's actually quite right to ask, “How can we massage this into a form that fits within our legal frameworks?” We're not going to overhaul the Constitution tomorrow. It's not going to happen.
    One thing we can do is to recognize that we can't predict the capabilities of systems at the next level of scale, so safety by design would seem to imply a pause until we can. We're not talking about a blanket ban. We're saying, “until we can”, let's incentivize the private sector to make fundamental advances in the science of AI and to give us a scientific theory for predicting the emergence of those dangerous capabilities.
    I'd also say we can draw inspiration from the White House executive order that came out recently. One of the key things they do—again, to piggyback off this idea, like sunlight is the best disinfectant, to bring this all out to the fore so that we can evaluate what's going on—is have a reporting requirement in the executive order. If you train an AI system that uses above a certain amount of computational power in the training process, you need to report the results of various audits you've performed, various evaluations. Those evaluations have to do with bioweapon design capability, chemical synthesis ability and self-replication ability. That's all baked into the executive order.
    I'd like to see something like that: a tiered process that essentially mirrors what we see in the EO, based on computational processing power thresholds. Above this line, you have to do this; above that line, you have to do that. It's that sort of thing.
    That's very helpful.
    How much time do I have?
     It's an interesting line of questioning, Mr. Turnbull. You can continue.
    Thank you. You're very generous.
    Is it really the case that computational power is the key predictor of how an advanced AI system will evolve, and that it therefore correlates with the level of risk?
    I'm reluctant to think it's that simple. Perhaps that's what you said. Am I accurate?
    No, you're quite right to be reluctant to think it's that simple. That's the single best indicator we have right now. A couple of things can factor into this, too. You can make breakthroughs at the theoretical level, the algorithmic level, that effectively mean you can squeeze more juice out of the lemon: for the same amount of computational power, you can do more. That's precisely why you want to offload the setting of that computational power threshold to regulators. Don't enshrine it in law, because it will change quickly. That's one piece.
    To the question of what other capabilities might emerge from these systems, it also depends on the training data. If you train these systems on bio-sequence data, they will learn how to make a bioweapon with less computational power. That's enshrined in the executive order as well: there's a lower threshold for those sorts of technologies.
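[Illustrative sketch, not part of the testimony: the tiered, compute-based reporting logic just described, using the thresholds reported for the October 2023 U.S. executive order: 10^26 training operations generally, and 10^23 for models trained primarily on biological sequence data. The function name and tier wording are invented.]
```python
GENERAL_THRESHOLD = 1e26  # training operations, per the executive order
BIO_THRESHOLD = 1e23      # lower bar for bio-sequence-trained models

def reporting_tier(training_ops: float, primarily_bio_data: bool) -> str:
    """Return the reporting obligation implied by training compute."""
    threshold = BIO_THRESHOLD if primarily_bio_data else GENERAL_THRESHOLD
    if training_ops >= threshold:
        return ("report safety evaluations, e.g. bioweapon design, "
                "chemical synthesis and self-replication capability")
    return "below the line: no reporting obligation at this tier"

print(reporting_tier(5e26, primarily_bio_data=False))  # above the line
print(reporting_tier(1e24, primarily_bio_data=True))   # bio data, lower bar
```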
    Thanks.
    Mr. Chair, I can end there, but I see that Mr. Gagné wants to make a comment. Maybe we could allow him that.

[Translation]

    Of course.
    Go ahead, Mr. Gagné.

[English]

    I have a quick reaction here.
    The latest progress in science has demonstrated techniques where you could invest a significant amount of money in compute at inference time—that's not training—to have models of a certain size perform as though they were 10 times bigger. It's never that simple.
    Yes, model size is a proxy, but with sufficient money or sufficient compute there are ways to go further than model size would suggest. There are ways to go around that and get performance out of these models. There are also ways to specialize smaller models.
    Again, I think it's a use case-based approach that can potentially offer an opportunity to mitigate the risks. I think the use cases mentioned are absolutely relevant, but the triggers are never that simple.

[Translation]

    Thank you very much.
    Mr. Lemire, you have the floor.
    Thank you to all the witnesses.
    Mr. Harris, I remember when you came to tell us, as legislators, about the risks of things going wrong with artificial intelligence. If I'm not mistaken, in your address you gave a potential example. You said that if someone wanted to get to Toronto more quickly, they could use artificial intelligence to simulate a major police intervention following an accident or some kind of attack. That would clear the road for them to get there more quickly.
    In a situation like the truckers' convoy near the Hill last year, it would be all too easy to use artificial intelligence to show an image of the Parliament Buildings on fire, as part of a serious disinformation ploy.
    Was it actually you who gave that talk?

  (1650)  

    It was indeed me.
    Okay. We'll continue later.
    Mr. Gagné, I've been listening to you from the beginning and find that we agree on the need to adopt parts one and two of the bill fairly quickly.
    However, for part 3, given the rapid development of the situation around the world, is the current form of the bill still relevant today? Are we on the wrong track? Should we stop and rewrite everything, or continue with what we have been doing?
    I agree on the fact that it's urgent to establish a base.
    You know how things work with legislation and other such matters better than I do. I don't know how long it would take to start over from scratch, but I think it would be a lengthy process. I feel that an effort should be made to come up with a version that provides a solid foundation that applies to most instances and, most importantly, is specific. That, in my view, is the way to go.
    The danger arises when you start adding things. I read the amendments. I also felt bad when Mr. Généreux said that they had not been published, because I had read them on the train on my way here. I asked myself why I had been given access to the text of the amendments.
    The list of high-impact artificial intelligence system categories was presented. On that, I'd like to say that there are so many applications that I was wondering why there is a separate category. It's important to be specific and more transparent, to comply with the regulations, and to factor in all the costs of implementing the infrastructure. If any thought is being given to the health, media or social media sectors, more precision is needed. If the field is too broad, it leaves room for interpretation.
    If startup companies conducting research are attempting to develop products for the health field, they will need capital to put something very elaborate in place, and the costs will be high. Those are the kinds of factors that have to be kept in mind. It's important to be specific in what you're looking for.
    Absolutely.
    I was gratified when I heard your testimony, because I've been reading about artificial intelligence issues for several months. My first observation is that while Canada was once a leader in AI, that is no longer the case, unfortunately.
    We need to adopt the best existing approach rather than attempt to invent something ourselves. Personally, as a Quebecker, I am always concerned about preserving our cultural distinctiveness and finding a way to protect the future of our young companies. That has an economic impact.
    One of the criticisms of the bill is its lack of clarity in terms of criminal liability. The bill covers industry, and if there is to be legislation, it's not going to be for those who are behaving, but rather those who are not. Are the bad guys afraid of what's in the bill? Are these regulations really binding? How can we regulate the offenders in the industry?
    The bad guys will just take their model to the country next door and make it available on the Internet.
    I understand wanting to have ways of stopping them and punishing them, but it's important not to try to achieve a perfect system or a perfect law that will avoid any risk or criticism. That would slow down innovation, and Canadian businesses adopting these technologies shouldn't continue to lag behind. Basically, we don't want to end up either impeding innovation or imposing heavy requirements in some of these areas.
    It is possible to place certain obligations on some players with huge economic interests in the country. They can be held accountable. On the other hand, if the goal is to have a framework that actually works, then it's important to ensure that it's not overly general and that it is not applied either too loosely or too broadly, because that would make it difficult for dynamic Canadian organizations to innovate, make rapid decisions and have confidence in the regulatory framework rather than be afraid of it.

  (1655)  

    Internationally, the Americans made an important move with their recent executive order. Is that the way to go? Is the consensus reached at the recently held summit on artificial intelligence safety adequate? Is that a minimum or a benchmark? As legislators, what should we be aiming at to get the job done for us?
    I think these are good guideposts. An enormous amount of work was done by the international community to understand the issues. I think that many of the things I was reading in Bill C‑27 and the amendments are valid, and I could identify which portions were intended to cover health or a specific aspect of biotechnology. I could really tell. However, the bill seems to want to cover all industries in Canada, from the smallest to the biggest. What's really needed is to think carefully about them, make adjustments and, if there are specific situations, work with those sectors, while concurrently protecting people and being careful not to hinder innovation.
    That's really my greatest concern. I have friends who are entrepreneurs, and I'm an entrepreneur myself, and reading this worried me. It's already difficult to innovate and try to stand out from the crowd. If it becomes even more expensive to develop and launch products, things will only get more complicated.
    I believe my speaking time has run out. Thank you very much.
    Thank you, Mr. Lemire.
    Mr. Masse, you have the floor.

[English]

     Thank you, Mr. Chair.
    Thank you to the witnesses.
    Ms. Ifill, if I can, I'll go to you. One thing I had a chance to do this summer was attend some conferences in the United States. They had some of the larger players that are developing artificial intelligence, and those players identified what you spoke about: a lot of biases currently exist in the modelling they're doing now.
     Can you speak more to that, in terms of how it can stream people and stream ideas if we don't have the right people building artificial intelligence models in the first place—models that reflect more of society versus models that aren't inclusive?
    There are many ways this could materialize into something that is not beneficial for some groups. For example, predictive policing is one way that we see artificial intelligence in use to predict criminal activity, but the training data that's used is historical. If you're using historical or certain types of data to train the AI system, you're going to get a compounded effect whereby those neighbourhoods that are overpoliced become even more policed.
    Another way it comes about is in hiring. Hiring agencies have used AI to search for executives for executive positions. Unfortunately, a lot of that data is also historical, which means there's a bias against women, because traditionally, women haven't held those positions.
     These are very real consequences that occur at scale, and I think the scale and the speed at which this could happen are very concerning. I believe the Edmonton police recently used a system that used DNA to predict the facial features of a suspect in a sexual assault, and what it came up with was a 14-year-old Black boy. That's the other thing. This adultification of Black boys is another way AI manipulates what we see and whom we consider to be victims and perpetrators.
    I think the problem has a lot to do with the training data, but the systems.... I'm not sure if the right questions have been asked or the right assumptions have been made to create the model itself.
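    The compounding effect described here can be sketched in a few lines of code. In the toy simulation below, two neighbourhoods have identical true crime rates, but one starts slightly over-represented in historical records; because patrols follow the records and incidents are only recorded where patrols go, the initial skew feeds on itself. All names and numbers are invented for illustration.

```python
# Toy feedback loop: patrols are allocated from historical records, and
# incidents are only recorded where patrols are present, so an initial
# recording skew compounds even though the true rates are identical.

TRUE_RATE = 1.0                        # same real crime rate in both places
records = {"A": 105.0, "B": 100.0}     # small historical skew toward A

for day in range(1, 366):
    target = max(records, key=records.get)    # patrol follows the data
    records[target] += TRUE_RATE              # only patrolled crime is seen
    if day % 73 == 0:
        share_a = records["A"] / sum(records.values())
        print(f"day {day}: share of records pointing at A = {share_a:.2f}")
```

By year's end in this sketch, neighbourhood A accounts for over 80% of the records despite no difference in underlying behaviour, which is the compounded overpolicing effect the witness describes.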

  (1700)  

     I appreciate that, and we haven't talked about it.
    Mr. Schauer wants to get in here.
    Please go ahead. I think that this is an important subject that we haven't really delved into too much because the modelling is critical.
    Yes, that would be an argument in support of Mr. Gagné's proposition, which is to legislate in the domain in which the model is used. In this specific example, in law enforcement, you have to take care not to do predictive policing that's biased. To say that the legislation should live at the model level could lead to very adverse effects of that legislation in a totally unrelated domain, like health care, or like transport with a self-driving car model. It's hard to believe we can write legislation so perfect, at the level of the underlying technology, that it covers the use cases in all these different domains.
    That would be my reflection on it.
    Thank you.
    Ms. Castets-Renard, please.

[Translation]

    Thank you.
     I'd like to add something about Bill C‑27. A risk-based approach would avoid treating all artificial intelligence systems in the same way, or placing the same obligations on them. Other options include the high-impact concept, and the amendments introduced by the minister, Mr. Champagne, explain what this concept means in seven different sectors of activity.
    I therefore don't think it's fair to say that it would be applied everywhere, on everyone, and haphazardly. It's possible to discuss how it's going to be applied in seven different activity sectors. Some, no doubt, would say that doesn't go far enough, but it is certainly not a law that will lack specifics, because the amendments specify the details.
    To return to what was said earlier, it also means that there can be a comprehensive approach with general principles, and a separate approach for each sector or field. That's what the European Union has done with its amendments. That's why statutes being adopted in other countries need to be considered.
    As for what was said about the United Kingdom earlier, Canada has signed a policy declaration which has no legal or binding value. It's a very general text that adds nothing to what we have already said about the ethics of artificial intelligence. It definitely does not prevent Canada from following its own path, as the United States did when it issued its executive order right before the summit in England. The Americans were not willing to wait for England to take the lead.
    Those are the details I wanted to add.

[English]

    Thank you, Mr. Chair.
    Do I have any time left?
    You don't, but I see that Madam Ifill has her hand up and that Mr. Harris does, as well, so we'll take both interventions before we go to Mr. Vis.
    Thank you.
    I do have an issue with our not providing accountability for these harms that haven't really been laid out properly. I also have a problem with the little accountability that we have in the bill today: a commissioner that really doesn't have any sort of public responsibility. Yes, we can say that we don't want to legislate this or that we don't want to legislate that, but we know the harms are here now. I don't want a big swath of Canadians to be part of those unintended consequences when we know what the consequences can be.

  (1705)  

    Mr. Harris.
    I just want to circle back to this notion of whether you regulate the model or the end applications. That is pretty central here. We're going to have to walk and chew gum at the same time. There are risks that, irreducibly, come from the model. Look at OpenAI's ChatGPT, for example. They built this one system, one model. I don't know if I can.... In fact, I know that I can't. I know that no one, technically, can count the full range of end-use applications that a tool like that would have. You'll use it in health care today, in space exploration tomorrow and in software engineering the day after.
     The idea that we're going to be able to take a general-purpose model like this and regulate it as if somehow we can play this losing game of whack-a-mole.... This is just not going to track reality, unfortunately. This is true for a certain subset of risks—the more extreme ones. We can look at the risks, for example, from general-purpose models that can orient themselves in the world and have high context-awareness. You have to regulate the model at that point, because that is the source of the risk, irreducibly.
    For other things, yes, we need to have application-level regulation and legislation. Again, you see that in the executive order—that we're doing both things. However, I just want to surface that although there might seem to be a tension between these two approaches, they are actually not at all incompatible. In fact, in some ways, they are deeply complementary.
    I just wanted to surface that thought.

[Translation]

    Thank you very much.
    Mr. Vis, you have the floor.
    Mr. Gagné, you said that we needed an act that could track and adapt to the technological evolution of artificial intelligence. Does the department have the capacity to monitor this technological evolution? Can it really meet this challenge at the moment?
    I wouldn't think so.

[English]

     I think it would be hard.

[Translation]

    Thank you.

[English]

    Madam Quaid, you mentioned the model where we adopt a competition bureau for artificial intelligence. You're the first person to raise that suggestion during our testimony.
    Can you just expand a bit on that, maybe in one minute or less? What would it look like in practical terms?
    You've given me a challenge. I'll do my best.
    The idea is the following.
     I'm not saying that we absolutely imitate the Competition Bureau; there are some things that we could do differently. The kind of legislation that is imagined here has some similarities to the kind of legislation that is in the Competition Act. That is to say, it's responsible for a whole array of responses: true criminal, regulatory, administrative and civil. There is a specialized tribunal with that, but we don't need to talk about that right now.
    I think the point is that it has developed an expertise and has a large permanent staff divided into directorates. It has developed a digital intelligence unit.
    Those things support what I think other witnesses have been skeptical about, which is the capacity to actually deliver on this.
    The U.S. has basically not made a secret of it. They've just said, “Let's use our strong antitrust institutions while we wait to create something else”. In some ways, we would be consistent with what's being done there.
    The other thing I really want to insist on—sorry, it's an extra 10 seconds—is that having an agency headed by an independent commissioner will allow Canada to participate in the international arena. That is how you get around these enforceability problems: You have to work with your friends.
    In Canada, we might only target local things, but we need to work with allies and we need a player at the table for that.
    Thank you. It almost reinforces the apprehension expressed in my first question about the capacity within the department, so that's much appreciated.
    Mr. Harris, you painted a scary model in the beginning.
    What do we have to look forward to with AI? I don't think it's all bad.
     You're quite right. That's part of the paradox here. We're talking about intelligence, as was highlighted earlier. This is the most general-purpose thing humans have. It may be the most general-purpose tool ever.

  (1710)  

    Just out of curiosity.... The Abbotsford hospital has a cancer treatment centre. I'm wondering what role artificial intelligence will play in the treatment of cancer in the next 20 years.
     I can't imagine any scenario where we're doing it the same way as we are today, because of this technology.
    To land the plane and give you concrete timelines here, when you talk to folks in the frontier labs, the median, totally reasonable supposition they'll give is that they think it could be anywhere from two to five years to reach human-level AI across the board.
    Let's say they're completely off.... That kind of estimate, and the amount of change that's implied by that in areas like health care.... It's not just health care, though. In materials design, Google DeepMind just came out with a paper where it made essentially 800 years of progress in materials science in a couple of months. There's huge potential here.
    When I was in university—over 20 years ago now; I can't believe it—we learned about the impact of the Gutenberg press.
    Is the moment we're going through right now even larger than the Gutenberg press?
    I don't want to be flippant about this, but I think it's clearly bigger than that. Again, the one thing that distinguishes human beings from other species on this planet is the thing that we are right now trying to figure out how to bottle up on a bunch of servers.
    Yes, there's huge potential, but great power...great responsibility. I've seen Spider-Man as well. I can quote scripture.
    Voices: Oh, oh!
    Mr. Jérémie Harris: This is the nature of the technology. The challenge is that, yes, we're going to be facing the strongest economic temptation to take risks with this and to swing for the fences right when the risk is most acute. That's what we keep hearing from these labs and we see that dynamic play out internally as they compete with each other in the race to the bottom.
    Finally, I have one more quick question.
    I, like many of us here around this table, have children. What do we need to consider for children with respect to AI?
    Is there anything specific we can be doing on the AI aspect of Bill C-27 to ensure that we do whatever we possibly can to protect the innocence of kids?
     I'm not an expert on child welfare or child psychology, but what I will say is that the persuasive abilities of these systems are ratcheting up at insane rates.
    Look at OpenAI's GPT-4. They ran evaluations on it. They found out it was able to persuade human beings to solve captchas for it—those annoying tests that prove that you're not a robot. Well, it was able to persuade people to do those for it.
    Think of the applications for marketing. I think adults and kids are going to belong to the same equivalence class of entities relative to these things. I think it's bad for kids, but I think adults start to look an awful lot more kid-like in the face of highly persuasive reasoning machines like this, essentially.
    It's a reason to have an agency, and I can tell you we're already thinking about these questions in the competition context, in terms of the persuasive use of digital technologies and so on. These things are already happening, so I fully agree.
    The pitch I'm going to make is that the Competition Bureau and the commissioner have a mandate to try to strike this balance between risks associated with concentration, misleading advertising and so on, and dominance and the benefits of economic activity, dynamism and innovation. I think you are certainly going to see that same tension in certain parts of artificial intelligence.
    You need a specialized agency that has the expertise to look at those things and to make proposals. Ultimately, of course, you in government will have to tell us what the values are that are important, but I think it's very analogous. We have models we can use. It's not rocket science, believe it or not.

[Translation]

    I'd like to add that I'm very concerned about the content recommendation tools for social media, particularly for content recommended for children. I'm worried about the compulsive behaviour and dependence that these tools might engender. I'm very concerned about this as a parent.
    Thank you. That's extremely interesting.
    Ms. Lapointe, before giving you the floor for five minutes, I'd like, on behalf of the committee, to wish you a very happy birthday.

[English]

    My question is for Ms. Quaid.
    In May 2022, you appeared before the public safety and national security committee. At that time, it was conducting a study assessing Canada's security posture in relation to Russia.
    At that time, you—

  (1715)  

    Are you sure you have the right testimony? I have never appeared before that committee, but anyway I will listen to your question.
    Okay, maybe it's a different one.
    You said that cyber-threats are becoming more sophisticated and are increasingly pervasive.
    Is that attributed to you?
    Maybe that's artificial intelligence imitating me.
    I am not a cybersecurity expert at all, and I would not venture an opinion on something I don't feel I have the expertise to comment on, so it's a little surprising.
    I have been before the industry committee, but no other committee. I have been before the Senate banking committee, but not national security. It must be someone else.
    I must have a different one.
    I will ask you a different question. I would be very interested in your thoughts on what role you see for public awareness and education in building a resilient nation against AI-related threats in Canada.
    I think it's crucial. There's no question that education is fundamental, especially when we're talking about children. I think it's going to be challenging, and I don't want to understate that. Once again, I defer to the technical experts who know what is in the technology, but there is no question that education helps.
    I'm going to beat this drum the entire meeting, using the example of the Competition Bureau, which does a lot of education proactively—a lot to do with misleading advertising—and so do all the consumer protection agencies of the provinces. There's quite a good collaboration there, and that's because fraud, and particularly digital manipulation, is going through the roof.
    People need to be informed. We're playing catch-up. You know that has been true of the criminal law and the criminal justice system forever. That's not going to change, but we still have to try and, as best we can, keep up with what's going on.
    I think the worst thing we can do is say that because it's too hard, we do nothing. That's why I'm here, and my colleague agrees 100% with me. We have to do something. It's going to be imperfect. We're going to play catch-up, but it's important.
    You know what? There are a lot of people who can contribute their expertise to developing the tools. I really do believe this is not an impossible task—hard but not impossible.
    Mr. Harris, I would like to ask you the same question.
     Yes, it's a pleasure to answer.
    This is close to one of the areas I work in a lot. One subset of the work I do is training for U.S. officials, especially more senior ones, in the defence and national security universe. One of the challenges with training is.... It's been commented on many times. The space is moving so fast that the training has to somehow be relevant and fresh.
    There are a few core things the public should understand about the drivers of this technology—about capabilities. Take this idea, for example, of scaling up AI systems. If I tell you roughly how many computations went into building a system, you can have a rough sense: “Okay, that's a ChatGPT-level system.” Immediately, you have a comparable that you can establish. We can do basic things like that.
    There are other things. I think this is a solvable problem. We've had a lot of success finding scalable ways of doing this.
    Anyway, there are a lot of partners to collaborate with, in terms of the point that was just made here. I'm optimistic on that front.
    Mr. Harris, earlier we heard Ms. Quaid talk about that need to strike a balance. She believes a good, viable solution around that is the creation of an agency.
    I would like to ask you how Canada can strike a balance between harnessing the benefits of AI for security while minimizing those associated risks.
    One of the frameworks I like is, again, computing power as a kind of barometer we use to determine the level of general capability of systems. There are asterisks galore on that. We heard that, yes, you absolutely can do—in technical terms—inference-time augmentations. You can do all kinds of stuff, but the fundamental capabilities of a base model are limited by the amount of computing power you put into it.
    In that sense, look at what's being done in the executive order. They're pulling on that thread. They're starting to build institutional capacity for using that as a yardstick. I think that's the best yardstick we have. It's imperfect and I wish it were not, but it is the best yardstick we have at the moment.
    There's a lot of stuff we can do around evaluations and audits, depending on where you are on that computing-power hierarchy. The more computing power you spend to build a model, the more it costs. GPT-4 cost, by our estimates, anywhere from $40 million to $150 million to train, in computing power alone. I'm sorry, but if you can afford to train GPT-4, you can afford a little auditing.
    That's the nice thing about this yardstick. It maps onto resourcing, as well, and we can use that to calibrate the trade-off between risk and reward.
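    A sketch of what compute-as-yardstick triage could look like in practice follows. The 6-FLOPs-per-parameter-per-token estimator is a common rule of thumb, the 1e26 figure echoes the reporting threshold in the recent U.S. executive order, and the tier labels and the lower threshold are invented for illustration.

```python
def training_flop_estimate(params: float, tokens: float) -> float:
    """Rule-of-thumb training cost: roughly 6 FLOPs per parameter per token."""
    return 6.0 * params * tokens

def oversight_tier(total_flops: float) -> str:
    """Hypothetical tiers keyed to training compute, heaviest at the top."""
    if total_flops >= 1e26:
        return "report to regulator and complete a pre-deployment audit"
    if total_flops >= 1e24:
        return "disclose training details and evaluation results"
    return "application-level rules only"

if __name__ == "__main__":
    # Hypothetical frontier-scale run: 1e12 parameters on 1e13 tokens.
    flops = training_flop_estimate(1e12, 1e13)
    print(f"{flops:.1e} FLOPs -> {oversight_tier(flops)}")
```

Because training compute maps onto cost, as the witness notes, the heaviest obligations in such a scheme land on the developers most able to bear them.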

  (1720)  

    From your perspective, how important is international collaboration in addressing the global security implications of AI?
    Hugely.
    I think, ultimately, partly because of what's going on in the open source world right now, the problems that Canada generates, the rest of the world gets to eat. The problems the rest of the world generates, Canada gets to eat. We live in one Internet ecosystem. If we drop the ball, we're letting the world down. If the world drops the ball, they're letting us down. We can't have a hypocritical system where we turn around to other, adversary countries and say, “Hey, you ought to do this”, if we're not doing it ourselves.
    I think there's a certain universalism to the situation we find ourselves in.
    Ms. Quaid, would you like to add to that?
    I think this is the central challenge. You need to figure out....
    We're having this same conversation in competition, namely, how do we adjust to the digital economy? That's a term I don't like. It's the new economy, which has a lot of digital artifacts. There's a lot of experimentation happening internationally. People are trying different things. Part of that is because we have different legal structures, institutional structures and cultures.
    I think the balance that has to be struck is this: There are probably a small number of things—and you are the elected representatives who have to make that assessment for the country—that really matter to us, as Canadians, and that might be unique to us. Monsieur Lemire evoked our linguistic identity and the cultural specificity of Quebec, but there are other things that might be very important to us. Think of our indigenous communities. If those are very important, we have to bake them into our system. Then, internationally, what we try to do is make sure we're aligned on most of the big things. We can have a couple of things that are very important to us, and maybe we have special rules about these, but we need to have general alignment, because otherwise it doesn't work.
    The challenge is making sure we take those general-agreement principles and translate them into operational legal rules. I guess I'm a bit of a nuts-and-bolts lawyer for that kind of thing. We have to be cognizant of the structural and legal limitations of our system.
    We exist in a federation. I want to make one point about regulating everything. There is a division of powers, and a lot of regulation has to come from the provinces. Let's be very clear: This bill is centred on interprovincial and international trade, and on the criminal law power, which doesn't cover everything. Co-operative federalism is going to be essential.
    International co-operation is important, but we also have to agree in the federation.
     Where have we seen that done well?
    I guess you could ask yourself whether the Europeans have done it. Maybe Céline wants to say something about how well they have navigated the necessity of integrating these considerations.
    I think the European system already lends itself to this necessity of dialogue. The trilogue involves the three institutions that represent the three basic political powers in the European Union—and I'm mangling this—and they have to get together and agree.
    I think that we might have to imagine something like that. I know no one likes that idea, but I think a conversation has to occur among the provinces and the federal government. It also probably has to involve local communities. It's all hands on deck.
    I take the point that we can't regulate everything with one general framework, but we do need a general framework to set things up.
    There are some out-of-bounds things. Let's say, "Kids—out of bounds." Just period, right? We can do blanket prohibitions. We do it, right? It can be done, but you have to target those things, and then for other things we need to make sure that the patchwork fits together.
    I'll share one concern I have, and then I'll stop. My concern is this: If we are not co-operative in the federation, what is going to happen? There will be a wave of litigation founded on the division of powers, like we had 25 years ago in environmental law, where large economic interests who have the money to do it will say, “This isn't federal jurisdiction,” and then the provinces will want to say, “This isn't provincial jurisdiction,” and it will take years to sort out.
    If there's agreement, you can make sure there are no holes and that Canadians are protected.

[Translation]

    Thank you very much.
    Go ahead, Mr. Lemire.
    Thank you, Mr. Chair.
    Ms. Castets-Renard, I heard you yesterday on Radio-Canada as I was headed to Ottawa, and the topic was really interesting. You were talking about the things that could go wrong with artificial intelligence as a result of its use by law enforcement authorities, particularly in connection with facial recognition. What I understood from the case that occurred in Ireland was that the use of artificial intelligence could, for instance, place the presumption of innocence at risk.
    Are current Canadian laws sufficiently advanced to protect against potential social problems? Bill C‑27 may not be the solution. How can we plan for or protect ourselves from these problems, which are probably imminent?
    Not only that, but the use of artificial intelligence in political face-saving endeavours might well lead to other restrictions. That's what happened, I understand. Is that right?

  (1725)  

    In Canada and the provinces, the use of facial recognition generally, and in particular by law enforcement agencies, is not circumscribed by any legal framework. Of course, without a legal framework, it becomes a matter of trial and error. As was demonstrated in the Clearview AI case, we know from a reliable source that facial recognition was used by several law enforcement agencies in Canada, including the Royal Canadian Mounted Police.
    When there is no legal framework, things become problematic. Practices develop without any restrictions. That's why people might, on the one hand, fear a legal framework, because its existence means the technology has been accepted and recognized. On the other hand, it would be naive to imagine that the technology will not be used or can simply be stopped, and it may well have many advantages for police investigations.
    It's always a matter of striking the right balance between obtaining the benefits of AI and avoiding the risks. More specifically, a law on the use of facial recognition should ideally incorporate the principles of necessity and proportionality. For example, limits could be placed on when and where the technology can be used, for specific purposes or certain types of major investigations. The use of the technology would have to be authorized by a judicial or administrative authority. Legal frameworks are possible. There are examples elsewhere and in other fields. It is certainly among the things that need to be dealt with.
     I would add that Bill C‑27 is not directly related to this subject, because what we are dealing with here is regulating international and interprovincial trade. It has nothing to do with the use of AI in the public sector. We can, in due course, regulate companies that sell these facial recognition AI products and systems to the police, but not their use by the police. It's also important to ask about the scope of the regulation that is to be adopted for AI, which will no doubt extend beyond Bill C‑27.
    Ms. Ifill, I'd like to hear your point of view on potential uses of artificial intelligence by the police, along with the problems that might follow. As you made some rather astute preliminary comments on this topic, I'd like to hear what you have to say.

[English]

     One of the problems is that it does tend to misidentify not only racialized people but also non-binary people. There are cases such as self-driving cars having trouble recognizing women. When these technologies start to affect a significant proportion of the population without some sort of accountability measure, we're looking at a very damaging fragmentation of society on an economic level, on a social level and in ways that would fracture our politics. I think that can be minimized, to be honest.
    One of the things I would like to see with AIDA is that it be its own bill. I personally think it should be spun off so that we can look at these things more clearly, because, as it stands right now, there is nothing to.... For example, if you go for a loan and AI predicts that your loan should be rejected because of a variety of factors, or maybe factors that aren't attributed to you because of race, gender, class, geographical location, religion, language, all the things.... If we're going to build these systems, we have to protect people from the negative impacts of those systems, especially when they happen at scale and especially when they happen with government agencies.
    I think one of the problems with this bill is that a lot of government agencies, especially in national security and law enforcement, will be exempt. Those are some of the areas—you think of immigration too—where you will see large uses of AI.
    I would say about education that a lot of the education over time should have come from journalists and journalism. We should have had a more robust journalistic tech field that could inform all of us and look into these issues with AI and tech writ large.

  (1730)  

[Translation]

    Thank you very much.
    Thank you.

[English]

    Mr. Masse, the floor is yours.
    Thank you, Mr. Chair.
    Ms. Quaid, if this bill doesn't get passed, we'll go to the new year. Then it has to get through the House of Commons. Then it has to go to the Senate. Then if the Senate has any amendments, it has to come back to the House of Commons with them.
    Give us some of your concerns about the delay in the process. Is there anything else we can do in between to deal with this? Even if we have political consensus in the chamber to move this as quickly as possible, our schedule is such that it won't be until the new year—and probably clause-by-clause and so forth will take some time. Then we have to send it to the other place. The other place sometimes can take some time. Then, again, an amendment would have to come back to us in our chamber.
    You are the experts of your own procedures. I'm not going to lecture you on how the legislative process should happen.
    I will allow myself this small editorial comment. It's funny, on the one hand, that the AI bill has been moving slowly and has sat for a while. There was an original bill that was proposed, but died on the Order Paper, and there was a new one. I contrast that with some things, like in competition, where we're moving at lightning speed. It seems like it's also a question of establishing priorities. That said, I'm not recommending the budget bill process. Please do not quote me as saying that. I think it's a terrible thing.
    On how you could move forward, here are some suggestions without my having any knowledge of what your procedures are, so I might be saying things that are wrong.
     Yes, that's fair. That's fine. I appreciate that.
     I am a planner, like many busy working mothers. I'm thinking that maybe you can already start checking out.... If you like my idea of creating an agency, for example—just to pick a random one—maybe you would start already looking at some possible structures and what's being done elsewhere. Maybe a study can be commissioned and ISED can already start looking at that. What kinds of skill profiles would you need to populate that? Maybe you can already start looking at some of the challenges that might be involved in the criminal and regulatory enforcement part.
    It's been said before and beaten over the heads of many, but I think there needs to be more work done on getting an idea of what's going to be in those regulations, and thinking about how we are going to create a system where we can have this iterative process of updating the regulations. There are models out there. There are smart people in Canada writing about agile regulation. I think we can already start lining up what the feasibility of certain solutions is before the bill is enacted. Yes, it might not happen, and it might be effort wasted, but I think there are lots of researchers who'd be happy to look at these things and provide you with options. That would be my suggestion.
    Thank you very much.
    Thank you, Mr. Chair.
    Thank you, Mr. Masse.
    Speaking of parliamentary procedures, the bells are ringing. We'll need to have unanimous consent, if you will, just to proceed until 5:50, which would be the time our committee would adjourn for this part of the meeting.
    Do I have unanimous consent to continue?
    Some hon. members: Agreed.
    The Chair: That's amazing.
    Mr. Fast, the floor is yours.
    Thank you very much.
    Mr. Chair, I'd like to get back to the bill before us.
    Without prejudging the outcome of Mr. Champagne's proposed amendments, we will assume for the time being that those amendments would be passed and incorporated into AIDA. I'd like to know from the three of you—Mr. Harris, Ms. Quaid and Mr. Gagné—if you support AIDA's going ahead as it is right now. It's my understanding that in the future it is almost certainly going to be amended, re-evaluated and recrafted, and it may come back in a different form.
    We have to make a decision on the bill before us right now. You're here giving us advice.
     Is it your advice for us to go ahead with this, or are there substantive amendments that you would propose?

  (1735)  

    I can speak for myself, to begin with.
    I think the bill right now is significantly better than nothing. One of the key factors for me in evaluating this is just the timeline. Do we want to be confronted in the year 2024, 2025 or 2026 with nothing on the books? My strong impulse is to say no, we must have something.
    Given the timeline, as has been explained to me by folks who are working on this bill, it seems unlikely otherwise that we would have something on the books by then. That's my understanding—it may be wrong.
    If that is the case, then the bill in its current form is better than nothing. That's literally how I'm approaching this. There are things that are actually very good. I think the general-purpose AI system provisions and the cessation-of-operations components of the bill are really good.
    Overall for me, given the current landscape and the timelines, I would be in favour of the bill's going ahead. However, I see significant issues with it, which I highlighted, including the computational power thresholds and all of that stuff, in my testimony.
    I can speak for Professor Castets-Renard as well on this point. We are of the view that it is better to proceed with this bill. It is improvable. We have identified the things that we think should be improved. I do think that making sure there are sufficient resources and an institutional framework to support the actual implementation of the bill is important. We have things that we suggest could improve and strengthen it—although those are probably at the level of regulations, or could be achieved through regulations. I think we need to move forward.
    I would urge this committee to continue to solicit opinions from, or to pay attention to, people who are analyzing the potential problems. That's not because those problems should defeat the bill, but we should go in with our eyes open to what the potential challenges are. I definitely come down in favour of the need to move forward. It's already too late.
    I will provide a list of recommendations from my perspective on this. I think there are lots of aspects that need improvement. Some are too broad and others are too harsh in terms of the consequences around certain actions. Personally, my perspective would be to keep working at it.
    Mr. Ed Fast: And not go ahead with the current form of the bill?
    Mr. Jean-François Gagné: No.
     That's helpful.
    Were any of the three of you or Ms. Ifill consulted in the lead-up to this bill's being drafted? All four of you are coming in after the fact and providing advice.
    Go ahead, Ms. Ifill.
    This bill didn't have public consultation. That seems like a big misstep; the things we're talking about now would have been mitigated with proper consultation. To me, that's a big red flag about this bill.
    Second, it really has no protection for the public. I know that there's this framework, but even the framework is insufficient.
    I would have to agree with the last person who spoke and asked if you really want to push through a bill that has no protection for the public.
    Thank you.
    Mr. Gagné, in your opening remarks, you suggested that Canada is falling behind the rest of the world—certainly the developed world—in addressing the challenges of AI. You also went on to talk about Canada's losing investment in this space.
    I'd invite you to expand on both of those. They may be related. Maybe we're losing investment because Canada hasn't had a regulatory framework in place, or because we have taxation challenges or other challenges that scare away investment.
    What are your comments, sir?

  (1740)  

    There are a multitude of aspects that come into play when you evaluate this. If you're going to, for instance, train one of these very large models, one that has access to data, there are currently certain issues around copyright law that prevent companies from using text or images, whereas in the U.S. or Japan they can freely use them.
    This has been pointed out in the past. It prevents anyone who's training these types of models from training or operating them from Canada. What you're seeing is that Canadian companies are going to the U.S. and driving training runs worth hundreds of millions of dollars into the U.S.—not into Canada—because staying here would prevent them from being able to innovate. I'm giving an example. My concern is that with a blanket approach, there will be pockets of situations like that.
    What I'm pointing out here is a small thing. It's just text for this particular situation, and then, boom, Canada is not playing in the large language model game. There will not be any large machine-learning data centres built in Canada, not a single one. These are multibillion-dollar investments.
    Go ahead, Mr. Harris.
    To share a perspective on it, why is it that OpenAI, Google DeepMind and Anthropic are all based in Silicon Valley and there is no Canadian equivalent?
    I'm a start-up founder veteran. I built all my start-ups in Silicon Valley; I didn't build them in Canada. I was born and raised here, and I've lived here basically the whole time. I moved to Mountain View to build my start-ups early on, and then I moved back, but I still based them there.
    There are some regulatory factors. It's nice to have a Delaware C corporation, but that's not the fundamental reason. The fundamental reason these companies are based in the Silicon Valley area is just that the best investors in the world are based there. That's it. That is literally the single most important factor by far.
    When I go to Y Combinator, I hear the best advice on start-up building on planet Earth. There is no equivalent to Y Combinator in Canada. This is the world's best start-up accelerator, full stop.
    The best investors are in Silicon Valley, the Vinod Khoslas and the Sam Altmans. That is why this is happening there.
    There are, at the margins, regulatory things going on here, but as a start-up founder who has done this multiple times and has been faced with this exact decision many times, whether it's with AI or other things, there's a kind of talent delta there in terms of the best VCs, the best angel investors. That's the ecosystem.
    Tobi Lutke from Shopify started his company here in Ottawa, but there's a reason that their cap table is filled with Silicon Valley money. It's because that's where the best investors are.
    At the end of the day, it's the same story over and over. I think we're just seeing it replicated in AI. I don't think there's anything too different there.
     That's interesting.
    Thank you.
    Thank you.
    Mr. Van Bynen, the floor is yours for five minutes.
    Thank you, Mr. Chair.
    The proposed artificial intelligence act does not provide for a pre-market conformity assessment of artificial intelligence systems, but proposed section 15 allows the minister to order an audit. The government’s proposed amendments to the AIDA include a series of tasks to be completed before the general-purpose or high-impact artificial intelligence system can be made commercially available, including an assessment of adverse effects and a test of the effectiveness of measures to mitigate the risk of harm and biased results.
    What do you think of those new obligations that the government wants to impose on people who do develop the systems?
    I'll start with Ms. Quaid and then I'll go to Mr. Harris and Mr. Gagné.
    We talked about this a little bit in our written brief, but not in a lot of detail. Now I can draw on a different example, which is securities regulation. I think these amendments are in the right direction. I made comments in the brief that they probably don't go far enough. Part of the reason I think they don't is that you really need to build a base of understanding of what's going on in order to be an effective regulator. We see this in other sectors too. They need to know what's going on and get an understanding.
    International finance and capital markets are things that change constantly—all the time—and we're always playing catch-up. They manage. Even with 13 regulators in this country, they manage. I do think that imposing disclosure obligations and having an audit function are important, but I would make them mandatory and standard, which is to say not ad hoc and not dependent on having some inkling of what's going on. In fact, we learn things because we find them out. That's what continuous disclosure in securities regulation is about. It's making sure that we're constantly on top of things.
    So I think it's in the right direction, but it could be strengthened.

  (1745)  

    I'm just going to echo what she said, really. I think it's what you see in, again, the White House executive order context. There is a reason that everything is being designed around disclosure—tell us what tests you've run and what the results were—and setting up infrastructure to reveal that and to train regulators to understand the results of those audits.
    Go ahead, Mr. Gagné.
    I think that's where we have to be prudent. Canada is a small market. If you impose a cost that is too prohibitive, or rules that are very hard to comply with clearly, companies will just elect not to publish their models or give you access to them. Anthropic is not in Canada. Tons of companies decide that they're just not going to give you the tools, and then suddenly your entire ecosystem, all your industries, can't benefit from the uplift in productivity.
    Should the artificial intelligence and data act require a compliance audit before the artificial intelligence systems are in place in the market?
    Perhaps you could start, Mr. Harris.
    Yes. I strongly agree. I think that's essential. One lens that I'd bring to this is national security. You know, you build these systems and you don't have to deploy them for them to be tempting targets for theft and exfiltration. I can say from first-hand experience in certain contexts that the labs that are leading the way here are not resourced to withstand a sustained exfiltration campaign from nation-state attackers. Just through that lens, as we build more powerful systems, there has to be some level of responsibility prior to deployment. Then there is also the issue of loss of control, which can happen prior to deployment as well.
    So I would say yes.
    Go ahead, Ms. Quaid.
    I agree. To go back to the fact that we do have examples elsewhere, we have high-risk industries that exist already. The nuclear industry is one of them, but there's also finance.
    I disagree slightly with Mr. Gagné, because to the extent that the Americans are doing something, they have the force to insist. That is the importance of being a player at the international table, with an advocate in the form of a commissioner who is truly independent and able to make decisions. Then you can make sure you're onside with everything. I do believe that at the end of the day international co-operation is essential, but I agree that looking for dangers ahead of time, before products are launched on the market, should be the norm. We already insist on that for other things, such as product safety, so to me it's not new. We have to adapt it to AI.
    The other thing I would add, because sometimes this comes up, is the objection that we can't force companies to share this sensitive information because there's a competitive dynamic. But government departments handle confidential information all the time. That's what they do. The Commissioner of Competition does this all the time. Yes, of course there is a risk that it gets leaked, but I don't think that's any higher than other risks. I think sometimes it's overstated. Government regulators can handle sensitive information and can use it in the public interest. That's why we trust them to do it, I would say.
     Thank you.
    Mr. Gagné.
    If I may....
    Go ahead.

[Translation]

    I'd like to add that providing for a compliance and verification system before products are put on the market is not only essential but also economically necessary for Canadian firms, because that's what other countries will have. We therefore need to start preparing Canadian companies for this level of competition and for competing jurisdictions.
    Thanks very much, Mr. Van Bynen.
    I now turn the floor over to Mr. Lemire for this last mini-round of questions.
    I understand the time constraint, Mr. Chair.
    Mr. Gagné, in your opening remarks, you made a fundamental point that deserves to be heard and that goes beyond the scope of Bill C‑27: there's an urgent need to provide new Quebec and Canadian businesses with computing power. How important is it to provide computing access? How essential is it globally?
    It's definitely essential. You need only look at the numbers of the companies that build and produce these semiconductors. They're growing like weeds. It's as simple as that. It's reached the point where many new businesses—I don't want to name any single one over another—are making advances in artificial intelligence. They've had to raise so much capital just to stay in business that I don't see how the economics of it will work out.
    Every form of assistance or support in accessing computing power counts. We already know, for example, that we want to use very large models to specialize them in executing tasks that will help us improve productivity. However, that requires computing power. Anything that can be done to facilitate it will accelerate its adoption by Canadian companies, improve their productivity and potentially help them make more realistic attempts to create new organizations and businesses.

  (1750)  

    I think the right people are listening to you now.
    Thank you.
    Thank you very much, Mr. Lemire.
    Thanks to all the witnesses for enlightening us with their perspectives on this important bill this afternoon.
    I spoke to Mr. Harris before the meeting. For those who submitted briefs before the amendments were made public, please don't hesitate to send the committee a revised document reflecting any necessary adjustments made in response to those amendments.
    Once again, thanks to all the witnesses for appearing in person and by videoconference.

[English]

    Thank you for joining us.
    Thank you to the interpreters, support staff and analysts.
    We'll be back shortly after the vote.
    The meeting is suspended.

  (1750)  


  (1825)  

[Translation]

     Ladies and gentlemen, colleagues, we will continue this session and resume meeting No. 101 of the House of Commons Standing Committee on Industry and Technology.
    I would like to take this opportunity to apologize for the accumulated delay resulting from the votes held after question period and those just held, but here we are.
    Pursuant to the motion adopted on November 7, 2023, the committee is resuming its study on the recent investigation and reports on Sustainable Development Technology Canada.
    I would like to welcome today's witnesses.
    We have George E. Lafond, Strategic Development Advisor. Stephen Kukucha, President and Chief Executive Officer of CERO Technologies, is joining us by videoconference from Vancouver, and we also have, in person, Guy Ouimet, Engineer at Sustainable Development Technology Canada.
    Each witness will now have five minutes to present their remarks.
    Mr. Lafond, you have the floor for five minutes.

[English]

     Thank you very much, Mr. Chair and honourable committee members, for having me here today.
    I want to begin by acknowledging that we are on the unceded and unsurrendered territory of the Anishinabe Algonquin people.
    My name is George Lafond. I'm a citizen of the Saskatchewan Muskeg Lake Cree Nation in Treaty No. 6 territory. I currently serve as an adviser to businesses, educational institutions and social and cultural organizations and I am known for successfully leading strategic initiatives requiring first nation engagement.
    Previously I served two terms as the treaty commissioner of Saskatchewan, the first treaty Indian to serve in that role. I was appointed by the Harper government in 2012 and then reappointed in 2014. I served as a tribal vice-chief and then later as a tribal chief of the Saskatoon Tribal Council, a first among equals, with seven first nation chiefs and their diverse first nation communities.
    My entire public service has been devoted to supporting reconciliation, wellness, economic development and innovation for my communities. Improving access and the quality of education for indigenous youth is what underpins all of my efforts, and this work is informed by my educational background and experiences as a public school teacher some 42 years ago.
    In the education sector, I served as an adviser to three university presidents and also served as a university board governor. I advised them on how to ensure that indigenous students could be set up for success throughout not only their time in post-secondary education but also their future careers. It is in this role that I worked with the Saskatchewan Indian Institute of Technologies, commonly referred to now as SIIT.
    It was these public service roles that led me, in 2012, to be appointed by the Harper government as an expert to examine first nations education on reserve and to bring advice forward to address a new relationship between the federal government and first nation communities with respect to education. It was there that I witnessed the fact that first nations people were doing well in primary industries but were almost non-existent in the clean tech industry.
    Since I was appointed to the board of SDTC in 2015, there has been a noticeable change in how this organization has modernized to better meet the needs of the markets and the Canadian clean-tech industry. It was paramount to ensure that indigenous communities were also factored into this equation, to determine how indigenous peoples could be set up with the proper skills and training needed to participate in this critical sector, and also that this sector could respond to the unique needs of our communities.
    Strides have been made over the last decade, but there's no denying that the clean-tech sector and the innovation agenda present an even steeper hill to climb, given the lack of access to training and education for indigenous youth. Indigenous people are at risk of being excluded from innovation in Canada. We're under-represented in STEM, with Statistics Canada reporting that indigenous persons with post-secondary training account for less than 2.5% of total employment in this industry.
    During my time on the SDTC board, I had conversations about this very issue with SDTC management and I advised organizations and post-secondary institutions of their obligations to ensure that indigenous youth did not miss out on the future of innovation.
    In 2020, SDTC approved funding for a maker's lodge for SIIT, Canada's first innovative accelerator dedicated to educating and empowering grassroots indigenous entrepreneurs. This pilot project was done through the SDTC ecosystem funding stream, which encourages innovation and collaboration among diverse persons in the private sector, academia and not-for-profit organizations. This is a part of SDTC's mandate and a part of their contribution agreements.
    I want to be clear. Although I spoke to SDTC about these important issues and about finding solutions to ensure indigenous participation and I introduced them to SIIT leadership, I was in no way part of the decision-making process with respect to funding the SIIT project. When SIIT entered conversations with SDTC, I proactively disclosed my conflict and recused myself from any and all discussions moving forward.
    Following the RCGT report, I was made aware that SIIT mistakenly included my services as a part of their expenses under the guidelines of the SDTC project. This was an error. I immediately contacted SIIT, which promptly resubmitted their expense claims. I never received a payment from SIIT related to this project. My contract with SIIT is as adviser to the president and is unrelated to this project.

  (1830)  

    As I've said, I had spent years working in indigenous education and on improving outcomes for indigenous communities. Although I'm an adviser to the SIIT, this program has provided no personal benefit to me. However, it does have potential benefit for thousands of indigenous youth, giving them an opportunity to combine traditional knowledge with a new idea and to contribute to the innovation landscape of Canada.
    As the committee does its study, I do not want this important work to be lost. It is important that, through the creation of innovation programs like this innovation accelerator, we help mentor indigenous leaders and entrepreneurs, and ensure that not just middle-class communities but all Canadians can benefit from and make a meaningful contribution to a modern Canadian economy.
    Thank you very much, Mr. Chair.

  (1835)  

    Thank you very much.
    I'll now yield the floor to Mr. Kukucha for five minutes.
     Thank you, Mr. Chair and honourable members.
    My name is Stephen Kukucha, and I have served on the SDTC board since February 2021. I live in Vancouver. I'm a retired lawyer, and I'm certified by the Institute of Corporate Directors.
    SDTC's work is critical to the development and success of Canada's clean-tech ecosystem. I believe that my unique perspective and positions within the clean-tech sector bring value to my role on the board. My more than 20 years of experience in clean tech give me an understanding of the challenges that companies face in acquiring capital. That struggle has been exacerbated by the market downturn in late 2021, the dramatic increase in U.S. government investment in this space, and now the pause in SDTC's work.
    Whatever happens because of these hearings or other investigations, it's critical to state the important and unique role that SDTC plays and the significance of the organization's mandate for Canada. I ask this committee to consider its importance to all the fledgling companies it supports.
    As well as my work in clean tech, I should also disclose that I have been involved in politics in the past, both federally and in British Columbia, and I'm very proud of that involvement. I believe that engagement in our country's democratic process, no matter what party one supports, is important to civil society. For example, I have a profound respect for all of your decisions to run for office and to seek careers in public service. It's one of the more important things a Canadian can do.
    I would also like to disclose that I was the recipient of the whistle-blower call to the board. I'd like to put that on the record. Unknown to me, our call was surreptitiously recorded. However, I'm comfortable tabling a transcript to show the level of professionalism that this individual was afforded in good faith. On multiple occasions, the whistle-blower was asked to share their dossier and the facts that they were basing their allegations on so that the board could respond and address them in a professional manner. Unfortunately, they did not.
    After my one-hour conversation with this individual, I quickly realized that the board needed to be informed, that legal counsel needed to be engaged and that a proper process needed to be followed. An immediate investigation was commenced without informing the individuals who were the subjects of the allegations. I acted in good faith and followed proper governance, and in my opinion, the board undertook its fiduciary duty.
    With regard to my investments in clean-tech companies, any and all conflicts I had were disclosed prior to my appointment. In fact, I was asked to resign from the board of a company that had previously received SDTC funds, and I promptly did so. Any conflicts after joining, either real or perceived, were also disclosed. Finally, I have not had access to any files related to those conflicts, and I have recused myself from any decision-making.
    With regard to payments during COVID to SDTC companies, I'd like to share my perspective as well. At my first board meeting, two weeks after being appointed, a recommendation came forward to give management discretion, within an allotted pool of capital, to make assistance payments if required. No individual companies were listed in the board documents. I'm willing to table a copy of that document to show you what the board received if required. There was also legal advice given to directors at that meeting: that if they had previously declared conflicts, they did not have to redeclare. I had declared mine two weeks prior.
    Finally, I have not received a dollar from any company that has received SDTC funds, and none of the companies I'm invested in have exited or provided any return to me. I've not been compensated in any way by these companies or other organizations I'm affiliated with. I've received no payment, no dividend and no remuneration at all. In fact, my partners and I have contributed significant personal time and financial resources to keep these companies and other non-clean-tech companies contributing to the Canadian economy over these last few very challenging years.
    In closing, in my experience, the team at SDTC has been professional and has delivered results. While no individual or organization is perfect and we should always strive to improve, I'm very proud of the SDTC team and the work I've done on this board.
    I'm happy to take your questions.

[Translation]

    Thank you very much, Mr. Kukucha.
    Mr. Ouimet, the floor is yours.
    Thank you, Mr. Chair and thanks to the committee members for welcoming me today.
    My name is Guy Ouimet. I am originally from Montreal and still live there. I am an industrial engineer and a graduate of the École Polytechnique of the Université de Montréal. I hold an MBA from McGill University and am certified by the Institute of Corporate Directors.
    After starting my career with multinationals, I quickly moved into industrial investment, working for most of my career in venture capital, private placements, project financing, and mergers and acquisitions. In that capacity, I served as a senior executive at the Société générale de financement du Québec for 10 years before launching my private practice as a boutique investment bank.
    That practice has developed over the years on the strength of my multi-sector and technological expertise, particularly in energy, metals and minerals, chemicals and petrochemicals, the automotive industry and other manufacturing sectors, and in the evolution of these sectors towards the decarbonization of the economy.
    For 25 years, my clients have included institutional and government funds as well as numerous private companies. I have taken part in structuring many investment projects and transactions. Among other things, I acted as an external adviser to SDTC between 2006 and 2014. My combined expertise in multi-sector venture capital and in setting up large-scale projects was called upon at that time, particularly for SDTC's NextGen Biofuels fund, to which the federal government then in power allocated $500 million in 2007.
    Since 2020, I have worked exclusively as a corporate director, serving on six boards of directors and various committees.
    After four years away from SDTC, and with new management in place, I joined the SDTC board of directors on November 8, 2018, following my application to a recruitment process conducted by the Governor in Council that lasted over a year. I have no political affiliation, and I requested no references except those required by the validation procedures during the recruitment process. I declared all of my background and skills, including my previous role as an adviser to SDTC. At the end of the Governor in Council process, I was recruited on the basis of my expertise to contribute to the SDTC board of directors.
    In addition to being a member of the board of directors, I am a member of the project review committee, or PRC, and the governance and nominations committee. Committee appointments were subject to the approval of the chair of the board, who at the time of my appointment was Mr. Jim Balsillie.
    During the Governor in Council recruitment process, I declared a conflict of interest with a company that I had advised and that had been approved for SDTC funding prior to my appointment to the board. Once appointed, I discussed this conflict of interest with Mr. Gary Lunn, then chair of the SDTC board's governance committee. He advised me, and I subsequently followed his recommendations as required, all within the framework of the governance already in place.
    Since my appointment to the board, I have periodically declared all real, apparent or potential conflicts; I have not had access to the files in question, and I have recused myself from any related decisions.
    Regarding SDTC's emergency COVID‑19 payments to businesses, as already indicated, the SDTC board relied on the Osler legal opinion, which was based on the prior declaration of conflicts of interest, the urgency of the situation and the universal nature of the measure, under which no company received individual treatment. Like the rest of the board's directors, I acted in good faith and in accordance with that opinion.
    SDTC's mission appeals to me because it is absolutely relevant to the conversion of the Canadian economy towards decarbonization, as demonstrated by its track record over more than 20 years. The relevance and effectiveness of SDTC have been recognized on several occasions through periodic performance audits. Furthermore, clean-tech entrepreneurs praise its contribution, and venture capitalists consider an SDTC contribution to be prior validation for their own investments. These facts are well known in the industry across Canada.
    When joining the board, I noted the quality of SDTC's governance, as well as the stature and reputation of my fellow directors. For a finance professional and corporate director, that is an essential prerequisite, and one I have always applied before joining each of the 21 boards of directors on which I have served in my career.
    I am available to answer your questions.

  (1840)  

    Thank you very much, Mr. Ouimet.
    Mr. Perkins, you have the floor for six minutes.

  (1845)  

[English]

    Thank you, witnesses, and Mr. Lafond, for clarifying that opening statement and the relationships you have.
    My questions initially will be for Mr. Kukucha.
    Mr. Kukucha, you said you joined in February 2021. What is your relationship with the company Semios?
    As part of my work with PacBridge Partners, we did due diligence on Semios to see if we wanted to invest, and we did not invest. That's the extent of my relationship.
    PacBridge is a venture capital firm.
    It's a private equity and venture capital firm.
    What's your relationship with Terramera?
    In 2016, five years before I joined, I made a very small investment because I knew the CEO. That's the extent of it.
    What is Miraterra Technologies?
    I believe Miraterra and Terramera are the same company, or are they just split? My recollection is....
    I believe Terramera received about half a million dollars in special COVID relief before you joined the board. After you joined the board it received another $349,000 in COVID relief.
    Wait one second, Mr. Kukucha, before you answer. Can you get the boom of your microphone up a little bit?
    I think that should be better. You can go ahead. You can answer the question by Mr. Perkins.
    I don't know, Mr. Perkins, if you want to repeat it.
    I'll restate it.
    Terramera, which you have an interest in, received—I have to get it right—$141,000 in special COVID relief payments before you were on the board and then another $349,000 after you joined the board.
    That seems to make sense.
    Like I said, sir, I invested $15,000 in 2016 as a favour to the CEO. I'm a passive investor. I don't track the company. When those COVID payments were made, I had declared Terramera as a conflict, because I had a small investment in it. As I have previously stated, we had legal advice. I had declared the conflicts two weeks prior, when I joined.
    You don't have any interest whatsoever in Semios?
    I have no interest in Semios whatsoever, sir. We did due diligence to consider investing, but we chose not to invest. I believe that was also prior to my joining the board.
    Terramera got half a million dollars in COVID relief. Some was before and some was after you joined the board.
    Terramera received, I believe after you joined the board, announced funding from SDTC of almost $8 million on March 30, 2021. It had received about $4.5 million from SDTC before you were on the board. Is that correct?
    I honestly don't know. I'll have to take you at your word, sir.
    Like I said, I made that investment five years before I even joined the board. It's such a small investment on my part. I don't track it. I don't follow it regularly. I hope the company succeeds, as I'm sure every Canadian would, considering SDTC's—
    What's your interest in Intelligent City Inc.?
    I have no personal interest in it whatsoever.
    I'm a senior adviser to a small, mid-level investment bank in Vancouver called Fort Capital. Fort Capital, prior to my joining that bank and the board, raised money for Intelligent City. I felt the need to declare it, because the firm I was attached to had previously raised capital for it.
    While you were on the board, you and some of the other board members were on the special COVID committee. When you were voting for the COVID relief money, were you aware that a company in which your fellow board member Andrée-Lise Méthot holds a private equity and venture capital interest received $1.4 million in COVID relief?

  (1850)  

     I was not presented.... Like I stated, sir, I joined the board two weeks prior to the second COVID vote, I believe. I had declared all of my conflicts, and we had received legal—
    I understand that.
    You weren't aware that other board members were receiving money. In her case, one of her companies was getting 10%, not the 5% that Annette Verschuren said.
    I can't speak to those facts, sir. I have no knowledge.
    Annette Verschuren's companies were receiving almost $300,000 in COVID relief payments. She was the chair of the board when she moved the motion.
    Again, sir, I can't speak to those facts. I'll have to take your word for it.
    You were totally unaware of anyone else's conflicts of interest when they were voting for money for their own companies.
    On the second board vote, we were advised by lawyers that we did not have to redeclare. Considering I was not at the first board vote, I would not have had knowledge of that.
    Do you not think, under the ICD rules on real and perceived conflicts of interest, that you were in a conflict of interest when you joined the board, because you had an investment in companies that already had a relationship with SDTC?
    When I joined the board, sir, I was asked to resign from the board of one company that had received funding, and I promptly did that.
    On February 5, when I joined the board, I declared five conflicts, three of which I had investments in. Yes, sir.

[Translation]

    Thank you very much, Mr. Perkins. That's all the time you had.
    I now turn the floor over to Mr. Turnbull for six minutes.

[English]

    Thanks to all of the witnesses for being here today.
    Mr. Ouimet, I'm going to ask you a quick question.
    We've heard from other witnesses who were former board members at SDTC that recusals on the board for real or perceived conflicts of interest were a fairly regular practice, but what I want to know is whether they were documented, because I think there's some discrepancy as to whether they were adequately documented.
     From your perspective, were recusals regularly documented?

[Translation]

    The conflict of interest management procedures are rigorously followed. It's important to note that the act introduced in 2001 to constitute SDTC requires that directors come from the green technology industry and that they be connected. The legislator has thus put in place a recipe for creating conflicts of interest. Accordingly, we have had thoroughly rigorous practices in place to manage them from the start.
    Every time a file is submitted to the governance committee, we provide those who receive it with a list of the businesses, stakeholders, shareholders and officials involved, and we ask them whether they have any conflicts of interest. As a result, a person can immediately see whether he or she has a real, perceived or potential conflict and, if so, immediately recuses himself or herself. From that point, the individual receives no documentation and does not participate in decision-making.
    The list of individuals in conflict of interest is noted at the start of every meeting of a decision-making, investment or advisory committee. From what I understand of the subsequent reports, in certain cases there is no indication that a particular person left the meeting at a particular moment or subsequently returned. However, since the practice was known to everyone, that person would declare a conflict of interest at the start of the meeting, recuse himself or herself during consideration of the file in question and subsequently return to the meeting. I have been attending board meetings in my capacity as director since 2018, and I attended those meetings in another capacity starting in 2006; I regularly witnessed recusals by directors across several generations of the board.

[English]

    Thank you for that. It sounds like recusals were regular practice, but perhaps, at times, were not documented as well as they should have been. I think we've heard that.
     Would you agree that perhaps they weren't documented as regularly as they should have been?

[Translation]

    The administrative recommendations noted in the recent reports convey that perspective. That is probably accurate.

[English]

    Thank you.
    Mr. Lafond, in your case, you mentioned a specific instance involving the MakerLodge, I think, and the SIIT leadership, and the decision that you recused yourself from, which is great to hear.
    Was that documented? Do you know if that was documented?

  (1855)  

    Yes. It was.
    Okay. Great. Would you be able to table that with the committee?
     Yes, I can.
     Thank you very much. That would be great.
    Mr. Kukucha, I'm going to you now. You said you were the recipient of the whistle-blower call, which is important for us to dig into. You said they did not submit any evidence, even though it sounded like you had requested evidence. You also said in your opening testimony that you felt the board fulfilled its fiduciary responsibilities with regard to those complaints that came in. Can you describe, in a little more detail, the process you followed, to give us that assurance? I want to know how you can make the claim that the board fulfilled its fiduciary responsibility.
    Absolutely. Thank you.
    I have my notes on the timeline. I received the call on January 27 and immediately consulted with some board members on the governance committee. We formed a special committee on February 1. We referred the matter and hired counsel on the 3rd, and the work started as early as February 10. Our goal was to make this a priority and to address it immediately, because the whistle-blower had outlined serious allegations.
    After that, there was a rigorous investigation by the special investigatory counsel we hired to look into this. They put in 23 hours' worth of interviews, over one to three sessions for each of the people they talked to; they reviewed over 13,000 documents and worked with the special committee to answer the questions that were raised with me. It appears now that we didn't have all the facts because of the dossier. It was raised with me that the mandate be provided to counsel.
    Thank you.
    With regard to the COVID support decision-making by the board, you mentioned that it was the second such decision and that you joined the board just before it, declaring your conflicts of interest just prior to that decision.
    I understand, from reading documents and hearing other testimony, that the decision was made based on a whole portfolio of companies, and that no individual company was actually disclosed in the motion that the board voted on. Is that true?
    That's correct, sir.
    Okay. Is that why the legal advice you received said that there was no perceived conflict of interest? I think this is a very important point, because we've heard from numerous witnesses that there was no perceived conflict of interest because, in fact, you were approving a whole portfolio with a flat amount or a percentage, I think. I'm not sure which, but maybe you can just clarify that.
    My recollection is that it was a pool of capital that management had the discretion to make more investments in, up to a certain percentage, but I honestly don't know the specifics, sir. It's been a while.
    From a legal perspective, that was my understanding, certainly, that those conflicts had been declared, and we did not have to re-declare again.
    Just very quickly, would you be able to table the documents you had offered? You offered two documents in your opening testimony that you could table with this committee. Would you be able to table those? One would show that no companies were disclosed. I think the document that actually demonstrates that would be helpful.
    Then you mentioned another one. I had it in my notes. Could you also table the other document you mentioned?
    Certainly, sir. I'll ask SDTC to table the director's motion. We received a partial transcript from the phone call that the whistle-blower taped, so I'd happily be open if the whistle-blower wanted to disclose the whole tape. I'm very comfortable with my conduct.
    I offered a safe space for that individual to testify, and I let him know that we were going to take these allegations very seriously and action them immediately.

[Translation]

    Thank you very much.
    Go ahead, Mr. Lemire.
    Thank you, Mr. Chair.
    I'll go to you first, Mr. Ouimet.
    For a corporate director such as yourself, one's reputation is probably, even obviously, what's most important. In that connection, earlier you mentioned that it was normal for there to be conflicts of interest, given the way SDTC was constituted.
    You have to understand the level of expertise that was required around the table. That was particularly the case when SDTC was founded, and the same is true today, 20 years later. The context required the involvement of persons who had a very clear understanding of what it meant to create a new business in emerging economies and in the green economy. It required a high level of expertise.
    Could anything have been done differently than to engage individuals with confirmed experience and detailed knowledge of the sector? Could that program have been established differently?

  (1900)  

    Originally, in 2001, SDTC's mandate concerned much more limited fields, such as water, air, soil, processes and decontamination projects. So its mandate was very narrow but complex.
    Its mandate has now become more complex because clean technologies have expanded into virtually all sectors of the economy. You need only consider the investments being made around the world, particularly in the United States, to see that.
    Thorough knowledge of many sectors is therefore required. SDTC has 15 directors. The organization thus has to attract directors who have vertical sectoral skills in all fields. SDTC also requires a matrix of technological skills and knowledge of the various stages in the development of a business. Three factors must be considered: the sector, the kind of technology and the stage of the business's development.
    Some of the new businesses that SDTC finances are at the bench-scale stage, others are starting up, and still others are growing. They also have completely different management and technological development dynamics depending on their stage of development. SDTC therefore requires a board that is capable of assessing the situations of those businesses because it has to consider a large volume and broad diversity of investments.
    My impression is that, when it comes to analyzing businesses to determine which ones to invest in, the federal government has vast expertise in the oil and gas sector but is perhaps less specialized in emerging technologies. It may also be less familiar with all that has to be done to ensure that investments, especially those involving public money, prosper and further the development of intellectual property that will create employment and be sustainable in Quebec and Canada.
    In that context, isn't dealing with a board of expert directors an entirely appropriate way to proceed?
    I think it is. However, I can't assess the federal government's competencies. I'll let others do that.
    In 2001, the federal government decided to create SDTC and to give it a mandate and form of governance. SDTC isn't a group of entrepreneurs who have constituted a non-profit organization. The government created SDTC. I think it decided to use the proceeds from the sale of Petro-Canada to establish a green fund. It then chose a form of governance that would function in the prescribed manner, and that's how it has functioned ever since. Its expertise in the field has developed.
    The organization was initially in the start‑up phase. Today, we have a pan-Canadian clean energy image with a logo that many people are proud to adopt.
    You're still an active member of the board, aren't you?
    That's correct. I was appointed on November 8, 2018. My term was extended by a year at the request of the Governor in Council.
    Following the allegations and the ensuing investigation report, SDTC suspended new project approvals and stated that it wouldn't accept new applications until the recommendations had been implemented.
    How many new projects would normally have been approved in the ensuing months?
    How many applications are normally accepted during that same period and can't be accepted now as a result of this uncertainty dynamic?
    I don't know the number offhand, but I can tell you that SDTC generally invests between $175 million and $200 million a year in businesses, in tranches of $2 million, $3 million, $4 million, $5 million or $7 million apiece, as they say. So it's a considerable amount of money.
    There can be seven or eight projects to be reviewed at every meeting of SDTC's investment committee. Since there are six meetings or so a year, SDTC considers some 50 projects annually. If you add up the amounts involved, they come to a considerable sum.
    And that's not all. Businesses that are already in SDTC's portfolio are waiting to move on to the next phase of their projects. For some of them, phase one may simply lapse because phase two won't be starting. And that's not counting the businesses that have been neglected.
    Consequently, there's a serious risk that the clean technology ecosystem may be damaged for reasons that warrant our presence here today. It's unfortunate to have to suspend funding.

  (1905)  

    In the meantime, the minister is attending COP28 alongside oil executives.
    I can't comment on the minister's work. I perform my role as a director.
    You're doing well.
    Could it have an impact on SDTC's overall mission in future?
    I think it's a matter of time. Reports have been written on the allegations. To date, all the investigations have confirmed that they were baseless. Every report included a series of administrative recommendations, and they were highlighted in a management report that was submitted to the minister and tabled on December 1. As a result, the minister will be informed of all the corrective measures put in place, which may afford him some support in resuming funding for the businesses subject to certain conditions. That's what counts.
    Thank you.
    Thanks as well for your commitment.
    Thank you.
    Thank you, Mr. Lemire.
    Go ahead, Mr. Masse.

[English]

     Thank you, Mr. Chair.
    Mr. Lafond, I will start with you.
    Maybe walk us through the process of declaring a conflict of interest in the boardroom there. Walk us through what happens.
    How does that come about? Do you just basically say it, leave the room and come back in? Maybe walk us through that experience.
    In 2015, 2016 and 2017, we began to realize that our organization was really beginning to respond to the marketplace. In other words, we were responding to what was given to us and to our obligations through our statutory requirements under the order in council. We recognized that we were stepping into a much higher-risk profile with high-risk companies, so we had to change our conflict of interest policy to keep pace. We also had to recognize the types of risks we were now moving into.
    We really began to deal with conflict of interest by getting good advice from, I believe, KPMG, which would give us examples as to what was happening inside IP, inside of AI, in data and how we could make sure we protected those.
    Basically what would happen is, as board members, in advance of a board meeting, we would get a list of the companies we were about to receive and would be making a decision on at the next board meeting. We were to declare any perceived or clear conflict of interest. It would be a profile from left to right, saying that this is the amount; these are the actors, whether it's a VP or...; and these are the other investors inside of it. It would list the sector—whether it was the tech sector or the oil and gas sector—and what it intended to do.
    Then we would respond back and say, “I have a conflict”. When the discussion came through inside the board discussion, people would say that there's a conflict and I recuse myself and leave the room.
    When you get that package, does it include all the information for where you might have a conflict of interest? Do you get a pre-package that would list all the potential decisions of the investments and not have information...? Do you get the full package that could involve some of the information that you might be in conflict with?
    No.
    Like I said, it's left to right. Usually it's the amount, the people involved, the sector it's in and what is there. Is it oil and gas, are they doing something with water treatment, or is it hydro? Are they doing new modular thinking regarding using a technology that allows them to be more cost saving?
    Then you would say that you don't know anybody on the board or any of the investors and you're not involved in this sector. You would really look and ask what the value of it is. Then you would basically say if—
    You wouldn't have any information about what the case would be and then they would send subsequent information about all those investments. You would get two packages.
    That's right.
    Okay. Thank you for that.
    I'm trying to understand a little bit about the culture of the board. Were there board members who would personally know each other outside of the boardroom—professionally, perhaps in co-ownership or joint ownership of other companies or on other boards, or in social circles?
    Was it common in the time you were there to have mixed experiences with other board members?
    For me, it was zero. I had zero awareness of any other board member in this area and in the regions across Canada.
    You have to remember that I come from a very specific community. As I said, there was almost no involvement inside the clean-tech industry.
    If it was the oil and gas industry, sure; I know a lot of people in the oil and gas industry. If it was people in the potash industry, yes; I knew what was going on. In clean tech, I had no involvement with anybody else on the board. It was all new to me.

  (1910)  

    Thank you. That's helpful.
    Mr. Kukucha, how about you? What's your experience of the board?
    You have been there for a while. Do the board members socialize together? Do they sit on joint ventures together? Do they sit on other boards together? What's the environment?
    Second to that, are there any rules about board members talking about the contents of the meeting prior to the meeting or is that not restricted?
    I have sat on a lot of boards, both in the United States and Canada. This is one of the most professional boards I have sat on. The quality of people who sit on it is impressive.
    Because of that level of professionalism, there was not one conversation I had outside of the boardroom referring to any of the projects. There's no coinvestment and there are no relationships that extend beyond that.
    Social occasions tend to be about social occasions and whatever activities people are up to. I have never ever seen a discussion and I have never had one myself that was related to anything regarding investment or the projects.
     Okay. I guess there were social times as well when board members would be together too.
    Periodically, we'd have a dinner. Most of those dinners were filled with, for instance, expert speakers or people who would come in to talk about government programs. Even the board dinners were largely working. Conversations in the hallway or a drink afterwards were largely related to what we were doing as an organization.
    Yes. There were those things happening. Listen, I come from an era when a lot of decisions about other people were made on golf courses. I'm just trying to get a picture here. I mean, you must be a little bit concerned about the.... Here I'd appreciate it, and it would be helpful, if you tabled the whistle-blower's information.
    Obviously, the minister has suspended things right now. These things don't happen in a vacuum. What do you think was really missed in this process? I mean, right now a professional body has been engaged to try to allow whistle-blowers to come forward. Some are still concerned. You've been there for a while. If everything is so good, as Mr. Ouimet says, why are we here?
    Please give a brief answer.
    All I can speak for is that at the meetings we have, there's vigorous discussion. Oftentimes, the terms of the funding get changed in those meetings, but we operate as a consensus organization when it goes to a vote. The board members are very much engaged and very much focused on the cause.
    I was not on the people and culture committee, so obviously, if there were issues, some of that did not filter up to the board or to me. But this is one of the better-run organizations I've ever been a part of. It's incredibly professional and very results-driven. I can't say much more than that, sir.
    Thank you very much.
    I'll now yield the floor to MP Barrett for five minutes.
    Mr. Ouimet, how long have you been on the board for SDTC?

[Translation]

    I've been sitting on it since November 8, 2018.

[English]

    You were appointed by order in council?

[Translation]

    Yes.

[English]

    How many companies that you have had stakes in have received taxpayer money from SDTC?

[Translation]

    One.

[English]

    Did you vote to give your own companies taxpayer money?

[Translation]

    First of all, I don't have any companies. You don't own a business when you hold 1% of its shares.
    We discussed the situation—

[English]

    Let me offer you precision, then, sir. How many companies that you have an interest in have received money from SDTC?

[Translation]

    I'll repeat my answer: one company.

[English]

    Did you vote to give these companies that you have an interest in taxpayer money?

  (1915)  

[Translation]

    If you understand the logistics of the COVID‑19-related payment, you'll understand that the answer is yes because, according to the legal opinion, the conflict of interest was managed.

[English]

    I do understand that. Let me lay this out. One of the companies you have an interest in is called Lithion. Is that correct?

[Translation]

    Yes.

[English]

    You answered “yes”. I'm not sure if that was audible.
    We heard about how rigorously the conflict of interest rules have been followed at the company. I want to go over that with you.
    In front of me I have the minutes from the meeting of Monday, March 23, 2020, at which a COVID payment of $192,100 for Lithion was approved.
    Mr. Guy Ouimet: Yes.
    Mr. Michael Barrett: You've agreed “yes”.
    On Tuesday, March 9, 2021, that company you have an interest in received $201,705.
    Mr. Guy Ouimet: Yes.
    Mr. Michael Barrett: So there was one payment of $192,000 and one of $201,000. You voted to award both of those payments to that company.
    Now, in attendance at both of those meetings, your name is listed. Were you in attendance at those meetings, sir? Just give a yes or no.

[Translation]

    Yes.

[English]

    Perfect.
    I see that “regrets” are noted on the minutes. You're not included in the regrets.
    When I turn the document over, it says that the decision was unanimous on that item. That was on the 2020 item. The same was true for 2021. It was unanimous. There's no indication that there was a recusal made.
    In further items, it goes on to actually detail when someone leaves the room. It talks about how the CEO left for an in camera item. Of course, we can infer from this that you didn't leave the room for the decision on the COVID payments. Is that correct?

[Translation]

    Yes.

[English]

    So you voted in both cases to award the money to the company that you have an interest in. Is that right?

[Translation]

    Yes.

[English]

     Sir, do you see how it could be perceived as a problem for taxpayers that you're there as a government appointee, and then you vote to give a company, which you have an interest in, hundreds of thousands of dollars? Do you see how that's a problem?

[Translation]

    I can answer that.
    The problem was acknowledged at the outset. I repeat that we established an emergency measure. COVID‑19 arrived and created a state of emergency. There remained $20 million. As the companies needed working capital, a universal program was immediately set up.
    We all declared conflicts of interest at one point or another. Well, that wasn't the case for all of us, but it was for a number of directors. You mustn't think that all directors are constantly in conflict of interest.
    So that decision was made. We didn't vote blindly, as you seem to suggest. We immediately asked management how the program should be managed, and it ultimately took the form of an allocation to management. Here we're talking about 5% for each company in the portfolio, as an addition to its existing contract. We submitted that decision to the legal adviser and subsequently complied with his advice.

[English]

    We understand that the amount awarded was a percentage amount, but, sir, there's no way that reasonable people would find these to be reasonable actions. It's unbelievable that we have hundreds of thousands of dollars being paid. We have a government appointee taking taxpayer dollars and then giving those dollars to a company they have an interest in, because other people who are also implicated, like the chair of the board, have said that this kind of behaviour is okay. To say that this is a company or an organization with rigorous ethical standards that have been followed is absolutely egregious.
    We saw in the report that the government commissioned that the ethics adviser advised people to backdate their ethics filings. Sir, this is not just problematic; this kind of insider dealing and corruption is very problematic for Canadians.
    Sir, we'll have more questions about that. I think I'm out of time.
    You are indeed, Mr. Barrett.
    I will now yield the floor to Mr. Sorbara.
    Good afternoon, gentlemen, and welcome.
    I want to turn my attention to a couple of things. The first is the role of SDTC as an incubator and as an investor in clean tech in Canada.
    If I can go over to you, George, how important is that role?
    It plays a very critical role in where we placed it inside the value chain. As I said, we started realizing that the clean-tech definition was changing dramatically. When we began this journey as a country—it even goes way back to when I was on the National Round Table on the Environment and the Economy—we were really looking at the hydrocarbon issue. That's why, when you take a look at the funds that we had or were initially allocated, you see that it was about biofuels.
    We recognized, as I said, in about 2015, 2016, that there was a change in how you would define clean tech, because then you had to overlay IP, you had to overlay data, you had to overlay AI. We recognized that we had to keep up with where the market was going.

  (1920)  

    I was an investment banker at one time in my life, in my twenties, and it was a pretty unique experience. The traditional metrics you use when you're valuing a company don't really apply when you're looking at early-stage entities.
    A voice: That's right.
    Mr. Francesco Sorbara: I'm going to put this question to George, Stephen—and even to Guy, if he wants to chime in.
     In the approval process, when project ideas or ideas for investment were submitted to you, there must have been some sort of governance process that you would go through in normal times. All three of you gentlemen are very experienced in this area. In your view, how robust was that governance process that you would be looking at or utilizing in analyzing the project ideas?
    We have George and Stephen, and then Guy, if you can keep it quick.
    I remember receiving a briefing note that talked about where we were going to start positioning SDTC, which was in a high-risk area. We basically looked at all of what I call the "silos", whether it was the type of research that's required for technology, or whatever.
    You said you were a corporate banker. As you know, when you start doing start-ups, you're really looking at the capital costs; you're looking at the tangible assets, like a factory. In Saskatchewan, it would be $200 million for a canola crushing plant. What entrepreneur has that? Basically, you're looking at banks, you're looking at investors, whatever it may be. We recognized we had to be even on top of that tier. When we started talking about the types of loans or the types of grants, soft loans, whatever they may be, we knew we had to be on top of these issues surrounding the new way in which we had to help fund and support start-ups or scaling up.
    I would say to you that the rigour was there, because in many ways, when you see how some of the projects that were starting up then moved to scale up, it told me that we were really graduating. We were putting bets on companies that really were doing what they said they were going to do.
     Perhaps we can go to Stephen online, please.
    I've seen and done a lot of due diligence in my day. The materials that come from this group are first class. They are top notch. It's to the point that this organization, over its 20-year history, has invested about $1.7 billion into companies, and that has leveraged about $13 billion. That only happens if the due diligence and the effort the organization puts in are respected and viewed by external investors as something that's impeccable.
    As a member of the project review committee, I'll receive 1,000 or 1,200 pages to read a week before a meeting, and it is detailed. I'm very confident in the work product that this group and team puts out.
    Stephen, you mentioned earlier that you recused yourself on occasions where you needed to and that was the policy that was put in place. Is that correct?
    That's correct, sir. Yes.
    Guy, perhaps you could comment on what I've asked the other two gentlemen, s'il vous plaît.
    Is it about due diligence?
    It's on the due diligence and the recusal.

[Translation]

    SDTC's due diligence process is absolutely robust, and it has evolved. It was initially a way to evaluate projects when SDTC was financing projects with various partners. In more recent years, we've encouraged SDTC to adapt to the market, to finance companies rather than projects. You have to understand that projects sometimes disappear, whereas companies have a greater chance of successfully creating value.
    SDTC's due diligence process therefore expanded in the sense that we now focus more on all the components of a company's value, rather than solely on technology. In so doing, we've established a robust process within SDTC using outside resources.
    Thank you.
    That's all the time you had, Mr. Sorbara.
    Mr. Lemire, you have the floor.
    Thank you, Mr. Chair.
    I'll continue with you, Mr. Ouimet.
    From what we can understand, the oil industry generates profits of roughly $200 billion. TVA informed us this week, and we mentioned it earlier in the House of Commons, that people in that industry have had some 2,000 meetings with Liberals at the various levels of government, which amounts to an average of three meetings a day.
    Given all those meetings and all those investments in the political parties, both Liberal and Conservative, can't you perceive an apparent conflict of interest that might be worth pointing out?

  (1925)  

    That's a good question, but I'm not here to talk about politics. That's not my world. I come from the business sector, and I have experience as a director. The role of political parties and oil industry lobbyists is a sensitive topic for me because I'm not part of that world. However, I'm well aware that lobbyists are active in every industry, and the oil lobby in Canada is no doubt very powerful.
    Do you think that can have an influence on the kind of priorities that certain political parties set or that it can encourage them to put certain questions to witnesses based on the interests of those businesses or their financial interests?
    I'll ask my question in a different way. Given all the positive impacts that your businesses can generate in a new oil-free economy, is it possible that the oil companies may perceive the emergence of green technologies as a threat to their own economy and be happy that people ask questions for them based on their monetary interests?
    I won't answer that question directly, but I will say that the industry associated with the fight against climate change using clean technologies isn't a threat to the economy or to oil. Oil is abundant and will be around for a long time too. Oil production will decline over a long period. There are all kinds of ways, such as carbon capture, to clean up the oil industry and contain it where that's essential. It will continue to exist.
    It's like when we started digitizing accounting and stopped doing it on paper: it didn't destroy jobs. The economy is shifting. Different technologies will come to light after the industry in general has been decarbonized, and everyone will benefit. The people in the oil industry will survive and those who evolve will go elsewhere.
    Thank you very much.
    That's all the time I had.
    Thank you, Mr. Lemire.

[English]

     For our final questioner, we have Mr. Masse.
    The floor is yours for two and a half minutes.
    Thank you, Mr. Chair.
    I'll go back to Mr. Kukucha.
    You heard testimony earlier this evening about board members voting on funding for companies in which they had an interest. Is this the model of a board operating at the highest standard? We just heard testimony that people with a conflict of interest voted in favour of projects they were tied to.
    Is this a regular practice? Do you want to correct the record? Would you expect this to take place on other boards?
    Absolutely not.
    It certainly wasn't a regular practice. I know the RCGT report pointed out some documentation issues. I think some of the minutes and documents may not accurately reflect.... I can't speak about all of them with specificity.
    I know for a fact that, when directors had a conflict, the process George discussed is exactly what happens. You declared the conflict before you received materials, and you left the room. Perhaps, in the instance of a COVID payment—the second one; I can't speak on the first one—some people, including me, had an interest, but there was no list of companies that came forward to us.
    To this day, to be honest, I didn't even know the companies that I [Inaudible—Editor] conflict received funds.
    We just heard one of our panellists, Mr. Ouimet, suggest that he was in the room and voted for money going to a company he had a financial interest in.
    The proper procedure, and the procedure that was followed, was this: Directors, either physically or virtually, left the room.
    That's my recollection of what always happened at meetings.
    I'll leave it there.
    Thank you, Mr. Chair.

[Translation]

    Thank you very much, Mr. Masse.
    That's all the time we had.
    I would like to thank the witnesses, Messrs. Kukucha, Ouimet and Lafond, as well as the interpreters, the support staff, our clerk, the analysts and everyone here present.
    Go forth in peace.
    The meeting is adjourned.