ETHI Committee Meeting








House of Commons Emblem

Standing Committee on Access to Information, Privacy and Ethics


NUMBER 019 | 1st SESSION | 45th PARLIAMENT

EVIDENCE

Wednesday, November 26, 2025

[Recorded by Electronic Apparatus]

(1630)

[English]

     I call the meeting to order.
    I want to welcome everyone to meeting number 19 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.

[Translation]

    That, pursuant to Standing Order 108(3)(h), the committee undertake a study to assess artificial intelligence (AI), the challenges it poses, and how it should be regulated.

[English]

    I would like to welcome our witnesses for today. As an individual, we have Antoine Guilmain, who is the partner and co-head of Gowling WLG's national cybersecurity and data protection practice group. From the Machine Intelligence Research Institute, we have Malo Bourgon, who is the chief executive officer.
    Mr. Guilmain, I'm going to give you up to five minutes to address the committee. If you want to start, go ahead, please.

[Translation]

     Thank you very much, Mr. Chair and members of the committee, for inviting me to comment on the challenges related to regulating AI.
    Although I will be testifying in English today, I will respond to your questions in English or French.

[English]

    I am co-leader of the national cybersecurity and data protection group at Gowling WLG, and I'm an associate professor at the faculty of law at the Université de Sherbrooke. I am a practising lawyer called to the bars of Quebec and Paris. My evidence today represents my own views. I am here as an individual, not representing my law firm, clients or any third parties.
    Much of my legal career has focused on comparative analysis of legal regimes across the globe, advising clients on their compliance obligations in the jurisdictions where I am qualified to practise. My practice focuses on data protection and cybersecurity, and it naturally extends to artificial intelligence, given its role as a major data-driven technology.
    To me, Canada has always been a model of education, growth and innovation. That's why I chose to pursue my doctorate, start my family and build my life here—recently earning citizenship, which remains one of my proudest moments. I believe that Canada's institutions, diverse economy and culture of innovation create an environment well suited for the effective development, adoption and regulation of AI technologies.
    Today I would like to discuss the challenges of AI, not simply as an ever-evolving technology but as a new field of regulation. In my view, grounded in my experience in the current international landscape, there are three key pitfalls that we must not overlook.
    The first one is that newer doesn't mean better. There is a natural tendency to respond to new technology by creating new laws. However, consistent with the civil law tradition, leading jurists have long recommended applying ancient law to technological revolutions. This approach is not about doing nothing. Rather, it calls for revisiting existing areas of law and adapting them, case by case, to each new technology.
    Today, AI does not exist in a legal vacuum in Canada. A wide range of legislation already applies, including copyright, liability, trademark law and data protection. In this last area, we are already seeing new obligations related to automated decision-making, including in Quebec, to ensure transparency when AI is used. In that sense, prior to tabling bills like the former AIDA, we should assess current laws and identify any gaps before imposing new requirements.
    My second message would be that faster doesn't mean better. There is a natural tendency, again, to adopt laws as quickly as technologies evolve. However, in law more than in any other field, slow and steady often proves the wiser approach. A look at both domestic and international developments illustrates why.
    In data protection, for example, the GDPR, the general data protection regulation, was adopted in 2016, but it took Quebec five years to amend its own legislation in response, with Law 25, particularly in light of the GDPR's international impact. In the realm of AI, the EU AI Act, which came into force in August 2024, is already facing a form of retrenchment, especially regarding implementation timelines and the regulatory burden on tech companies. Whether it will achieve the same success as the GDPR remains uncertain.
    Closer to home, AIDA faced significant changes after its introduction. The most recent version contained no fewer than 70 references to upcoming regulation in just 20 pages—an ambitious effort, but far from a self-contained legislative text.
    My last message would be that heavier doesn't mean better. Again, there is a tendency to assume that the greater the burden on organizations, the better the protection for the public. This is not always the case, and, more importantly, it can undermine the competitiveness of small and medium-sized enterprises. AIDA reflected this trend, mandating multiple assessments at various stages of an AI system's life cycle. While theoretically sound, this approach is rarely feasible in practice, at least based on my experience.
    In sum, I believe that AI legislation can succeed only through sustained and substantive collaboration with stakeholders in industry, academia and civil society to ensure that any framework, first, reflects a risk-based approach; second, appropriately takes into account the state of AI technology, including its current limitations; third, assigns responsibility along the AI value chain; and finally, harmonizes core concepts with existing international frameworks.
    With the chair's permission, I would be pleased to submit a short written brief in French and English on the issues I have addressed in my opening remarks.
    Thank you, and I look forward to answering this committee's questions.
     Thank you, Mr. Guilmain.
    Mr. Bourgon, you have five minutes, sir. Go ahead.
    Mr. Chair and members of the committee, my name is Malo Bourgon. I'm the CEO of the Machine Intelligence Research Institute, or MIRI, a non-profit founded in 2000 to make sure that the development of powerful AI systems is beneficial for humanity.
    I grew up in Ontario, where I studied engineering and computer science at the University of Guelph, and I've worked at MIRI since 2012. Our research helped create the field of AI alignment, the study of how to build AI systems that reliably want—and do—what we want them to.
    Governments face many urgent AI concerns, such as disinformation, surveillance, labour displacement and threats to democratic institutions. These are all real and important. However, my focus today is on something different: dangers from AIs that are smarter than the smartest humans at every mental task—what's often termed artificial superintelligence.
    The leading AI companies today say that the creation of artificial superintelligence is their explicit goal. OpenAI's CEO, Sam Altman, recently called for “making superintelligence cheap [and] widely available”. Anthropic's Dario Amodei talks of building “a country of geniuses in a datacenter”. These AI companies weren't founded with the intention of making chatbots. To them, chatbots are a stepping stone.
    Researchers at MIRI are concerned that if the world continues racing towards superintelligence using anything like today's techniques and understanding, the default outcome is that we'll lose control, likely resulting in human extinction.
    Why is there such a big danger? For one thing, AI is unlike traditional software and doesn't behave exactly as its creators intend. Traditional software is written line by line, and a programmer can understand every part. Modern AI systems are grown as enormous neural networks, trained through trial and error with massive computation. Their creators have little insight into what's actually going on inside them. As a result, AIs often exhibit behaviours that nobody asked for and nobody wanted.
    For years, MIRI has warned of this eventuality, and now we're starting to see early evidence. Frontier AI systems get caught cheating on evaluations. AIs sometimes drive users into states that clinicians call AI-induced psychosis, even in cases in which the AI systems themselves can readily tell that their responses are harmful to the user. When we look at their chains of reasoning, we see growing signs of attempts at deception. An especially concerning complication is that models are increasingly recognizing when they're being tested; this is called situational awareness, and it threatens the validity of all safety evaluations moving forward.
    At current capability levels, these behaviours are concerning, but not catastrophic. The systems are still limited, but we must ask, what happens when they reach the capabilities the companies are aiming for? Will future AI systems start pursuing their own objectives? If so, what will those objectives be? Do these systems endanger us? Can we just pull the plug? Many who have studied these questions have found the answers quite concerning.
    Canadians Geoffrey Hinton and Yoshua Bengio are two of the three godfathers of deep learning—which is the paradigm that underlies most of the modern AI systems today, and certainly the most powerful ones. They have publicly warned of the dangers of extinction. In 2023, they joined other top AI scientists, and even the CEOs of OpenAI, DeepMind and Anthropic, some of the leading frontier labs, in this statement: “Mitigating the risk of extinction from AI should be a...priority alongside...pandemics and nuclear war.”
    Some of these signatories lead the very companies racing fastest to build superintelligence. Elon Musk called AI a “fundamental risk to the existence of civilization”. Dario Amodei said, “there's a 25% chance that things go really, really badly”, including extinction. This is an unprecedented situation, in which even the creators of the technology are saying it's incredibly dangerous.
    Catastrophe, however, is not inevitable. These dangers can be averted. The race that so many see as unstoppable is taking place in a world where most people don't understand the threat. That can change.
    What can Canada do? Policy-makers can say what other leaders seem to lack the courage to: that, according to top experts, the race to superintelligence is far too dangerous. Canada can start a global conversation that changes what's possible when it comes to averting this threat. This very House could ask the leaders of those companies to testify under oath about these grave dangers.
(1635)
     As for the motion that initiated the study, the mover said that Canada should not “unnecessarily slow down technological development”. I agree, of course. We can keep the self-driving cars, the chatbots of today and the AI-powered drug discovery tools among many very promising AI applications. Much of the technology is extremely beneficial and promising. The only thing we need to stop is the race to AIs that exceed humans in every way. The extremely specialized chips and enormous data centres essential to that race can be reined in. Canada cannot do this alone, but it can help start the global conversation.
    Canadian scientists led the way on this technology. They continue to lead through their efforts to get the world to avert the dangers. My hope is that Canada will use its voice and moral authority to push the world forward so that our best plan is not that we hope we get lucky in the presence of this threat.
    Thank you. I look forward to your questions.
(1640)
    Thank you, Mr. Bourgon.
    With that, we'll start our questioning. It's going to be a hell of a discussion, I think.
    Mr. Barrett, go ahead. You have six minutes, please.
     Mr. Guilmain, our current reliance, if I understand correctly, is on a voluntary AI code. The European Union enforces the AI Act, which is binding. From a compliance perspective, does this gap expose Canadians and Canadian businesses to legal uncertainty? Does it create greater privacy and algorithmic bias risks?
    Not necessarily, and I will explain why. We have, at the moment, data protection laws that are working pretty well. We have a privacy commissioner at the federal level and in different provinces as well. In these laws, we already have some existing requirements, including when it comes to AI—more specifically, when there's an automated decision-making process.
    It's interesting that you raised the EU example. As I mentioned in my opening remarks, it came into force in 2024, but last Wednesday—as a matter of fact, on the same day I got the invitation for this session—the Europeans tabled a digital omnibus on AI. Essentially, this text aims to extend compliance timelines and to reduce the burden and obligations on small and medium-sized organizations.
    It tells us that they came up with a proposal, but it's still evolving while we're moving forward. It's important to keep this in mind.
     What measures do you think should be in place, for individuals who are shaping AI policy, to prevent regulatory capture or conflicts of interest?
     In terms of the potential obligations we could think of, it would mostly be compliance assessment, but not too much. What we see at the moment, especially in Quebec, is a tendency to put impact assessments in pretty much any legislation. That's a problem. We see that it's not feasible for most organizations. There is also potentially the idea of more accountability documentation or having some procedures and processes internally. Again, the idea of policies and procedures is amazing. Even though I'm a lawyer and I'm a big fan of these documents, it's not sufficient.
    Finally, I think it's more about training within organizations. That's what we see more and more, at least with my clients. The goal is to really ensure that staff know and understand the potential AI risks, as well as the potential benefits for their own organization.
    It's a rapidly evolving space, then.
    We've seen companies like Nvidia finance customer purchases of their own products to accelerate their adoption. Do you see similar risks if individuals with major AI investments are advising on Canada's AI policy when their financial interests could shape the rules? That's what I'm driving at.
     I'm not sure I follow the question. I apologize.
    I got part of an answer in the first half, so I'm going to move to my next one, if that's okay. If you reflect on it afterwards, and you want to submit it to the committee in writing, I'd appreciate that.
    Mr. Bourgon, your institute warns about catastrophic AI risks, though you did temper some of that in your opening statement. You said that it might not all be bad.
    There is an absence of binding legislation, and we have a reliance on voluntary codes. Do you think this approach is sufficient to prevent the worst-case scenario, or do you think that to prevent the worst-case scenario, the most extreme effort should be undertaken, which would be a moratorium on development until robust and conflict-free regulations are put in place?
(1645)
     That's a great question. When I think about AI, I often try to separate the applications and current systems we have today. Many of those should be regulated the way most new technologies are regulated.
    The thing I focus most of my time on is thinking about where the technology is going, and what the risk will be from these very advanced systems. In that case, it is, unfortunately, a very challenging coordination problem.
    If any one actor decides that the risk is too great, its slowing down does not really prevent anyone else from building a system that would pose those risks. The main area of focus should be on trying to have those conversations with partners to figure out an agreement they could come to in order to stop pushing the frontier.
    However, I think that our best chance of not succumbing to a catastrophe is finding some way to agree with international partners on which frontiers we aren't going to push.
    Thank you.
    Thank you, sir.

[Translation]

    Mr. Sarai for six minutes.

[English]

     If you need your headphones, make sure you have them on. Make sure you're on the proper channel.

[Translation]

    Thank you very much, Mr. Chair.
    I thank the witnesses for their very inspiring and interesting testimony.
    Before I ask any questions, I would like to make a clarification about the history of AI.
    What would be very dangerous would be a revolution, or demands for regulatory changes around the world, in relation to a technology that has recently seen a resurgence in use but that has been around for a long time.
    There is a resurgence in the use and exploitation of AI because it has been popularized; however, the first article in which the term “artificial intelligence” was mentioned dates back to before 1954. It is very important to remember this. Today is my birthday, and I have been using artificial intelligence in predictive analytics for over 30 years. This is a very important point, and it leads me to ask you both a question.
    We are not just talking about generative AI or machine learning, with or without oversight. I do not see how regulations could be applied in either case. That is why I would like to benefit from your expertise in this area.
    Would asking the government to regulate technological development really be a viable solution, or would it be better to regulate the use of technologies to manage the risks? If I own a private company that creates technology, will we regulate how I do it, or rather regulate how that technology will be used afterwards? When I talk about use, I mean the collection, processing and communication of data, and that life cycle is much more important.
    I would really like to hear your thoughts on this.
    Thank you for your question and happy birthday. I am in a good mood too.
    I would like to start by saying that, in this presentation, we place a great deal of emphasis on the risks, and rightly so. There are risks, but there are also good things that come from AI.
    Next, I would like to emphasize something that is sometimes overlooked, namely that there are legal considerations. It is very important to stress this point. I am a lawyer specializing in privacy and cybersecurity, and I can confirm that there are rules. I have work at the moment, so there is no problem on that front.
    That said, I think it is interesting to ask what kind of AI we are talking about. The motion refers to artificial superintelligence. There are nuances. My colleague Mr. Bourgon will also be able to comment on this. From an educational perspective, there are three types of AI.
    First, there is rather limited AI, namely the kind we have today, even though it has increasingly sophisticated and broad capabilities. Second, there is slightly more general AI that would mimic human behaviour. Third, there could be artificial superintelligence.
    However, it is important to understand that it will not happen overnight. Artificial superintelligence is a concept that dates back to the 1950s. Since then, there have been developments, advances, and setbacks. Today, we are where we are, and we are witnessing a growing trend that will not be reversed overnight.
    This study is fundamental, in my opinion. Furthermore, in general, Canada truly has a rather unique approach to AI, both in terms of adoption and in terms of trying to regulate it. I salute this aspect, but we must not think that it will lead to an immediate change.
    This is my opinion.
(1650)
    Let's talk about trying to regulate it.
    I have been observing the regulatory trend in the European Union for a long time. Earlier, you mentioned there were changes just a week or two ago.
    Let's use another example so that we don't rely solely on the European Union, namely Australia. When the Australians wanted to impose regulatory requirements on companies, first, there was a lot of resistance; second, some companies left the regulated areas.
    Do you think this could be a risk? What would you advise the Canadian government to do to better regulate the use of AI, while keeping skills and knowledge within our country? I do mean “regulate.” I completely agree.
    I do not think legislation will solve all the problems. Regulation is much broader than legislation. Different types of regulation come from markets, from social needs and from voluntary codes of conduct. So it is really a very broad concept.
    I would like to talk to you about Quebec. You may have heard of Quebec's Law 25, an act to modernize legislative provisions as regards the protection of personal information. Even today, I still provide legal advice and group therapy on this law. I am telling you this because it is an extremely onerous law in terms of obligations, one that puts small and medium-sized businesses in absolutely untenable situations.
    It is an interesting situation: the new legislation has brought progress. But is the public better protected? Are organizations comfortable with this law? I can assure you that if we conducted a survey, we would probably get some rather surprising results.
    All this to say that the law is a good tool. Again, don't get me wrong, because the law is my job. However, we still need to think about how the regulator applies it. It's really important to keep that in mind.
    I would like to remind you that Law 25 only affects how data is handled, which is fine, because it deals with the collection, processing, use and disclosure of data.
    Thank you for your comment. Perhaps I will have the opportunity to go back to this later.
    Thank you, Mr. Sarai.
    Mr. Thériault for six minutes.
    Thank you, Chair.
    I will give a brief introduction.
    I proposed the study we are undertaking on AI. This standing committee has three priorities: access to information, privacy and ethics. Then I read the “Statement on Superintelligence,” which was signed by leading figures in the field of AI, including Geoffrey Hinton, Yoshua Bengio and many others. The list is incredible.
    Like you, Mr. Bourgon, I have read the statement. These people are saying that we must mitigate the risk of extinction due to AI, and that this should be a global priority on a par with other societal risks, such as pandemics and nuclear war.
    It blew me away. From there, we can ask ourselves whether this is really serious. If we start looking, we notice a frantic, almost blind rush towards the establishment of artificial superintelligence that seems to favour economic interests, concentration, and control of information over human interests. An entire vision of human beings underlies what we are doing and what the impact of artificial intelligence on human life will be. For example, when it comes to computer engineers, AI can enlighten us and do the job on the spot, which will revolutionize everything.
    I will turn to you in a moment, Mr. Guilmain, but I think things are evolving quite a bit faster than you claim. Canada appointed a Minister of Artificial Intelligence, which is not insignificant.
    Mr. Bourgon, do you consider the people you mentioned in your presentation to be alarmists?
(1655)

[English]

     No.

[Translation]

    Have you already heard claims that artificial superintelligence and its development cannot be controlled?

[English]

     Yes, I've heard that.

[Translation]

    I read the document in which you propose building an off-switch. Could you explain a little more about that?

[English]

    Yes, absolutely. In the context I was talking about earlier in terms of how this is a coordination problem, separating the applications and the current things of AI from where things are going.... I can get into that if people are interested in why those risks might present and some of the history around that. I think we need to build a world where we have the ability to make agreements to stop pushing the frontier of AI development and to verify and enforce those agreements. We're not there yet. We certainly don't have the political will to do anything like that, but we need to be able to build in the capability to have the option to do so.
    Branding is hard. Building an off-switch doesn't mean it would shut down all AI development. It's having the technical, institutional and legal capability, should there be the political will, to impose fairly strenuous restrictions on AI development, deployment and diffusion in pushing the frontier of these very powerful general systems going toward superintelligence. Being able to build that capability is essential.
     We've done the work that you read. We recently released some work trying to sketch out what a model international agreement would look like that could prevent the creation of this technology until we can create it safely. We started to enumerate the things that would need to be in place and that make up the components of what we call this off-switch, to be able to enforce and verify it.

[Translation]

    We can certainly concern ourselves with legislative and regulatory issues, but I felt it was necessary for the committee to consider the ethical implications of such a statement. The public deserves our consideration of the matter, because this is not just any issue. It would be as dangerous as nuclear weapons, but as far as I know, nuclear weapons are fairly codified and regulated. What strikes me is that this is not the case with AI. It seems that people are fascinated by the application of AI to various fields, without even being able to conceive of where it could lead us.
    All these applications will be used in fundamental research to create artificial general intelligence, right?

[English]

    Yes, I certainly agree. I often find myself saying that there seems to be—and it's like slang—a general missing mood here when it comes to where the future of AI is going. Setting aside the risk of loss of control, which I'm worried about, the current AI companies that are taking these risks very seriously are basically talking about building a technology—and you can call it AGI or something else—to have computers do the “thinky thing” we do that allows us to build rockets to go to the moon and to develop novel science. You can think of this as automating automation. Setting aside those loss of control risks, this is still something that would make all cognitive labour economically redundant and that would automate automation.
    These companies expect to be able to build systems approaching this within some small number of years. Maybe there's some advancement they don't yet have that will get in the way, and it could be 10 years. Five years ago, we thought AI systems that could talk to us as they do today were 20 years away. Then someone came up with a new idea, the transformer; all of a sudden, we made bigger AI systems, and they were talking to us. There's a hard forecasting question here, but more money is going into these systems than ever before, and we're making advancements.
(1700)
    Thank you, sir.

[Translation]

    Thank you, Mr. Thériault.

[English]

     Mr. Cooper, go ahead for five minutes. They're five-minute rounds now.
     Thank you, Mr. Chair.
    Mr. Guilmain, you referenced the European Union's Artificial Intelligence Act. It's been characterized by some as essentially the gold standard. I take it from your comments that you would not view it as such. Is that fair?
     I don't know yet.
     Okay. You—
    If I may, I mentioned the general data protection regulation in Europe, which is essentially the gold standard. In the world, we started seeing many states changing their laws to essentially mimic this trend. It took a couple of years. It wasn't done in a year.
    At the moment, we know that the GDPR has been a success in terms of its reach beyond the EU, but we don't yet know whether the EU AI Act will gain the same success. Regarding the update I gave you from a week ago, it demonstrates that we are still building the plane while we are flying it, if you will allow me the expression.
    Just to clarify, you would view it as the gold standard.
    I would do what?
    Would you view it as the gold standard or not as the gold standard?
     I don't know.
    You don't know.
    I think it's interesting. If you were to ask me, I think this is really well thought out. They really tried to come up with something similar to what we had in Canada in AIDA. Clearly, it was this idea of proposing something.
    Is it sufficient? Is it needed? I don't know yet.
     You spoke about some of the challenges, though, including the regulatory burden imposed on companies. I saw one assessment done by the EU that indicates it could cost businesses hundreds of thousands of dollars to use just one AI system. That seems to be problematic. I hope you would agree with that.
    Would you care to elaborate on some of the issues around what I think is problematic in terms of the burdens it puts in place that aren't necessarily in line with some of the risks?
     Absolutely. I will give you an example. Let's say you own a CrossFit gym based out of Saint-Louis-du-Ha! Ha!, and you want to use AI for real-time movement feedback and potentially for performance tracking and predictive PRs—personal records. You think it's a good idea for your members.
    Under the EU AI Act, it would require quite a bit of money to essentially launch and propose these features to your users. You would have to do compliance assessments. You would need to have in place accountability documentation, policies and procedures. You would potentially need to have some record-keeping obligation, a register, in case of an incident. You would need to make sure that there was human oversight. You would potentially have to notify someone if there were a problem. Remember, you are in Saint-Louis-du-Ha! Ha! and you own a CrossFit gym.
    I'm not saying I'm against this obligation. I think it makes sense in some situations. The fact is that—and we see this with the new laws at the moment, with a lot of obligation—the massive problem is mostly for small and medium-sized enterprises. It seems that we have not found the response for these layers of organization that are really key for Canada.
    I think that's my take. Again, I'm not against it. I just think it's a lot.
     Well, here in Canada, we're really lagging behind when it comes to adaptation. You cited the EU law. The U.K. has also done some work around AI regulation. It seems to me to be a more flexible approach with sector-specific guidelines.
    Do you have any thoughts on the U.K. approach? Is there anything to learn from what the U.K. is doing?
     Thank you for citing the U.K. example. I'm not pretending to be an expert on this system, but it seems to be reasonable, in the sense that we don't do nothing; rather, we focus on some sectors, which makes a ton of sense to me. That's my initial reaction.
(1705)
     Thank you, Mr. Guilmain and Mr. Cooper.

[Translation]

    Ms. Lapointe for five minutes.
    I would like to welcome the witnesses and thank them for being here. We are going to learn a lot. You are going to help us go further, do better and better regulate AI.
    Earlier, Mr. Guilmain, you said that we did not necessarily need to move faster and that we needed to identify the gaps in our current legislation.
    Were you talking about Canada as a whole or were you referring to Quebec? I would like to hear your thoughts on this. Do you believe that there are loopholes in our laws that need to be addressed?
    Thank you for your question.
    Indeed, we have many laws. I myself specialize in cybersecurity and protecting personal information. However, many other sectors directly impact artificial intelligence. I'm thinking of copyright, trademark rights and consumer protection. Within the framework of those laws, we always have supervisory authorities associated with enabling legislation and with provisions often drafted to be technologically neutral. That's always the goal of our legislation. We confer a form of neutrality so that it stands the test of time.
    That's the case, for example, with protecting personal information. We see that requirements are in place. They don't mention specific technology, but they are broad enough to potentially extend the use case, namely for any use of artificial intelligence. I'm setting aside the issue of superintelligence, because I think even I myself have a hard time defining it.
    Once we've said that, it seems to me there's a trend towards passing these laws because it feels good. It's like eating Nutella. It's something that's not bad. It's pleasant. We tell ourselves we've done something. However, what I see the most often down the line is that we have regulators who must apply these laws and have the skills to do so. We have regulators in Canada and Quebec who do extraordinary work. However, they lack the means to truly keep abreast of these changes.
    Once again, the logic I'm applying focuses on maybe passing fewer laws. We have more and more legislation. However, the fact is we have excellent regulators who can conduct analyses for themselves and are also able to sound the alarm.
    Let's talk about Quebec's Commission d'accès à l'information, to cite just one example. The Commission is responsible for applying Law 25, which protects Quebeckers' personal information. However, we see it's already taken a position on artificial intelligence. It tabled briefs. Its representatives explained what they think is the correct application of the law to artificial intelligence.
    In the end, we end up with forms of regulation through existing legislation. That said, we cannot solve this problem today. However, we must also admit that Quebec's Commission d'accès à l'information lacks resources. In my opinion, this observation applies more generally: we have high-calibre regulators, and perhaps the solution is to give them a real opportunity to dive into the file and keep a steady hand on it.
    I think that's the expression I find interesting. To “keep a steady hand” on something essentially means having an understanding and applying legislation correctly. That's my feeling, at least.
    Therefore, they would help you update legislation if we find loopholes, to make sure nothing about artificial intelligence gets lost.
    You spoke earlier about protecting personal information, but it goes beyond that. It's about data in a broad sense, in the way artificial intelligence is used.
    Absolutely. I’ll say again that this aspect of data is intimately tied to the issue of artificial intelligence. Indeed, this intelligence is ultimately nothing but the result of machine learning. In other words, data is required to reach any form of artificial intelligence.
    As a result, we see that the problematic data is often personal information, meaning information that could directly or indirectly identify you, other individuals or other groups. That is where the risks most often lie. They aren’t the only risks, but the data used to get to a result are already subject to legislation.
    So that’s my position. I don’t claim to be an artificial intelligence lawyer because I do indeed know my field. I am a personal information protection lawyer. It’s a technology I integrate into my practice. I think that’s really my message today.
(1710)
    Thank you very much.
    I have a brief question for you, Mr. Bourgon. You said earlier that a global conversation needs to start. Were you thinking of a worldwide conversation? Where should we start?

[English]

     Speaking about global and worldwide conversations, I would certainly defer to the experts in various international diplomacy circles about which forums are best. The United Nations exists, and so does the OECD. I'm not sure which one would be the best venue for this global conversation.
    I'm very happy to speak more to which organizations I think would need to be part of that conversation. The most important thing here is for people in those roles, who have the opportunity to do that, to understand this problem such that they can start having those conversations.

[Translation]

    Thank you.
    Thank you, Ms. Lapointe and Mr. Bourgon.
    Mr. Thériault, you have the floor for five minutes.
    One thing that really struck me during this discussion is the way we can lose control.
    Alain McKenna wrote an excellent article, which appeared in La Presse in June, in which he interviewed Yoshua Bengio, whom we will have before the committee.
    The subtitle of the article reads: “Headed towards a level of competence comparable or superior to that of humans, artificial intelligence (AI) rebelled and already defied orders given to it.”
    That is what worries Mr. Bengio. He then went on to say this:
For six months, AI has been acting more and more independently, and also acting more and more to protect itself […] To save itself, it will hack the system to copy back its own code rather than the new code that would replace it.
    Further down, he gave another example:
Claude Opus 4, the most recent large language model by the American company Anthropic, found out by reading private emails that one of its engineers was cheating on their spouse. The AI also discovered it would eventually be replaced with a new version of Claude. To avoid that, it decided to blackmail the engineer.
    That’s rather incredible. Mr. Bengio said it was a simulation, but nonetheless, no one asked it to do this.
    What followed after that is important, because this is the point I’m getting to:
What the Montreal researcher dreads most “is uncontrolled agency”, a loss of control caused by the way the most popular models are currently developed. They’re asked to perform tasks without human intervention. For these AIs, deactivating themselves can be interpreted as a barrier to completing the task.
    I’d like you to comment on that.

[English]

     I'll give some context here. For people who've been thinking about the future of AI, these are all things that we were expecting to see, and now we're seeing the evidence of that.
    There's a very technical concept here. It goes by the term convergent instrumental drives, or incentives. It's the idea that if you have a sufficiently intelligent mind, artificial or otherwise, that's trying to accomplish a task, there are certain subgoals it will pick up that you never deliberately trained into the system. That's a separate topic: We don't even have a reliable way to train the goals we want into AI systems. Putting that aside, many goals come along regardless, because they are instrumentally useful to AI systems in accomplishing whatever goal they might be pursuing.
    One goal is self-preservation. It's not some human desire to continue living, but it's very difficult to accomplish a goal if you're not around, or active, to accomplish the goal. Another one would be resource acquisition. It's often easier to pursue the goal you're trying to pursue if you have more resources to do so.
    Another example is resistance to having your objectives changed. If you're trying to accomplish a goal, you're not going to be very good at accomplishing it if you allow someone to change the goal you're trying to accomplish.
    All this was theoretical 10 years ago. It certainly made a lot of common sense. We're now starting to see that, when we have AI systems that are general and capable enough to be situationally aware in this way, these behaviours are starting to manifest.
    They look kind of silly. They make silly mistakes. You see their chains of thought in which they're plotting and saying these things. We say it's kind of dumb and don't see how that would cause a problem. They're thinking out loud about the ways in which they're trying to deceive us or do these dangerous things. These are all test environments.
    The concern is that AI systems won't stop with what we have today. The goal of the whole field, in some sense—and certainly of these companies—is to build very powerful general systems that are much smarter than we are. Not only will they be much more dangerous with these convergent instrumental goals, and whatever goals they are pursuing—which we don't know how to train into them reliably—but as they become more intelligent, which I mentioned in my statement, they will become increasingly situationally aware.
    When we test them for some of these behaviours, we start to hear them say that it seems like a test to them. When they behave in the ways we expect them to, or want them to, in tests, it becomes harder to know whether we've actually reliably created a safe system, because it is situationally aware of how we're expecting it to behave. As these systems become more powerful, have more autonomy and have more control over how the world and the economy work, that could lead to extremely bad outcomes.
(1715)

[Translation]

    13, 12, 11, 10, 9, 8...
    We will try again later.
    Mr. Hardy, you have the floor for five minutes.
    Thank you, Mr. Chair.
    Gentlemen, thank you for being here with us today.
    To put things back in context, there are currently three different points of view on artificial intelligence. First, there are the doomers, who see it as a threat, a danger to humanity. I think Mr. Bourgon is part of that group. There are also the realists. Finally, there are the enthusiasts. I think Mr. Guilmain is rather a realist.
    It’s important to set things in their timeline. We’re currently at the crossroads of a new technology. This isn’t new. We’ve been through this before. Remember that scientists once said a human travelling in a car at 100 km an hour would die. In the 2000s, many people predicted the internet would bring sweeping changes, such as the end of work and of everything we knew.
    I’d like to know what is so very different today, given the knowledge we have. I’m pretty sure I know your answer. Obviously, we don’t know what will happen in 10 years, any more than we could know how far the internet would go in the 1990s.
    Considering all the technology that has appeared over the last 100 years, what is so different now? Why is it imperative for us to legislate on the matter as quickly as possible?
    My question is for you first, Mr. Guilmain.
    Thank you for your question.
    Right now, geopolitical events are very peculiar. They’re leading us to ask many questions and to accelerate the use of artificial intelligence. Now, the real question is this: must we legislate quickly?
    If I dispassionately look at the equation between both variables, superintelligence and regulation, I’m unable to define superintelligence. I’m not saying I’m not concerned about it. Quite the contrary; I have a family, I’m thinking about the future. I’m very realistic about what this could represent.
    Now, what we really must ask ourselves is whether the course of action is to legislate on the issue; to put a moratorium on a type of technology we’re unable to define; or to at least take an interest in what we have right now, namely generative artificial intelligence, general-purpose artificial intelligence systems and riskier artificial intelligence systems in fields like biometrics, employment or justice.
    It’s true that I’m rather down-to-earth when it comes to the priorities we should have right now.
    Mr. Bourgon, the floor is yours.

[English]

     Thank you.
    One thing I'd say is that the world is very large and, unfortunately, we're allowed to have many problems at once. We're allowed to have the problem of pressing regulatory issues with artificial intelligence and to have to worry about the trajectory of the technology and where it's going.
    As for your question about people in the past who were worried about various general-purpose technologies, I would agree that many of those who warned about the risks were wrong about the impacts, but some of them were right. It turned out that nuclear weapons were real. They're very catastrophic, and we treat them very differently from the Internet. For many technologies, though, yes, there are people who will always create a big stir about them.
    General-purpose, very powerful artificial intelligence systems are different in a real sense. Having a system that's not just automating a particular cognitive task or a particular physical task but is doing the type of thinking we would be doing, or something similar to it, such that it could automate the process of automation is different in kind. It should be treated as different in kind.
    I heard in your question that there is some speculation about when it will come and what its effects will be. Technological progress and forecasting in that domain are notoriously hard. I agree. That said, it certainly seems that when we look at the history of the field and how much trouble we've had developing this technology.... Even back then, the people who founded the field, Alan Turing and I.J. Good, were already imagining what it would look like if they succeeded. They were already thinking about these risks and what it would take to control a system that is much smarter than we are. I think something has changed in the trajectory of that technology, which I can speak to, and that means we should be thinking about it coming much sooner than we otherwise thought.
(1720)

[Translation]

    Thank you.
    If we look at the matter very dispassionately, artificial intelligence uses the data we provide to it. There is a saying that goes, “garbage in, garbage out”. We see it in the case of artificial intelligence; hallucinations and false information are common. Sometimes, we even have to cross-check the data to finally get accurate information.
    We are now at a point where just as much good as bad could come from artificial intelligence. We agree on that point.
    Should we not instead have a system that adapts our legislation as quickly as technology evolves?
    I’ll explain. Many good things will come from artificial intelligence in the fields of medicine, energy or science, for example. Artificial intelligence can think 24 hours a day and advance technologies we humans aren’t even thinking about. So, there is a positive side.
     Mr. Hardy, sorry to interrupt you.
    I was on a roll. I'm sorry. I'm done.
    It was a good statement, but—
    I had a question, of course.
    Sorry, but I want to leave more time for other questions as well. When you take the floor again, we may have a bit more time.

[English]

     Ms. Church, please go ahead for five minutes.
    Thank you, Mr. Chair.
    Good afternoon.
    Some of my interests are in the area of consumer protection and competition; when I think about how AI is emerging, I think a bit about how some of the constraints in traditional areas of innovation don't exist. You have, for one thing, the immense resources that it takes to create, develop and operate AI, which limits the field of who can participate in this to begin with. I think that's an issue.
     You also have a very underdeveloped framework for consumer protection or product liability, if any, so I guess my first question to you, maybe Antoine and then Mr. Bourgon, would be how we build and whether we should build a concept of product liability into a framework that we're looking at. How would you suggest we do that?
    There are two different ways of thinking about product liability. You may think of a stand-alone AI act, but I tend to think that's not the right response. You may also look at the different consumer protection acts and potentially the civil code in Quebec; this could be an option for assessing what the gaps are. I will tell you, when we look at, for instance, the civil code.... I'm of a civil-law tradition in terms of my training, and I like to think that jurists are pretty creative. We've already seen some interpretations of the law being adjusted for certain uses of AI; in that sense, I am quite confident that our current laws can evolve to address AI's implications.
     I know what I know. I know what I don't know. I'm not pretending to be an expert when it comes to consumer protection. That is not my field of specialty. However, it would potentially be interesting to have a regulatory body in the field raising a finger and saying it realizes that there are gaps, so it wants to tweak an existing act to make sure, essentially, that they are being captured. That is my position when it comes to AI.
     It is much more efficient. It requires everyone at the table to be involved, but that's very different from essentially enacting a potential act on product liability, something we've seen in Europe, for example. It does exist. There are some directives and regulations on product liability, but the regime is fundamentally different in the way it's presented.
     That's my initial reaction to your question.
     I will be a little outside my lane here. I think liability can be a useful tool. I don't think it resolves the big-picture concerns that I'm worried about, but it's certainly on the trajectory to those types of systems. I expect that we'll get increasingly capable general-purpose AI systems that will be difficult to treat with liability in a sector-specific way, because the same system that can be an expert biologist helping with drug discovery can also be used for autonomous hacking and for helping developers find vulnerabilities in their code. That same model that can help with novel drug discovery can also potentially help someone develop dangerous biological compounds.
    There's a sense in which the people making this technology are making something so general and increasingly powerful in its ability to manipulate reality that it makes sense to think about how they should potentially have some liability for ensuring that the technology they're putting out there doesn't bring certain harms.
     Again, that's not going to ultimately solve the incentives for racing for superintelligence, but it certainly seems to make sense on the way there.
(1725)
    It does. On that point, you raise some very serious types of harm that could come from the operation of this technology, so how should we look at that as legislators? What types of safeguards or guardrails could we put in place to prevent that harm, rather than just trying to deal with it after the fact?
    I think some of the foundations of this are also useful for the stuff that I'm worried about with loss of control, but there's a certain school of thought that we should just let these people cook. The technology will cause a bunch of enormous benefits, and we don't want to limit them.
     I find it hard to imagine that we're going to end up in a stable world if we succeed at creating these systems that are increasingly capable and that have dual-use capabilities with national security implications. We want AI developers to be able to make models that can help with novel drug discovery. Do we want those models that might also help someone create an unprecedentedly powerful bioweapon? Do we want those models to be open-source?
     We should probably have some framework under which we know who is capable of training a genius in a data centre and what we can do with those very powerful technologies. If we just proliferate them openly in perpetuity, it could create a world that's unstable and that we won't be able to control. That's not to say there aren't a bunch of benefits to open sourcing some of these models; we should open source all the ones we can that don't impose those risks.
     Thank you, Ms. Church.

[Translation]

    If the witnesses can stay for three rounds of questions, each lasting two and a half minutes, I think that we can finish the second hour of the meeting quickly. Can we do this? Okay.
    I'll give the floor first to Mr. Thériault, then to Mr. Hardy and then to Mr. Sari.
    I think that we need to discuss ethical issues because ethics are more demanding than law. In a society where values are shared, law is the lowest common denominator. Before we can begin to effectively regulate an area, we must first understand the matter at hand. We can then try to find the best ways of doing so. We mustn't downplay this letter on the risks of superintelligence from 800 artificial intelligence experts by calling these experts alarmists. This technology has good and bad sides.
    The new Minister of Artificial Intelligence and Digital Innovation announced that he would focus less on regulating artificial intelligence and more on harnessing its economic benefits.
    Do you think that this approach is a bit naive? What are your thoughts, Mr. Bourgon?

[English]

     It really depends on which applications and which kind of AI he's talking about. Many people are surprised to hear that there was a recent executive order in the United States about the Genesis mission to have the U.S. government integrate AI, accelerate a bunch of science with AI and remove barriers to doing that. I think that's great. I support that.
    If we're talking about disregarding things and trusting the companies to do whatever they want as they scale to these powerful systems, I would disagree with that part.
    Unfortunately, I don't know exactly what the minister was talking about. There's one way that I could very much agree with what he said, and there's another way that I think it would be an extremely big mistake.

[Translation]

    Yes, we'll invite him to appear. We can ask him the question directly.

[English]

    If he's around, I'm here all night tonight and tomorrow.

[Translation]

    Thank you.
    I kept to my time, didn't I, Mr. Chair?
    Yes, because you have less than 40 seconds left. That works for Mr. Hardy.
    Mr. Hardy, you have the floor.
     We said that artificial intelligence had many positive applications, and that we obviously needed to exercise a certain amount of control. Since Mr. Bourgon just responded to Mr. Thériault, I'll turn to Mr. Guilmain.
    Given that this field is evolving extremely quickly, we need to promote the good and legislate against the bad. We need to leave things alone for a while, look at how this technology is being used from an ethical and logical standpoint and then legislate if we see any straying from the straight and narrow. We'll still maintain the things that benefit humanity in scientific, medical and other ways.
    Do you have any comments on this? Do you think that we should have evolving legislation?
(1730)
    I'm honoured to appear before this committee.
    If we really did create evolving legislation, I think that I would speak every year on each of the nine birthdays of the committee members. Indeed, so many changes are taking place that I myself have trouble keeping up with the technological, legislative and societal changes.
    I won't deny that, even though it's unfortunate, law remains a science of reaction. I agree with you that law isn't ethics. Its very nature is unfortunately imperfect and it will never be perfect. It's a bit like Don Quixote trying to evolve, but always lagging behind the times. However, we can remain visionaries in our approach to drafting legislation, while staying neutral from a technological standpoint.
    At the moment, we're already having trouble defining what constitutes an artificial intelligence system. On a more basic level, more recently in Canada, we weren't sure what constituted a high‑impact artificial intelligence system. Even today, this concept alone is still subject to debate. The thing that really scares me about this type of approach is actually having to celebrate each of your birthdays once a year. It would be nice, but challenging.
    That's why I think that this meeting is vital. There will be others. That said, we must consider whether legislation can truly evolve over time. I think that this would be difficult. Take fax machines, for example. No one uses them anymore. However, until recently, the Code of Civil Procedure still included provisions on them. We end up in rather bizarre situations where legislation refers to older technologies. The idea of superintelligence may still exist in five years. However, it will no longer be called that, and the legislation will inevitably still refer to this type of concept.
    Thank you, Mr. Hardy.

[English]

     Mr. Saini, you have two and a half minutes. Go ahead, please.
     Thank you for coming.
     This is something very amazing and very alarming.
     My questions are for both of you. Should Parliament use AI? If so, should it be limited to parliamentary work such as voting, the consideration of laws and enacting laws?
     I'd like to hear from both of you.
    Absolutely. It would have taken me a lot longer to make my speech as eloquent as it was if I hadn't had an AI model helping me. I talk to people all the time who are in policy and who currently don't have access to these models. They are secretly doing the research on their phones, and the models are helping them with whatever policy brief they're working on, because they don't have official access.
     I think there are enormous.... The models still hallucinate, and there are problems. People have to understand how to use the current models in ways that will help their work, but I think it's going to increasingly be the case that, for the AI systems of today, you're just going to fall behind if you're not finding ways to integrate them into your work life. That's no different for Parliament.
    I have the same response: I think it should be used responsibly. I always like to say that, for AI, what we have at the moment is a bad summer student, a really bad one who can be tested. Still, it's useful for some tasks. That's what we have today.
     It's also good to use it on a daily basis, including by the people who are very skeptical, because at the end of the day, it's not going to go away.
    Are there other governments that Canada could follow as a lead to see if those systems are working and protecting the dignity of...?
    From my perspective, the country that has invested the most in this would be the U.K., with its AI Security Institute. I think that organization is the best-resourced government body trying to understand AI holistically. It is looking at the threats, trying to understand the big alignment problems that I'm talking about and trying to be the place the government feels it has permission to listen to on these issues, and it does some of the best work. In fact, when I talk to U.S. policy-makers, I often give them trouble, asking why they are letting the U.K. hire all the best Americans to advise it about this technology. There are a bunch of smart Canadians the government could probably be hiring to do the same thing.
    I think the U.K. is doing great work.
(1735)
     Antoine, do you have anything?
    It depends on the topic. When it comes to innovation, you may have seen the recent Genesis mission executive order in the U.S. Clearly, they are interested in innovation. When it comes to regulation, the U.K. is probably a good example.
    I would say that at the moment, any jurisdiction, any country, has something to bring to the table, and that's the core of Canada. We take what's best from the rest of the world. That's, from my perspective, the DNA of this country. I think we can—by watching and by being slow, steady and really organized—potentially get to an amazing place.
     Thank you, Mr. Saini.
     Thank you both, Mr. Guilmain and Mr. Bourgon, for being here for our first meeting.
     As committee members know, the work plan showed that we were going to have the Minister of Artificial Intelligence here. For the benefit of the committee, I'll say that we've reached out—probably eight or nine times, Madam Clerk—to the minister to appear, and that still has not been arranged. If anybody has any ability to make sure the minister comes before the committee, I would strongly suggest that you push him and his office to do that.
    Thank you, gentlemen, for being here and providing to the committee what I thought was fascinating information.
    I'm going to suspend. We'll go in camera to deal with committee business, and we'll return in a few minutes.
     The meeting is suspended.
    [Proceedings continue in camera]