:
I call the meeting to order.
I want to welcome everyone to meeting number 19 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.
[Translation]
That, pursuant to Standing Order 108(3)(h), the committee undertake a study to assess artificial intelligence (AI), the challenges it poses, and how it should be regulated.
[English]
I would like to welcome our witnesses for today. As an individual, we have Antoine Guilmain, who is the partner and co-head of Gowling WLG's national cybersecurity and data protection practice group. From the Machine Intelligence Research Institute, we have Malo Bourgon, who is the chief executive officer.
Mr. Guilmain, I'm going to give you up to five minutes to address the committee. If you want to start, go ahead, please.
:
Thank you very much, Mr. Chair and members of the committee, for inviting me to comment on the challenges related to regulating AI.
Although I will be testifying in English today, I will respond to your questions in English or French.
[English]
I am co-leader of the national cybersecurity and data protection group at Gowling WLG, and I'm an associate professor at the faculty of law at the Université de Sherbrooke. I am a practising lawyer called to the bars of Quebec and Paris. My evidence today represents my own views. I am here as an individual, not representing my law firm, clients or any third parties.
Much of my legal career has focused on comparative analysis of legal regimes across the globe, advising clients on their compliance obligations in the jurisdictions where I am qualified to practise. My practice focuses on data protection and cybersecurity, and it naturally extends to artificial intelligence, given its role as a major data-driven technology.
To me, Canada has always been a model of education, growth and innovation. That's why I chose to pursue my doctorate, start my family and build my life here—recently earning citizenship, which remains one of my proudest moments. I believe that Canada's institutions, diverse economy and culture of innovation create an environment well suited for the effective development, adoption and regulation of AI technologies.
Today I would like to discuss the challenges of AI, not simply as an ever-evolving technology but as a new field of regulation. In my view, grounded in my experience in the current international landscape, there are three key pitfalls that we must not overlook.
The first one is that newer doesn't mean better. There is a natural tendency to respond to new technology by creating new laws. However, consistent with the civil law tradition, leading jurists have long recommended applying ancient law to technological revolutions. This approach is not about doing nothing. Rather, it calls for revisiting existing areas of law and adapting them, case by case, to each new technology.
Today, AI does not exist in a legal vacuum in Canada. A wide range of legislation already applies, including copyright, liability, trademark law and data protection. In this last area, we are already seeing new obligations related to automated decision-making, including in Quebec, to ensure transparency when AI is used. In that sense, prior to tabling bills like the former AIDA, we should assess current laws and identify any gaps before imposing new requirements.
My second message would be that faster doesn't mean better. There is a natural tendency, again, to adopt laws as quickly as technologies evolve. However, in law more than in any other field, slow and steady often proves the wiser approach. A look at both domestic and international developments illustrates why.
In data protection, for example, the GDPR, the General Data Protection Regulation, was adopted in 2016, but it took Quebec five years to amend its own legislation in response, with Law 25, particularly in light of the GDPR's international impact. In the realm of AI, the EU AI Act, which came into force in August 2024, is already facing a form of retrenchment, especially regarding implementation timelines and the regulatory burden on tech companies. Whether it will achieve the same success as the GDPR remains uncertain.
Closer to home, AIDA faced significant changes after its introduction. The most recent version contained no fewer than 70 references to upcoming regulations in just 20 pages—an ambitious effort, but far from a self-contained legislative text.
My last message would be that heavier doesn't mean better. Again, there is a tendency to assume that the greater the burden on organizations, the better the protection for the public. This is not always the case, and, more importantly, it can undermine the competitiveness of small and medium-sized enterprises. AIDA reflected this trend, mandating multiple assessments at various stages of an AI system's life cycle. While theoretically sound, this approach is rarely feasible in practice, at least based on my experience.
In sum, I believe that AI legislation can succeed only through sustained and substantive collaboration with stakeholders in industry, academia and civil society to ensure that any framework, first, reflects a risk-based approach; second, appropriately takes into account the state of AI technology, including its current limitations; third, assigns responsibility along the AI value chain; and finally, harmonizes core concepts with existing international frameworks.
With the chair's permission, I would be pleased to submit a short written brief in French and English on the issues I have addressed in my opening remarks.
Thank you, and I look forward to answering this committee's questions.
Mr. Chair and members of the committee, my name is Malo Bourgon. I'm the CEO of the Machine Intelligence Research Institute, or MIRI, a non-profit founded in 2000 to make sure that the development of powerful AI systems is beneficial for humanity.
I grew up in Ontario, where I studied engineering and computer science at the University of Guelph, and I've worked at MIRI since 2012. Our research helped create the field of AI alignment, the study of how to build AI systems that reliably want—and do—what we want them to.
Governments face many urgent AI concerns, such as disinformation, surveillance, labour displacement and threats to democratic institutions. These are all real and important. However, my focus today is on something different: dangers from AIs that are smarter than the smartest humans at every mental task—what's often termed artificial superintelligence.
The leading AI companies today say that the creation of artificial superintelligence is their explicit goal. OpenAI's CEO, Sam Altman, recently called for “making superintelligence cheap [and] widely available”. Anthropic's Dario Amodei talks of building “a country of geniuses in a datacenter”. These AI companies weren't founded with the intention of making chatbots. To them, chatbots are a stepping stone.
Researchers at MIRI are concerned that if the world continues racing towards superintelligence using anything like today's techniques and understanding, the default outcome is that we'll lose control, likely resulting in human extinction.
Why is there such a big danger? For one thing, AI is unlike traditional software and doesn't behave exactly as its creators intend. Traditional software is written line by line, and a programmer can understand every part. Modern AI systems are grown as enormous neural networks, trained through trial and error with massive computation. Their creators have little insight into what's actually going on inside them. As a result, AIs often exhibit behaviours that nobody asked for and nobody wanted.
For years, MIRI has warned of this eventuality, and now we're starting to see early evidence. Frontier AI systems get caught cheating on evaluations. AIs sometimes drive users into states that clinicians call AI-induced psychosis, even in cases in which the AI systems themselves can readily tell that their responses are harmful to the user. When we look at their chains of reasoning, we see growing signs of attempts at deception. An especially concerning complication is that models are increasingly recognizing when they're being tested; this is called situational awareness, and it threatens the validity of all safety evaluations moving forward.
At current capability levels, these behaviours are concerning, but not catastrophic. The systems are still limited, but we must ask, what happens when they reach the capabilities the companies are aiming for? Will future AI systems start pursuing their own objectives? If so, what will those objectives be? Do these systems endanger us? Can we just pull the plug? Many who have studied these questions have found the answers quite concerning.
Canadians Geoffrey Hinton and Yoshua Bengio are two of the three godfathers of deep learning—which is the paradigm that underlies most of the modern AI systems today, and certainly the most powerful ones. They have publicly warned of the dangers of extinction. In 2023, they joined other top AI scientists, and even the CEOs of OpenAI, DeepMind and Anthropic, some of the leading frontier labs, in this statement: “Mitigating the risk of extinction from AI should be a...priority alongside...pandemics and nuclear war.”
Some of these signatories lead the very companies racing fastest to build superintelligence. Elon Musk called AI a “fundamental risk to the existence of civilization”. Dario Amodei said, “there's a 25% chance that things go really, really badly”, including extinction. This is an unprecedented situation, in which even the creators of the technology are saying it's incredibly dangerous.
Catastrophe, however, is not inevitable. These dangers can be averted. The race that so many see as unstoppable is taking place in a world where most people don't understand the threat. That can change.
What can Canada do? Policy-makers can say what other leaders seem to lack the courage to: that, according to top experts, the race to superintelligence is far too dangerous. Canada can start a global conversation that changes what's possible when it comes to averting this threat. This very House could ask the leaders of those companies to testify under oath about these grave dangers.
As for the motion that initiated the study, the mover said that Canada should not “unnecessarily slow down technological development”. I agree, of course. We can keep the self-driving cars, the chatbots of today and the AI-powered drug discovery tools among many very promising AI applications. Much of the technology is extremely beneficial and promising. The only thing we need to stop is the race to AIs that exceed humans in every way. The extremely specialized chips and enormous data centres essential to that race can be reined in. Canada cannot do this alone, but it can help start the global conversation.
Canadian scientists led the way on this technology. They continue to lead through their efforts to get the world to avert the dangers. My hope is that Canada will use its voice and moral authority to push the world forward so that our best plan is not that we hope we get lucky in the presence of this threat.
Thank you. I look forward to your questions.
:
That's a great question. When I think about AI, I often try to separate the applications of AI from the current systems we have today. Many of those applications should be regulated the way most other new technologies are regulated.
The thing I focus most of my time on is thinking about where the technology is going, and what the risk will be from these very advanced systems. In that case, it is, unfortunately, a very challenging coordination problem.
If any one actor decides that the risk is too great, its slowing down does not really prevent anyone else from building a system that would pose those risks. The main area of focus should be on having those conversations with partners to figure out an agreement they could come to in order to stop pushing the frontier.
Ultimately, I think that our best chance of not succumbing to a catastrophe is finding some way to agree with international partners on which frontiers we aren't going to push.
:
Thank you very much, Mr. Chair.
I thank the witnesses for their very inspiring and interesting testimony.
Before I ask any questions, I would like to make a clarification about the history of AI.
What would be very dangerous would be a revolution, or demands for regulatory changes around the world, in relation to a technology that has recently seen a resurgence in use but has been around for a long time.
There is a resurgence in the use and exploitation of AI because it has been popularized; however, the first article in which the term “artificial intelligence” was mentioned dates back to before 1954. It is very important to remember this. Today is my birthday, and I have been using artificial intelligence in predictive analytics for over 30 years. This is a very important point, and it leads me to ask you both a question.
We are not talking only about generative AI or machine learning, with or without oversight. I do not see how regulations could be applied in either case. That is why I would like to benefit from your expertise in this area.
Would asking the government to regulate technological development really be a viable solution, or would it be better to regulate the use of technologies to manage the risks? If I own a private company that creates technology, will we regulate how I do it, or rather regulate how that technology will be used afterwards? When I talk about use, I'm talking about the collection, processing and communication of data, and that cycle is what matters most.
I would really like to hear your thoughts on this.
:
Thank you for your question and happy birthday. I am in a good mood too.
I would like to start by saying that, in this presentation, we place a great deal of emphasis on the risks, and rightly so. There are risks, but there are also good things that come from AI.
Next, I would like to emphasize something that is sometimes overlooked, namely that there are legal considerations. It is very important to stress this point. I am a lawyer specializing in privacy and cybersecurity, and I can confirm that there are rules. I have work at the moment, so there is no problem on that front.
That said, I think it is interesting to ask what kind of AI we are talking about. The motion refers to artificial superintelligence. There are nuances. My colleague Mr. Bourgon will also be able to comment on this. From an educational perspective, there are three types of AI.
First, there is narrow AI, the kind we have today, even though it has increasingly sophisticated and broad capabilities. Second, there is more general AI that would mimic human behaviour. Third, there could be artificial superintelligence.
However, it is important to understand that it will not happen overnight. Artificial superintelligence is a concept that dates back to the 1950s. Since then, there have been developments, advances, and setbacks. Today, we are where we are, and we are witnessing a growing trend that will not be reversed overnight.
This study is fundamental, in my opinion. Furthermore, in general, Canada truly has a rather unique approach to AI, both in terms of adoption and in terms of trying to regulate it. I salute this aspect, but we must not think that it will lead to an immediate change.
This is my opinion.
:
Let's talk about trying to regulate it.
I have been observing the regulatory trend in the European Union for a long time. Earlier, you mentioned there were changes just a week or two ago.
Let's use another example so that we don't rely solely on the European Union, namely Australia. When the Australians wanted to impose regulatory requirements on companies, first, there was a lot of resistance; second, some companies left the regulated areas.
Do you think this could be a risk? What would you advise the Canadian government to do to better regulate the use of AI, while keeping skills and knowledge within our country? I do mean “regulate.” I completely agree.
:
I do not think legislation will solve all the problems. Regulation is much broader than legislation. Different types of regulation come from markets, from social needs, from voluntary codes of conduct. So it is really a very broad concept.
I would like to talk to you about Quebec. You may have heard of Quebec's well-known Law 25, an Act to modernize legislative provisions as regards the protection of personal information. Even today, I still provide legal advice and group therapy on this law. I am telling you this because it is an extremely onerous law in terms of obligations, one that puts small and medium-sized businesses in absolutely untenable situations.
It is an interesting situation, because the new legislation has brought progress. Is the public better protected? Are organizations comfortable with this law? I can assure you that if we conducted a survey, we would probably get some rather surprising results.
All this to say that the law is a good tool. Again, don't get me wrong, because the law is my job. However, we still need to think about how the regulator applies it. It's really important to keep that in mind.
I will give a brief introduction.
I proposed the study we are undertaking on AI. This standing committee has three priorities: access to information, privacy, and ethics. I read the “Statement on Superintelligence,” which was signed by leading figures in the field of AI, including Geoffrey Hinton, Yoshua Bengio, and many others. The list is incredible.
Like you, Mr. Bourgon, I have read the statement. These people are saying that we must mitigate the risk of extinction due to AI, and that this should be a global priority on a par with other societal risks, such as pandemics and nuclear war.
It blew me away. From there, we can ask ourselves whether this is really serious. If we start looking, we notice a frantic, almost blind rush towards artificial superintelligence that seems to favour economic interests and the concentration and control of information over human interests. An entire vision of the human being underlies what we are doing and what the impact of artificial intelligence on human life will be. For example, when it comes to computer engineering, AI can enlighten us and do the job on the spot, which will revolutionize everything.
I will turn to you in a moment, Mr. Guilmain, but I think things are evolving quite a bit faster than you claim. Canada appointed a , which is not insignificant.
Mr. Bourgon, do you consider the people you mentioned in your presentation to be alarmists?
:
Absolutely. I will give you an example. Let's say you own a CrossFit gym. Let's say you're based in Saint-Louis-du-Ha! Ha! and you want to use AI for real-time movement feedback and potentially for performance tracking and predictive PRs—personal records. You think it's a good idea for your members.
Under the EU AI Act, it would require quite a bit of money to essentially launch and propose these features to your users. You would have to do compliance assessments. You would need to have in place accountability documentation, policies and procedures. You would potentially need to have some record-keeping obligation, a register, in case of an incident. You would need to make sure that there was human oversight. You would potentially have to notify someone if there were a problem. Remember, you are in Saint-Louis-du-Ha! Ha! and you own a crossfit gym.
I'm not saying I'm against this obligation. I think it makes sense in some situations. The fact is that—and we see this with the new laws at the moment, with their many obligations—the biggest problem falls mostly on small and medium-sized enterprises. It seems that we have not found the right response for this layer of organizations, which is really key for Canada.
I think that's my take. Again, I'm not against it. I just think it's a lot.
:
Thank you for your question.
Indeed, we have many laws. I myself specialize in cybersecurity and protecting personal information. However, many other sectors directly impact artificial intelligence. I'm thinking of copyright, trademark rights and consumer protection. Within the framework of those laws, we always have supervisory authorities associated with enabling legislation and with provisions often drafted to be technologically neutral. That's always the goal of our legislation. We confer a form of neutrality so that it stands the test of time.
That's the case, for example, with protecting personal information. We see that requirements are in place. They don't mention specific technology, but they are broad enough to potentially extend the use case, namely for any use of artificial intelligence. I'm setting aside the issue of superintelligence, because I think even I myself have a hard time defining it.
Once we've said that, it seems to me there's a trend towards passing these laws because it feels good. It's like eating Nutella: it's not bad, it's pleasant, and we tell ourselves we've done something. However, what I see most often down the line is that we have regulators who must apply these laws and have the skills to do so. We have regulators in Canada and Quebec who do extraordinary work. However, they lack the means to truly keep abreast of these changes.
Once again, the logic I'm applying focuses on maybe passing fewer laws. We have more and more legislation. However, the fact is we have excellent regulators who can conduct analyses for themselves and are also able to sound the alarm.
Let's talk about Quebec's Commission d'accès à l'information, to cite just one example. The Commission is responsible for applying Law 25, which protects Quebeckers' personal information. However, we see it's already taken a position on artificial intelligence. It tabled briefs. Its representatives explained what they think is the correct application of the law to artificial intelligence.
In the end, we end up with forms of regulation through existing legislation. That said, we cannot solve this problem today. However, we must also admit that Quebec's Commission d'accès à l'information lacks resources. In my opinion, this observation applies more generally. We can generalize about this kind of thing. We have high-calibre regulators. Perhaps the solution is to give them a real opportunity to dive into the file and keep a steady hand on it.
I think that's the expression I find interesting. To “keep a steady hand” on something essentially means having an understanding and applying legislation correctly. That's my feeling, at least.
:
One thing that really struck me during this discussion is the way we can lose control.
Alain McKenna put out an excellent article, which appeared in La Presse in June. He met with Yoshua Bengio, whom we will have before the committee.
The subtitle of the article reads: “Headed towards a level of competence comparable or superior to that of humans, artificial intelligence (AI) rebelled and already defied orders given to it.”
That is what worries Mr. Bengio. He then went on to say this:
For six months, AI has been acting more and more independently, and it's also acting more and more to protect itself […] To save itself, it will hack the system to recopy its own code rather than a new code that would replace it.
Further down, he gave another example:
Claude Opus 4, the most recent large language model by the American company Anthropic, found out by reading private emails that one of its engineers was cheating on their spouse. The AI also discovered it would eventually be replaced with a new version of Claude. To avoid that, it decided to blackmail the engineer.
That’s rather incredible. Mr. Bengio said it was a simulation, but nonetheless, no one asked it to do this.
What followed after that is important, because this is the point I’m getting to:
What the Montreal researcher dreads most “is uncontrolled agency”, a loss of control caused by the way the most popular models are currently developed. They’re asked to perform tasks without human intervention. For these AIs, deactivating themselves can be interpreted as a barrier to completing the task.
I’d like you to comment on that.
:
I'll give some context here. For people who've been thinking about the future of AI, these are all things that we were expecting to see, and now we're seeing the evidence of that.
There's a very technical concept here. It goes by the term “convergent instrumental drives”, or incentives. It's the idea that if you have a sufficiently intelligent mind, artificial or otherwise, that's trying to accomplish a task, certain subgoals will emerge even though nobody trained them into the system. That's a separate topic: We don't even have a reliable way to train the goals we want into AI systems. Putting that aside, these subgoals come along because they are instrumentally useful to an AI system in accomplishing whatever goal it might be pursuing.
One goal is self-preservation. It's not some human desire to continue living, but it's very difficult to accomplish a goal if you're not around, or active, to accomplish the goal. Another one would be resource acquisition. It's often easier to pursue the goal you're trying to pursue if you have more resources to do so.
Another example is resistance to having your objectives changed. If you're trying to accomplish a goal, you're not going to be very good at accomplishing it if you allow someone to change the goal you're trying to accomplish.
All this was theoretical 10 years ago. It certainly made a lot of common sense. We're now starting to see that, when we have AI systems that are general and capable enough to be situationally aware in this way, these behaviours are starting to manifest.
They look kind of silly. They make silly mistakes. You see their chains of thought in which they're plotting and saying these things. We say it's kind of dumb and don't see how that would cause a problem. They're thinking out loud about the ways in which they're trying to deceive us or do these dangerous things. These are all test environments.
The concern is that AI systems won't stop with what we have today. The goal of the whole field, in some sense—and certainly of these companies—is to build very powerful general systems that are much smarter than we are. Not only will they be much more dangerous with these convergent instrumental goals, and whatever goals they are pursuing—which we don't know how to train into them reliably—but as they become more intelligent, which I mentioned in my statement, they will become increasingly situationally aware.
When we test them for some of these behaviours, we start to hear them say that it seems like a test to them. When they behave in the ways we expect them to, or want them to, in tests, it becomes harder to know whether we've actually reliably created a safe system, because it is situationally aware of how we're expecting it to behave. As these systems become more powerful, have more autonomy and have more control over how the world and the economy work, that could lead to extremely bad outcomes.
Gentlemen, thank you for being here with us today.
To put things back in context, there are currently three different points of view on artificial intelligence. First, there are doomers, who see it as a threat, a danger to humanity. I think Mr. Bourgon is part of that group. There are also realists. Finally, there are enthusiasts. I think Mr. Guilmain is rather a realist.
It’s important to set things in their timeline. We’re currently at the crossroads of a new technology. This isn’t new; we’ve been through it before. Remember that scientists once said a human travelling in a car at 100 km an hour would die. In the 2000s, many people predicted the internet would sweep away everything we knew, including the end of work.
I’d like to know what is so very different today, given the knowledge we have. I’m pretty sure I know your answer. Obviously, we don’t know what will happen in 10 years, any more than we could know how far the internet would go in the 1990s.
Considering all the technology that has appeared over the last 100 years, what is so different now? Why is it imperative for us to legislate on the matter as quickly as possible?
My question is for you first, Mr. Guilmain.
:
Thank you for your question.
Right now, current geopolitical events are very peculiar. They’re leading us to raise many questions and accelerate the use of artificial intelligence. Those are the reasons why we’re asking ourselves a ton of questions. Now, the real question is this: must we legislate quickly?
If I dispassionately look at the equation between both variables, superintelligence and regulation, I’m unable to define superintelligence. I’m not saying I’m not concerned about it. Quite the contrary; I have a family, I’m thinking about the future. I’m very realistic about what this could represent.
Now, what we really must ask ourselves is whether the right course of action is to legislate on the issue, to put a moratorium on a type of technology we’re unable to define, or to at least take an interest in what we have right now: generative artificial intelligence, general-purpose artificial intelligence systems and slightly riskier artificial intelligence systems in fields like biometrics, employment or justice.
It’s true that I’m rather down-to-earth when it comes to the priorities we should have right now.
One thing I'd say is that the world is very large and, unfortunately, we're allowed to have many problems at once. We're allowed to have the problem of pressing regulatory issues with artificial intelligence and to have to worry about the trajectory of the technology and where it's going.
As for your question about people in the past who were worried about various past general-purpose technologies, I would agree that many of the people who warned about the risks were wrong about the impacts, but some of them were right sometimes. It turned out that nuclear weapons were real. They're very catastrophic, and we treat them very differently from the Internet. For many technologies, though, yes, there are people who are always going to create a big stir about them.
General-purpose, very powerful artificial intelligence systems are different in a real sense. Having a system that is not just automating a particular cognitive task or a particular physical task but is doing the type of thinking we would be doing, or something similar to it, such that it could automate the process of automation, is different in kind. It should be treated as different in kind.
I heard in your question that there is some speculation about when it will come and what its effects will be. Technological progress and forecasting in that domain are notoriously hard. I agree. That said, it certainly seems that when we look at the history of the field and how much trouble we've had developing this technology.... Even back then, the people who founded the field, Alan Turing and I.J. Good, were already imagining what it would look like if they succeeded. They were already thinking about these risks and what it would take to control a system that is much smarter than we are. I think something has changed in the trajectory of that technology, which I can speak to, and that means we should be thinking about it coming much sooner than we otherwise thought.
If we look at the matter very dispassionately, artificial intelligence uses the data we provide to it. There is a saying that goes, “junk in, junk out”. We see it in the case of artificial intelligence; hallucinations or false information are common. Sometimes, we even have to compare the data to finally get accurate information.
We are now at a point where just as much good as bad could come from artificial intelligence. We agree on that point.
Should we not instead have a system that adapts our legislation as quickly as technology evolves?
I'll explain. Many good things will come from artificial intelligence in fields such as medicine, energy and science. Artificial intelligence can think 24 hours a day and advance technologies that we humans aren't even contemplating. So there is a positive side.
Good afternoon.
Some of my interests are in the area of consumer protection and competition. When I think about how AI is emerging, I think about how some of the constraints found in traditional areas of innovation don't exist here. For one thing, the immense resources it takes to create, develop and operate AI limit the field of who can participate to begin with. I think that's an issue.
You also have a very underdeveloped framework for consumer protection or product liability, if any, so I guess my first question to you, maybe Antoine and then Mr. Bourgon, would be how we build and whether we should build a concept of product liability into a framework that we're looking at. How would you suggest we do that?
:
There are two different ways of thinking about product liability. You may think of a stand-alone AI act, but I tend to think that's not the right response. You may also look at the various consumer protection acts, and potentially the Civil Code in Quebec; this could be an option for assessing where the gaps are. I will tell you, when we look at, for instance, the Civil Code.... I'm of a civilist tradition in terms of my training, and I like to think that jurists are pretty creative. We've already seen interpretations of the law being adjusted for certain uses of AI; in that sense, I am quite confident that our current laws can evolve to address AI's implications.
I know what I know, and I know what I don't know. I'm not pretending to be an expert when it comes to consumer protection; that is not my field of specialty. However, it could be interesting to have a regulatory body in that field raise its hand and say that it sees gaps and wants to tweak an existing act to make sure those gaps are captured. That is my position when it comes to AI.
That approach is much more efficient. It requires everyone at the table to be involved, but it's very different from enacting a stand-alone act on product liability, something we've seen in Europe, for example. Such regimes do exist; there are directives and regulations on product liability, but the regime is fundamentally different in the way it's framed.
That's my initial reaction to your question.
:
I'm honoured to appear before this committee.
If we really did create evolving legislation, I think I would be back speaking to you every year, on each of the nine committee members' birthdays. Indeed, so many changes are taking place that I myself have trouble keeping up with the technological, legislative and societal developments.
I won't deny that, unfortunately, law remains a science of reaction. I agree with you that law isn't ethics. By its very nature it is imperfect, and it will never be perfect. It's a bit like Don Quixote: trying to keep up, but always lagging behind the times. However, we can remain visionary in our approach to drafting legislation while staying technologically neutral.
At the moment, we're already having trouble defining what constitutes an artificial intelligence system. On a more basic level, until recently in Canada, we weren't sure what constituted a high-impact artificial intelligence system. Even today, that concept alone remains subject to debate. What really worries me about this type of approach is that I would indeed have to celebrate each of your birthdays once a year. It would be nice, but challenging.
That's why I think this meeting is vital, and there will be others. That said, we must consider whether legislation could truly evolve over time. I think that would be difficult. Take fax machines, for example. No one uses them anymore, yet until recently the Code of Civil Procedure included provisions referring to them. We end up in rather bizarre situations involving older technologies. The idea of superintelligence may still exist in five years, but it will no longer be called that, and the legislation will inevitably still refer to the old concept.
Thank you both, Mr. Guilmain and Mr. Bourgon, for being here for our first meeting.
As committee members know, the work plan showed that we were going to have the here. For the benefit of the committee, I'll say that we've reached out—probably eight or nine times, Madam Clerk—to the to appear, and that has still not been arranged. If anybody has any ability to make sure the minister comes before the committee, I would strongly suggest that you push him and his office to do so.
Thank you, gentlemen, for being here and providing to the committee what I thought was fascinating information.
I'm going to suspend. We'll go in camera to deal with committee business, and we'll return in a few minutes.
The meeting is suspended.
[Proceedings continue in camera]