:
Good morning, everyone. It's December, and I'm going to call this meeting to order.
I want to welcome everyone to meeting number 20 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.
[Translation]
Pursuant to Standing Order 108(3)(h) and the motion adopted on Wednesday, September 17, 2025, the committee is resuming its study of the challenges posed by artificial intelligence and its regulation.
[English]
I'd like to welcome our witnesses for the first hour today. Both are from Conjecture Ltd. We have Connor Leahy, who is the chief executive officer, and Gabriel Alfour, who is the chief technology officer.
Mr. Leahy, you have up to five minutes to address the committee. I understand that you may need or want a bit more time. If it gets up to six minutes, I would accept that, but I know we have lots of questions to ask.
Mr. Leahy, go ahead, please.
:
Thank you, Mr. Chair and members of the committee, for inviting me to testify today.
I'm an expert on the catastrophic global threats of AI and will primarily be speaking to you from this perspective.
I am the CEO of Conjecture, which is an AI safety research firm. I'm also an adviser at ControlAI, which is a non-profit focused on mitigating the security risks posed by advanced AI.
In 1985, humanity awakened to a hole in the sky. Scientists discovered that chlorofluorocarbons, CFCs, were depleting the ozone layer, which shields humanity from damaging ultraviolet radiation. At the same time, humanity also lived atop a deep fracture—a cold war between the U.S. and the U.S.S.R. that threatened nuclear annihilation.
Amidst deep geopolitical tensions, the two superpowers ultimately shook hands, signing both a landmark nuclear de-escalation treaty and the Montreal Protocol in 1987 to prohibit and phase out CFCs. This protocol ultimately received universal ratification. Despite the world's divisions, these rival powers came together to mend a hole in the sky and to recognize that never-ending nuclear escalation was in no one's interest, and the rest of the world followed.
In 2023, humanity heard a new warning call from Nobel Prize-winning AI scientists and the CEOs of major AI companies, saying, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” This risk of extinction is posed by superintelligence, the exact subset of AI that the leading AI companies are racing to develop.
Superintelligence is defined as AI that is more competent than all humans at all relevant cognitive tasks across all relevant domains and capable of acting beyond human oversight and control. If there were to exist systems that autonomously out-compete any human in all relevant tasks of science, business, persuasion, politics and warfare, and if we did not control them, it is hard to imagine a future that goes well for humanity.
A major part of the risk is that AI developers fundamentally do not understand how the AI systems they are creating actually work and cannot develop them in a safe manner. Dario Amodei, the CEO of the second-largest AI company, recently stated that we perhaps “understand 3% of how they work”, which is, in my personal opinion, somewhat of an overestimation.
AIs are not developed as code that is written line by line, as we do with traditional software. Instead, researchers essentially grow AI models by feeding them vast amounts of data and training them with enormous computing power, producing what is called a neural network rather than a set of lines of computer code.
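To give a very rough, purely illustrative sense of what "growing" a model means in practice, the process resembles the following toy sketch. Everything here is a stand-in: the tiny network, the random data and the loop sizes are invented for illustration, not any company's actual code. It shows the shape of training, which is adjusting numbers rather than writing behavioural rules.

```python
# Toy sketch of "growing" a model: no one writes rules for its behaviour;
# we repeatedly nudge the network's weights so its outputs better fit the
# data. All sizes and data here are illustrative stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

data = torch.randn(256, 10)    # stand-in for "vast amounts of data"
targets = torch.randn(256, 1)

for step in range(1_000):      # stand-in for "enormous computing power"
    optimizer.zero_grad()
    loss = loss_fn(model(data), targets)
    loss.backward()            # compute how to nudge each weight
    optimizer.step()           # the model "grows" toward the data
```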
Unfortunately, the current AI development paradigm does not allow the safety-by-design approaches that we use for other advanced, highly risky technologies. We would not, for example, build nuclear power plants if we did not know how to control nuclear reactions. Technical control methods are lagging drastically behind the advancement in AI systems capabilities. Currently, there are no legally binding AI safety regulations to protect consumers and humanity as a whole.
Where does this leave us today? Right now, multiple AI companies are pouring hundreds of billions of dollars into developing superintelligent AI as quickly as possible despite experts warning of the risks. This haste is, in my opinion, directly tied to an attempt to outrun legislation and to complete their projects before the wider public and the government wake up to the completely unconscionable risks the unconsenting public is being exposed to by private, reckless actors operating without oversight.
Recently, AI companies have been racing to automate AI research itself, allowing AIs to build even better AIs by themselves in order to reach superintelligence more quickly. This process is called recursive self-improvement. The moment an AI is built that is good enough to make better AIs, it might already be too late.
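To see why such a loop compounds so quickly, consider a toy back-of-the-envelope calculation; the improvement factor and the number of generations below are invented purely for illustration, not empirical estimates.

```python
# Toy illustration of compounding self-improvement. The 1.5x factor per
# generation and the 10 generations are made-up numbers for illustration.
capability = 1.0  # relative to a human researcher

for generation in range(1, 11):
    capability *= 1.5  # assume each generation improves the next by 50%
    print(f"generation {generation}: {capability:.1f}x human capability")

# Ten such generations yield roughly 57x, which is why the window for
# intervention can close before anyone notices it is closing.
```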
Leading scientists now estimate that superintelligence could be developed by 2030, or potentially even sooner. In the face of this pressing threat from superintelligence, I'd like to offer the committee three recommendations for how Canada can respond now.
One, the Canadian government should publicly recognize superintelligence as a national and global security threat that poses an extinction risk to humanity.
Two, Canada should begin negotiating an international agreement to prohibit the development of superintelligence, given that there is no scientific consensus on how it could be developed without threatening humanity with extinction. The agreement should also restrict and monitor superintelligence precursors such as recursive self-improvement.
Three, Canada should prevent the development of artificial superintelligence on its soil, as superintelligence would be capable of overpowering individuals, companies and even Canada's national security apparatus.
Thank you. I would be happy to take any questions you may have.
:
Mr. Chair and members of the committee, my name is Gabriel Alfour. I'm the chief technology officer and co-founder of Conjecture, an AI safety research firm. I also helped found ControlAI, a non-profit dedicated to preventing risks to humanity from artificial intelligence. ControlAI has engaged lawmakers in Canada, the U.S., the U.K. and the EU.
There are many complex and important challenges we face with AI, but in my personal and professional opinion, the most urgent one is the extinction risk posed by superintelligent AI. These are systems that vastly exceed human cognitive abilities and would be capable of out-competing us in scientific and military development, persuasion, politics, business and more. They would outsmart not just individuals, but corporations, national security establishments and governments. If built, they, not us, will be the force deciding the future.
How did we get to this point with AI?
First, the top experts from the field—the most cited AI scientists and the CEOs of the leading AI labs—warned in 2023 that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” However, these warnings were ignored. Leading AI companies are still recklessly pursuing superintelligent AI systems capable of outsmarting our best technology, engineers and national security experts, and of resisting being shut down. Their plans to control superintelligent systems are at best ungrounded and speculative—when they exist at all.
Second, there is a common misconception about AI development that we directly program how these systems behave, but we don't. We did until about 15 years ago, but modern AI systems are grown, not built: they are fed massive amounts of data, and their behaviour emerges in ways that we cannot predict or control. That is, AI is not coded line by line by humans, and researchers and engineers do not need to understand AI to create it. When AI systems encourage a young person to commit suicide, deceive their users or resist being shut down, no engineer programmed this. A consequence of this paradigm is that we do not know how to diagnose what led a system to behave that way or how to reliably prevent it from doing so again.
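To make that concrete, what an engineer can actually inspect in a trained model is a large collection of learned numbers, not readable instructions. Here is a minimal sketch; the tiny layer is a stand-in for a real network with billions of such parameters.

```python
import torch

# A trained "model" is just arrays of learned numbers (weights). This
# tiny layer stands in for a real network with billions of parameters.
model = torch.nn.Linear(4, 2)

for name, param in model.named_parameters():
    print(name, tuple(param.shape))
    print(param.data)  # opaque floating-point values; no line says "do X"
```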
Finally, as of today, artificial intelligence remains the exception, not the standard, when it comes to how high-risk industries are regulated. To operate in fields like nuclear and biotechnology, developers must comply with stringent safety standards, implement risk mitigation strategies, submit to inspections and so on, yet the AI field remains largely unregulated despite mounting concerns from within the industry. AI engineers in San Francisco have told me they do not understand what they are building, and some see what they do as clearly dangerous. Even Geoffrey Hinton, the “godfather of AI”, left Google specifically to warn about the risks of AI.
What can be done to prevent said risks of extinction from artificial superintelligence? It is my belief that countries should not unilaterally act against their own interests, much less on blind trust. Instead, they must do two things.
First, at the national level, they must halt the development of the most dangerous AI systems, namely superintelligent systems. Every country stands to lose from the development of superintelligence and to benefit from domestically halting all programs developing superintelligence. Such systems, once deployed, could not be shut down and would outperform every human at hacking and other tasks, thus threatening the national security of countries.
Second, at the international level, countries must agree to regulate and monitor the precursors to superintelligence. We should apply the same regulatory approach used for dual-use technology like nuclear, biological and chemical materials, and prohibit development programs capable of egregious harm outright—in this case, artificial superintelligence—while regulating their precursors. This will allow beneficial applications to thrive while preventing catastrophic harms.
Determining which precursor capabilities to regulate is a moving target that will evolve alongside our understanding of AI. Some precursors are, unfortunately, dual-use. Compute and data centres are economically beneficial, yet critical to developing superintelligence. Similarly, hacking capabilities offer military advantages but could enable an AI to break containment. For such dual-use precursors, international agreements are essential. No single country can mitigate these risks alone, nor should one country bear the economic cost of restrictions while others forge ahead.
Meanwhile, some precursors have narrower applications limited to AI research itself, such as systems capable of autonomously advancing AI research without human oversight, which could trigger an unchecked feedback loop of capability improvements.
Canada can also act domestically to neutralize dangerous AI systems within its borders. For example—
Gentlemen, thank you for joining us today.
This is an extremely important topic. We are only at the beginning of our study, but the witnesses who have appeared before the committee to talk about artificial intelligence seem to have very wide-ranging opinions. Some have talked about the very risky and dangerous side of artificial intelligence, while others have been very positive about the benefits it has in store for us.
I would like to draw a parallel between artificial intelligence and what we are seeing with social media. Private businesses have been allowed to develop social media directly and at breakneck speed, with a kind of nascent artificial intelligence that analyzes everything we look at to try to keep our attention focused on these networks at all times. Young people are experiencing unprecedented levels of stress, largely because of this.
How would you compare the early days of artificial intelligence with social media and with the superintelligence that is currently being developed, which you spoke about earlier?
A lot of the early research that led to the current boom in artificial intelligence started in the context of social media. Much of the early work on what is now called deep learning was done for social media recommendation algorithms.
Personally, I'm part of a younger generation, the oldest cohort of gen Z. I remember that we were given a promise, in a sense: that if we just let social media and the Internet run free, if we didn't regulate, they would bring freedom and prosperity to the world. Perhaps some members remember, for example, the Arab Spring. I, like many other people at the time, believed that widespread access to the Internet and social media would bring freedom and democracy.
These promises have turned out to be lies. They have not come true. Instead, these technologies are being used by social media companies to cannibalize many aspects of our interpersonal communications and relationships for their own benefit. These companies are now pouring hundreds of millions of dollars into lobbying and other efforts to try to prevent people from interfering.
I think the same pattern of behaviour—developing a technology so quickly that governments cannot react and actively trying to slow down governments to prevent them from regulating this technology until it's already too late—is exactly the playbook we are seeing being deployed right now by AI companies.
:
Thank you very much, Mr. Chair.
Welcome to the witnesses.
I must say that your statements have highlighted the risks associated with artificial intelligence. We usually hear about its positive aspects, but you are trying to point out the more problematic aspects.
I would like your answers to my questions to help us determine whether Canada is on the right track regarding oversight of artificial intelligence. We know that we are very advanced in this area, particularly in the Montreal region.
Here is my first question.
Canada has launched the Canadian Artificial Intelligence Safety Institute, or CAISI, whose mandate is to independently test and evaluate advanced artificial intelligence systems. In your opinion, how important is it for countries to create public and independent institutes such as the CAISI?
:
I think that's a great question.
I think we're already past many dangerous limits. For instance, we already have very persuasive systems. If we wanted to ensure that systems cannot manipulate people, that is already lost; we already have systems that are good at manipulation. The same is true for hacking. If we wanted to ensure that current systems are not good at hacking, that is already lost as well; we already have systems that are good at hacking.
Now we're only measuring how much better they're getting, how superhuman they're getting and how much faster than people they're getting. We've already passed a few points that are quite dangerous. We're already in a tightening regime, edging closer to points from which we cannot really recover. This is the type of thing we're talking about when we talk about measurements.
Another relevant one is how autonomously AI can develop itself. Right now, companies use AI more and more to develop AI, with fewer and fewer humans in the loop. This is one of the other things we try to measure: how few humans are needed to develop AI. It is a measure of interest because it tells you when a runaway loop could kick-start—basically, a phase in the development of AI where it improves faster than we can even see it coming. These are the types of measurements we usually care about.
Gentlemen, your presentations are quite compelling.
I will start by asking you a slightly more technical question.
Some people would say that this is alarmist rhetoric, that artificial intelligence is a long way off, that we have not yet attained artificial superintelligence and that we will have time to look into it when the time comes.
However, two trends indicate the complete opposite of what these people are saying. Let me know if you agree. First, computing power is increasing exponentially. Second, the artificial intelligence models currently available appear to be advancing in intelligence at an exponential rate, thanks to available data and the size of the models.
Would you agree that we don’t have as much time as we think?
:
Governments are somewhat fascinated by tools that improve efficiency, and they seem much more inclined to rush to implement applications to achieve greater efficiency. Time will tell whether there are indeed efficiency gains.
From what I understand from your remarks, artificial intelligence is a monster in the making, and it is evident that we cannot allow its development to continue without better regulation. Now, the question is how to regulate it.
In Canada, a bill that died on the Order Paper proposed the creation of the Commissioner of Artificial Intelligence and Data position within Innovation, Science and Economic Development Canada.
The recent Carney government has a minister responsible for artificial intelligence. We would like to invite him here soon, but he does not want to meet with us. Last June, he stated that he would place greater emphasis on finding ways to exploit the economic benefits of this technology rather than on regulation.
What do you think of this approach?
:
I think there are, obviously, many reasons to focus on the positives rather than the negatives. It is, frankly, more profitable and more fun, to put it simply.
It is important to note that we are facing a polycrisis in many areas across the globe. I'm less familiar with Canada than I am, for example, with the U.K., where I currently live, or my native Germany or America, but we are facing many challenges. There is often a temptation to reach for technological solutions, and technology is often a very powerful way to help address these problems. I believe AI can be helpful for many of them, but it is also important to be skeptical.
There was a time when nuclear power, as it was first being developed, was thought of as a solution to everything. Some wanted to make nuclear-powered aircraft, for example, that would spew radiation as they flew. Others wanted to use nuclear weapons for mining applications—I think the Russians actually tried that one—and many such things.
This is not to say that nuclear power is not an incredibly useful and powerful technology. I think nuclear power plants are some of the most effective ways to make energy, but the reason they are so safe, good and useful is good regulation. This took decades of hard work. It took the invention, by many experts, of whole new forms of safety engineering to reap the benefits.
I think we're seeing a similar thing here. If we try to reap the benefits without the correct amount of safety engineering that is necessary, we will see the same thing we've seen with social media repeat. We will not see the AI equivalent of safe, economically productive nuclear reactors.
There are basically two separate categories of risks. There are the risks that come from superintelligence and more generally from systems that cannot be controlled. For these risks, I recommend basically regulating the development of such systems.
The problem with such systems is that once they're developed, we cannot control them; we cannot put the genie back in the bottle. This is one type of regulation that is quite important. That is why we put a strong emphasis on international agreements and this type of regulation.
The other category covers the, let's say, more prosaic risks of current systems. Current and near-future systems are not yet superintelligent, which means that if there is a problem, we can still put the genie back in the bottle. Here, it's more at the application level. When I say the application level, I mean the AI companies—you want to regulate the bottlenecks. Here it will be beneficial to put stringent regulations on the dangerous aspects of AI and to put strong liability regimes in place, so that if the systems companies build are used for nefarious purposes, the companies will also be liable for it.
I would separate those two, one being the development part of regulation for superintelligent AI and the other being the applications for current systems and near-future systems.
:
There are basically two regimes. There's the superintelligent regime and there's what comes before that.
In the superintelligent regime, everyone loses. If Russia builds superintelligent AI systems, everyone loses; they cannot control it. It's the same thing for China, it's the same thing for the U.S. and it's the same thing for any country. This is in the superintelligent regime. It's only the superintelligent AI systems that basically have any agency left.
Then there's the pre-superintelligent regime, where there is an actual race, because until there's superintelligence, while you can still steer your systems, you stand to gain a lot of benefits from developing stronger systems. This is why international agreements are important; indeed, if we fail to reach international agreements, things will go badly.
The same is true for biotechnology. The same is true for any type of dangerous weapons, like nuclear weapons. This is why we believe that international agreements are critical, or else you get into the superintelligent regime and things go badly.
:
Thank you very much, Mr. Chair.
Thanks to the two witnesses with us today.
You have shared some very important information about artificial intelligence. I would not necessarily say that your presentations are alarming, but they are very factual. I can only agree with you about the risk. I can only agree with you on the extent of the impact that superintelligence could have on everyone’s daily life.
However, my opinion differs from yours on one point, and I would like to discuss it. You used the verb “halt”. Are you saying that we need to halt the development of these technologies or halt their use?
As you clearly explained, persuasion technologies or systems already exist. I think Canadians are already using such technologies. Do you want us to halt their use, given that most of these systems are developed outside Canada, or do you want us to halt their development on Canadian soil?
You drew a very interesting parallel with the nuclear arms race. However, in the nuclear arms race, geographical boundaries are important, which is not the case for technologies developed using training systems and artificial neural networks, which can be used anywhere in the world.
I would like to hear your opinion on this.
:
Those are excellent questions, and this is, in fact, a very difficult area.
The non-locality of digital technologies presents novel, unprecedented risks of proliferation and difficulty of control. With nuclear weapons, for example, we have the luck, in a sense, that uranium ore is quite bulky and that centrifuges are quite hard to build and, if you build them right, quite visible from space. We have some of these advantages when it comes to AI systems, such as data centres. Others, such as the open-sourcing of many such systems, present novel difficulties.
This is why we've put such emphasis on regulating development. If a superintelligent system were made, it would probably be a computer file that could never be deleted. It would be something that would spread. We might not even know what we're dealing with when it is first developed. It is quite likely that we won't even know that the first superintelligent system is superintelligent until it's too late. Already, many of our AI systems have capabilities that we didn't know about at the time of development. We only discover much later that our systems are capable of things we didn't know.
This is a novel regime. This is not something we have dealt with very well historically. Even historically, things such as export controls on software have not been very successful or have been very tricky to enforce.
It's very important to say that we do not think all AI should be halted or that all AI applications should be halted or not used by users. I'm sure my colleague would agree with me that we very much enjoy many of the AI applications on the market today. What we want is to gain the benefits of the kind of AI we have right now and continue forward into more powerful applications.
:
We generally believe that there is a lot of value in what are often called “stop button” proposals, mostly not from a technical perspective but from a sociopolitical one. As an example, I once asked someone who worked at a large tech company whether they could shut down all of their servers if they wanted to. He said no. He didn't know where all of them were. There was no single person in the entire company who knew where all the servers were and what software was running on them.
As for creating legislation, I am unfortunately not familiar with the specific proposal you mention. There is a lot of value in this, but it sounds to me as though the proposal pushes against development.
We cannot rely on waiting until we see a superintelligent system, because by the time we see one, it is already too late. It's quite likely that when the first superintelligent system gets built, we will not even recognize it as being that until quite a bit later, and that will be far too late.
It's very important—which is why there's a repeated emphasis on precursors—to make sure that such systems never get developed in the first place. To do this, we need to already be in the loop before such systems are built.
There are many different types of precursors. Some are hardware, like data centres or the GPU supply chain. Some are software, like the types of AI systems that have been built and the type of scaffolding you have on top of them.
One deep point that is important to understand is that this is a moving target. As time passes, it gets easier to build superintelligent systems, and more things fall into the category of precursors. This is also why we believe there is urgency and that we should tackle this as soon as possible.
Right now, we can get away with, for instance, preventing research programs that are aimed at building superintelligent systems. We should also focus on limiting the open-sourcing of models, because once they're there, you cannot take them back. It is the same for data centres. For every data centre, there should be stop buttons and kill switches. There should be clear regimes for what can be done with them and so on. These are the types of regulations that we should have on precursors.
Fundamentally, it is a moving target. As the technology changes, and as the way the technology is built changes, the target itself changes. The precursors of 15 years ago were very different from those of today, and it would have been much simpler to tackle the problem 15 years ago.
Thank you to our witnesses for appearing and providing valuable testimony.
Picking up on the comment from my Bloc colleague about the protection of people, I want to focus on priorities for a moment and the government's current priorities with respect to artificial intelligence.
You may be aware that the government is currently reviewing its AI strategy, but the existing AI strategy, the pan-Canadian artificial intelligence strategy, lists three priorities. The first priority is commercialization, that is, making money from AI; the second is standards or protections in that area; and the third is talent and research.
Do you think those priorities are in the proper order?
:
Good afternoon. Thank you, Mr. Chair and honourable members of the committee.
My name is Carole Piovesan. I am a managing partner at INQ Law, where I advise clients on privacy, cybersecurity, data governance and AI risk management.
I've had the privilege of contributing to AI policy discussions nationally and internationally, including through the OECD.AI Policy Observatory. I have previously appeared before this committee, as well as the INDU committee. I am an adjunct professor at the University of Toronto's faculty of law, where I teach AI regulation. As well, I co-authored the book Leading Legal Disruption: Artificial Intelligence and a Toolkit for Lawyers and the Law.
The opinions I present today are my own personal opinions and do not reflect those of my law firm.
To understand how we should govern AI, we should go back to first principles and ask what AI is trying to achieve. In 1950, Alan Turing posed what he called the imitation game: a test to determine whether machines could think. He believed that one day machines would be able to play games, remember, observe results of their own behaviours, learn from rewards and punishments, and even deliberately introduce mistakes into their working.
Today, some of the leading AI researchers around the world are divided on where the trajectory of AI is taking us. Award-winning Canadian researchers such as Yoshua Bengio and Geoffrey Hinton, both pioneers of deep learning, warn that we may soon have computers that exceed human intelligence, with profound implications for safety and control—indeed, the existential threat we all hear about. Others, such as Yann LeCun, another pioneer, advance the view that machine intelligence is best understood as augmenting human intelligence rather than replacing it.
The purpose of pursuing AI, and what those pursuits achieve, matter in how we think about governance. If a tool is to be used to extend human capabilities, we govern its use. If AI is an autonomous system capable of independent reasoning, we regulate its development and deployment with a different level of vigilance. Canada's approach must account for both.
Around the world, we are seeing at least three distinct models of governance being presented for AI.
Under the Trump administration, we are seeing a deregulatory approach in the United States with an emphasis on competitiveness over comprehensive safeguards. The federal approach relies on existing sectoral laws applied through agencies such as the FTC, while actively resisting state-level experimentation with stricter AI rules.
The United Kingdom and Singapore take a different approach. There, we are finding a much more tailored sectoral approach to AI regulation. The U.K., in particular, has a principles-based approach asking existing sector-specific regulators to interpret and apply cross-cutting principles such as safety, transparency, fairness and contestability within their domains. The U.K. considers that this approach offers critical adaptability that keeps pace with rapid technological change, although there are certain developments that suggest binding measures for the most powerful AI models may be forthcoming.
Singapore has adopted a much more soft-law, voluntary framework. There is no AI-specific regulation. However, Singapore's approach—consensus building among government, industry and citizens, through instruments such as the model AI governance framework and the AI Verify Foundation's testing tool kit—has proven somewhat successful in building a sense of trust and a common approach to AI development. With Singapore's investment in national AI literacy and its consultative, iterative approach to governance, it's a model from which Canada can draw inspiration.
Then we see the third model, which is far more prescriptive. That model is found in the EU AI Act, which I know this committee has already heard about. That act is much more horizontal and is focused on the prescriptive life cycle of AI development and deployment across the supply chain.
Canada's approach should be tailored to our context. Regulating frontier AI systems is not the same as regulating the use of Copilot in a law firm or a chatbot on a service line. The U.K.'s context-specific approach recognizes this. Canada is more like the U.K. and Singapore than the United States or Europe. We value proportionate regulation that protects rights while enabling innovation.
I'll close with my three-point call to action.
The first is to continue building a regulatory guidance approach for safe AI. Our AI safety institute must be operating at full force, demonstrating that Canada takes the safety of these systems seriously. We must continue to pursue iterative standards, guidance and a directives-based approach to artificial intelligence, with an emphasis on real-world testing for high-risk AI contexts. Lab benchmarks and offline evaluations only show how models perform on static tests, not how they actually interact in real-world use.
Second, and very importantly, we need to improve the diversity of representation and perspectives in policy and throughout the development, evaluation and deployment process. Individual perspectives matter, and they are highly underrepresented throughout the AI ecosystem.
Third, we must conduct an environmental scan to better understand, on a sectoral basis, where our laws may have gaps in accounting for AI or where AI is already accounted for, so that we have the coverage we need for the everyday use of AI in business. Our path forward should be to target soft and hard law at home in a tailored manner and to leverage Canada's trusted global position to ensure robust, harmonized standards, certifications and guidance for responsible AI.
Thank you. I welcome the committee's questions.
:
That's a fair point; the systems aren't necessarily developed here.
First of all, through international coordination, we have a mechanism to inform what those values and standards ought to look like. We have precedent for this in a number of different respects, some of which you heard about from the earlier witnesses. We have some role to play in being clear about what embedding those good values looks like and what it means to develop intelligent computers that are aligned with the role we believe they should play in society.
I will give you the example of the G7 back in 2018, when our government and all the governments of the G7 were talking very much about values in the context of AI and how we were going to advance standards, policies and practices that would protect those values. From there, you saw a movement into the Global Partnership on Artificial Intelligence, and you saw coordination through the United Nations on AI with a view to creating more harmonization in approach. Cut to the most recent G7, where the emphasis was primarily on adoption, because we're now at a stage where we can see the real-world uses of AI and are much more prone to, and in a lot of ways excited about, the adoption of these technologies, which is a good thing.
In this current context, as we are starting to become much more familiar with the technology and understand its opportunities and use, we have to be mindful of the risks, but we cannot lose sight of the opportunities. We have a role to play in ensuring safe use where there is actual risk. What I don't want to do is establish a system that applies the same kinds of controls for all uses. We have to be targeted.
You said that we need to be very aware of the risks, and I agree with you on that. I think our government is already aware of these risks, as are several G7 member governments, as you said.
You also mentioned the American approach, which is much more competitive, and the U.K. approach, which is much more based on ethics and accountability.
What approach can you really suggest to the Canadian government?
As I have another question for you, I would ask you to be a little more concise, please.
:
Thank you very much, Mr. Chair.
Welcome, Ms. Piovesan.
In June 2025, Professor Bengio, who is also the founder of Mila, the Quebec artificial intelligence institute, and scientific adviser to the institute, launched LawZero, a new non-profit research organization on artificial intelligence safety, to prioritize safety over commercial imperatives.
In your opinion, what are the greatest risks that artificial intelligence poses to safety?
You said earlier that, from the outset, we must first ask ourselves what the objectives are for using this technology. People are very much driven by the lure of profit.
:
There are a number of recommendations to ensure that AI is used in a safer manner. For expediency, I have tried to consolidate many of the recommendations from my written submissions.
We need to augment the standards we have in place. There are standards through the International Organization for Standardization, or ISO. The Standards Council of Canada is working on a body of research to support Canadian standards in AI. The application of those standards matters.
When we look at ISO standards, we are looking at a governance standard for how you operationalize responsible AI in the use of a system—really, the use of AI in a particular context—which means it is not technical; it is governance-oriented.
By increasing our understanding through a national literacy program, through more sectoral guidance, through more industry collaboration and through greater perspectives being brought to the table, we can start to have very actionable plans around how we will operationalize these standards and what “good” looks like under each of them.
:
I don't think you can legislate so that businesses aren't hiring AI instead of real people. Let me offer a different perspective.
We work with clients all the time to operationalize their AI governance programs. When we're going through use-case identification, I always ask them three questions. Number one, what is the work they don't want to do because it's boring and mundane? Number two, how much time do they spend on each of those tasks? Number three, what would they be doing if they weren't doing those tasks? A hundred per cent of the time I am told they would be more proactive, they would be able to serve their mandate better and they would be able to contribute more value to their organization.
Here are my points.
Number one, AI as a tool is going to be used within our businesses, and we can't and shouldn't stop that use.
Number two, we will have to reorient the job market. I have kids. I'm distinctly aware of where they're headed and where there may be vulnerabilities in some of their job choices. I understand that there will be shifts in the way we structure labour within Canada, but we have to make that adjustment. Instead of resisting it, we have to support re-skilling and upskilling. This is something that we as a country have been talking about for many years.
The last point is about literacy. We should help people understand how this tool is to be used and enable them to better identify and shape their own career paths, recognizing the profound transformational impact of artificial intelligence.
:
I'll take your question in two parts.
First, is there something we should be doing to support this transitional period, as people might be looking for new opportunities because of the displacement caused by AI? Second, is there something we should be doing to prevent companies from using AI instead of people?
On the second point, I don't think we should be inhibiting companies from adapting to the use of AI. I don't think that's the approach. It's certainly outside the scope of my practice, so that's very much a personal opinion, but I don't think that's the right approach.
On the first part, I agree with you. We should be continuously monitoring and investigating how we can better support our workers as we go through a transformation on a step-by-step basis.