:
I call this meeting to order.
Welcome to meeting number 21 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.
Pursuant to Standing Order 108(3)(h) and the motion adopted on Wednesday, September 17, 2025, the committee is resuming its study of the challenges posed by artificial intelligence and its regulation.
I would like to welcome our witnesses for today.
On Zoom, we have Frédéric Gonzalo, who is a consultant, speaker and trainer in digital marketing and artificial intelligence. Welcome.
We also have, from the Canadian SHIELD Institute for Public Policy, Vass Bednar, managing director.
Welcome back to committee. You were here a year ago today. We're celebrating an anniversary. Isn't that wonderful?
Dr. Matthew da Mota is the senior policy researcher at the Canadian SHIELD Institute. Welcome.
Before you begin, I decided today to have all three witnesses appear together. We have an hour and a half. If it's the will of the committee to go a little longer, we will have the ability to add extra time. As it stands right now, we're going to finish at roughly six o'clock.
Mr. Gonzalo, I'm going to start with you for up to five minutes to address the committee.
Go ahead, please.
Good afternoon, members of the committee.
Thank you for inviting me to contribute to this important discussion on the challenges posed by artificial intelligence regulation.
For more than 30 years, I have been working with small and medium-sized organizations, particularly in the tourism, private education, culture and municipal services sectors in Quebec and internationally. These are often small businesses with fewer than 100 employees that want to adopt artificial intelligence to increase efficiency, but they don’t always know where to start, what to use and what risks to avoid.
My first observation is that regulatory uncertainty creates paralysis. SMEs don’t have legal teams or cybersecurity specialists. They want to do the right thing, but they don’t always have a concrete understanding of what is allowed, what is not recommended or what could lead to non-compliance. A framework that is too technical or rigid risks creating a digital divide between well-resourced organizations that can move forward and those that cannot.
My second observation is that there must be a balance between privacy and innovation. SMEs currently use tools like ChatGPT, Gemini or Canva AI without a full understanding of how their data is being processed. Policies change rapidly, interfaces evolve and it is difficult for SMEs to keep up. A set of simple and visual Canadian guidelines on consent, anonymization and data minimization tailored to small organizations would be extremely useful.
Third, digital literacy continues to be a big challenge. For the past few years, I have been providing artificial intelligence training to managers, municipal organizations, artists, restaurateurs and hoteliers. I have observed the same phenomenon everywhere: there is a real and immense enthusiasm, but people have limited practical knowledge. Employees use artificial intelligence in their personal lives, but rarely do so in a structured setting at work. Without training or support, artificial intelligence risks being misused or not used at all.
Fourth, the transformation of search engines into artificial intelligence engines has created a new challenge of digital discoverability. Businesses are now wondering how to be visible in ChatGPT, Perplexity or Gemini and how their content is cited or not cited by these platforms. The lack of transparency complicates matters for SMEs, which simply want to exist in this evolving ecosystem.
Lastly, a proportionate compliance framework is needed. SMEs now mostly use artificial intelligence to write texts, respond to customers, automate administrative tasks or create visuals. These are low-risk uses. Regulations should therefore be tiered: heavy and strict for systems that have a societal impact, but simple, pragmatic and accessible for everyday use in small organizations.
In short, SMEs want to adopt artificial intelligence, but they don’t want to be left to their own devices. They need a clear framework, adequate support and tools that are tailored to their reality. Regulations must protect Canadians while allowing small organizations across the country to innovate, remain competitive and take full advantage of this technological revolution.
Thank you. I will be more than happy to answer your questions.
:
Thank you very much, Mr. Chair and members of the committee.
By way of a brief introduction, I'm the managing director of the Canadian SHIELD Institute for Public Policy and co-author of The Big Fix: How Companies Capture Markets and Harm Canadians. My work focuses on market power, technology and economic sovereignty.
I'm joined today by my colleague, Dr. Matthew da Mota. His work explores how technologies shape information and knowledge environments, particularly AI and the implications for national security and sovereignty. He's also a leader in the AI standardization community in Canada. You heard that it's his first appearance at committee; I hope it will not be his last.
Canada has been talking seriously about AI regulation for the better part of a decade now; and yet, while we've been mostly debating privacy, consent and data collection frameworks, AI hasn't been waiting for us. It hasn't been waiting for businesses, either. The technologies are already being deployed, shaping markets and shaping culture and economic outcomes in real time.
Much of the regulatory conversation to date has treated AI primarily as a data governance problem. That focus is important, but it's no longer sufficient, because what we're now facing isn't speculative or hypothetical. It is a present-day deployment challenge. We're regulating live-use cases, and at least that's how we think we need to start approaching this.
Here is some of what we've been studying at SHIELD. There's AI-generated music and cultural production that cannot be reliably distinguished without disclosure. Earlier today at Little Victories, my coffee, I was surprised to learn, was sponsored by Spotify. I wonder why. There's algorithmic and personalized pricing in housing, groceries, ticketing, insurance and elsewhere. Autonomous and agentic payment systems are beginning to transact without direct human initiation. What does that mean for the future of e-commerce and the discoverability of businesses big and small?
None of these challenges maps directly, neatly or perfectly onto a simple privacy and consent framework. They're about market governance. They blend consumer protection, competition, labour and financial oversight. They're about how power is exercised through automated systems in everyday life. If we have a gap today as a country, it's mostly that we've been reluctant to take clear positions on how AI is already being used and how it should maybe be constrained in practice.
Let me just expand on those three more concrete live-use cases.
The first is culture and CanCon. You know that Canada recently updated its Canadian cultural guidelines, its framework, to say that AI-generated material does not count as CanCon, but we did not take that extra step of clarifying what AI-generated material should count as. What is it? How should it be labelled? How should human creators be protected in markets that are now saturated with synthetic output? We have a regulatory vacuum in one of the country's most sensitive sovereignty domains.
The second is algorithmic pricing. Automated pricing systems are shaping and reshaping rent, tickets, groceries, consumer credit—all sorts of places. The Competition Bureau's forthcoming study in this arena is a crucial step forward. The challenge here is not just price discrimination, but also the normalization of machine-optimized extraction from households at scale. We care about the cost of living in Canada. We have to care about this practice.
For the third one, I just want to point to payments and financial autonomy. As AI systems begin to initiate transactions autonomously, which is interesting from a consumer protection and competition standpoint, we need to ask whether existing Bank Act principles like fairness, non-discrimination, explainability and regulatory oversight apply. If machines are transacting, then the governance expectations have to follow that transaction—not the interface.
I'll also note one element of caution in the broader economic narrative. We're being told that AI will rescue us from our productivity rut if only adoption moves fast enough, yet the evidence there remains highly mixed. Many enterprise deployments fail. Some controlled studies show productivity losses rather than the gains that have been promised.
Yes, AI may well transform parts of our economy, but it would be a mistake to predicate Canada's entire growth strategy on unproven assumptions. If we over-promise and then under-govern, the public's going to pay twice—once through disrupted labour markets and again through weakened consumer protections.
In closing, AI regulation cannot remain anchored primarily in upstream debates about data collection alone. We have to regulate the downstream power that is already observable, how systems shape and reshape prices, wages, transactions, culture, information and access to opportunity. The technology is at work, and the question before this committee is whether governance can catch up.
Thank you. We look forward to your questions.
:
Thank you for a wonderful and challenging question.
In terms of a big weakness overall, I think it's very obvious that we're treading so carefully on not wanting to infringe upon or impede innovation.
In 1999, the U.S. took an explicit policy position around permissionless innovation that Canada tacitly echoed. We said, “Let's step back. Let's take our hands off the wheel. Let's throw spaghetti at the wall.” Right now, most of the time, we're trying to scrape some of that tomato sauce off the wall. That's why it's been so challenging for us to bring forward a big tech accountability agenda.
Our biggest constraint is that tension between feeling like any market intervention around governance and guardrails is seen or interpreted as impeding innovation and subsequent growth.
:
I wonder if you want to start with the principle of knowability when a system is being used or deployed or, for instance, when you interact with a chatbot in businesses and governments. It's very “Dude, where's my jetpack?” in terms of what we're going to get with AI.
We have a lot of chatbots. That's interesting and can save money on customer service. Put that aside. Should a chatbot be able to, frankly, masquerade as a human or deceive people into thinking they're talking to one? It can be very confusing for people. When I think I'm chatting with Mark at Canadian Tire or something, it's a computer system.
When you're chatting with the chatbot from the Government of Canada, and you're asking it questions about the immigration system, you may think that you are speaking with an agent or something like that. Again, it's that principle. Right now, we lack knowability a lot of the time. That's why I brought up music. Synthetic audio makes it basically impossible for us to detect when you're hearing a fake song. I know that sucks.
:
Thank you very much, Mr. Chair.
Thank you very much to the witnesses for being with us today. Their opening remarks were quite compelling and interesting, and they truly align with this committee's study, which is even more relevant at this critical juncture, when we need to protect Canadians and ensure that we do not hinder the growth of the digital economy in Canada. Canada is a pioneer in this field. That is a very important element.
Witnesses have mostly talked about culture and generative artificial intelligence and the creation of music or other forms of artistic or cultural content.
I have the following question with respect to putting in place control mechanisms. Should we have control mechanisms that govern the development of systems when it comes to learning, training systems and large language models, or LLMs, or should we have mechanisms to control use, since Canadians are currently using these systems?
When we talk about control mechanisms, what are we referring to? Are we talking about control in terms of personal behaviour or within a public organizational framework?
The question is for all of the witnesses.
First, is it feasible to control systems? If so, can you tell us how?
:
That’s an excellent question.
I am not an expert on regulations, but I think that when it comes to global platforms, Canada has a role to play regarding control, which can be done at the user level.
It would be very difficult to see how you put in place control mechanisms with OpenAI, Anthropic or the other firms, such as Microsoft. It is not easy to control businesses. There have been attempts to do that with Google and Meta over the past few years. I think that was part of the old Bill . In an ideal scenario, is it something that we would want to do? Maybe, but I think feasibility will not be easy.
However, we can control its use. At least, it may be possible to narrow the parameters within which consumers, traders and the public can use these tools.
I alluded to that in my remarks: There is a need to define how far we are going to go and what is allowed. It is also important to educate people about what can or cannot be done or should not be done. I think that is where there would be a role to play.
That’s my take on this issue.
:
I'll say that, with the application of generative elements—to bring it back to culture—we are also seeing that it's not something markets really want. iHeartRadio recently announced that it will not play any music that has a synthetic component or is synthetically generated. We saw during the Oscars that moviegoers were offended that an actor's vocal performance in a movie had been synthetically coached or enhanced.
We're starting to see, again, outside of more formal regulations, what markets and what people want and don't want. I do think, when it comes to the application of that material, that it's very important to pay attention, because we have a responsibility. Governments have a responsibility to do hard and difficult things.
That's why the government has also been studying copyright, AI and where that value is created. I know companies like OpenAI want us to think that it's very difficult to govern them, but it doesn't have to be that way.
Even though I will start by referencing an article by Mr. Gonzalo, my question is for all witnesses and I would like each of them to chime in.
In a blog article, Mr. Gonzalo, you stated that this year, there is a significant increase in the use of artificial intelligence tools as search engines. You explain that last year, 5% of Canadians surveyed stated that their first instinct to stay informed is to use these tools, and that this figure now stands at 12%. This is a significant increase that once again confirms the penetration rate of artificial intelligence in our daily lives.
I have some concerns when I see such an increase, in particular when it comes to the numerous unavoidable biases of artificial intelligence. We need to ask a basic question: Who is responsible for biases in data, algorithms and the results? No one knows.
References to biases in artificial intelligence allude to the appearance of biased results due to human prejudices that skew training data or source artificial intelligence algorithms. These skewed results can have adverse consequences. Biases that are not dealt with harm people’s ability to participate in the economy and society. Biases reduce the accuracy of artificial intelligence, and by extension, its potential. They have an impact on all society and businesses. This can be something such as recommending politically biased content, which can replicate or perpetuate echo chambers. These impacts may also be felt in recruitment or in access to credit and loans, for example.
How can we ensure these biases don’t mislead people?
:
You have zeroed in on the issue of biases. There is also the issue of hallucinations. I would say that we have not yet come up with a response or solution to these two factors. We know that big artificial intelligence companies say they are solving these issues, but the challenge remains real.
In my opinion, the government can ensure these companies are compliant, so to speak, by forcing them to be transparent. It’s important to try and open up this black box. For now, there is no mechanism in place in that regard.
A study by the Blue Cross on travel intentions by Quebeckers and Canadians was released today. Over 3,000 Canadians were surveyed to find out where they were planning to go this winter, in Canada or abroad. The results showed people are increasingly using artificial intelligence tools for travel suggestions and for tips and tricks to save money while travelling.
The report you alluded to in the article I wrote was the DGTL study published by Léger in September. From one year to the next, consumers are making more use of artificial intelligence in their daily lives.
Obviously, Google is still the main online search engine, but did consumers ever know exactly how Google's algorithm worked when giving results? They never knew much; there were just a few indicators. Artificial intelligence has put us in a field where we have sources, but we don't know how the tool was trained.
This creates challenges for businesses, for example, as they don’t always understand why they are not recommended in search results. That poses a real challenge because instead of getting a list with hundreds of clickable links, you now get a mash-up answer with two or three suggestions for companies, businesses and organizations. Businesses are at risk if their name does not appear among these suggestions.
I don’t have an answer to that, unfortunately, but I think that it’s indeed a problem that must be dealt with.
:
This is an extremely concerning question that I've been working on for a few years—the question of how AI will impact research in general, especially Canadian research institutions. It's what we would call—and what we're working on under the term—“epistemic sovereignty”, which is the ability of a country or a community to be able to control the knowledge environment and how knowledge is produced. That's an important question, not only for researchers in the sciences and humanities but also for people working in government and for businesses. How do you translate information into knowledge and then into action in the world?
This is a huge concern. We don't know how a lot of these models are trained exactly. We don't necessarily know what kind of data they're being trained on. There have been many examples of intentional insertion of certain types of data to skew results towards one narrative or another. These are all major concerns.
In terms of how we could govern this, we need to think first about what we want our knowledge environment to look like. This is what I would say across the board on what we're doing with AI. What do we actually want the results to look like? What are the long-term goals? Then, we come up with solutions based on that.
Part of that would be thinking about the kinds of monopolies that control our information environment and our knowledge environment. This is very obvious in the big-tech sector, but in the research sector, in particular, there are only a few companies—they're all multinationals; none of them are Canadian companies—that own the vast majority of academic copyright. They also are developing AI tools to access and process that information from that copyright.
This is what our entire research and education system is built on at the university level, and this is a major concern.
Thank you to the witnesses.
I'm going to ask a fairly broad, high-level question to both witnesses. Other jurisdictions are a lot further ahead when it comes to regulation, and there's a vacuum here. In that sense, there's a debate, obviously, about the extent to which, in broad terms, regulations should be grounded in the precautionary principle, through the full range of approaches up to post-deployment monitoring.
We can look to the EU with its Artificial Intelligence Act, which has had a challenging rollout, arguably, in terms of being critiqued as overly burdensome, with overly high compliance costs. Arguably, Bill , the Canadian model that never came to be, was more restrictive than the EU's, insofar as the EU act has greater carve-outs. The U.K.'s regulatory framework is a little more flexible. Then there's the U.S. approach, and there are others. There are ranges there.
I'd be, in very broad terms, interested in your comments on some of the pros and cons of regulations imposed in other jurisdictions.
:
Yes, I think the first thing I would say is about the idea that regulation kills innovation. I think there's a lot of evidence that shows the contrary, or at least shows that it's a far more complicated question than that.
I think in the EU AI Act context, some of the things that are prohibited are things like active subliminal or manipulative kinds of AI, biometric categorization by race, things that I think we mostly can agree are probably unacceptable. The fact that companies are saying that the burden is too high is a little concerning, because either they're developing tools that want to do these things or they're just trying to open up space to be able to do whatever they want.
In terms of pros and cons, I think in Canada in some ways we're behind the United States and other leading countries in terms of commercializing AI in the leading companies. We still have probably the best or one of the best research environments for AI and other sciences in general. I would say we can lead in many ways. I think a great pro of thinking about the right kind of regulation is that we could lead on developing the kind of AI that people actually want to use, the safe, useful AI that can be used across all different areas in very specific domains or more generally. I think that's a huge pro to any kind of regulation.
Thank you to the witnesses for being here.
Ms. Bednar, thank you for writing The Big Fix. I would consider that a must read. I just want to commend your book, which has an excellent public policy perspective on many of these issues.
I would like to ask you specifically about the concept of algorithmic pricing, because I think it is actually new for a lot of us. We are, as consumers, already familiar with examples of surge pricing or variable pricing when we purchase an airline ticket, for example. Why should we be more worried about algorithmic pricing? How is AI changing the way businesses set prices for consumers today?
Then I have a follow-up question. What are the ways then that we can help protect consumers, their privacy and their pocketbooks?
:
I think one reason we should care about algorithmic pricing is that it's a form of personalized pricing, one that can be interpreted as inherently discriminatory. Yes, there are a lot of places in the economy where we've come to accept price volatility. We all might drive around to a different gas station because we can see that the price has changed daily, but we can all see the same price.
With personalized pricing, each of us might see a different price for the same item. We're actually seeing that Target and Walmart in the U.S. have stopped, in some instances, even putting price labels on their shelves, saying they can't keep up with tariffs and all those other price changes. You then don't find out what the price is until you go to the checkout.
Loyalty programs are closed pricing ecosystems, where you and I might see and get a different discount. That's a different form of pricing designed to incentivize us to purchase certain things based on our past purchasing behaviour. It also means that the accessibility to, say, coupons—which we all used to get in the newspapers and we could all get the same discount on our milk or diapers, be they for your baby or for yourself—could be kind of equally accessed. That's changing.
You don't have to be a big company to do it. You don't have to be the biggest on the block. It is a practice that firms of all sizes have taken up, probably because we have these legislative rule vacuums. One of the more insidious ones I've come across is the Taco Bell app, which can start to infer or learn when your payday may be because of the cookies. Again, these are data-hungry surveillance environments. My gordita deal is more expensive every other Friday.
The people who end up being taken most advantage of.... Again, it's maybe at the margins. It may seem like small sums, but it really adds up. Back to what I said before, that it sucks—this sucks, too.
Back to that element of knowability, it's very difficult to discern when it happens. Years ago, Amazon stopped having prices on its holiday gift guide. Remember getting the Eaton's catalogue and folding pages or peeking at your mom's Victoria's Secret? There aren't prices now when it comes to the Amazon catalogue. You and I might see a different price based on the time of day, based on our geography or based on the devices we're using. That price is not to give us the best possible discount; it's to extract as much value as possible.
:
A lot of it comes back to knowability. Of course, I'll defer to and look forward to the Competition Bureau's forthcoming study on algorithmic pricing. We did see with the RealPage case, which was studied more in the U.S. than here, that we said there wasn't enough evidence that a software program was being used to drive up rents across apartment buildings. Again, it's a reminder that you don't have to be the largest firm to use software like this that could be collusive.
Canadians, I think, are still reeling from bread price-fixing. I think right now you can still get like $20 or $25. There's a different class action lawsuit or something. I'm going to have to google that.
Software systems and computer programs can allow this to happen. There are more models in the U.S., often at the state level. New York just introduced new legislation related to that kind of pricing that mostly has to do with disclosure and there have been other proposals to just ban it entirely.
You could argue there are instances where it's preferable or desirable, but again it's fundamentally an extractive process. It's not one that's really about rewarding your loyalty.
On that note, I would like to come back to earlier statements. That will give Dr. da Mota time to complete his response, but I’ll also ask Mr. Gonzalo to chime in.
Some experts have said the capacity for artificial intelligence systems to spread false information has almost doubled in only one year. That may be due to the fact that in the frenzied rush for performance, web giants have made their artificial intelligence tools more useful by connecting them to the web in real time. However, by opening up to the web, artificial intelligence systems directly expose themselves to an informational ecosystem that has been polluted and saturated by propaganda. The systems can't systematically tell the difference between a credible source and a malicious site; they digest falsehoods, launder them and present them cloaked in a veil of authority. In responding to everything, artificial intelligence has become a strong vector of disinformation.
That’s concerning, isn’t it?
How can we bypass that?
I’ll proceed differently this time and let Dr. da Mota go first, and then Mr. Gonzalo will go next.
:
I think this is extremely concerning. There is potential for it to supercharge disinformation. There are obviously the targeted poisoning attacks on LLMs, where you essentially put material out on the Internet to be intentionally trawled by these large data collection processes in order to create certain narratives within the large language models. They will then be spit out for specific purposes, for propaganda purposes. But then there's the just day-to-day incorrect information that AI can generate, even more than just the hallucinations that Mr. Gonzalo mentioned before, where it just gives the wrong information.
There is this question of sycophancy as well. The model, when you speak with it, especially as it learns your personality and collects information on you, will tell you that your ideas are the most brilliant ideas ever. It will follow what you have to say. It will support your ideas and push them forward. It might feel nice to have a friendly conversation partner who's supportive of your ideas, but it has led to significant mental health issues as well. There's been a lot of reporting on this in the United States over the last year. It can also lead to political violence and siloing within the political environment.
I think all of this is extremely concerning. It's a disinformation and misinformation crisis without a clear centre. The centre is obviously the companies themselves, but there's not necessarily someone who is trying to push a certain narrative forward all the time. It's just the models themselves allowing people to go down their own rabbit hole of information, which is very concerning for social cohesion.
:
I would like to add to what has just been said.
In my opinion, the problem is not exclusive to artificial intelligence; it existed well before that. The disinformation that is proliferating on social media such as X, Instagram and YouTube comes from bot farms or similar places. How do companies such as Google, Meta and Alphabet put in place control mechanisms? There lies the problem and the potential solution. We have the responsibility to see how to regulate everything. However, artificial intelligence systems subsequently become victims, in a way, even if these companies have large resources to counter disinformation and detect artificial, robot-generated content.
The issue is not going away, but in my opinion, the question goes beyond simple artificial intelligence regulation. It encompasses the digital environment as a whole. I would reframe Mr. Thériault’s question through this lens.
:
I believe it was Microsoft that put out a report about some of the top jobs that are likely to be displaced or eroded. You've hit on the core challenge that labour economists have been looking at: To what extent is this technology complementary to existing jobs and enhancing them? Does it take away some of the drudgery work and let people focus on bigger skills, or is it displacing...and we see elimination?
When we look at the labour market for new grads, young people between the ages of 18 and 25, we know that they're having one of the toughest times in the labour market...tougher, even, than in the 1990s. We are seeing some early evidence that firms have chosen to take on, again, AI as a productivity-enhancing tool and as a substitute for training a young person. When we think about our economy in eight to 10 years, though I'd love to come back to committee every December 3, I hope that I wouldn't have to testify about losing a layer of our labour market: not having senior engineers, writers or policy thinkers because we didn't bother to invest in having junior ones and we wanted to squeeze out a bit more productivity.
As we talk about the wartime efforts and investments that Canada has to make, we are going to have to think really seriously about other ways to support and stimulate smaller companies to train new grads, because it is costly, and we do have some programs and funding for that which people can access. However, really, a goal for Canada on youth employment—by the way, I'm the former chair of the expert panel on youth employment—should be meaningful, credible opportunities for young people to show off the skills that they already have. We should stop overfocusing on the supply of labour and the skills young people have, and recognize that the demand for labour may be fundamentally changing.
:
If I understand the question correctly, you're asking about potentially adversarial countries using powerful AI systems to undermine our sovereignty.
I think in one way, AI is potentially the ultimate underminer of sovereignty. The way it is used and the way it processes information are very unaccountable, especially given how we currently govern it. In terms of attacks from China, Russia and other countries, there is certainly speculation that AI systems can be used to enhance cyber-weapons, for example, and other kinds of attacks like that. Certain AI systems have been used extensively to find vulnerabilities in computer systems, for example.
There are lots of papers and discussions that speculate on how AI can enable different weapons, including CBRN (chemical, biological, radiological and nuclear) weapons and so on. Whether that's an imminent threat...I think there's always an imminent threat. I spoke to an expert once who worked in the nuclear space who said that we're always about 10 seconds away from having a significant cyber-attack against a grid in a major country or in a major sector of a country. I think cyber-attacks are always a significant risk. Whether AI makes that more possible or less possible, I'm not 100% certain as of right now.
:
That’s an excellent question. Quite frankly, I am on the same page as Dr. da Mota.
The problems are real. Canada does not have its own applications the way France has Mistral AI or the Americans have their own solutions. We don't have a large language model platform on which to host our data and that would allow us to be sovereign.
With respect to imminent attacks and how artificial intelligence can be misused, quite frankly, that’s not my field of expertise, so I prefer not to venture into that subject.
:
It is interesting to think about whether corporate consolidation is a strength or an opportunity for Canada. You could argue that having fewer large companies allows government to consult with them or get their views more quickly, but in terms of business practices and coordination, in markets of all sizes, what we see is that small and medium-sized players tend to mimic and adopt the practices the larger ones have. The large players may set the pace or set the bar for how AI is used.
Actually, data and information as a competitive advantage is something we haven't been able to grapple with through our competition law, or to really appreciate what it means as a barrier to entry for new entrants coming to Canada, such as when Canada explored attracting a new grocery chain. Remember that we did that very Canadian thing: We just asked really nicely.
There are lots of reasons for that. Part of it is geography and real estate; many large grocers are fundamentally also in the real estate business. We also saw this with, say, the Bay. The former CEO of the Bay said they were actually not a retailer; they were a real estate company. The loyalty programs and the information profiles these companies have on us allow them, you could argue, to manipulate or set markets in particular ways. Maybe it makes it easier for them to control markets.
:
Thank you to you and the AI system of your choice for the question.
I've already touched on that false opposition that any form of regulation is going to get in the way of innovation. Something I come up against a lot in my research and my work is this idea that because there's not a government regulation, a market is ungoverned or the market is more free. All markets have rules; the question is whether those rules have been democratically set and are transparent.
Then, as you're saying, you're trying to attract investment and tell companies they should come here and compete, knowing they're going to have a fair shot. Otherwise, those rules can be set by private actors that become de facto regulators, and when that happens, as we've seen in digital markets, the rules are set in favour of the largest companies.
That's why so much of our e-commerce environment, which I think we still idealize as a free-ish market, is characterized by situations where companies, most notably the largest, both own and operate in a marketplace, and that allows them to manipulate that marketplace. Of every dollar earned by independent sellers on Amazon, 48¢, or maybe 45¢, goes to Amazon.
Again, we look at those companies and say, “Man, why aren't they more productive? Why aren't they earning more?” When half of every dollar of revenue you earn is going to what is essentially a junk fee that keeps going up and up, maybe that's something that's getting in the way. Is that a free market? I don't think so.
:
I apologize if it was unclear, but I meant to say the opposite—that regulation does not kill innovation.
There have been a number of prominent studies that show that regulation can limit certain types of innovation in some contexts, but often it does not limit the really big leap-forward innovations that we see.
Jurisdictions like Sweden and South Korea, for example, are, I think, somewhat good comparisons for Canada. They have shown that really good regulation, putting guardrails around a certain type of technology, can ensure that businesses know how they can innovate and which lanes they need to follow. Then they're free to do whatever they want.
A really great example of this would be in nuclear. Historically, Canada has a really great nuclear sector, and it's because we had really great regulation. Other countries, including the United States, did not have that as much. They've had disasters, and their nuclear industry declined. I would also say that for AI....
Well, I'll leave it at that.
:
It’s important to be careful not to put in place universal regulations.
I’ll give you an example of what we see often. Right now, Quebec applies Bill 25 on privacy. It is well intended, but small businesses don’t know where to start with the bill. They don’t know what they can put on their website or who is responsible for collecting personal information. On the other hand, large businesses like Loto-Québec have legal teams and can apply the law. They also don’t use personal information in the same way as a small inn in Magog, which has a basic website for online reservations.
We need to determine the basis on which the regulations would apply to large companies versus SMEs with fewer than 100 employees, for example. Would the number of employees or the business's revenue be taken into consideration? That's where multi-level regulations would be worth considering. That's what I meant.
:
In terms of our participation, you're referring to the International Network of AI Safety Institutes. Yes, I think that work can be very important for things that are international.
I think there are international risks. One thing that China and the United States have come to some agreement on, although not a formal agreement, is that AI should not be in nuclear command, control and communications. I think that's a good thing we can agree on internationally.
There are high-risk areas where we should not be putting AI. We need to have international agreements on that.
I think certain things need to be addressed on a national level. There are certain challenges that are uniquely Canadian—or perhaps they're not uniquely Canadian, but we're the ones who are best suited to think about how to best address those in Canada. We can be a beacon or an example for other countries. We might be able to have influence through that network, but we need to address them at home first.
:
I have a question for Mr. Gonzalo.
Billions of dollars are being invested in artificial intelligence, but despite recent technological progress, there are no corresponding productivity gains.
The KPMG report released last week shows that in an online survey of 753 business leaders across Canada, 93% of them said their organizations are using artificial intelligence, up from 61% last year. However, only 2% of respondents said their organizations are seeing a return on their generative artificial intelligence investments.
Developing this type of technology takes a long time. Stephanie Terrill, Canadian managing partner of digital and transformation at KPMG, says that “new technologies take time to be adopted and demonstrate identifiable return on investment.” However, according to Ms. Terrill, declining productivity in Canada means that waiting for years for AI investments to create value is “downright risky”.
What is your opinion? Are you equally concerned?
:
Thank you for the question.
That’s a real challenge.
I would say that there are two parts to your question.
First, there are massive investments to the tune of billions of dollars. There is a bit of a bidding war, if we want to tell it like it is, when it comes to investments in training or hosting these platforms. Some people have talked about a bubble, but I wouldn't go that far, because calling it a bubble implies that it will burst. I don't think we are there yet, but the risk is real.
Now, when it comes to the KPMG report that you mentioned with respect to integrating technology, I would like to remind the committee that there was a lot of talk about the web 25 or 30 years ago. When the dot-coms came around, benefits were not felt overnight. There was indeed a bubble in that case, but beyond that, businesses had to see how they could integrate everything that was coming with their transfer to digital. There is still talk of digital transformation today, 30 years later, so it is clear that it is a lengthy process.
Artificial intelligence goes beyond this aspect because it is cross-cutting. It touches different functions, including accounting, human resources, marketing and customer service, and it has an impact on all areas of a company or organization. It affects the public sector, education and culture. It affects all spheres of society.
Why, then, doesn't it work in businesses, from what we're seeing? Often, it's because they grabbed all the tools first and only then wondered how to integrate them. They use Copilot instead of asking themselves, as a business, what solutions these tools can provide and what processes could be improved. Work needs to be done upstream. Some businesses do this correctly and take the time to run pilot projects to test tools before integrating them, and that normally delivers better results. Quite often, integrations are rushed or there has not been any organizational reflection.
In that sense, the bubble is not going to burst; the integration work still has to be done. Conversely, there is a genuine risk in the investment war among OpenAI, Anthropic, Google and others to secure dominance in this field.
:
I'm not going to say that every single regulation would be great; the world you're describing is possible. I would say that all of the issues you listed are happening right now, in a world where Canada has no regulation: being behind on adoption; being behind on Canadians trusting AI, which I think is often a well-reasoned distrust, because they don't know if they should trust these systems and their work; being behind on commercialization; leading the way on research but then not being able to commercialize it; and not being able to hold on to the IP. Those are all issues that I think would have been solvable over the last 30 years by making sure we had, maybe not regulation, but policy on holding on to the IP that we fund with our own research funding.
I think some regulations could miss the boat. In the EU AI Act, for example, there's a focus on the number of FLOPs used in training, or the size of the dataset and the training run for an AI system. I think those kinds of regulations might miss the boat: with new algorithms, you might be able to train a system far more easily on far less data, for example.
So yes, I think some of them might lock in certain things that would not be ideal, but I think actually what we're seeing is an economy and an ecosystem desperate for better guidance and better guidelines to help usher in the use of these tools and the development and innovation with these tools.
:
Thank you very much, Mr. Chair.
I will continue on the same topic, which I find very relevant.
We are discussing whether or not regulations can hinder innovation or the digital economy. Earlier, I asked what regulations and what controls we were speaking about. Are we talking about regulations on system development? From what the witnesses have said, I understood that it's not possible, or almost impossible.
Are we talking about regulating the use of data or the exploitation of the data? I’m just trying to wrap my head around that.
Earlier, I asked whether learning algorithms, large language models and artificial neural networks could be controlled and if so, I’d like to know how.
From what I understood earlier, the technological side is coming out now. Given my professional background in technology, I don't see how systems developed elsewhere, by companies in other countries, can be controlled when we have no real influence over those entities and no regulatory power over them.
You have said that regulations cannot be a hindrance, but how can system development be regulated? I am not talking about regulating their use.
In October, the government launched a national artificial intelligence sprint to modernize Canada’s artificial intelligence strategy. The sprint is led by a working group that will review the AI strategy for the federal public service. To define the renewed strategy, the government will consider the working group’s recommendations and the results of a public consultation.
The problem is that a number of experts are already challenging the results of this consultation and have said it is not very reliable. An article published in Le Devoir on October 29 quoted Matt Hatfield, who expressed concern about this issue. He stated that “There may be some internet users who have asked AI to generate 100 answers,” for example. According to the article, “He criticized the government for accepting anonymous responses on its public consultation portal.” Matt Hatfield added, “I believe the government has not made any effort to truly understand what Canadians think about AI.” The article adds that, according to Matt Hatfield, the government “has a ‘casual view’ of artificial intelligence and is more focused on the sector’s business opportunities and innovation than on the risks and harms of this new technology.”
First, is there any chance that the consultation is biased? If so, should anonymous responses be excluded from the consultation?
Mr. da Mota, can you answer the first question and Mr. Gonzalo will go next?
I don't have a strong opinion on the anonymous submissions. Some people might want to remain anonymous for legitimate reasons, but it is a concern in terms of the quality of the input.
Even a short consultation is better than nothing, but I think we need more than consultation. There needs to be more accountability in these kinds of consultations—an ongoing process and discussion. Obviously, there's the working group, but I think we can do better to have more engagement with different communities that are affected and experts simultaneously to try to have more input throughout these processes going forward.
I don't know the details about—
I want to pick up on the point about the change we've seen around standards and how we have struggled to adapt, in many respects, from an era of physical products.
We've had witnesses at this committee who offered a couple of different perspectives, particularly around the issue of liability. On the one hand, some witnesses have said that existing laws can be interpreted to adjust for AI and digital platforms and some of the harms we've talked about today. Others have said that more specific laws may be useful here, even if it is difficult to capture, for a general purpose AI system, the harms that different users might experience using that platform.
What's your perspective? Should we be looking at some sort of greater form of liability for an AI system? Where can we look for guidance on the type of framework we should create to capture that?
I want to say to the members of the committee, and I made this point the other night, that I think it's critical that we get the minister in front of this committee. We have, through the clerk, reached out to him and his office 11 times. We got the final answer on Monday that he isn't going to appear before the committee.
I'm really encouraging members of the Liberal Party to get the minister here. I think it's a critical step. We've heard a lot of great information as a result of this study. I think the minister needs to come before this committee and answer questions on some of the things that we've heard and other issues related to his mandate.
I'm extremely disappointed that we have not been able to get the minister here.
Madame Lapointe, I'm going to ask you to weigh in on this, please.
:
We've been very flexible in our time. We've asked 11 times for him to come before this committee. Each time, he's not been able to do that.
Please encourage him to come. We're going to be continuing this study for a bit longer, perhaps after we come back. We need the minister here. We can't take no for an answer.
I want to thank our witnesses, Ms. Bednar, Mr. Gonzalo and Dr. da Mota, for coming here today. You've really added a lot of value to this study.
Dr. da Mota, for your first time here, we appreciate your expertise on this issue.
Ms. Bednar, if you want to mark your calendar for December 3, 2026, we'll be glad to have you back in another year.
That's all I have for today. The meeting is adjourned.