
ETHI Committee Meeting




Standing Committee on Access to Information, Privacy and Ethics


NUMBER 021 | 1st SESSION | 45th PARLIAMENT

EVIDENCE

Wednesday, December 3, 2025

[Recorded by Electronic Apparatus]

(1630)

[English]

     I call this meeting to order.
    Welcome to meeting number 21 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.
    Pursuant to Standing Order 108(3)(h) and the motion adopted on Wednesday, September 17, 2025, the committee is resuming its study of the challenges posed by artificial intelligence and its regulation.
    I would like to welcome our witnesses for today.
    On Zoom, we have Frédéric Gonzalo, who is a consultant, speaker and trainer in digital marketing and artificial intelligence. Welcome.
    We also have, from the Canadian SHIELD Institute for Public Policy, Vass Bednar, managing director.
    Welcome back to committee. You were here a year ago today. We're celebrating an anniversary. Isn't that wonderful?
    Dr. Matthew da Mota is the senior policy researcher at the Canadian SHIELD Institute. Welcome.
    Before you begin, I decided today to consolidate the three witnesses together. We have an hour and a half. If it's the will of the committee to go a little bit longer, we will have the ability to have extra time. As it stands right now, we're going to finish roughly around six o'clock.
    Mr. Gonzalo, I'm going to start with you for up to five minutes to address the committee.
    Go ahead, please.

[Translation]

    Good afternoon, members of the committee.
    Thank you for inviting me to contribute to this important discussion on the challenges posed by artificial intelligence regulation.
    For more than 30 years, I have been working with small and medium-sized organizations, particularly in the tourism, private education, culture and municipal services sectors in Quebec and internationally. These are often small businesses with fewer than 100 employees that want to adopt artificial intelligence to increase efficiency, but they don’t always know where to start, what to use and what risks to avoid.
    My first observation is that regulatory uncertainty creates paralysis. SMEs don’t have legal teams or cybersecurity specialists. They want to do the right thing, but they don’t always have a concrete understanding of what is allowed, what is not recommended or what could lead to non-compliance. A framework that is too technical or rigid risks creating a digital divide between well-resourced organizations that can move forward and those that cannot.
    My second observation is that there must be a balance between privacy and innovation. SMEs currently use tools like ChatGPT, Gemini or Canva AI without a full understanding of how their data is being processed. Policies change rapidly, interfaces evolve and it is difficult for SMEs to keep up. A set of simple and visual Canadian guidelines on consent, anonymization and data minimization tailored to small organizations would be extremely useful.
    Third, digital literacy continues to be a big challenge. For the past few years, I have been providing artificial intelligence training to managers, municipal organizations, artists, restaurateurs and hoteliers. I have observed the same phenomenon everywhere: there is a real and immense enthusiasm, but people have limited practical knowledge. Employees use artificial intelligence in their personal lives, but rarely do so in a structured setting at work. Without training or support, artificial intelligence risks being misused or not used at all.
     Fourth, the transformation of search engines into artificial intelligence engines has created a new challenge of digital discoverability. Businesses are now wondering how to be visible in ChatGPT, Perplexity or Gemini and how their content is cited or not cited by these platforms. The lack of transparency complicates matters for SMEs, which simply want to exist in this evolving ecosystem.
    Lastly, a proportionate compliance framework is needed. SMEs now mostly use artificial intelligence to write texts, respond to customers, automate administrative tasks or create visuals. These are low-risk uses. Regulations should therefore be tiered: heavy and strict for systems that have a societal impact, but simple, pragmatic and accessible for everyday use in small organizations.
    In short, SMEs want to adopt artificial intelligence, but they don’t want to be left to their own devices. They need a clear framework, adequate support and tools that are tailored to their reality. Regulations must protect Canadians while allowing small organizations across the country to innovate, remain competitive and take full advantage of this technological revolution.
    Thank you. I will be more than happy to answer your questions.
(1635)
    Thank you for your opening remarks, Mr. Gonzalo.
    I now give the floor to Ms. Bednar.

[English]

     Ms. Bednar, you have up to five minutes to address the committee. Please start.
    Thank you very much, Mr. Chair and members of the committee.
    By way of a brief introduction, I'm the managing director of the Canadian SHIELD Institute for Public Policy and co-author of The Big Fix: How Companies Capture Markets and Harm Canadians. My work focuses on market power, technology and economic sovereignty.
    I'm joined today by my colleague, Dr. Matthew da Mota. His work explores how technologies shape information and knowledge environments, particularly AI and the implications for national security and sovereignty. He's also a leader in the AI standardization community in Canada. You heard that it's his first appearance at committee; I hope it will not be his last.
     Canada has been talking seriously about AI regulation for the better part of a decade now, and yet, while we've been mostly debating privacy, consent and data collection frameworks, AI hasn't been waiting for us. It hasn't been waiting for businesses, either. The technologies are already being deployed, shaping markets, culture and economic outcomes in real time.
    Much of the regulatory conversation to date has treated AI primarily as a data governance problem. That focus is important, but it's no longer sufficient, because what we're now facing isn't speculative or hypothetical. It is a present-day deployment challenge. We're regulating live-use cases, and at least that's how we think we need to start approaching this.
    Here is some of what we've been studying at SHIELD. There's AI-generated music and cultural production that cannot be reliably distinguished without disclosure. Earlier today at Little Victories, my coffee, I was surprised to learn, was sponsored by Spotify. I wonder why. There's algorithmic and personalized pricing in housing, groceries, ticketing, insurance and elsewhere. Autonomous and agentic payment systems are beginning to transact without direct human initiation. What does that mean for the future of e-commerce and the discoverability of businesses big and small?
    None of these challenges map directly, neatly or perfectly on a simple privacy and consent framework. They're about market governance. They blend consumer protection, competition, labour and financial oversight. They're about how power is exercised through automated systems in everyday life. If we have a gap today as a country, it's mostly that we've been reluctant to take clear positions on how AI is already being used and how it should maybe be constrained in practice.
    Let me just expand on those three more concrete live-use cases.
    The first is culture and CanCon. You know that Canada recently updated its Canadian cultural guidelines, its framework, to say that AI-generated material does not count as CanCon, but we did not take that extra step of clarifying what AI-generated material should count as. What is it? How should it be labelled? How should human creators be protected in markets that are now saturated with synthetic output? We have a regulatory vacuum in one of the country's most sensitive sovereignty domains.
    The second is algorithmic pricing. Automated pricing systems are shaping and reshaping rent, tickets, groceries, consumer credit—all sorts of places. The Competition Bureau's forthcoming study in this arena is a crucial step forward. The challenge here is not just price discrimination, but also the normalization of machine-optimized extraction from households at scale. We care about the cost of living in Canada. We have to care about this practice.
    For the third one, I just want to point to payments and financial autonomy. As AI systems begin to initiate transactions autonomously, which is interesting from a consumer protection and competition standpoint, we need to ask whether existing Bank Act principles like fairness, non-discrimination, explainability and regulatory oversight apply. If machines are transacting, then the governance expectations have to follow that transaction—not the interface.
    I'll also note one element of caution in the broader economic narrative. We're being told that AI will rescue our productivity rut if only adoption moves fast enough, yet the evidence there remains highly mixed. Many enterprise deployments fail. Some controlled studies show that productivity losses occur rather than the gains that have been promised.
    Yes, AI may well transform parts of our economy, but it would be a mistake to predicate Canada's entire growth strategy on unproven assumptions. If we over-promise and then under-govern, the public's going to pay twice—once through disrupted labour markets and again through weakened consumer protections.
    In closing, AI regulation cannot remain anchored primarily in upstream debates about data collection alone. We have to regulate the downstream power that is already observable, how systems shape and reshape prices, wages, transactions, culture, information and access to opportunity. The technology is at work, and the question before this committee is whether governance can catch up.
    Thank you. We look forward to your questions.
(1640)
     Thank you, Ms. Bednar. I appreciate your opening statement.
    We're going to start with our six-minute rounds of questions.
    Mr. Barrett is going to kick things off.
    Go ahead, Mike.
     Ms. Bednar, from your perspective, as someone who studies digital market failures and governance, what is the single biggest structural weakness in Canada's current AI strategy, and what's the effect of that on public accountability and our economic sovereignty?
     Thank you for a wonderful and challenging question.
     In terms of a big weakness overall, I think it's very obvious that we're treading so carefully on not wanting to infringe upon or impede innovation.
    In 1999, the U.S. took an explicit policy position around permissionless innovation that Canada tacitly echoed. We said, “Let's step back. Let's take our hands off the wheel. Let's throw spaghetti at the wall.” Right now, most of the time, we're trying to scrape some of that tomato sauce off the wall. That's why it's been so challenging for us to bring forward a big tech accountability agenda.
    Our biggest constraint is that tension between feeling like any market intervention around governance and guardrails is seen or interpreted as impeding innovation and subsequent growth.
     What should it look like? What should those guardrails look like?
    Did you have the opportunity to see any of the previous committee hearings or any of the testimony from our most recent meeting, for example?
     No; we looked a little bit at who was appearing and into companies and background.
     It's not required homework, though I do encourage all Canadians to regularly watch the proceedings of the Standing Committee on Access to Information, Privacy and Ethics.
(1645)
    Of course.
    However, the question I have posed to other witnesses is about the challenge, or the instinct, to regulate and to put up as many guardrails as we can and prevent the runaway freight train of AI superintelligence, and everything will then be okay.
    Of course, that has to be done in concert with peer countries or even with a global compact, but if you have any other actors—let's say, bad actors—who are the state sponsors, currently, of cyber-attacks on Canada, how are we able to balance regulation while also allowing ourselves to progress? We're going to need to deploy AI in some form, I would expect, to defend against AI weapons.
     One thing we have historically tried to do in one piece of legislation is regulate both the composition of these systems and their application. You can view that as an opportunity to separate some of those thoughts, which is why we're putting forward the use of use cases to understand where and how this technology is being disruptive or is deceiving people. Where do we not understand where it is and how it is distorting markets?
    The second fundamental challenge for Canada is how, in complementary trade agreements, we're constrained through the digital chapter in CUSMA from achieving what many people would want us to be able to do, such as, for instance, mandating data residency or auditing algorithms to even try to start to understand them. We cannot do that because we're constrained. As we look forward to what we want to be able to do when it comes to interpreting, understanding, appreciating, governing or having the right oversight or auditability of those algorithmic systems, we are currently unable to do that.
     What would institutional reforms need to look like that would insulate our AI oversight as a country from political cycles, inconsistency or, let's just say, knowledge deficits at the political level?
    For example, a minister responsible for artificial intelligence is a new thing, so what is the mandate of that minister? What's that ministry responsible for?
    That's going to evolve, change, cycle in and, potentially, cycle out with changes in the ministry and in the federal cabinet. How do we insulate against the cyclical nature of the political element so that we have consistency and stable regs?
     I wonder if you want to start with the principle of knowability when a system is being used or deployed or, for instance, when you interact with a chatbot in businesses and governments. It's very “Dude, where's my jetpack?” in terms of what we're going to get with AI.
    We have a lot of chatbots. That's interesting and can save money on customer service. Put that aside. Should a chatbot be able to, frankly, masquerade as a human and deceive people? It can be very confusing for people. When I think I'm chatting with Mark at Canadian Tire or something, it's a computer system.
    When you're chatting with the chatbot from the Government of Canada, and you're asking it questions about the immigration system, you may think that you are speaking with an agent or something like that. Again, it's that principle. Right now, we lack knowability a lot of the time. That's why I brought up music. Synthetic audio makes it basically impossible for us to detect when you're hearing a fake song. I know that sucks.
     Thank you.
     Thank you. I'm sure that will get recorded in the blues, “that sucks”. You can say it. It's all good.

[Translation]

    Mr. Sari, you have the floor for six minutes.
    Thank you very much, Mr. Chair.
    Thank you very much to the witnesses for being with us today. Their opening remarks were quite compelling and interesting and they truly align with this committee’s study, which is even more relevant at this critical juncture, when we need to protect Canadians and ensure that we do not hinder the growth of the digital economy in Canada. Canada is a pioneer in this field. That is a very important element.
    Witnesses have mostly talked about culture and generative artificial intelligence and the creation of music or other forms of artistic or cultural content.
    I have the following question with respect to putting in place control mechanisms. Should we have control mechanisms that govern the development of systems when it comes to learning, training systems and large language models, or LLMs, or should we have mechanisms to control use, since Canadians are already using these systems?
    When we talk about control mechanisms, what are we referring to? Are we talking about control in terms of personal behaviour or within a public organizational framework?
    The question is for all of the witnesses.
    First, is it feasible to control systems? If so, can you tell us how?
(1650)
    Mr. Gonzalo can answer the first question.
    That’s an excellent question.
    I am not an expert on regulations, but I think that when it comes to global platforms, Canada has a role to play regarding control, which can be done at the user level.
     It is hard to see how you would put control mechanisms in place with OpenAI, Anthropic or the other firms, such as Microsoft. It is not easy to control these businesses. There have been attempts to do that with Google and Meta over the past few years; I think that was part of the old Bill C‑18. In an ideal scenario, is it something that we would want to do? Maybe, but it would not be easy in practice.
    However, we can control its use. At least, it may be possible to narrow the parameters within which consumers, traders and the public can use these tools.
    I alluded to that in my remarks: There is a need to define how far we are going to go and what is allowed. It is also important to educate people about what can or cannot be done or should not be done. I think that is where there would be a role to play.
    That’s my take on this issue.

[English]

     I don't know who wants to address that, Ms. Bednar or Mr. da Mota.
    I'll say that, with the application of generative elements—to bring it back to culture—we are also seeing that it's not something markets really want. iHeart radio recently announced that they will not play any music that has a synthetic component or is synthetically generated. We saw during the Oscars that moviegoers were offended that someone had vocal coaching that was synthetic or in the background of a movie.
    We're starting to see, again, outside of more formal regulations, what markets and what people want and don't want. I do think, when it comes to the application of that material, that it's very important to pay attention, because we have a responsibility. Governments have a responsibility to do hard and difficult things.
    That's why the government has also been studying copyright, AI and where that value is created. I know companies like OpenAI want us to think that it's very difficult to govern them, but it doesn't have to be that way.

[Translation]

    I’d like to continue the discussion on OpenAI, but I have another question about Quebec culture.
     I really believe in raising awareness to address many societal challenges. It is important to educate Canadians about artificial intelligence so they can better understand it.
    Some people don’t even realize that the music they are listening to has been generated using artificial intelligence, be it on Spotify, where the algorithms push music that is now generated by artificial intelligence, or even on YouTube, for example.
    Do you think increasing public awareness could be more effective than control?

[English]

     Absolutely not. This isn't an education failure. It's impossible. It's intentional deceit. It is companies that want to extract value from real artists and musicians. These companies have already depreciated the payouts that artists receive, and they are training computer systems. Calling it AI sometimes makes it a bit fancier than it is. They're actively training systems to take artists and real bands out of the equation altogether and earn more for themselves on this fake music.
    I find it deeply offensive that we can be in elevators, at work or in a hotel room and listening to something that's frankly not real. It's just a bunch of sounds.
    I like your words “fake music”.
    There's a new word you can use now—fake music. Do you call generative music fake music?
     I call it fake news. It's fake music. It's fake sounds. It's fake.

[Translation]

    Can I chime in?
    You may, but briefly.
    Algorithms do play a powerful role. We know that today, about 70% of the content consumed on Netflix comes from the platform’s recommendations. On Spotify, 50% to 60% of music shown is driven by playlists, your tastes and your listening habits.
    Some education does come into play, but it’s important to recognize the role and the strength of these algorithms.
    Thank you.

[English]

    Well, there goes my music career after I retire. I was hoping to have a synthetic music career, but that may not work now.

[Translation]

    You have six minutes, Mr. Thériault.
(1655)
     Thank you, Mr. Chair.
    Even though I will start by referencing an article by Mr. Gonzalo, my question is for all witnesses and I would like each of them to chime in.
     In a blog article, Mr. Gonzalo, you noted a significant increase this year in the use of artificial intelligence tools as search engines. You explained that last year, 5% of Canadians surveyed said their first instinct to stay informed was to use these tools, and that this figure now stands at 12%. This is a significant increase that once again confirms the penetration rate of artificial intelligence in our daily lives.
    I have some concerns when I see such an increase, in particular when it comes to the numerous unavoidable biases of artificial intelligence. We need to ask a basic question: Who is responsible for biases in data, algorithms and the results? No one knows.
    Bias in artificial intelligence refers to the appearance of skewed results caused by human prejudices embedded in the training data or in the algorithms themselves. These skewed results can have adverse consequences. Biases that are not dealt with harm people’s ability to participate in the economy and society. They reduce the accuracy of artificial intelligence and, by extension, its potential. They have an impact on all of society and on businesses. This can be something such as recommending politically biased content, which can replicate or perpetuate echo chambers. These impacts may also be felt in recruitment or in access to credit and loans, for example.
    How can we ensure these biases don’t mislead people?
    May I answer that question?
    Yes, please go first, Mr. Gonzalo.
    You have zeroed in on the issue of biases. There is also the issue of hallucinations. I would say that we have not yet come up with a response or solution to these two factors. We know that big artificial intelligence companies say they are solving these issues, but the challenge remains real.
    In my opinion, the government can ensure these companies are compliant, so to speak, by forcing them to be transparent. It’s important to try and open up this black box. For now, there is no mechanism in place in that regard.
    A study by the Blue Cross on travel intentions by Quebeckers and Canadians was released today. Over 3,000 Canadians were surveyed to find out where they were planning to go this winter, in Canada or abroad. The results showed people are increasingly using artificial intelligence tools for travel suggestions and for tips and tricks to save money while travelling.
    The report you alluded to in the article I wrote was the DGTL study published by Léger in September. From one year to the next, consumers are making more use of artificial intelligence in their daily lives.
    Obviously, Google is still the main online search engine, but did we ever know exactly how Google’s algorithm ranked its results? Not really; there were just a few indicators. Artificial intelligence has put us in a field where we have sources, but we don’t know how the tool was trained.
    This creates challenges for businesses, for example, as they don’t always understand why they are not recommended in search results. That poses a real challenge because instead of getting a list with hundreds of clickable links, you now get a mash-up answer with two or three suggestions for companies, businesses and organizations. Businesses are at risk if their name does not appear among these suggestions.
    I don’t have an answer to that, unfortunately, but I think that it’s indeed a problem that must be dealt with.
    Mr. Thériault, I know that Dr. da Mota would also like to chime in.
    Yes, of course.

[English]

     Mr. da Mota, do you want to respond to that?
     This is an extremely concerning question that I've been working on for a few years—the question of how AI will impact research in general, especially Canadian research institutions. It's what we would call—and what we're working on under the term—“epistemic sovereignty”, which is the ability of a country or a community to be able to control the knowledge environment and how knowledge is produced. That's an important question, not only for researchers in the sciences and humanities but also for people working in government and for businesses. How do you translate information into knowledge and then into action in the world?
     This is a huge concern. We don't know how a lot of these models are trained exactly. We don't necessarily know what kind of data they're being trained on. There have been many examples of intentional insertion of certain types of data to skew results towards one narrative or another. These are all major concerns.
    In terms of how we could govern this, we need to think first about what we want our knowledge environment to look like. This is what I would say across the board on what we're doing with AI. What do we actually want the results to look like? What are the long-term goals? Then, we come up with solutions based on that.
     Part of that would be thinking about the kinds of monopolies that control our information environment and our knowledge environment. This is very obvious in the big-tech sector, but in the research sector, in particular, there are only a few companies—they're all multinationals; none of them are Canadian companies—that own the vast majority of academic copyright. They also are developing AI tools to access and process that information from that copyright.
    This is what our entire research and education system is built on at the university level, and this is a major concern.
(1700)
     Thank you, Dr. da Mota.

[Translation]

    Thank you, Mr. Thériault.
    Mr. Gonzalo, I apologize for cutting your answer short earlier, but I noticed that someone else in the room wanted to contribute.

[English]

     Mr. Cooper, you have five minutes. Go ahead, please.
     Thank you, Mr. Chair.
    Thank you to the witnesses.
    I'm going to ask a fairly broad, high-level question to both witnesses. Other jurisdictions are a lot further ahead when it comes to regulation, and there's a vacuum here. In that sense, there's a debate, obviously, about to what extent, in broad terms, regulations should be grounded based upon the precautionary principle to everything up to post-deployment monitoring.
    We can look to the EU with its Artificial Intelligence Act, which has had a challenging rollout, arguably, in terms of being critiqued as overly burdensome, with overly high compliance costs. Arguably, Bill C-27, the Canadian model that never came to be, was more restrictive than the EU, insofar as the EU model, the EU act, has greater carve-outs. The U.K.'s regulatory framework is a little more flexible. Then there's the U.S. approach, and there are others. There are ranges there.
    I'd be, in very broad terms, interested in your comments on some of the pros and cons of regulations imposed in other jurisdictions.
     Of course. I will turn it to Matthew.
    I'll just share that something I've been noticing is the language around regulatory harmonization being used now. I think it's the new way we signal a kind of deregulation or lower regulatory environment. It's a way to suggest to Canada that because we don't have our own path forward we should continue to wait and to follow others.
    But yes, there are other models that are instructive in various ways.
     Yes, I think the first thing I would say is about the idea that regulation kills innovation. I think there's a lot of evidence that shows the contrary, or at least shows that it's a far more complicated question than that.
    I think in the EU AI Act context, some of the things that are prohibited are things like active subliminal or manipulative kinds of AI, biometric categorization by race, things that I think we mostly can agree are probably unacceptable. The fact that companies are saying that the burden is too high is a little concerning, because either they're developing tools that want to do these things or they're just trying to open up space to be able to do whatever they want.
    In terms of pros and cons, I think in Canada in some ways we're behind the United States and other leading countries in terms of commercializing AI in the leading companies. We still have probably the best or one of the best research environments for AI and other sciences in general. I would say we can lead in many ways. I think a great pro of thinking about the right kind of regulation is that we could lead on developing the kind of AI that people actually want to use, the safe, useful AI that can be used across all different areas in very specific domains or more generally. I think that's a huge pro to any kind of regulation.
(1705)
     Mr. Gonzalo.

[Translation]

    I think it boils down to what I said earlier. I think we can’t be against regulation. On the contrary. The only thing that I would recommend, which I discuss with the businesses I work with, would be to adopt graduated regulations.
    Many businesses make fairly basic use of generative artificial intelligence in general, whereas bigger organizations integrate artificial intelligence on a larger scale. Both types of businesses therefore do not use artificial intelligence in the same way. Unfortunately, there is a tendency to want to introduce uniform regulations that apply to all types of businesses. The only thing that I would recommend would be to tread carefully. I think it’s good to adopt a form of regulation, but it should not be applied too broadly.

[English]

     Thank you, Mr. Cooper.

[Translation]

    Thank you, Mr. Gonzalo.

[English]

    Ms. Church, you have five minutes.
    Go ahead, please.
    Thank you, Mr. Chair.
    Thank you to the witnesses for being here.
    Ms. Bednar, thank you for writing The Big Fix. I would consider that a must read. I just want to commend your book, which has an excellent public policy perspective on many of these issues.
    I would like to ask you specifically about the concept of algorithmic pricing, because I think it is actually new for a lot of us. We are, as consumers, already familiar with examples of surge pricing or variable pricing when we purchase an airline ticket, for example. Why should we be more worried about algorithmic pricing? How is AI changing the way businesses set prices for consumers today?
    Then I have a follow-up question. What are the ways then that we can help protect consumers, their privacy and their pocketbooks?
    I think one reason we should care about algorithmic pricing is that it's a form of personalized pricing, which can be interpreted as being inherently discriminatory. Yes, there are a lot of places in the economy where we've come to accept price volatility. We all might drive around to a different gas station because we can see that the price changes daily, but we can all see the same price.
    With personalized pricing, each of us might see a different price for the same item. We're actually seeing that Target and Walmart in the U.S. have stopped, in some instances, even putting price labels on their shelves, saying they can't keep up with tariffs and all those other price changes. You then don't find out what the price is until you go to the checkout.
    Loyalty programs are closed pricing ecosystems, where you and I might see and get a different discount. That's a different form of pricing designed to incentivize us to purchase certain things based on our past purchasing behaviour. It also means that the accessibility to, say, coupons—which we all used to get in the newspapers and we could all get the same discount on our milk or diapers, be they for your baby or for yourself—could be kind of equally accessed. That's changing.
    You don't have to be a big company to do it. You don't have to be the biggest on the block. It is a practice that firms of all sizes have taken up, probably because of these legislative rule vacuums. One of the more insidious examples I've come across is the Taco Bell app, which can start to infer or learn when your payday may be because of cookies. Again, these are data-hungry surveillance environments. My gordita deal is more expensive every other Friday.
    The people who end up being taken advantage of the most are maybe at the margins. It may seem like small sums, but it really adds up. Back to what I said before about that sucking: this sucks, too.
    Back to that element of knowability, it's very difficult to discern when it happens. Years ago, Amazon stopped putting prices in its holiday gift guide. Remember getting the Eaton's catalogue and folding pages, or peeking at your mom's Victoria's Secret catalogue? There are no prices now in the Amazon catalogue. You and I might see a different price based on the time of day, our geography or the devices we're using. That price is not there to give us the best possible discount; it's there to extract as much value as possible.
(1710)
     I take from this that our legal frameworks right now are insufficient.
    How do we get to the bottom of this? How do we make sure that a business isn't setting a personalized price on a discriminatory basis based on what they can infer, presumably, of my background, my financial situation, my geography and this whole constellation of data points that they presumably have access to now through AI?
     A lot of it comes back to knowability. Of course, I'll defer to and look forward to the Competition Bureau's forthcoming study on algorithmic pricing. We did see with the RealPage case, which was studied more in the U.S. than here, that it was said there wasn't enough evidence that a software program was being used to drive up rents for apartment buildings. Again, it's a reminder that you don't have to be the largest firm to use software like this, which could be collusive.
     Canadians, I think, are still reeling from bread price-fixing. I think right now you can still get $20 or $25 through a class action settlement or something; I'd have to google that.
    Software systems and computer programs can allow this to happen. There are more models in the U.S., often at the state level. New York just introduced new legislation related to that kind of pricing, mostly having to do with disclosure, and there have been other proposals to ban it entirely.
    You could argue there are instances where it's preferable or desirable, but again it's fundamentally an extractive process. It's not one that's really about rewarding your loyalty.
    That's fascinating.
    Thank you, Ms. Church.

[Translation]

    Mr. Thériault, you have the floor for five minutes.
    Thank you, Mr. Chair.
    On that note, I would like to come back to earlier statements. That will give Dr. da Mota time to complete his response, but I’ll also ask Mr. Gonzalo to chime in.
    Some experts have said the capacity for artificial intelligence systems to spread false information has almost doubled in only one year. That may be due to the fact that in the frenzied rush for performance, web giants have made their artificial intelligence tools more useful by connecting them to the web in real time. However, by opening the web, artificial intelligence systems directly expose themselves to an informational system that has been polluted and saturated by propaganda. The systems can’t systematically tell the difference between a credible source and a malicious site and digest falsehoods, whitewash them and present them by cloaking them in a veil of authority. In responding to everything, artificial intelligence has become a strong vector of disinformation.
    That’s concerning, isn’t it?
    How can we bypass that?
    I’ll proceed differently this time and let Dr. da Mota go first, and then Mr. Gonzalo will go next.

[English]

    I think this is extremely concerning. There is the potential for it to supercharge disinformation. There are obviously the targeted poisoning attacks on LLMs, where you essentially put material out on the Internet to be intentionally crawled by these large data collection processes in order to create certain narratives within the large language models. They will then be spit out for specific purposes, for propaganda purposes. But then there's the day-to-day incorrect information that AI can generate, even beyond the hallucinations that Mr. Gonzalo mentioned before, where it just gives the wrong information.
    There is this question of sycophancy as well. The model, when you speak with it, especially as it learns your personality and collects information on you, will tell you that your ideas are the most brilliant ideas ever. It will follow what you have to say. It will support your ideas and push them forward. It might feel nice to have a friendly conversant who's supportive of your ideas, but it's led to significant mental health issues as well. There's been a lot of reporting on this in the United States over the last year. It can also lead to political violence and siloing within the political environment.
    I think all of this is extremely concerning. It's a disinformation and misinformation crisis without a clear centre. The centre is obviously the companies themselves, but there's not necessarily someone who is trying to push a certain narrative forward all the time. It's just the models themselves allowing people to go down their own rabbit hole of information, which is very concerning for social cohesion.
(1715)

[Translation]

    I would like to add to what has just been said.
    In my opinion, the problem is not exclusive to artificial intelligence; it existed well before that. The disinformation that is proliferating on social media such as X, Instagram and YouTube comes from bot farms or similar places. How do companies such as Google, Meta and Alphabet put in place control mechanisms? There lies the problem and the potential solution. We have the responsibility to see how to regulate everything. However, artificial intelligence systems subsequently become victims, in a way, even if these companies have large resources to counter disinformation and detect artificial, robot-generated content.
    The issue is not going away, but in my opinion, the question goes beyond simple artificial intelligence regulation. It encompasses the digital environment as a whole. I would reframe Mr. Thériault’s question through this lens.
    What tools could solve this problem? There must be some tools.
     Platforms are trying to implement tools. For example, YouTube requires users to say whether or not their video content was generated using artificial intelligence. People are expected to be transparent when they publish their content on some platforms, such as Meta, Facebook, Instagram and so on. However, this mechanism almost always relies on people’s goodwill.
     In our reflection on the tools that we need, we need to ask ourselves if we want to force the issue. Members will recall that a person is expected to be at least 13 years old to have a social media account, even though we know very well the reality is quite different. We have seen that some countries are starting to introduce regulations to manage things better. Perhaps these platforms should be forced to apply their user policies or terms and conditions.
    Thank you, Mr. Gonzalo and Mr. Thériault.

[English]

    Mr. Gill, you have five minutes, sir.
    Thank you, Chair.
    These days, AI is emerging very fast. What effect will AI have on the job market? Will AI create more jobs or replace them? As well, which specific jobs are most at risk from AI?
     I believe it was Microsoft that put out a report about some of the top jobs that are likely to be displaced or eroded. You've hit on the core challenge that labour economists have been looking at: To what extent is this technology complementary to existing jobs and enhancing them? Does it take away some of the drudgery work and let people focus on bigger skills, or is it displacing...and we see elimination?
     When we look at the labour market for new grads, young people between the ages of 18 and 25, we know that they're having one of the toughest times in the labour market, tougher than at any time since before the 1990s. We are seeing some early evidence that firms have chosen to take on AI as a productivity-enhancing tool and as a substitute for training a young person. When we think about our economy in eight to 10 years, though I'd love to come back to committee every December 3, I hope that I wouldn't have to testify about losing a layer of our labour market: not having senior engineers, writers or policy thinkers because we didn't bother to invest in having junior ones and wanted to squeeze out a bit more productivity.
    As we talk about the wartime efforts and investments that Canada has to make, we are going to have to think really seriously about other ways to support and stimulate smaller companies to train new grads, because it is costly, even though we do have some programs and funding that people can access. Really, though, a goal for Canada on youth employment (by the way, I'm the former chair of the expert panel on youth employment) should be to have meaningful, credible opportunities for young people to show off the skills they already have, instead of overfocusing on the supply of labour and the skills workers bring, and to recognize that the demand for labour may be fundamentally changing.
(1720)
    These days there are so many self-driving cars on the market. Who is responsible if a self-driving car causes an accident? Should humans still learn how to drive if cars become fully automated?
     Bruce Holsinger has a wonderful book called Culpability, one of Oprah's book club picks this summer, that starts to help with exploring that. We've seen in many recorded instances that, when a self-driving vehicle has been in an accident, the software actually turns itself off a second or milliseconds before the point of collision. This allows companies to skirt culpability and say that the driver was actually at fault. Again, this is an instance in which a computational system has come to the market not fully tested but, rather, like the other generative systems we've been talking about, relying on us as user testers. Right now I would say that, yes, there are self-driving vehicles on the market, as moderated by our provincial vehicular standards around where they can operate. However, as for the credibility of the software and its safety, I think that, when we get into a vehicle like that, we are all testing it.
     There will be too much dependency on artificial intelligence, is that not right? In our social structure, how will it affect humans? Will they be socially isolated if they use artificial intelligence?
     To go back to some of Matthew's earlier points about our post-secondary system, the strength of that system and the source of Canadian pride that we have, we're seeing evidence that, when students, young people and workers of all kinds use these algorithmic systems to do or support their work, they retain only about 20%, at best, so one-fifth of the information. They don't even actively remember what they were writing. It decreases brain activity.
    I would put aside social isolation and think about this myth that this technology can help us be self-driving as humans, that it can take over our agency, or that there are shortcuts to things. I may not have had the opportunity to study closely the previous testimony of the guests and witnesses you've had, but I'm not going to show up here with material that an algorithmic system has generated and not take the time to put my own thoughts together. That's one of the core questions we have: it's not just about outsourcing the labour and work of thinking. Are we going to need to think about this the way we did in the nineties, when we knew that labour was being actively offshored? Are we seeing instances in which labour is now going to be “AI-offshored”, where the job isn't going anywhere else but to a computer program?
     Thank you.
    Mr. Saini, you have five minutes. Go ahead, please.
     Thank you for coming.
     I'm going to talk a little bit on a different issue.
     We had witnesses who said that the uncontrollable use of AI could be a danger to a country's sovereignty. Countries like Russia, China, India and the U.S. are preparing for those things.
     Could you elaborate on that part of it?
    If I understand the question correctly, you're asking about potentially adversarial countries using powerful AI systems to undermine our sovereignty.
    I think in one way, AI is kind of the ultimate underminer of sovereignty, potentially. The way that you use it and the way that it processes information are very unaccountable, especially the way that we govern it currently. I think, in terms of attacks from China, Russia and other countries, there is certainly speculation that AI systems can be used to enhance cyber-weapons, for example, and other kinds of attacks like that. Certain AI systems have been used extensively to find vulnerabilities in computer systems, for example.
    There are lots of papers and discussions that speculate on how AI can enable different weapons, including CBRN (chemical, biological, radiological and nuclear) weapons and so on. Whether that's an imminent threat...I think there's always an imminent threat. I spoke to an expert once who worked in the nuclear space who said that we're always about 10 seconds away from a significant cyber-attack against a grid in a major country or in a major sector of a country. I think cyber-attacks are always a significant risk. Whether AI makes that more possible or less possible, I'm not 100% certain as of right now.
(1725)
     Mr. Gonzalo, would you be able to share your viewpoint on that?

[Translation]

    That’s an excellent question. Quite frankly, I am on the same page as Dr. da Mota.
    The problems are real. Canada does not have its own applications the way France has Mistral AI or the Americans have their own solutions. We don't have a large language model platform on which to host our data and which would allow us to be sovereign.
    With respect to imminent attacks and how artificial intelligence can be misused, quite frankly, that’s not my field of expertise, so I prefer not to venture into that subject.

[English]

    Thank you.
     Ms. Bednar, in your opening remarks, you said that there is also loss of production in some parts of industries.
    Could you elaborate a little bit on that? Which industries are the ones that, in your view, are suffering from the use of AI?
     Suffering from the use? The applications of proprietary algorithmic systems can do really interesting and amazing things for supply chain optimization and moving elements around, and some of the ports work that we've seen in Quebec is all really encouraging. I would say more that we need to be careful about being dazzled and impressed by successful applications of the technology, thinking that it means that we should continue to hesitate when it comes to design.
    One of the things I mentioned that no one's asked me about is the future of commerce with agentic payments: asking essentially a chatbot, a computer system, to make a purchase on your behalf. What that could mean is large multinationals preferencing their own companies over ours. The other witness mentioned smaller companies being challenged by how Google search is changing, by information asymmetries and in their ability to even connect with customers. If the ability to be discovered is becoming more dependent or interdependent on a model like ChatGPT to help you find a store, then that represents a real constraint on access to markets for all kinds of businesses.
     Thank you, Mr. Saini.
    Mr. Barrett, you have five minutes. Go ahead, please.
    Ms. Bednar, to pick up where we left off before, does corporate consolidation make it easier for Canada to regulate something like knowability as a right? This idea about the illusion of choice is something you've talked about a lot in your writing.
    I think some of that is there's a little bit more transparency for those who are looking and for those who have been presented with that information, but does it make it easier if we're dealing with a smaller number of really big players that are controlling many things? Does that make it easier, or does that have the opposite effect? Is it a bigger challenge for us?
     It is interesting to think about when or whether corporate consolidation is a strength for Canada or an opportunity. You could argue that having fewer large companies allows government to more quickly consult with them or get their views, but in terms of business practices and coordination, in markets of all sizes, what we see is that the small and medium-sized players tend to mimic and adopt practices the larger ones have. They may set the pace or set the bar for how AI is used.
    Actually, data and information as a competitive advantage is something we haven't been able to grapple with through our competition law, to really appreciate what it means for barriers to entry for new entrants coming to Canada, such as when Canada potentially explored attracting a new grocer. Remember that we did that very Canadian thing: We just asked really nicely.
    There are lots of reasons for that. Part of it is geography and real estate. Many large grocers are also in the real estate business, fundamentally. We also saw this with, say, the Bay. The former CEO of the Bay said they were actually not a retailer; they were a real estate company. Through loyalty programs and the information profiles they have on us, it allows them to—again, you could argue—manipulate or set markets in particular ways. Maybe it makes it easier for them to control markets.
(1730)
     I have a question for you, and it's not from me. It's from an AI model. I asked it what I should ask you.
     I used a model I don't normally use for any purpose, so it didn't have much, if any, context about me or why I'm asking you the question.
    The question it has—I'm sure it's listening—is this: Which widely believed narrative about AI in Canada do you think is most misleading right now, and what risks does that misconception create for policy-makers or the public?
     Thank you to you and the AI system of your choice for the question.
    I've already touched on that false opposition that any form of regulation is going to get in the way of innovation. Something I come up against a lot in my research and my work is this idea that because there's not a government regulation, a market is ungoverned or the market is more free. All markets have rules; the question is whether those rules have been democratically set and are transparent.
    Then, as you're saying, you're trying to attract investment and tell companies they should come here and compete, knowing they're going to have a fair shot. Otherwise, those rules can be set by private actors that become de facto regulators, and when that happens, as we've seen in digital markets, the rules are set in favour of the largest companies.
    That's why so much of our e-commerce environment, which I think we still idealize as a free-ish market, is characterized by situations where large companies, but companies of all sizes, both own and operate in a marketplace, and that allows them to manipulate that marketplace. Of every dollar earned by independent sellers on Amazon, 48¢, or maybe 45¢, goes to Amazon.
    Again, we look at those companies and say, “Man, why aren't they more productive? Why aren't they earning more?” When half of every dollar of revenue you earn is going to what is essentially a junk fee that has kept going up and up, maybe that's what's getting in the way. Is that a free market? I don't think so.
    What's the solution? What is the policy proposal you would recommend? Is this about awareness? Are the changing prices in grocery stores based on the time of day or based on who is nearby? On sites like Amazon, my five-year-old circled everything in the Amazon book.
     Mr. Barrett, I just—
    My 12-year-old asked how much this one costs, because the price wasn't there.
    I just put into Copilot, “Is Mr. Barrett's time up?” It said, “Yes, it is.”
    Voices: Oh, oh!
    It was a hallucination. We still have time.
     No, it wasn't.
    I know.
    Can we come back to you on that, Mr. Barrett? Okay, thank you.

[Translation]

     Ms. Lapointe, you have the floor.
    Thank you very much, Mr. Chair.
    Thank you to the witnesses for being here. Their insight is very compelling.
    Dr. da Mota, earlier, you said that regulation kills innovation. Indeed, the European Union, which implemented regulatory measures, is now going to reverse course.
     I would like you to tell us about that. What did you mean exactly?
(1735)

[English]

    I apologize if it was unclear, but I meant to say the opposite—that regulation does not kill innovation.
    There have been a number of prominent studies that show that regulation can limit certain types of innovation in some contexts, but often it does not limit the really big leap-forward innovations that we see.
    Jurisdictions like Sweden and South Korea, for example, are, I think, fairly good comparisons for Canada. They have shown that really good regulation, making sure we have guardrails for a certain type of technology, can ensure that businesses know how they can innovate and know the lanes they need to follow. Within those lanes, they're free to do whatever they want.
     A really great example of this would be in nuclear. Historically, Canada has a really great nuclear sector, and it's because we had really great regulation. Other countries, including the United States, did not have that as much. They've had disasters, and their nuclear industry declined. I would also say that for AI....
     Well, I'll leave it at that.

[Translation]

    I also have a question for you, Mr. Gonzalo.
    You spoke about having graduated regulation, meaning small and medium-sized businesses would have different regulations from larger businesses. I would like to hear your suggestion on how this regulation could have different tiers.
    It’s important to be careful not to put in place universal regulations.
    I’ll give you an example of what we see often. Right now, Quebec applies Bill 25 on privacy. It is well intended, but small businesses don’t know where to start with the bill. They don’t know what they can put on their website or who is responsible for collecting personal information. On the other hand, large businesses like Loto-Québec have legal teams and can apply the law. They also don’t use personal information in the same way as a small inn in Magog, which has a basic website for online reservations.
     It’s important to see the basis for determining whether the regulations would apply to large companies or to SMEs with fewer than 100 employees, for example. Would the number of employees or the business turnover be taken into consideration? That’s where multi-level regulations would be worth considering. That’s what I meant.
    Thank you, Mr. Gonzalo.
    My next question is for all the witnesses.
    Canada is one of the founding members of the International Network of AI Safety Institutes. How can this type of international leadership contribute to the development of global security standards for cutting-edge models?
    Do you think we need to work internationally to provide a framework for artificial intelligence?

[English]

    In terms of our participation, you're referring to the International Network of AI Safety Institutes. Yes, I think that work can be very important for things that are international.
    I think there are international risks. One thing that China and the United States have come to some agreement on, although not a formal agreement, is that AI should not be part of nuclear command, control and communications. I think that's a good thing we can agree on internationally.
     There are high-risk areas where we should not be putting AI. We need to have international agreements on that.
     I think certain things need to be addressed on a national level. There are certain challenges that are uniquely Canadian—or perhaps they're not uniquely Canadian, but we're the ones who are best suited to think about how to best address those in Canada. We can be a beacon or an example for other countries. We might be able to have influence through that network, but we need to address them at home first.

[Translation]

    Thank you.
    Mr. Gonzalo, would you like to add anything to that?
     No, I completely agree with Dr. da Mota’s remarks. I think our role as a global leader helps, even if it is just to be able to share information and to see what is being done elsewhere. This gives us a front row seat to what is taking place elsewhere and to see how we can develop our intelligence in this equation. While we can base ourselves on what is happening abroad, a big part of the regulations should be put in place here in Canada, and so I think the two complement each other well.
(1740)
    Thank you very much to all of you.
    Thank you, Ms. Lapointe.
    Mr. Thériault, you have the floor for five minutes.
    I have a question for Mr. Gonzalo.
     Billions of dollars are being invested in artificial intelligence, but despite recent technological progress, there are no corresponding productivity gains.
    The KPMG report released last week shows that in an online survey of 753 business leaders across Canada, 93% of them said their organizations are using artificial intelligence, up from 61% last year. However, only 2% of respondents said their organizations are seeing a return on their generative artificial intelligence investments.
    Developing this type of technology takes a long time. Stephanie Terrill, Canadian managing partner of digital and transformation at KPMG, says that “new technologies take time to be adopted and demonstrate identifiable return on investment.” However, according to Ms. Terrill, declining productivity in Canada means that waiting for years for AI investments to create value is “downright risky”.
    What is your opinion? Are you equally concerned?
    Thank you for the question.
    That’s a real challenge.
    I would say that there are two parts to your question.
     First, there are massive investments to the tune of billions of dollars. There is a bit of a bidding war if we want to tell it like it is. When it comes to investments in training or hosting these platforms, this bidding war is real. Some people have talked about a bubble, but I wouldn’t go that far because speaking of a bubble means that it will burst. I don’t think we are there yet, but the risk is real.
     Now, when it comes to the KPMG report that you mentioned with respect to integrating technology, I would like to remind the committee that there was a lot of talk about the web 25 or 30 years ago. When the dot-coms came around, benefits were not felt overnight. There was indeed a bubble in that case, but beyond that, businesses had to see how they could integrate everything that was coming with their transfer to digital. There is still talk of digital transformation today, 30 years later, so it is clear that it is a lengthy process.
    Artificial intelligence goes beyond this aspect because it is cross-cutting. It has different functions, including accounting, human resources, marketing and customer service. It has an impact on all areas of a company or organization. It affects the public, studies and culture. It affects all spheres of society.
    Why then doesn’t it work in businesses, from what we’re seeing? Often, it’s because they wanted to take all the tools and wondered how to integrate them. They use Copilot instead of asking themselves as a business what solutions these tools can provide and what processes could be improved. Work needs to be done. Some businesses do this correctly and take time to implement pilot projects to test tools before integrating them, and this normally delivers better results. Quite often, many integrations are rushed or there has not been any organizational reflection.
    In this case, the bubble is not going to burst. Integration must be done. Conversely, there is a genuine risk of an investment bidding war among OpenAI, Anthropic, Google and others in a bid to secure dominance in this field.
    What about declining productivity in the country? That is real. How long will this integration take if it is done properly?
    Yes, it is quite real. However, honestly, we can and should ask ChatGPT or Perplexity the question to see what they come up with. We are not psychic. It’s very difficult to answer that question. There are some indicators that can help us and which can point to certain things based on certain trends, but nevertheless, we are looking at an innovation. Everything that falls under artificial intelligence, and generative artificial intelligence in particular, will fundamentally change aspects of our daily lives. We are dealing with a big unknown. It’s therefore difficult to know how this will turn out.
    We know that it will increase productivity. Will it justify the investments? Will there be a return on investment? In the short term, it may be difficult to answer in the affirmative. It is most likely to be more profitable in the medium and long term.
     On the other hand, it’s impossible not to invest and at the same time see the possibilities. I think everyone agrees on that. That was alluded to in the remarks you cited.
(1745)
    Speaking of—
    Thank you, Mr. Thériault. Your five minutes are up.
    You’ll have another two and a half minutes later to ask questions.

[English]

     Mr. Cooper, you have five minutes. Go ahead.
     I just want to ask Mr. da Mota to elaborate on a comment he made earlier.
    If I misheard you, then please state so, but I understood you to say that regulations do not impede innovation. Did I hear that correctly?
    From a lot of studies in terms of innovation in technology, regulations do not necessarily impede innovation. Sometimes they do, sometimes they don't. There's no evidence that it is a sure thing that if we put in regulations, AI will be impeded in terms of innovation in Canada.
     You also noted that Canada really led the way and is a leader when it comes to research, but we haven't been such a leader when it comes to commercialization. We're certainly not a leader when it comes to adoption. We're far behind the United States.
    In this broad debate about regulations, isn't there a real risk that taking a risk-averse approach will result in a regulatory framework that in some respects misses the boat in terms of actually addressing real risks? In other words, there's an overreach that is counterproductive and in the process stifles innovation, stifles commercialization and stifles adoption as the rest of the world moves ahead.
    I'm not going to say that every single regulation would be great; the world you're describing is possible. I would say, though, that all of the issues you listed are happening right now, in a world where Canada has no regulation: we're behind on adoption; Canadians don't trust AI, and I think that's often a well-reasoned distrust, because they don't know whether they should trust these systems and their work; and Canada leads the way on research but can't commercialize it or hold on to the IP. Those are all issues that would have been solvable by making sure we had, maybe not regulation, but policy on holding on to the IP that we fund with our own research funding, for example, over the last 30 years.
    I think some regulations could miss the boat. I think in the EU AI Act, there's a focus on the number of FLOPs for training, for example, the size of the database and the training run for an AI system. I think those kinds of regulations might miss the boat. You might be able to, with new algorithms, train a system way easier on way less data, for example.
    So yes, I think some of them might lock in certain things that would not be ideal, but I think actually what we're seeing is an economy and an ecosystem desperate for better guidance and better guidelines to help usher in the use of these tools and the development and innovation with these tools.
     Mr. Gonzalo, do you have any comments?

[Translation]

    Once again, I believe that regulations do not hinder innovation. In fact, they offer guidelines that support clear navigation and more effective work.
    However, as I mentioned earlier, this framework should be as gradual as possible in order to give small businesses an equal opportunity. As we know, 80% of the economy is based on small businesses or small family-owned stores, especially in rural areas. Regulations should not be too strict for these businesses and prevent them from keeping up with larger companies.
(1750)
    Thank you, Mr. Gonzalo.

[English]

    Thank you, Mr. Cooper.

[Translation]

    Mr. Sari, you have the floor for five minutes.
    Thank you very much, Mr. Chair.
     I will continue on the same topic, which I find very relevant.
    We are discussing whether or not regulations can hinder innovation or the digital economy. Earlier, I asked what regulations and what controls we were speaking about. Are we talking about regulations on system development? From what the witnesses have said, I think I understood that it is not possible, or nearly impossible.
    Are we talking about regulating the use of data or the exploitation of the data? I’m just trying to wrap my head around that.
    Earlier, I asked whether learning algorithms, large language models and artificial neural networks could be controlled and if so, I’d like to know how.
     From what I understood earlier, the technological side is coming out now. Given my professional background in technology, I don’t see how systems developed elsewhere, by companies in other countries, can be controlled when we don’t really have an influence on these entities or the regulatory power as far as they are concerned.
    You have said that regulations cannot be a hindrance, but how can system development be regulated? I am not talking about regulating their use.
    Would you like me to respond?
    Certainly.
    I will let the other witnesses speak to the issue of large language model platforms because, as mentioned earlier, I believe it would be very difficult to regulate what Anthropic, OpenAI and others may or may not do.
    On the other hand, let’s take the example of data if we are talking about regulations. For example, the general public does not always know where the data they input in an Excel file that they save goes. Where does this data go? Is there a risk that the data will fall into the hands of—
    That is however—
    Just a moment, Mr. Sari. Mr. Gonzalo’s microphone is still causing some issues for the interpreters. I will stop the clock while we look into that.
    Can you hear me better now?
    Is it okay like that? I think so.
    Go ahead, Mr. Sari.
    I just want to clarify that what you spoke about is much more than raising people’s awareness of what is going on, for example when they submit an Excel file to a chatbot or similar system.
     I have a simple question: Can the development of large language and learning model algorithms be controlled? That’s my question. Do we have that capacity?
    Perhaps we don’t have the capacity to control what users can do, but we can raise their awareness. That’s my understanding.
    Awareness, yes, but we can have regulations that tell people not to do certain things because a data breach could lead to legal violations.
     I will defer to the other witnesses on large language model platforms.

[English]

     I mean, why can't we fathom governing companies like OpenAI? Is it because they appear to be dominant? Is it because we're afraid of them bullying us? We see situations right now where Canadian publishers and authors are being bullied by Google, which has decided, so that they can innovate, to tie the practice of their indexing so that you can show up in search. They're saying that if you want to show up in search, you have to let them take all the data on your site for their model. It's not something that we should say is inevitable and that we have to take. I would argue, and I have argued, that this is an abuse of their dominance. If we could signal in Canada that we won't let this practice happen, what innovation can we attract?
    You know, earlier—although I'm saying to you that it's not inevitable—I was a bit of a Debbie Downer. I pointed to that digital chapter in CUSMA and said that there's a lot of stuff there that constrains us. That actually is an opportunity for Canada. There is no better time when we think about our sovereignty: Is the real “enemy” or bogeyperson here President Trump, or are we really talking about being subjugated by the Magnificent Seven companies? That is the opportunity for Canada to decide what markets we want to build and have here. That is what we're going to do through regulation. We need to do that without fear of that retaliation and retribution.

[Translation]

    We are on the same page there. You also touched on another angle and spoke about a kind of CLOUD Act and having some digital sovereignty. I agree with you on digital sovereignty. In my opinion, this can be achieved. We have the capacity to be digitally sovereign. However, do we have this digital sovereignty in Canada? That’s why I’m asking this question. Right now, we don’t have the digital sovereignty that we could potentially have.
(1755)

[English]

     We have work to do there. I think the whole challenge—a core challenge—with the digital economy is that with other products, we had standards and processes before they came to a marketplace. When we saw the digital economy, mobile applications and websites, what was really cool and exciting was that they could come to the market really quickly.
    It means that app stores like Google's and Apple's, you could argue, are stronger regulators of the digital economy than states or countries like Canada. They're deciding what comes to the marketplace and under what terms, and that is what helps create and exacerbate this gap between our legislative ability to keep up and make sure that legislation and regulatory realities reflect what people are experiencing in their everyday lives.
    Perversely, when that doesn't happen and continues not to happen and it feels like the state doesn't have our back as consumers and as citizens, you get worse trust and more unrest.
    Thank you, Ms. Bednar.

[Translation]

    Thank you, Mr. Sari.
     Mr. Thériault, you have the floor for two and a half minutes.
    I’ll do my best.
    In October, the government launched a national artificial intelligence sprint to modernize Canada’s artificial intelligence strategy. The sprint is led by a working group that will review the AI strategy for the federal public service. To define the renewed strategy, the government will consider the working group’s recommendations and the results of a public consultation.
    The problem is that a number of experts are already challenging the results of this consultation and have said it is not very reliable. An article published in Le Devoir on October 29 quoted Matt Hatfield, who had expressed concern about this issue. He stated that “There may be some internet users who have asked AI to generate 100 answers,” for example. According to the article, “He criticized the government for accepting anonymous responses on its public consultation portal.” Matt Hatfield added, “I believe the government has not made any effort to truly understand what Canadians think about AI.” The article adds that according to Matt Hatfield, “Minister Evan Solomon has a ‘casual view’ of artificial intelligence and is more focused on the sector’s business opportunities and innovation than on the risks and harms of this new technology.”
    First, is there any chance that the consultation is biased? If so, should anonymous responses be excluded from the consultation?
    Mr. da Mota, can you answer the first question and Mr. Gonzalo will go next?

[English]

     That is one idea.
    I don't have a strong opinion on the anonymous submissions. Some people might want to remain anonymous for legitimate reasons, but it is a concern in terms of the quality of the input.
    Even a short consultation is better than nothing, but I think we need more than consultation. There needs to be more accountability in these kinds of consultations—an ongoing process and discussion. Obviously, there's the working group, but I think we can do better to have more engagement with different communities that are affected and experts simultaneously to try to have more input throughout these processes going forward.
    I don't know the details about—

[Translation]

    We are talking about 10,000 respondents within a very short time.
    Mr. Gonzalo, what do you think about that?
    Well, I think anonymous results are never the best approach when conducting surveys. There is a quantitative aspect and a qualitative aspect, and each carries weight in its own way. That said, numerous studies, reports and more qualitative surveys provide an insight into the state of businesses and organizations.
    Anonymous online surveys have some value, but I think an appropriate balance is required, along with a weighing of all the considerations before your committee. A number of actions have already been taken. All the elements can then be weighed to get the big picture.
    Thank you, Mr. Thériault, Mr. Gonzalo and Dr. da Mota.

[English]

    Ms. Church, you have two and a half minutes. Go ahead, please.
    Thank you.
    I want to pick up on the point about the change we've seen around standards and how we have struggled to adapt, in many respects, from an era of physical products.
    We've had witnesses to the committee here who offered a couple of different perspectives, particularly around the issue of liability. On the one hand, some witnesses have spoken about existing laws being able to be interpreted to adjust for AI and digital platforms and some of the harms that we've talked about today. We've had others who have said that more specific laws may be useful here, even if it is difficult to capture a general purpose AI system, for the harms that a different user might explore or experience using that platform.
    What's your perspective? Should we be looking at some sort of greater form of liability for an AI system? Where can we look for guidance on the type of framework we should create to capture that?
(1800)
     I think it's very important. In something I wrote earlier this year, I said that if the toaster burns you or if something happens, you have recourse. You have a warranty. You can sue. There are standards. What do you do when a chatbot encourages you to hurt yourself or hurt someone else?
    That question of liability probably comes back to a lot of the platform regulation questions that have been so difficult for Canada because, through trade law, we enshrined the U.S.'s section 230, so we've accepted that it's very difficult or not appropriate for us to hold digital platforms accountable for the material that they put forward. Now, with LLMs, we see this accelerated.
    In terms of jurisdictions for inspiration, do you just want to jump in?
    In terms of specific jurisdictions, I don't know if there's an example that I could jump to right away.
    One suggestion that I've heard previously is to attach training to licensing programs, like engineering. Make sure there's a licensed engineer in the jurisdiction attached to training or attached to deployment that makes sure that person now takes on liability for the deployment of that tool.
    That's one example. It's making sure that you are ensuring that individuals involved in the development and deployment of these tools are actually going to be held accountable under existing programs, like licensing and so on.
     Thank you.
    I want to say to the members of the committee, and I made this point the other night, that I think it's critical that we get the minister in front of this committee. We have, through the clerk, reached out to him and his office 11 times. We got the final answer on Monday that he isn't going to appear before the committee.
    I'm really encouraging members of the Liberal Party to get the minister here. I think it's a critical step. We've heard a lot of great information as a result of this study. I think the minister needs to come before this committee and answer questions on some of the things that we've heard and other issues related to his mandate.
    I'm extremely disappointed that we have not been able to get the minister here.
     Madame Lapointe, I'm going to ask you to weigh in on this, please.

[Translation]

    I think that he’s appearing before a committee today. Next week, I believe he’s going to be travelling for the G7. That’s why he hasn’t been able to come here.

[English]

    We've been very flexible in our time. We've asked 11 times for him to come before this committee. Each time, he's not been able to do that.
    Please encourage him to come. We're going to be continuing this study for a bit more, perhaps after we come back. We need the minister here. We can't have no as an answer.
     I want to thank our witnesses, Ms. Bednar, Mr. Gonzalo and Dr. da Mota, for coming here today. You've really added a lot of value to this study.
     Dr. da Mota, for your first time here, we appreciate your expertise on this issue.
     Ms. Bednar, if you want to mark your calendar for December 3, 2026, we'll be glad to have you back in another year.
    That's all I have for today. The meeting is adjourned.