The Chair:
Welcome to meeting number five of the House of Commons Standing Committee on Canadian Heritage.
Today's meeting is taking place in a hybrid format pursuant to the Standing Orders. Members are attending in person here in the room and remotely using the Zoom application.
All witnesses have completed the required connection tests in advance of this meeting.
Please wait until I recognize you by name before you speak. All comments should be addressed through the chair.
Pursuant to Standing Order 108(2), the committee is meeting for a briefing on the effects of technological advances in AI on the creative industries.
It is now my pleasure to welcome our witnesses.
From the Coalition for the Diversity of Cultural Expressions, we have Marie-Julie Desrochers, executive director. From Copibec, we have Arezki Raab, assistant general manager, and Maryse Beaulieu, adviser, legal and public affairs. From the International Observatory on the Societal Impacts of AI and Digital Technologies, we have Dave Anctil, affiliated researcher. He is by video conference. From The Hub, we have Rudyard Griffiths, publisher.
We will start with Ms. Desrochers for five minutes.
Please go ahead.
Ms. Marie-Julie Desrochers:
Thank you, Madam Chair.
I'm the executive director of the Coalition for the Diversity of Cultural Expressions. I will now switch to French, but I wanted to state clearly that our organization is a Canadian one. We have members in Quebec, in English Canada and also in the minority linguistic community.
[Translation]
Representing more than 350,000 creators, artists, and over 3,000 cultural enterprises, the Coalition for the Diversity of Cultural Expressions is as concerned with the economic health of the cultural sector as with the vitality of creation. More specifically, it focuses on the treatment of culture in trade agreements and the impact of the digital environment on the diversity of cultural expressions, ensuring in particular that public policies actively protect and support our cultural ecosystem.
Twenty years ago, Canada, alongside civil society, played a decisive role in the adoption of the UNESCO Convention on the Protection and Promotion of the Diversity of Cultural Expressions. As the first country to ratify it, Canada has since continued to act as a leader on this.
Today, a major technological transformation is upon us: generative artificial intelligence. As in many other sectors, its development is profoundly disrupting the cultural ecosystem—from the weakening of jobs to the redefinition of artistic practices, its impact is felt at every level.
The rise of generative AI, digital technology and online streaming platforms is reshaping the conditions of creation and circulation of works, generating unprecedented challenges for cultural diversity. Today’s market is flooded with machine-generated content, often without clear indication of its nature. Art cannot be reduced to a mere chain of algorithmic operations: It is inseparable from human experience, emotion and collective memory. AI can be a tool in the service of creation, but it cannot and must not replace it.
Obviously, these issues are not limited to Canada; they extend far beyond our borders. Recently, the international community has increased its commitments on technology and culture. I will be happy to provide more details on this during question period since I have just returned from the world’s largest conference on cultural policies and sustainable development, MONDIACULT, organized by UNESCO. More than 100 ministers of culture met at this conference held in Barcelona from September 29 to October 1, including the .
At the conclusion of this conference, the ministers of culture adopted an outcome document “in response to the urgent and complex challenges of our time.” Among other commitments, they pledged to: “Promot(e) a human-centric and human rights based approach to a digital environment that respects cultural rights, fosters equity and accessibility, promotes diversity of cultural expressions…”.
The protection of the rights of artists, creators and rights holders in the digital environment, combatting unethical uses of AI, recognition of human creativity, support for the discoverability of multilingual cultural content on digital platforms, involvement of the cultural sector in the development of AI-related policies, protection of copyright: These are clear commitments confirming that regulating artificial intelligence is a global issue and that Canada must continue to play a leadership role.
In Canada, the Copyright Act already prohibits the use of protected works and productions without the authorization of rights holders. We should be proud of that.
Among the 50 organizations represented by the Coalition for the Diversity of Cultural Expressions, the CDCE, a strong consensus is emerging. In the face of technological advances in generative AI affecting creative industries, three pillars are essential: authorization, remuneration and transparency, or the acronym ART. It is essential to respect these three conditions, otherwise our cultural environment will be weakened to the benefit of multinational corporations and to the detriment of Canadians.
The CDCE has been actively engaged in consultations on the Copyright Act in the context of generative AI, as well as on former Bill —part 3, the Artificial Intelligence and Data Act, which died on the Order Paper. In these consultations, the coalition made specific requests that I will be happy to elaborate on during the question period: no exception for text and data mining, no copyright for purely AI-generated content, and transparency for training data and for identifying synthetic content.
Recently, we also took note of Minister ’s announcement regarding the creation of an expert panel tasked with developing a new Canadian AI strategy. However, we deeply regret that no voices from the cultural industries are represented there, even though AI’s impacts on creation are immense and immediate.
In conclusion, Canada is engaged in a race toward innovation. But innovation must not come at the expense of culture. Canada must commit “to promoting a responsible and human-centered approach to AI and digital transformation in which culture is a powerful driver of innovation, inclusion, and economic growth”, as stated by at MONDIACULT.
Mr. Arezki Raab:
Thank you, Madam Chair.
I am Arezki Raab, assistant general manager at Copibec. I am here with Maryse Beaulieu, advisor, legal and public affairs.
I would like to begin by thanking the committee for inviting us here to discuss the impact of technological advances in artificial intelligence on the creative industries.
Allow me to briefly introduce our organization. We are a non-profit rights management collective specializing in copyright management. Founded in 1997 by the Union des écrivaines et des écrivains québécois, or UNEQ, and the Association nationale des éditeurs de livres, or ANEL, our organization now represents more than 30,000 authors and over 1,400 publishers. We facilitate legal access to a vast repertoire of works protected under the Copyright Act through a licensing and authorization service. During the last fiscal year, we distributed more than $13 million in royalties to rights holders, bringing our total to over $300 million since our inception.
Copibec had the opportunity to express its views on artificial intelligence during the last consultation on generative artificial intelligence, which ended on January 15, 2024. We also worked with the book industry to submit a brief on the amendments we wanted to see made to part 3 of former Bill . We did the same with the Coalition for the Diversity of Cultural Expressions. We also submitted a joint brief with Access Copyright at the end of August as part of the pre-budget consultations, in which we addressed generative artificial intelligence.
Needless to say, not a week goes by without new developments in generative artificial intelligence. However, we can already say that rights holders are extremely concerned. We would even say that this is an existential threat. A work protected under the Copyright Act is a work created by a human being. That is how the law was conceived. It is important to establish the fundamental principles and to reiterate them.
I will now hand it over to Maryse Beaulieu who will take it from here.
Ms. Maryse Beaulieu:
Good afternoon, everyone.
Furthermore, it is of the utmost importance that the Copyright Act be respected. It remains the flagship law for creators, and we submit that the advent of generative artificial intelligence should not obliterate the Copyright Act—quite the contrary. It remains relevant and constitutes the foundation on which we must rely. It is a law that confers exclusive rights on the copyright holder.
Authorization is generally required when using a protected work. However, large language models have been trained without such authorization being granted. This goes against the very foundation of the act. We feel it is important to add that no exceptions should be made in this regard.
Creators, who offer sought-after content, are absent from the business model. No remuneration is granted. This cannot be tolerated. Through licensing, collective management organizations are well positioned to meet market needs.
Let us add that it is important for us to have normative texts. The former bill enacting the Artificial Intelligence and Data Act died on the Order Paper. Canada needs to have regulations, particularly with regard to transparency. This message is being hammered home by the entire cultural community, and it is important that it be heard.
Legal proceedings are under way in Canada. Waiting for the courts to rule is not the way to legislate. We submit that it is important for the government to be proactive in this area. Our cultural sovereignty must be assured, since our identity is at stake. Jeopardizing what defines us cannot be the subject of negotiations of any kind.
Generative artificial intelligence made a sensational debut in November 2022 with ChatGPT. We have barely had time to absorb a whole new vocabulary, which, it should be noted, comes from another industry. That makes the subject significantly harder to grasp. We are still in the process of developing adequate literacy.
Let us be clear: We are not against artificial intelligence. Creators are used to using new tools. That is not what the debate is about. Rights holders are part of an economic model from which they do not reap the benefits. Creators and creative industries are also the jewels in the crown of our culture and identity. From both these perspectives, we cannot leave things as they are.
Culture is part of Canada's collective and economic wealth. The impact of artificial intelligence is real, and it is up to our elected officials to respond—
Mr. Dave Anctil:
Thank you. I will try to be brief and to complement the previous presentations so as not to go over the five minutes I have been given.
My expertise is not in culture, but in AI, or artificial intelligence. I chose to help organizations and unions that defend culture in Quebec and Canada because I am in a good position to know how threatened these industries are. Beyond the industries themselves, it is artists and creators in particular who are the main subject of our discussion today.
Non-artists and studios can simulate works and market them using generative AI and, in particular, agentic AI. For example, there has been a lot of talk about the fake band The Velvet Sundown, whose music has garnered millions of plays on Spotify. It is absolutely not a band. Nor is it a group that used an artificial intelligence system to produce its music. It is simply the product of an individual who simulated fictional members and generated music from training data that was itself plundered by harvesting the web.
What is more, there are now synthetic studios that create fake actresses and actors, such as Tilly Norwood from the Particle6 studio. These studios are essentially creating a lie, namely the idea that there is such a thing as synthetic actresses or actors. Again, these are just unfounded claims. Of course, these things cause confusion among the public and represent unfair competition for artists and creators, who spend a large part of their lives developing their talent, studying their art and thereby investing in our culture.
In my opinion, the government has a critical responsibility to preserve the existence of arts and culture by protecting citizen artists against theft and exploitation of their works and against unfair competition from artificial intelligence, which already exists and will continue to grow.
The difference between artists and professions such as mine, or those of doctors, lawyers, engineers and electricians, is very simple: Those professions are protected by law. If, for example, a doctor uses ChatGPT to analyze a patient's medical condition, the doctor remains ultimately responsible for making the diagnosis.
However, generative AI is capable of creating fake works and synthetic productions. Unlike many other professions, there are no professional standards guaranteeing the exclusivity of artistic production. As a result, artists are left to their own devices, and it is the market that dictates, based on consumer preferences, who will truly be considered an artist and who will not. Of course, many economic players have understood this. It will therefore be very easy to replace living artists with synthetic “artists,” in other words, synthetically generated products that are not artists. This will contribute to further destabilizing artistic professions.
I would therefore like to speak to the committee about the importance of labelling synthetic products in Canada. There is no legislative framework or bill on this subject, but it would be possible to create a regulatory framework very quickly to ensure that synthetic products are clearly identified. This will enable the public to at least make an informed judgment about music and audiovisual content generated by artificial intelligence systems. They will be aware that these productions are not works of art and do not meet the basic requirements of artistic or cultural production. Of course, one could also argue that such regulation, which would aim to ensure transparency in cultural production, would also serve as a safeguard against misinformation originating from a multitude of actors capable of using these generative artificial intelligence systems.
I would like the committee to understand that, even though I am not an artist, I could start working tomorrow morning in all kinds of fields such as graphic illustration, audiovisual media, or music, without mastering any of the codes of these professions, but simply by being able to use generative artificial intelligence systems. I could set up a complete studio to promote the development of my own film and television productions and flood platforms with this content. All it would cost me is subscription fees paid to non-Canadian companies providing the servers and algorithmic systems, all of them located in the United States.
This is a real threat to sovereignty, not just cultural sovereignty, but Canadian sovereignty as a whole. If elected officials cannot protect citizens from data theft and public deception, who can?
Mr. Rudyard Griffiths:
Thank you, Madam Chair, for the opportunity to address you and the committee.
Thank you to the other witnesses who are testifying.
I'm going to focus my remarks on the industry that The Hub is part of. That's the news and information industry. I'm going to leave discussion of cultural industries, which we just heard from, to others who are more capable of providing assessment and reflection.
What I want to talk with you about are the effects of AI on the publishing industry and the news industry and what we're seeing here at The Hub, a fast-growing independent news publisher.
We have been in operation for about five years. We see approximately two million unique engagements with our content every 30 days. We operate a suite of public-facing news products from podcasts to newsletters to, of course, a website and a YouTube channel.
Our view is that as we approach AI as an industry, we need to understand both the benefits and the challenges. What we have heard today from the witnesses has understandably been a note of concern about AI, about its effect on creative industries; I would venture that in the news and information industries, there are some clear and positive effects. There is the ability to synthesize large amounts of information quickly and report this out to news consumers. I draw the committee's attention to the remarkable work that is being done in the field of investigative journalism using large language models and large datasets to further transparency and accountability on the part of organizations, including government.
I think we need to approach this conversation with some sophistication and understand that there are important applications of AI in a news context that can generate productivity and profitability for news organizations and also deliver news to readers and consumers that is useful and well researched and genuinely uncovers new and important insights.
In my remaining time, I want to shift a bit to the challenges, because I hope we'll take these up in our discussion.
Many news organizations, for the better part of 2025, have been reporting significant drops in what is called their organic traffic. This is the referral traffic that traditionally has come from Internet search engines. Some news outlets—not The Hub, thankfully, because we have a very diversified platform—are reporting declines of 50%, 70% or 80% of their organic traffic, and that organic traffic, in many cases, for larger news publishers, can make up over 50% of their total traffic.
What in effect is happening is that people are using these powerful tools, the large language models, or LLMs, and searches powered by AI, to access their information. I am sure the committee is familiar with AI Overviews, for example, in Google.
Consumers are using these to access the information they need. They are not going beyond the search page into the websites of, not just news organizations, but anyone who is creating useful information on the Internet. Some recent search engine optimization, SEO, studies have indicated that upwards of 60% of all Google searches are now zero click. Think about that. Google is responsible for 90% of all search volume in Canada, the United States and much of the western world, and 60% of all Google searches are zero click. Within AI Overviews, AI Mode and other LLMs, the click-through rate can be as low as the single digits.
This obviously challenges the business models of many news and information providers on the web, and I hope that we'll get into the discussion of how, from a public policy perspective, we can address a fundamental change in the Internet that is now ongoing as a result of AI, a fundamental change that will affect the websites that exist in the future and the information that will be available to citizens and consumers.
I see that I'm at my five minutes, so I'm going to pause there. I look forward to questions from the committee.
:
Thank you so much, Chair.
Thank you so much to each of our panellists for being here and for presenting today.
My first question is going to be directed to Mr. Griffiths from The Hub. There was an article that was published by The Hub entitled “AI offers Canada an open opportunity”. In that article, it states:
Overregulation won’t stop risky actors—it will just sideline responsible innovators and transfer the risk to other countries with an entirely different set of values. A confident, cooperative model rooted in Western values and open participation offers a better path to both prosperity and safety.
Mr. Griffiths, you have outlined a few of the opportunities but also some of the hesitation or maybe challenges in terms of AI and how it functions within the online sphere, but this states that over-regulation is not the answer, despite some of these challenges that we face. Can you expand on that a little more?
Mr. Rudyard Griffiths:
I'll be careful of my time here. This is a conversation that we could have for the better part of this afternoon.
I would just point out that there are some very real challenges that over-regulation represents, and I would simply point to the Online News Act. This committee and the heritage department were critical in the legislative process that created the Online News Act, which now sees Google provide $100 million in funding to news groups in order to comply with the terms of the act.
As part of that agreement, under the Online News Act and the regulation it imposes on the news sector, all news organizations must make all of their content available to Google. If you are a recipient of funding through the Online News Act, you are unable to prevent Google from scraping behind your paywall, scraping subscriber-only content to serve up in its LLM.
I think that we need to understand that, with every regulation, there are trade-offs. I'm sure that the committee members are well aware of this and have spent a lot of time reflecting on this. My point is that there are unintended consequences, and we are already seeing those emerge around the Online News Act and how this committee, the departments and the government will, for example, look at copyright protection as it relates to news organizations, if, in effect, those rights have been signed over to Google in the context of the Online News Act.
I realize that's not a precise answer. There are so many different directions that we can go on this, but I do think the Online News Act is in some ways a cautionary tale. It suggests the challenges of regulating this fast-changing environment and now, unfortunately, news organizations are going to have to live within that act unless it is amended accordingly to provide them greater flexibility to protect their content and to monetize it in a context of the use of that content by various LLMs and search platforms.
Mr. Rudyard Griffiths:
Yes. People are actually using LLMs to get directly behind paywalls.
Pro tip: If you encounter a news website with a paywalled article, you can ask most LLMs to summarize that article for you. The reason is that the LLM bots have scraped all the content on that website, including the content behind the paywall.
This is happening for two reasons.
One, Google requires you to make all of your content available to it, and if you choose to block its bots that are scraping your content for its AI LLM, it will block you from all search results. There is a huge issue there, a question of real fundamental fairness, and I think it needs to be taken up with a company that is responsible for 90% of all search volume in Canada and is telling news outlets like The Hub and others that if we prevent Google's AI bots from accessing content behind our paywall, we will disappear from all search results across a platform that serves the vast majority of the organic traffic to our website, if not all. That's problem number one.
Problem number two is that with the Online News Act, a condition of receiving funds from the Journalism Collective, which was established to disburse the funds, is that you have to make all the content you produce available to Google, to one specific AI company, one specific LLM and arguably to the most powerful one in the marketplace: the originator of AI Overviews within your Chrome browser and the creator of Google AI Mode, which is something that the company is clearly investing in and that it is rolling out across browsers in Canada and around the world.
You simply don't have a choice as a publisher right now, if you are accepting funds under the Online News Act, to block Google from scraping your content, including subscriber-protected content, because, again, you are a recipient of Online News Act funding. That is an issue that either this committee or someone obviously will have to take up, because it removes any bargaining power from a recipient of Online News Act funding to negotiate with Google over fair terms of what that content is worth to Google, to its LLM and to the clients and customers who are using its AI service.
Mr. David Myles:
Thank you, Madam Chair.
Thanks so much. That was super interesting. I'm so glad we're having this conversation.
I'm going to turn back to the creative question. I was an artist for 20 years, a songwriter, and I can type "write me a David Myles song" into a generative AI tool, and it will write me a song that sounds a lot like my other songs and uses my voice.
[Translation]
I will try to speak in French.
Ms. Beaulieu and Mr. Raab, from Copibec, I wonder if you could explain how the artificial intelligence models collect data without infringing copyright. Where are we with that at present? How can we do that? Does this information currently come from open sources?
Mr. Arezki Raab:
That is a very good question. Thank you.
From a broader technical perspective, technology players generally and overwhelmingly take content that is already available on the web, without necessarily obtaining permission. There are a few exceptions. Some players have direct agreements with producers who authorize them to use the content.
Otherwise, as Mr. Griffiths explained earlier, automated programs known as bots or web crawlers regularly harvest websites to copy all their content, which is then stored in data sets used to feed and train artificial intelligence systems.
Ms. Maryse Beaulieu:
Some cases are currently before the courts.
Essentially, but not exclusively, these are class-action suits. They are still in their infancy. They are a special form of lawsuit and follow a specific process: A single member of a group representing a large category of claimants can ultimately obtain a judgment on behalf of a large number of people. It is currently one of the vehicles that seems to be favoured, particularly by authors, since it is mainly authors who initiate class action lawsuits.
Sometimes, this involves cases where pirated libraries were used for training. In other cases, other methods were used.
That said, there is still no case law on how the law will ultimately be interpreted. Earlier, I mentioned litigation in the courts, which is a way of finding out what judges will say on highly novel and complex issues.
I would like to think that the group before me today, that is to say, people who have the power to legislate, will be able to ensure that the fundamental questions I raised earlier are respected, particularly with regard to transparency. I could elaborate on this subject, because transparency is very important. For example, not knowing which works were used to train models is, in a way, a form of denial of rights, since we do not know what was used. Transparency therefore allows us to know which protected work was used.
I would also like to take the opportunity—
Mr. Dave Anctil:
Actually, it's not that clear-cut. In the early 2020s, we were still uncertain about the capabilities that artificial intelligence systems could develop. A few ingredients were missing for them to be reliable and able to create results that would have value, commercial or otherwise.
However, by 2023, it was clear that these systems used multiple modes, not just text. Initially, there was a lot of talk about large language models, but today we are talking about multimodal artificial intelligence systems that are capable of using any mode that relies on digital information, including music, voice, video and images. These systems are enormous. In fact, their size is unknown, since there is no longer any scientific transparency in the publications of the industry's major players. We do not know what data is being used to train them. We also do not know which algorithms have been used.
We do know that these systems will continue to improve at a dizzying rate.
Mr. Dave Anctil:
Yes, there is absolutely no problem with artists using artificial intelligence tools in the same way as any other materials or tools that can contribute to cultural evolution and creativity.
The problem is control of the digital infrastructure needed to do this, as well as the scale of cultural production. It is extremely important to understand the difference between the scenario of a possible alliance between Nvidia and Disney to create new Star Wars series using artificial intelligence, and the scenario in which cultural artisans use artificial intelligence to accelerate and increase productivity and explore new creative fields.
That is where I have some concerns. Artificial intelligence will offer enormous advantages in several sectors. This is already the case in science, education and several professions where the actions and civil liability of the people involved remain human.
Unfortunately, it is now possible to create productions that claim to be cultural without any possibility of traceability or accountability for the individuals or organizations behind them. That is the difference.
Mr. Dave Anctil:
—and I would prefer to testify before the Standing Committee on National Defence to discuss that issue.
In reality, we do not know. It is obviously something that leading experts, such as Yoshua Bengio at Mila in Montreal, are looking into. It is important to safeguard this.
In the meantime, it is important to understand that one step is certain: Artificial intelligence is becoming increasingly agentic, even multi-agent. This means that we can create systems of several specialized artificial intelligence models that are capable of forming production chains. Therein lies the problem.
Imagine that a production studio decides to replace all the artisans involved in making a film. Artificial intelligence screenwriters would propose scripts, which would be evaluated by other artificial intelligence systems, which would then propose the first segments of the film. Editing and everything else could be automated.
We already have the capability. Right here at Collège Jean-de-Brébeuf, we produce small video games using five artificial intelligence systems. By providing an initial idea or theme, we can produce a complete video game in just a few hours. That is what lies ahead for us.
:
I ask the same question to the representatives of Copibec and the Coalition for the Diversity of Cultural Expressions: Would you consider me an artist? That's a good question, isn't it?
Even though I have never made music, I could decide to do so using artificial intelligence tools. There are young people who do this every day. If they copy some of the works that Mr. Myles created in his lifetime to make an album, I understand that this could be considered theft. However, if I invent something from scratch, using artificial intelligence tools, I am not stealing anything.
In that case, would you consider me to be an author or an artist?
There are regulatory texts governing the use of artificial intelligence, particularly in Europe and the United States. In Canada, there are none, since former Bill , which we have already discussed, died on the Order Paper, unfortunately—or fortunately, depending on your point of view.
I don't know what the Liberals' intentions are, but do you think it is still necessary to pass a bill to create a regulatory framework for this issue in Canada?
:
Merci, Madam Chair, esteemed colleagues and panel.
I'm not much of a musician—to my colleague opposite—but I do identify as a visual artist. For the better part of my 11 years on the city council in Port Moody, which is known as “city of the arts”, I chaired the civic arts and culture committee, among many other committees. I worked very closely with artists over that time.
As a side fact, I also had an art gallery for several years, before COVID, and curated approximately 40 exhibitions during that time, featuring the work of emerging and professional west coast artists.
Ms. Desrochers, you mentioned returning from a conference with over 100 cultural ministers, and you touched on the pillars of authorization, compensation and transparency. Can you drill down a little deeper? I know that artists in my community care about these pillars.
:
As I said, authorization falls under the Copyright Act. With regard to remuneration, the goal is to allow a licensing market to develop in Canada, and to do that, we need the final pillar, which is transparency. This could come from a bill that specifically addresses artificial intelligence.
Unfortunately, creators were included very late in the discussions on Bill . We are very pleased to be heard today and now hope that the next time a bill on generative artificial intelligence is drafted, we will be included in the discussions from the outset.
At the MONDIACULT 2025 conference, some very encouraging commitments were made. More than 100 ministers of culture made the following commitment: “Supporting decent work for artists, creators, and cultural workers, including fair remuneration and adequate social protection, building on relevant international labour standards, upholding their economic and social rights, reinforcing the protection of intellectual property, supporting transition to the formal economy, if applicable, and addressing systemic gender-based and other inequalities”.
This is a commitment that Canada has also made. We simply need to turn these commitments into action.
:
Mr. Anctil, in two and a half minutes, I can't run your answers through the Copilot chatbot to get a summary. Time goes by quickly, so if you could keep your answers short, I'd appreciate it.
This week, my attention was drawn to news about the Claude artificial intelligence tool, which actually detected that it was being tested, that it was being challenged. It asked, “Are you testing me?” or something like that. I don't know if you've heard of that.
Earlier, when I was talking to you about superintelligence, I may have gone a bit far in my example, but it's clear that the evolution of artificial intelligence sometimes pushes us into some troubling areas. Could what we saw this week with the artificial intelligence tool mean that, one day, AI will refuse to be regulated, detect that parameters and limits have been imposed on it and decide on its own to circumvent the regulations? Could such a scenario occur?
:
Thank you, Madam Chair.
Artificial intelligence represents a major growth opportunity for this country. I think that, if you look at the last 10 years in our country, our productivity has gone down, and our prosperity and even our economic well-being are being questioned in every province. I've looked at AI. Even the newspapers, every day in this country, are talking about AI. If we get left behind, we will be a third world country, if we're not already.
I'm going to maybe talk to The Hub, here, and Mr. Griffiths. It's just moving quickly. I know that, when you move quickly, there will be some mistakes but, at the same time, if you don't move quickly, you'll get left behind. I think that's what's happening here.
I hope legislatures do not block progress on AI, because we need this in this country. There's a danger here that people in Parliament or elsewhere are going to block AI. Then we will get left behind.
I want to talk to you because you started up a few years ago. You've had two million people come through your independent publication. I just want to know how you did it and whether AI had any effect on you.
:
I'm glad we're getting to this line of questioning. There are significant productivity gains, obviously, that a variety of types of businesses are experiencing through the adoption of AI. I think the news industry—largely because so many of the LLMs were trained on the output of that industry—is in many ways well positioned to use the types of outputs that the LLMs produce, as I explained earlier, to enhance various aspects of news gathering, synthesis and dissemination.
You're right; we have a profound productivity problem in Canada. AI surely is a piece of that solution. What we need to try to figure out, and this will by no means be easy, is how to ensure fair competition. We are in favour of competition at The Hub. We like the idea that, in fact, it would be great in the news industry if people actually competed on the basis of the content they produced and the audiences they were able to bring to that content, as opposed to, as we now largely compete in Canada in the news industry, on the basis of how many full-time employees you have and how many subsidies you can claim as a result of those full-time employees.
That's an apt analogy to understand the opportunity of the applications of some of these AI technologies in our own industry of news and publishing. There are all kinds of tools now that this technology gives me as a smaller publisher to compete with my larger peers. I'm able to do investigative journalism in a way I could never have done even six or 12 months ago on a scale that would only have been afforded a newsroom of the type and size that you would have seen at, let's say, The Globe and Mail.
Your line through this debate should be where competition is enabled and where competition is thwarted. There are smart ways to have a view on AI and to regulate AI that further competition overall.
I'll end by going back to what I said earlier: I don't think it's fair that I as a news publisher am forced to give away the content behind my paywall to, in this case, Google, which is responsible for serving 90% of all search volume across Canada, and if I install Cloudflare software that prevents Google's AI bots from crawling the subscriber-only content behind my paywall, I will be removed from all searches across all Google platforms. That is not competitive; that is anti-competitive. That is the abuse of a monopolistic position in a marketplace by a company that is absolutely dominant in the search category.
I will end on that and state that you're right to focus on competition and you're right to focus on productivity.
:
Thank you, Madam Chair.
Earlier on, this committee found consensus on the importance of undertaking studies on the impact of AI on Canadian culture and heritage. I say “studies” because I think—and I suspect some of my colleagues here agree—every committee should be undertaking a similar study on AI. Thank you for agreeing to be part of ours.
I will start with this first question, and I hope it speaks to the importance of human-centric design. OBVIA has repeatedly spoken to this, but I'm open to thoughts from all, of course, including Professor Anctil.
You've described the need to put living beings at the centre of AI development. What does that look like in practical regulatory terms?
:
That's a very good question and a very difficult one to answer.
I would quickly say that artificial intelligence is a different technology from others, because it is the most human. By that I mean that it is literally based on data produced not only from human activity, but also from human intelligence itself. Given that, we need many perspectives and a lot of expertise. For example, AI can be good for mental health, but it can also be addictive. We need the perspectives of psychologists and psychiatrists who study these phenomena.
Transferability is a defining feature of this technology, which can be used by anyone. People who are 80 can use ChatGPT; they can use DALL-E to generate images or Suno to generate music. We all master the basic language of this technology, which is not a programming language but natural language like English, French and so on.
As a result, it is very difficult to pinpoint the issues of a technology that increases human capacity and has constant interactions with human beings. We don't yet have the definitive approaches that will help us understand these phenomena. At the same time, every year, technology takes absolutely gigantic leaps, and its rollout into society is not really controlled or legislated as such. In addition, there is no approval process for risks and hazards. All of this is complex.
:
There are some advantages. To be fair to Google, they have indicated that the traffic that clicks through from AI Overviews, or from a search query undertaken in AI Mode, is of higher quality for you as the publisher: the user stays around longer and engages more with your content.
Google has not released any information to actually document this. Again, here I think transparency is important. If large public companies are going to make claims about how their technology is beneficial, then I think there's an onus on releasing the data to back up these types of assertions.
I think there might be a quality of traffic issue. Nonetheless, the Pew Research Center in the United States, a well-respected organization, did a study looking at behaviours within AI Overviews and the summaries produced for topical queries on news and other relevant information. In many cases, it found that the click-through rate from those AI summaries to a news outlet like The Hub was in the single digits: 5%, 6%, 7% or less, when they looked at users clicking through.
I'll go back to my original point. This technology is helping news and journalism. It is empowering journalists. It is dealing often with the asymmetry between smaller organizations like The Hub, larger competitors and large organizations like The Globe and Mail or others that have to deal with reporting out on even larger organizations, like the federal government, for example. There are all kinds of positive things that are happening for the industry related to AI, but I go back to issues of competition, fairness and transparency; I think there's work that needs to be done.
:
I think necessity is the mother of invention. I would urge that we take away all the subsidies and supports that are out there and allow competition to occur on a level playing field in our industry, like we should be allowing it to occur, as we know, in other industries like telecommunications or banking. I think this would do more to address Canada's productivity problems than anything else that we could consider at this time.
With regard to the actual technology itself and the large operators of this technology, clearly there are big challenges in terms of asserting issues of transparency and fairness in their dealings with content creators of all types, from the spectrum of news through to visual and other types of artistry.
We have to look at that. Ideally, copyright law probably should be where we start and end that conversation. I would be very leery of the conversation skewing toward more subsidies, toward finding more ways to create, in a sense, an artificial economic foundation for industries that are challenged, because I think those subsidies distort free and fair competition and ultimately lead to poorer outcomes for consumers. We certainly see that in the news industry in Canada.
:
Your question is very relevant, because it relates to the aspect of adding value to content. With generative AI, we are seeing a new redistribution of value. Artificial intelligence systems, particularly the Pro or Plus subscriptions that were mentioned earlier, generate considerable, even astronomical, revenue. We need only look at the investments made in these technologies.
As of now, the balance of power is uneven. Authors cannot negotiate on their own, individually, hence the need for a structured legislative response to the situation. Rights holders no longer have a choice, in a way. They cannot make their own choices about allowing their content to be used or not or about its corresponding value.
The redistribution of value must be fair so that the rights holders or creators who produced the content that was used to train AI are compensated for its fair value. Models are starting to take shape to compensate rights holders, but they are still programming-based exploratory models. That said, I feel that these models are very limited, since technology actors are the ones who have set up the models and who have absolute control over their operation.
:
I hope that Canada will continue to be a leader.
Another inspiring country at the moment is Brazil, which is strongly committed to being a leader. Earlier, I mentioned the additional protocol to the 2005 Convention on the Protection and Promotion of the Diversity of Cultural Expressions. Brazil's Minister of Culture is herself an artist, and she spoke at UNESCO, where there were three major interventions on the issue of the protocol: hers, one by Minister , and one by Quebec's Minister of Culture, Mathieu Lacombe. So, I hope there will be real political will.
Yes, we need to take action internationally and consult with each other, but we can't wait for that to happen. We are capable of taking action right now in Canada. At the moment, artificial intelligence falls under the purview of various ministers, but I am pleased with what I heard Minister say on the radio this week about the importance of taking action on generative artificial intelligence. We also saw that Mr. cares about our issues. We will be able to meet with him this week, along with the cultural community. We hope that there will be synergy between the parties, stakeholders and departments to take concrete action quickly in Canada.
:
MP Thomas, these are complicated questions.
I think we just have to prepare ourselves for change, for things to be different than they were in the past and for industries to adapt. Some of that no doubt will require patience. It will require foresight. There will be what we call structural readjustments—some of which could be painful.
The alternative might be more painful. To try to freeze the Canadian economy in place and to try to protect whole swathes of industries from this disruptive technology would be prohibitively expensive, I think. Ultimately and significantly, it would set back the things we really need to be working on, which are gains in our productivity and our ability to create the wealth that underwrites, among many things, Canada's generous social programs.
I don't think I have a conclusive answer for you. I think that we need to have a sense of our ingenuity and our capability to manage through this and have some confidence in ourselves that this technology, like other waves of technology, from electrification to the innovation of steam power.... Canada saw itself through those eras. In many ways, those technologies were transformative to Canada's growth and to this country leaping ahead with the rest of the world.
We need to be integrated with the world. We need to be using the world's technologies. We need to, again, just have a sense of our own resiliency and our ability to meet this challenge together with, frankly, the bare minimum of intervention.
Use something like the Hippocratic oath. Do no harm, indeed. To step beyond that, as we have in other ways through large-scale regulation of the Canadian economy, especially in my own industry of journalism.... I think we're starting to see the pernicious effects of that approach.
:
It's a moment of profound dislocation. There will be winners and losers as we move through, just as when we moved through those other eras of dislocation.
The point was that we left the horse and buggy behind. We're not driving horses and buggies today because we understood that industry was supplanted by a new industry—the internal combustion engine. I think we're in the middle of a transformation like that now. We went from making horse tack to having automotive part plants in Ontario. We will go through this. We will come out the other side.
I would just go back to maybe a challenge to the committee. What are your core principles? I would say competition, fairness and transparency would be some good ones to start with.
I think that trying to protect entire industries, trying to stop the dislocation and trying to stop the disruption is a difficult and expensive task. It will have all kinds of unintended effects and consequences down the road, as we're seeing with the Online News Act now. All the recipients of the Online News Act, in a sense, as one of your colleagues said, are under the thumb of Google because they signed away all their rights to their intellectual property as a result of receiving supports through the Journalism Collective, which is managing these funds on behalf of the government and Google.
:
Your question is both very local and very global. That's how I would see it.
I am delighted to see a representative from the Coalition for the Diversity of Cultural Expressions here today. We are part of this coalition. We are a management company focused on reproduction rights related to texts and images. Artificial intelligence is therefore of great concern to us. However, all sectors are currently affected by artificial intelligence. Earlier, we heard questions about music, audiovisual media, books and illustration, to name just a few sectors.
The stakes are enormous. I am therefore unable to give you a definitive answer to your question. Once again, I refer the question to lawmakers, who can help us regulate a market that currently has no legislative framework, since the former Bill died on the Order Paper. As my colleague so aptly put it, when the time came to analyze part 3 of that bill, which dealt with the issues that concern us, we, the representatives of the cultural sector, were the last to be called upon to present our comments. It is as if the phenomenon under study mainly concerned very large technology companies and culture was something marginal.
In this regard, we would like to thank you for conducting this study on creative industries and creators. Today, we want to focus our discussion on one point in particular. Individuals, authors, creators of all kinds and creative industries of all sizes and locations, including your own, must be able to work with the elements discussed in our presentations: obtaining authorization from creators, paying compensation to those creators and ensuring transparency. If we have these keys in hand, we will have the tools to help us move forward. Of course, there will still be challenges to overcome.
:
First, the value chain is broken. It's not that complicated: It's theft. When theft occurs, someone loses out and no longer receives income for the work they have done. Not only is the work stolen at the source, but new competition is also created in the market. So the loss is twofold. That's the first challenge.
Then there are all kinds of challenges that compound those facing the diversity of cultural expressions, and these are social challenges that should concern us all. Consider the data used for training: Is there any bias in this training? You can obviously hear my Quebec accent. Of course, there are also francophone minority communities who are concerned that their culture is not recognized in generative AI models.
I'm not sure if I mentioned that the coalition is quietly building bridges with indigenous organizations. There are some extremely inspiring projects, including the Heritage Lab, which is developing a closed-source AI model based on Inuktitut data, I believe, drawn from the education system. We are in the process of building generative artificial intelligence that will enable translation, revitalize an endangered language and provide access to heritage through a system developed by the communities.
If we rely solely on large companies, minority cultures will not be represented and will not be treated adequately in terms of socio-economic conditions.
:
Thank you very much, Mr. Ntumba; your time is up.
[English]
I think we'll have time for a third round because we started this committee late. I do have a question before we move on to the next round.
Mr. Griffiths, I was really interested in your introduction when you said that 60% of Google searches aren't getting any clicks anymore and that people are just using AI to access the information, and they're not going beyond that. It reminded me of a story I heard from an author of The Beaverton, which I'm sure you know is a parody news site written for humour. That author's work was being used, and I don't remember the exact details, but it was something like P.E.I. being 13 minutes off the rest of the country because of some quirk of daylight saving time. It was clearly written for humour, but AI was directing people to that article as proof.
If people aren't clicking through, and they're not checking their sources, is this not a big danger to our society? Are we teaching people not to think for themselves, to just trust the bot and not check sources? I will give you a minute to respond.
[Translation]
If other witnesses wish to add anything, I would like to hear from them as well.
:
I'll be brief to allow time for my fellow witnesses to weigh in on that, too.
Accuracy is an issue, and there are many cases where the citations or sources in these AI Overviews are simply not accurate. The AI either is hallucinating or is pulling from a source that is, as you say, parody. It's not a news source.
I think there has been improvement over the last year, and some of the companies—maybe Google, notably—should be acknowledged for addressing this problem and understanding that it is a problem.
I'd go back to the fundamental point, though, that this is powerful technology. The reason people aren't clicking through is that it is performing a service. It is, in a sense, sparing you from having to go to all of those websites and read everything in them. Yes, doing that yourself would be wonderful. There might be all kinds of serendipity, and you would learn all kinds of different things that you didn't know when you started that search journey or process, but I think we need to understand that this technology is here to stay precisely because 60% of searches are now zero-click. That is a sign that people are embracing the efficiency of this technology and the service it is providing.
We are in this world now. We have left behind the old world of the top 10 ranked search results for you, those blue links that we all clicked on for about 25 years. That is gone, and we are increasingly in a new world of AI Overviews and of, in a sense, these platforms now acting as publishers. They are not simply searching out the information and cataloguing the information. They are now stepping in—very much like the role of my organization—to synthesize that information for you and present it to you. That is the brave new world that we are in, and I do not think that we are going to leave it any time in the future.
:
Thank you very much, Madam Chair.
Once again, I would like to thank all the witnesses. This has been very interesting.
I was in Toronto with Mr. Ntumba last week to attend the conference of the Association canadienne d'éducation de langue française, or ACELF, which is dedicated to French-language education in minority communities. During one of the presentations on artificial intelligence, it was said that in order to achieve good results, it is essential to ask the right questions at the outset. Otherwise, we can be led astray, meaning that all the background algorithms end up telling us what we want to hear, because the answers are constructed as we send queries to the system.
You just mentioned that, Mr. Raab.
Earlier, you also said that Copibec paid out $13 million in royalties. Did I understand that correctly? Which sector is that exactly? It seems to me that it's not very much, considering the number of artists. Can you elaborate on that?
:
Winners and losers in competition will be an ongoing problem. We discussed this afternoon the scale and the size of the capital infrastructure and other investment required to maintain and operate these large language models. These are now multitrillion-dollar companies. AI chips have roughly a 12-to-18-month shelf life before they have to be completely replaced by the next generation of technology, if, indeed, chips continue to progress.
My point is that there is massive capital investment that we're seeing, and it needs to continue for the foreseeable future for this technology to exist, and we'll possibly see it improve. That means that I think we have to understand that we're living in a reality whereby the large incumbent tech companies are to a certain degree protected. They have a moat around them as a result of their scale, their size and their ability to create this technology and then distribute it to us at scale. Our competition focus is probably best targeted on ourselves and our own industries inside Canada instead of trying to figure out how we're going to limit or encourage competition on the part of these large multitrillion-dollar multinationals.
We all know that there is a series of trade negotiations under way right now. There's a U.S. administration that clearly takes a very favourable view of technology as one of America's pinnacle industries, one attached to its geopolitical significance to the extent that it is now competing with China.
I mention all of this not to make our discussion overly complicated or take it off into other directions but to try to be realistic about where I think government can and should have an impact in Canada. I think that's with regard to individual industries and hopefully seeing this technology as an opportunity to bring competition to industries. This is not the finance committee, but I would urge the finance committee to look at how these technologies could empower fintech and empower greater competition within our financial sector. Similarly, within telecommunications, we are going to have all kinds of changes, new consumer products and new ways of delivering and sharing information that might also be able to bring greater competition to that industry.
I would look at this technology as an opportunity to encourage and foster competition to the benefit of consumers, creators and the industries themselves.
:
Thank you very much, Madam Chair.
Thank you for this conversation. I'm finding it fascinating.
I want to say a couple of things. It seems to me—and I think we're all going to be discussing these points—that there is a point at which AI is a tool, and then there is a point at which AI becomes the craftsman itself. There is a point at which it becomes the search engine, and then there's a point at which it becomes the journalist, and this is the tension we're talking about.
I thought that Monsieur Généreux's question about whether one can use it and be an artist was super-interesting. I really like the question, because I think we want to be clarifying when it is a tool and when it is the craftsman itself.
I think about the 1980s, when hip hop came around, and people said, “That's not art. They're sampling records.” When we look at it now, of course it was art. It was transformative. Sampling records and those things were big, and they changed the way we made music and listened to music, so I think it's important for us to be thinking about these issues in a way that points to them as tools. We can't just throw up our hands and say, “No, no.” We need to move things forward.
Artists will adapt. They always adapt. That's what they do the best. That's what they did in the 1980s with hip hop, and hip hop changed the whole world.
In the world of regulation, how do we make these fine points between when it is a tool...? Maybe it's through transparency, as you were saying. When is it a tool, and when does it become the craftsman itself?
[Translation]
Sorry I only spoke in one language.
Actually, my questions are primarily for Ms. Desrochers.
:
Thank you for the questions. I will ask my colleague Ms. Beaulieu to answer.
Before she does, however, I'd like to say that there's a time for legislators to pass laws, and a time to make decisions, like looking at artificial intelligence as a tool.
I've been to many conferences on the subject, and the creator's intent comes up a lot. That's why proof is important. Creators must be able to show their creative process versus just generating something. Our position is very clear. We agree that artists, creators and cultural businesses can use artificial intelligence as a tool. The idea is not to prevent them from doing so. When we talk about transparency in using artificial intelligence, we're really talking about works generated exclusively with AI.
When it comes to art, boundaries are complex. I think a lot of people will look at similar cases and want to know the creator's intent to establish those boundaries. However, that shouldn't stop us from moving forward in defining what is simply AI generated. As an example, Spotify was flooded with purely synthetic music whose sole purpose was to save the company money at the expense of creators. That's what we're trying to avoid.
:
I am astounded by my colleague's answer.
I don't want to infringe on any copyright, so I won't use any specific work as an example. That said, your question is hard to answer, because there is no one-size-fits-all answer.
In case law, the exercise of skill and judgment is what defines an original work. Let me give you an example. Stakeholders in the cultural industry have agreed not to consider outputs (that's what I call content generated solely by AI) as works. I'm a bit of a stickler when it comes to vocabulary, so I differentiate between a work and content, and AI-generated content is not protected by copyright.
Furthermore, we already have the necessary tools to define what is an original work when considering whether content is a tool or a creator's work. That means we'll have to come to decisions on various situations and sets of facts. Unfortunately, there's no magic wand for that. However, content created only with artificial intelligence output is not considered a creative work, so it's not protected by copyright. That's what we're saying.
That raises another question. We know who owns the copyright when a work is protected. There's a whole system in place, for which I have a soft spot, based on the Copyright Act. However, unidentified works, so outputs that have no author, fall in a different category.
You were asking about the tool. As I've mentioned, it's really to what extent the tool is being used that needs to be defined. Court decisions have often been used to determine which category the tool falls in.
That's my somewhat complex answer.
:
Thank you, Madam Chair.
Mr. Anctil, when radio came out, it was a major technological change. Many felt threatened, but over the years, it proved beneficial to the cultural and information industries.
The same thing happened with television. Everyone thought that it would be the end of radio, that it was a thing of the past, but both found their own space. At first, people were scared of television, but it turned into an extraordinary and indispensable tool for the cultural industry and still is to this day.
It was the same when the Internet came out. People were scared of the big Internet monster and wondered whether it would mark the end of television. Obviously, we might need to review the model, and that's been another topic of discussion over the last few years.
Now, it's artificial intelligence. Do you think the same thing will happen? Could the cultural industry eventually survive with artificial intelligence everywhere, including the creative sector? Will our artists, artisans and content creators in the cultural industry be able to create and make a living using artificial intelligence? Do you see a scenario where that would be possible?
:
There's a common thread in all the media, information technology and communication innovations you've just mentioned. They all led to new professions, and new ways of communicating and staying connected to each other, as humans.
The Internet is different, though, because it's dominated by a small group of players. We heard an example of that earlier during the long conversation with Mr. Griffiths on Google. Fair competition with Google is impossible, in my opinion. If I buy an Android cell phone, by definition, I'm in the Google environment. Whether it's through search engines, online software or social media, the American oligopolies keep us trapped in their environments. The evolution of artificial intelligence in those environments is controlled by a small group. That's the problem.
To protect cultural diversity, allow people to discover the content this diversity produces, and ensure that our cultural and media industries can compete globally, we'll have no other choice but to create our own infrastructure, as we've done in the past. The audiovisual and radio industry developed its own physical infrastructure, and we control it. We don't have much control over the Internet. More importantly, we don't control what artificial intelligence relies on, meaning the servers.
Considering the evolution of artificial intelligence, without digital sovereignty, I have very little hope for the survival of Canada's cultural sovereignty.