Welcome to meeting number eight of the Standing Committee on Canadian Heritage.
Pursuant to Standing Order 108(2) and the motion adopted by the committee on Monday, September 22, 2025, the committee is meeting to study the effects of the technological advances in AI on the creative industries.
Today we have with us, from Access Copyright, Erin Finlay; from the Association of Canadian Publishers, John Illingworth and Brendan Ouellette; from the Canadian Authors Association, Travis Croken; from Cultural Careers Council Ontario, Diane Davy; and from the Fédération culturelle canadienne-française, Marie-Christine Morin and Sven Buridans.
Thank you for joining us.
From Meta Platforms, we have Kevin Chan and Rachel Curran.
It's nice to see you again.
We'll give each organization five minutes for opening remarks, starting with Erin Finlay from Access Copyright.
Madam Chair and members of the committee, thank you for inviting us to appear today. Access Copyright is a not-for-profit copyright collective that was founded in 1988. Since then, we have licensed the published works of more than 14,000 Canadian writers, visual artists and publishers, returning over $500 million in royalties to the creative ecosystem. These royalties ensure that Canadian creators and publishers are compensated for the use of their copyrighted material and can reinvest in new publications that inform, educate, entertain and reflect the diverse experiences of Canadians across this country.
Access Copyright supports the development of a strategy that fosters a fair, safe and ethical AI ecosystem, one that recognizes the importance of human ingenuity to our society and our Canadian culture. Like organizations across creative industries, we believe AI uses must be authorized, remunerated and transparent. You heard about ART previously in these meetings.
I want to focus on three key points today.
First, do not introduce new exceptions into the Copyright Act. AI models do not create from nothing. They copy from human creativity—from books, journals, magazines, newspapers, songs, images and countless other works. Text and data mining and the training of large language models engage creators' exclusive rights and require licences. Calls for new exceptions are both unfair and unnecessary. AI innovation can and should coexist with a system that incentivizes creators and protects their rights. However, big-tech companies and small-tech companies are profiting from the unauthorized use of creative works to train their AI systems. Now they're asking the government to legitimize that behaviour with new exceptions. AI is fast and powerful, but speed and scale cannot replace fairness, consent or respect for creators' rights. AI innovation cannot be a shortcut to ignoring the people who create, protect, sustain and proclaim our culture. After all, culture is what makes Canada Canada.
Second, enable a rights market. Introducing new exceptions would undermine emerging rights markets, create uncertainty and harm the creative industries. Voluntary licensing, by contrast, is feasible, desirable and already happening. We've seen multiple licensing deals between copyright owners and AI developers recently, including HarperCollins and Microsoft in November 2024; The New York Times and Amazon in July 2025; News Corp, Time, Axel Springer and others with OpenAI; and, most recently, the landmark Bartz v. Anthropic settlement coming out of the U.S. In addition, copyright collectives, such as Access Copyright, in the U.K., the U.S. and Australia are also offering voluntary collective licences for AI uses and training.
These examples prove that the voluntary licensing model works. We need a market built on voluntary licensing. Voluntary licensing lets creators retain control, receive fair remuneration and know when their works are used. It also provides AI companies with the rights they need to do the work they do. It makes innovation fair, sustainable and legal, respecting the value of human creativity while enabling responsible AI development.
Third, require transparency. AI often operates as a black box. Creators cannot see if or how their works are used, and users cannot tell what is human-made versus AI-generated. Platforms must disclose both the works used in training and which outputs are AI-generated. Transparency allows creators to verify use, ensures accountability and builds trust. Without it, Canada's creative industries face information asymmetry in licensing their rights and impossible evidentiary burdens when enforcing copyright. Building transparency obligations offers a practical, balanced solution.
In conclusion, Canada can lead globally, but only if we insist on authorization, remuneration and transparency—no exceptions, no free riding and no guessing. Voluntary licensing protects creators, encourages innovation and ensures that AI evolves in a way that respects the human creativity at its core.
Madam Chair and committee members, I am grateful for your invitation and the opportunity to share the Canadian-owned book publishing sector’s views on this issue with you today.
As this committee is aware, the writing and publishing sector is currently locked in litigation around the world with developers of large language models, LLMs, over the unauthorized use of pirated book collections, or shadow libraries, in the training of their products. These shadow libraries contain hundreds of thousands of in-copyright titles, including thousands by Canadian creators. They offer AI companies an easy, expedient, unethical and arguably illegal route to make their models more robust and expressive. The result has been an unprecedented industrial-scale extraction of commercial value from the collective published work of humanity without any compensation flowing to its creators and facilitators, the authors and artists, and the businesses with whom they’ve partnered voluntarily to bring their work to the public.
In the United States, this litigation is beginning to translate into colossal settlements. In the Bartz v. Anthropic case, Anthropic has agreed to pay $1.5 billion U.S. to rights holders for its unauthorized use of their works. That amount hints at the scale of what has been misappropriated during the development of large language models. The real value across all AI developers is massively higher.
Books, especially those that have been through a traditional publishing process, are of tremendous value for AI training. A large language model is only as good as the works it has been trained on. These models are economically valuable not because of new technology alone; it is the combination of technological innovation and vast repositories of cultural expression that makes them powerful. It is unjust that the technological innovators should be rewarded while the cultural producers, without whom they would have no product to sell at all, are cut out of the deal.
We maintain that the use of a copyrighted work for the purpose of AI training is a licensable right. Canada’s publishers, and the authors without whom we would have no business, are ready to come to terms with the developers of AI. Not all authors will want to participate in such a transaction, and that is their right, but we have seen licensing models emerging in the U.S., the U.K., Australia and elsewhere, and we are ready to do the work necessary to ensure that Canada’s creators share in the wealth their artistry is already generating for the tech sector.
As such, we too implore this government and the opposition parties to avoid disrupting this emerging market. No new exceptions to copyright should be entertained. AI training must be based on those principles of authorization, remuneration and transparency.
Of course, there are impacts on our industry that go beyond copyright concerns. We are already witnessing one consequence of the advent of LLMs in a deluge of poor-quality, AI-generated books on major distribution platforms.
An Amazon.ca search for “Mark Carney biography”, for example, brings up a slew of purported biographies of our Prime Minister. Many have AI-generated cover art, and some rank higher in search results than his own Value(s). Not all of these books are selling, but some are, and the average consumer has no means of distinguishing a properly researched book from incoherent slop until they buy it. Putting transparency obligations on AI platforms to help Canadians identify what is AI-generated will help build consumer trust.
Finally, I'd like to raise the issue of competitiveness in the cultural industries. We have learned that some or all of the so-called big five publishers—the global corporations that produce the overwhelming majority of books sold in Canada but publish an elite minority of Canadian writers—are developing bespoke AI-powered tools in-house. Good for them. They should be doing that, but the fact that the Canadian-owned publishing sector, which is composed primarily of SMEs, is not in a position to make comparable investments in research and development means that an already uneven competitive playing field will become even more tilted against the domestic industry unless steps are taken to enhance our own capabilities.
This is a situation in which cultural sovereignty and AI sovereignty are closely linked. Canada’s domestic cultural industries—the businesses that do the hard work of discovering Canada’s writers and artists and putting them on the global stage—need a cultural AI strategy that centres the interests of Canadian creators and cultural workers. What would that strategy include? That’s up for debate, but we can make a few suggestions, including compensation for past, present and future use of copyrighted works; a legal framework that doesn’t undercut emerging rights markets; and selective investments that support the Canadian community, competitiveness and culture.
Our publishers need AI that works for them and helps them be stronger engines of Canadian culture, not AI that harvests their works as a way to replace them.
Thank you, Madam Chair and members of the committee, for the opportunity to contribute to your study on the impact of artificial intelligence on the creative sector.
My name is Travis Croken, and I'm an author and the national co-chair of the Canadian Authors Association.
Artificial intelligence is transforming the world around us, the creative sector included. While it offers tools that can assist artists, writers and other creatives, it also poses serious risks, threatening to undermine cultural diversity, intellectual property and ethical rights, and the creative market.
I will centre my remarks on three key points.
The first is intellectual and ethical rights. The Canadian Authors Association was created over 100 years ago, largely to advocate for the protection of authors and copyright. Here we are still discussing the same issues on a much larger scale with little room for error. Copyright was created to ensure the protection of creative works and to ensure that creators can continue their work. Why pour time, heart and soul into a project if it can be disseminated without appropriate compensation and control?
The use of copyright-protected materials to train AI datasets counters the intended purposes of the Copyright Act. Worse, it limits the author's control over their work and its ethical use, and it hinders their compensation, all for a system that will later threaten their livelihood, oversaturate the market and potentially damage their reputation and style by mimicking their voice or using their words in a manner the author does not condone.
The second is the impact on the creative. Much like a painter's brush stroke, an author's voice, writing style and creative concepts are unique and create an identifiable brand for the author. Artificial intelligence can mimic an author's voice and can flood the market with books strikingly similar to their novels, creating an unjust competition for the author.
It also creates considerable ambiguity. If an author uses artificial intelligence in their novel, how does the reader discern which parts of the novel were created by the author and which were developed by artificial intelligence? This undermines the reader's confidence in the author. If artificial intelligence is used to create a novel and directly copies from another author's work, who is liable? Is it the author, the publisher or the creator of the artificial intelligence system?
Further to this is the time sink that is created. Writing a novel can take years, including multiple drafts and edits, working with publishers and doing marketing and promotional tours. Unless the author is exceptionally well known, the royalties are not enough for them to quit their regular jobs. Artificial intelligence has now added further steps to this process: fighting for their rights and fair compensation, trying to ensure their works are not used illicitly and trying to navigate this new and uncertain era. Further, the use of artificial intelligence in writing risks impairing an author's ability to create, diminishing the creative muscle as it is used less.
The third is cultural diversity. Canada thrives on its cultural identity, and it has fought hard over the years to protect it and to ensure Canadians are treated fairly in the market. If artificial intelligence systems are allowed to be trained with an author's work without their permission or knowledge, are allowed to diminish an author's financial gain from their work and are granted copyright-protected status where it is not needed—as artificial intelligence systems do not require any incentive to create; they only need commands—we risk losing the creatives we hold so dear to our hearts.
If our authors are not protected and granted security to defend their livelihoods, we run the risk that they stop creating, not out of spite but because they cannot survive. This would leave a dearth of Canadian cultural diversity to be filled by foreign creators or by artificial intelligence systems that do not create anything new but simply recycle and reword what has already been created before.
In conclusion, Canada should ensure that the guidelines and rules created to govern artificial intelligence ensure that our human creatives, our cultural heritage now and in the future and our ability to stay on the world stage of creation are protected as a distinct and valuable contribution to our economy, our culture and our future. Consent, fair compensation and transparency must be included in any governance created. We have a choice: Do we want the future legacy of our cultural heritage to be created by humans or by machines? It is my opinion that we have one opportunity to get this right. Artificial intelligence already moves at a daunting pace, and if we misstep now, it may be too far ahead of us to catch up.
Thank you.
I am looking forward to answering any questions you may have.
Thank you, Madam Chair and members of the committee, for the opportunity to speak today.
My name is Diane Davy, and I am the executive director of Work in Culture, which is the popular name for the Cultural Careers Council Ontario, which is quite a mouthful.
Work in Culture is a non-profit arts service organization. Its mission is “to advance the careers of artists, creatives, and cultural workers from diverse lived experiences, and support the organizations that engage them, through entrepreneurial and business skills development and innovative research.” We are best known across the creative sector for our job board, which is the most popular arts and culture job board in the country. In addition to the job board, we develop and deliver a wide variety of training programs, both in-person and virtual, and do related research on an ongoing basis.
We recently published a report, “AI for Administration in Ontario's Creative Industries: A Snapshot of Current Use, Concerns, and Considerations”, which seeks to explore and understand the specific potential of AI tools for business operations and administration in Ontario's creative industries. The study asked how organizations and individuals in film and television, book and magazine publishing, music, and interactive digital media are using generative AI to streamline tasks and manage day-to-day demands and asked whether AI is helping to alleviate the pressure to do more with fewer resources, a pressure that faces all of us in the arts. The report focuses on how AI trends can be used to help the predominantly small and mid-sized enterprises that dominate the sector, but it also acknowledges the challenges and ethical concerns of working with tools that have been built using creative content without permission or recompense. We are supporters of strong copyright policies that ensure that rights holders maintain control over their works and receive the fair compensation that they deserve.
While the focus of our report is Ontario's creative industries, the findings are likely to resonate with cultural workers and small businesses that are navigating similar operational pressures across Canada. The research is intended to help creative organizations situate themselves within a rapidly evolving landscape, to gain insight into how their peers are approaching AI and to reflect on their own values, needs and readiness. At the same time, it supports a broader understanding for sector leaders of how AI is currently being used in practice and where knowledge gaps, barriers and opportunities remain.
Work in Culture, with its training mandate, specifically provides the following recommendations: build foundational AI literacy training to equip creative professionals with a baseline understanding of how AI systems work and their implications; support the development of workplace AI policies to help organizations create clear, responsible, ethical guidelines; and provide ongoing training on critical issues like data privacy, algorithmic bias and effective use strategies.
Since the release of the report, we have been getting more and more responses from the community on the need for this kind of training, along with concerns about the ethical issues. We will be presenting the report in person at an event on October 18 in Toronto—if anyone is there and would like to attend, let me know—and we expect additional feedback and insights at that time.
The Work in Culture team of four, which is typical of many of the small arts organizations in the community, recently offered itself up as a guinea pig in a pilot training program working with Skills for Change, an agency that works to enhance skill sets, opportunities and access to good work for newcomers and underserved groups across Canada. The pilot program, which combined a series of virtual modules created by Google with several in-person sessions by Skills for Change, has given us a model that we feel we can build and adapt for use across the Canadian creative community. We continue to look for opportunities, resources and partnerships to build, develop and deliver this much-needed training across the sector while we work to make appropriate use of AI's potential to enhance and augment our own internal capacity and help us serve our community. We would welcome the development of a national training strategy that would help our Canadian creative community make the best of the opportunities offered by AI within an ethical framework.
Madam Chair and members of the committee, my name is Marie‑Christine Morin, and I am the executive director of the Fédération culturelle canadienne-française, also known as the FCCF. I'm joined today by my colleague Sven Buridans, director of innovation and digital partnerships. I would like to thank the committee for inviting us to testify.
For nearly 50 years, the FCCF has been the national political voice of the artistic and cultural sector of the Canadian and Acadian francophonie. Our sector plays a major economic role in Canada, accounting for more than $5.8 billion in gross domestic product and generating more than 36,000 jobs across the country in 2022. That's how important it is for local economic development and job creation.
Last month, we delved into the topic of artificial intelligence, or AI, at the All In event held in Montreal. Our exchanges with key players in the Canadian ecosystem confirmed two things: first, a genuine interest on the part of the technological community in cultural issues; second, the concerning realization that arts and culture are still absent from AI funding channels. Non-profit cultural organizations, which carry out a public interest mission, don't have access to funding programs like Scale AI's. By neglecting the arts and culture sector, Canada is missing out on a critical innovation hub and its creative, ethical and critical perspective on AI.
Earlier this month, we were also at the UNESCO Mondiacult conference, the United Nations Educational, Scientific and Cultural Organization, in Barcelona, alongside the Coalition for the Diversity of Cultural Expressions, or CDEC. This historic meeting led to a final statement, signed by over 120 ministers of culture around the world. It lists AI as one of the priority areas of action for states. This statement is consistent with the vision of the CDEC and the FCCF. It commits states to promoting the discoverability of multilingual cultural content, protecting copyright and involving the cultural sector in AI policy development.
We are concerned about the Government of Canada's response to this positioning. No representative of cultural industries sits on the AI strategy working group, which was created by Canada on September 26. We ask for significant involvement of the cultural sector in the development of AI policies and systems.
In the meantime, we need to take action and equip artists and cultural organizations on the ground. This fall, the FCCF will launch its national digital strategy, Impulsion 2025-30, which will mobilize its network around four major initiatives: taking action on public policy; strengthening digital skills and capacities; developing new structuring alliances with federal institutions, Quebec and the world; and supporting research and innovation.
This strategy will position culture as a common thread in our collective efforts to put arts and culture at the heart of issues of discoverability, infrastructure, digital sovereignty and, of course, artificial intelligence. However, this transformation will not be possible without clear and sustainable federal support. Current investments of $2.4 billion in AI need to go beyond the private sector. They must also support the francophone arts and culture sector. We are asking Canadian Heritage and federal cultural institutions to work with Innovation, Science and Economic Development Canada and Employment and Social Development Canada to make their innovation and training programs accessible to our sector.
We met recently with ministers Steven Guilbeault and Evan Solomon. Minister Solomon referred to a “Gutenberg moment”, saying AI is transforming our cultural markers, just like printing did for knowledge. He pointed out that culture is at the heart of Canadian identity, and we agree totally with him. Our creators and institutions must be trained, supported and equipped for this transformation to be inclusive.
Finally, it is essential to train AI models in French using representative data from our communities, to promote the diversity of francophone cultural expressions in Canada. To ensure the consistent, inclusive, open and safe development of artificial intelligence in support of creation, the cultural industry must be part of the conversation and the future digital direction.
Thank you for your attention. I will be happy to answer your questions.
My name is Kevin Chan. I am the director of public policy at Meta.
[English]
I'm here with my colleague, Rachel Curran.
Meta employs more than 3,000 people in offices across the country, including in our AI lab in Montreal. Most Canadians use at least one of our family of apps to share with family and friends, discover businesses and connect over things that interest them. Our apps empower hundreds of thousands of Canadian businesses, artists and creators every month to reach new audiences and grow. Approximately 98% of the Canadian businesses using our platforms are small businesses and 55% are female-led.
Meta is also investing significantly in foundation AI models and generative AI. For example, Llama, our AI foundation model, is the leading open-source model, with over one billion downloads today.
Just as anyone can freely use our family of apps to connect and create, we believe AI technology should be accessible to all. At Meta, we have employed an open-source approach. That means making our AI models like Llama freely available for anyone to download, use, modify and build upon so that researchers and businesses of all sizes can customize and deploy these technologies in any environment without restrictive licensing or costly barriers.
This approach ensures that innovation, safety and opportunity are distributed as widely as possible, not concentrated in the hands of a few.
[Translation]
Research suggests that AI has the potential to inject $180 billion annually into Canada's economy by 2030. We know that AI helps countries grow in a competitive global economy. We also know that, along with Canada's persistent productivity gap relative to other OECD nations, Canadian organizations lag behind their global counterparts when it comes to commercializing AI. If Canada is going to realize its full potential as an AI leader, it must create policies that prioritize innovation and encourage investors to seize the moment.
(1700)
[English]
Open-source AI should be a key part of Canada's AI strategy. It gives Canadian governments, businesses, creators and indigenous communities access to world-class technology without the cost burdens. It increases transparency and safety and results in more widely distributed benefits. Most importantly, it is the key to building truly made-in-Canada AI solutions.
Open-source models will be important for Canada as we seek to build our own AI stack, because it helps address concerns about building independent AI capabilities. Meta’s Llama models, for instance, are free to download so that anyone can build innovative new applications on top of them while protecting data and privacy.
These models can be run on local infrastructure and do not require data to be hosted elsewhere or shared with us. Organizations that handle sensitive data, especially public sector organizations, need high degrees of security and often can’t send their data to closed models over cloud APIs.
In one example, the time-constrained North Dakota Legislative Council convenes for only 80 days every two years. In its 2025 legislative session, it piloted an AI solution using Llama to help review and summarize more than 1,000 draft bills. The solution runs 100% on premises, on secure local hardware, to ensure maximum data security and control.
Here in Canada, we were pleased to recently learn that federal departments are already using Llama to power solutions that are made in Canada and are secure, with government data staying within the Government of Canada.
We believe that AI sovereignty does not mean closing ourselves off from using models and products developed elsewhere. It's sovereignty, not solitude, as the Minister of Artificial Intelligence has said. It means taking frontier technology, regardless of origin, and adapting and refining it to best suit Canadian interests. Canadians can benefit enormously from the investments that companies like Meta have made in open-source frontier models.
Artists, musicians and other creators use our AI tools as creative partners. These tools help to automate repetitive tasks like editing and building content calendars, and they help those same creators reach new and bigger audiences through personalized recommendations and content optimization. Our goal is to empower people to tell their stories, build their businesses and connect with their communities in new ways.
Thank you for your time. We look forward to working together for a prosperous Canada that brings the benefits of AI to everyone.
I was referring to more open-source AI increasing transparency.
The way we have gone about ensuring that we democratize this technology is by making our models freely available to anybody. That allows anybody—a large government agency, a business or a not-for-profit—to download a version of the model. They can run tests on it locally. They can poke and prod it. We publish model weights so that people have a better understanding of the nature of the model. Of course, they can then take it and fine-tune it to customize it for their particular needs.
Talk to me a bit about the specifics of how AI is currently being used by artists or creators within Meta platforms in order to support their efforts and reach further audiences.
I just spent some time with a round table of creators and artists who are using AI. You may have seen, as well, that The New York Times had a recent article about how artists are using this.
If we think about technology as an enabling tool for creativity and creators, there has been a boom of new use cases, new ways of having creative outlets to bring to life new kinds of ideas and new kinds of sources of expression—
First, as I mentioned earlier, they are using our platform to grow their audience, discover new fans and new communities. That is built into the nature of the platform.
Second, I think there are a lot of artists who are using open-source and closed source models to integrate them into their art. There are lots of installations across the world where artists who are at the leading edge of their work are using data visualization tools powered by open-source models like Llama to showcase new and creative ways of expressing themselves.
One of the interesting things taking place within Meta platforms, specifically on Instagram, is using AI for age verification. Talk to me a bit about how that's being done.
We build AI systems that help us identify underage users on our platforms. They read signals from users, whether they be the content they're interacting with or their friend networks, and look at things like birthday posts, of course, with identifying information removed. The systems look at a variety of signals to determine whether someone is underage or not.
If we think a user is underage—under 18—we proactively place them in a much more restrictive experience that we have built for youth, and we require that they prove to us that they are over 18 before we let them out of it.
Our AI systems are helping us determine whether users are in the right experience for their age or not.
We have a variety of methods. We work with a company called Yoti in the U.K., which makes facial recognition technology to predict someone's age. People can also submit a piece of ID to us that proves they're over 18.
We really want to make sure that people are in the right experience for their age, so we go through a pretty rigorous process to identify them if we think they're underage.
Obviously, this is done in a privacy-protecting way. We use suppliers, like Yoti, that do not store the information they take from users. My understanding is that the information is destroyed immediately.
We also don't want to be collecting personal ID from our users, which is why we have proposed what we call “app store legislation”. We have suggested that age verification should be done at the app store level. When users set up their phones, Google and Apple are getting reliable information about someone's age. To date, 20-plus states in the U.S. have introduced or passed app store legislation. We're saying, “Send us that signal about whether a user is under 18 or not, and we can ensure that they're placed in the right box and the right experience.”
We are advocating strongly for app store legislation as part of the government's online harms legislation.
Jumping back into the more creative side, I guess, AI Studio is used to enhance creativity on your platforms. Talk to me about how that's being used to reach new audiences and increase effectiveness or efficiency within the sector.
I think historically we have seen that new technology has always preceded a boom in creative industries. Following the printing press, we saw the growth of the novel, for instance. I think new avenues of creative expression that we have not even seen yet will come as a result of this new technology. I think the opportunities are enormous for Canadian artists.
Before doing this, I was a Canadian artist for 20 years. I was a songwriter.
One part of this conversation that's really important is to distinguish between when AI is a tool and when AI is itself the creator. It's really important that we think of artists as entrepreneurs, as small businesses. They are very agile. They have moved through massive technological changes. I agree that they will use these new technologies as a tool.
That's not what people are resistant to. That's not what we're talking about. We're talking about how we can empower artists as small businesses. When we empower artists as small businesses, one of their great assets is their copyright. We have a system in the market to license copyright, and it's been developed over a long time. This is what we're trying to figure out: can a similar model be applied to the new world of AI and generative AI, where I can type in “write me a David Myles song” and it sounds like my other songs and uses my voice, but I had nothing to do with it other than my previous songs? It's about transparency and licensing.
The other question that needs to be brought up is opting out and this idea of, well, if you're not up for it.... Certainly, we respect the IP of other small businesses, such as pharmaceuticals. What does opting out look like in terms of giving people the option to say it isn't for them and they don't want to do it?
I'll start with you, Erin.
Do you have any thoughts on how to position ourselves in a place where we can empower? We don't want to stifle technology. We want it to be used as a tool. I don't think anybody's a Luddite here. Artists certainly move with the times. How do we protect their assets, and how do we embolden them as small businesses?
I agree with you wholeheartedly that, first, artists are using AI as tools in the creative process. I think there's no question about that. Some artists will use them more than others. Some artists will choose not to use AI tools. That's everyone's prerogative.
In terms of empowering artists as entrepreneurs or otherwise to, number one, keep the copyright intact, I mentioned a number of licensing initiatives that are already happening in the market. Artists make a living by licensing their copyrights. We need to make sure the copyrights remain protected and strong and there aren't exceptions introduced into the Copyright Act that undermine their rights, that undermine their ability to make a living and that undermine this burgeoning licensing market. We're already seeing licensing examples of big rights holders, for lack of a better term, with big platforms. They are happening in the market. They're coming together and negotiating fair licensing terms on what works for them.
There are also numerous examples of collective licensing, such as what Access Copyright does, happening around the world. I mentioned Australia, the U.K. and the U.S. Access Copyright is certainly looking into that as well. How might we collectively license the vast repertoire that we represent to platforms that need the reproduction rights or the communication rights in those copyrights of the affiliates we represent?
What about transparency on the side of what's been ingested and what's in the output? Have you seen in any of these models that they're able to give a recipe for what's in the work being output? That's what I'm curious about. What's the capacity in terms of transparency? Do we have a recipe that might involve many, many works?
In the output as well. There are examples popping up. We've seen legislative solutions in the EU. Some of them are better than others. Transparency in terms of the ingestion of works into the training of a system is critical for artists to actually know what's being used and what they're able to license. Without that information, artists are left completely in the cold: Should I license? Can I license? Is it being used? I don't know.
We do need some transparency obligations on the platforms so that artists are able to discern when their works are used and when they're not. That opens up a discussion for negotiation.
You had a question about opt-out as well. Opt-out flips copyright on its head. Copyright is an opt-in system. I choose to license my copyright. These are the rights that I own. Requiring creators or artists to opt out of a system using their works completely upends the copyright framework as we know it across the world and our international obligations as it applies to copyright as well.
I'm trying to figure out how possible this is. How far along are we in these other jurisdictions? Are we dreaming that these models can actually give us this information?
Looking retroactively at what's been done, it's going to be a challenge that may be impossible in some instances, but I think, in turning over a leaf and moving forward, there needs to be a commitment to good faith negotiation, essentially.
I also wanted to get on the record that I think an enhanced level of transparency is actually to the great benefit of both the users and the developers of AI. I recently gave a talk to a national group of library collections developers who wanted to hear from me. They're being approached by people in their communities who have written books with the assistance of AI, and they wanted to know how to decide whether or not to add such a book to their collections. What has value and what doesn't? What I had to say to them was, essentially, that for anything that is non-fiction, based on the current state of AI, don't buy it, because there is no citation. There's no traceability of that information. There's no understanding, under the current state of the art, of where the knowledge in that book came from and what it represents.
Solving that problem would make those generative tools so much more authoritative. When you ask an LLM a question, knowing what informs the answer matters, just as knowing which newspaper you're getting your news from sets context.
Before I begin, I would like to welcome all our witnesses and thank them for joining us. We don't often have such a large panel. We know this is a very interesting and relevant study, and that this topic matters to a lot of people.
Some witnesses might leave frustrated today, feeling they didn't have enough time to say what they wanted or answer questions. When there are a lot of witnesses, time is limited, so it happens. I would like to let all the witnesses know that, after the meeting, they can always send us notes or comments, or even briefs with recommendations that we could include in our report. I encourage the witnesses to do so if they run out of time to discuss certain things. Their comments are extremely relevant and informative. However, we will literally run out of time. I think my colleagues would agree with everything I just said.
That is very gracious, and I appreciate it. Thank you, Mr. Généreux.
I would like to thank my friends from Meta for joining us, and my first question is for them.
Two years ago, Parliament passed Bill C‑18 regarding online news. Meta decided at the time to pull Canadian news content from its platforms, probably respecting the law in the process. Countless small regional media outlets in Quebec and Canada, as well as large media companies, that used Meta platforms to share their content were tremendously harmed by this decision.
During the study of the bill, it was shown that, because consumer habits have changed considerably over the years, Meta played a major role, probably unintentionally, in delivering news content in Quebec and Canada. Although it didn't change anything, I argued that Meta had a social responsibility to continue publishing content from news businesses in Quebec and Canada.
Meanwhile, for the past two years, Google has accepted the law and agreed to commit $100 million a year, helping small media outlets that would probably not have survived otherwise.
Would Meta reconsider its position and, in the short term, allow Quebec and Canadian news content on its platforms?
Look, we would love to bring news content back onto our platforms. I think I said that two years ago, as well. We are hopeful that the government will take another look at that legislation, which we think misrepresents the value exchange between publishers and our platforms.
We are not like search engines. Search engines scrape the Internet for news content, and they present it in the product that people see. When they do a Google search, they expect to see news content in that search, so they use news in a very active and proactive way. We do not. We just host news passively. News publishers place their content on our platforms because they get increased distribution and then they can monetize the clicks that they receive as a result of that distribution, so we think we are in a very different situation from Google.
That said, we would love to put news back on our platforms, and we're hopeful that can happen.
If I understand you correctly, your position has not changed and you are using the same arguments you used two years ago, despite the situation the news industry finds itself in. Is that correct?
I would say that we have a new government now. I think that the new government is more open to these kinds of discussions, so we're hopeful we can make some progress with them.
Let's go back to artificial intelligence. Mr. Chan, you mentioned earlier that Meta planned to develop AI in a way that would benefit everyone.
Have you considered that content creators, artists and copyright holders also need to benefit from artificial intelligence? There are a lot of them here today. If so, how will you ensure that copyright is respected and that content creators are properly compensated for the use of their work?
We think artificial intelligence is so important that it should be made available throughout the economy and for all levels of society, and that includes creators and artists. We think this technology will be very helpful to creation and creativity, and we are willing to make our models available for that.
[English]
In terms of what you're asking, which is about the act of model building and training models, as it is with the entire industry that is building these AI models, we do not see how learning about information and developing the patterns and relationships to build these models touches on copyright interests. We believe that this is very much an act of trying to get a tool to learn and develop very powerful models that are going to be very useful for society and the economy. In our case, we are giving away those models free so that everybody may benefit from them.
I understand you're making your models available. I also understand your willingness and the concept of open-source code. Meanwhile, content is being generated thanks to work covered by intellectual property or copyright laws.
When this government or another government decides to legislate to rein in companies that develop AI tools or tools that use easily accessible content on the web, will you welcome that legislation, or will you react as you did when other laws were passed limiting what you can do?
Yes. I think Mr. Chan's answer indicated where we're going with these models.
Our models don't store or reproduce copies of any content. They have been trained on publicly available information across the Internet, billions of pieces of data. Any one piece of data is only marginally influential in the overall performance of the model. What they do is extract what we believe are unprotectable facts, statistics, patterns and relationships from that data. They're not extracting the protected expression from that data and certainly not reproducing it wholesale or in part, nor, again, do they store it. We don't believe that training these models implicates copyright interests in that respect.
That said, Meta, and I think other companies too, have entered into licensing deals where it makes sense to do so, and I think we will continue to do that. I think our fundamental argument—and we made this argument in a submission to the government when they conducted their copyright consultation—is that model training does not implicate the interests of copyright legislation as it stands now.
Last month, in Canada's national newspaper, an article stated, “jobs aren't disappearing because of AI, but rather, they're being redefined.” That was an interesting article, because I have heard doom and gloom on AI and employment. I expected we would see a dip in employment because of AI and then an acceleration, but this is the first time I've read that the jobs aren't going to disappear; they're actually going to be redefined. We're not going to replace human connections. That's interesting when we know that a lot of our young people in this country are suffering right now because of unemployment. I say that because—and I think you would all agree—young people are the ones who are going to drive AI in this country.
I'm going to start first with Ms. Davy.
I've read your Work in Culture report, “AI for Administration in Ontario's Creative Industries”. I note you found that “79% of creative professionals report using AI tools in their work, with nearly half using them often or very often” and they continue to use them. That was an interesting find from your organization.
Do you believe AI is creating new opportunities in arts and culture and in the job market? What are those opportunities, if you do believe that? How is it improving the landscape of Canadian content producers?
I do believe it's creating opportunities. It's quite a fantastic tool. It has to be understood and used in that context. It does not replace human activity or human creativity, hence the need for appropriate training and for adherence to strong copyright policies and regimes.
What kinds of opportunities? There are all sorts of things. We're using it in our own small organization. As I said, we're typical of art service organizations. We're four people, overworked and underpaid. We've been using it for everything, particularly for grants. Grants, as you may know, tend to let you write 250 words in a box, but you have 300. Often, we're using AI for that kind of thing. It does help.
You don't just generate something and then use it. One of the things we learned was the concept of a human in the loop. With anything you do, you check, because AI has a bias towards positivity, and it hallucinates and produces slop, so it can lie to you, too.
One person defined it as having a really smart intern but an intern with no life experience. You don't turn them loose without supervision and rigour. Yes, there are opportunities.
I think it will be in all methodologies that we use for training now, which are virtual, in person, mentorships, internships, everything that, for example, Work in Culture does.
We're a very highly educated group, very smart, but generally lacking business and entrepreneurial skills training. If you think of an average artist, they come at their world through their techniques and their creative genius. At some point, they might think, “I want to make my living at this, but gosh, I have to sell it, and I have to market it, and I think I have to invoice and take care of my taxes,” and things like that.
We're dealing with a sector that already doesn't have a great depth of understanding of entrepreneurial and business skills training. Over that, there are digital skills, which we've all been coping with. I would say that, again, because of time and resources, the average organization in the sector has deficiencies in their digital skills, and on top of that is AI. There needs to be ongoing, rigorous training addressed from a variety of avenues.
Ms. Morin, given how easily AI has found its place in a changing environment, what measures would your organization recommend to safeguard against AI tools' reproducing or amplifying cultural or linguistic biases? I'm especially concerned when it comes to the fair representation of minority communities, the diversity of training data and algorithm transparency.
We work on different levels. My colleague mentioned the digital strategy for our sector earlier. We worked on that strategy with representatives of the Fédération culturelle canadienne-française and, on a larger scale, with members of the French-Canadian artistic and cultural ecosystem. One of the pillars of the strategy is to influence public policy. Our role, both in Canada and abroad, is to convey these messages, and to make sure governments listen to civil society and act accordingly.
Your question is an interesting one. There's a lot of conversation on the international stage around the need to protect the diversity of cultural expression. Everyone agrees, and there are many ways to go about it. One of the measures we support, which was proposed by the Coalition for the Diversity of Cultural Expressions, is to add a protocol to the 2005 UNESCO convention, which Canada has signed. This would allow us to have influence at an international level in relation to what you are referring to.
Obviously, there is a lot of training and support work being done on the ground. My colleagues were talking about the need to train members of the ecosystem. We face the same challenge. To help us meet the challenge, we created what we call a coaching pathway. We realized that training wasn't enough, and that artists, creators and organizations needed guidance during the digital transformation. They also need support in situations where AI is needed, because this technology is changing the way organizations work and has an impact on their effectiveness and efficiency.
I'm happy to. Is it the same question you asked Madame Morin?
[Translation]
Open-source code allows us to train models using data that are relevant to a specific culture or community. We talk a lot about training an open model like Llama to speak proper Québécois. I think it is theoretically possible. We did something similar for indigenous languages with UNESCO during the International Decade of Indigenous Languages. We trained an open-source model to translate about 200 languages, including indigenous languages. We would like to use this model to help, protect and promote indigenous languages. However, since it's an open-source model, any community, whether in Canada or anywhere else in the world, could train it using culture-specific data and teach it to speak another language.
Cultural diversity is very important in AI models. A clear way to protect and promote diversity using these models is to train them in other languages using other data.
Ms. Finlay, Access Copyright reminds us of the gross injustice that resulted from the interpretation of fair dealing in the education sector. Academic authors suffered great hardship because of that. I don't think the situation has been resolved, because their work is still literally being stolen.
Academic copyright holders who already find themselves in a difficult situation because of fair dealing have concerns about AI. What are they worried about?
Madam Chair, the interpretation cut out at the end of my question. I would like to point out that we've had many technical issues with remote interpretation, which is affecting our speaking time.
Ms. Finlay, I don't know where my question was cut off, but I'd like to know how the advent of artificial intelligence adds to the stress and prejudice already experienced by authors in the academic sector. What are your concerns in this regard? How should we safeguard and protect authors of licensed content through Access Copyright?
Thank you for that question and for bringing it sort of full circle for Access Copyright.
For some clarity, it's not just university authors that are affected; it's any author or publisher whose works are copied or reproduced in a university setting. That's what the issues have been around fair dealing and what the issues have been around the enforcement of Copyright Board tariffs and whether they're voluntary or mandatory.
What it boils down to is that, for Canadian authors and publishers and for authors and publishers whose works are copied or used in Canada, our Copyright Act has become so out of balance that exceptions to copyright protection and the expansive interpretation of fair dealing have really undercut the market for the use of their works. That has diminished the royalties flowing through Access Copyright alone to authors and publishers to the tune of about $18 million per year. It's no secret that Access Copyright has lobbied for changes to the act to clarify fair dealing and put some guardrails around it, as well as to make Copyright Board tariffs enforceable.
As for AI, when we start talking about additional exceptions, we're talking about tilting the balance even further away from copyright owners towards users, and it's incredibly problematic. We are undercutting the market for the use of copyright-protected works at every turn, so Access Copyright would say no to more exceptions; they're not necessary. Ms. Curran just mentioned that it doesn't even engage copyright, so I question why a TDM exception would even be necessary. If it's not engaging copyright at all, we clearly don't need an exception. Obviously, that's a debate between the rights holders and the AI platforms, but that would be the message: that we can't further undercut creators' rights and publishers' rights in the copyright framework.
Mr. Chan, you said earlier that artificial intelligence could inject $180 billion into the economy over the next few years. I didn't catch the approximate number of years that will take. That said, this money will not only go to Meta. I imagine that this $180 billion will be injected into the economy in general, and that some of it will go to copyright royalties.
According to a report published by Deloitte, $180 billion will be injected annually starting in 2030. The report was not published by Meta.
[English]
I think you're asking a question about licensing fees. There are different types of things on Facebook and Instagram that we do license for.
For example—and you may have heard this because I think a previous witness mentioned this at an earlier meeting—we do have, obviously, a catalogue of music that people can use to add to or layer on top of different user-generated content, such as Instagram stories or Instagram reels. When people do that, they do access a catalogue, and we only have access to that catalogue because we are, in fact, licensing that music.
Canada is certainly an outlier when it comes to protecting the rights of artists and creators. The fair dealing example I just gave is a big one. We hear from our international partners all the time that it needs to be repaired because it is a significant problem in this country.
I mentioned in my initial oral remarks the collective licensing regimes that are starting to take hold in Australia, the U.K. and the U.S., and we are close behind. I wouldn't say that we are far behind, but those markets have certainly started moving along. It's promising to see, and I think there's a lot of hope and promise for us as well.
Before you answer, Ms. Morin and Ms. Davy, allow me to ask Ms. Finlay another question.
Ms. Finlay, you mentioned the voluntary licensing system. Is this a new model or one that's been around for a long time? Are you proposing this model because of the arrival of artificial intelligence? Is this model also found in other countries?
Thank you for giving me the opportunity to clarify. It's not a new model. Voluntary licensing is the core of copyright. It is the rights holder's right to determine whether or not and how to license their works.
When I talk about voluntary licensing, I'm using the term in contrast to compulsory licensing, which is a different type of regime that I think most or all rights holders in this country oppose. It flips the voluntary nature or the rights of the author on their head, where the legislation would take those rights away and say, “You must license your work and we will pay you whatever we've decided is the set rate.”
It's not a new model. I'm just using it in contrast to some of those proposals.
I would just like to mention that the sector has been insisting on copyright reform for several years—but that reform never took place. Studies were conducted by this committee and by the Standing Committee on Industry and Technology, but the reports were contradictory in many respects.
All the work that was done in recent years shows that we need to closely examine this issue.
I sat on the Standing Committee on Industry and Technology during the study of Bill C‑27, a bill that dealt with both artificial intelligence and privacy, but it was completely bungled. I'm wondering if we've fallen so far behind that it's going to be very hard to catch up, or if it remains possible to catch up if bills are introduced, particularly with regard to artificial intelligence.
As we know, laws have been passed in Europe and the United States, but not in Canada. Do you see that as—
Once upon a time, when you published something, it was automatically protected. It really wasn't that long ago, but now we are in this new era, which is an age of digital colonialism. Everybody has been robbed to some extent. If it's been on the web, it's been scraped at some point.
Travis, I don't think you've had a question yet.
You asked if we want a future created by humans for humans or a future created by machines. I wonder if you could dig into that a bit. We know that right now, today, there are a handful of humans who are the puppet masters of AI, but tomorrow, that role could shift at any time.
Now that we are where we are, our Prime Minister has recognized the urgency of this. That is why, for the first time ever, we have a Minister of Digital Innovation, Evan Solomon. He's extraordinary. He's doing incredible work.
How do we move forward quickly and nimbly? What would you say?
I'd like to ask a couple more questions. I think we have a few minutes.
Thank you very much for the opportunity to respond.
What I meant by my comment, “Do we want our cultural heritage in the future to be made by machines or by humans?”, is that, as I stated, it takes several years for a human to come up with a book, write it and go through all the steps and processes. Once AI is generating books, and especially books that can be copyrighted, it will not take very long for the market to become flooded with AI-generated content.
The cultural expression of Canada that is being released onto the world stage is being generated by machines recycling and regurgitating materials that have already been created. That's where I fear we run the risk of our future being created by machines instead of by humans. It's by having that situation set up.
I believe that denying copyright to a machine, which does not need an incentive to create, just a command, and keeping copyright only for human creators will allow us to continue to hold on to our human creators. It will allow them to keep the monies, limited though they may be, they are getting from their creations and ensure that we have a solid human component in the cultural identity of Canada. Part of that is making sure that we keep an eye on the small creators as well. It's easy to look at the large publishers, but there are a lot of self-publishers out there too, and that's something we need to keep in mind.
Erin and John, you both talked about the same thing. You said not to introduce new exceptions to the Copyright Act and to protect, I think, the emerging rights market.
In terms of the Copyright Act, we know there are issues. Could you expand on what you would like to see introduced into the act? Not exceptions, but where would you go with that?
Brendan, perhaps you could weigh in, if there's time.
We need to take a good hard look at fair dealing as it stands, not from the perspective of overly clipping its wings, but to bring it back to its core purpose of being something there that enables uncompensated copying for the use of individual researchers, scholars, parodists and satirists. These are people who are engaging in a dialogue with a work and adding to the original work in some way. Sometimes, arguably, they may be detracting from it in the case of satire, but that's okay. This allows work to be in dialogue. That's what fair dealing is and was about fundamentally. It has been kind of transformed into this mechanism for industrial-scale copying.
Finding a way to home in on that without overly restricting the necessary exception for people to express themselves in a democratic society is what I would pinpoint.
I want to make sure we're not mixing some of the asks here.
As it applies to AI, there should be no changes to the Copyright Act: Don't touch it. There are no new exceptions. We don't need new rights. We're good. It's fine. Leave it alone. Let the market work itself out.
On the other issues that are coming up, such as fair dealing in the educational sector, whether copyright tariffs are mandatory or not and numerous other asks that the creative sector has as this applies to the Copyright Act, those stand. Continue to advocate for those.
As it applies to AI, no changes are needed.
I heard Ms. McGuffin say last week that it's the first time in her career she hasn't asked for changes to the Copyright Act. It's also the first time in my career that I haven't asked for changes to the Copyright Act.
Like most Canadians, I'm an avid user of Facebook. It's a great tool. It's regrettable that we don't have news on it any more, unfortunately.
I wanted to talk a bit about algorithms. I'm wondering if Meta uses AI for algorithms to determine which Facebook pages or posts get more views or fewer views.
Yes, I can answer that. There is something called classic AI, which is our automated system that assists with discoverability: what people are seeing in their individual feeds. For every single person, their feed is unique.
Again, our automated systems are reading a variety of signals from every user. They're looking at what kind of content you're interested in. They're looking at whether you like text or short-form video. They're looking at what you interact with, and they're trying to give you content in your feed that you'll be interested in that's relevant to you. That's the classic AI use and it's very much in play with the Facebook feed.
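The signal-driven ranking just described can be sketched as a simple scoring loop. The features and weights below are invented for illustration; the production ranking systems are far more complex than this:

```python
# Illustrative sketch of feed ranking by predicted user interest.
# Feature names and weights are hypothetical, not Facebook's actual model.

def score_post(post: dict, user: dict) -> float:
    """Score one post for one user from a few interest signals."""
    score = 0.0
    if post["topic"] in user["liked_topics"]:
        score += 2.0  # topic the user has shown interest in
    if post["format"] == user["preferred_format"]:
        score += 1.0  # e.g. prefers "text" vs. "short_video"
    score += 0.5 * post["prior_interactions_with_author"]
    return score

def rank_feed(posts: list, user: dict) -> list:
    """Return posts ordered by descending predicted interest."""
    return sorted(posts, key=lambda p: score_post(p, user), reverse=True)

user = {"liked_topics": {"music"}, "preferred_format": "short_video"}
posts = [
    {"id": 1, "topic": "news", "format": "text",
     "prior_interactions_with_author": 0},
    {"id": 2, "topic": "music", "format": "short_video",
     "prior_interactions_with_author": 3},
]
print([p["id"] for p in rank_feed(posts, user)])  # [2, 1]
```

Because the score is computed per user, every person's feed ordering is unique, which is the point made in the testimony.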
Continuing in the same vein, in the past, many people have told me that they worry their pages and posts are shadow banned. First of all, is there any veracity to those claims?
No. We don't do what's called shadow banning. We certainly have heard concerns expressed about that. Where we do get concerns or complaints expressed to us, we will send material to our engineering teams. They will do a deep dive behind the scenes to make sure there's nothing blocking a page's performance or recommendability to various audiences.
We will make sure there's nothing interfering with that content reaching the audiences that it should reach, but no, we don't engage in the practice of shadow banning.
We get fewer than half a dozen, but we do respond to each of them and do a deep dive behind the scenes to determine whether there's any veracity to those complaints.
Sometimes we find that pages are blocked from being recommended for other reasons. There might be something to do with ads they're running. We'll look into that and resolve those problems so the page can be returned to its normal performance.
If you go to our Transparency Center, we have a section called “Why am I seeing this?” We also have that on our Facebook feed. There are three dots next to every post that will tell you why you're seeing a certain piece of content.
Yes, we have a whole piece published on how the algorithms work and how and why they recommend content to you.
This was a major push, I should say, under Mr. Nick Clegg, who was our previous VP of global affairs. He was a great believer in the transparency of algorithms, so we published a number of pieces on how our algorithms work and why people see certain content.
I'll take this as a slightly different question, I suppose, Mr. Diotte, than the one about how our Facebook or Instagram feeds work.
Certainly, when we are training our AI models, we put them through a very rigorous set of tests. When we do the training, we want to make sure that the quality of the data that's fed in is very good, to ensure that the model doesn't end up with any bias.
On the other side, when we look at the output, we want to make sure the information being shared is also not biased, that what's being generated is not biased. We put it through a very rigorous research process, but we also do a lot of red teaming. What we want to do is test it as individuals, people at the company, asking it all sorts of questions to see whether or not the answers skew. If they do skew, then we have work to do to make sure it is free of bias.
We do take this very seriously. We want it to be as neutral as possible.
Thank you all for taking the time to be here with us today.
(1800)
[Translation]
We appreciate it.
[English]
For those I haven't met yet, it's a pleasure to meet you, first of all.
I am the proud member of Parliament for Mississauga Centre. I say this because I think my riding is an incredibly unique example of how culture and innovation can overlap. I would say the same of Montreal, where I spent quite a bit of my life.
I'm also of a generation that is not just going to be involved in innovation; it will lead it. Mr. Waugh said something in that spirit as well. In fact, many already do, and that's because we've made so much use of technology ourselves already. I've grown up with AI in ways that many in this room haven't. The generations after me will grow up and in fact have already grown up with AI in ways I haven't.
In similar ways, my perception of Canadian culture differs from that of many in this room. That's exactly why I've been consistent in highlighting that striking a balance between innovation and the preservation of Canadian culture is what we should all be working towards. They mean different things to all of us, but they are no less significant.
There's no doubt in my mind that innovation and culture can coexist. I do, however, believe that it's going to require all entities in the space to operate as honest, good-faith partners.
[Translation]
For that matter, Mr. Chan, I was encouraged to hear you speak of the importance of preserving Canadian and Québécois cultures. For that, I sincerely thank you.
[English]
Mr. Croken, as you certainly know, the Canadian Authors Association is our country's oldest association for writers and authors. As a group, you've seen Canada's cultural sector go through a variety of significant changes.
From your perspective, how do we ensure that Canadian authors' voices, particularly emerging, diverse and francophone authors, remain visible globally in an age of AI-driven content generation and distribution?
I believe we ensure that we remain present on the world stage by making sure that those are the voices that are being pushed forward.
AI-generated content is going to be coming out. It is going to be a larger part of the picture we're looking at. With regard to what was stated earlier, that models being trained on already existing works is not problematic because they're not spitting out an exact replica, that is not something I entirely agree with, because a model can mimic an author's voice. It can mimic an author's style. It can greatly dilute the field for the author. I think ensuring that Canadian authors have their voices forefront and present when anything is being put forth to the world stage is solid.
When an author gets an ISBN, it is automatically registered with Library and Archives Canada. Is that something that necessarily needs to be done for a book that's created by artificial intelligence? If a book is written by AI, does it need to be entered into the cultural heritage of Library and Archives, of Canadian history? That's another way we might be able to protect Canadian heritage and have human heritage move forward.
That's interesting. Thank you for the thoughtful response.
My next question is for the Association of Canadian Publishers.
Thank you for joining us, Mr. Illingworth. Let me quote a couple of things: The association “is the voice of English-language Canadian-owned independent book publishers”, and “ACP represents approximately 115 Canadian-owned and controlled book publishers from across the country.”
You know more than I do that publishing in Canada has been disrupted by digital platforms and algorithm-driven distributions. What tools, partnerships or policy mechanisms do you believe could help Canadian publishers, especially small and medium-sized ones, compete and thrive in an environment shaped by AI and global platforms?
There are a number of measures. I might invite my colleague Brendan Ouellette to add his voice.
First and foremost, I want to emphasize that as a sector, contrary to popular notions of what publishing is, as sort of fuddy-duddy pipe-smokers, we've always been quick to embrace technology. We're hearing talk about the arts needing to digitize now. We digitized our industry 20 years ago.
Probably the area that I would most like to see investment in would be organizations like BookNet Canada, which was a catalyst for technological change 20 years ago with the advent of both digital supply chains and digital book distribution. Another such organization is eBOUND Canada. My colleague Brendan happens to be the chair of eBOUND.
Those industry-owned initiatives have been remarkably effective at doing a lot with a little and at being world leaders in enhancing book distribution and enhancing book accessibility. I believe they are the pathway to opening up access to capable and competitive AI tools for our sector.
With the ACP, I'm representing a firm, Annick Press. We're a children's press based out of Toronto. We've been in business for 50 years and publish authors and illustrators from coast to coast to coast. We are finding some corporate, production and operational efficiencies through the use of AI tools. Some of those tools are Canadian. De Marque out of Quebec makes some excellent tools. Book Connect from Nova Scotia is implementing AI elements into their platform that we use to send our digital data out into the marketplace. We're also using some American tools.
What I would say in general about them is that they are respecting our copyright. They are walled gardens. What makes us hesitant to engage in licensing arrangements, where we are otherwise very active in the international licensing market, is that our content has been stolen. A recent piracy scan shows there are over 5,000 links to our content online right now. Many of those are Facebook URLs. Additionally, many of them are links to libraries that are documented to have been used in the training of such large language models as Llama.
We take a cautious approach to this emerging marketplace. We look to collective licensing solutions. We need frameworks that respect international norms, copyright and an opt-in regime.
Mr. Champoux, you may be the last speaker for this round of questions.
In fact, colleagues, I don't think we'll have time to finish this round.
[English]
I was just wondering if we should continue for another 10 minutes after Mr. Champoux and finish the round, or if we should get to our committee business.
Ms. Morin and Mr. Buridans, you proudly represent francophones living in official language minority communities.
What is your greatest concern? What do you think of what you're hearing today? Do you share the concerns of francophone communities across Canada about the arrival of new technology that could further drown out francophone culture in the very areas in which it is most vulnerable? That's what could happen if we're not careful.
I'll answer first, before handing it over to Mr. Buridans.
It is certainly a concern when it comes to the diversity of cultural expressions. We see ourselves in this language, that's clear. In this mass of somewhat homogenized information and content, we can identify francophone voices in minority situations that are found throughout the value chain—that is, from the space where they're created and made accessible to the space where they're advertised. Clearly, however, without specific legislative measures or parameters on the subject, such voices risk being completely drowned out.
There is certainly cause for concern, especially since artificial intelligence has come into the picture. We had precisely the same conversation about the discoverability of broadcast content. It's the same problem.
I was just about to bring up all the work that was done on discoverability. It's as though it had suddenly been forgotten—because AI is a complete game changer in terms of structured and unstructured data—but that's the whole problem with discoverability.
We've trained our organizations to teach their members how to understand structured data in order to strengthen the discoverability of French-language content on online platforms. All of a sudden, we're facing new challenges related to artificial intelligence and we're unable to respond to them. That's why we're emphasizing the need to continue the digital literacy training we've offered within our network—precisely to keep pace with these technological advances and innovation.
What I find interesting about today's discussion is the issue of whether Canada is lagging behind. The answer is yes. I'm not going to go back over all the elections that led to Bill C‑10 and Bill C‑27 being on a sinking ship, but we may still have Bill C‑11, which remains afloat. What I mean to say is that we are lagging, and this legislative uncertainty is causing tension and distrust of artificial intelligence. I find it very interesting that, as soon as we tease Meta and Facebook representatives a bit on certain topics, they tell us that they have open-source code models. That's great, and we're working with those types of models.
I'd like to tell you about the Culturepedia project, the first social trust for cultural data management. We've brought our members together to work on this platform within a fairly innovative legal framework. Having a social trust for virtual data is quite innovative. The idea was precisely to create a protected and sovereign legal environment in which data can be uploaded to train members of cultural organizations to work with data, taking into account data interoperability, which is very complicated.
At the beginning, I spoke about the need to work with structured data because of large language models. We pitted open-source models, namely Llama and Gemini, against each other. We had them work on organizational data, structured data, and unstructured data to begin analyzing the effects of our artistic and cultural work on our communities and the country. However, we are in the early stages. It is really just a prototype.
The difficulty with an initiative like the digital shift, which has helped our members embrace the concepts of digital literacy for three years, arises when budget cuts occur. This means that our momentum in training, awareness and digital maturity is being cut short at a time when we should be ramping up our efforts and when data management is becoming ever more complex. We absolutely need to strengthen our capacity to work with that data and help our members understand those concepts.
Personally, I look forward to having Canadian open source models governed by clear legislation that addresses threats or concerns. We would then be in a protected environment, a sovereign country, where we would work with models, computing power and infrastructure that would allow us to store data here at home.
Ms. Curran, I'm going to come back to you if I may, because we got cut off.
I asked about how AI Studio is helping artists reach new audiences and improve their effectiveness and efficiency in running their accounts. You talked about how technology gives way to new forms or new opportunities within the creative sector.
Can you expand on that in terms of the use of AI for expansion?
We're at the beginning of these generative AI tools and how the creative industry and artists are starting to use them. Right now, we're seeing what you've heard from the other witnesses here. AI is being used to handle the more rote or administrative tasks so that artists can focus on the work that requires real creativity and human judgment. That is where we're seeing the early developments. It's the administrative work and the kinds of things that can be easily automated and made more efficient.
Over time, we'll see how artists start to use generative AI tools as assistants in the creative process. I think we're seeing that across the board, where AI is being used more as a helper and an assistant, not a replacement.
I think someone mentioned the concern about job loss. We see AI helping humans do their jobs better and really focus on the tasks that require human judgment and complex thought, which are things that AI probably will never replace. They're able to focus their time and attention on the parts of the work that require human activity, with AI assisting them in that work.
Our view is that you should regulate the use of the technology, not the technology itself. Look at the uses to which AI is being put. Many of those uses are legal or illegal already. It's already illegal to engage in impersonation or fraud, for instance, so those laws simply need to be enforced in the context of AI.
Where there are net new risks or marginal risks that are genuinely beyond the scope of current laws, we would suggest regulating those. Look at the use to which AI technology is being put and regulate that rather than the technology itself, which I think is impossible to regulate given how quickly it's moving.
Ms. Morin and Mr. Buridans, as you know, French-speaking cultural communities already face structural challenges when it comes to visibility and the size of their market. In the age of digital platforms and AI, how can we ensure that French-language content has visibility and support? How do we make sure it doesn't get swallowed up by a mostly English tech ecosystem?
That is the big question. Our experience tells us that, without parameters and clear legislation, it won't work. The country's cultural sovereignty and diversity of cultural expressions cannot rest in the hands of private companies that don't have a mandate to protect them. The government has to enact legislation to address these issues. Otherwise, what you described is exactly what will happen—French content will get swallowed up by content that does not reflect who we are, created in a language that is not ours. In a situation like that, the government absolutely has to play a role.
I'd like to add something, if I may. That is why we are calling for cross-institutional efforts in our digital strategy. We are engaging directly with federal cultural institutions. We have a co-operation agreement with the National Arts Centre, the Canada Council for the Arts and the Department of Canadian Heritage, among others. It is important to coordinate how our language and the unique characteristics of our communities are represented. Currently, those efforts are a bit scattered, so we are trying to focus them on these issues.
We've spoken with ministers Solomon and Guilbeault. I was part of a delegation for the Coalition for the Diversity of Cultural Expressions. The Minister of Artificial Intelligence and Digital Innovation seems to want to move forward on this issue, whereas the Department of Canadian Heritage delegates the responsibility for managing these issues and the challenge associated with overseeing things like copyright and transparency. It's as though the two things are supposed to work in parallel and aren't complex intertwined issues. I must say, that view astounds me, so that is where we are trying to play a role.
What's more, in speaking with Minister Solomon's staff, we found out that the government had announced an agreement with the company Cohere to train an AI model to improve public services. I believe models have been trained in English, but I don't think any have been trained in French. Nevertheless, not only does language need to be taken into account, but so do the unique characteristics of Canadian and Acadian French-speaking cultural communities. It's not good enough to simply train a model in this language or that language. We talked about AI bias earlier. It will take more than a language model to prevent AI from generating overly biased output. It will take a cultural learning model.
I want to thank all of our panel members for coming and for their excellent testimony today.
I will reiterate the always eloquent words of Martin Champoux. If there's anything you didn't manage to say today and you have more things that you wish to submit to committee, please send them to our clerk so our analyst can include that information in the report for our study.
I will suspend the meeting for a few minutes, and we will move to committee business.
We're coming back into session to deal with some committee business. I will advise members that we are still in public. Keep that in mind. We have not gone to an in camera session.
The point of today's committee business is to talk about extending our AI study. We set a minimum of five meetings at the outset, and we are due to finish them at the end of October; October 29 will be our fifth meeting. We still have a huge list of witnesses who would like to testify, so I wanted to canvass the will of the members to see if you would like to extend the meetings or move on to something else.
Martin Champoux, I think I saw your hand go up first.
Madam Chair, I think all the parties agree on adding a few meetings. I looked at the witness list, and I know other witnesses were added. It's a lot of people. If we decide to hear from everyone being proposed, we'll still be working on this study in March.
I am not saying that anyone doesn't deserve to be on the list, on the contrary. Certain issues affect a lot of people, so it's perfectly normal that so many people want to have their say in our study. However, we have to prioritize, since we'll have other studies we want to do.
I think we should add a few meetings. We can all decide how many meetings to add to the initial five. We should also look at our witness lists and figure out together whether there are any witnesses that all the parties definitely want to invite or give priority to. Everyone can co-operate. We need to prioritize and make sure we don't have any duplication. We've covered many facets of the issue. It's good to consider a variety of topics in the study. I'm thinking of the analysts, who will have to keep track of all our discussions.
I'm open to adding two or three meetings. I suggest two, and then we can see whether we want to add more. We can leave the door open. I think that will force us to assess the current list of witnesses and make some decisions together.
I would agree with my colleague, Mr. Champoux. We would suggest adding two more meetings. If we were to add two more meetings, that would take us up to the break.
I would agree that we have to refine our list as parties. I don't feel the need to hear from every single individual who wants to come to this table. I think eventually it becomes repetitive. Certainly we want to make sure that the main points are made and considered in the report.
To date, we have not heard from a number of individuals or organizations that would represent more of the creative sector, for example, video game creation or digital creators. I think we have a responsibility to hear from some of those folks.
I think that we can get a very good sampling with two more meetings and then, of course, the acceptance of written briefs. Anyone who wants their voice heard on this topic has the opportunity to submit a brief. My suggestion is that we agree to two more meetings. If, at that point in time, we feel that we still haven't heard enough, once again we can expand.
I'd like to follow up on what Mrs. Thomas said about needing to hear from witnesses with different areas of expertise. I think it would be a good idea to hear about the potential impact of AI on children. That was discussed at a conference I was at in Kelowna recently. If we can find witnesses with that type of experience, it could result in a much more informed approach.
I think we're all in agreement. We had a chance to chat a bit, and I think it all makes sense.
I like the idea of looking at the witness list again. We want to get the most out of the study. We know that when we start hearing the same thing from multiple people, then we're not necessarily getting the best thing. I think it's a good move.
We can get to our, say, seventh meeting in total and then decide whether we clearly aren't there yet or whether we have what we need. I think we're getting a good picture, but there's still more to be done. We need to be a bit more strategic. What we're all saying is that we can look at this list and know that we haven't heard from specific digital creators. The video game industry is a good example. There are voices we want to make sure we're hearing, even from the industry. It's going to be helpful to hear from industry what is possible. I keep on asking that. It's going to be helpful for us to know what the models are capable of doing when it comes to licensing and regulations. I think that will be a great part of the conversation.
I agree with the two members, and I trust their judgment when it comes to the witness list. However, I want to mention a Quebec creator who was featured in a La Presse article a week or so ago. He bought a church and set up servers to house the works of artists who choose to store the works there to protect their cultural sovereignty. We are trying to get in contact with him so he can be a witness. The sovereignty of artistic works is an important issue. He's an artist himself, so he would make an excellent witness.
We've tried contacting two or three other creators who make AI-based content. Mr. Myles and I joke all the time about who has the better singing voice. He's way better. All that to say, there are people like me who know nothing about music, theatre or whatever it may be, but who create works from their imaginations. To some extent, they become artists. One day those works will inevitably end up on social media and all over the place. That's why we should hear from witnesses on that as well.
I trust Mr. Myles, Mrs. Thomas and Mr. Champoux to do a good job sorting through the witness list, but I'm quickly going to give the clerk some new names so we can invite those artists.
I really appreciate the discussion, and I completely agree that two more meetings would be fantastic. I'm wondering whether the clerk can resend the list and highlight who we've already heard from—or separate them, whatever is easier—and perhaps we can look at that.
As Mr. Généreux was asking, I know we have a lot of names already, but are there some that are missing, some of those voices in the video game design section or whatever it is, that could be added in advance of that?
I think we'll do it within parties, right? We'll have the clerk resend the list, and within each party, we'll prioritize the witnesses we really want to see. Perhaps we can add new witnesses whom we haven't heard from, potential witnesses we've heard a couple of people mention. Within our own teams, we'll prioritize the people we really want to see, and we'll make sure that our final meetings are productive.