
CHPC Committee Meeting









Standing Committee on Canadian Heritage


NUMBER 010 | 1st SESSION | 45th PARLIAMENT

EVIDENCE

Wednesday, October 29, 2025

[Recorded by Electronic Apparatus]

(1630)

[English]

     I call this meeting to order.
    Welcome to meeting number 10 of the Standing Committee on Canadian Heritage.
    Before we begin, I would like to ask all in-person participants to read the guidelines written on the updated cards on your table. These are measures in place to help prevent audio and feedback incidents and protect the health and safety of all participants, including the interpreters. You will also notice that there's a QR code on the card. It links to a short awareness video, if you need it.
    Pursuant to the routine motion adopted by the committee, I can confirm that all witnesses have completed the required connection tests in advance of the meeting. That's for our two witnesses who are joining us online.
    Please wait until I recognize you by name before you speak. All comments should be addressed through the chair.
    Pursuant to Standing Order 108(2) and the motion adopted by the committee on Thursday, September 22, 2025, the committee is meeting to study the effects of the technological advances in AI on the creative industries.
    We have with us today, as an individual, Michael Geist, Canada research chair in Internet and e-commerce law, Faculty of Law, University of Ottawa.

[Translation]

    We also have Véronique Guèvremont, full professor, holder of the UNESCO chair on the diversity of cultural expressions.

[English]

     We have Chip Sutherland, lawyer, joining us online.
    We have Nikita Roy, from Newsroom Robots Lab, with us here in person.
    We have Vicky Mochama, communications director from PressForward.
    Welcome to all of you.
    Each of you will have five minutes to present opening statements, starting with Michael Geist.
    You have the floor.
    Good afternoon, everyone.
    My name's Michael Geist. I'm a law professor at the University of Ottawa, where I hold the Canada research chair in Internet and e-commerce law. I appear in a personal capacity representing only my own views.
    Thanks for the invitation to appear on this important study on AI in creative industries.
    As some of you may know, I've appeared many times before this committee on questions involving technology and culture, including studies on copyright, freedom of expression and Internet regulation. In each instance, much of the discussion amounted to risk analysis, the perceived risk arising out of new technologies, whether digital copyright, online platforms, streamers or digital advertising, and concerns about risks associated with some of the proposed legislative responses, such as anti-circumvention rules, regulating user content or blocking news links. I think too often the debate frames new technology as a threat, emphasizes cross-industry subsidies and misses the opportunities that new technology presents. We therefore need risk analysis that rejects entrenching the status quo and instead assesses the risks of both the technology and the policy responses.
    The debate over AI faces a similar challenge. I think that helps explain why the government has shifted from AIDA—the former Bill C-27—to now warning against overindexing on AI regulation and why groups that typically call for copyright reform find themselves arguing against it before this committee at the moment. These highlight the challenges of identifying AI risk and the fear that some regulatory responses could themselves create new risks that outweigh the problems they're trying to solve.
    What are the risks I think this committee needs to think about with respect to AI and the creative sector? Three issues that often arise in this area are freedom to create, appropriate protections and Canadian content presence or discoverability. I think each of these presents its own challenges.
     First, with respect to freedom to create, AI is already an integral part of the creative process, used to assist with everything from writing to film to music. Given its importance, AI has real benefits, and restrictions on its use are not only unrealistic but potentially harmful. The risk lies in the misinformation or public confusion that can arise from “AI slop” in a video context or from poorly crafted AI-generated news. Such content should be properly identified, which would enhance the value of original human creativity. There is a need to work with the relevant sectors—news, video, music and AI services—to develop appropriate transparency measures that more easily distinguish between human-generated and AI-generated content.
     Second, copyright invariably arises when discussing appropriate protections. Yet in the context of AI, the application of copyright isn't always clear cut. The outputs of AI systems rarely rise to the level of actual infringement, given that the expression may be similar to or inspired by a source but is not a direct copy of the original. The inputs—such as inclusion in large language models—are currently the subject of numerous lawsuits, but few have to date resulted in liability, since those cases suggest that large language model (LLM) inclusion and the resulting data analysis often qualify as fair use or fair dealing.
    What are the risks here? To paraphrase Minister Solomon, overindexing on AI regulation in a copyright context risks creating barriers that would render us uncompetitive as a market, undermining both innovation and creators. If Canada makes it more difficult or costly to develop large language models, AI development will shift outside of the country. It's therefore essential to ensure that our copyright frameworks are globally competitive. That's why we need copyright laws that continue to strike the balance through effective fair dealing rules and, given the use of text and data mining exceptions elsewhere, including the EU, the appropriate exceptions that position Canada as receptive to AI opportunities.
     Third, we want to ensure AI services feature relevant Canadian results, but conventional Canadian content presence or discoverability policies such as minimum content requirements or promotional presence efforts simply don't map onto AI. Indeed, these kinds of policies could backfire, leading to the exclusion of Canadian content in large language models, which would in turn result in reduced presence in AI outputs. Essentially, it would be a replay of what we've seen with news on some social media platforms, where there are fewer conventional news sources and more presence of substitutable alternatives. In other words, the answer to Canadian AI cultural relevance is more Canada in the training data. That doesn't come from more regulation, legal barriers or higher costs; rather, it requires transparency on datasets, reducing costly barriers to access and the development of public AI systems that encourage the use and availability of Canadian content.
(1635)
     I look forward to your questions.
    Thank you, Mr. Geist.

[Translation]

    I now give the floor to Véronique Guèvremont for five minutes.
    Members of the committee, thank you for this invitation.
    My name is Véronique Guèvremont, and I'm a full professor in the faculty of law at Laval University and holder of the UNESCO chair on the diversity of cultural expressions.
    While I fully recognize the potential of artificial intelligence to support creativity and creative industries, my remarks will focus on some of its negative effects on the diversity of cultural expressions and on the cultural rights of individuals and groups. I will also go over Canada's international commitments that should encourage it to take action to limit those effects.
    First, while generative artificial intelligence enables new forms of creativity and expands artistic opportunities, it also risks promoting homogenization and weakening cultural diversity.
    Those risks stem primarily from the training databases for artificial intelligence systems. They're built from a corpus in which certain cultures are overrepresented, mainly anglophone and western cultures, which encourages the production of the same aesthetic or narrative forms.
    Those models are designed to reproduce statistically average patterns based on their training data, which reinforces the most common styles. Non-conventional or marginalized forms of expression are often under-represented, which leads to representation biases in the creative content produced.
    Other threats stem from the rapid increase in synthetic content in the digital environment. That overabundance could make human creations less accessible and less visible.
    In the music sector, for example, such a scenario is mentioned in a European Parliament resolution passed in 2024, which emphasizes, “a growing number of [musical tracks] flooding streaming platforms on a daily basis, which risks aggravating existing imbalances as regards discoverability”. On some music platforms, artificial intelligence generates more than 10% of the tracks published each day.
    Second, the impoverishment of the diversity of cultural expressions limits the right of individuals and groups to access their own culture. It also leads to an infringement of artistic freedom, particularly because of the increasing difficulties for creators to reach their audiences.
    The impacts on the cultural rights of minorities and indigenous peoples are also evident, since the works of those groups are typically absent from training datasets, which limits their ability to use artificial intelligence to create works that are representative of their cultures.
     Artificial intelligence systems also raise significant copyright protection concerns. As you know, while artificial intelligence systems can be used to generate new forms of cultural expression, they do so using works previously created by individuals and communities, generally without compensation for rights holders. This practice infringes on the right to benefit from the protection of the moral and material interests resulting from any scientific, literary or artistic production, a right enshrined in article 27(2) of the Universal Declaration of Human Rights and article 15(1)(c) of the International Covenant on Economic, Social and Cultural Rights.
    This obviously has an impact on artists' compensation. In addition, platforms are investing in AI-generated creation and promoting their own content to reduce copyright payments. A 2024 study published by France's International Confederation of Societies of Authors and Composers highlights, “Potential cannibalisation of creator’s revenue streams due to the substitution of human works by Gen AI outputs”. The study predicts that creators could lose nearly a quarter of their revenue by 2028.
    Finally, I'd like to point out that ethical and legal frameworks relating to artificial intelligence generally stay silent on issues related to the diversity of cultural expressions and cultural rights. However, some of Canada's international commitments should encourage it to adopt an artificial intelligence governance framework that explicitly takes into account the points raised earlier, in order to limit their impact.
    For example, UNESCO's convention on the diversity of cultural expressions, adopted in 2005, is complemented by operational directives adopted in 2015 that guide Canada's actions to protect and promote the diversity of cultural expressions in light of the rise of digital technologies in the cultural and creative industries.
    The UNESCO Recommendation on the Ethics of Artificial Intelligence, adopted in 2021, recognizes that the rapid development of AI technologies challenges their ethical implementation and governance, as well as respect for and protection of cultural diversity. The recommendation calls on member states “to examine and address the cultural impact of AI systems”. Finally, policy area 7 of this recommendation, on culture, includes a number of relevant recommendations that will be useful to this committee's reflections.
    I'm happy to expand on these topics during the discussion period. Thank you for listening.
(1640)
    Thank you very much.

[English]

     Next I'll turn to Chip Sutherland.
    You have the floor for five minutes, please.
    Thank you for the opportunity to appear today before the committee.
    By way of background, I am a lawyer based in Nova Scotia who has been practising entertainment law specifically in the music industry for over 30 years. I have represented artists at all levels both in Canada and internationally. In addition to my legal work, I also act in the capacity of artist manager for a select number of artists, currently including the singer Feist and Mustafa Ahmed. I co-authored the Canadian edition of the best-selling book on the music industry, All You Need to Know About the Music Business, with Donald Passman. I have also been the executive director of the Canadian Starmaker Fund for the past 21 years.
    My purpose in being here today is to provide an insider's view of the life of the artist in the music business. As a lawyer and manager, I have the unique perspective of helping artists navigate their careers in all aspects of recording, songwriting and touring. Through my work with Starmaker, I have the advantage of seeing the development of all emerging artists in Canada. I know it's sometimes difficult in public proceedings to find the artist’s true best interest amongst a lot of the submissions from industry groups, but my career is built on finding and managing those best interests.
    If you want to know how a musician really thinks and how they survive in an always-evolving commercial landscape, hopefully I can provide some of these insights.
    You’ve heard from several thoughtful organizations at this point. I've seen some of the previous proceedings, so there is no need to repeat a lot of that. I think there is high-level agreement amongst the music industry participants that the introduction of generative AI is a significant threat to the creative industries and specifically the music business.
    From the artist's perspective, I would suggest there are three areas where the Department of Heritage can provide leadership and support.
    The first is strong artist funding supports. We currently have very strong supports for artists at all levels thanks to the Canada music fund and support from private broadcasters. In fact, we are the envy of many countries in this regard. The best defence for artists to combat and manage some of this new technology is to know they can count on the continuation of these strong supports at all levels. This is something the Department of Heritage can uniquely provide. The more we can support artists directly to sustain their ability to create music, the better. You can count on them to navigate the industry. I have seen musicians go from delivery of vinyl albums and recording on two-inch tape to producing CDs, home-studio recording on digital devices and, ultimately, streaming. I have seen them tied up in abusive recording deals all the way through to retaining ownership of their own masters and running their own labels. If we give them the support to create the music, they will figure out the best way to get it to fans.
    The second is to strengthen some of the copyright provisions in this area. We need to reaffirm the copyright protection that exists now and reject any exemptions for data mining or scraping as being a fair use. Creative output is well protected right now by copyright, and all AI companies need to be reminded that any use of it without permission and compensation is stealing. I'm not saying you can't do it, but there needs to be a structure and framework for it. I believe you heard this from Patrick Rogers from Music Canada, who made this point.
    The third thing to consider is the introduction and formalization of a personality right. In the U.S., there is a specific “right of personality” that is protected by their constitution. We don’t have an explicit right similar to that in our laws, although the courts have been inclined here to imply that a personality right exists in Canada. When someone steals the musical style of an artist, including their voice and maybe their image, it's a violation of not only the commercial rights and the copyrights but the very core of their character, likeness and personality. I think it would be helpful to review some of the laws relating to a right of personality and add further protections in that area.
    Thank you.
(1645)
    Thank you, sir.
    Next I give the floor to Nikita Roy from Newsroom Robots Lab.
    You have the floor for five minutes, please.
     Good afternoon, Madam Chair, and members of the committee.
    My name is Nikita Roy. I am a data scientist, journalist and AI educator. I host the Newsroom Robots podcast and lead the Newsroom Robots Lab, which is incubated out of the Harvard Innovation Labs.
    The work that my team and I do sits at the intersection of technology and journalism, helping media organizations worldwide harness AI to transform their work.
    Artificial intelligence is becoming the infrastructure that mediates how people encounter information. It's redistributing the power to tell stories and to decide whose stories get heard. That transformation strikes at the core of Canada's cultural sovereignty and our ability to shape our own narratives in a world that is increasingly filtered by algorithms that we did not design and do not control.
    Let me paint a picture of what's happening to our information ecosystem today. There are three shifts that are colliding all at once.
     First, news is no longer something people just consume; they're starting to talk to it. Google, Microsoft and OpenAI are already offering personalized, conversational news and information experiences.
    At Newsroom Robots, we built a voice-first AI expert for the news industry. It is trained on all of my own work and able to speak in over 30 different languages. It shows how AI can democratize access and also how quickly the ground is shifting. As news becomes something that literally anybody can talk to, we are entering a world where conversations guided by the AI algorithms—not simply articles—shape journalism. We must ask whose voices guide them and whose are left out.
    The second shift is the collapse of search. For two decades, search was a major gateway to journalism. That gateway is now collapsing. We've moved from search, click, read and act to just ask, answer and act. AI cuts out the middle. There's no longer a home page, there's no click and there's no context. If Canadian journalism isn't built into those answers, it's invisible.
     Across the world, more and more searches now end without a single click. That is a silent collapse of the pathways that once led audiences to the news. Increasingly, the reader isn't even human; it's a bot that is training on journalism without consent or compensation.
    The third shift is that AI is becoming the new home page. Platforms such as ChatGPT Pulse and Perplexity reassemble reporting from multiple newsrooms and present it within their own interface, stripping away our editorial voice. It's building a front page powered by our journalism without our bylines, without our context and without our curation. The Internet's front page is being rebuilt not by editors or publishers, but by algorithms of foreign AI platforms.
    We are entering an era right now where the very infrastructure of knowledge is being rewritten by systems that we don't design and do not govern. If we fail to anticipate that cultural shift, our cultural sovereignty may not be decided in Parliament, but in the prompt and response loops of foreign AI platforms.
    Together, these three forces—the collapse of search, the rise of AI interfaces and the shift from consumption to conversation—are redefining not just how we access information but who shapes the narratives that define us.
    In this AI era, the greatest risk facing creators is invisibility. If our data, our languages and our voices aren't part of global models, we lose presence. We fade from the world's informational map. These aren't just traditional copyright questions; they are context rights questions—the right to be represented, visible and understood.
    As UNESCO's “Artificial Intelligence and Culture” report warns, AI “is advancing faster than cultural governance, widening divides” and threatening cultural sovereignty.
    Who gets to tell our nation's story, when machines become the interpreters of culture? If we want our creative voice to not just survive but to lead in this new era, we must act decisively on three fronts.
    First is context rights. We must protect how our stories, languages and knowledge are used and understood within AI systems. Creators deserve transparency, credit and choice.
    Second is capacity building. We must invest in creators' ability to work with AI and strengthen AI literacy across our creative and information sectors so that people can shape technology and not just be shaped by it.
     Last year I launched and led the first-of-its-kind generative AI training for journalists at The City University of New York. The AI Journalism Lab, supported by Microsoft, has since trained journalists around the world, including from Canada.
    What I've seen is that when creators understand AI the right way, they stop fearing it and start shaping it. AI literacy is creative agency and that's what keeps culture alive in this new era.
(1650)
     Third is the Canadian data commons. We must build an ethically governed cultural data infrastructure, a kind of public library for the AI age that reflects our bilingual, indigenous and multicultural reality.
    Our data is our cultural infrastructure, and our stories should not just become foreign imports in the digital age, because AI is not just changing how we tell stories; it's changing who gets to tell them, and that's what's at stake.
    I'll give my time back to the chair. Thank you.
    Thank you.
    Finally, we have Vicky Mochama from PressForward. You have the floor for five minutes.
    Good afternoon, Chair, and esteemed committee members.
    I'm Vicky Mochama and I'm the communications director for PressForward.
    Launched in 2020 with six founding members, PressForward has grown to represent 24 member media businesses across Canada. We advocate for, and organize with, media publishers, regardless of medium or business model, to ensure that independent media are heard on the issues affecting our work in the communities we live in and strive along with.
    It should be noted that the news industry has been adopting AI tools for a fairly long time. For example, the Associated Press has been experimenting with AI since 2014. For our members, that means a variety of approaches and uses.
    At Taproot Edmonton, in this year's municipal election, an AI tool helped the publication parse the issues most important to the city's residents, based on responses written to the publication by Edmontonians, and then shape a 30-question multiple-choice survey sent to all candidates.
    For Future of Good, artificial intelligence tools help the publication in writing website code and with some copy-editing. Regardless, a human then edits and verifies the final product before publication.
    At The Narwhal, a working group is consulting with readers and the newsroom alike on the publication's use of AI.
    Each publication is responding to the AI question carefully, cautiously and with their communities in mind. Many publications are working across their entire businesses to craft responsible AI policies.
    This will differ for each publication, its audience and its mission, but this can be said: There are uses of AI that are effective and ethical, and there are those that are destructive and unethical. I'll speak to the latter first.
    As people who provide information, our journalism outlets rely on people in our communities coming to us for trusted, verified and bylined information. These points of connection are vital for a healthy public, particularly in the age of rampant disinformation and misinformation. These touchpoints are also where our publications generate revenue that covers the cost of journalism, which we do through advertising, reader donations or selling tickets to community events. AI summaries that reduce traffic to our sites by scraping human-verified information without compensation or attribution remove these points of connection.
    As news publishers deeply embedded in our communities, our members highly value the trust and credibility that their communities place in them. We take responsibility for every single word, image and video that ends up on our platforms. We issue corrections, clarifications and apologies when necessary. In part, that is because news media businesses are legally liable for the veracity of their reporting. Practically speaking, this means there must always be a human in the loop to be held accountable.
    Meanwhile, it's unclear what legal accountability generative AI, chatbots and the like can face when they get things wrong. Without the necessary guardrails and repercussions in place, the Canadian public are increasingly being asked to do their own fact-checking in a deeply fractious information ecosystem.
    Moreover, we are in the business of news, while AI is often repeating what has already been established. The intellectual work of journalism cannot be replicated. AI tools may augment and assist, but cannot replace, the work of trusted journalists connecting with and reporting on what people are doing, thinking and saying in local communities.
    In an era of misinformation and disinformation, toxic online discourses and digital foreign interference, trade wars and subsequent job losses, and indeed AI's as yet unresolved accuracy issues, it is even more essential to strengthen the media sector in Canada so that we can continue to serve our communities with factual information.
    The Edelman Trust Barometer for 2025 notes that there is a “widening trust gap in Canadian society, with institutions facing mounting pressure to rebuild confidence and credibility.” PressForward's member publications play an important part in that rebuild by centring our communities and upholding the core values of transparency, accuracy and accountability.
    In sum, AI is having a multitude of impacts on journalism in Canada, and there are of course some use cases where AI can be a useful tool in our media businesses, but generative AI specifically presents a number of threats and challenges to journalism by confidently distributing false or incomplete information; using news publications' content without payment or permission, or indeed attribution; and breaking the direct connection between our newsrooms and the audience.
    In the last decade, the Canadian government has responded to this disruption and many others before it with supportive measures to shore up the production of quality journalism in Canada. We implore you to approach any AI strategy or regulation in a way that does not undermine those efforts.
    Thank you, Chair.
(1655)
     Thank you.
     Thank you to all our witnesses for being so on time today.
    We'll start the questions with Mrs. Thomas from the Conservatives for six minutes.
    You have the floor.
    Thank you so much.
    Ms. Roy, I was reading some of the things you've published, and I've listened to a number of podcasts that have been produced. One of the things you stated in an interview was the following. You said, “I believe that conversational AI is going to hold huge potential for news and it's going to be able to connect us to our audience even more than we've possibly been able to before.”
    Obviously you're seeing a great deal of potential. You're seeing opportunity. I believe that you're fairly forward-thinking in terms of your willingness to consider AI and give it a role within the newsroom.
    I hope that maybe you can expand a little bit on those opportunities that you see potentially playing in our future and how that can be done in a way that advances all Canadians.
    The whole part I'm talking about, news becoming a conversation, is the big shift that we're going to see in how generative AI is going to revolutionize the way people access information. We're already seeing that with tech platforms, but if newsrooms are able to do it with trusted, verified information, what you essentially have is people engaging with news at a one-on-one level.
     Right now, news is one size fits all. Imagine what that could mean for people across Canada. We're such a multicultural community, and we would be able to cater to people across languages, at the level of language they require. That's one way in which news is going to be able to help people be a more informed society.
    The second part is that, when people are able to have conversations with the news, we are also able to understand a bit more what's missing. What aren't we covering? Right now when we publish a news story, we aren't able to engage with our audience other than through the comments section on our social media page, if you have comments available.
     This transforms journalism from a one-way broadcast to a dialogue that we can have at an individual level. I think that will make for a more informed Canadian society, but only when it's trusted information within a trusted algorithm.
(1700)
    Right now, there are different news barometers that would say that this news source is more left of centre when it comes to the political spectrum, this one's more right, and this one's more centrist. You have this continuum of different news sources across the country.
    My curiosity is this. Is there potential, then, that AI could bring a more balanced approach to news in giving the audience member the opportunity to access news from all sides or even present divergent views at the same time to the individual who's reading or listening to the source? Is that possible? Is there an opportunity there?
    That's definitely a possibility. It's already happening.
    If you look at a lot of the tech platforms when they are going and crawling newsrooms' data, they do in a way provide that information. If you are asking, “What does this newsroom versus that newsroom say? How do their words differ? How did their context differ?”, you're able to create all of that. That system exists.
    The question now is who's in control of this algorithm and who's building that platform to do that.
    There are also a lot of studies coming out where, if people believe in a particular type of conspiracy theory or something, a conversation with AI is able to help them understand the fallacies in their argument and help them understand what the truth is, in a way.
     There are a lot of ways in which just conversation itself, that ability of having a back-and-forth conversation, helps people to understand information. The ability for AI right now to access a ton of information at the same time gives them an ability to see a spectrum of views.
    If we as legislators want to ensure that the opportunity is protected, advancement is protected, innovation is protected and creativity is protected, we want to make sure that those things are possible. At the same time, we want to make sure that the dangers are minimized.
    What would you suggest in terms of how that is achieved?
    First of all, it is about getting the people who are doing that reporting compensated. If you go down to the first principles of what news is, news is a real-time structured knowledge network of verified information. Now, it can have different contexts and different framing, depending on the news site you go to. When an AI model takes over that framing, that editorial judgment, that's the danger I spoke about in my testimony. That editorial judgment goes away to foreign AI platforms, and it's not necessarily something that is controlled within Canada itself.
    If you have, say, a public broadcaster like the CBC being that source of information people are able to go to, you're able to see a story from different sides and understand what the context and the framing are. That is where conversation helps people because, once again, you're able to tackle a particular story from different sides and opposing views and understand it that way.
    I think what we need to do is really compensate the people who are gathering or producing that information. It's possible right now, and it is happening already: I can go to ChatGPT or Perplexity and ask for the news across the political spectrum, but those people are not being compensated. There is going to have to be a factor in the business model that compensates those knowledge creators, or they will not continue producing that work.
     Very quickly here, there are some who say AI replaces human creativity, and there are some who say that, actually, we can partner with AI for further creativity. Where would you fall on that spectrum?
    Oh, completely on the pro-AI side.... AI does not replace human creativity; in fact, it amplifies it. Today, any person with an imagination, even someone who never went to journalism school or took a videography class, is able to go and create. I think creativity is definitely amplified. I don't think AI is going to replace it.
    Thank you.
    Mr. Al Soud, you have the floor now for six minutes.
    Thank you, Madam Chair. It's great to see you, as always.
    Thank you to our witnesses for being with us today. It's truly appreciated.
    During the conversations we've had here, over the course of this study, we've started seeing themes evolve. The one I latch on to is the following: any potential cultural policy can't just protect the past; it has to govern the systems shaping what we see and what we don't see. I've often emphasized, at this committee, the importance of striking the right balance between preserving and empowering our creative sectors while also incentivizing innovation. This will resonate through my questions today, in general, as follows: How do we build a digital future where Canadian creators, institutions and cultural voices thrive, not in spite of new technology but through it?
    With this in mind, Mr. Geist, you've previously said, “AI investment worldwide runs into the hundreds of billions of dollars, and the Canadian government contribution is never going to be more than a rounding error in total AI spending.”
     You've long argued that our laws lag behind the realities of the Internet economy. This has come up on several occasions already. From your perspective, how does Canada create room for itself, given obvious barriers, while simultaneously protecting our cultural and creative sectors?
(1705)
     You're right. I've been, at times, critical of some of the proposals that we've seen—including here at this committee. I should preface this by saying that I think your perspective, in saying how we facilitate innovation while at the same time adhere to some of our other objectives, is the right starting point.
    The challenge we face is evident even in the discussion we've had so far. We've had a number of witnesses express real concern that the Canadian perspective—the wide range of views, languages, ethnicities and the like—somehow won't make its way in there, and so we won't be reflected in the outputs. At the same time, in the same conversation, we say we're really concerned when you do include our stuff, because that then raises concerns about whether or not we're being compensated.
    One of the real things we have to do, as part of even the broader government efforts around AI regulatory policy, is to ask, “What do we really want to see happen here? Do we want to ensure that there is a Canadian perspective, that there are Canadian virtues and values reflected in some of the outputs?” I think the danger is in erecting barriers and saying, “We don't want you to use our stuff.” That really comes at a significant cost, particularly for a committee that has been so focused on ensuring that Canada is well reflected in a wide range of media.
    When I heard Ms. Roy talk about things like “transparency, credit and choice”, those are real, valuable pieces to start thinking about. I may have a different interpretation of it, but to me it's transparency, in terms of knowing where the sources are; credit, so that there's appropriate attribution since, sometimes, we do see people linking through; and choice, in a way that actually does empower sites, creators and others to ask, “Do you want to ensure your work is there, available and included in these large language models, or do you want to opt out?”
    I think that, at the moment, we don't have a great system for it. I'll quickly give you an example. We've long had a system in “search” that basically says, “Search companies can index my content, but I can choose to opt out.” Many of the AI companies are, essentially, relying on that same signal, the robots.txt signal, and it seems to me that it's somewhat inappropriate here. It should be the case that a site should be able to say, “I want my content indexed for search purposes, but perhaps I don't want it indexed for large language model purposes.” We need systems that will allow people to make that kind of granular choice about who gets to index their content, in a sense.
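    The granular signal Dr. Geist describes can be sketched in a robots.txt file. The crawler tokens below (Googlebot for search indexing, GPTBot and Google-Extended for AI training) are published tokens, but the file should be read as an illustrative sketch of the allow-search, block-LLM pattern rather than a complete or current list of AI crawlers:

```text
# Allow traditional search engines to index the site
User-agent: Googlebot
Allow: /

# Opt out of crawlers that gather content for AI model training
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

    As the testimony notes, compliance with robots.txt is voluntary, and a single Disallow cannot distinguish indexing-for-search from indexing-for-training when one crawler serves both purposes, which is exactly the gap in granularity being described.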
    That's very interesting. Thank you for that.
    In your Globe and Mail piece, you noted that Canada faces a choice between U.S.-style deregulation and Europe's stricter rules. Do you believe there is a middle ground?
    Well, I think we are struggling to find one, to be candid about it. One reason for that—and this, of course, extends far beyond just the creative sector discussion and applies more broadly—is we know that capital and individuals are mobile. In the current environment, if we take a look at where the trends are headed—you talked about looking ahead—I think we do run real risks. If we establish regulatory frameworks that send the signal that we'd like to see AI here but only on our terms, we're going to find that many say, “Well, do you know what? We're going to shift elsewhere."
    I think we've seen it quite profoundly, even with respect to the EU's approach, which, in a fairly short period of time, has gone from what some perceived to be the model for AI regulation to one that many who want to be competitors in the AI space are increasingly wary of, because I think they have real fears that, ultimately, it will essentially exclude them from some of those opportunities.
(1710)
     Incredible.
    I don't have that much time left.
    Ms. Mochama, I'm quite curious: You've cited concerns about AI being a potential extinction-level risk, notably the tension between technological power and democratic communication.
    From a communications and media viewpoint, how do we hold AI companies accountable to democratic principles when their tools shape public discourse?
    At the core of any decisions around AI is the question of public trust. I'm certain you see it in your offices where people become angrier, more frustrated and less accurately informed. That has been in progress since well before AI intervened in that process; what we're now seeing is an acceleration of that with the use of AI tools.
    Can we meaningfully intervene in what AI is doing but also take a step back, look at the larger picture and look at the landscape of trust? How does trust factor into our digital information ecosystem, whether it's your ability to log on to the Toronto Public Library website—as many Torontonians could not do, because that website was held ransom for almost a year—or your ability to log on to ChatGPT and ask it a question about who Vicky Mochama is? When I ask ChatGPT that, it describes a woman I've never met in my life. We have a set of unreliable factors. That doesn't mean that everybody is a bad actor in the mix; it's simply that we do not have a coherent information ecosystem that we can point to and say to Canadians, “You can trust that.” I think the trust question is primary for our publishers: How can we ensure that they can trust that?
    The question of attribution is an important one when it comes to AI summaries or anything that AI software is going to use. If you're going to pull from a reporter's work, the public should be able to replicate that work, find that reporter and ask if that was true. I think the human-in-the-loop paradigm is one that our publishers rely on, especially when it comes to public trust.
    Thank you.

[Translation]

    Mr. Champoux, you have the floor for six minutes.
    Thank you, Madam Chair.
    Ms. Guèvremont, I first want to congratulate you on the book entitled Intelligence artificielle, culture et média, which you published with Colette Brin. It's absolutely fascinating. I found a lot of answers and a lot of interesting questions in there. It provides extremely relevant material to fuel discussion. I'm pleased to have you here today.
    There's certainly a lot of talk about the impact of artificial intelligence on the cultural sector. However, alongside that sector, there's the journalism sector, which we're discussing today with our guests. I'd like to take you down that path, then, because it's an integral part of your research. I found it quite interesting to see that not everything is doom and gloom in the conclusions drawn from your research, and I'd like to hear your thoughts on that.
    Is it possible that journalism might not be a victim of artificial intelligence, but instead benefit from it? I'm talking about journalism as a profession, but also, more broadly, journalism in the sense of its societal role and its essential nature for democracy.
    Do you think it's possible to use artificial intelligence and look to the future with a bit more optimism?
    Mr. Champoux, thank you for your question as well as your comments on the book I co-edited with Colette Brin.
    I'm going to disappoint you, because our expertise is shared between Colette Brin and me in this book. Colette Brin, who's a Laval University professor in the department of information and communications, and director of the media studies centre, is really the one who would have the expertise to answer that question, specifically in relation to media and journalism.
    I'll say a few words, because we do work together. Colette Brin is one of those researchers who recognizes the potential of artificial intelligence, on one hand, depending on how it's used by journalists and in newsrooms, particularly in terms of increasing productivity. Journalists and media rooms have been using artificial intelligence for a very long time now, and they can benefit from it in some respects. However, there are obviously also some risks associated with its use.
    I'm afraid I don't want to dwell on the topic, since it's her area of expertise.
    That's absolutely fine. We can also refer to this book, which contains answers, particularly the results of investigations conducted by Knight Lab and Knight Foundation, which reveal some interesting and encouraging things for the media, including those with limited resources.
    Anyway, we may be able to come back to it later if we can have more studies and hear from Ms. Brin at another meeting.
    You also talked about culture and raised an issue that I find extremely concerning: the homogenization of cultural content. That's an issue that's all the more concerning because we have a culture to protect, particularly that of Quebec, which is surrounded by around 400 million anglophones.
    Given Canada's francophone culture, specifically Quebec's culture, as well as our cultural ecosystem, how can we play our cards right and convince the major players, who don't necessarily need to care about us, to respect that difference?
    Is that an impossible dream? Are we always going to be swimming against the tide, or do we have the technological ability to play our cards right?
(1715)
    Thank you again for your question, which is excellent.
    Some actions have already been taken. First, homogenization is a phenomenon that isn't unique to the era of artificial intelligence. Certain cultural powers, such as the United States or other governments in other parts of the world, have long had enormous power to penetrate our markets in light of globalization. That's already leading to a certain homogenization of what we're exposed to as an audience, so homogenization isn't a phenomenon that's unique to artificial intelligence.
    Now we're seeing that the use of artificial intelligence at various stages in the value chain can amplify that phenomenon. When it comes to using artificial intelligence in analyzing user behaviour and algorithmic recommendations, studies have shown for some time now that content similar to the content previously viewed and enjoyed by users gets featured. That leads to a certain homogenization of the content consumed and to the filter bubble phenomenon that exists in the media but is also observed in the cultural sector.
    The way to combat those filter bubbles is to ensure that recommendation algorithms are redesigned so that they don't recommend only similar content, but instead open the field of possibilities and spark the audience's curiosity about types of content other than those they have previously enjoyed or viewed.
    Legislative measures have been developed in that area. In Canada, the Online Streaming Act talks about promoting Canadian content, whether anglophone, francophone or indigenous. As you know, a bill on the discoverability of French-language content is currently being studied in Quebec. That covers the stage of distributing and disseminating works.
    The challenge now is to also combat homogenization at the content creation and production stage, given that there can be more and more reliance on artificial intelligence.
    I'll go back to what I said in my opening remarks. It's really important to work on the training datasets so that they are diverse, don't include biases and can generate a certain diversity of content.
    It's important to be careful, though. This isn't about generative artificial intelligence that would generate content entirely on its own. If that were the case, we would necessarily be dealing with a form of homogenization: given how those systems function, based on calculations and probabilities, we would inevitably be moving toward the production of similar content. Where we have some leeway is when—
    Thank you.
    Yes. I'm sorry.
    That's okay.
    Thank you.
    Thank you, Ms. Guèvremont.

[English]

     Mr. Diotte, you have the floor now for five minutes.
(1720)
    Mr. Geist, I was a long-time journalist, so I have a special interest in this regard.
    I believe you mentioned something about how over-regulation of AI could spark a replay of what we've seen happen on news sites or something to that effect.
    Could you expand on that and say what the risks are?
    It's reasonably well known that I was quite critical of Bill C-18, the Online News Act. Many of the concerns raised at that time have played out with respect to the blocking of news links, now in place for a couple of years. It hasn't entirely eliminated the existence of news on the Meta platforms, but it has eliminated much of the quality news. We've seen substitutable content that has undermined reliability and diminished the amount of news because, quite frankly, people are happy enough not to view that news. It has ultimately hurt many of the news organizations that were beneficiaries of the referral traffic and the like that was coming from that platform.
    Even more, I think Bill C-18 has also had the effect of undermining the independence of many media outlets. We've now seen some outlets that previously were opposed to taking the Google money now saying they can't effectively compete if they don't take the Google money when their direct competitors are.
    We've created a system where the vast majority of organizations are now increasingly dependent on regulatory models that result in these kinds of payments. I don't think that is a healthy place to be.
    I referenced that in my opening remarks because it seems to me that one of the risks we face is if we adopt an approach that increases the costs.... Basically, if we say that the only way you can include certain Canadian content in a large language model is if you meet these new regulated costs, we're going to loop in some of these same players and we run the risk of running a replay on this all over again.
    It's not that these companies are going to exit per se, although some might. The Quebec legislation that was just referenced—Bill 109, I think—very likely leads to the potential exit of certain streaming services, given some of the demands, especially in the context of AI. We have services that will basically say that if the choice is that they have to compensate in a manner that is inconsistent with what the business models and the global standards are or exclude their content, they're going to exclude the content.
    That leads us to a place where many of the benefits we've heard about from this are lost. Even more, the concerns we have about the AI systems reflecting our culture and reflecting our interests may be lost.
    You have to ensure that you are in the game, it seems to me. We face real risks if we make choices that lead some of these organizations to say that Canada is a market that isn't AI-friendly and they can't effectively compete in this space.
    As I said, we've seen that play out with respect to Bill C-18. I don't think we'd want to see that play out again with respect to generative AI.
    Excellent.
    Dr. Geist, who determines if AI is impartial or objective about the news it creates? We've heard, obviously, that it scrapes news. Who determines and how do we know that news is objective?
    People ask that same question about all sorts of news organizations, not just the AI-generated versions of that content.
    We don't have someone who reaches those determinations. Take it as a given that the majority of the public is interested in accessing accurate news, whether from the original source or from a summary gathered up from a number of places. If you end up with an AI system that consistently provides information that is found to be inaccurate or biased, I suspect the majority of users are going to say it is not an AI service they want to rely upon. We've seen it play out with Grok, the AI service on Twitter. There were a number of instances when some of the results it was generating caused a lot of concern, and people abandoned the service because it was not reliable.
    The market has some disciplining effect here. If the kinds of outputs that are being generated are the sorts of outputs people can't rely upon, if the summaries themselves don't reflect what is happening in their own community or their own interests and the like, they simply aren't going to use those services, it seems to me.
    The market plays a bit of a role, but we don't have—nor should we have—a government agency or otherwise going on and checking this out to say, “Is this accurate or not?” That's not how the system functions.
(1725)
     Thank you very much.
    Ms. Royer, you now have the floor for five minutes.
    Thank you, Madam Chair.
    I'm really enjoying the conversation.
    A recent Library of Parliament report highlighted the fact that many countries are now re-examining their copyright laws. The EU gives creators the right to opt out. That's really interesting, because it puts the onus on the artists. The creator's work has already been stolen, but that means that unless they opt out of having it used, it can be used freely. We've heard some witnesses at previous meetings on this topic talk about opting in to have their content used.
    Could you speak a little bit more about that?
    I know, Professor Geist, you touched on opting out, but I'd also like to hear from Vicky on this topic.
    Thank you.
    I'll try to go quickly.
    First off, I don't see what is taking place as theft—it's not being stolen—and it is not necessarily the case that it can be used freely. What's taking place, certainly in an AI context, is an interest in the underlying data itself. It's not being republished. It's not being commercialized in any way where people take that original source and try to make the original work available. They're interested in the underlying data itself. To the extent that we see these systems learning and then generating results that are similar, musicians and creators have been doing that forever. Many of their perspectives and approaches are based on seeing what others are doing and trying to adapt and put their own unique spin on it.
    In that sense, this is not theft in the way we would conventionally think of theft, and it is not free use. The uses—at least what we've seen from the courts so far—are consistent with what fair use or fair dealing would be, and frankly, because you mentioned the EU, the EU has a text and data mining exception that specifically seeks to ensure there are certain kinds of uses that are appropriate for that kind of informational analysis purpose.
    That's not to say we don't need to be thinking about copyright in this context, but I'm not wholly convinced the starting point here is to say that what's happening is that content is being stolen and then used with no limits at all. There are clear limits in the law right now under fair dealing and fair use, and the kinds of uses are not what we would conventionally see when we think of that.
    I guess, really, it's specifically to opt in and opt out.
    Vicky, could you also respond?
    Creators should be empowered to leverage their data, their content and their creations as they best see fit. There are examples of singers who can no longer sing who have opted to work with AI systems to recreate songs or music; they want to continue having their voice out there, but conditions are such that they can't do that themselves. That should be an option available to you as the creator. If I'm writing a book I'm not going to finish and I want the publisher to finish it for me using an AI system, that should be within my rights. It shouldn't be within the publisher's rights to insist on that. I think it's about where the power balance lies and whether the creator is still made whole and sacrosanct in that process.
    Thank you.
    My next question is for Mr. Sutherland.
    You had talked about the right of personality. You mentioned this is within the U.S. Constitution, but we are seeing deepfakes come out of the U.S.
    Can you speak a little bit more about that? Does their constitution protect them enough? What can Canada do in this realm?
    The protection is there. Whether you can enforce it or not is the question with the deepfakes.
    We barely have it. Just to give you an example of what I mean by this and why it's separate, there was the case of the skaters Salé and Pelletier. A photographer took their picture skating, framed it and sold it for $500 at a store, and they were like, “Wait a minute; that's us.” The photographer owned the photograph, because he was the copyright holder. Copyright didn't protect them, but he was using their image to make money.
    There are a few cases like that, and that's what I'm talking about. If somebody were to use the recording, it's such a fine line. What Professor Geist said is true: Musicians constantly reference other musicians they're inspired by. That's different from scraping the actual songs from a record and then using the image of the artist. Feist, whom I manage, has a very distinct voice and image. It captures her whole character.
    That's what I'm saying. It wouldn't hurt for us to be exploring whether we can formalize this as a separate protection. It's not data. It's not that vague, “Well, they're just learning from the data.” If you're using their voice and their image, then you're definitely stepping over that line.
(1730)
     Thank you.

[Translation]

    Mr. Champoux, you have the floor for two and a half minutes.
    Thank you, Madam Chair.
    Ms. Guèvremont, I'll come back to you. Our American neighbour has its eye on the cultural sector. I think it really doesn't like the barriers that Quebec and Canada put in place to protect our cultural distinctiveness.
    In your opinion, in the specific case of artificial intelligence, would it be possible to effectively regulate and oversee its uses? Is it a waste of time?
    If so, how could we regulate the development of artificial intelligence to properly oversee it, with all the constraints that entails?
    If not, do you think we should invest more in Quebec and Canadian businesses to encourage the development of artificial intelligence?
    Which approach would prevent us from disappearing due to artificial intelligence?
    Thank you for your questions.
    The first, on how to regulate AI, deserves a nuanced response.
    First, what underlies your question is the idea of establishing regulations. Here in Canada, I think there is still progress to be made, even though some of our laws already have an impact on certain uses of artificial intelligence. The fact is that we don't have the equivalent of the European Union's AI regulations, far from it. The idea, therefore, would be to establish a regulatory base. Regulations could include references to the use of copyright-protected works, and we could explore various possible options.
    Earlier, we talked about the opt-out rights in European legislation. It must be said that artists and culture professionals are very critical of opting out, preferring instead an opt-in standard. The reason is that, while the rule's wording may sound very appealing, the legal experts currently working on implementing the relevant provisions are pointing out the technical challenges of doing so. Even if artists use their opt-out rights, they have virtually no way of verifying that their work is no longer being used, because there's very little transparency.
     In terms of the obligation of transparency, requiring companies that develop training models to disclose the range of data they use is also a huge challenge. We need to legislate and also try to address the loopholes that emerge when legislation is clearly hitting a wall.
    The second part of my answer on this issue is to wonder whether we shouldn't even go so far as to legislate specifically on the use of artificial intelligence in cultural and creative industries. What we see is that general frameworks are usually not detailed enough to take into account the risks that are specific to the use of artificial intelligence in culture.
    We naturally want to support innovation, and we want to enable artists to use these technologies if they wish to. That said, we still have to guard against the risks arising from the competition between synthetic content and human-created content, where human creativity is being gradually marginalized in some areas.
    Thank you.
    Thank you.
(1735)

[English]

    Mr. Waugh, you have the floor now for five minutes.
    Thank you, Madam Chair.
    I come from an era when newsrooms always sucked the money out of a company. It didn't matter if it was radio, newspaper or television. Executives would look away because it was the area of the business where it always cost money to do the business.
    Of course, there's the editorial side. There's a line in the newsroom: on the other side, as you know, is advertising, and you couldn't cross that line onto the editorial side, at least when I was in it. Advertising people were never encouraged to come into the newsroom.
    Ms. Roy, you talked about compensation. You talked about a business model. What is the business model? How can people make money, other than through subscriptions to Mr. Geist after his editorials? How can you make money to make a living, if you don't mind my asking?
     That's the question all the newsrooms are exploring right now because it is an existential question for the business model of news. It has been our business model for over a century—funded by advertising and subscriptions.
    The key thing for newsrooms right now is how they build a relationship with their audience. A lot of newsrooms are moving toward building direct relationships with their audience, rather than depending on referral traffic from social media, which went away after Bill C-18, and now Google search traffic is dropping too.
    The problem is that, with search traffic collapsing, tech companies are able to create a competitor to journalism products using journalists' own reporting. This matters especially for Canadian journalism because right now OpenAI is being sued by top Canadian publishers like CBC and the Toronto Star. As a result, ChatGPT does not show you any Canadian news from these Canadian publishers. When you ask it questions about Canada or anything like that, the sources informing the public are news from outside Canada.
    Still, for a user it's a really great product, so people are turning to it as a competitive product. It's still able to give me news about Toronto from public sources, and it's becoming a competitor to the Star.
    What we have is this broken place, because search used to be one of the key ways people could connect to and find the Toronto Star's news, for example, or CBC's news. Increasingly, that link or that funnel is broken, so right now there are two areas where newsrooms are really investing. One is looking at how they can be out in the communities more, connecting one on one with their audience members. The second is looking at what types of competitive products they can create with today's technology, reimagining what news looks like with today's infrastructure.
    That requires a lot more investment, which a lot of newsrooms do not have unfortunately, to compete with the technology companies of Silicon Valley. That's one of the critical gaps that has to be filled and supported for the news industry. We need to build a technology infrastructure for newsrooms.
    Where is the trust in AI in newsrooms now?
    Trust is the big thing, I think, from the public. We've had Ms. Tait here from the national broadcaster and she admits that there is a decline in trust in newsrooms.
    How can we get trust in AI in news? How is that going to work?
    A lot of work is being done in understanding and being transparent with our audience. I think the next big thing is about how we are using AI within news itself. In the news-gathering process it's helping us do better stories. Vicky was highlighting a lot of ways in which it's helping to distribute the news better.
    Fundamentally, something that we can really take away from a tech giant's product like ChatGPT is that when it goes wrong, you cannot hold it to account. If CBC or any of the major newsrooms are wrong, we have a person to call up and at least talk to about how we're being given the wrong information.
    It is a Canadian company, it is a Canadian journalist and it will be, at the end of the day, a Canadian who has found and reported that information. I think that trust is something that newsrooms are trying to really highlight. It's the individual voices and the individual reporters who are going and mapping that information and building that connection.
    Again, what is happening is that the top of this funnel is being completely broken, because a lot of it is being captured by the tech giants.
(1740)

[Translation]

    Mr. Ntumba, you have the floor for five minutes.
    Good evening. I want to thank you all for your answers and your great presentations.
    I'll start with you, Ms. Guèvremont. First of all, how are you?
    I'm doing very well, thank you. How are you?
    I'm fine, thank you.
    Ms. Guèvremont, what mechanisms do you think should be put in place collectively to ensure that artificial intelligence algorithms respect the principles of cultural diversity?
    Thank you for your question.
    We're already working on the first mechanism. Through the review of our broadcasting laws, for example, we set obligations to showcase Canadian content. In Quebec, the act in force is currently subject to a review on showcasing French-language content.
    We are taking steps to ensure that there is more cultural diversity, regardless of the means used by the platforms to publish content, make it more visible and recommend it to audiences.
    Right now, as I said earlier, it's clear that the content that is promoted and broadcast is based on similar content or content produced by the platforms themselves, which often goes against the promotion of cultural diversity. These laws still need to be implemented. We are awaiting a number of decisions from the Canadian Radio-television and Telecommunications Commission, or CRTC, on the implementation of the Online Streaming Act to clarify the obligations set out in it.
    We do not yet have all the means in place to ensure that artificial intelligence, which is used to distribute content based on an analysis of consumer behaviour, provides us with more diversified content.
    Given what generative AI produces, we will also have to limit the way synthetic content competes with human-created content. Suggestions have been made in terms of what's happening on digital music distribution platforms. Improving transparency, particularly in terms of identifying content generated by artificial intelligence, would be a step in the right direction.
    Our definition of the various types of content will no doubt have to be reviewed. We will have to take a stand on how artificial intelligence will be used to produce Canadian content. For example, content that is artificially created should not receive the CanCon designation. There are other aspects that need to be addressed to make sure that AI doesn't compromise cultural expression.
    Thank you.
    I want to go to Dr. Geist.
    Earlier, my colleague asked you a question about the positions taken by the United States and Europe, and Ms. Guèvremont talked about defining certain points.
    Why do you think Canada is reluctant to act? Why is Canada not keeping up with the global trend on this issue? Are we lacking skills or information to be able to legislate AI?
    Could you elaborate on why Canada is lagging behind the U.S. and Europe?

[English]

     Let me start with why we have fallen behind. Respectfully, the strategy that we saw for the last number of years seemed to be premised primarily on the so-called “make web giants pay” approach. Rather than constraining certain behaviour and focusing on some of the harms that might be arising, it seemed to be more about how to profit by requiring payment, whether for news or on the streaming side. Frankly, I thought that was an ill-advised approach.
    I'm less concerned about this notion of falling behind. I think many countries are struggling, whether on the news side or on the streaming side. I have a somewhat different perspective than Professor Guèvremont on some of the issues around, let's say, homogenization and the like. It seems to me that some of these streaming services, whether Spotify on the music side or the services on the video side, offer an unprecedented amount of variety and content. The amount of choice we have has never been greater: orders of magnitude more than it once was. The challenge we often focus on is how you find that content. How do users take advantage of it, and how do we effectively promote some of it? There's been some fairly robust debate on that.
    I don't think that, at the end of the day, we need to be thinking about how we fell behind. What we need to ask, and what we haven't had, I think, enough of a conversation on—and this study helps contribute to that—is what our policy objectives are, especially around AI. Is it that we take advantage of it, that we build on the leadership that we once had on the technology side, the Hintons and the Bengios of the world who are Canadian? Canada sees itself as a leader here. How do we build on that? Do we see this more as a threat and so erect barriers, saying, “We want to get off this train”, and find ways to restrict some of it? Can we carve out policies that are receptive to the benefits? At the same time, we want to ensure that it addresses some of the Canadian concerns that we have.
    Those are many of the objectives, but clearly it's a challenging policy order.
(1745)
    Thank you.
    Mrs. Thomas, you have five minutes.
    Dr. Geist, I'm going to invite you to go a little deeper on some thoughts you shared with the committee already on Bill C-11. You outlined that you were a bit of a critic of that bill. In fact, you said, “instead of modernizing the law to reflect the current reality, the government chose to retain an outdated model and penalize Canadian digital success stories in the process.”
    With regards to Bill C-18, the Online News Act, you stated, “[it] has been an utter disaster, leading to millions in lost revenues with cancelled deals, reduced traffic for Canadian media sites, declining investment in media in Canada, and few options to salvage this mess.”
    Obviously, you're not a fan of either of these pieces of legislation, where they landed or, of course, the negative repercussions they have had on the Canadian public.
    Here we are at the table considering a new technology, AI, and trying to figure out how best to legislate it. In your mind, how do we make sure we do not create the same mess that was created within Bill C-11 and Bill C-18? What's the line in terms of freedom versus restriction for the sake of the public good?
    Clearly, people are able to look back on the stuff I write. I have to be careful sometimes, I guess.
    I did research it myself. I didn't even use AI to find it.
    Fair enough. It's not hard to find, I think.
    You're right. I was certainly critical of both of those bills. Quite frankly, I think the concerns that were expressed at the time have been borne out. We haven't really seen any money come forward with Bill C-11. The Online Streaming Act is now stuck in the courts. Bill C-18 has raised real concerns at the end of the day. There was some Google money, to be sure, but whether that's more than the money that was already being generated through Facebook and Google deals is an open question, and there have been some costs as well.
    I will come back to the response I gave a moment ago. I think the starting point for some of that legislation was wrong-headed. It wasn't that we shouldn't regulate where appropriate. It was that some of the approaches around regulation really seemed to be more about how we can generate some new revenues in support of particular policies.
    There is this sense that there is now a willingness to rethink some of these things. The digital services tax would be another example of rethinking some of the earlier approaches.
    It comes back to a theme that we've now come back to on a number of occasions. What really are our fundamental objectives here? Is it that we want OpenAI to fork over more money for some of those kinds of uses? Is it that we want to see our culture better reflected? Do we want Canadian digital sovereignty better protected? If so, then we need to prioritize some of those things.
    For example, one of the biggest concerns that we have in the AI space, since it's already been put on the table, has to do with the failure to enact modernized privacy legislation. Quite frankly, this is very much about data. The fact that we have struggled to move forward with modernized privacy rules really ought to have been job one with respect to many of these issues, and one would hope that we would deal with those things.
    I think we can find ways to position ourselves as leaders here without creating barriers, by focusing on things like transparency and perhaps by trying to adopt a leadership position when it comes to some of the opt-out models. I didn't have the chance to fully respond to Ms. Royer, but it does seem to me that the opt-out model is the right one for creating efficiencies and opportunities for both creators and the platforms; it just hasn't been implemented very well. If opting out is treated just like opting out of search, so that choosing to opt out of inclusion in a large language model means opting out of anyone finding you online, that's not the choice we should be talking about. There's an opportunity to design the opt-out system to be truer to that notion of genuine choice with respect to large language models. We can try to adopt some leadership there as well.
(1750)
    I'll put my next question on the table and you'll get a chance to answer it in a moment, because I have only limited time.
    We had Dr. Kearney here at the last meeting. She's quite a talented and, I would say, intelligent scientist when it comes to AI. It's her area of specialty. She was talking about transparency and some of the challenges it poses.
    I'm out of time. I'll cut myself off there, but I will come back to you, Dr. Geist, with a question about transparency.
    My apologies.
     I have been giving people quite a bit of latitude today.
    If you have a quick answer—
    No, it's okay. My question wasn't formed.
    Mr. Myles, you have the floor now for five minutes.
    That's great. Thank you so much.
    This is once again very fascinating. I appreciate everyone for being here.
    Just to keep going, I'm not going to ask the same question, but I am going to talk about transparency, because it's come up. Everybody's conversation has revolved around the importance of transparency, talking about different outcomes and different levels of regulations, but with transparency being crucial.
    One thing I think we need to establish is how possible it is to have transparency. That is a good question.
    On the other piece, I'm going to go to you, Mr. Geist, because you have talked about transparency, credit and choice, along with the idea that it's maybe not theft, but the remuneration piece was not there. If someone opts out and there's no remuneration, what's the incentive for a creator to opt in if they're not going to be paid for it or be recognized for it or credited for it?
    Maybe I will ask Mr. Sutherland that question, as he is a lawyer who represents artists. The part I'm trying to figure out is where the incentive would be.
    With respect to transparency, I guess I would say that it should be basic table stakes.
    I think we could have a discussion about what transparency ought to mean. Quite frankly, the obligation ought to lie on those who are creating these large language models to be transparent about the way their algorithms function, and even more about what is included within a large language model, so that those whose work is being used are able to know whether that is in fact the case. Opting out only works if you have the ability to know whether your work is being used, so it seems to me that this is where the transparency obligation lies.
    I think there may well be opportunities for payments. We have seen some deals cut between some of these larger AI companies and media outlets and others. I shouldn't be taken as saying that there will be no payment. I think the market does create scenarios whereby the value of the work is such that there is real value in being able to use it and then to be compensated. Even in the case of books and authors, we've seen some settlements there as well.
    I don't think that it's no payment. What I'm saying is that using the cudgel of saying that we want to try to make changes to copyright to mandate payments and override the existing balance would be a mistake.
    Where there is real value, there will be payment. We've already noted that many of the things that we want to see from a Canadian perspective are smaller and don't have wide audiences. If we're honest about it, the economic value in terms of payment is going to be pretty limited. Then the choice becomes whether it is more important to find ways to ensure that it is present so that it's reflected in the outputs and the culture kind of lives on through these AI systems.
    As for the payments themselves, the work may be very valuable to the individual, but whether that small piece is valuable to large language models so large that we can scarcely imagine the amount of content they contain may leave some rather disappointed.
(1755)
    On room for licensing, is there a place for licensing in this?
    If there is a desire in the market to make use of this in a way that goes beyond fair use, then they need a licence for sure.
    It would have value, I guess, because it's in the model. Otherwise it wouldn't be used, would it?
    The models themselves value having as much information as they can get.
    Much like the debate we had back with Bill C-18, the reality is that The New York Times is more valuable than a local paper in different kinds of communities, both in terms of the output and the impact that it has. The same would play out in terms of the inclusion in a large language model. Just because it's there doesn't mean that it's all equally valuable.
    I didn't know we were going to talk about Bill C-18 so much today.
    Mr. Sutherland, maybe you can speak a little bit to this.
    The message I was sending in my opening remarks is that artists find their way. We saw the whole streaming thing, and everyone thought the music business was over because artists were never going to figure it out, but they figured it out. Right now, I know that Universal is working hard to partner with LLMs and some of the AI companies to find out how they can work together and how they can use the music.
    From an artist's perspective, to follow up on Professor Geist, it's a tiny drop in the bucket. By the same token, if Mustafa Ahmed doesn't want his music ingested, then it doesn't get ingested. Because it is such a tiny thing, he should be allowed to say, “I don't want to be part of that.” I don't want to hear albums that are ripping off.... It's different to be inspired by something versus the actual use of someone's voice. If Billie Eilish records a song and her brother is a big fan of Feist and it sounds a little bit like Feist, that's a compliment, but nobody is ripping off Feist's actual voice and using it as backing tracks without her consent and participation.
    The system is there. The opt in, opt out.... The idea of opting out is bizarre in that it's a property right. You own it. You shouldn't have to opt out.
    When I come home at night, I don't expect to find someone watching the Jays game on my television in the living room, saying, “You didn't opt out of me living in your living room. I just came in and had a beer, and I'm watching the ball game. You didn't opt out. I didn't see the opt-out sign on your door.”
    I don't have to opt out of the property that I own, so why should an artist have to opt out? They should have to opt in, and they should be compensated for it.
    The final piece is that the artists are definitely represented by large, aggregated distributors, so you're not phoning up Feist on her island in Georgian Bay, asking, “Can I put your thing in my ChatGPT?” These are all licences that will be done at the business level, just like the labels invested in Spotify. First they were going to sue them, then they decided to work with them, then they owned a piece of them, and they found a way to aggregate the streaming so that people could be compensated at least partially for their music.
    Thank you.

[Translation]

    Mr. Champoux, you have the floor for two and a half minutes.
    Thank you, Madam Chair.
    Ms. Guèvremont, I'll come back to you again.
    Your book features an article by Alexandra Bensamoun that deals with distinguishing original creative works from those generated by artificial intelligence. A lot of artists use artificial intelligence as a tool or as inspiration. Some works end up being a kind of hybrid in terms of copyright ownership.
    How can we update the idea of copyright, but also the Copyright Act, to take into account this reality that will hit us hard shortly?
    That's a very good question, but I would prefer if Alexandra Bensamoun were here to answer. She's one of the people we work closely with on implementing UNESCO's 2005 Convention on the Protection and Promotion of the Diversity of Cultural Expressions.
     I don't have a clear answer for you. In all of our thinking and work, we are certainly not looking to deny artists the ability to use AI to enhance their creativity, generate new ideas and explore new frontiers. On the contrary, that should be encouraged.
    Again, I'll come back to transparency.
     I think funding agencies in particular are wondering how to manage this use. How do they get artists to at least disclose their use of AI? Should that use be subject to some sort of standard or framework? At the same time, it's important to keep in mind that an artist's ability to use this technology in the creative process is part of their artistic freedom.
    As for your question about copyright, I don't have an answer. The whole copyright issue alone is in need of clarity.
(1800)
     You just touched on something very interesting, and I don't think we've talked enough about it, not during this study, at least. I'm referring to the ethics of it all.
    The use of AI raises numerous ethical issues, and transparency somewhat ties in with that.
    Do you think we should refocus our thinking on the ethical considerations around using AI? Something that comes to mind is the disclosure of the use of AI, whether as inspiration or otherwise.
    Absolutely. I think the two are complementary.
    While jurisdictions are looking into regulating the use of AI or the development of AI models, the cultural sector, too, certainly has a lot of questions about how to use the technology ethically.
    It may very well be necessary to improve existing ethical frameworks. As I said in my opening statement, those frameworks often overlook the issues associated with the diversity of cultural expressions, with the exception of UNESCO's 2021 recommendation on the ethics of artificial intelligence. It contains a lot of guidance relating to AI, creative industries and the diversity of cultural expressions.
    There is room for more clarity at the local and national levels.
    Now I have a personal question for you.
     The European Union is able to establish regulations or introduce a framework. Other countries, other parts of the world, are moving much faster than Canada is.
    Given the need to protect Quebec's and Canada's cultural diversity, do you think we've missed the boat? Do you think we've been asleep at the switch, so to speak?
    Sorry.
     It will have to wait until next time, Mr. Champoux. You ran out of time at least a minute ago.

[English]

     Mrs. Thomas, you have the floor for five minutes.
    Thank you.
    Mr. Geist, I'm coming back to you.
    One of the illustrations around transparency that Dr. Kearney painted for us, which was really helpful for me, was this: She explained how, as a photographer—she does photography on the side—she might take a picture of a sunset and then she might mix a few changes or some edits into that photo.
    She is the original artist. For her to be able to identify what influenced the changes she made to the photo would be impossible, because which of the 10,000 sunsets she's seen impacted those decisions that she made? Who knows?
    In the same way, with machine learning you have 10,000 original inputs, let's say. Then on top of that, every new output becomes a new input, so every millisecond, or even faster—we can't even compute it in our own human minds—you have this effect of just amplifying information or producing more and more and more.
    If you, in the spirit of transparency, were to create a footnote with all the possible information that the machine could have pulled in order to generate content for you, it would be insane. Then how do you develop transparency around such a model, when you don't actually know what content was drawn from to begin with?
    Thanks. That's a great question. I'll try to respond this way.
    It seems to me that transparency—and this picks up on an earlier conversation as well—can include a number of different things. It can include—and this is where I thought you were going with the photographer example—the fact that someone used various AI technologies to supplement the work they created. It's their creation, and they feel comfortable using AI in that context, but there's going to be a spectrum there too. At what point does it become more AI and less them, and do they need to disclose that?
    That's something we're grappling with quite regularly in an educational context, because students like to use AI, and we want to ensure that the work they're generating is their own work, even if they are using AI for different purposes.
    We all use it, whether for some of the more mundane things like spell-checking and grammar. Sometimes it's much more.
    In terms of the generative AI sources, there's the question of inclusion in the large language model. To come back to the notion of AI, is the work being used at all, so that if you had, say, an opt-out system, there's that ability to identify whether it's there or not?
    Then there is that next level that you're talking about: “Tell me how you got the answer that you got.” You're right to ask, “How am I getting this answer right now?” It's a result of a lot of study and thinking about different issues and just a moment of, “Okay, how can I best deal with this issue?”
    I don't know that the expectation should be a 10,000-point citation of all the various inputs that went into that, but in our everyday lives we recognize that a lot of inputs come into what our ultimate output is.
    However, with some of these services, I do think that we're already sort of starting to see the answer to your question, which is that people want to know the core sources for some of the more notable claims that are being generated through a generative AI source. Some of the earlier versions of ChatGPT did not include that. Microsoft provided a version of ChatGPT that included citations that you could then click through, but the original version of ChatGPT did not.
    We've seen a shift because I think the public, when using these services, wants the ability to dig deeper. This is good news for news organizations and others, because it tells us that people still want to see some of the original sources and know that these services can answer the fundamental question. They don't want to see the 100,000 or 100 million tokens that went into this answer, but they do want some of the core sources they should know about, which may allow them to better understand how this particular response was arrived at. To me, that's important transparency.
(1805)
     What if it doesn't have that?
    In many instances, it will have a lot of that. If there is no answer, then perhaps that speaks a little bit to the credibility of the answer too.
    One of the things we see, and will see more of, is at times some skepticism about the answers that this generates. Is it being hallucinated? Have they canvassed the right sorts of things?
    In the legal world, there are a lot of concerns about made-up cases, known as hallucinations. You can't rely on that when you're developing a legal paper or an argument, so there are demands to better understand how these things are being arrived at. If the service isn't able to provide you with effective responses, I think that actually undermines, in the view of the user, some of the value of the service itself.
    Thank you, Dr. Geist.
    Mr. Myles, you have five minutes.
    Thank you very much.
    This idea that this is a machine that can do so much came up before. Certainly, it could provide us with that kind of list, if it needed to, from a technological perspective. There's just that idea of being able to give us those transparency answers.
    I had a question for you, Mr. Sutherland, on the licensing front. Is there a risk of the larger companies being responsible for establishing some of those licences, and then where does that leave the independent creators in the country?
    That's the age-old issue.
    The independent creators don't have to sign up with the large companies, but they tend to. To use streaming as the example, when Napster was around, it looked like the music business was going to just die. Then the major labels figured out how they could work with the creators. They set up the model and they figured out the compensation package, and then the independent creators were able to use distributors to access them to the extent that they wanted to, although, for some smaller artists, it was still beneficial to sell product off the stage at their concerts, so it was fine.
    We're not going to put the AI genie back in the bottle. That's just like saying, “Oh, you can't have streaming.”
    The other thing that strikes me is that in the context of creation, we're talking just about AI. We've had AutoTune around for about 25 years, and no one has complained that “Garth Brooks doesn't sound that good in real life. It's just this little machine that tunes his voice.” I mean no offence to Garth Brooks or his fans.
    The independent people will get to draft in behind whatever the major aggregators of the rights are going to sort out. That's my view.
    I had a question about the smallness of the amounts of some of these licences, as Mr. Geist imagined. Is this something you also imagine, or is it just a matter of morality?
    As you said, it's property, so you need to do it. Is the size a concern of yours? Does that matter? Sometimes we hear that it's so small that it won't matter.
(1810)
    It absolutely matters. I can't stress it enough.
    To an artist especially.... I'm thinking about Mustafa right now, my artist. He's very specific about where he is going to appear and what he's going to do and who he is going to be affiliated with. It's so important to him creatively.
    For most artists, it's that integrity. It's how they're going to be associated, who they're going to play with, how their music is presented. It's all of those things. It's not about the money. We don't have time here to go through the entire economic cycle of an artist, but they really do make most of their money in touring and songwriting in different configurations, so it's not about how much money it is.
    If Feist doesn't want her songs being scraped because someone is going to fake her voice, she shouldn't have to do it. Just because she wants to sell her music to the public, this idea of.... I agree with Professor Geist: I'm not getting in the way of AI. I think it's great, and you're not going to put that genie in the bottle anyway. Just stay out of the way, support artists and make sure that we have creativity. I think what Véronique is saying as well about homogenization is fantastic.
    The funding systems in Canada have helped, from a French music point of view, and from my point of view in the music business, working with acts in Quebec, they are absolutely essential to keeping a francophone voice out there.
    Just give the artist the ability. Keep that strong. Let them do their thing. Let them choose. If they don't want to be part of this whole large language model, then they don't have to be part of it. I don't see any reason that this stops technology. If Feist isn't in it, who cares? If she doesn't want to be in it, great: She's not in it.
     Do we still have time?
     Tim, did you want to ask something?
    Let me first welcome Tim Louis back to this committee.
    It's always good to see you here, sir.
    Thank you, Madam Chair. It's good to be back.
    In addition to Mr. Myles being an artist and a musician, so am I. This is an important conversation, and I appreciate this.
     I would continue with Mr. Sutherland for a quick question. I speak with artists on a regular basis. I hear from their organizations, and as recently as a couple of weeks ago, it was the Songwriters Association of Canada. They have concerns about how AI will affect their livelihoods and their artistic integrity.
     These are important conversations, but some of the companies we're talking about are bigger than countries. How can countries collaborate with each other, with other international partners pushing for global standards on AI transparency and licensing, and maybe specifically organizations like the World Intellectual Property Organization? Are conversations like that happening now?
    I think that to some extent everyone is aware of it. It's moving so quickly, and I think that's hard. To answer your question, especially for the Songwriters Association, again, if you want to think about it, Mozart figured out how to commercialize his music. That was in the 1700s, and you know what, from there, everyone figured out how to commercialize it.
    Where should our focus be? Are we going to try to attack the AI companies and try to regulate our way out of it? I mean, that's crazy. Or are we just going to say, look, we value the creative source and ask what we can do to help songwriters, to use your example?
     How can we help songwriters? Well, they need time to create. They need access. In my experience, a lot of the songwriters are also performers. That's what I see. They need the ability to go out there and play their music. They need to have places to play. They need to have other artists to collaborate with. That's how we're going to beat it. I don't think the international organization and co-operation.... I mean, the funny part is that artists are their own international co-operation organization.
    Again, I'm just using the example of the artists I work with, like Mustafa. He's in London hanging out with Dua Lipa, because she loves his music. There's a natural inclination for co-writing. Feist writes a song for Joe Jonas because he likes her style.
     I think the international co-operation is that the artists are out there interacting with each other.
    Thank you.
    Before I go on, I just wanted to advise all members that today is the birthday of our clerk, if you want to give him a little round of applause.
    Some hon. members: Hear, hear!
    The Chair: That was just to put him on the spot.
    Further, we do not have time for another full round. I'd like to ask members whether we want another shorter questioning round.
    Does anybody have any suggestions? Shall we finish the meeting or have a shorter round?
(1815)

[Translation]

    Sorry, I didn't hear that.
    We'd like to have a shorter round.
    Very well.

[English]

    Is everybody okay with a shorter round?
    We'll do Conservatives for three minutes, and then three minutes and two minutes. Does that work?
    I'm going to move my motion at the end of this meeting. Out of respect for other folks, I would give them the opportunity to ask their questions first. Then the final round can be mine.
    Thank you for the collaboration.
    I have Mr. Al Soud next, but he's not here.
     Mr. Ntumba, do you want to take the next question?
    Yes, I do. Thank you.

[Translation]

    My question is for all the witnesses. Since we don't have a lot of time, I'd appreciate it if you could each take no more than a minute to answer.
    In your view, what kind of partnerships could the Department of Canadian Heritage establish with the academic, legal, IT and cultural communities in an effort to control the impacts of AI?
    Do you want us to take turns answering?
     First, the fact that you are listening to researchers like us is extremely important, and we appreciate it.
     Right now, the UNESCO chair on the diversity of cultural expressions is carrying out a study on behalf of Quebec's culture and communications department. We were asked to survey artists and professionals in the cultural sector, in order to document what they need in terms of supports. In other words, what do they need to help them adapt to AI and leverage the technology in a way that best suits them? The idea is also to examine which types of policies would support them in that process of learning to use AI.
     I think that's a very good thing to do. We will be releasing our findings soon, and I hope they will guide jurisdictions in rolling out new public policies to help artists and professionals in the cultural sector strengthen those skills.
    That's one way.
    Did you have anything to add, Ms. Mochama?

[English]

     I think we need everybody at the table of the information ecosystem, because it's very clear that it's fractured in many different directions.
    I think what we're doing repeatedly, as I think Mr. Geist spoke to at the top of his speech, is that we respond to each new technology as if it's a brand new crisis and needs to be legislated piecemeal. Sometimes that has worked and sometimes that hasn't worked, but what is clear is that it has led to a fracturing of our information ecosystem, and we need every player and everybody together.
    Our news media publishers are very much concerned with the lack of accurate information and how it's not getting to Canadians. That's a problem both within our newsrooms and with how the audience understands it, but also within the overall pipeline that is built towards them. I think that requires everybody, so that when we talk about partnerships, we don't talk about just the news media publishers getting together. We bring together foundations, library systems and municipalities. We bring together big tech if they want to sit with us, but there's nothing compelling them to do so, even though we are the closest to the communities where they lie.
    A teenager who looks up math answers on ChatGPT in class is residing in a classroom where there are serious deficits and information gaps, and I think that requires everybody: We have to think about who the end users are. What is the vision we want for them in terms of the information they get? Also, when we think of partnerships, who do we want holding the hand of that 16-year-old boy and making sure that he gets the right math answer and can replicate that information?
    Thank you.

[Translation]

    Mr. Champoux, you have two minutes.
    Thank you, Madam Chair.
     I won't use the full two minutes, because I'd like us to have enough time to discuss the motion Mrs. Thomas is going to propose.
     Mr. Geist, in the previous Parliament, we were on opposite sides of the debate when it came to bills C‑11 and C‑18. Today, again, we don't share the same view.
     In the case of AI, I think the goal….
    I still have a great deal of questions about all this. I still don't know whether regulations would be effective, given the breakneck speed at which the space develops. As my fellow member pointed out, the companies we are trying to regulate are bigger than some countries.
    That makes me wonder how we make sure we don't drown in the ocean of largely U.S. culture. Can a carrot-only approach work, without the stick? In other words, can we use incentives? Can subsidies or programs that support artists who use AI help? Do you have any thoughts on that?
    Please keep your answer brief.
(1820)

[English]

    I guess I'll say a couple of things.
    It came towards the very end of my opening remarks. I don't want it to get lost. I think the value of what's often referred to as “public AI”, the notion that these large language models should not be the exclusive province of these large tech companies and that there is the ability to create large language models that better reflect some of the kinds of values you're talking about and could be readily used by anyone...the way we ensure that we don't get lost is to show up.
     The risk we face, I think, is that if we establish systems that say we want to erect large walls here because we're concerned about some of the potential negative impacts, we may, in some ways, accelerate some of those kinds of negative impacts. I think it's about showing up. I think, in response to that last question about who ought to be around, that's also part of the answer. We need to ensure this is a dialogue that is as broadly inclusive as possible.
    One of the concerns I would have, quite frankly, is that the government, Minister Solomon, has set up this so-called sprint of 30 days of a public consultation, with 26 people on a strategy board that is not broadly inclusive and does not reflect many of the kinds of perspectives that we're talking about. If you don't establish systems and a consultative process that.... Frankly, this committee has done a better job of trying to bring in those perspectives than the Minister of AI has. That's a problem if what you want is outcomes that reflect all the kinds of perspectives that have been brought to bear over the last few weeks.

[Translation]

    Thank you.
    That was almost three minutes.

[English]

    Mrs. Thomas, go ahead.
    Thank you very much.
    Thank you to our witnesses for being here.
    I am going to take a step away from the agenda that we are currently dealing with and invite my colleagues to consider something different.
    I wish to move a motion. There has been notice given. I move:
Given that,

Through the Indigenous Art Centre, the federal government safeguards a collection of more than 5,000 Indigenous artworks of exceptional cultural and artistic value, with an estimated value exceeding $14 million,

More than 130 artworks managed by the centre have gone missing, according to a troubling audit that highlights widespread mismanagement, weak oversight, and inadequate security,

The committee invite the following witnesses to testify before the committee: the director of the Indigenous Art Centre, for no less than two (2) hours, and the authors of the audit report prepared by the Audit and Assurance Services Branch of Crown-Indigenous Relations and Northern Affairs Canada, for no less than two (2) hours,

And that the committee report to the House its concerns with the audit report.
    Thank you.
    I have Mr. Myles next, please.
    We're supportive of this study. We see it as being important for us to study here.
    I have a small proposed amendment.
     We propose to amend the motion by adding a final sentence that says, “and that, pursuant to Standing Order 109, the committee request that the government table a comprehensive response to the report”.
    I think that's often assumed, but absolutely, for further clarity, no problem.
    An hon. member: We don't need to vote.
    Do we have unanimous consent to pass this motion?
    Some hon. members: Agreed.
    (Motion as amended agreed to [See Minutes of Proceedings])
    The Chair: Mrs. Thomas, go ahead.
    May I just take a couple of minutes and, through you, Chair, check in with the clerk on a couple of outstanding items that have been promised to this committee? I'm just curious as to where they're at.
    Sure.
    When the CEO of the CBC was here, there were several documents that were promised to us. One thing she was supposed to report to us was “the last five Conservative voices”, for the last two weeks, by October 30. I'm just curious as to whether we've received that list. She still has a couple of days, but I'm curious to know if that's been submitted.
    The second thing promised to us was the total number of TFWs hired in the past 10 years and the positions they filled. That was also promised to us by October 30.
     Then there was the total amount spent on CBC Gem. That was also promised to us by October 30. I realize there are still a few more days there, but I thought I'd check in.
    The second meeting that I would like to check in on has to do with the minister who was here and promised that he would review the funding for and the eligibility of Cult MTL, as well as funding that went to the Anti-Hate Network. He promised that he would assess that and that he would get back to this committee with regard to whether that funding would continue, or if it would cease based on some troubling posts from these organizations. I'm curious as to whether we've heard back from the minister on that.
    Lastly, Isabelle Mondou promised to send the working definition of “Canadian culture”. I'm just wondering if we've received that.
(1825)
    Is it the will of the committee to ask the clerk to reach out to those witnesses and ask for the timelines for that information?
    Some hon. members: Agreed.
    The Chair: Witnesses, I want to really thank you for your participation today. Your insights are very valuable. It was a super interesting conversation.
     I would add that if there's anything you weren't able to get on the record today, please send us a brief. Let us know if any thought occurred to you afterwards that you wish you had said. Please send it to us. Our analysts can take that information so that our members will have it. We can include it in our final report on this study.
    Thank you again.
    With that, I declare this meeting adjourned.