CHPC Committee Meeting


House of Commons Emblem

Standing Committee on Canadian Heritage


NUMBER 018 | 1st SESSION | 45th PARLIAMENT

EVIDENCE

Wednesday, December 3, 2025

[Recorded by Electronic Apparatus]

(1635)

[English]

    I call this meeting to order.
    Welcome to meeting number 18 of the Standing Committee on Canadian Heritage.
    Before we begin, I ask our two in-person participants to look for the green card in front of you. There are guidelines and measures in place to help prevent audio feedback incidents and protect the health and safety of all participants, including the interpreters. There is a QR code on that card, as well, if you need further instruction.
    Pursuant to the routine motion adopted by the committee, I can confirm that all witnesses have completed the required connection tests in advance of this meeting. We do have some witnesses online today.
    Welcome. Please wait until I recognize you by name before you speak. All comments should be addressed through the chair.
    Pursuant to Standing Order 108(2) and the motion adopted by this committee on Wednesday, November 5, 2025, the committee is meeting to study the effects of influencers and social media content on children and adolescents.
    With us today is David Morin, full professor and UNESCO Chair in the Prevention of Violent Radicalization and Extremism at the Université de Sherbrooke. From the Media Ecosystem Observatory, we have Aengus Bridgman, director. Online, we have Michael Cooper, vice-president of data and partnerships at Mental Health Research Canada. We also have Katie Paul, director of the Tech Transparency Project.
    Welcome.
    I will note that we have another witness joining us at 5:30, Marie-Eve Carignan, also from UNESCO. We will give her five minutes to speak when she arrives at 5:30.
    Starting now, each delegation, each witness, has five minutes to give some opening remarks.
    We'll start with you, Mr. Morin. You have five minutes, starting now. You have the floor.

[Translation]

    Thank you for inviting me and giving me the opportunity to speak to you today about a rather specific aspect of social media, namely the link between exposure to hateful content and violent extremism, one of the dark sides of social media.
    My daughter would be very upset with me for not starting by noting that social media has many virtues. Overall, it is often very helpful and great for young people. However, today I’m going to talk to you specifically about one aspect, namely the link between social media and violent extremism.
    I will start with three very recent examples in Canada.
    The first is the arrest of a teenager in Nova Scotia who was charged with child pornography, among other things. He was part of what’s now called “nihilist extremism”, which glorifies violence and cruelty by using references or codes related, among other things, to Nazism and jihadism. That teenager also belonged to an online movement called group 764, which recruits young people to commit violent acts, including mutilation and suicide.
    I’m mentioning this example because, obviously, the 764 movement recruits a lot of people on digital social networks, and these individuals are getting younger and younger.
    The second example is the arrest of a young jihadist this summer in Montreal. Radicalized online over the Israeli‑Palestinian conflict, he pledged allegiance to the Islamic State and was preparing to commit a violent act. This reminds us that the Islamic State's virtual caliphate and online communities play an important role for that terrorist organization.
    The third example is that of Patrick Gordon MacDonald, alias the “Dark Foreigner.” He was sentenced to prison on charges of terrorism and hate propaganda for promoting a violent far‑right ideology on behalf of the neo‑Nazi accelerationist group Atomwaffen Division. The Atomwaffen Division, too, was extremely active online, and it has since been added to the Canadian list of terrorist entities. This reminds us that, long before many other groups, the far right in the United States understood the enormous potential of social media to spread its extremist messages.
    I’ll talk to you very quickly about the Internet today, digital social networks and violent extremism. What are the current trends?
     I would like to emphasize three points.
    First, it should be remembered that extremist actors on social media today know how to exploit periods of polarization and attempt to recruit people by targeting younger and younger individuals. There is therefore a trend toward younger people becoming radicalized through the Internet in an increasingly short period of time.
    Next, it’s important to know that mainstream platforms, where we find radical but nonviolent content, are being used as a gateway to then direct young people toward much more violent content on different platforms. That’s an important point.
    Finally, and I want to stress this point, today, video games with online connectivity features are being increasingly used to ultimately try to recruit young people into all sorts of violent extremism. This last element obviously relates to the issue of generative artificial intelligence, which will multiply the possibilities for these extremist groups to radicalize young people.
    I wanted to talk to you today about the results of systematic reviews on the potential effects on young people of online exposure to hate. What does the evidence say? It says that exposure to extremist content online today does indeed seem to be linked to the adoption of radical attitudes, regardless of the type of media in question. Exposure to extremist content online also seems to be linked to the adoption of extremist behaviour, not only in the virtual world but also in real life. It’s important to note that. Finally, I would like to add that exposure to hateful content on the Internet is not the only factor. We must also consider the other factors in an individual’s life that may lead to radicalization, such as personal crises, mental health issues, belonging to a radical group, etc.
    Indeed, the evidence reminds us today that there are repercussions on social attitudes when people are exposed to hate speech. It increases negative attitudes toward targeted groups; it decreases general positive attitudes; and it has potential effects on mental health, and societal consequences on trust between social groups, aggressive behaviour or the normalization of violence.
    I will note certain elements. According to Statistics Canada, in 2022, 71% of young Canadians aged 15 to 24 reported having seen hateful content online in the previous 12 months compared to 49% of the general population. According to the police, more than a third of the victims of hate cybercrimes were under the age of 25. The Royal Canadian Mounted Police, the RCMP, also noted that, between April 2023 and March 2024, 25 people were charged with terrorism, and seven of those accused were minors. In that context, obviously, the status quo is not acceptable.
(1640)
     I repeat, it’s not necessarily about taking an approach that is solely punitive and overly restrictive. There are examples elsewhere; we can look at what’s happening right now in Australia, the United Kingdom and Europe. We need to take matters into our own hands and do so in a targeted manner. This is something we stress a lot: first, place the primary responsibility on platforms to regulate harmful online content; then it is up to other actors in society to work on prevention and awareness.
    In conclusion, Madam Chair, I would like to note the importance of accountability for politicians, women and men alike. It is their duty to make responsible statements that do not fuel the growing polarization in our society. This obviously does not prevent them from addressing sensitive and controversial issues and engaging in politics, since politics is all about debate.
    Thank you for today’s initiative, which is undoubtedly another step on this long and winding road.
    Thank you.
    Thank you.

[English]

     Mr. Bridgman, you are next. You have the floor for five minutes.
    Thank you for the invitation to speak here today. I want to open by saying that my expertise is as a scholar of the information ecosystem and the overall information environment. I'm not an expert on children or youth. Nevertheless, I find our studies of influencers and the information environment very pertinent for this study and very pertinent for this committee.
    Recently, we ran a study looking at the rise of influencers in Canada. We now know that amongst the youngest cohort we were able to survey, over four-fifths of young Canadians, 81%, are typically getting their news from influencers. They're getting their news, their political information and their entertainment content there. That is the base of their political and social life. This has enormous repercussions for our political reality and for the socialization of youth into the political process.
    I want to highlight two major findings from that recent influencer study that I think are particularly pertinent. The first is the way in which influencers spread and come to appear on the screens of youth here in Canada. The primary way in which influencers reach new listeners, new adherents, is through the recommendation algorithm. It is not through explicit preference. It is not through social relationships. It is through the algorithm. In your day-to-day behaviour on social media, it is the platform itself that is determining what you see, and not any intentionality. This reduction in intentionality, and the way in which particularly youth consume and think about information, is enormously important. We haven't really appreciated the consequence of it.
    If we think back 20 to 30 years ago, where and how you got your information was very much a choice you made. You would go out and decide on a paper, a TV channel or people to talk to. It is not so today. For the youth of today, your choice is the platform, and in some ways even that is determined by your social status and your friend group. Then, once on the platform, your choices matter much less than your behaviours and actions, which you don't even necessarily know you're engaging in. That loss of intention is enormously important for political and cultural socialization.
    Number two is that influencers are now central to the political conversation. They make up the majority of engagement. The majority of Canadian eyeballs that see political content online are now seeing influencer content. We have a system, a set of norms and rules, around speech, around disclosure and around transparency that grew up in an era when influencers didn't exist and when it was unimaginable that a private citizen with a telephone in their bedroom would be able to reach millions of Canadians, but that is the state we are in right now. Our regulatory approach, particularly during elections but outside elections as well, is completely unprepared and is ill-adapted to the new reality.
    I have three recommendations for this study. First, this is what it is. This is not a phenomenon unique to Canada. Influencers and social media are now the primary sources of social life for youth. Any policy or approach that doesn't take adequate account of that is doomed to fail. We need to operate within that regime, within the understanding that youth like their social media and want to continue to use it. We can better protect them, and we can better frame that space, but it is what it is.
    Second, algorithmic discovery is the key mechanism and the key way this stuff is shared. That algorithmic discovery is not a neutral process. It is a process by which platforms have made a series of choices about what content gets amplified and shared and which influencers are seen. The idea is that they would like you to think that there is no decision, that it is some black box that has no control, but that is not the case. There are decisions behind that. That is one of the key levers available.
    The last thing I want to leave you with is that the line between entertainment, culture, community and political information has never been blurrier. For youth today, in their day-to-day consumption of information, politics, entertainment, culture and TikTok dances are all intermeshed together. That creates an environment where they can become incredibly informed, but it also creates some dangers. One of these dangers is that our media literacy training programs, the way we have taught people to consume news in this country, are completely ill-adapted for an environment where all of this is blended together. I urge this committee to reflect on and account for that.
    I'll leave it there.
(1645)
    That was perfect timing. Thank you.
    We'll go online now to Michael Cooper with Mental Health Research Canada.
    Mr. Cooper, you have the floor for five minutes.
     Thank you for having me here, and my apologies that I could not be there in person. I very much would have liked to be.
    Again, my name is Michael Cooper. I'm the vice-president of data and partnerships here at Mental Health Research Canada. We have been funded to collect ongoing trackers of mental health indicators since 2020, as a pandemic response, and since that time we've evolved to include a number of cross-sectional issues that intersect mental health. I can share some of them here today.
    Specifically, I want to share a few things we've learned about those aged 16 and older. We don't collect any data on anyone under the age of 16, though I can speak a bit to other research on that topic. I want to mention that we've been tracking online gambling specifically among youth. The algorithms are surfacing a lot of gambling content to them, and that's driving problematic gambling.
    I also can speak a bit about screen time. One of the things we've been tracking is the volume of screen time. We've identified that for a number of youth—essentially, for anyone who consumes more than six hours of personal screen time per day—there are significant mental health implications, from anxiety and depression to suicidal ideation. We've published some reporting on that. Of course, we've seen that youth aged 16 to 24 are the group most likely to spend more than six hours a day on screen time. Therefore, they would be the ones most impacted by these indicators.
    The other issue I wanted to speak about a bit is how we have tracked social media specifically. We've tracked what youth are doing on social media: what sorts of activities; cyber-bullying; what their experiences have been along the lines of FOMO, the fear of missing out; and whether or not they're experiencing issues around comparing themselves to others as well. I've put together a deck and have sent it along to the group if you're interested in asking any questions about that specifically.
    I also want to speak on a few other issues that are more general around mental health and specific to social media. We have been tracking long-term mental health trends since the 1970s. These indicators are not clinical in nature, but they do track general mental health. We saw a significant shift in about 2004 for a lot of these youth in terms of their mental health, which would correspond to when a lot of these smart phones ended up in individuals' hands. We saw another movement in 2020, through the pandemic, and no recovery since that time. I want to highlight that this is another area of research we are privy to as well.
    I want to highlight the social connection aspect of it. Individuals who are more connected to their community, to family and to loved ones are far more likely to have positive mental health indicators and to seek out help. We do know that for a number of individuals, the experience they're having online through social media is shallow, and a lack of engagement with people offline could be one of the reasons their mental health is poor when they're spending so much time on social media.
    The other thing I wanted to speak to very quickly is this idea of influencers. I do not track influencers. However, I am a vast consumer of research, and I know that a tremendous amount of research exists on understanding how youth process, especially, advertising. We have this from past studies by Concerned Children's Advertisers.
     We have a great amount of data on this. We know that youth are not fully developed in their ability to discern between informational content and selling content, and that discernment becomes especially difficult when that line is blurred. If there's no price tag at the end of an ad, most youth are not able to identify that it is in fact an advertisement. When I think about influencers, I think about the fact that a lot of them are being used to sell products to youth, essentially circumventing a lot of the children's advertising rules, like those from Concerned Children's Advertisers, that we've had over the years.
    I'm more than happy to speak to any of these topics. These are areas where we have data and expertise. I probably have 300 stats on these issues. I don't want to just throw numbers at you, but I can assure you that we are tracking these issues and others, such as body dysmorphia and eating disorders. We do know that about one in four young women is experiencing a high risk of eating disorders, and a lot of that ties in to high social media use as well. There are a lot of very troubling statistics around what's happening in mental health as it pertains to social media for youth.
    Thank you.
(1650)
    Thank you very much.
     I'm coming up with lots of questions after the testimony we've had so far today.
    Katie Paul, from the Tech Transparency Project, you are up next. You have five minutes starting now.
     Thank you so much for the opportunity to speak with you today about the impacts of social media on young people.
    My name's Katie Paul, and I'm the director of the non-profit Tech Transparency Project in Washington, D.C. We are a non-partisan research organization that investigates the influence and impact of big tech on the public.
    Our research has found that big tech platforms have not only amplified harm to children, but often profited in the process. Recent reports from a multi-district lawsuit in the United States revealed that big tech companies like Meta and YouTube are internally aware, based on their own research, of the potential harms of their content to children. That research was then buried by the companies so they could continue to profit from that harm.
    The revelations from the lawsuit track with years of research from the Tech Transparency Project. Our investigations in 2021 and 2022 found widespread drug trafficking on Instagram that was algorithmically pushed to accounts for users under the age of 16. Meta's platform design and algorithms make it easier for kids to contact drug dealers than to log off the platform. The study found that while it takes only two clicks for a teen to find and connect with a drug dealer on Instagram, it takes five clicks to log out of the platform.
    Instagram's automated technologies also undermine the company's own efforts to address drugs. For instance, while Instagram banned hashtags for popular drugs like MDMA, its search autofill recommended alternative hashtags for those drugs, driving kids directly toward dealers.
    The problem isn't just platform design. Meta also directly profits from pushing drugs to users on its platform. A series of TTP investigations found that Facebook routinely approved ads pushing pill parties, alcohol, gambling and vaping, as well as extreme weight loss to kids under the age of 18.
    Meta's primary business model relies on advertising. It's the company's main product, but it has little oversight and quality control. Meta does little to implement safety when it comes to ads. In July of last year, our organization published a report that found Meta has run hundreds of ads for deadly drugs like cocaine and fentanyl. These ads are not simply content posted by third parties. Meta has reviewed, approved and is profiting from these advertisements. These kinds of advertisements continue today, as was reported by the Toronto Star in a recent investigation.
    The problem isn't limited to ads for drugs. In October of last year, TTP found that Meta was also running hundreds of ads for weapons, in some cases amounting to international arms trafficking. These were not ads for big box stores or local gun dealers. They were illicit ads selling ghost guns, fully automatic weapons and illegal gun parts. These ads not only help put illegal trafficked weapons into the hands of people across North America, but they also undermine the business of legitimate licensed gun dealers.
    Ads for both guns and drugs follow the same pattern. They feature an image or a video of the illicit content and link to a private messaging service like Telegram or WhatsApp, which is also owned by Meta, to conduct transactions.
    Meta is perhaps the most critical piece of this puzzle. These dealers buy ads from Meta to get their product in front of as many people as possible. They could not attain this reach without the help of Facebook or Instagram.
    While these social media and tech companies are aware of the harms of their platforms, they don't take action to mitigate those harms until after the potential consequences have been raised. OpenAI, which is facing a major lawsuit over its chatbot's role in teen suicide, created a teen version of ChatGPT only after it was sued by the family of Adam Raine, following the chatbot providing instructions on how to make a noose and encouraging Raine to take his own life.
    In 2024, Meta launched its teen Instagram accounts, holding up the feature as a move for parents to help keep kids safe on the platform that they failed to effectively moderate. The move was largely part of a broader effort by Meta to stave off the implications of civil lawsuits and a wave of pending regulations from lawmakers in the U.S. and abroad. What Meta had pitched as new features to keep teens safe was simply a repackaging of things the company had already claimed it was implementing years earlier. TTP recently tested these accounts and found that the content Meta had claimed was barred from teens—notably graphic content and fight content—was served readily to teen accounts despite the heavily promoted claims of protections. This continues today.
    As companies like Meta have come under pressure, they have funded organizations like ConnectSafely and the National PTA to ensure they launder their narrative through paid allies.
    These social media companies and chatbots are among the most well resourced and technologically advanced in the world, but those profits have been built on decades of harm to children, which the companies are aware of but take no action to address unless faced with the potential of repercussions.
(1655)
     They have the capital and capabilities, but have proven time and again that they cannot be trusted to act in good faith. It's imperative for national governments to effectively regulate these companies for their role in profiting from the harms to the most vulnerable population.
    Thank you very much.
     Thank you.
    We will now turn to members for questions, starting with Mrs. Thomas, for six minutes.
    Thanks to our witnesses for taking the opportunity to be with us here today. It's much appreciated.
    My first question is going to go to you, Ms. Paul.
     You said that big tech companies, such as Meta, profit from these ads that are advertising illicit drugs, weapons or gambling to underage individuals. For us to have a better understanding of this, would you have examples of these ads that you could supply to the committee, so that we could see what they look like?
     Yes. I have submitted a write-up of the testimony, with lots of links to citations, including a report that has multiple slide shows on the ads with regard to drugs and weapons, as well as the teen ad account tests that we ran over a three-year period, and the ads that were submitted and approved by the platform for those.
     Perfect. My apologies that I have not seen that. I was just told that it is in translation, which is why I have not yet received it. I look forward to being able to review that. Thank you so much for sending that our way, Ms. Paul.
    My next question is going to Mr. Morin.
     I understand you've done quite a bit of research with regard to the radicalization of young people. Last month, the CSIS director, Dan Rogers, warned about hateful ideologies, including anti-Semitism. He said that young people are being radicalized in a dangerous way, and that this has been amplified since October 7.
    Can you explain what the link is between anti-Semitism and radicalization here in Canada, and what can be done about that?

[Translation]

    Thank you very much for this question. Can you give me half an hour to respond?
     Yes, absolutely. In the ecosystems we’ve been monitoring since the attacks of October 7, 2023, we have indeed observed a convergence of hate speech directed at the Jewish community, which is associated with the Israeli government without any nuance.
     You refer to influencers. I’ll avoid naming names, but there are obviously radical Islamist groups in Canada and some prominent figures in Quebec who are trying to take advantage of the feelings of injustice and anger among some young people concerning the situation in Gaza to promote a narrative that emphasizes the supposed incompatibility between Islam and western values.
    Obviously, this kind of speech tends to radicalize some of our youth, which is why it's important to have extremely nuanced political discourse. Again, I don't want to name names, but some groups do have a storefront and are present on major social media platforms, while others are on much more alternative platforms and still reach a fairly significant, albeit targeted, audience. I'm not sure, if I named them, that it would necessarily resonate.
    As for antisemitism, as you know, it did not originate with the attacks of October 7. Antisemitism has been present in our societies for a long time, but this type of conflict indeed contributes to reactivating it. In my opinion, we should better regulate hate speech because—as the statistics show—hate speech against the Jewish community has significantly increased in recent years. It does not seem to be weakening. It has stabilized, but not actually decreased.
    I hope I’ve answered your question.
(1700)

[English]

    Thank you.
    That's a good start. Can you break down further how social media platforms are used to enhance that radicalization of young people, or that pursuit of young people for the purposes of radicalization?

[Translation]

     Social media does several things. First, it obviously allows audiences to be reached and targeted, and I think my colleagues have said that well. It’s therefore possible to go into virtual spaces where we know, for example, that young people will be playing online war games, etc. That’s one example.
     It’s known that they can be reached. Through Internet messaging functions, it’s possible to contact these young people and quietly ask them questions about their life experiences and their political views. Indeed, that’s where the most vulnerable individuals are identified and gradually radicalized.
    We’ve seen it a lot from the Islamic State. I would say that it really invented a kind of banner that, even on digital social networks, made it possible to pledge allegiance to that group and to commit, without ever having been solicited to do so, a knife attack, a vehicle ramming, etc.
    Again, social media is one tool among others in the tool box of terrorist organizations. Many people go on social media and, fortunately, do not become radicalized; I wouldn’t want anyone to think otherwise. On the other hand, we see that, among young people who are being radicalized, there is indeed very high consumption of digital social networks. There’s no doubt about it, and it’s a consensus among researchers working on issues of violent radicalization.
    So it’s this ability that social media have to reach people. Obviously, there are also all the encrypted platforms that allow for the exchange of information. In addition, there are also all the sources of funding today using cryptocurrency, which make it possible to fund terrorist organizations or groups.
    So it’s an extremely useful and powerful tool for terrorist organizations.

[English]

     Thank you.
    Mr. Al Soud, you're next for six minutes.
     Thank you, Madam Chair.
    Thank you all for those opening remarks and for being with us.
    I asked about this on Monday, and I thought the response was extremely interesting. I'd like to focus a little on the parasocial relationship between influencers and young consumers. Many adolescents follow celebrities, like Drake and other top streamers, who openly promote online gambling platforms. Even when platforms claim to restrict access to minors, the content itself is watched, primarily, by youth. Based on your expertise, how concerned should we be that gambling-style content is normalizing high-risk gambling behaviour among children and teens?
    I open this question up a bit. I know, Mr. Cooper, that MHRC has previously considered this impact of gambling at large. I'd welcome your thoughts.
(1705)
    I'm happy to get started on this.
    As you know, gambling is illegal for those under 18. We did not ask about that specifically, but what we can identify is that there is a tremendous amount of bending of the rules. Ontario is the province that has legalized single-game sports betting, but we're seeing those ads reach every province, including provinces where it's not legal. In some provinces, they're running ads saying, “Don't gamble on Bet365, but bet on our local platforms.”
    There's some great work coming out of UBC and the centre for gambling there; Dr. Clark is his name. He's done a lot of work on the neurology of what's happening with youth, specifically with gambling-like activities. Think of something along the lines of Roblox: going in there, buying a randomized loot box and then getting some random item inside, or going on Call of Duty or some other video game and getting a randomized loot box. It's essentially the same dopamine hit you get from that sort of experience. You're essentially participating in a gambling-adjacent activity, and these are available at any age.
    There are even reports coming in, which we're seeing in some other countries as well, of players taking virtual currencies like Robux and actually gambling with that currency. It's not regulated because it's not legal tender, so there are lots of ways that organizations are getting around this.
    We do know that youth are being inundated with ads for these sorts of things, and it is essentially rewiring their brains for both expectations and what they're prepared to do. It's basically, again, legalized gambling.
    I'll stop now because I'm sure other witnesses want to answer that as well.
    Actually, your audio cut out there for a bit, so I was going to cut you off. Hopefully, we all heard the answer.
    Were we good with translation? All right.
    You still have several minutes, Mr. Al Soud.
    Would anyone else like to add to that before I jump into my next question? It seems not.
    Touching on this exact piece about loot boxes, I'm quite interested.... Do you believe Canada needs age-appropriate design rules or restrictions targeted specifically at gambling-style content, separate from traditional gambling legislation?
     If I could jump in there very quickly, we study 16 and up, so—
    I'm sorry, Mr. Cooper. Your audio is not working for us. We're going to have somebody call you to try to fix that problem.
    I'd be happy to jump into a separate question, if you like. Could I ask how much time I have left?
    I paused the time for this. You have three minutes.
    Fantastic.

[Translation]

     Ms. Tessier‑Bouchard, as media focused on—
    Madam Chair, Ms. Tessier-Bouchard is not here.
    She’s not here, I’m sorry. I misspoke.
     Ms. Tessier-Bouchard is not here; she will be here on Monday.
    I would like a clarification. You said that Mr. Al Soud’s time was suspended, but he was asking how much time he had left.
    Are we waiting for Mr. Cooper’s sound issue to be resolved? Are we continuing Mr. Al Soud’s time? Can you clarify?
    I reset the clock to zero when he started his questions again.
    I can suspend the meeting if you want.
    I asked because Mr. Al Soud wanted to put questions to Mr. Cooper. I wanted to know whether Mr. Al Soud wanted Mr. Cooper to be able to answer them.
    I would like to put my questions to Mr. Cooper, but if the audio isn’t working, we can’t do anything.
    I can easily direct my questions to someone else.

[English]

     I think my audio is fixed, sir.
    Okay. That's perfect. We don't have to discuss it any further.
    Okay.
    On loot boxes....
    I apologize. The light went out on the little USB thing. Can you hear me?
    Go ahead. I'll let you know if your audio cuts out again.
    Thank you.
    The answer is that we look at capture rates. Only about 2% of Canadians over the age of 16 are participating in loot boxes. Unfortunately, what we found is that about half of the people over the age of 16 who participate in loot boxes show signs of problem gambling on the PGSI, the Problem Gambling Severity Index. It is an extremely addictive behaviour. At this point, it's not widespread among the Canadian population, but if it were to expand.... It is one of the most addictive things we are seeing among all types of gambling activities.
(1710)
     Thank you.
    Ms. Paul, I'd like to follow up on your opening remarks.
    You noted a degree of dishonesty among certain platforms. I think I'm being modest in that assessment. Kick has become a major platform, mostly because it is welcoming of gambling streams. Twitch is imperfect but has adopted stronger restrictions and moderation policies.
    I'm curious. What does the divergence between Kick and Twitch tell us about the competitive pressure to reduce moderation in order to attract an audience? Certainly, I think of Meta and the pressures there.
    I will note that Kick is not a platform we focus on because we focus on the largest platforms that are of most consequence to general populations.
    That said, we have done some research on Twitch, not related to gambling but to entities monetizing Russian propaganda regarding the Ukraine war. We looked at the social media platforms that Pew Research Center, for instance, has shown to be the most popular among teens: Instagram, YouTube and TikTok. These are platforms where, largely, we do not see a lot of effective moderation, particularly on Instagram. Instagram is one of the platforms we ran test ads on, targeting teens for gambling. Every single one of those ads was approved, including ones for which we were able to generate images with Meta's own AI tool.
    These are companies that make very specific claims about their policies banning this kind of content, particularly in advertising, but we do not see that actually practised effectively by their platforms.
     Madam Chair, once more, do I have more time?
     I'm sorry.

[Translation]

    Mr. Champoux, you have six minutes.
     Thank you, Madam Chair.
    I would like to thank the witnesses once again for sharing their knowledge and opinions on this extremely sensitive and extremely interesting subject with us.
    Mr. Morin, between your opening remarks and those of the other witnesses, a lot has been said. I’ll try to gather everything I would like to ask you. You said earlier that you would have needed half an hour to answer my colleague Mrs. Thomas’s question. I would also like us to each have half an hour to further develop our ideas.
    In your opening remarks, you spoke mainly about the age of young people who are approached for the purpose of radicalization. They are getting younger and younger, you said. In addition, Mrs. Thomas raised this topic with you to understand a bit about the process and know how it works. There was also discussion of the issue of games like Roblox, where young people can communicate with each other online. I imagine the conversation must be very lively there.
    I would like you to explain to us a bit about the process leading to radicalization. Many of us are worried parents who don’t know a lot about it. How can we identify these changes in our youth? How can we be vigilant about that? What is the process, for example, of an organization that will try to target our youth to radicalize them?
     Thank you for your question.
    The first criterion is not to look at the number of hours the child spends on social media, because it’s not necessarily a good indicator. It may be an indicator of a mental health issue in the child, but not necessarily that they’re becoming radicalized.
    I’ll be careful with what I say because there’s no established profile. The important criteria or indicators that we observe include the young person’s social isolation, their psychological distress, a complete change in their habits, different friends, intolerance to any contradiction in a conversation, etc. Again, there are many false positives in these criteria that I’m citing. Anyway, I am the proud father of an 18‑year-old young woman. What I’m describing to you could have happened in my life, and I don’t think my daughter is becoming radicalized.
    We’re looking at violent extremism, so the act of committing violence. I think what should concern us beyond that is looking at what happens beforehand. That’s why I invite us to take a step back. I believe that one of the current harmful effects of social media—I think my colleagues have said it well—is that the business model of a number of social networks, in the sociopolitical space, focuses on emotion, conflict, confrontation and stopping the debate of ideas. It also aims to trap people, ultimately, in echo chambers. I think that, collectively, we must recognize that this is a problem.
    On Monday, I was testifying before another committee on the issue of anti‑feminist discourse. I'll give you the example of prominent influencers like Andrew Tate. The spread of this type of discourse on social media means that, today, many young men subscribe to an extremely unequal view of the relationship between men and women. They start their social, romantic and sexual lives with preconceived ideas that do not match those of young women and their expectations. I provided some quite shocking statistics on Monday. With some caveats, I believe more than 40% of young men think, for example, that feminism is a strategy by women to control society. About the same number think that equality between women and men has been achieved today and, therefore, that feminism, understood as a vision for the equality of men and women, is no longer useful or relevant.
    You see, beyond the violent extremism that our security and intelligence services deal with, if we take just one step further, we collectively face a bigger challenge with our youth. I do not believe in the good faith of a number of platforms. Remember the Christchurch attacks. I won’t describe them to you again. After the Christchurch event, platforms and governments came together to try to remove content. I remind you that a man was able to massacre 60 Muslim individuals live online. It was filmed, so the massacre could be watched.
    There have been advances, but in recent years, we have seen a regression. A number of platforms that had improved are now backtracking. You know them. This includes Twitter, where there is less and less moderation.
    Excuse me, I gave a long answer.
(1715)
     No, that’s perfect. I ask long questions, so your answers are completely justified. They’re much more interesting than my questions, by the way.
    I understand and admit that it’s very difficult to convince the platforms to collaborate. Earlier, we heard Ms. Paul talk about Meta’s business model, which almost encourages this kind of illegal trade and content, from which it profits.
    As legislators, what strategy could we adopt with respect to harmful influencers and their negative impacts, or with those predators that can often be identified online, like the 764 movement you mentioned? There’s a list of terrorist entities in Canada. Could we not make a kind of similar list? Without constraining the platforms, aside from allowing us to locate these individuals, would it be effective and feasible for Canada to establish a list of entities deemed illegal due to the content they propagate on the platforms? Would that be an interesting option to explore?
     Being on the Canadian list of terrorist entities already gives law enforcement additional powers to require platforms to provide information. As I mentioned earlier, the problem today is that there are small movements of influencers who are recruiting. It’s no longer always the large organizations with a storefront that can recruit people. I think that’s the problem today.
    I must inform you, however, that I was part of the expert committee that advised the Government of Canada on the moderation of harmful online content for Bill C‑63.
    I think that bill was a good starting point. It was a bill; it needed to be critiqued and improved. It was really a good working draft on the table, one it was nonetheless possible to move forward with.
    That working draft did three things.
    The first was that it gave platforms the responsibility to remove content and to demonstrate that they had indeed removed that content.
    The second was that it appointed a commissioner who could verify that the content had indeed been removed. The commissioner was also supposed to provide data to researchers to help them better understand how content moderation works.
    The third was that it created an ombud position, which is very important because we all value freedom of expression. It was important for people who felt wronged by sudden removals by the platforms to actually have access to the ombud and to have their posts restored.
    I think the bill was going in the right direction. The European Union does it. The United Kingdom does it. Australia does it. On the other hand, Canada is in a much more complicated situation because the United States, which is next door, is still putting pressure on Canada not to impose restraints. However, I believe it will now become a matter of public health and social cohesion.
     Thank you very much, Mr. Morin.
    Thank you, Mr. Champoux.
    Mr. Généreux, you have the floor for five minutes.
    I thank all the witnesses.
    Mr. Bridgman, you said that 80% of people get their news from content published by influencers. Which news are you talking about exactly? Are you talking about public and political news? In general, does this also refer to news that’s aimed at young people by influencers?
     I’ll soon be 64 years old and I use social media, but I still watch television from time to time. I may be old, and I still have some old habits. What part of the population is represented in this percentage exactly?
(1720)
    From a young person’s perspective, we’re all old.
    When I talk about 80% of young people, I mean those aged 18 to 34. Thirty‑four years old isn’t that young, and the percentage is certainly higher for those under 18.
    The 80% represents the young people who regularly follow online influencers and consume their products and content. They also follow their opinions on politics on a daily basis.
    Young people may watch television from time to time, but probably not very often. They read a bit of the news, but most of the updates they receive throughout the day come from influencers. However, those influencers do not stick strictly to the facts. The media report the facts. Influencers also report facts, but they add their interpretation, their opinions and what it implies for our politics.
     When we look at the polls, we see a trend: young people have more right‑leaning opinions than their elders. What explains that? Are there more influencers who are politically right‑leaning?
    I see you nodding, Mr. Morin. You can intervene after.
    Yes, we see that young people have more right‑leaning opinions. It’s a very recent phenomenon that we weren’t talking about a few years ago. Now, we are.
    Studies conducted in Canada and around the world clearly show that, at this time, opinions expressed online are more right‑leaning. It’s not just on the platform X. Certainly, the discussions happening on that platform among more conservative individuals have increased significantly, but it’s not the only place where they’re happening.
    The phenomenon is obviously complex, but generally, centrists and people on the left tend to trust traditional media a bit more. Currently, people on the left are dividing their attention between traditional media, influencers and social networks. People on the right have less trust in traditional media. They therefore seek out news online more and also engage in more online discussion.
    The popularity of these platforms gives some weight to the voices of the right. A study published by Media Matters for America in the United States clearly demonstrated that most of the biggest American influencers have a right‑wing ideology. Canadians consume American content. This is also true everywhere in the world.
    Mr. Morin, we’re listening.
     I would like to add a slight nuance regarding the data.
     For our part, we just finished a survey of over 6,500 people.
    When we say that young people are on the right, we need to be careful. First, there’s a big difference between young men and young women. That’s major. The answers are very gendered on these questions. Young women are rather left‑leaning, clearly left‑leaning, on a number of social issues.
    Then, when we say that young men are on the right, it’s not the majority of young men. There’s an increase in young men who are on the right, but they’re not yet in the majority. That’s very noticeable. I wanted to give these nuances.
     Now, we do see it on digital social networks. I mentioned earlier that I also think that, due to the way digital social networks operate, they’re more favourable to a number of right‑wing influencers in their approach to political issues. I say this here without any political or partisan judgment.
    Moreover, I also think that the American right—as I mentioned in my introduction—quickly understood how to effectively use digital social networks and make relevant use of them to reach young people. We saw it during Mr. Trump’s election campaign with the late Charlie Kirk, who was indeed able to mobilize a part of the American conservative right.
     Madam Chair, do I have a little time left?
    Apparently not.
    Thank you.
(1725)

[English]

     Mr. Greaves, welcome to the heritage committee. You have the floor now for five minutes.
    Thank you, Madam Chair.
    Thank you, colleagues, for having me today.
    Thank you to our witnesses for sharing your expertise with us today. This is alarming stuff, but it's important for us to hear. I appreciate your time.
    My question, to start, is for anybody, but perhaps Mr. Bridgman might have an answer. I'm curious as to whether there's a standard definition for an “influencer”. How do we determine the threshold that separates an influencer, per se, from anybody online with opinions?
    It's an excellent question.
    I'm going to give a data-driven definition. In our work we're primarily concerned with political influencers and political influence. What we're looking for are people with a large following who share their opinions online and who trade in the currency of, for better or worse, authenticity. It's somebody who is speaking from their voice and who spends the majority of their time talking about politics.
    We have a threshold depending on the platform, but we set the bar in the Canadian context at about 10,000 followers. If you have 10,000 followers or above, and you're primarily producing political content, you get labelled as an influencer.
    Various studies have different definitions, but this is the general thing: Are you focusing on political content? Do you have a certain number of followers? Are you speaking on behalf of yourself and with your authentic voice as opposed to an institutional or organizational perspective?
    Excellent.

[Translation]

    Mr. Morin, do you agree with this definition?
     Absolutely. You should never contradict a colleague or engage academics in a conceptual debate; otherwise you’ll all end up sleeping here.
    Some hon. members: Oh, oh!

[English]

    In the same vein, I'm curious if either of you are either aware of any research or have conducted research yourself on what the ideological orientation of this body of influencers is. Do we see a particular tendency in terms of particular places on the political spectrum, or is it not that clear-cut?
     It's certain that there is a wide variety of perspectives. Influencers are a very diverse group.
    In a recent study, we examined the Canadian influencer map. We were looking at over 1,000 prominent influencers in the Canadian context. One of the striking things about the Canadian context is that there's absolutely an ideological divide, but the largest divide we found in our analysis was about the type of content people focused on.
    What we were trying to model is how often people engage in conversations with one another. Having a perspective is one thing. We all have a perspective in politics, but are you actually talking to somebody else about issues you care about? Is there that dialogue happening?
    What we found was that in the Canadian context there certainly is a distinctive, smaller right-wing cluster that does not really have a parallel on the left. There are a few, but it's much smaller. The vast majority of engagement and attention is this core set of influencers who are really responding to the day-to-day of politics. They are really generating that conversation from their perspective, but often bringing nuance and interpretation and not bringing that ideology-forward approach.
    In some ways, this was very heartening for us in the study. Over and over again, social media studies have demonstrated deep polarization in online spaces. That absolutely exists. There are echo chambers, but in the Canadian context, it does seem that there is this core—I'm sure many of the names are familiar to this committee—that actually just responds to the political news of the day and shares their hot take.
     Thank you.
    Continuing on the theme of your remarks, Mr. Bridgman, you used an interesting phrase during your opening statement. You referred to the “loss of intention” that comes by the algorithmic nature of these platforms. If we think back, there was a lot of optimism about the increase in freedom that the Internet and social media might bring to individuals, with the democratization of information, etc. Your comments and, of course, many other indicators suggest that this might be a little bit of a misleading understanding of the current context, especially given the presence or the role of these very large tech companies in dominating this space.
    Would you agree with the assessment that having 81% of youth getting their news from social media and social media being dominated by a small number of large companies...? Does that actually increase or restrict the freedom to access information that individuals in a society might enjoy?
    I'm deeply concerned about the ability of individuals to access information. I think it is partly because of the algorithmic filtering. It is partly, as well, because of the isolation that occurs in online spaces. When I think about the radicalization stuff, people seek connection online but they often find isolation. What I mean by isolation is ideological isolation. It's isolation in terms of a particular community, where you lose that broader perspective. I think this can happen.
    Thank you for picking up on the intention comment, because I think this is one of the defining features of our current information environment. It's the ceding of intention. It's the ceding of agency to social algorithms. What's remarkable is that Canadians really don't trust social media platforms. They don't trust social media platforms to act in the public good, for many of the reasons that some of my colleagues highlighted. They say that, but actually, in their behaviour—their day-to-day behaviour and the way they engage in the platforms—they place enormous trust in the algorithm and in what it shows them. You see this repeatedly, particularly on TikTok. The explosive growth of TikTok is exactly because people believe the algorithm is showing them things they need to see. Untangling that will be critical to moving forward on this problem and to addressing what I think is an information-seeking agency crisis in the country.
(1730)
    There are more and more questions all the time. We've had very interesting testimony today.

[Translation]

     Mr. Champoux, you have the floor for two and a half minutes.
    It’s true that we often find more questions than answers, but we still have to keep asking them.
     I come back to you, Mr. Morin.
    We talked earlier about different axes of radicalization. We talked about religion, which is one of the commonly used paths. We also talked about masculinism, which you mentioned a little earlier, and about which you testified earlier this week at the Standing Committee on the Status of Women, if I’m not mistaken.
    What are the other trends that are, let’s say, popular among manipulators?
    Is it possible to predict trends? Can we observe them to better equip ourselves and better prepare our youth, who are the targets?
    Obviously, you saw this summer that four people were arrested by the Integrated National Security Enforcement Team, two of whom were members of the Canadian Armed Forces. It is one of the main sources of concern for security services in Canada at the moment. Right‑wing and anti‑government extremism interact quite a bit. Currently, it’s a source of concern for both national security and public safety, as there’s a lot of resistance toward institutions and so on. It’s a movement that has obviously been very present in the western space for a decade.
    I would say that in North America, right now, it’s as concerning as the jihadist threat for our intelligence services. It’s difficult to deter these elements.
    I would like to add one point to answer your question and continue the thinking on the previous question.
    When we look at the radical ecosystems and influencers, we realize that to a degree, it mirrors the two solitudes in Canada: an English‑speaking ecosystem and a French‑speaking ecosystem.
     The English‑speaking ecosystem is heavily influenced by major American influencers, like the Tucker Carlsons of this world, among others. They really have a very significant influence.
    The French‑speaking ecosystem, particularly in Quebec, is more linked to French influencers. During the pandemic, we saw a lot of exchanges, and not just virtual exchanges. Today, there are influencer invitations on both sides. These two ecosystems represent a very important trend.
    The second trend, and I’ll stop there, is the desire of alternative influencers to reach a large audience, nevertheless. Obviously, I’m talking more about radical influencers who seek to enter the public space.
    The member was saying earlier that he watches a lot of TV, which I do too. I think these influencers still have a desire to exist in the public space. They’re trying to get invited to more mainstream platforms, the public platforms, and we’ve seen that a lot since the pandemic.
     It’s interesting to see that the older generation isn’t the only one still watching TV. So there’s hope.
    I’ll stop there because I’m looking forward to Ms. Carignan joining the conversation, Madam Chair.

[English]

     Before we move on, I'm just going to ask another question of my own, because I noted that, Ms. Paul, you talked about the social media algorithms being very intentional with how they are attracting their dollars and how these algorithms are being used to run their business. From Mr. Bridgman, we heard lots also on algorithms being the key part of the problem.
    Mr. Bridgman, you said that it's a key lever that is available to us, but we've had Meta at this table, at heritage committee. They tell us that their algorithm is their key value proposition and the government can't touch that. Perhaps you have some suggestions on what exactly we can do as the Canadian government about these algorithms.
    The social media platforms will stand here and say, “You can touch nothing about our business model. Nothing about our business model is adjustable by any national government. We are above any single national government.” They will say that repeatedly, and they will say it with regard to ads. They will say, “This is an international problem. This is tricky. You can't....” Okay. Fine. That is their perspective, but it is not the perspective that the national government has to take.
    The stuff that platforms are able to do, the ads they are able to promote and the complexity of their systems have made them feel invulnerable and ungovernable in this day and age, and I think countries around the world have struggled with governance. We are starting to see some of these pieces fall into line. Next week, in Australia, an under-16 social media ban is going into effect. It is very far-reaching and it's going to be very interesting, and I strongly encourage the committee to evaluate how that goes. It's very pertinent to this subject of study.
    These things are absolutely controlled by dials, switches and decisions within these platforms, and they are very responsive to different governments who impose fines, who impose incentives. I think this is the one thing that I really would like to emphasize. The way internally these companies are going to think about this is this: What is the cost to implementing a solution here? What is the cost to an algorithmic change? What is the cost for better screening ads? What is the cost for changing things? How much engineering time and effort is that going to take? That is a business decision. It's a probabilistic business decision. As a regulator, what you have to do, what you have to think about, is how you change that knob in favour of youth health. How do you change that knob in favour of democratic interest? How do you impose on these platforms a cost associated with bad behaviour?
    I just want to finish this remark by saying I am old enough to remember Saturday morning cartoons. There were ads on Saturday morning cartoons. If some of the ads that occurred on Meta had been shown on Saturday morning cartoons, those TV stations would have been shut down. There would not have been this hand-wringing. If the TV station said, “No, that's just our business model; we can't do that,” that would have been completely unacceptable, but for some reason, with these large tech platforms, we have ceded this ground. We've said that it's too complicated. I think we really need to remind ourselves that actually that's a choice. They are governable. They operate within our boundaries. They access an enormous Canadian market. It's incredibly profitable for them and they will respond to economic incentives to improve their behaviour.
(1735)
    Ms. Paul, do you have something to add?
    Yes. I would really like to echo what Mr. Bridgman said. I think it's critical to understand that these companies are completely capable of addressing these issues and only do so when there are repercussions. For instance, as I mentioned, we see ads for drug trafficking and weapons trafficking on these platforms. We see Meta profiting from human smuggling in North America. One of the things we don't see ads for is child sex abuse material, and that's because the legal consequences of even hosting that content are significant. It's proof that the companies are completely capable of addressing these issues and are choosing not to do so when there are not financial or regulatory repercussions.
    One of the things they like to highlight is their cost-cutting measures: removing moderators and relying increasingly on AI to become a more financially nimble company. We've seen them fire thousands of content moderators in trust and safety while, at the same time, Meta just offered a historic quarter-billion-dollar contract to a single AI engineer as part of its efforts to poach that engineer from another company. It's clear that their efforts are not keyed towards safety in any way but towards maximizing profit at any cost.
     Thank you very much.
    I would like to welcome our next witness to committee, Marie-Eve Carignan. She is another professor from the Université de Sherbrooke, and she is here with the UNESCO Chair in the Prevention of Violent Radicalization and Extremism. Welcome.
    We would like to give you five minutes for an opening statement.

[Translation]

    I thank the members of the committee for inviting me to appear before them today.
    Indeed, I speak on behalf of the UNESCO Chair in the Prevention of Violent Radicalization and Extremism, and in my own name, as a full professor at the Université de Sherbrooke.
    We are particularly concerned about the misinformation that affects young people through online content. This affects not only young people but also the entire population. I think it’s a global issue that needs to be addressed for both the population as a whole and for young people.
    Several forward‑looking reports indicate that disinformation and misinformation are major issues, both in the short and medium terms, for our society and democratic societies. We can think of the World Economic Forum, which published a report in 2024, or of Policy Horizons Canada, which really highlights the major risks to democracy. People will no longer be able to distinguish the truth from lies, or from hateful content and polarizing content to which they’ll be exposed online. Furthermore, the rise of artificial intelligence makes it difficult for people to distinguish between what’s true and what’s false.
    Young people are directly exposed to this disinformation, often without context and without a perspective that allows them to understand what's false and what's true. They don't know whether economic, political or malicious interests underlie the content they're exposed to. Many young people circumvent the rules of the platforms and create accounts on platforms that are not suitable for their age. That is very difficult to monitor and supervise. I'm sure we can talk more about how we need to regulate the presence of young people on these platforms.
    Parents try to control the online content that young people are exposed to, but it's a challenge for parents who don't always have the tools to properly guide young people's online practices. This oversight is very difficult: it can make young people feel like they're being watched, which creates family tensions even as parents are trying to get a clear picture of their children's online practices. Disinformation poses risks for society as a whole, both for the young and the not‑so‑young.
    The report entitled “Fault Lines” from the Council of Canadian Academies, which I had the pleasure of participating in as an expert, indicated that there were consequences at different levels, namely for society, communities and individuals.
    From a societal standpoint, disinformation can lead to political polarization, risks of democratic drift, a decline in public trust in political, economic, media and scientific institutions, and inaction on various issues such as climate change.
    From a community standpoint, this can lead to low adherence to public health measures, resulting in risks of epidemics and preventable diseases, vaccine refusal and a significant increase in health care system costs.
    From an individual standpoint, this can lead to health risks, even the risk of death, due to poor decisions and money spent on products that are ineffective or even dangerous. This also concerns young people, who are at risk of adopting behaviours they've been exposed to online, putting them at risk from an individual, community or social standpoint.
    Disinformation and the role of disinformation actors, particularly influencers, are a matter of concern for all Canadians. The Digital News Report 2025 highlights that 54% of Canadian respondents regard influencers and online personalities as significant threats in terms of misleading information online, which poses a risk to our populations and to the youth who are exposed to it. We can also think of the 2024 NETendances report, which highlights that 59% of young people aged 18 to 34 follow at least one online influencer, and that 45% of young people say they spend more than three hours a day on social media.
    In addition, data on young people under 18 is highlighted this month in a report from Sidaction in France, which shows that a large majority of young people are aware of and follow online masculinist influencers.
    Media literacy is becoming an important skill for protecting young people from this disinformation. It is not a uniform skill, and it is taught somewhat sporadically, as teachers do not all have the same capacity to educate young people about media literacy and the risks associated with platforms.
    As part of the work of the UNESCO Chair in the Prevention of Violent Radicalization and Extremism, we’re particularly interested in media education and various initiatives implemented by stakeholders, such as the #30sec to check it out grants from the Fédération professionnelle des journalistes du Québec, the Départager le vrai du faux sur le web initiative from the Agence Science-Presse, or the initiatives from Les As de l’info.
    We find that one-time initiatives like these really have an impact on young people. They help them find more reliable information. This can even have a very direct and effective impact on countering radicalization, bringing into families a discourse that helps nuance opinions, even among more radical parents.
    Exposure to the right information is important, but exposure to media education also allows young people to make their own choices as consumers of content. At least, it can enlighten them. They can know what content they’re exposed to, namely whether it’s professional or journalistic content, or if it’s ideologically oriented content. They can also know who is broadcasting it. Digital social networks are certainly a place of vulnerability for young people.
(1740)
     We can talk in particular about young people's exposure to conspiracy content and offensive content that can disturb them and leave a lasting impact. Several studies show that young people are even unintentionally exposed to sexual and violent content online. Such content can desensitize them, and some actors deliberately seek to desensitize them and push them toward radical ideas. Several groups operate this way, using social media and online gaming apps to manipulate and reach young people. Some platforms, like Kick, are used to promote violence freely and to make this violence acceptable to young people. A French influencer died this summer precisely because of practices that normalize violence and encourage acceptance of it.
    Social media is becoming an environment for recruiting young people. Different extremist groups, jihadists, those involved in organized crime or those focused on sex or violence, target young people online. Cyber-criminal networks like 764 or criminal groups like The Com primarily target young people aged 8 to 17, who are impressionable, to lead them to commit violent acts, self‑harm, torture or kill animals, produce child sexual exploitation material or even commit suicide. The violence of these acts often intensifies over time.
    These platforms are sources of harassment and hate speech for young people. We must therefore be concerned about it.
    We need to think about solutions such as funding quality media and information education for young people, which will allow them to recontextualize the information they’re exposed to, and media education initiatives, like the one I mentioned and that we studied.
    We also need to think about initiatives to provide communication tools to parents to help them better interact with their youth, better assess problematic situations, discuss them with their youth and better evaluate the communication practices of young people.
    We need to provide teachers with tools so they can better raise awareness about the media.
    We need to better regulate access to platforms for young people and better regulate access to hate speech, cyber-bullying and violence, not only among young people but also across the entire population, as this is an issue that affects everyone.
    Finally, I believe we need to conduct more research to understand online manipulation tactics and the groups that target the general population with hate speech, but particularly young people, so that we can intervene. We really need to continue researching these issues to gain a better understanding of them.
(1745)
    That’s very interesting. Thank you very much.

[English]

     Mr. Waugh, I will now give you the floor for five minutes.
    Thank you, Madam Chair.
    I'm going to start first with you, Ms. Paul, because you're in Washington, D.C.
    In the last month, we're starting to see more news from the United States. We're seeing court filings now coming out, talking about increased depression and anxiety among users. We're seeing more information that social media platforms were maybe hiding. All of a sudden they're in front of the U.S. Congress, and now we're seeing, in the last month, some reports out of this.
    I would like your thoughts on that. I'm not going to name the platforms because they're all covering up. I just want your views since you are in that nation's capital.
    One of the things our organization focuses on, beyond just the harms of these platforms—as you mentioned, the court cases really have shown how these platforms are covering it up—is the influence these platforms have, the lobbying they are engaging in and how they use third party groups that they pay to launder their influence and to appear to have more voices in support of policies that keep them free from liability for any of the harms they're profiting from.
    This is something that I think has been an issue not just in the United States. We see it all around the world with these companies and their vast influence, not just on politicians but also on global economies. It is something they wield in order to push for particular policies. As you mentioned, yes, all of these platforms do have something to hide. I think one of the things that is critical is to see which ones have undertaken efforts to address issues and which have not.
     For instance, about a decade ago, Google was fined half a billion dollars for illegal online pharmacies that were trafficking opioids. It's much harder to see that kind of content in sponsored ads in Google Search, whereas Meta has faced none of those financial repercussions, and we see the company, as of today, continuing to run and approve ads explicitly for dangerous and deadly drugs.
     These are the kinds of things where we can see that pressure and repercussions have made a difference in how the companies operate. Making that a consistent policy across how all of these companies are dealt with is very critical in ensuring safety for all users online.
     Regarding the exploitation of children, on which social media platforms are youth most vulnerable? We've had a ton of articles on this in the country. A big one is the sexual extortion of children for money, which is on the rise.
    What are your thoughts on that?
(1750)
     To understand that, we need to look at the data and understand which platforms are the most heavily used by children. Instagram, for instance, remains one of the most popular platforms, so much so that, several years ago, it tried to create an Instagram Kids platform for children under the age of 13, even though it had yet to solve the harmful problems on its existing platform or on the teen platform it has since been promoting. It's now facing civil and regulatory repercussions, or the potential for them.
    Looking at the platforms kids go to the most is important, as is looking at how widely these platforms are used. For instance, Kick was mentioned earlier. While there are issues with that platform, it's of much less consequence in terms of usage penetration than the much better-resourced platforms like Meta's Instagram and Google's YouTube. The private messaging aspects of platforms like Instagram—which are largely not moderated at all—tend to be where a lot of these sextortion cases take place, not just for young people but also for adults.
    Seeing which platforms kids are gravitating towards the most, and how these platforms are trying to further capitalize on children's attention by creating platforms for kids under the age of 13, is where regulators really should be looking in order to ensure they're keeping children safe.
     Thank you.
    Regarding disinformation and loss of trust, I want to move to Ms. Carignan because she joined us late.
    The former head of the CBC, Catherine Tait, was in committee. She admitted that her organization held a forum in Toronto on loss of trust.
    Traditional media has a loss of trust—she admitted that here—and now you're telling me so does social media. Who can you trust these days? Is there anybody?

[Translation]

     That’s a good question.
    The data on trust shows that Canada still fares better than many other democratic countries. We have a higher level of trust than other countries, so trust is not as bad as we might think. However, we are still seeing a decline. My colleague Marc‑François Bernier and I conducted a study in 2023 on the relationship between Canadians and the media. We found that there is still a sense of mistrust regarding the media's independence from economic and political power. We also see a lack of understanding of the difference between traditional media, which follow journalistic practices and must adhere to a code of ethics, and online influencers who claim to be journalists but do not adhere to any code of ethics, promote a social and political ideology and comment on economic issues.
    To restore trust, it may be necessary to better define the difference between how traditional media operate and the work of online influencers who claim to be journalists and exploit this confusion among the public.
    Indeed, these influencers often fuel mistrust. Our chair observes the convergences between different ecosystems. Regardless of the underlying ideology of influencers, many of them are currently coming together in a discourse promoting mistrust toward current authorities. They thus give themselves legitimacy by presenting themselves as a reliable voice against government, researchers and media outlets that they claim are unreliable. This mistrust is fairly widespread in the discourse of all the online influencers we observe.
    Thank you very much.

[English]

    Mr. Baker, welcome to the heritage committee. I'm glad to see you here.
    You now have the floor for five minutes.
    Thank you very much, Madam Chair. It's a pleasure to be here.
    Thank you to all the witnesses for their testimony.
    I'd like to start with Mr. Cooper.
    If I heard you correctly, you spoke about eating disorders, earlier. Am I right about that?
    Yes, sir.
    If so, could I ask you to speak about how prevalent eating disorders are and what the consequence of that is, to the extent you can?
    As it pertains to the general population, about 10% of Canadians have a high risk of eating disorders. When we look at the 16- to 17-year-old population, approximately 21% have a high risk of eating disorders on the ODES-Y screener. It is higher among young women. It's closer to a quarter of young women having a high risk of eating disorders.
    When you look at social media specifically, the biggest indicator driving negative mental health is comparison with others, and it is being driven significantly by social media. The other indicator we typically look at is suicide.
    We see an early onset of these disorders. When you study them at the international level, which is where you start getting enough data, you see a high spike of eating disorder deaths in and around that 16- to 17-year-old grouping. Risk actually falls off significantly after that. If you're not addressing eating disorder issues in the 12- to 15-year-old range, you're missing the vast majority of those who would be affected.
(1755)
     Thank you very much for that.
    In full disclosure, I used to be a member of the provincial parliament in Ontario, and I introduced a private member's bill on this very topic in 2017. I spoke with experts like yourselves and others on this.
    Mr. Cooper, the statistic in 2017 from the National Initiative for Eating Disorders was that an estimated one million Canadians had an eating disorder and that eating disorders have the highest death rate of any mental illness. One in 10 people with an eating disorder will die from their disorder. Does that sound about right to you, Mr. Cooper, based on what you know?
    I work with NIED as well—the group you are talking about—on these studies. It is accurate. The challenge is that there isn't great death data for a lot of these indicators, so it's really hard to validate that.
     It's hard to ascertain exactly how many people have an eating disorder. There isn't a really great tracker for it, so we use a risk screener, the ODES-Y, to ascertain the potential risk for it. It's not a great screener, to be honest with you, but I have seen the numbers you're quoting. I have talked with them about that, and yes, they do seem correct to me.
    Based on that data from 2017—and I'm just going to rely on that—if one million Canadians were to have an eating disorder in 2017 and if one in 10 were to die from that disorder, then about 100,000 Canadians would die from an eating disorder, if those statistics bear out.
    I'm sharing this only to say that it's a big problem, and it touches a lot of people. Very often, I think, for understandable reasons, people do not talk about it, but it's an element of what has been discussed at this committee that is really important for us to consider.
    The private member's bill I had then, Mr. Cooper, was endorsed by a bunch of folks in the sector. The intention of the bill then was to require commercial content—paid content, advertising content—which was in any way manipulated or distorted, to have a disclaimer on it so that people knew the image was distorted. Specifically in the context of eating disorders, the idea was that a lot of young people, especially, would see images of unattainable beauty and it would contribute to their eating disorder. That's what the experts told me, and that was the intention of the bill.
    Mr. Cooper, I don't know if you have thoughts on that type of measure. However, I would welcome your thoughts on that measure or any types of measures you would recommend to help us deal specifically with how social media is influencing young people in the context of eating disorders.
    Again, I'll point to data that suggests that it's not just about seeing these ads. It's the volume and the number you consume. If you're consuming a lot of ads over a period of time, then this is where we actually start seeing the high numbers of depression and of comparing yourself with others. It's not just about getting that one ad; it's about getting dozens of ads because the algorithm will keep feeding you those sorts of pieces. You get one bad thing—and people click not only on the things they like but also on the things that make them feel bad—and that starts a doomscroll of getting more and more of this negative content on this front.
    I'm not a policy-maker, so it would be hard for me to say exactly what we could do to address this particular issue. There is an inherent need to compare yourself with others. We have lots of research on why young people specifically want to engage in that sort of activity. It's a part of growing up. It's a part of trying to understand themselves and their own bodies. They're trying to understand these things, and social media really takes advantage of that.
    The other piece that was mentioned, and that is critically important, is the generation of wealth and profit from these sorts of activities. A lot of these influencers are not just showing these images but also generating money by selling makeup or other products. It's a revenue generator. The platforms are not only showing these influencers doing this but also making money off young people who are starting to have body dysmorphia and other issues, which is, quite frankly, atrocious.
    That's a very good point. Thank you.
    Mr. Diotte, you now have the floor for five minutes.
     Thanks, Madam Chair.
    This is for Ms. Paul.
    I was very fascinated when you talked about Meta and Google being aware of the harm they are doing to kids and actually profiting from it. You also mentioned that there is widespread drug trafficking on Instagram. Can you go into that a bit more and tell us how it's done? What are your concerns, and how do you track it, and so forth?
(1800)
     When it comes to Instagram, we've done several studies focusing on drug trafficking on that platform. For instance, we created a teen user who searched for things like the word “Xanax”, and before the user had even fully typed it, Meta's autofill would recommend accounts called “Xanax For Sale”, “Buy Xanax” and the like, featuring profile photos full of pills. It took very little effort for teen users to make these connections on the platform.
    It's important to keep in mind that the way these companies profit is through advertising, and the way they get more ad spend is to keep eyeballs looking as long as possible. That's why you see things like AI slop proliferating: it's one of the ways to keep people looking at the platform, whether in disbelief, shock or genuine interest.
    When it comes to the advertising mechanisms, Meta's ad library is one of the ways we've been able to look at the drug trafficking ads, and we actually have a running hashtag on our X account—#MetaDrugAds—where we frequently post videos and photos of the ads while they are actively running after they have been approved by Meta.
    The important thing to remember here is that even when the company's executives go before committees like yours and say that it's not allowed and they remove those ads after they become aware of them, they don't refund the drug dealers. They keep the money, and they clean up after they've been caught.
    Several whistle-blowers regarding issues with children have also recently come out of Meta. There was a congressional hearing a few months ago where two whistle-blowers, who were tasked with internal investigations into harms to children, detailed how the company very narrowly directed what they were allowed to research, how they were allowed to write their reports and what they couldn't research, even though it knew it was causing harm to children. Those things are also important to keep in mind.
    The advertising mechanism is very important. One of the things about the Meta ad library is that ads not labelled as political are viewable only while they are actively running. The exception is the European Union, where the Digital Services Act requires Meta to keep every ad that runs on the platform available for view in the ad library for a year. That is one of the ways we were able to see the scale of drug trafficking ads on the platform, because at any given time there is such a high volume of ads that the average researcher is not going to be able to simply scroll and see them. Because Meta is now required by law in Europe, and in the U.K. as well as a result of the Online Safety Act, to keep those ads in the library for a year, we can actually see the scale of the harm they're creating, and the fact that they profit directly from that content, don't remove those ads in many cases and let them run to completion.
    You've seen illegal activity. Do you report this to law enforcement?
    Our organization is an open-source research and investigative organization. In many cases, these are outside U.S. jurisdiction if they are users in foreign countries. One of the things to keep in mind is that many of these drug-trafficking or weapons-trafficking ads direct users to a private encrypted messenger, as opposed to a clothing website. One thing we've seen with regard to those messengers is that, in some cases, Telegram, for instance, whose CEO was recently arrested in France, is actually moderating more effectively before Meta ever takes down the ad. You can go to a Telegram account that appears in an actively running ad and find that it has already been removed for violations. However, Meta continues to collect money from the ads and actually promotes that harmful content.
    Those Telegram accounts are not able to reach users. They are not searchable on the platform; they have to buy that reach from Meta. In these scenarios, Meta is the kingpin: whatever whack-a-mole is played with these individuals, their ability to reach a broad swath of the North American population is only possible because of the failures in moderation by Meta and its advertising platform.
    What would be your top three solutions to this?
    For any country where Meta is operating—and that is pretty much every country in the world—regulations from national bodies are incredibly important, particularly in holding the companies accountable not just for algorithmically promoting illegal content but for actively profiting from it. Meta is a facilitator in this trafficking process, not just a passive host. They have said that ads make up 97% of the company's revenue. In any other industry, quality control failures at this rate would bankrupt a company, but we've seen no repercussions for Meta. They are allowed to continue collecting millions of dollars from explicitly illegal content that you could never see on a news station, in a newspaper or on a television station. At the very least, starting with their main profit mechanism and how it is being driven by illegal content is critical to getting these platforms to moderate more effectively and follow the law.
(1805)
     Thank you.
    Mr. Myles, you have the floor for five minutes.
    Thank you, Madam Chair.
    Thank you to all the witnesses for this fascinating testimony today. It's been really well delivered.
    I want to continue on the regulation conversation that MP Diotte was just speaking about. It seems to me that there are three elements here now. We have the possibility to regulate access, which they've done in Australia. Nobody under 16 has access. There's also regulating the content. We've talked about certain users and certain influencers being considered too dangerous, or whatever, and trying to regulate the content from that side. There's also regulating the algorithm. Therefore, it's access, content or the algorithm itself and how it behaves, because that has changed.
    Mr. Bridgman, you were speaking about suggested content, and Ms. Paul was speaking about advertising. We know the behaviours of these algorithms. Do you sometimes feel that the access question is a cop-out? You can't touch it, so we just have to say that no one can use it.
    I'm trying to figure out what the right balance is here. It is obviously very disconcerting. I have young kids. Sometimes I want to say there's no access, but at the same time, I'm looking at the algorithm and thinking, why can't we regulate how that algorithm behaves? We know it's driving people towards radicalization. We know what it's causing. These are facts we know through study after academic study. We know about the relationship between screen time alone and mental illness, and about the addictiveness of the algorithm.
    Maybe I can start with you, Mr. Bridgman. Where do you think our weight is best placed from a regulatory perspective—on access, content or the algorithm itself?
     That is an excellent question. Thank you for it.
    For the platforms, they are here. They are widely popular. They are widely used by Canadians. In terms of cutting off access for maybe under-18s, we'll see how that plays out in Australia. I think what I've said, and what others have said or hinted at as well, is that you can tune these algorithms to reduce harms. That is possible. That is a governable fact. That is an available policy option.
    We've talked a lot today about the drug example on Facebook. During the last federal election, we saw many AI slop ads using political content being promoted during the campaign. We ran a very small study using vision models: Is this political? During an election campaign, it's governed speech, and particular rules apply. The vision model can easily identify it. These tools are widely available. What is not being done is putting pressure on these platforms to apply that vision model effectively and to ensure that engineering and staff capacity is devoted to it. We have to turn that dial. We have to make it more expensive and more costly.
    There are ways to tune this. There are ways to work with the platforms and say, look, this is not going to work; we're going to start imposing fines in this space. If you run an ad that shares drug content, not only can you not profit from that ad, but all of the associated revenue will also be subject to taxes and fines. If you apply the right lever, the company will very quickly change the amount of resources it devotes to this internally.
    The approach of the DSA is a harms-based approach: we identify harms on the platforms, platforms are responsible for proactively identifying harms, and we use those levers to try to reduce the amount of harm. These systems operate in probabilistic ways. For those of you who are experiencing the new AI way of doing things, this is all probabilistic. We can play with those probabilities. We can impose fines and adjust those probabilities in favour of reducing online harms, protecting children and fostering a better democratic discourse. That's all possible.
(1810)
    Has that been done in other jurisdictions?
    Absolutely. This is what the DSA is doing. This is, to a certain extent, what Bill C‑63 proposed and was trying to do. Yes, there is criticism of that, and there is space for it, but that was the approach: to try to reduce that harm calculus and to force....
    In these decisions, we cannot think of these companies as engaging in evil behaviour. They are pursuing their advertising and maximizing. That is great. We say, okay, yup, that's good. You are free to do that. You profit from that and that is great, but here are the rules for our society. Here are the rules. Here are the harms we are okay with. Here are the harms we are not okay with. Here's how we are going to push you in that direction.
    In fact, that is the purpose of government. The purpose of regulation is to do that.

[Translation]

     I will ask you the same question, Ms. Carignan.
    Do you have any comments on that?
    I agree with what my colleague just said. These are really important options to consider, and indeed, financial penalties work well with all these platforms, which are looking to make money because, unfortunately, money talks for them.
    However, I believe there are also other elements.
    You mentioned access to content and algorithms. With regard to content, I think it is fairly easy to work to prevent hateful content, that is, to legislate against hateful and violent content online and to moderate content that infringes on other rights and freedoms under the Canadian Charter of Rights and Freedoms. We must therefore be able to target specific content, namely content that is violent, hateful or contrary to other rights and freedoms, without regulating disinformation as such. I think that content is easy to block.
    Furthermore, during the COVID‑19 pandemic and the storming of the Capitol, specific accounts were closed because they were highly problematic in that they promoted violent and hateful ideologies. Since then, these platforms have been deregulated and the accounts have been reopened. I believe this shows that it’s possible to do so, but that it depends on the willingness of the platforms, a willingness that is not always there.
    I will add a fourth element with respect to the environment. We need to provide good information and promote positive content. For example, Mark Zuckerberg and Meta recently stated that fact‑checking was politically dangerous and stifled opinions. But that is completely false. Fact‑checking does not block any content; it shows which content is reliable, verified by journalists and authentic. I believe we need to return to solutions like this and highlight content that is valued by media outlets belonging to internationally recognized journalistic initiatives or scientific content.
    Platforms are therefore also capable of prioritizing good content and providing reliable sources of information to counterbalance all that. It would not be about blocking content, but rather about providing reliable alternatives so people online are also exposed to good information. In that respect, fact‑checking worked well, so there was no reason to stop it.
    I believe we need to focus on blocking hateful content, implementing fact‑checking and promoting good content within algorithms and platforms. As my colleague said, I also believe we should have moderated algorithms to block certain hateful content and impose financial penalties if this is not done.
     Thank you.
    Thank you.
     Mr. Champoux, you have the floor for about two and a half minutes.
    Since we talked about Bill C‑63, I’d like to say that, during the previous Parliament, when that bill was under consideration, it was proposed that it be split so that everything concerning harm to children could be adopted. There was consensus, but the government refused to proceed in that way. Had that been done, we would likely have made progress. I still find it unfortunate, and it’s important to point that out.
    Ms. Carignan, earlier you mentioned some proposed educational initiatives, such as the As de l’info program. There are others, including MAJ and Rad. These are various initiatives that already exist, and that’s very good.
    What I find unfortunate as a father of young teenagers is that schools don’t seem to have an official mandate to educate young people specifically about social media and about being vigilant toward misinformation and disinformation. I have never heard my children come home from school saying that they learned today how to detect fake news, that it was interesting and that they now have tips for spotting fake content online.
    Why aren’t we more proactive with respect to our youth in school?
(1815)
     The difficulty is that media education is often seen as a cross-curricular skill. Teachers who are required to provide media literacy training are not always trained to do so and do not always have the knowledge about how algorithms and social media work to educate young people on these issues. They therefore need help and tools to inform young people. That’s what As de l’info does, among other things. This site offers resources for teachers to provide young people with good tools so they know where and how to find information and approach certain sensitive topics.
    Our research on the #30sec to check it out training, an initiative from the Centre québécois d’éducation aux médias et à l’information, or CQEMI, and the initiatives from the Agence Science-Presse, shows that it works over the short, medium or long term. Young people who have taken these training sessions become more informed about how social media works and the difference between professional journalistic content and influencer content. They are therefore better able to assess information sources. This doesn’t mean that they’ll necessarily stop consuming certain problematic sources, but at least they’ll do so knowingly and will be able to seek out other sources.
    This initiative would benefit from being standardized within our education networks, but these networks are funded on an ad hoc basis, and it often takes a lot of resources. These include, among others, volunteer journalists who provide the #30sec to check it out training, but we already know that journalists are few in number and overworked, which certainly limits the number of training sessions they can offer.
     Without adequate training for our youth, is Australia’s proposal to restrict access to social media a good solution?
    Should we do that or do both at the same time?
     I am torn. I think restricting access to social media for very young children is a good thing. However, at 16, as we see in Australia, it’s more debatable because young people need to be educated. Later, when they have access to these networks, if they haven’t been taught best practices and are suddenly exposed to certain content, what will they do with it?
     They need to acquire skills to effectively use social media, to develop critical thinking in relation to these platforms and to know how to evaluate resources and tools. I therefore believe that complete blocking is not a solution. We need to focus more on education and blocking hateful, violent and problematic content. We need both a legal framework and education about these platforms, because young people will find other ways to circumvent it.
     We’re already hearing about this in Australia. There’s talk of using virtual private networks to bypass the law. Some young people use the accounts of their parents, who, as I said, find it very difficult to keep up with young people’s online activities. In any case, young people also need to be exposed to certain online content. So it is a little difficult to say what they should or should not be exposed to. I think it should be done through education and a certain amount of supervision.
     We may have let down a cohort in terms of digital literacy, but if we flood the market with educated and alert young people, perhaps we can reverse the trend.
     In any case, it will have an effect. In our research, we see that young people who are exposed to good, reliable, quality information share it with their families. For example, the research conducted by the UNESCO-PREV Chair with As de l’info showed that young people who encountered reliable information online discussed it with their parents, and that this changed behaviours, for instance, with regard to perceptions of immigration and climate issues. So it has an amplifying effect within families, which is very positive.
    Of course, it can sometimes create tensions. That’s what makes education difficult. Teachers don’t always dare to address sensitive topics with young people because they fear the parents’ reactions. It’s very difficult to have to confront certain issues.
    At the moment, the chair has a project with UQAM and Stéphanie Tremblay, particularly on how to address sensitive topics in the classroom. I think it’s a matter that needs to be discussed. We need to help teachers address these topics in the classroom to have a positive effect not only on young people but also on their circle.
     Thank you very much.

[English]

     Mrs. Thomas, you have five minutes or so.
    Awesome. Thank you very much.
    Ms. Paul, my question is going to be directed to you first. I want to get your thoughts on what the solution is in terms of properly regulating platforms to prevent this type of behaviour. Based on what I'm hearing from you, it doesn't matter if the public is outraged. It doesn't matter if organizations like your own are doing the necessary research and reporting. These things don't seem to be doing it. There is still illicit material reaching the eyeballs of youth, and then, in the case of ordering illicit drugs, reaching their hands, and then, of course, having an impact on their lives.
    What is it that we, as legislators, need to know in terms of how to regulate these tech giants, potentially, to generate change and ultimately to protect people who are under the age of 18?
(1820)
     Getting to some of the questions we've been hearing regarding the age verification laws, one of the things that's important to keep in mind is that companies like Meta have heavily promoted that approach as a solution, which should itself raise suspicion. All that does is pass the responsibility on to kids and their families and away from the platform that is directly profiting from harm. It also takes the responsibility off the platform to invest in any sort of moderation or quality control of its main product, which is advertising. We don't see any of these levers being used.
    It's gone so far that for some of these platforms we're seeing direct sanctions violations. Because they are so heavily operating without any sort of oversight, they're actually profiting from sanctioned entities for things like advertisements and subscriptions—sanctioned individuals who use their full names as listed in OFAC's list.
    This is because these companies have been able to operate with complete impunity for over a decade now, passing the buck of responsibility to parents. Keep in mind that not every kid has parents who are going to understand how to use social media. Not every kid has parents who are able to be involved in their lives. Some kids have grandparents who may have never even seen one of these platforms before. Again, it's all passing the buck of responsibility away from these companies who are failing to do their job and the quality control of their product.
    Starting there with their product, which is advertising, and ensuring that is controlled is one of the critical pieces, because that is their main financial lever. That's one of the things that's going to force further moderation, so they're not incurring harm, fines or regulatory financial burdens that are going to harm their business model.
    Looking from the top down, which is where the money starts, is one of the most critical pieces toward any successful regulation. Obviously, the DSA has been mentioned several times as something that is new regulation. The fact that the companies have pushed back so heavily on it.... Again, the DSA is not meant to police speech on these platforms. It's targeting illegal activity and terrorism.
    The fact that the companies are pushing that hard, even having the U.S. President do things like threaten tariffs against Europe for imposing the DSA, suggests that any sort of regulation is effective in a way companies are not willing to comply with because it would require them to give up profits they've been making from harmful content.
    Starting with the profit-driving mechanisms is one of the most critical ways to have effective enforcement in any country. Looking at how these companies react to the threat of enforcement is a really important measure for understanding where the pressure points are.
    Okay. I can appreciate that.
     In many ways, that's a fairly general answer. Are there specific pressure points that should be applied through legislation to be most effective?
    I'm not as familiar with the way the legislative process works in Canada, so I wouldn't feel comfortable speaking to that. However, I think that utilizing laws on the books and imposing fines for companies profiting from already illegal activity is a very easy first place to start.
    Also, look at models that are happening in Europe and the U.K., where they are imposing online safety protocols in a way the companies are clearly unwilling to meet. For instance, Meta has announced that it's going to stop running political ads in Europe, simply because it's unwilling to make an effort to moderate its platform in a way that complies with reasonable regulations.
    Thank you.
    One of the things you talk about in an article that is within your sphere is that Meta can keep certain illegal material off its platforms, specifically images depicting child sexual abuse. You say, “The company is completely capable of doing this...It has the technology to address these issues. It's a choice at this point to not do so.”
    If Meta is technologically capable of detecting and removing these types of things—trafficking—what explains their lack of action in this case, then?
     The failure to impose any sort of repercussions, whether financial, civil or legal. Without repercussions, there is no incentive for the company to invest in moderation. In the years prior to 2020, for instance, when they really touted that they were investing in more content moderators in trust and safety—thousands of people they've since laid off—they were doing it because there was just the threat of U.S. regulation at the time.
    Unfortunately, we have not seen that regulation come yet. In the meantime, the company has decided to lay off thousands of those moderators and not invest in these types of safety because, simply, there's nothing stopping them from not doing so.
(1825)
     Essentially, then, if regulations or legislation is to be put in place, to have teeth, it would have to impose a significant monetary penalty on these tech organizations in order to see change. Is that a correct understanding?
    It could be a monetary penalty or a penalty of any kind. Incorporating the fact that there is illegal activity that the average person, if they were.... It was asked if we work with law enforcement. I'm assuming the people running these ads for drugs and weapons would get arrested in Canada. They would certainly get arrested in the U.S., but the company that's profiting the most from running the ads in the first place is facing no repercussions, either legally or financially.
    Imposing repercussions for profiting from that kind of behaviour and for the facilitation of those crimes is a very basic place to start. Again, I'm not as familiar with Canada's legislative process. From there, it's about understanding more abstract concepts, like what defines hate speech and what defines disinformation, but start with the illegal content, because that's where we are. These companies are running ads for drugs, weapons and fraud.
    Thank you.
    It makes me wonder about enforcement, given the prevalence of all this harmful content, but I will pass the floor over to Mr. Myles for five minutes.
    Okay, so we're still going.
    Thank you very much.
    I'll quickly ask this: Is there any concern about regulating in the ways we've been talking about? At its core, I am still concerned about the addictiveness of the actual algorithm. We have ads and we have the content on it that is concerning, but there is something to be said about the idea that even without ads, it is designed to pull you in. You aren't choosing those things.
    That doesn't go away, even with these regulations. Are people looking at ways to actually address that element of concern for those of us who have kids using these things and are worried about the screen time and even their ability to pay attention for long periods of time? It certainly has affected my ability to pay attention for a long time. I scroll all the time. I'm as guilty as anybody on these things.

[Translation]

     Go ahead, Ms. Paul, please.

[English]

    I'm sorry. I didn't direct the question.

[Translation]

    Mr. Morin, you have the floor.
     This is certainly an important issue.
    I’m going to make a kind of leap through time. You probably saw this morning, while reading the news, that after 25 years, an article showing the negative impacts of glyphosate was retracted by a scientific journal. It was realized that, 25 years ago, the article was written not by the researchers who claimed to have written it, but by a company that paid those researchers.
     I’m going to take another leap in time and bring you back to the tobacco industry, which was a bad corporate citizen at the time, and to your role as a member of Parliament. Imagine that you are a member of Parliament from 25 years ago, with the information you have today about the negative effects of tobacco. What would you do?
    Today, experts say there are as many negative effects from continuous exposure to social media and harmful online content as there were back in the day with tobacco industry products. It’s therefore extremely important to move forward today, and I think we need to start somewhere.
    The tobacco industry began conducting studies. Remember the tobacco industry’s response that doubt was its product. The idea was precisely to make people like us, and politicians, doubt by saying that it was more complicated than it seemed, etc.
    In my opinion, we have enough data today to move forward. Mr. Myles, you said that we would need to work on the issue of algorithms and the dependency they create.
    To answer the question posed by the member earlier, we must start by defining the most harmful content that we want to remove from the market. Is it child pornography? Is it the sharing of intimate images without consent? Is it the glorification of violence, terrorism and hate speech? Next, companies must be held responsible, obligated to act and be more transparent.
    That’s what the former Bill C‑63 was doing. It required that companies submit a report each year with a protocol, verified by a digital safety commissioner. If the commissioner disagreed or found that the report had not been prepared well enough, penalties were imposed.
     We have to start somewhere. Child experts will work on the issue of addiction, etc. In my opinion, we no longer have the luxury of waiting.
(1830)
     Thank you very much, Mr. Morin.

[English]

     Ms. Paul, I think you want to interject as well.
    Yes.
    One thing with regard to algorithms, keeping in mind that this hearing has a focus on children, is that algorithms are not there to help you, me or children. They're there to help advertisers and the companies profiting from the ads.
    When Facebook first started in 2004, it didn't have algorithms. You followed your friends, and the feed was chronological. It didn't introduce its first algorithm until 2006. The company has increased those algorithms and the kind of content it recommends so that it can maximize profit, not suggest things that are useful to users.
    We know, from multiple Facebook whistle-blowers, that the company uses algorithms that push toward increasingly divisive and extreme content because they're the most likely to get engagement. Engagement means you're on the platform longer, and that means they can charge more for ads. It gets back to the business model of these platforms and understanding that kids do not need algorithms to operate on a social media platform. The platforms need the algorithms to keep kids' eyeballs on them.
    I would encourage your committee to interview the whistle-blowers from Meta, who were speaking before Congress just a couple of months ago, because they specialized on harms to children based on the platform's designs and algorithms, and were directed by the company to investigate only certain things and not things that were going to harm the business model. They detailed this in their testimony, but I think it is important for the MPs to hear this from the Meta whistle-blowers themselves.
    That's very helpful. Thank you for both those answers. They were tremendous.
    It's so interesting.
    I know Mr. Généreux would like another short round. We have another 15 minutes, so we'll do three, three and two minutes. Does that work?

[Translation]

    Mr. Généreux, the floor is yours for three minutes.
     Thank you very much, Madam Chair. I thought the meeting was over.
    I talked with my colleagues and I listened to all the witnesses speak. By the way, your answers have all been excellent. It was very interesting.
    Laws have been adopted in some other countries. For example, you mentioned Australia, if I’m not mistaken. Could you tell us which countries, in your opinion, have good laws or are currently applying good regulations? Can you give us some examples?
    Also, I would like a follow-up response to the previous questions.
    As I just said, you are excellent witnesses. We have added two meetings as part of this study. That said, we would have time to hear from other witnesses, who could potentially be elected officials from other countries or specialists who have made the laws you are familiar with. If possible, I would like you and Ms. Paul to recommend countries we can draw inspiration from or, if you know of any, individuals who have enacted these laws. I don’t know if you’re able to do that.
    Mr. Morin, you’re nodding in approval.
     We met with them when we discussed Bill C‑63.
    Clearly, I would recommend the head of the United Kingdom’s Office of Communications, or Ofcom, then the Australian eSafety Commissioner, and probably officials from the European Union. Those are the three jurisdictions that have made the most progress on this issue.
    It still took time to implement the Digital Services Act, or DSA. I don’t know what evidence the countries have on the matter.
    As for Australia, we won’t have any evidence for several years concerning the positive and negative effects of prohibiting the creation of accounts on certain social media platforms before the age of 16.
    That said, I would invite you to meet with these three representatives. Certainly, they have a lot to say, a lot of evidence, and very interesting measures. I could find their names and pass them on to you without any problem.
     Ms. Carignan and Mr. Bridgman, would you like to add something?
    Yes.
    I agree with Mr. Morin: England, the European Union and Australia are models we should consider.
    Moreover, if we follow an existing model, the pressure shifts to the platforms, which have already adapted to that model elsewhere. There would therefore be less pushback here, since we could show that it is already being done elsewhere. In other words, it shows that it can be done.
    It’s good to draw on countries that have already implemented policies, because the platforms can no longer absolve themselves and say that they’re unable to do it.
    I don’t know if you’ve met my colleague Pierre Trudel, a media rights expert in Quebec. It would be interesting to meet him to see how these measures can be applied in Quebec. In addition, I would recommend different actors, such as As de l’info or representatives from media education initiatives.
    Furthermore, another element had already been discussed in relation to the regulation of platforms, namely the importance of establishing an online observatory to monitor what’s happening and proactively prevent issues, misinformation trends, hate content and new movements. It’s about being more proactive. We should look into this area to see how to set up an observatory that monitors and acts a bit more proactively and preventively.
(1835)
     Mr. Bridgman, what do you think?
    Thank you, Mr. Généreux. Your time is up.

[English]

    Mr. Al Soud, you have three minutes.
    Thank you, Madam Chair.
    Mr. Cooper, I made reference to parasocial relationships earlier, and I've asked you so many questions that I'm sure this is starting to look like one. I hope you'll forgive me.
    You spoke of the harmful impact of excessive screen time. Increasingly, screens are part of everyday life. They are at school and everywhere we go.
    How should Canada balance screen time guidelines with the reality that schools, social life and extracurriculars now operate online?
    Thank you for your question.
    I want to address that it is an addictive behaviour, as your colleague indicated as well. One thing we know is that one addictive behaviour can be a catalyst for another addictive behaviour. We see, for example, that individuals who spend six-plus hours on screen time are twice as likely to show a high risk for alcohol consumption, cannabis consumption and a host of other addictions. This really does come down to how it can break down your ability to regulate yourself in the face of these addictive sorts of properties.
    What I would work on is the fact that we don't, as your colleague also indicated, have really good social media literacy, or even media literacy, anymore. Earlier, I indicated that Ontario didn't have any. In fact, Ontario has it in grade 10, but it is terribly outdated. We know that youth are consuming this sort of content significantly younger than 14, which is grade 10.
    I think the first thing we can do is start introducing literacy at younger ages and explicitly in curricula. I do agree with the comment that it has to be younger. I also believe very strongly that we're not going to beat all these social media companies. We're not going to be successful with the one avenue of simply trying to remove access. We're not going to be successful, because individuals are going to find a way to get access to those sorts of things. It's a combination of reducing access so they don't have the ability to do it six-plus hours a day and supporting digital literacy campaigns.
     I do agree with my colleague, especially about trying to find ways to demonetize. I know that the algorithm functions in a way that allows them to have local tweaks to countries, even on a regional level, so there's no reason whatsoever that they couldn't have a Canadian-specific version that would necessarily identify a group of particularly harmful activities. We could identify that it is primarily targeting younger adults.
    On a lot of social media platforms, you can't target children, but you can target ages 18 to 24, and that same group would overlap with a lot of youth who would have this content. You could find ways to microtarget it to, say, 18 to 24 on issues around anything that might lead to body dysmorphia such as makeup tutorials. You could create categorizations of things that would necessarily be demonetized in some way in Canada.
    If you put a suite of things like that together, that's your best approach for addressing these things, because you're not going to get it down to zero. From my perspective, it's really about trying to remove that exceptional six-plus hours that individuals and young people are getting, or even three. There are risk factors at over four hours, so we need to get it down below that threshold.
    Thank you very much.

[Translation]

     Mr. Champoux, you have the floor for two minutes.
    Thank you very much, Madam Chair.
    We really have an exceptional group of witnesses today.
     I have the impression that, for years, our society has allowed hatred in all its forms to spread by not regulating digital platforms, among other things. When we tried to do it, we ran into all sorts of obstacles. We may be at a point where we have to tackle a huge challenge head-on. We can conduct studies like this, each on our own, but I feel that we will not make any progress unless we present a united front.
     Don’t you think we’re at the stage where we all need to take a step back as a society and set up an independent commission, similar to what was done for the future of telecommunications and communications with Janet Yale’s report at the end of 2018‑19?
     Isn’t it time to take a step back and set up a study group to see how society should position itself with respect to these major issues, which have and will continue to have enormous repercussions on the future?
    What do you think, Ms. Carignan, Mr. Morin and Mr. Bridgman?
(1840)
    I think it’s time for you to take action. In fact, there are many studies and tests that have certainly been done in other countries, but also in many states in the United States. The information is out there. It’s true, it’s time to tackle the huge challenge head-on. If that’s the most effective method, go for it.
    However, for now, we need to find a way to tackle this issue as quickly as possible, because the rest of the world is moving forward on this issue. Here in Canada, I feel like we’re trying, but not enough. We have reached a point where there is not enough protection for minors.
    This is where we are, and we need to take action.
     I agree.
    They say you have to eat an elephant one bite at a time, but maybe you should take big bites. Indeed, inspiration can be drawn from models that exist elsewhere to do better. We may need to collaborate more with other countries, talk to them, see how we can draw inspiration from their practices and show that it’s feasible.
    I think we should tackle the problem on two fronts. First, we need to act quickly to protect young people and regulate the hate speech, the violent and extremely harmful content that we discussed today. Next, we need to think more broadly about how to continue monitoring content, implement initiatives that will be preventive and continue to gather evidence, as my colleagues mentioned earlier. Furthermore, there’s a lack of data on the online practices of young people, among other things. I believe we must continue to act on both fronts.
    It’s relevant to establish a body that could continue to reflect while acting quickly.

[English]

     Thank you again to all of our witnesses today. This has been an excellent and very interesting panel. We've learned a lot.
    If there is anything you haven't mentioned, anything you weren't able to get on the record or anything you think of later, please send it to us via the clerk. We can use that as we discuss this study and as our analysts put together a report for us. Thank you again for your time.
    This meeting is now adjourned.