:
I call this meeting to order.
Welcome to meeting number 18 of the Standing Committee on Canadian Heritage.
Before we begin, I ask our two in-person participants to look for the green card in front of you. There are guidelines and measures in place to help prevent audio feedback incidents and protect the health and safety of all participants, including the interpreters. There is a QR code on that card, as well, if you need further instruction.
Pursuant to the routine motion adopted by the committee, I can confirm that all witnesses have completed the required connection tests in advance of this meeting. We do have some witnesses online today.
Welcome. Please wait until I recognize you by name before you speak. All comments should be addressed through the chair.
Pursuant to Standing Order 108(2) and the motion adopted by this committee on Wednesday, November 5, 2025, the committee is meeting to study the effects of influencers and social media content on children and adolescents.
With us today is David Morin, full professor and UNESCO Chair in the Prevention of Violent Radicalization and Extremism at the Université de Sherbrooke. From the Media Ecosystem Observatory, we have Aengus Bridgman, director. Online, we have Michael Cooper, vice-president of data and partnerships at Mental Health Research Canada. We also have Katie Paul, director of the Tech Transparency Project.
Welcome.
I will note that we have another witness, Marie-Eve Carignan, also from UNESCO, joining us at 5:30. We will give her five minutes to speak when she arrives.
Each delegation and each witness has five minutes for opening remarks.
We'll start with you, Mr. Morin. You have five minutes, starting now. You have the floor.
:
Thank you very much, Madam Chair.
Thank you for inviting me and giving me the opportunity to speak to you today about a rather specific aspect of social media, namely the link between exposure to hateful content and violent extremism, one of the dark sides of social media.
My daughter would be very upset with me for not starting by noting that social media has many virtues. Overall, it is often very helpful and great for young people. However, today I’m going to talk to you specifically about one aspect, namely the link between social media and violent extremism.
I will start with three very recent examples in Canada.
The first is the arrest of a teenager in Nova Scotia who was charged with child pornography offences, among other things. He was part of what’s now called “nihilist extremism”, which glorifies violence and cruelty by using references or codes related, among other things, to Nazism and jihadism. That teenager also belonged to an online movement called 764, which recruits young people to commit violent acts, including mutilation and suicide.
I’m mentioning this example because, obviously, the 764 movement recruits a lot of people on digital social networks, and these individuals are getting younger and younger.
The second example is the arrest of a young jihadist this summer in Montreal. Radicalized online in connection with the Israeli‑Palestinian conflict, he pledged allegiance to the Islamic State and was preparing to commit a violent act. It reminds us that the virtual caliphate of the Islamic State and its online communities play an important role in this terrorist organization.
The third example is that of Patrick Gordon MacDonald, alias the “Dark Foreigner.” He was sentenced to prison for charges of terrorism and hate propaganda. He was promoting a violent far‑right ideology for the neo‑Nazi accelerationist group Atomwaffen Division. Here too, the Atomwaffen Division was an extremely active group online, which has also been added to the Canadian list of terrorist entities. This reminds us that, long before many other groups in the United States, the far right understood the enormous potential of social media to spread its extremist messages.
I’ll talk to you very quickly about the Internet today, digital social networks and violent extremism. What are the current trends?
I would like to emphasize three points.
First, it should be remembered that extremist actors on social media know how to exploit periods of polarization and attempt to recruit people by targeting younger and younger individuals. There is therefore a trend toward younger people becoming radicalized through the Internet in an increasingly short period of time.
Next, it’s important to know that mainstream platforms, where we find radical but nonviolent content, are being used as a gateway to then direct young people toward much more violent content on different platforms. That’s an important point.
Finally, and I want to stress this point, today, video games with online connectivity features are being increasingly used to ultimately try to recruit young people into all sorts of violent extremism. This last element obviously relates to the issue of generative artificial intelligence, which will multiply the possibilities for these extremist groups to radicalize young people.
I wanted to talk to you today about the results of systematic reviews on the potential effects on young people of online exposure to hate. What does the evidence say? It says that exposure to extremist content online today does indeed seem to be linked to the adoption of radical attitudes, regardless of the type of media in question. Exposure to extremist content online also seems to be linked to the adoption of extremist behaviour, not only in the virtual world but also in real life. It’s important to note that. Finally, I would like to add that exposure to hateful content on the Internet is not the only factor. We must also consider the other factors in an individual’s life that may lead to radicalization, such as personal crises, mental health issues, belonging to a radical group, etc.
Indeed, the evidence reminds us today that there are repercussions on social attitudes when people are exposed to hate speech. It increases negative attitudes toward targeted groups; it decreases general positive attitudes; and it has potential effects on mental health, and societal consequences on trust between social groups, aggressive behaviour or the normalization of violence.
I will note certain elements. According to Statistics Canada, in 2022, 71% of young Canadians aged 15 to 24 reported having seen hateful content online in the previous 12 months, compared to 49% of the general population. According to the police, more than a third of the victims of online hate crimes were under the age of 25. The Royal Canadian Mounted Police, the RCMP, also noted that, between April 2023 and March 2024, 25 people were charged with terrorism, and seven of those accused were minors. In that context, obviously, the status quo is not acceptable.
I repeat, it’s not necessarily about having an approach that’s solely punitive and overly restrictive. There are examples elsewhere: we can see what’s happening right now in Australia, the United Kingdom and Europe. We need to take matters into our own hands and do so in a targeted manner. What we stress above all is first placing the primary responsibility on platforms to regulate harmful online content. It is then up to other actors in society to work on prevention and awareness.
In conclusion, Madam Chair, I would like to note the importance of accountability for both women and men in politics. It is their duty to make responsible statements that do not fuel the growing polarization in our society; this obviously does not prevent politicians from addressing sensitive and controversial issues and engaging in politics, since politics is all about debate.
Thank you for today’s initiative, which is undoubtedly another step on this long and winding road.
Thank you.
:
Thank you, Madam Chair.
Thank you for the invitation to speak here today. I want to open by saying that my expertise is as a scholar of the information ecosystem and the overall information environment. I'm not an expert on children or youth. Nevertheless, I find our studies of influencers and the information environment very pertinent for this study and very pertinent for this committee.
Recently, we ran a study looking at the rise of influencers in Canada. We now know that in the youngest cohort we were able to survey, over four-fifths of young Canadians, 81%, typically get their news from influencers. They get their news, their political information and their entertainment content from influencers. That is the base of their political and social life. This has enormous repercussions for our political reality and for the training of youth in the political process.
I want to highlight two major findings from that recent influencer study that I think are particularly pertinent. The first is the way in which influencers spread and come to appear on the screens of youth here in Canada. The primary way in which influencers reach new listeners, new adherents, is through the recommendation algorithm. It is not through explicit preference. It is not through social relationships. It is through the algorithm. In your day-to-day behaviour on social media, it is the platform itself that is determining what you see, and not any intentionality. This reduction in intentionality, and the way in which particularly youth consume and think about information, is enormously important. We haven't really appreciated the consequence of it.
If we think back to 20 to 30 years ago, the way you chose to get your information and where you got your information was very much about a choice that you would make. You would go out and you would make a decision for a paper, for a TV channel or for people to talk to. It is not so today. For the youth of today, your choice is the platform. In some ways, that is determined by your social status and by your friend group. Then, once on the platform, your choices are much less important than your behaviours and your actions that you don't even know you're necessarily engaging in. That loss of intention is enormously important for political and cultural socialization.
Number two is that influencers are now central to the political conversation. They make up the majority of engagement. The majority of Canadian eyeballs that see political content online are now seeing influencer content. We have a system, a set of norms and rules, around speech, around disclosure and around transparency that grew up in an era when influencers didn't exist and when it was unimaginable that a private citizen with a telephone in their bedroom would be able to reach millions of Canadians, but that is the state we are in right now. Our regulatory approach, particularly during elections but outside elections as well, is completely unprepared and is ill-adapted to the new reality.
I have three recommendations for this study. First, this is what it is. This is not a phenomenon unique to Canada. Influencers and social media are now the primary sources of social life for youth. Any policy or approach that doesn't take adequate account of that is doomed to fail. We need to operate within that regime, within the understanding that youth like their social media and want to continue to use it. We can better protect them, and we can better structure that space, but it is what it is.
Second, algorithmic discovery is the key mechanism and the key way this stuff is shared. That algorithmic discovery is not a neutral process. It is a process by which platforms have made a series of choices about what content gets amplified and shared and which influencers are seen. The idea is that they would like you to think that there is no decision, that it is some black box that has no control, but that is not the case. There are decisions behind that. That is one of the key levers available.
The last thing I want to leave you with is that, look, the line between entertainment, culture, community and political information has never been blurrier. For youth today, in their day-to-day consumption of information, politics, entertainment, culture and TikTok dances are all intermeshed and together. That creates an environment where they can become incredibly informed, but it also creates some dangers. Some of these dangers are that our media literacy training programs, the way we have taught people to consume news in this country, are completely ill-adapted for an environment where all of this is blended together. I urge this committee to reflect on and to account for that.
I'll leave it there.
:
Thank you for having me here, and my apologies that I could not be there in person. I very much would have liked to be.
Again, my name is Michael Cooper. I'm the vice-president of data and partnerships here at Mental Health Research Canada. We have been funded to collect ongoing trackers of mental health indicators since 2020, as a pandemic response, and since that time we've evolved to include a number of cross-sectional issues that intersect mental health. I can share some of them here today.
Specifically, I want to share a few things we've learned about those aged 16 and older. We don't collect any data for anyone under the age of 16, although I can speak a bit to other research on that topic. In particular, I want to mention that we've been tracking online gambling among youth. The algorithms are surfacing a lot of content on that particular issue, and that is driving problematic gambling.
I also can speak a bit about screen time. One of the things we've been tracking is the volume of screen time. We've identified that for a number of youth—essentially, for anyone who consumes more than six hours of personal screen time per day—there are significant mental health implications, from anxiety and depression to suicidal ideation. We've published some reporting on that. Of course, we've seen that youth aged 16 to 24 are the group most likely to spend more than six hours a day on screens, so they would be the ones most impacted by these indicators.
The other issue I wanted to speak about a bit is how we have tracked social media specifically. We've tracked what youth are doing on social media: what sorts of activities; cyber-bullying; what their experiences have been along the lines of FOMO, the fear of missing out; and whether or not they're experiencing issues around comparing themselves to others as well. I've put together a deck and have sent it along to the group if you're interested in asking any questions about that specifically.
I do want to speak on a few other, more general issues around mental health that are relevant to social media. We have been tracking long-term trends in mental health since the 1970s, with indicators that date to that time. They're not clinical in nature, but they do track general mental health. We saw a significant shift around 2004 for a lot of these youth in terms of their mental health, which would correspond to when a lot of these smartphones ended up in individuals' hands. We saw another movement in 2020, through the pandemic, and no recovery since that time. I want to highlight that this is another area of research we are privy to as well.
I want to highlight the social connection aspect of it. Individuals who are more connected to their community, to family and to loved ones are far more likely to have positive mental health indicators and to seek out help. We do know that for a number of individuals, the experience they're having online through social media is shallow, and that lack of engagement with others could be one reason why their mental health is poor if they're spending so much time on social media.
The other thing I wanted to speak to very quickly is this idea of influencers. I do not track influencers. However, I am an avid consumer of research, and I know that a tremendous amount of research exists on understanding how youth process advertising in particular. We have this from past studies by Concerned Children's Advertisers.
We have a great amount of data on this. We know that youth are not fully developed in terms of their ability to discern between informational content and selling content. We also know that this becomes especially difficult when the line between the two is blurred: if there's not a price tag at the end of an ad, most youth would not be able to identify that it is in fact an advertisement. When I think about influencers, I'm thinking about the fact that a lot of these influencers are being used to sell products to youth, essentially circumventing the kinds of standards that Concerned Children's Advertisers established over a period of time.
I'm more than happy to speak to any of these topics. These are areas where we have data and expertise; I probably have 300 stats on these issues. I don't want to just throw numbers at you, but I can assure you that we are tracking these and other issues, such as body dysmorphia and eating disorders. We do know that about one in four young women is experiencing a high risk of eating disorders, and a lot of that ties in to social media; we're seeing connections with high social media use. There are a lot of very troubling statistics around what's happening in mental health as it pertains to social media for youth.
Thank you.
:
Thank you so much for the opportunity to speak with you today about the impacts of social media on young people.
My name's Katie Paul, and I'm the director of the non-profit Tech Transparency Project in Washington, D.C. We are a non-partisan research organization that investigates the influence and impact of big tech on the public.
Our research has found that big tech platforms have not only amplified harm to children, but often profited in the process. Recent reports from a multi-district lawsuit in the United States revealed that big tech companies like Meta and YouTube are internally aware, based on their own research, of the potential harms of their content to children. That research was then buried by the companies so they could continue to profit from that harm.
The revelations from the lawsuit track with years of research from the Tech Transparency Project. Our investigations in 2021 and 2022 found widespread drug trafficking on Instagram that was algorithmically pushed to accounts for users under the age of 16. Meta's platform design and algorithms make it easier for kids to contact drug dealers than to log off the platform. The study found that while it takes only two clicks for a teen to find and connect with a drug dealer on Instagram, it takes five clicks to log out of the platform.
Instagram's automated technologies also undermine the company's own efforts to address drugs. For instance, while Instagram banned hashtags for popular drugs like MDMA, its search autofill recommended alternative hashtags for those drugs, driving kids directly toward dealers.
The problem isn't just platform design. Meta also directly profits from pushing drugs to users on its platform. A series of TTP investigations found that Facebook routinely approved ads pushing pill parties, alcohol, gambling and vaping, as well as extreme weight loss to kids under the age of 18.
Meta's primary business model relies on advertising. It's the company's main product, but it has little oversight and quality control. Meta does little to implement safety when it comes to ads. In July of last year, our organization published a report that found Meta has run hundreds of ads for deadly drugs like cocaine and fentanyl. These ads are not simply content posted by third parties. Meta has reviewed, approved and is profiting from these advertisements. These kinds of advertisements continue today, as was reported by the Toronto Star in a recent investigation.
The problem isn't limited to ads for drugs. In October of last year, TTP found that Meta was also running hundreds of ads for weapons, in some cases amounting to international arms trafficking. These were not ads for big box stores or local gun dealers. They were illicit ads selling ghost guns, fully automatic weapons and illegal gun parts. These ads not only help put illegal trafficked weapons into the hands of people across North America, but they also undermine the business of legitimate licensed gun dealers.
Ads for both guns and drugs follow the same pattern. They feature an image or a video of the illicit content and link to a private messaging service like Telegram or WhatsApp, which is also owned by Meta, to conduct transactions.
Meta is perhaps the most critical piece of this puzzle. These dealers buy ads from Meta to get their product in front of as many people as possible. They could not attain this reach without the help of Facebook or Instagram.
While these social media and tech companies are aware of the harms of their platforms, they don't take action to mitigate those harms until the potential consequences have been raised publicly. Companies like OpenAI, which is facing a major lawsuit over its AI chatbot's role in a teen suicide, created a teen version of the chatbot only after it was sued by the family of Adam Raine, after ChatGPT provided instructions on how to make a noose and encouraged Raine to take his own life.
In 2024, Meta launched its teen Instagram accounts, holding up the feature as a tool to help parents keep kids safe on a platform the company had failed to effectively moderate. The move was largely part of a broader effort by Meta to stave off the implications of civil lawsuits and a wave of pending regulations from lawmakers in the U.S. and abroad. What Meta had pitched as new features to keep teens safe was simply a repackaging of things the company had already claimed it was implementing years earlier. TTP recently tested these accounts and found that content Meta had claimed was barred from teens—notably graphic content and fight content—was served readily to teen accounts despite the heavily promoted claims of protections. This continues today.
As companies like Meta have come under pressure, they have funded organizations like ConnectSafely and the National PTA to launder their narrative through paid allies.
These social media companies and chatbots are among the most well resourced and technologically advanced in the world, but those profits have been built on decades of harm to children, which the companies are aware of but take no action to address unless faced with the potential of repercussions.
They have the capital and capabilities, but have proven time and again that they cannot be trusted to act in good faith. It's imperative for national governments to effectively regulate these companies for their role in profiting from the harms to the most vulnerable population.
Thank you very much.
:
Thank you very much for this question. Can you give me half an hour to respond?
Yes, absolutely. In the ecosystems we’ve been monitoring since the attacks of October 7, 2023, we have indeed observed a convergence of hate speech directed at the Jewish community, which is associated with the Israeli government without any nuance.
You refer to influencers. I’ll avoid naming names, but there are obviously radical Islamist groups in Canada and some prominent figures in Quebec who are trying to take advantage of the feelings of injustice and anger among some young people concerning the situation in Gaza to promote a narrative that emphasizes the supposed incompatibility between Islam and western values.
Obviously, this kind of speech tends to radicalize some of our youth, which is why it’s important to have extremely nuanced political discourse. Again, I don’t want to name names, but some groups do have a storefront and are present on major social media platforms, while others are on much more alternative platforms and still reach a fairly significant, albeit targeted, audience. I’m not sure, if I named them, that it would necessarily resonate.
As for antisemitism, as you know, it did not originate with the attacks of October 7. Antisemitism has been present in our societies for a long time, but this type of conflict indeed contributes to reactivating it. In my opinion, we should better regulate hate speech because—as the statistics show—hate speech against the Jewish community has significantly increased in recent years. It does not seem to be weakening. It has stabilized, but not actually decreased.
I hope I’ve answered your question.
:
Social media does several things. First, it obviously allows audiences to be reached and targeted, and I think my colleagues have said that well. It’s therefore possible to go into virtual spaces where we know, for example, that young people will be playing online war games, etc. That’s one example.
It’s known that they can be reached. Through Internet messaging functions, it’s possible to contact these young people, quietly ask them questions about their life experiences and their political views. Indeed, that’s where the most vulnerable individuals are identified and gradually radicalized.
We’ve seen it a lot from the Islamic State. I would say that it really invented a kind of banner that, even on digital social networks, made it possible to pledge allegiance to that group and to commit, without ever having been solicited to do so, a knife attack, a vehicle ramming, etc.
Again, social media is one tool among others in the tool box of terrorist organizations. Many people go on social media and, fortunately, do not become radicalized; I wouldn’t want anyone to think otherwise. On the other hand, we see that among young people who are being radicalized, there is indeed a very high consumption of digital social networks. There’s no doubt about it, and it’s a consensus among researchers working on issues of violent radicalization.
So it’s this ability that social media have to reach people. Obviously, there are also all the encrypted platforms that allow for the exchange of information. In addition, there are also all the sources of funding today using cryptocurrency, which make it possible to fund terrorist organizations or groups.
So it’s an extremely useful and powerful tool for terrorist organizations.
:
I'm happy to get started on this.
As you know, gambling is illegal for those under 18. We did not ask about that specifically, but what we can identify is that there is a tremendous amount of bending of the rules. Ontario is the province that has legalized single-game sports betting, but we're seeing it coming in from every province, where, of course, it's not legal. In some provinces, they're running ads saying, “Don't gamble on Bet365, but bet on our local platforms.”
There's some great work coming out of UBC and the centre for gambling research there; Dr. Clark is his name. He's done a lot of work looking at the neurology of what's happening with youth, specifically with gambling-like activities. Think of going into Roblox, buying a randomized loot box and then getting some random item inside it, or going on Call of Duty or some other video game and getting a randomized loot box. It's essentially the same dopamine hit you get from gambling. You're essentially participating in a gambling-adjacent activity, and these are available at any age.
There are even reports, which we're seeing in some other countries as well, of youth taking artificial currencies like Robux and actually being able to gamble that currency. It's not regulated because it's not a legal currency, so there are lots of ways that organizations are getting around this.
We do know that youth are being inundated with ads for these sorts of things, and it is essentially rewiring their brains in terms of both their expectations and what they're prepared to do. It's basically, again, gambling in all but name.
I'll stop now because I'm sure other panellists want to answer that as well.
:
Thank you for your question.
The first criterion is not to look at the number of hours the child spends on social media, because it’s not necessarily a good indicator. It may be an indicator of a mental health issue in the child, but not necessarily that they’re becoming radicalized.
I’ll be careful with what I say because there’s no established profile. The important criteria or indicators that we observe include the young person’s social isolation, their psychological distress, a complete change in their habits, different friends, intolerance to any contradiction in a conversation, etc. Again, there are many false positives in these criteria that I’m citing. Anyway, I am the proud father of an 18‑year-old young woman. What I’m describing to you could have happened in my life, and I don’t think my daughter is becoming radicalized.
We’re looking at violent extremism, so the act of committing violence. What should concern us beyond that is what happens beforehand, which is why I invite us to take a step back. I believe that one of the current harmful effects of social media—I think my colleagues have said it well—is that the business model of a number of social networks, in the sociopolitical space, focuses on emotion, conflict and confrontation, and stifles the debate of ideas. It also ends up trapping people in echo chambers. I think that, collectively, we must recognize that this is a problem.
On Monday, I was testifying before another committee on the issue of anti‑feminist discourse. Take the example of prominent influencers like Andrew Tate. The spread of this type of discourse on social media means that, today, many young men subscribe to an extremely unequal view of the relationships between men and women. They start their social, romantic and sexual lives with completely preconceived ideas that do not match those of young women and their expectations. I provided some quite shocking statistics on Monday. With reservations, I think more than 40% of young men believe, for example, that feminism is a strategy by women to control society. About the same number think that equality between women and men has been achieved today and therefore that feminism, understood as a vision for the equality of men and women, is no longer useful or relevant.
You see, beyond the violent extremism that our security and intelligence services deal with, if we take just one step further, we collectively face a bigger challenge with our youth. I do not believe in the good faith of a number of platforms. Remember the Christchurch attacks. I won’t describe them to you again. After the Christchurch event, platforms and governments came together to try to remove content. I remind you that a man was able to massacre 51 Muslim individuals live online. It was filmed, so the massacre could be watched.
There have been advances, but in recent years, we have seen a regression. A number of platforms that had improved are now backtracking. You know them. This includes Twitter, where there is less and less moderation.
Excuse me, I gave a long answer.
:
Being on the Canadian list of terrorist entities already gives law enforcement additional powers to require platforms to provide information. As I mentioned earlier, the problem today is that there are small movements of influencers who are recruiting. It’s no longer always the large organizations with a storefront that can recruit people. I think that’s the problem today.
I must inform you, however, that I was part of the expert committee that advised the Government of Canada on the moderation of harmful online content for Bill .
I think that bill was a good starting point. It was a bill; it needed to be critiqued and improved. But it was a good working draft, one we could nonetheless move forward with.
That working draft did three things.
The first is that it made platforms responsible for removing content and for demonstrating that they had indeed removed it.
The second is that it created a commissioner who could verify that the content had indeed been removed. It was also supposed to provide data to researchers to help them better understand how content moderation works.
The third is that it created an ombud position, which is very important because we all value freedom of expression. It was important for people who felt wronged by sudden removals by the platforms to have access to the ombud and to have their posts reinstated.
I think the bill was going in the right direction. The European Union does it. The United Kingdom does it. Australia does it. On the other hand, Canada is in a much more complicated situation because the United States, which is next door, is still putting pressure on Canada not to impose restraints. However, I believe it will now become a matter of public health and social cohesion.
:
I would like to add a slight nuance regarding the data.
For our part, we just finished a survey of over 6,500 people.
When we say that young people are on the right, we need to be careful. First, there’s a big difference between young men and young women. That’s major. The answers are very gendered on these questions. Young women are rather left‑leaning, clearly left‑leaning, on a number of social issues.
Then, when we say that young men are on the right, it’s not the majority of young men. There’s an increase in young men who are on the right, but they’re not yet in the majority. That’s very noticeable. I wanted to give these nuances.
Now, we do see it on digital social networks. I mentioned earlier that I also think that, due to the way digital social networks operate, they’re more favourable to a number of right‑wing influencers in their approach to political issues. I say this here without any political or partisan judgment.
Moreover, I also think that the American right—as I mentioned in my introduction—quickly understood how to effectively use digital social networks and make relevant use of them to reach young people. We saw it during Mr. Trump’s election campaign with the late Charlie Kirk, who was indeed able to mobilize a part of the American conservative right.
:
It's certain that there is a wide variety of perspectives. Influencers are a very diverse group.
In a recent study, we examined the Canadian influencer map. We were looking at over 1,000 prominent influencers in the Canadian context. One of the striking things about the Canadian context is that there's absolutely an ideological divide, but the largest divide we found in our analysis was about the type of content people focused on.
What we were trying to model is how often people engage in conversations with one another. Having a perspective is one thing. We all have a perspective in politics, but are you actually talking to somebody else about issues you care about? Is there that dialogue happening?
What we found was that in the Canadian context there certainly is a distinctive, smaller right-wing cluster that does not really have a parallel on the left. There are a few, but it's much smaller. The vast majority of engagement and attention is this core set of influencers who are really responding to the day-to-day of politics. They are really generating that conversation from their perspective, but often bringing nuance and interpretation and not bringing that ideology-forward approach.
In some ways, this was very heartening for us in the study. Over and over again, social media studies have demonstrated deep polarization in online spaces. That absolutely exists. There are echo chambers, but in the Canadian context, it does seem that there is this core—I'm sure many of the names are familiar to this committee—that actually just responds to the political news of the day and shares their hot take.
:
Obviously, you saw this summer that four people were arrested by the Integrated National Security Enforcement Team, two of whom were members of the Canadian Armed Forces. This is one of the main sources of concern for security services in Canada at the moment. Right‑wing and anti‑government extremism interact quite a bit. Currently, it's a concern for both national security and public safety, as there's a lot of resistance toward institutions and so on. It's a movement that has obviously been very present in the Western world for a decade.
I would say that in North America, right now, it’s as concerning as the jihadist threat for our intelligence services. It’s difficult to deter these elements.
I would like to add one point to answer your question and continue the thinking on the previous question.
When we look at the radical ecosystems and influencers, we realize that to a degree, it mirrors the two solitudes in Canada: an English‑speaking ecosystem and a French‑speaking ecosystem.
The English‑speaking ecosystem is heavily influenced by major American influencers, like the Tucker Carlsons of this world, among others. They really have a very significant influence.
The French‑speaking ecosystem, particularly in Quebec, is more linked to French influencers. During the pandemic, we saw a lot of exchanges, and not just virtual exchanges. Today, there are influencer invitations on both sides. These two ecosystems represent a very important trend.
The second trend, and I’ll stop there, is the desire of alternative influencers to reach a large audience, nevertheless. Obviously, I’m talking more about radical influencers who seek to enter the public space.
The member was saying earlier that he watches a lot of TV, which I do too. I think these influencers still have a desire to exist in the public space. They’re trying to get invited to more mainstream platforms, the public platforms, and we’ve seen that a lot since the pandemic.
:
The social media platforms will stand here and say, you can touch nothing about our business model. Nothing about our business model is adjustable to any national government. We are above any single national government. They will say that repeatedly and they will say that with regard to ads. They will say, this is an international problem. This is tricky. You can't.... Okay. Fine. That is their perspective, but it is not the perspective that the national government has to take.
The stuff that platforms are able to do, the ads that they are able to promote, the complexity of their systems, has made them feel invulnerable and ungovernable in this day and age, and I think countries around the world have struggled with governance. We are starting to see some of these pieces fall into line. Next week, in Australia, an under-16 social media ban is going into effect. This is very far-reaching and it's going to be very interesting, and I strongly encourage the committee to evaluate how that goes. It's very pertinent to this subject of study.
These things are absolutely controlled by dials, switches and decisions within these platforms, and they are very responsive to different governments who impose fines, who impose incentives. I think this is the one thing that I really would like to emphasize. The way internally these companies are going to think about this is this: What is the cost to implementing a solution here? What is the cost to an algorithmic change? What is the cost for better screening ads? What is the cost for changing things? How much engineering time and effort is that going to take? That is a business decision. It's a probabilistic business decision. As a regulator, what you have to do, what you have to think about, is how you change that knob in favour of youth health. How do you change that knob in favour of democratic interest? How do you impose on these platforms a cost associated with bad behaviour?
I just want to finish this remark by saying I am old enough to remember Saturday morning cartoons. There were ads on Saturday morning cartoons. If some of the ads that occurred on Meta had been shown on Saturday morning cartoons, those TV stations would have been shut down. There would not have been this hand-wringing. If the TV station said, “No, that's just our business model; we can't do that,” that would have been completely unacceptable, but for some reason, with these large tech platforms, we have ceded this ground. We've said that it's too complicated. I think we really need to remind ourselves that actually that's a choice. They are governable. They operate within our boundaries. They access an enormous Canadian market. It's incredibly profitable for them and they will respond to economic incentives to improve their behaviour.
:
Thank you, Madam Chair.
I thank the members of the committee for inviting me to appear before them today.
Indeed, I speak on behalf of the UNESCO Chair in the Prevention of Violent Radicalization and Extremism, and in my own name, as a full professor at the Université de Sherbrooke.
We are particularly concerned about the misinformation that affects young people through online content. This affects not only young people but also the entire population. I think it’s a global issue that needs to be addressed for both the population as a whole and for young people.
Several forward‑looking reports indicate that disinformation and misinformation are major issues, in both the short and medium terms, for our society and for democratic societies generally. We can think of the World Economic Forum, which published a report in 2024, or of Policy Horizons Canada, which really highlights the major risks to democracy: people will no longer be able to distinguish truth from lies, or from the hateful and polarizing content to which they'll be exposed online. Furthermore, the rise of artificial intelligence makes it even harder for people to distinguish what's true from what's false.
Young people are directly exposed to this disinformation, often without context and without a perspective that allows them to tell what's false from what's true. They don't know whether economic, political or malicious interests underlie the content they're exposed to. Many young people circumvent the platforms' rules and create accounts on platforms that are not suitable for their age. That's very difficult to monitor and supervise. I'm sure we can come back to how we need to regulate young people's presence on these platforms.
Parents try to control the online content young people are exposed to, but it's a challenge: parents don't always have the tools to properly guide young people's online practices. That kind of supervision is very difficult, as it can make young people feel they're being watched, which creates family tensions even as we try to get a good overview of what young people are doing online. Disinformation poses risks for society as a whole, both for the young and the not‑so‑young.
The report entitled “Fault Lines” from the Council of Canadian Academies, which I had the pleasure of participating in as an expert, indicated that there were consequences at different levels, namely for society, communities and individuals.
From a societal standpoint, disinformation can lead to political polarization, risks of democratic drift, a decline in public trust in political, economic, media and scientific institutions, and inaction on various issues such as climate change.
From a community standpoint, this can lead to a low adherence to public health measures, resulting in risks of epidemics and preventable diseases, vaccine refusal and a significant increase in health care system costs.
From an individual standpoint, this can lead to health risks, even the risk of death, due to poor decisions and money spent on products that are dangerous or even ineffective. This also concerns young people who are at risk of adopting these behaviours to which they’ve been exposed online, putting them at risk from an individual, community or social standpoint.
Disinformation and the role of disinformation actors, particularly influencers, are a matter of concern for all Canadians. The Digital News Report 2025 highlights that 54% of Canadian respondents consider influencers and online personalities significant threats when it comes to misleading information online, which poses a risk to our populations and to the youth exposed to it. We can also think of the 2024 NETendances report, which notes that 59% of young people aged 18 to 34 follow at least one online influencer, and that 45% of young people say they spend more than three hours a day on social media.
In addition, data on young people under 18 is highlighted this month in a report from Sidaction in France, which shows that a large majority of young people are aware of and follow online masculinist influencers.
Media literacy is becoming an important skill for protecting young people from this disinformation. It is not a uniform skill, however. It is taught somewhat sporadically, as teachers do not all have the same capacity to educate young people about media literacy and the risks associated with the platforms.
As part of the work of the UNESCO Chair in the Prevention of Violent Radicalization and Extremism, we’re particularly interested in media education and various initiatives implemented by stakeholders, such as the #30sec to check it out grants from the Fédération professionnelle des journalistes du Québec, the Départager le vrai du faux sur le web initiative from the Agence Science-Presse, or the initiatives from Les As de l’info.
We find that one‑time initiatives like these really do have an impact on young people. They help them find more reliable information. This can even have a very direct and effective impact on countering radicalization, carrying a cross‑pollinating discourse into families that helps nuance opinions, even among more radical parents.
Exposure to the right information is important, but exposure to media education also allows young people to make their own choices as consumers of content. At least, it can enlighten them. They can know what content they’re exposed to, namely whether it’s professional or journalistic content, or if it’s ideologically oriented content. They can also know who is broadcasting it. Digital social networks are certainly a place of vulnerability for young people.
We can talk in particular about young people's exposure to conspiracy content and offensive content that can disturb them and leave a lasting mark. Several studies show that young people are even unintentionally exposed to sexual and violent content online. This desensitizes them, and some actors deliberately seek to desensitize them and lead them to adopt radical ideas. Several groups operate deliberately in this way: they use social media and online gaming apps to manipulate and reach young people. Some platforms, like Kick, are used to promote violence freely and make it acceptable to young people. A French influencer died this summer precisely because of practices that normalize violence and encourage its acceptance.
Social media is becoming an environment for recruiting young people. Different extremist groups, jihadists, those involved in organized crime or those focused on sex or violence, target young people online. Cyber-criminal networks like 764 or criminal groups like The Com primarily target young people aged 8 to 17, who are impressionable, to lead them to commit violent acts, self‑harm, torture or kill animals, produce child sexual exploitation material or even commit suicide. The violence of these acts often intensifies over time.
These platforms are sources of harassment and hate speech for young people. We must therefore be concerned about it.
We need to think about solutions such as funding quality media and information education for young people, which will allow them to recontextualize the information they’re exposed to, and media education initiatives, like the one I mentioned and that we studied.
We also need to think about initiatives to provide communication tools to parents to help them better interact with their youth, better assess problematic situations, discuss them with their youth and better evaluate the communication practices of young people.
We need to provide teachers with tools so they can better raise awareness about the media.
We need to better regulate young people's access to platforms and better curb hate speech, cyberbullying and violence, not only among young people but across the entire population, as this is an issue that affects everyone.
Finally, I believe we need to conduct more research to understand online manipulation tactics and the groups that target the general population with hate speech, but particularly young people, so that we can intervene. We really need to continue researching these issues to gain a better understanding of them.
:
Based on that data from 2017—and I'm just going to rely on that—if one million Canadians were to have an eating disorder in 2017 and if one in 10 were to die from that disorder, then about 100,000 Canadians would die from an eating disorder, if those statistics bear out.
I'm sharing this only to say that it's a big problem, and it touches a lot of people. Very often, I think, for understandable reasons, people do not talk about it, but it's an element of what has been discussed at this committee that is really important for us to consider.
The private member's bill I had then, Mr. Cooper, was endorsed by a bunch of folks in the sector. The intention of the bill then was to require commercial content—paid content, advertising content—which was in any way manipulated or distorted, to have a disclaimer on it so that people knew the image was distorted. Specifically in the context of eating disorders, the idea was that a lot of young people, especially, would see images of unattainable beauty and it would contribute to their eating disorder. That's what the experts told me, and that was the intention of the bill.
Mr. Cooper, I don't know if you have thoughts on that type of measure. However, I would welcome your thoughts on that measure or any types of measures you would recommend to help us deal specifically with how social media is influencing young people in the context of eating disorders.
:
When it comes to Instagram, there are several different studies we've done focusing on the drug trafficking on that platform. For instance, we created a teen user who searched for things like the word “Xanax”, and before even fully typing it, Meta's autofill recommendations would recommend accounts called “Xanax For Sale”, “Buy Xanax” and things like that, which would just feature profile photos full of pills. It took very little effort for teen users to connect on the platform.
It's important to keep in mind that the way these companies profit is through advertising, and the way they can get more spend on ads is to keep eyeballs looking as long as possible. That's why you see things like AI slop proliferating: it's one of the ways to keep people looking at the platform, whether in disbelief, shock or genuine interest.
When it comes to the advertising mechanisms, Meta's ad library is one of the ways we've been able to look at the drug trafficking ads, and we actually have a running hashtag on our X account—#MetaDrugAds—where we frequently post videos and photos of the ads while they are actively running after they have been approved by Meta.
The important thing to remember here is that even when the company's executives go before committees like yours and say that it's not allowed and they remove those ads after they become aware of them, they don't refund the drug dealers. They keep the money, and they clean up after they've been caught.
There are also several whistle-blowers recently, regarding issues with children, who have come out from Meta. There was a congressional hearing a few months ago where two whistle-blowers, who were tasked with internal investigations regarding harms to children, detailed how the companies very narrowly directed what they were allowed to research, how they were allowed to write their reports and what they couldn't research, even though they knew it was causing harm to children. Those things are also important to keep in mind.
The advertising mechanism is very important. One of the things about the Meta ad library is that ads that are not labelled as political are only viewable while they are actively running. The exception to this is that in the European Union, thanks to the Digital Services Act, that law has required Meta to leave every ad that runs on the platform available for view in the ad library for a year. That is one of the ways we were able to see the scale of drug trafficking ads on the platform, because at any given time, there is such a high volume of ads that the average researcher is not going to be able to simply scroll and see them. Because Meta is now required by law in Europe, and in the U.K. now as well, as a result of the Online Safety Act, to keep those ads in the library for a year, we can actually see the scale of the harm they're creating, and the fact that they profit directly from that content, don't remove those ads in many cases and let them run to completion.
:
Thank you, Madam Chair.
Thank you to all the witnesses for this fascinating testimony today. It's been really well delivered.
I want to continue on the regulation conversation that MP Diotte was just speaking about. It seems to me that there are three elements here now. We have the possibility to regulate access, which they've done in Australia. Nobody under 16 has access. There's also regulating the content. We've talked about certain users and certain influencers being considered too dangerous, or whatever, and trying to regulate the content from that side. There's also regulating the algorithm. Therefore, it's access, content or the algorithm itself and how it behaves, because that has changed.
Mr. Bridgman, you were speaking about suggested content, and Ms. Paul was speaking about advertising. We know the behaviours of these algorithms. Do you sometimes feel that the access question is a cop-out? You can't touch it, so we just have to say that no one can use it.
I'm trying to figure out what the right balance is here. It is obviously very disconcerting. I have young kids. Sometimes I want to say there's no access, but at the same time, I'm looking at the algorithm and thinking, at the same time, why can't we regulate how that algorithm behaves? We know it's driving people towards radicalization. We know what it's causing. These are facts that we know through academics, through study after academic study. We know about screen time alone and the relationship to mental illness and the addictiveness of the algorithm.
Maybe I can start with you, Mr. Bridgman. Where do you think our weight is best placed from a regulatory perspective—on access, content or the algorithm itself?
:
That is an excellent question. Thank you for it.
As for the platforms, they are here. They are widely popular and widely used by Canadians. In terms of cutting off access for, say, those under 18, we'll see how that plays out in Australia. What I've said, and what others have said or hinted at as well, is that you can tune these algorithms to reduce harms. That is possible. That is a governable fact. That is an available policy option.
We've talked a lot today about the drug example on Facebook. We saw during the last federal election many AI slop ads using political content being promoted as political ads during the campaign. We ran a very small study using vision models, asking: Is this political? During an election campaign, it's governed speech; particular rules apply. The vision model can easily identify it. These tools are widely available. What's missing is pressure on these platforms to apply that vision model effectively, to ensure that engineering and staff time is put towards it. We have to turn that dial. We have to make it more expensive and more costly.
There are ways to tune this. There are ways to work with the platforms and say, look, this is not going to work; we're going to start imposing fines in this space. If you run an ad that shares drug content, not only can you not profit from that ad, but the revenue will also be clawed back through taxes and fines. If you apply the right lever, the company will very quickly change the resources it devotes to this internally.
The approach of the DSA is a harms-based approach: we identify harms on the platforms, platforms are responsible for proactively identifying harms, and we use those levers to try to reduce the amount of harm. Platforms operate in probabilistic ways. For those of you who are experiencing the new AI way, this is all probabilistic. We can play with those probabilities. We can impose fines and adjust those probabilities in favour of reducing online harms, protecting children and fostering a better democratic discourse. That's all possible.
:
I agree with what my colleague just said. These are really important options to consider, and indeed, financial penalties work well with all these platforms, which are looking to make money because, unfortunately, money talks for them.
However, I believe there are also other elements.
You mentioned access, content and algorithms. With regard to content, I think it is fairly easy to work against hateful content, that is, to legislate against hateful and violent content online and to moderate content that infringes on other rights and freedoms under the Canadian Charter of Rights and Freedoms. We must therefore be able to target specific content, meaning content that is problematic, violent, hateful or contrary to other rights and freedoms, without regulating disinformation itself. I think that kind of content is easy to block.
Furthermore, during the COVID‑19 pandemic and after the storming of the Capitol, specific accounts were closed because they were highly problematic, in that they promoted violent and hateful ideologies. Since then, the platforms have rolled back those rules and the accounts have been reopened. I believe this shows that it's possible to do, but that it depends on the willingness of the platforms, a willingness that is not always there.
I will add a fourth element with respect to the environment. We need to provide good information and promote positive content. For example, Mark Zuckerberg and Meta recently stated that fact‑checking was politically dangerous and stifled opinions. But that is completely false. Fact‑checking does not block any content; it shows which content is reliable, verified by journalists and authentic. I believe we need to return to solutions like this and highlight content that is valued by media outlets belonging to internationally recognized journalistic initiatives or scientific content.
Platforms are therefore also capable of prioritizing good content and providing reliable sources of information to counterbalance all that. It would not be about blocking content, but rather about providing reliable alternatives so people online are also exposed to good information. In that respect, fact‑checking worked well, so there was no reason to stop it.
I believe we need to focus on blocking hateful content, implementing fact‑checking and promoting good content within algorithms and platforms. As my colleague said, I also believe we should have moderated algorithms to block certain hateful content and impose financial penalties if this is not done.
:
Getting to some of the questions we've been hearing regarding the age verification laws, one of the things that's important to keep in mind is that companies like Meta have heavily promoted that as a solution, which should be suspicious. All that does is pass the responsibility on to kids, on to their families and away from the platform for the fact that it is directly profiting from harm. It also takes the responsibility off the platform to invest in any sort of moderation or quality control of its main product, which is advertising. We don't see any of these levers being used.
It's gone so far that on some of these platforms we're seeing direct sanctions violations. Because they operate with so little oversight, they're actually profiting from sanctioned entities through things like advertisements and subscriptions, including from sanctioned individuals who use their full names exactly as they appear on OFAC's list.
This is because these companies have been able to operate with complete impunity for over a decade now, passing the buck of responsibility to parents. Keep in mind that not every kid has parents who are going to understand how to use social media. Not every kid has parents who are able to be involved in their lives. Some kids have grandparents who may have never even seen one of these platforms before. Again, it's all passing the buck of responsibility away from these companies who are failing to do their job and the quality control of their product.
Starting there with their product, which is advertising, and ensuring it is controlled is one of the critical pieces, because that is their main financial lever. That's one of the things that will force further moderation, so they're not incurring harm, fines or regulatory financial burdens that damage their business model.
Looking from the top down, which is where the money starts, is one of the most critical pieces toward any successful regulation. Obviously, the DSA has been mentioned several times as something that is new regulation. The fact that the companies have pushed back so heavily on it.... Again, the DSA is not meant to police speech on these platforms. It's targeting illegal activity and terrorism.
The fact that the companies are pushing that hard, even having the U.S. President do things like threaten tariffs against Europe for imposing the DSA, suggests that any sort of regulation is effective in a way companies are not willing to comply with because it would require them to give up profits they've been making from harmful content.
Starting with the profit-driving mechanisms is one of the most critical ways to have effective enforcement in any country. Looking at how these companies react to the threat of enforcement is a really important measure for understanding where the pressure points are.
:
This is certainly an important issue.
I’m going to make a kind of leap through time. You probably saw this morning, while reading the news, that after 25 years, an article showing the negative impacts of glyphosate was retracted by a scientific journal. It was realized that, 25 years ago, this article was written, not by researchers who claimed to have written it, but by a company that paid researchers.
I’m going to take another leap in time and bring you back to the tobacco industry, which was a bad corporate citizen at the time, and to your role as a member of Parliament. Imagine that you are a member of Parliament from 25 years ago, with the information you have today about the negative effects of tobacco. What would you do?
Today, experts say there are as many negative effects from continuous exposure to social media and harmful online content as there were back in the day with tobacco industry products. It’s therefore extremely important to move forward today, and I think we need to start somewhere.
The tobacco industry, too, began conducting its own studies. Remember the tobacco industry's famous line: doubt was its product. The idea was precisely to make people like us, and politicians, doubt, by saying that it was more complicated than it seemed, and so on.
In my opinion, we have enough data today to move forward. Mr. Myles, you said that we would need to work on the issue of algorithms and the dependency they create.
To answer the question posed by the member earlier, we must start by defining the most harmful content that we want to remove from the market. Is it child pornography? Is it the sharing of intimate images without consent? Is it the glorification of violence, terrorism and hate speech? Next, companies must be held responsible, obligated to act and be more transparent.
That’s what the former bill did. It would have required companies to submit an annual report with a protocol, verified by a digital safety commissioner. If the commissioner disagreed or found the report inadequate, penalties would have been imposed.
We have to start somewhere. Child experts will work on the issue of addiction, etc. In my opinion, we no longer have the luxury of waiting.
:
Thank you very much, Madam Chair. I thought the meeting was over.
I talked with my colleagues and I listened to all the witnesses speak. By the way, your answers have all been excellent. It was very interesting.
Laws have been adopted in some other countries. For example, you mentioned Australia, if I’m not mistaken. Could you tell us which countries, in your opinion, have good laws or are currently applying good regulations? Can you give us some examples?
Also, I would like a follow-up response to the previous questions.
As I just said, you are excellent witnesses. We have added two meetings as part of this study. That said, we would have time to hear from other witnesses, who could potentially be elected officials from other countries or specialists who have made the laws you are familiar with. If possible, I would like you and Ms. Paul to recommend countries we can draw inspiration from or, if you know of any, individuals who have enacted these laws. I don’t know if you’re able to do that.
Mr. Morin, you’re nodding in approval.
I agree with Mr. Morin: England, the European Union and Australia are models we should consider.
Moreover, if we follow a model that the platforms have already adopted elsewhere, there would be less pressure on us, because we could show that it is already being done. In other words, it shows that it can be done.
It’s good to draw on countries that have already implemented policies, because the platforms can no longer absolve themselves and say that they’re unable to do it.
I don’t know if you’ve met my colleague Pierre Trudel, a media rights expert in Quebec. It would be interesting to meet him to see how these measures can be applied in Quebec. In addition, I would recommend different actors, such as As de l’info or representatives from media education initiatives.
Furthermore, another element that has already been discussed in relation to regulating the platforms is the importance of establishing an online observatory: a body that monitors what is happening and works preventively on emerging issues, misinformation trends, hate content and new movements. We should look into how to set up an observatory that monitors and acts more proactively and preventively.
:
Thank you for your question.
I want to address the fact that this is an addictive behaviour, as your colleague indicated as well. One thing we know is that one addictive behaviour can be a catalyst for another. We see, for example, that individuals who spend six-plus hours a day on screens are twice as likely to show a high risk for alcohol consumption, cannabis consumption and a host of other addictions. It really comes down to how it can break down your ability to regulate yourself in the face of these addictive properties.
What I would work on is the fact that we don't, as your colleague also indicated, have really good social media or even media literacy anymore. Earlier I indicated that Ontario didn't have any; we do have it in grade 10, but it is terribly outdated, and we know that youth are consuming this sort of content significantly younger than grade 10.
I think the first thing we can do is introduce literacy at younger ages, explicitly in curricula. I agree with the comment that it has to start younger. I also believe very strongly that we're not going to beat these social media companies through the single avenue of simply removing access, because individuals will find a way to get access anyway. It has to be a combination of reducing access, so they don't have the ability to do it six-plus hours a day, and supporting digital literacy campaigns.
I do agree with my colleague, especially about trying to find ways to demonetize. I know the algorithm allows for local tweaks by country, even at a regional level, so there's no reason whatsoever that they couldn't have a Canadian-specific version that identifies a group of particularly harmful activities, including those primarily targeting younger adults.
On a lot of social media platforms, you can't target children, but you can target ages 18 to 24, and that same group would overlap with a lot of youth who would have this content. You could find ways to microtarget it to, say, 18 to 24 on issues around anything that might lead to body dysmorphia such as makeup tutorials. You could create categorizations of things that would necessarily be demonetized in some way in Canada.
If you put a suite of measures like that together, that's your best approach for addressing these things, because you're not going to get it down to zero. From my perspective, it's really about removing that exceptional six-plus hours of use that individuals and young people are getting, or even three. There are risk factors above four hours, so we need to get it below that threshold.