CHPC Committee Meeting

Standing Committee on Canadian Heritage
EVIDENCE
Monday, December 8, 2025
[Recorded by Electronic Apparatus]
[English]
I'm going to call this meeting to order.
Welcome to meeting number 19 of the Standing Committee on Canadian Heritage.
We don't have any in-person participants, so I'll skip that part.
Pursuant to the routine motion adopted by the committee, I can confirm that all witnesses have completed the required connection test in advance of this meeting.
Please wait until I recognize you by name before you speak. All comments should be addressed through the chair.
Pursuant to Standing Order 108(2) and the motion adopted by this committee on Wednesday, November 5, 2025, the committee is meeting to study the effects of influencers and social media content on children and adolescents.
We have six witnesses with us online today, starting with Perry Mason. Yes, in Hamilton we have a detective named Perry Mason—not a crack defence lawyer, but impressive all the same. It's good to see you again, sir.
Tiana Sharifi from the Center for Exploitation Education is here with us, as are Dimitri Pavlounis from Civix, Ève Tessier-Bouchard from Les Coops de l'information, Stacy Hanson from Saskatoon Public Schools and André Côté from The Dais at Toronto Metropolitan University.
Welcome to you all. We will give each of you five minutes for an opening statement.
We're going to start with Mr. Mason. For full disclosure, Mr. Mason is occupying my constituency office today.
Very good. It's an excellent office you have there.
Sir, you have the floor for five minutes for an opening statement. You can start at any time.
I'd like to talk about the crisis of misunderstanding. I'm here today because I see the crisis through a unique, trifocal lens.
I'm a grandfather raising a digital native. I'm a former school resource officer who watched the digital transition hit our schools. Perhaps more importantly, my neurodivergence grants me a specific strength, which is pattern recognition. I don't just see incidents; I see the meta patterns connecting the last 30 years of youth culture. I'm here to report something simple and perhaps uncomfortable: The map we adults are using may be wrong.
To explain the mistake I think we're making, I have to take you north. I had the distinct privilege of visiting the Cree nation in northern Quebec twice to facilitate restorative justice circles. I was even honoured to speak on CBC Cree radio. When I arrived, I had to check my assumptions. To an outsider, these communities look isolated, but when I sat in those circles and when I engaged with curiosity instead of judgment, I didn't find isolation. I found a vibrant, self-sustaining culture operating on a frequency that the south often fails to tune into. They weren't lost; they were sovereign.
This is a lesson I apply to our children. Our youth have migrated to a new territory. They have their own language, their own culture and their own economy. Just as we fail when we try to regulate cultures that we don't understand, I think we're failing to regulate the digital generation—I believe it's called gen Z—because we refuse or are unable to learn their language.
To prove how borderless this reality is, let me tell you what my grandson watches or has watched. Visualize a standard influencer vlog. There is upbeat music, slick editing and a young woman laughing while eating ice cream. Now here's the reveal: She is broadcasting from North Korea. To my generation, North Korea means a nuclear threat. To his generation, it's just another channel. He doesn't see propaganda. He sees content. He admires her not because she supports a regime but because she knows how to beat the algorithm and evade the restrictions. He respects the hacker ethic, for lack of a better term. That's the reality gap. We see a dictatorship; he sees a creator winning the game.
I guess a good term to call him and others is “digital sovereign”. We assume these kids are victims. A couple of nights ago, I sat down with my 22-year-old grandson for a two-hour, unfiltered and, to be frank, unexpected conversation, and he corrected me. He told me he hasn't failed to launch. He built an e-commerce business. He has travelled to Europe. He has lived in penthouses and sailed on yachts. He told me that he can go back to that luxury whenever he wants, but right now, the value proposition is here. He's a digital sovereign. He's physically present, but socially and economically, he's living on a plane that this committee may not even have mapped.
Here's the critical “but”. Digital sovereignty comes with a loss of consequences. In the real world, words have weight. In their world, words are noise. Slurs become punctuation. Morality becomes an algorithmic score. Because nothing seems to stick, they become uniquely vulnerable to predators.
I know this darkness personally through my relationships with Carol Todd, Amanda Todd's mother, and Leah Parsons, Rehtaeh Parsons' mother. The danger isn't that kids are wasting time. The danger is that they are being groomed in a world where the rules of real life don't apply. When a predator tells a child, “I'm the only one who gets you”, the child believes it because we have stopped trying to understand them.
My message is simple: You can't regulate a reality that you may not fully understand. If you try to legislate safety from the outside, you'll fail, perhaps. The only way to protect them is to do what I did with my grandson: drop the judgment, enter with curiosity, translate and don't punish.
Finally, I want to be transparent about how these remarks were prepared. As a neurodivergent thinker with ADHD, my mind works in patterns and curiosity, but struggles with linear culture. To prepare for today, I used artificial intelligence, not to generate my ideas, but to organize them. I supplied the lived experience; the AI supplied the executive function.
I share this for a reason. I used the tools of the digital sovereign to communicate with this committee. If a grandfather can partner with a machine to be understood, surely we can find a way to connect with the generation that lives inside it.
Thank you.
Thank you, sir.
We'll turn to Tiana Sharifi, from the Center for Exploitation Education.
Go ahead. You have the floor for five minutes.
Thank you for the opportunity to speak today.
My name is Tiana Sharifi. I'm the founder and CEO of the Center for Exploitation Education, specializing in child and youth sexual exploitation prevention. I've been a subject matter expert for over a decade. I'm a mother and I come in with a unique angle: I am in many ways an influencer myself. I've built a large online following on social media, where my educational content has reached millions of families it wouldn't have otherwise, and it has saved the lives of young people who have directly reached out for help.
Not all social media platforms are created equal; they carry different levels of risk. Some are proactive about child safety and want their platforms to be used for good, and others have no intention other than to monetize their users and consumers.
For those of us working in the anti-sexual exploitation field, the largest threats to the well-being of children online are unmistakably clear. First, influencer culture is feeding the sexualization and exploitation of youth. Second, boys are being pulled into harmful gender beliefs that normalize violence and predation. Third, grooming, sextortion and child sexual abuse materials are rising as offenders operate freely online.
I want to begin with the first. Influencers today have an enormous impact. Many create helpful content for adults, and as a whole, they're not inherently harmful. However, the problem is that the influencers children follow are not entirely appropriate for them. Young people look to influencers to understand gender norms, relationships, sexuality and even what level of objectification is considered normal, and this is inherently harmful.
Influencer culture has also normalized the commodification of people. Young girls in particular are being turned into products economically, not metaphorically. They see bodies, intimacy and personal lives monetized, teaching them that people can be bought, sold and consumed. This normalization has consequences. I have seen students being groomed into the idea of participating in OnlyFans and sugar dating. Of course, the moment they turn 18, we're just going to suddenly label that participation as consent.
At the same time, boys are learning who they're supposed to be as men, and they're learning it from some of the worst voices online. I've noticed a sharp shift in young boys when I give presentations, and I now hear rhetoric such as “girls must be submissive” and “rejection is disrespect”. These are messages straight from the manosphere, which includes incel communities, male supremacy, and anti-feminist influencers who package misogyny as empowerment for our young boys.
These ideologies are reaching our boys at the exact moment they are forming their identities. If you are a boy and you create an account on pretty much any social media platform, the content you receive is engineered to be shocking, violent and misogynistic. Boys are being shaped by these predators, misogynists and extremists telling them that aggression equals confidence and empathy equals weakness. This content is not coming from obscure corners of the Internet; it's coming from the mainstream platforms themselves.
The algorithms that are pushing sexualized content to girls and misogynistic content to boys are the algorithms that are making grooming faster, easier and more scalable for predatory people. Grooming, image-based abuse, and sextortion are skyrocketing, as we know, because offenders no longer need to seek opportunity. The platforms are creating it for them.
Many platforms use misleading language that creates a false sense of security for kids. Snapchat's “My Eyes Only” folder and Instagram's vanish mode suggest privacy, secrecy and control, when in reality, these features make exploitation easier and detection harder. Some apps are even predatory by design, encouraging kids to swipe to meet strangers or interact with AI bots that initiate sexual conversations.
We cannot keep asking children and parents to navigate a system that was never designed for their safety. The government must work with the platforms, and if a platform refuses to engage, it should lose access to the Canadian market, period.
My key recommendations include having monetary consequences for platforms that fail to comply with Canadian regulations, having straightforward and quick reporting channels with time restrictions on response times, placing liability on the platforms and having mandated minimum age requirements and enforced age verification, to say the least. I have a number of others, but I know that I'm short on time.
You still had 20 seconds, but hopefully we'll get to more of those recommendations as we continue this study today.
We'll turn now to Dimitri Pavlounis from Civix.
You have five minutes. Go ahead.
Thank you, Madam Chair and members of the committee.
My name is Dimitri Pavlounis. I'm the research director at Civix. We are a national civic education charity dedicated to building the habits and skills of citizenship.
We work primarily with K-to-12 teachers from every province and territory, providing free programs in both English and French. We are best known for our flagship student vote program, but we provide many other programs around civic engagement, civic discourse and digital media literacy.
Over the last while, you've heard about many different social challenges, each intersecting with social media in distinct ways. Some of these challenges might be best addressed through regulation; others might require support or education frameworks. Many likely need both.
While I will focus today on the need for media literacy education, I want to be clear that we don't see this as the only or even the best solution to all of these challenges. Rather, we see it as an essential part of a broader national strategy to help young Canadians navigate their online lives.
Media literacy is of course a large umbrella term, encapsulating many different concepts and competencies, but at Civix we approach digital media literacy as a necessary component of informed citizenship. Our digital literacy program teaches empirically supported skills that have been shown to significantly increase students' ability to navigate information and make more informed judgments about the content they see online. This includes discerning between true and false, but it also involves navigating all of the agenda-driven, polarizing or manipulative material that blurs the line between true and false and that reflects much of what we and young people actually encounter online.
Since the program launched, over 8,000 teachers have registered and over 4,000 have attended one of our training workshops. Today, I want to share four things we have learned that help explain the current state of digital literacy education and what we believe is needed to better support youth in the future.
First, the good news is that this is not primarily a curricular problem. While curricula could certainly be strengthened and streamlined, media literacy is already in the curriculum in every province and territory in Canada. The problem, at least when it comes to teaching information evaluation, is that just because something is in the curriculum, that does not guarantee it is prioritized or being taught effectively. As a case in point, in our study of over 2,300 students from across Canada, we found that many students are being taught methods that either don't work or backfire in practice. These findings align with research from elsewhere, including research in the U.S. context from Stanford.
I want to be clear: This is not the fault of teachers. Much of this content is brand new to them, and educators often lack support to keep up with best practices, especially within a rapidly shifting media landscape. Most importantly, many ineffective methods still routinely appear in resources directed at educators, which just muddy the waters and cause confusion.
Second, even if our focus is on addressing the disinformation problem, we cannot focus on discernment alone. Media literacy education in all forms must address the social-emotional factors that make people susceptible to misleading and harmful messaging. Information on social media doesn't exist in a vacuum, and people don't typically incorporate information into their mental models simply because they are exposed to it, but rather because it resonates with them authentically or fulfills some emotional or cognitive need.
As such, it's not enough to just teach young people how to discern credibility. They must also have authentic opportunities to reflect on how the content they see online produces meaning and how, true or not and harmful or not, it may appeal to our cognitive biases and contribute to our sense of identity or community. These essential conversations could and should happen in many spaces, including in the home and in schools, where they can occur with trusted adults in safe and structured ways that are already supported by existing curricula.
Third, no amount of media literacy education can make up for a rapidly eroding media ecosystem. National efforts grounded in evidence-based practices are essential, but these efforts are futile without a healthy information environment to support them. Social media regulation alone won't improve conditions without significant investments in accessible, high-quality information that is meaningful and relevant to the lives of Canadian youth.
Finally, I want to advocate for including the voices of young people in your consultations and decision-making. Currently, I'm supporting a student-led project in New Brunswick on AI and education. These high-schoolers, like youth across the country, care deeply about the impacts of technology on their lives. Most importantly, their lived experiences provide insight that cannot be communicated through statistics, survey results or adult voices. If we are concerned that young people are turning to social media in part because they are disillusioned with our democratic institutions or with traditional forms of expertise, inviting them into these conversations about one of the defining civic issues of their lives can go a long way toward building trust and can lead to better policy outcomes.
Thank you for your time. I look forward to your questions.
Thank you, sir.
[Translation]
Next we have Ève Tessier‑Bouchard, editor at Les As de l'info, Les Coops de l'information.
Ms. Tessier‑Bouchard, you have five minutes.
Madam Chair, honourable members of this important committee, thank you very much for inviting me to speak with you today.
I currently run Canada's only French-language daily newspaper for children, called Les As de l'info. This media outlet is aimed specifically at children aged 8 to 12 and explains the news to them every day on a web site. To date, there are exactly 26,291 children registered on our web site, which is free of charge.
We are not a social media network like the ones we are talking about today, but rather a secure community where children are protected by a username and an avatar, and where they can engage, learn, interact with each other, comment and give their opinions in all kinds of ways.
Les As de l'info is a small social media network for information that is moderated seven days a week. We ask for parental consent upon registration and foster an atmosphere of respect on the site, while encouraging the children, most of whom are still in elementary school, to express their critical thinking skills.
Our site is free, so it takes a lot of effort to finance it. However, we are determined to keep it that way because it opens the door to all curious children from all social backgrounds, which is essential. What we are trying to do is offer concepts of digital citizenship and media and information literacy, and above all, inspire children to get involved as young citizens.
By the way, they are not just the citizens of tomorrow. They may be young, but they are already members of our society and need to be included in the conversation. In fact, we conducted a Léger poll of children aged 8 to 12 in 2024: 60% of them clearly told us that they felt the government did not think enough about children before making decisions; 66% of them said they were capable of advising the government. So, take note.
We also want to instill a sense of competence in our readers, to enable them to be little pollinators of good information in their communities. Last week, you met Marie‑Ève Carignan from the UNESCO-PREV chair, and she told you about a joint study we conducted which shows that children can indeed become leaders in the fight against misinformation if they are well supported, if they feel confident, and if they understand the issues at stake.
We know that 25% of 8-year-olds say they have an account on at least one social media platform. For this age group, it is often YouTube and TikTok. In other words, it is part of their lives. Our collective concern should be this: How can we support them in discovering and using these tools, which are of their time, and how can we prevent them from becoming digitally illiterate while protecting them from negative content, conspiracy theories, or content that could lead to radicalization?
Les As de l'info believes that the best approach to dealing with the influence of social media on young people is to focus on the importance of information, education, support for children and teenagers, as well as the availability of verified content online. As we know, the media is blocked by GAFAM in our country, and this makes our task much more difficult. The discoverability of reliable content is really being hampered.
Investing in digital education, as the Coops de l'information that support Les As de l'info are doing, is not within the reach of all media outlets here, most of which are struggling financially. Is the solution to outright ban social media for young people, as some countries are doing? Do we need greater vigilance, a more restrictive security barrier at the entrance to social media networks? Should we perhaps establish age categories for accessing content? Do we need to make a significant investment in digital education for young people from childhood onwards?
For Les As de l'info and for me, as someone who has been working in youth content for 35 years and is a mother and grandmother, the education and skills of children and teens are key. It's bigger than just prohibition. You need to gain experience in navigation to be a good captain and to avoid pitfalls, both on the water and on the web.
Thank you.
Thank you.
[English]
Next we have Stacy Hanson, a high school counsellor from Saskatoon Public Schools. I can probably guess who invited you here today.
You have the floor, ma'am, for five minutes.
Thank you for the opportunity to speak today.
My name is Stacy Hanson. I'm a high school counsellor. The concerns I'm sharing today are from counsellors and restorative action program facilitators across Saskatoon public secondary schools. They reflect patterns we see regularly, not isolated incidents.
Our first concern is around targeted online harassment. In one case, a group of students manipulated a video in which they provoked a conflict, edited out their role and posted only the victim's retaliation to frame him as a racist. The online hate became so severe that he feared for his safety, carried a weapon to school and ultimately transferred schools and changed his name.
Targeted harassment also includes coordinated social exclusion. Group chats and Snapchat stories are used to incite entire peer groups against a single person or student. Students create warning pages about socially vulnerable peers, spreading accusations without evidence. The speed, reach and public nature of these attacks magnify the harm far beyond what was possible before social media. In one case, the victim of this harassment took his own life.
We also see teens impersonating other students and even teachers by creating fake accounts. These accounts post troubling images or messages that damage reputations, causing embarrassment, shame and significant mental health impacts.
Online harassment now extends far beyond traditional bullying. A common solution we recommend is for students to block the person targeting them, but teens have learned to circumvent this. They create new accounts or new group chats, or have friends add the victim back into conversations. Blocking no longer stops the behaviour. In fact, it often escalates it. As a result, young people have effectively no way to escape their harassers. It follows them 24-7, even in their own homes.
Our second concern is sexual exploitation and grooming. One 15-year-old gained more than half a million followers on a Chinese version of TikTok, and then was targeted by adults posing as executives from an adult content site. They coached her to send sexualized videos and profit-shared with her. She earned thousands per month without her family knowing. Police in Canada and in China intervened to prevent further harm.
In another case, a grade 11 student was groomed by a 35-year-old man she met on the online game Roblox. He flew from the United States to Saskatchewan to meet her while her parents were at work. When the school intervened, they learned that she fully believed that this was a loving relationship, demonstrating how easily teens form bonds with people they've never met in person.
Our third concern is around sexting, sextortion and coercion. Teens share intimate images believing they will stay private, and they often don't. Images are used for manipulation or revenge after breakups. Where youth once threatened self-harm to prevent relationships from ending, many now threaten to release private photos. Girls especially experience deep shame. Students also hide explicit content in encrypted or disguised apps, making detection difficult. Families receive inconsistent guidance about when police will intervene.
Our fourth area of concern is exposure to harmful online communities. Students access content promoting self-harm, suicide, disordered eating and other high-risk behaviours. Algorithms push this content to vulnerable teens, who become drawn in quietly and privately. We saw a rise in this during COVID, and it remains a serious concern.
We also see dangerous social media trends. Recently, RAP facilitators in multiple schools reported a TikTok trend where youth cut their faces under their eyes and along their cheeks and cover the wounds with band-aids. Unlike traditional self-harm, this behaviour is meant to be visible and public.
Students are also forming emotional relationships with AI chatbots—romantic, friendship and even counselling relationships—leading to isolation, harmful advice and exposure to unfiltered content. Just this fall, a small committee and I created an infographic to highlight the many risks of these chatbots for students.
Influencer messaging is also affecting our youth. Figures like Andrew Tate have normalized misogynistic attitudes among some male students, influencing their behaviour toward peers and female teachers.
Across all these situations, students are chronically online. They stay up late on social media or gaming and maintain large networks of people they've never met. Nearly every conflict we address now has a digital component—fake accounts, impersonation, group chats, viral posts or altered images.
Our last concern is platform accountability. Harmful accounts using school names or staff and student photos are difficult to remove. Fake pages remain active for long periods, and AI-altered images circulate widely with little recourse.
Based on what we're seeing, we offer several recommendations: one, platform accountability and rapid response; two, age verification measures; three, regulation of algorithmic exposure; four, stronger protection against image-based abuse; and five, prevention of online grooming. We especially would like to have platforms detect and disrupt adult-minor contact and mandate reporting to law enforcement when grooming behaviours are detected.
These issues are widespread and deeply harmful. Without stronger protections and platform accountability, more young people will be exploited, isolated and traumatized.
Thank you for the opportunity to share what we're seeing on the front lines. I welcome your questions.
That's a very disturbing dispatch from the front lines. Thank you very much for that.
Finally, we have André Côté from The Dais at Toronto Metropolitan University.
You have the floor for five minutes.
Thank you very much, Chair.
Thanks, committee.
It's an honour to be here speaking alongside the other witnesses, who bring alarming but amazing perspectives on these issues.
My name is André Côté. I'm the executive director of The Dais. We are a think tank at Toronto Metropolitan University. We are focused on public policy and leadership development at the intersection of technology, education and democracy.
We have been doing work on social media in Canada since back around 2018, with a major focus on kids and tech issues. This includes our survey of online harms in Canada, which is the longest-running survey of its kind and looks at social media use, the harms Canadians are experiencing and attitudes about what government should do about it.
We have a national screen break project that's focused on mobilizing for effective phone-free school policies across the country, and we have other research in areas like digital literacy, AI and children's privacy, social media labelling and AI deepfakes, misinformation and disinformation. I'll pull from some of this in my remarks.
Our survey of online harms reinforces that Canadian youth are the most significant users of social media, which is probably not really news. They are also significantly more likely to be exposed to various categories of harms, many of which we've heard about in more detail through some of the other speakers.
As a caveat to begin, our survey covers Canadians aged 16 plus, so we track young Canadians, capturing older adolescents to age 29. It's still highly relevant, I think.
Young Canadians spend far more time on online platforms than older generations, and they're more likely to use platforms like Instagram, YouTube, TikTok, Snapchat and X, whereas older Canadians are more so on Facebook and YouTube. Young Canadians are also about 50% more likely to report exposure to the worst types of online harms—or some of them. They're targeted by online harassment and hate speech, they see violent content, and intimate images are shared without their consent. We heard many more examples from Stacy and others.
They are subject to the manipulative design of these platforms, which we've heard about from other speakers. Our research finds that as AI-generated content is flooding our online spaces, deepfake labelling by platforms isn't effective, meaning that users can't tell the difference between what's real and what's fake. As a result, as we've heard, a fifth of Canadian youth aged 12 to 17 report negative mental health effects related to their online activity.
Second, I'd point out that beyond the impacts on mental health and physical health, social media is a threat to young Canadians' civic health and to Canada's democracy. Others have talked about the impacts on the information ecosystem. They are the only generation that is more likely to go to YouTube or Instagram for news than to traditional media like television or news websites, despite Meta's platforms, like Instagram, blocking news content. We also find that greater social media use is linked to greater belief in misinformation and disinformation. In short, younger Canadians are getting their media on platforms where the conversation is dominated and pushed by algorithms and influencers.
Third, Canadians are demanding that government take action to improve online safety for children and youth, but also for everybody. Our research finds that seven in 10 Canadians support government intervention to require online platforms to behave responsibly in protecting users, even where there are some trade-offs, and there's near universal support for specific types of measures.
We tested a whole bunch of them but focused on protecting children online. For instance, we found about 90% support for requiring platforms to quickly remove CSAM content and report it to police; north of 80% support for requiring platforms to develop measures for child users like parental control; and a bunch more specific things that Canadians really get behind.
From our phone-free schools work across Canada, a clear and consistent message is that it's not just the phones but the social media apps on the phones that are the challenge, driving distraction, bullying and other bad outcomes in schools. Stacy spoke to this in great detail.
I think a key point is that students, parents, teachers and school board leaders alike are calling for policy-makers to step in. This is out of their control. They need governments to step in. Coalitions and campaigns for safer online spaces are coming together across party lines to demand action on this.
In sum, my message would be this: Let's get on with it. The government needs to table a new online safety bill, and Parliament needs to move quickly to pass it.
We've now spent the better part of five years working through this with two bills that didn't quite make it across the finish line. The core of Bill C-63, from the previous Parliament, worked. Civil society, youth, experts and even opposition parties supported core parts around the duty of care and the transparency in parts 1 and 4 of the bill.
Holding these big tech platforms to account requires an independent regulator, in our view, such as a digital safety commission or something of that sort. The bill should include beefed-up youth protections, including age-appropriate design standards and rights to opt out from algorithmic feeds. I'm sure we could have a longer list.
Finally, the bill should include new provisions for AI, including bringing AI chatbots in scope as a regulated service and addressing deepfakes on social platforms.
The very last thing I'd say is on phone-free schools. One of the most inspiring parts of our work over the last year or so has been the engagement with youth and the extent to which they want to lead on these issues. I really liked hearing, from Dimitri, Ève and most of the other witnesses, this idea of youth really needing a voice in this. I think, to Perry's point, a lot of us don't fully understand the worlds they're living in, and we need to be respectful of their experiences and include them in this process.
Thank you very much.
Thank you so much, everyone, for being here with us today.
My first questions are going to Tiana.
I'm just curious. Toward the end of your opening remarks, you commented on recommendations that you would leave with the committee, and I know you were cut short. I invite you to repeat the ones you listed during your opening remarks and to go a little deeper into your explanation of those recommendations and why they would be so important to us. Of course, you may add any additional recommendations that you didn't have the chance to get to at that time.
I appreciate that question, especially because I definitely felt the need to get to a few that were left unmentioned.
The first is that there have to be monetary consequences for the platforms that fail to comply with Canada's safety regulations and fail to work with the government. I say that because, although they're what we call “platforms”, they're businesses and corporations. They make hundreds of millions and billions of dollars off their consumers. In any other space, we make sure we have regulations in place to keep corporations accountable to their consumers. Unfortunately, what we've seen is that it's the monetary piece that keeps these platforms accountable. In B.C., we have legislation with regard to non-consensual intimate image sharing whereby we hold companies accountable monetarily.
The second one is that there should be straightforward reporting channels, with time restrictions on response times. Reporting anything related to sexual exploitation, child sexual exploitation or sexual content should not be dealt with by an automatic or automated AI bot that decides whether or not it meets a threshold. It should be met with actual agents and individuals. There needs to be a very quick response time, especially with regard to child sexual exploitation reports, and again, there need to be consequences in place if the platforms are not meeting those needs.
Another is mandated minimum age requirements and enforced age verifications. Yes, social media is a powerful tool, but I don't believe it's appropriate at all under a certain age. I don't believe that children under the age of 16 should have access to social media platforms, because the platforms are not made for children, and neither are the people they would be connecting with on them. I know that's a contentious point for many people.
I strongly encourage algorithm protection so that minors are not exposed to harmful or inappropriate influencers and so that the content that will be put forth to them the second they sign up is not predetermined and predestined just because their identity is male or female or because of their age.
I would like to see a ban on targeted advertising to minors, because it's just a different ball game. It's not the same as watching TV or a movie and having an ad come in. I'd like to see oversight for child influencer accounts to prevent exploitation.
Also, although many of these platforms are trying to put this in place, we definitely need a default prevention of adult contact with minors. That's where that age verification piece I believe is incredibly important. This includes restricting direct messages and providing a heightened review of sexualized content for accounts with large followings by minors.
Thank you very much for that. I appreciate it.
I just want you to go a bit more into detail on that. You said “algorithm protection”. What exactly does that mean?
As it stands right now, depending on how you sign up on a platform, a lot of them ask for your gender—or assume your gender—with your age. What I've seen is that if you are a boy or a man and you sign up with a social media account, it doesn't matter what your interests are. You are fed a very specific type of content. What you're going to consume is predetermined and predestined.
That content is oftentimes shocking, violent and provocative. Then it leads—especially for young boys, who aren't able to maintain much control over their consumption of content—to material that makes them vulnerable. When I say “algorithm protection”, I'm talking about these platforms not being able to predetermine an algorithm based on somebody's identity when they're signing up, particularly if they are a minor.
Thank you very much. I understand and definitely see your point there.
I want to dive a bit deeper into one more point you made, and that was with regard to increasing the minimum age a person needs to be in order to have a social media account.
Recently, we've seen Australia implement legislation that requires a person to be 16 years of age or older to hold an account. I'd be curious about your thoughts on that. You also talked about meaningful age verification, so I suppose that would be the way you'd suggest we make sure that's enforceable. Maybe you can just dive a bit deeper into that.
Absolutely.
We're seeing Denmark and Australia do that. I just don't want to see Canada left behind.
I think it's necessary because of the way these social media platforms work and the dynamics. These are companies that intentionally use their systems, and even the language they use to define the tools the platform provides.... It's harmful and it's going to result in harm.
It holds the platforms accountable when we have a minimum age. Again, not all platforms are created the same, so verification can be flexible depending on what kind of platform we're talking about. I think it would be effective.
As to age verification, people are concerned about security, but these companies already have all the information they need. A kid showing their passport photo so they can be protected from exploitation, violence and self-harm is well worth it, when that information is already out there for these companies.
Thank you, Madam Chair, and thank you all for being with us today.
[Translation]
Ms. Tessier-Bouchard, I almost asked you a question last week before noticing that you were not at our meeting.
I am all the more pleased to be able to ask you questions today.
As you know, the goal of Les As de l'info is to give children the keys to understanding, so that they don't catastrophize scenarios that fuel anxiety.
I took the time to read a few articles, which are really very interesting. I have here a few titles: Tu ne rêves pas! Les ratons sont de plus en plus mignons!; POUR ou CONTRE les soins pour la peau à partir de 3 ans; and La photo de la semaine: à Gaza, vive les mariés!
How do you determine what is appropriate for a young audience? In your opinion, what can social media learn from your approach?
We are working from the premise that a child who understands things is already much less anxious. What causes anxiety are all the scenarios a person can imagine based on a headline, a snippet of information they hear on the radio or in conversations between adults. When we understand the basis of a piece of information, we experience less anxiety. This has been tested and proven time and time again.
So how do we decide what to talk about? If there is something in the news that is likely to be heard repeatedly on the radio, as I was saying, or even seen on social media—we know that children use social media—then we will definitely talk about it, regardless of how serious the subject is.
We have excellent support. We have psychologists, grief experts and philosophers for children. We are careful about the words we use and the images we choose to show and not to show. So we sort through the material. However, we do not sort based on the subject matter, but rather on the angle, i.e., how we are going to approach the subject. We ask ourselves what can be filtered out so that the essentials remain and the children understand what we are talking about without having all the details, especially those that might scar or shock them.
There is no topic that we don't talk about. We sometimes make choices. If the news is very negative, we make choices. For example, if we believe that we have posted enough negative stories for the week, we will post, as you mentioned, a little raccoon here and there, a small animal, or a nice story. We try to strike a balance.
We listen to children. Sometimes, very sensitive children comment on our posts. They say that it causes them a lot of pain. It's important to know that we have moderators on the site seven days a week. We sometimes communicate with the child, and if they tell us something that we consider to be dangerous for them, we take action.
Last week, we talked about bullying. A child wrote to us in a comment saying that his bullies lived next door to him and that neither teachers nor adults were helping him. So we did the following: since his email address was linked to a school, we contacted the school, told them that a child might need help, and gave them his email address. We also wrote to the child to let him know that we weren't abandoning him. We told him that there were resources available and we shared those with him. We provide a certain amount of support.
Children learn on the site how to behave as good digital citizens and how to comment respectfully. If a child makes comments that are not respectful, we don't ban them right away. Instead, we contact them to ask if they are proud of their comment, as it does not really comply with our rules. In 99% of cases, the child will say that they didn't even know their comment was being read. That's media literacy, that's digital life education.
We let them know that their comment has indeed been read and that there are real people behind the screen reading their posts. We tell them that if they aren't nice, people might get hurt. We ask them if they want to rewrite their comment and tell them that, if so, we will delete the old one.
It's a lot of work. Obviously, it's an investment to have two people reading all the comments. There are 110,000 page views per week and there are a lot of comments. We do this because it's absolutely essential to educate children about digital life.
I draw a parallel to not giving children access to social media before the age of 16. Let's take the example of a child who has not had access to sweets before the age of 16. If they have not been informed about the harmful effects of sugar and the health risks of eating too much sugar, they will be no better equipped at 16 than they were at 8.
For our part, we focus heavily on education and support. I think the same applies to social media. My colleagues talked about this today. Age restrictions could be put in place at the entrance to web sites, but young people are crafty. They can enter a false age or use their older brother or sister's account.
We must therefore provide them with better support, but it is also essential that we educate them at the same time. We have lost an entire generation. We must catch up and invest heavily in digital citizenship education.
Pardon me for interrupting, but your time is up. I am sorry.
Mr. Champoux has the floor for six minutes.
Thank you, Madam Chair.
I want to thank the witnesses very much for being here today.
I'll continue with you, Ms. Tessier-Bouchard, because you ended your answer by raising a point that I find very interesting.
You mentioned that we may have lost a generation of this digital age. You have chosen to devote more of your time to younger people in order to build the next generation.
Have we abandoned the generation of kids who are currently 12 years old and up?
Was it not possible to include teenagers in your Les As de l'info project? Would that have required creating two completely different projects?
I say this somewhat cynically, but have we let them down by telling ourselves that we will make up for it with future generations?
I think the young people who got away from us are between 20 and 25 years old.
Young people currently in high school, particularly in Quebec, receive courses in Quebec culture and citizenship, with a strong emphasis on civics, information literacy and critical thinking in all its forms.
In the Les As de l'info project, as in any other project—and this is where my expertise comes in and where we need to be careful—it is essential to target your audience very carefully. If you try to reach everyone, you will reach no one. A well-targeted project will reach its target audience very effectively, as well as other people interested in that target audience.
So we can't make Les As de l'info a project that extends into secondary school. We would like to do something that bridges the gap between Les As de l'info and media for older children and adults. Right now, we have the project and the desire. We have everything we need, except the money.
In your opening remarks, you raised a few points that really struck a chord with me. You said that we should stop talking about children as the citizens of tomorrow, because they are the citizens of today who, as you said, can also be leaders.
You participated in a study that led to the publication of a report entitled “Children: leaders who can contribute to a better-informed and more critical society in the face of misinformation?” I'd like to talk about that a little, because it presents Les As de l'info as a great experiment whose successes can inspire similar initiatives.
Earlier, my colleague Mr. Al Soud mentioned an article on the front page of Les As de l'info, “Opération sauvetage de cadeaux dans un hôpital pour enfants!” about gifts being handed out to children at a hospital. The article really caught my attention because it tells the story of the Grinch trying to steal gifts. Everything ends well, however, because the police intervene. For a child, it's a fascinating story. Anyway, I'm getting a little sidetracked.
I want to come back to the study, and I also want us to talk about the potential repercussions it could have.
Can you measure today the impact that your project has had since its creation in 2022?
It's difficult to measure, but we do have certain indicators.
This study was conducted in the early days of Les As de l'info. If we conducted it again now, we would probably get many more responses. However, the response that we should focus on is that when children feel competent, they want to share their skills. When they share their skills, they share them with their family, friends and even in class. They then become leaders in the fight against misinformation, particularly in our case.
When their knowledge was tested through a survey and they were asked, for example, if they had ever heard of misinformation and if they could spot it, 84% of the children participating in the Les As de l'info project responded that they were familiar with misinformation and felt equipped to detect it. In the same survey, 44% of children who were not participating in the project answered yes.
This means that the feeling of competence, the fact that we repeat things in different ways and make children aware that they are competent, gives them power. Our slogan is “Your curiosity is a superpower.” That's what we want. Children feel empowered. They can tell their parents when something was made with artificial intelligence, for example, and explain why. They are happy to share this.
I really like the term “pollinator”. In fact, we discussed it in previous meetings with other guests we've had.
When we began this study, as I often say, I was convinced that banning social media for young people under the age of 16 was a very good approach. I still think it's an interesting approach because, as you pointed out, we've lost a generation, so let's try to limit the damage, so to speak.
It is certainly an interesting tool. However, the more we discuss and talk with people who, like you, contribute constructive ideas based generally on digital literacy and education, the more I wonder if that is the right solution.
Could this measure be temporary? The generation you are currently creating, thanks to your initiative, means that in 10 or 15 years' time, we may not need to restrict access, because we will have done enough to spread the word so that misinformation is no longer a social issue.
What do you think?
Personally, I think we need to ensure that our fellow citizens, young and old, develop a more critical mind, that they are able to understand what they read, to read between the lines, and that is done through education, by offering them content that they can access. That is where the pitfalls lie at the moment. As soon as our content comes from a media outlet, it is blocked by the digital giants.
Obviously, we don't target children on social media, because they're not supposed to be there. We target teachers and parents. We manage to get around the digital giants and their publishing bans a little, because we're registered as an educational organization.
However, the fact remains that echo chambers feed themselves, and we have few tools at our disposal, both for adults and for teenagers. We have few tools to provide them with so that they can read and see other things, so that they can get used to the idea that there is a media outlet in Quebec City, that there are media outlets in Montreal, and so that they know what they are called.
Some young people have never opened a newspaper in their lives and have never read a news story. It's important to point this out. In the past, newspapers were left lying around on tables or coffee tables, but whatever the case, they were visible. Now, they are no longer visible, and that is detrimental. Of course, a single initiative and education cannot do everything, but I think that if we are able to reach people, it can be a great help.
Thank you, Madam Chair.
I want to thank all the witnesses for being here.
Ms. Tessier‑Bouchard, I will continue with you.
I understand that parents have to give their children permission to be one of your readers, or, in fact, to become members of the Les As de l'info group. Is that correct? Did I get that right?
The parent has to give their permission, then. That's how you control access for 8- to 12-year-olds who register on your site, if I understand correctly.
Yes. Obviously, some young people are older, but we let them find out for themselves. However, the child must have consent from a parent or teacher to register. We have a section for children and a section for teachers. The teacher can send a letter to parents seeking authorization for the child to create a profile. We offer educational activities that teachers can use in class. There are two ways to verify that a child has not registered on their own. Verification is always done.
In October, Les As de l'info posted an article about the war in Gaza. It was a journalist covering the war in Gaza who answered our questions. Caroline Bouffard wrote a very interesting article about it.
I have four grandchildren. One of them is 11 and another one is 12.
Yes, they are.
My grandson loves hunting and fishing. If I ask him any question about the type of weapon to use for hunting a particular type of animal, he could probably answer me. He understands all that. He knows all about it.
Clearly, some of the young people who follow your social media accounts are also very interested in international politics. I looked at the post written by one of the young respondents, and it was quite detailed and provided a lot of information.
What measures are you taking to ensure that it is in fact these young people who are writing the responses? You must have a way of verifying this, right? I don't want to take the time to read out the responses of the young person in question. I won't mention their name either.
His username is Eugènequestan, like Québéquestan. He has a keen interest in international politics.
I read his comments and I know that I would never have been able to respond like that when I was 12.
On the one hand, they love politics, that's for sure.
We have held election night parties for every election. Thousands followed the election all evening and told us that they had to go to bed, that their mothers wanted them to go to bed, but that they didn't want to. It's very surprising.
On the other hand—
Sorry to cut you off, but apparently in the election you held, the Conservatives won. That is excellent news.
No, it's not us. However, one kid asked why the Parti Populaire, the People's Party in English, is called the Parti Populaire when it is not so popular.
We see that sometimes.
Sometimes we write to them in private to ask whether they got help writing that response and to let them know that it's no big deal if they did.
Sometimes we also check the email address to see if it looks like a child's address. We send an email to make sure and, if not, to ask if they know that it is a site for children. It doesn't happen often, but an adult could register saying they are 12 years old and make comments. That said, it's very easy to detect.
Yes, it becomes apparent quite quickly.
However, what measures can be taken to avoid, for example, bias toward one side or the other and to remain neutral in the types of questions asked? Obviously, when we ask a question, it is often because we already know the answer, or at least part of the answer. Here, the questions are aimed at young people aged 8 to 12.
Do you ensure that you remain neutral, at least politically and socially, in general?
Generally, yes, absolutely.
On Mondays, we have “pros” and “cons” that really give the arguments for and against. This week, it's cosmetics for children aged three and up. The arguments are very well balanced.
The questions we ask are designed to get them talking, to check whether they have read the article, and to elicit comments. Sometimes, when serious events occur that we know will greatly affect children, we may steer the question toward suggesting, for example, that they offer their condolences to the family of someone who has just experienced a tragedy, or ask them what they have to say. However, often we simply want to start a discussion with the children in the comments.
[English]
Ms. Sharifi, much of your work highlights the risks of appearance-focused influencer culture. At the same time, young people clearly see certain influencers, especially in fitness or lifestyle spaces, as positive ones. You even mentioned earlier that you're a social media influencer in many ways.
In your experience, how do young people balance these contrasting realities of the positive influences online and the more harmful content they encounter?
That's a great question.
It is difficult to ask youth to balance content that is pushed at them almost involuntarily. There is not a straightforward answer.
I would say that they follow health gurus and certain health-style influencers, absolutely, but there's no censorship for what type of content and exactly what kind of messaging is being put out to young people. We expect them to consume, interpret and critically think about content the same way that an adult would, and I think that in itself is problematic.
Also, I want to emphasize that we provide prevention education on digital literacy and online exploitation. I've educated over 100,000 people in Canada, most of them students, so the education piece is important.
They can think critically. However, with these influencers, especially the health ones, there's always harmful subliminal messaging, and that's where we see studies demonstrating that kids who consume this media have issues with body dysmorphia, eating disorders and things like that.
You've previously noted that children and teens are “constantly bombarded with images of influencers living their best lives”, and this changes how they see themselves. From your experience, how does this reality impact the way young people talk about their confidence or self-image?
It's being compared to a reality that does not exist, especially with the influencers we see right now and with the ones these kids follow. We're looking at a curated life, where money comes easily and where beauty standards are unrealistic without Botox, filler and filters.
They're comparing themselves to something that is absolutely not real and that is fabricated, and it's very difficult for them to tell the difference. Back in the day, when we read teen magazines, that in itself was harmful to our self-esteem. However, you're looking at constant hours of scrolling through these standards that are not based in reality.
It does have an effect. It does have an influence.
You noted in your opening remarks that influencer culture turns people into “products”. Could you expand on how this may impact the way young people present themselves online and then, further on, when they turn 18?
Influencer culture is normalizing the commodification of people. As an influencer, you are the product. That's where we see these brand deals and the ability to make thousands of dollars creating content.
When we start normalizing that you as a person with your body are a product for people to consume, the line starts to get blurred. People pay you for the content you're putting out there, and that content could be very fine and neutral. However, at some point, it becomes a blurred line where you're the commodity, and at what point does that branch into your body and into sexual, provocative activity and actions? That's where we're seeing the normalization.
We've changed the language of prostitution-like behaviours online. We no longer say “seeking arrangements”, “sugar dating” and “sugar-baby relationships”. We're not using those terms. The kids aren't using those terms anymore. They're not saying, “I have a sugar daddy.” They just have these followers, these fans, who are paying them money for the way they look and for the sexual conversations they're engaging in.
It feels more normal because of the influencer culture that, again, has normalized the commodification of people. It's the same way with things like OnlyFans. You're not engaging in porn; you're an adult content creator.
That language and this culture of influencing, where anybody is a content creator regardless of what kind of content they're putting out there, blurs the lines for these young people. That's what we're seeing. I go into schools, and you have no idea of the number of young girls who are, number one, already engaging in these behaviours and, number two, waiting until they turn 18 to engage in them because they're falsely glamorized.
Thank you very much, Madam Chair.
Ms. Tessier‑Bouchard, I want to come back to you. We won't let you rest for long, because I still have a lot of questions to ask you.
Earlier, I heard you talking about the young man who posted an aggressive comment in one of the sections of your web site. When you replied to him, he himself replied that he didn't know his comment would be read.
In fact, it is not only young people who feel that they are not being heard or read. This may explain much of the aggressive behaviour on social media. As members of Parliament here today, we can tell you that we receive emails from angry citizens who express their anger in emails or comments on our social media posts. As soon as we respond to them, the tone softens considerably, because, as these people tell us, they did not think we would read their comments.
So it's not specific to young people, but I notice that even young people feel that they are not being heard or listened to, or that they don't have a say.
I imagine you are well aware of this, because one of your missions is to make young people feel heard.
Absolutely. We often give them a voice and share their message with the people concerned.
Last week, Ms. Ruba Ghazal spoke out about the violence and hatred she receives online. We wrote an article to congratulate the children for being so respectful on the platform, and we explained to them that Ms. Ghazal was upset that she and other politicians receive hateful comments online. We asked the children what they would like to say to Ms. Ghazal. Over the weekend, more than a hundred of them responded. I lost count. We sent the responses to Ms. Ghazal and informed the children. Ms. Ghazal kindly made a video to respond to them, so we will post it on the platform.
Our goal is always to make them understand that they have power and that they must use it wisely. That's what we try to work on with them. We'll put the video online this week, if it's not already up today, because I haven't seen everything we've posted.
Children at a French-language school in Ontario noticed that their municipality's web site was in English only, so they asked for a translation of the site, and they succeeded. The mayor came to tell them they were right.
We try to show children that when they take concerted action, while respecting people and rules, they can change the world and that their voices matter.
Of course, we need to teach them to express themselves and articulate their requests properly, but the fact that they feel heard makes them less angry.
[English]
I'm going to direct my questions, as usual, to Ms. Hanson.
There are 28,000 students in the board of education. You mentioned the RAP. I'm going to mention it a bit because it started in 2003. Now you're in 14 high schools, not only in the public division, but also in the Catholic division. It's the restorative action program, where you have somebody from outside the school division in schools.
Talk about the trust that is needed for students with regard to this RAP person who, all of a sudden, is in these schools every day.
RAP workers are trained in mediation. They're often social workers or teachers. In the Catholic division, they're teachers; in the public division, they're social workers.
Students will often approach them as counsellors or they reach out to them for help, especially with bullying and a lot of the online platform issues we're seeing. We're seeing so many different facets of technological harms. For instance, the Chinese kids will all have a massive group chat together. Then issues will come out of that. Our RAP workers will step in and help sort out those problems.
There is a lot of trust between students and RAP, yes.
How do you get involved? What percentage of your time or resource time is spent dealing with the social media-related issues with students in your school?
As counsellors, we often hear something first and then refer it to administrators or to RAP, depending on the severity of what we're hearing. We then take on the role of helping kids deal with the fallout of what has happened. We support them in managing the anxiety and the large emotional issues that can often come with some of the things we're seeing.
What strategies do you use in school to deal with some of these issues? Are they very successful? Some are and some aren't, I guess. Maybe you could talk about that.
Most are not.
I'm the mother of four daughters. My oldest child is 33. She went through high school when MSN first came in. Honestly, since then, our school resource officers will tell us all the time that the majority of our issues are coming from online behaviour. It happens outside of school, and then it gets brought into the schools. That's why schools are so intertwined in all of these things. Dealing with kids and the fallout of stuff that's happening online is a daily occurrence.
Certainly, we'd like to see platforms take down images when we approach them. Right now, the only way we're able to get images and harmful things taken down quickly is by investigating and finding the perpetrator, and then getting that person to take them down.
When we've reached out to platforms, they cite privacy laws and those kinds of things to circumvent the responsibility for removing images or false and fake accounts. That would be one measure, for sure.
Also, 100% we need age verification, just based on children's brain development. Kids under the age of 16 don't have the capacity to manage.... As Tiana was saying, these platforms are not designed for children; they were designed for adults. Now we're asking eight- and nine-year-olds to navigate that, and it's really scary.
The other piece is that we can teach them how to be better online and recognize when there are harms coming. In saying that, I know this is addictive. The kids are addicted. A lot of kids are being bullied through Snapchat and they can't stop looking.
The reality is that we can't put it in the kids' hands. We have to help support.... I think we need to enmesh the two—teach them how to be good citizens and how to be better at being online. We also need to protect them by putting in some significant measures, and I think age is a huge one.
No, I'm sorry. You're well over time.
It's interesting, though, that privacy rules supersede safety rules. That's what I took away from that.
[Translation]
Mr. Ntumba, you have five minutes.
Thank you, Madam Chair.
Ms. Tessier-Bouchard, in your speech, you said that 66% of the young people in your organization said they were willing to advise the government or elected officials as a whole. We are talking about approximately 17,000 people.
Today we are looking at the way things are going in the world. We were just talking about social media. By way of a quick comparison, I would say that it seems like real life is being copied on social media.
Let me explain. Today, some people pay to follow influencers who do things online that may be questionable. In real life, they would be called sugar daddies. It's all the same, but under a different name.
I am the father of four children and I very much like your platform. How can we encourage young people to take an interest in reliable sources of information in a context where influencers dominate their attention?
For children, it's a little easier because parents are interested in what they consume online. Sometimes we go through the parents. We also go through schools a lot. We also run information literacy workshops with the Quebec Centre for Media and Information Literacy, or CQEMI. Journalists visit classrooms and play games and conduct investigations with children to help them spot misinformation.
Obviously, it's easier to reach children than teenagers. Teenagers are already on social media; they already have their habits and an algorithm that has been established for them. It's definitely harder to reach them. However, by employing people who have charisma and who speak to them frankly in non-pedagogical language, I think we can reach them.
HugoDécrypte, which is very popular in France and has over one million subscribers, recently arrived in Quebec. The channel already has some 130,000 subscribers in Quebec.
This shows us that there is interest and that we can do something for this age group. Most of HugoDécrypte's work targets teenagers and young adults.
It also shows us that, unfortunately, we are not there. The French end up dominating our market because we don't have the means to do what they do. There is even a sales team that comes to Quebec to seek advertising revenue for HugoDécrypte and then brings it back to Europe.
It's great that HugoDécrypte exists. We are not yet in a position to take that place. Unfortunately, we haven't taken it yet.
Earlier, my colleague Mrs. Thomas spoke about what Australia has done recently with regard to digital identity, namely prohibiting social media access for young people under the age of 16. I wonder if we should adopt such an approach.
In your opinion, are we talking about having a real digital identity to authenticate individuals and their age? You work with minors. Could this approach also make it possible to better regulate these platforms in the future?
Maybe. You have four children. When children are forbidden from doing something and suddenly it's within their reach, they tend to go overboard. If they're not allowed to eat junk food and there's junk food on a buffet table, children will, to put it bluntly, gorge themselves at the buffet.
Yes, we can block access. That said, it's essential to supervise them. We also need to try to influence algorithms or be able to take the necessary steps to ensure that certain sites are not viewed by a specific target audience. I get the impression that this is already happening. I looked into it, and I was told that it's fairly easy to do.
Should we ban this access? I'm not convinced. It will cost a lot of money and take a lot of resources. Perhaps we should take some of those resources and put them into supporting teenagers and children in developing their digital identity.
Thanks, Madam Chair.
Ms. Sharifi, first of all, thanks for the work you do for the Center for Exploitation Education.
You told this committee that you believe kids under the age of 16 should not be on social media. Can you tell us why you believe that and what you recommend as wider solutions to protect kids who are on social media?
When we're talking about social media or access to the Internet, it's not something that happens at a certain designated time. Kids are online 24-7. The online world and the in-person world are intertwined. There is the idea that we can monitor our kids online, or we can teach them, and that this is going to be enough.
As somebody who educates kids, I see the effect of education. It is effective. I'm not denying that. However, by not creating restrictions, especially age restrictions, we're putting the onus on parents. As an educator, I know education is effective. As a parent, I think it's not enough. If you are parenting a child, you're not around 24-7. My child is five. I gave her my phone for two seconds, and she ordered 12 banana loaves from Starbucks. The control and monitoring are just not there.
One other thing, as Stacy pointed out, is that we know kids have not fully developed their prefrontal cortex. Yes, they can critically think. Yes, they should have agency. Yes, they are brilliant. However, they are still youth. We cannot expect them to be able to deal with what comes with social media use.
We can teach them the signs of grooming. Again, it's effective. However, at the same time, if children have the need for belonging, attention, love, acceptance and community—which they all do—they're still vulnerable regardless of however much information and knowledge they have.
It's just the nature of the way these platforms work. Are we going to ask all of the platforms—and are they going to oblige—to put safety over freedom of speech? Are they going to be monitoring every single type of content being put out there? Are they going to be able to have control over every individual and the direct messages being sent to minors?
I ask because, in my experience seeing the levels of grooming, predators reaching out to kids, kids being exposed to harmful content and kids' unlimited access to so much information they should not have access to, I don't think we're going to be able to monitor and teach them enough that this in itself will keep them safe.
On age verification, I think there should be a minimum age. A couple of apps came out with age verification that used AI. It was extremely effective. Indeed, we tried to get around it in multiple different ways and we couldn't. If you tried to get around it, you were blocked from creating another account on the platform.
There are ways to do this. Yes, they will be costly, but the lives of children are priceless. I think it's even more costly when we're seeing the cost of exploitation and grooming and the aftermath of having to deal with those things. Moderation is important. Education and equipping kids are critical, but we also have to hold accountable these corporations, which have created businesses that are not safe for kids and not made for kids, yet give children and youth free access to their platforms and leave it to the parents to monitor that.
I'm going to speak very bluntly here. I don't believe Mark Zuckerberg cares about our kids, period. Meta does not care about our kids. I think Instagram for teens is a problem. It's just another way for them to make money. The safeguards they are claiming to have in place and what they're going to do are not going to be enough.
There are social media platforms being created specifically for young people. Those are wonderful. We should do that.
Again, we are putting trust and ownership in companies like Meta that have shown time and time again they don't care. I think Instagram for teens is harmful, period.
Thank you to all the witnesses. This is always fascinating. It's a really good conversation today.
One thing I fear is that we're getting a bit mixed up between online activity and social media. They're not really the same. Your kid going on the Internet to google something is very different from going on Instagram or Facebook, and the algorithm is part of that difference.
I totally agree on digital literacy. Sometimes I wonder about the extent to which digital literacy can handle the algorithm. I feel like it can help us to a great extent. Of course, it helps us understand how the algorithm works, which is always valuable, but I think the question we're asking ourselves is this: Is the algorithm such that it makes regulation very difficult, or is the nuance of digital literacy enough for a young person to handle it?
It's not just about disinformation and information. It's not just about the quality of the content they're getting. It's about being reinforced on feelings of acceptance or anger and all of these things.
André, maybe you could speak to the challenge of regulating social media or the algorithm itself, as opposed to online spaces. I think sometimes there is a confluence of the two that's maybe not always helpful.
It's a great question, and I really agree with the way you framed it.
There are broader questions around all of our addiction to our devices and the way we engage on devices. Then there are specific sets of questions related to social media platforms and other types of platforms or apps on devices.
Canada has been working at this for five years now: How do we get to a place where we regulate social media? Our view is that we were very close at the end of the last Parliament. We were very close to a bill that could get through. It was a bit of a compromise package, but basically, it would have put the duty of responsibility on platforms, with a focus on the seven categories of harmful content. It would have put requirements on them to deal with those harms, with a specific onus on the two most egregious categories and a specific focus on kids, and a regulator who could effectively oversee these things.
We just need to move forward with those things. What we've learned is that we need heightened provisions to focus on kids, and we've talked about a lot of those. We've learned a lot, even in the last year or two. Tiana speaks so thoughtfully about influence, for example, and how that space has been evolving. Obviously, the impacts of AI are something that we'll need to consider.
I really think we need to get on with it, and I don't think we need to go back to square one. We know the broad strokes of what we have to do. I hope the government, which has signalled it should be coming up with a bill soon, and Parliament, which has had a couple of cracks previously at going through this legislation, recognize the urgency of moving this through quickly.
In closing, I think all of us have to deal with these broader issues of how we're adapting to our screen time. I am a terrible influence on my kids. My wife and I will have our screens in front of our faces. There's a broader set of social issues and social norms that we all need to think about, and then there's a specific set that we need to deal with on social media. That's the way I think about it.
I think I'm speaking the same language, André. It comes from experience. I'm not addicted to a website; I'm addicted to social media services that are feeding me snowboard videos and guitar videos on my Instagram. They're very different things and I too struggle with it.
I've watched it change throughout my career in the last 20 years. I was a musician. In my life, the way I promoted my shows was by way of social media, but even control over the relationship between what I produced and the audience completely changed because it became about the algorithm.
This is part of the conversation we should probably have too. Even the narrative of the influencer being in a relationship with the fan isn't really accurate anymore. The algorithm now feeds suggested content, so the relationship is between the algorithm and the user, not between the creator and the user, whereas that direct relationship was our way to build fans. That was our way to build a following. Now it's very much mediated, with the algorithm in between.
I hear you very much, André.
I wonder if Dimitri has any comments.
[Translation]
Thank you, Madam Chair.
Ms. Tessier‑Bouchard, I'm coming back to you again.
My colleague, Mr. Myles, raised an interesting point. There is a distinction to be made between accessing the Internet, surfing the Internet and browsing social media. The interaction is very different.
Based on your experience, do you think it would be appropriate to have an initiative aimed at young people, the same age group, 8 to 12 years old, that would be a real social media platform, a bit like X, i.e., a media platform where young people interact based on posts they have made themselves? Would it be viable? Obviously, there would have to be supervision if the goal is to educate. Could it work?
Could it work? Certainly. Would it be useful? Would it be desirable? I'm less sure about that. Elementary school children are still very young.
Let's put that off until a little later. Let's put it closer to adolescence. Maybe they're too young in elementary school, but in high school, they would definitely need some kind of guidance or education on the use of social media, because they're already fully immersed in it.
Would this be an attractive offer for them, but also useful for us as a society, as adults, and as parents?
I don't know if I have everything I need to answer that.
I get the impression that, unfortunately, what interests them is being part of the adult world, and that's also what works against them at the same time.
As for the adult world of social media, I think we'll wait a little while. I'm not sure we want to send them there.
Let's go with Les As de l'info and the model you have put in place. I am sure you are not making much profit with Les Coops de l'information. It is an investment. A project like that is rooted in conviction.
If we want to talk about proposals and recommendations for the purposes of this committee study, do you think it would be a good idea to somehow require companies that produce information to create a module for young people dedicated to digital literacy and information literacy? Should this be part of our recommendations, in your opinion?
Actually, I don't know if that's necessary.
What is needed, in my opinion, is for the media to join forces. We are already collaborating with Réseau.Presse, the Association de la presse francophone, and all French-language media outlets representing minority communities in Canada. We already have a page in Le Devoir, which has joined forces with Les As and publishes a digital page and a print page for children every Saturday.
I feel that we need to encourage the media to work together for the good of children, rather than launching lots of initiatives that will cost a lot of money for more or less the same result. That's why we're trying to rally everyone around the project.
No, we don't make money from it, that's for sure. I've been creating content for young people for 35 years. We've never made money from content for young people, but it's still a first step into the world of information, into the digital world. If everyone gets together on a project, it already helps more children. The goal is to reach as many children as possible.
What you are doing is great. It's a nice web site, even for adults. Congratulations. I encourage you to keep going. I also encourage you to ask for and seek the necessary funding, because initiatives like yours deserve to be encouraged much more.
Thank you very much, Ms. Tessier‑Bouchard.
[English]
Tiana, I'm going to come back to you. I'm hoping you can provide a few comments with regard to the creation and use of deepfakes. I know this is something you have some expertise on.
On your website, you state that “over 90% of deepfake content online is pornographic” in nature, which is alarming, especially when it's often young people who are used to make these images. I would love for you to expand on what you're observing, the impact this is having on young people and how we might be able to avoid this in the future.
I would say this is one of the largest trends we're seeing. These AI deepfake models have completely changed the game, from peer exploitation all the way to predation and sextortion. Millions of child sexual abuse materials are being generated constantly through these deepfakes. We're seeing, number one, peers now using AI tools to create deepfakes of other peers. Sometimes it's for blackmail; sometimes it's for entertainment, oddly enough. We're also seeing that grooming for sextortion has completely changed. The number of boys who are reaching out to us because some person they were speaking to was able to take just one picture they had on their profile, use it to create a deepfake nude and then sextort and blackmail them shows there's an alarming rate of that happening.
When we talk about prevention, number one is that certain tools should not be accessible. That's just number one. Number two is the education piece for young people. It's something we're missing. We can't just talk about digital literacy and not incorporate consent and healthy boundaries into that conversation, showing the parallel of what that looks like in person and online. I think we're noticing, because of the nature of what kids are exposed to on social media, that there is almost a normalization of using somebody's content against them without their knowledge and consent.
Those would be my two recommendations: If there's a way to do it, flag these platforms—because they're not on the dark web—and start to engage in prevention education for our young people.
Thank you for that.
I'm going to take you one step back, just for the sake of this report and its content, and ask you to describe what a deepfake is and how it's created.
A deepfake is essentially any kind of imagery created by AI that is not real. It's not the actual person; it's based on a person. When we say “deepfake”, we're talking about an AI-generated image, often of a real individual. When I use the terminology “deepfake nude”, I'm referring to an AI tool that is using a person's picture, a real picture of a person, to generate a nude image or video of them.
Thank you very much. I appreciate that.
I'm also hoping that you can talk about some of the research you've done with regard to AI and how it can be used to detect human trafficking online. I read some research you did within B.C. You did some research with regard to sex work ads in British Columbia, and you found that 40% of the ads pointed to sex trafficking or child exploitation. Maybe you can comment on that.
AI is a powerful tool. Technology is a powerful tool. It can be used for good. Of course, when individuals like me and professionals in this space come to use it, it can have great effect.
In that research project, we were training web crawlers to detect situations and instances of human trafficking. My role, as an expert in the space, was to come up with the content that would train them. In black and white terms, how would we determine whether a post on these common escort sites is indicative of somebody being trafficked or of child exploitation and trafficking? Yes, we did find that 40% of them were definitive instances of trafficking. The percentage would have been much larger if our threshold for "definitive" had not required a post to fall into at least three of the different categories we had outlined as indicators of trafficking, if that makes sense.
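[Editor's note: The testimony above describes a simple thresholding rule, in which a post is treated as a "definitive" instance only when it matches indicators from at least three distinct categories. The sketch below is illustrative only; the category names and keyword lists are hypothetical placeholders, not the witness's actual research criteria or tooling.]

```python
# Minimal sketch of the "at least three indicator categories" rule described
# in the testimony. Category names and keywords are hypothetical examples.
INDICATOR_CATEGORIES = {
    "third_party_control": ["managed by", "call my friend"],
    "age_signals": ["new in town", "very young"],
    "movement_signals": ["just arrived", "tonight only"],
    "restricted_contact": ["no texting", "contact agent"],
}

MIN_CATEGORIES = 3  # threshold for calling an instance "definitive"


def matched_categories(ad_text: str) -> set:
    """Return the indicator categories with at least one keyword hit in the ad text."""
    text = ad_text.lower()
    return {
        category
        for category, keywords in INDICATOR_CATEGORIES.items()
        if any(keyword in text for keyword in keywords)
    }


def is_definitive(ad_text: str) -> bool:
    """Flag the ad as 'definitive' only if three or more categories match."""
    return len(matched_categories(ad_text)) >= MIN_CATEGORIES
```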
Thank you very much.
I'll continue on.
Dimitri, maybe you could take a shot at answering the questions we were discussing before about some of the regulatory challenges.
Sure.
I appreciate the point you brought up about the difference between the web and social media. That's an important nuance. It's one that is continually conflated in how these things are actually used by young people. That's one thing we've noticed from a media literacy and education perspective.
Even communicating what that difference is to young people can in itself be important, especially with younger kids who, in our work and study with them, don't really know the difference between a social media platform and a search engine, for example. They are all places to get your information. They all intertwine, and one leads into another and back again. I don't think you can silo them in the same way. We appreciate, especially in terms of regulation, that we want to do that, but it's important to think about how they are also intertwined.
On some of the broader issues, especially the algorithm, one thing comes up a lot when I think about regulations and age restrictions. One of the questions I always have is this: If there's going to be an age restriction, what else is going to happen alongside that policy? As we all know—I think we can all point to people in our lives on this—if we're talking about the personalization algorithm, turning 16 or 18 doesn't make one magically immune to the effects of that algorithm. Alongside any types of age restrictions, is there anything happening to address some of these larger underlying issues in terms of both the content of social media and the infrastructure of social media? That's something I would want to consider if I were thinking about that, and algorithmic personalization is central to it.
I also want to reflect on some things that were said before. I completely agree that not every issue we're talking about is an education issue. Certain things are media literacy issues and certain things might be regulatory issues. Parsing those things is one of the major challenges for the committee.
That's exactly the challenge we're facing right now. This has been such a great conversation, because the digital literacy piece is so important. As we've all said, there's a generation of us who didn't have it and we're catching up on it, but there's also the regulatory piece, and I fear it might not be enough.
A lot of it comes down to transparency as well, especially when we're talking about the infrastructure. Can you chat a bit about what your experience has been in social media infrastructure and algorithmic infrastructure regarding transparency? What are the possibilities there?
From an educational perspective and a media literacy perspective, we return to that as a key piece of the education for youth: understanding why they're seeing what they're seeing, how what they're seeing is not necessarily natural and that there is some kind of infrastructure underneath. It's about making this thing that is invisible more visible. That is continually a struggle. It's a struggle because we don't know, and researchers don't know and don't have access to, what's actually going on, so we can only ever talk about it in abstract terms. We have this mythic figure of “the algorithm” when we talk about digital media and digital media literacy, especially with young people, and it can sometimes seem like this constant, this entity, that can't be addressed, can't be understood and can't be regulated.
There's something happening strategically when that happens, but it does make educating about it difficult, especially because it can also become a scapegoat where everything gets blamed on the algorithm when there might be other underlying causes of some of the effects we're seeing.
For me, on transparency, that's exactly it. It came up in the AI conversation. Many of the questions we were asking on AI came down to the fact that we're fishing in the dark a bit, not knowing where we're going because we don't know what the infrastructure actually looks like under the hood. That's what I'm a bit challenged with.
Last week, we had a witness who spoke about the ads. I thought that was a really interesting way to get at it from behind, because it's all about the monetization of attention.
Ms. Sharifi, maybe you could speak to the commodification of attention in ads and whether that's an effective way to address some of this.
Could you clarify what you mean by addressing this through ads? Are you talking about educational ads?
I think that's one of the harms we are seeing. That's why one of my requests for regulation is that we should not be able to have ads targeted to minors, especially because, in my experience, I've seen certain ads that are completely inappropriate that would come across a child's account. I've reported them, but I've not received any kind of response or follow-up there.
Again, it's about holding the platforms accountable for allowing anybody to create an ad and target it at specific populations that shouldn't be targeted. One example I'll give is Seeking. The company Seeking used to be Seeking Arrangements. It is now running tons of ads, and those ads have come across the burner minor accounts we made for research purposes.
Mr. Mason, I'm sorry you haven't had any questions so far today. Do you have any thoughts or reflections on the conversation we've had at the table?
Just cut me off whenever you need to. I won't keep track of the time.
These problems are not new. These are problems that were occurring 20 years ago. I watch, I listen and I recognize that these are all the same problems I was dealing with 20 years ago. The amplification of the problems, of course, is true. Who can deny that we want to protect all of our children?
At more of a foundational level, I think whatever legislation is passed today is out of date tomorrow. We're dealing with a difficult problem, which is, to put it a different way, that traditional institutions move linearly. We know how that works: one thing after another after another. The Internet moves exponentially, and that's going to make it very hard to keep up.
I have many thoughts, but one that I think is important is that any legislation that's created has to be dynamic. It can't be static. The space moves so fast, at warp speed, that we won't be able to keep up with it. I can guarantee that. We have to think through that frame or through that lens. It's a whole different perspective when it comes to legislating. It's new. I think static legislation will fail. I even have my doubts about whether dynamic legislation can keep up, because this space moves so fast.
I think we have to figure something out to protect children, of course. I have many other thoughts about this. I don't do studies, and I value studies, but I've had first-hand experience for nearly 30 years. I've had many intimate discussions through restorative circles. I've been in so many offices with parents and kids for bullying, and all these things are not new. They're not new at all. I know the patterns.
Mr. Mason—and I'll extend this invitation to all the witnesses at committee today—we would like to hear any other thoughts you have that you didn't have a chance to express today. Please send us an email through the clerk. Anything that maybe you forgot to say and any other information you may have, please send it to us. We can use that information. The analysts will use it as we prepare our study in the new year.
That is all the time we have for today, but I truly appreciate your participation. Once again, please send us any final thoughts you have. Thank you.
With that, I will adjourn this meeting. Merry Christmas, everyone.