SECU Committee Meeting








Standing Committee on Public Safety and National Security


NUMBER 023 | 1st SESSION | 44th PARLIAMENT

EVIDENCE

Tuesday, May 10, 2022

[Recorded by Electronic Apparatus]

  (1100)  

[English]

     I call this meeting to order.
    Welcome to meeting number 23 of the House of Commons Standing Committee on Public Safety and National Security.
    We will start by acknowledging that we are meeting on the traditional, unceded territory of the Algonquin people.
    Today's meeting is taking place in a hybrid format, pursuant to the House order of November 25, 2021. Members are attending in person in the room and remotely using the Zoom application. Members and witnesses participating virtually may speak in the official language of their choice. You have the choice at the bottom of your screen of floor, English or French.
    Pursuant to Standing Order 108(2) and the motions adopted by the committee on Thursday, February 17, 2022, the committee is resuming its study of the rise of ideologically motivated violent extremism in Canada.
    With us today are Vidhya Ramalingam, co-founder of Moonshot, and Adam Hadley, executive director of Tech Against Terrorism.
    You will each be given up to five minutes for opening remarks, after which we will proceed with rounds of questions.
     Mr. Hadley, you now have the floor for up to five minutes for your opening remarks, sir, whenever you're ready.
     Good morning, and many thanks for the invitation to speak at the committee hearing today.
    I'm Adam Hadley, executive director at Tech Against Terrorism. Over the next few minutes, I'd like to explain more about who we are at Tech Against Terrorism and what we do, and provide some clarity about our position on some of the discussion points.
    Tech Against Terrorism is a not-for-profit based in the U.K. Ours is a public-private partnership. We were established with UN CTED, the United Nations Counter-Terrorism Committee Executive Directorate, in April 2017. Our mission is to work with the global tech sector, in particular smaller tech platforms, to help them tackle the terrorist use of their services while respecting human rights. Our work is recognized in a number of UN Security Council resolutions, including resolution 2354 and resolution 2395. As a public-private partnership, we work with the major democracies—governments such as the Government of Canada, the U.S., the U.K., Australia and New Zealand—alongside the tech sector, which includes big tech and smaller tech platforms.
    The reason we focus on smaller technology platforms is that many of these platforms have limited capacity and capability to deal with terrorist use of their services. Our mission is to support these smaller platforms, free of charge, to improve their response to terrorist activity and terrorist content. In particular, over the past two or three years, we've seen a significant increase in migration from the use of very large platforms to smaller ones. This represents a strategic vulnerability in the response to terrorist use of the Internet.
    Tech Against Terrorism monitors over 100 tech platforms on an hourly basis. We also monitor around 200 terrorist-operated websites. Overall, we work with 150 platforms, providing a number of services to help improve their response. We also work alongside other organizations focused on online counterterrorism, such as the Global Internet Forum to Counter Terrorism.
    In detail, our work at Tech Against Terrorism focuses on understanding the nature of the threat. This is based on open-source intelligence, in order to understand the detail of how terrorists use particular platforms. We use this intelligence and insight to establish relationships with these platforms, reach out to them and evaluate the extent to which we can provide support.
    This results in a mentorship service that we offer free to platforms. The mentorship service is designed to build capacity. We do this alongside the GIFCT. Of note, we've developed some software, called the terrorist content analytics platform, which alerts small platforms to the existence of terrorist content. The TCAP, the terrorist content analytics platform, has so far been funded by the Government of Canada. This has resulted in 30,000 URLs—individual items of terrorist content—being referred to platforms, with more than 90% of this content on smaller platforms removed. We've also built a knowledge-sharing platform, which is designed to share best practice information and guidance with smaller platforms. We actively work to have terrorist websites removed from the Internet.
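    The alerting pattern described here—detected URLs being referred to the platforms that host them—can be sketched roughly as follows. This is an illustrative sketch in Python only; the names, fields and logic are assumptions and do not reflect the TCAP's actual implementation, which is not described in the testimony.

```python
from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass
class Alert:
    url: str          # individual item of suspected terrorist content
    platform: str     # host platform the alert will be sent to
    designation: str  # designated entity the content is attributed to

def build_alerts(detected_urls, designations):
    """Group detected URLs by host platform so each platform receives
    only the alerts relevant to its own service (hypothetical logic)."""
    alerts = []
    for url, entity in zip(detected_urls, designations):
        host = urlparse(url).netloc.lower()
        alerts.append(Alert(url=url, platform=host, designation=entity))
    return alerts

# Example: two hypothetical detections referred to their host platforms.
for a in build_alerts(
    ["https://smallplatform.example/post/123", "https://forum.example/thread/9"],
    ["Designated entity A", "Designated entity B"],
):
    print(f"alerting {a.platform}: {a.url} ({a.designation})")
```

    In practice, as the later testimony on vetting makes clear, such a service would also verify recipients and apply human review before any alert is sent.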
    I should stress that we focus on violent Islamist extremist organizations and, of course, the extreme far right. Our work is typically based on designation. In upholding the rule of law, we believe that designation is a critical mechanism to ensure that platforms remove content in a timely fashion. Therefore, we applaud the Government of Canada for its pioneering work in designating organizations from across the terrorism and violent extremism spectrums.
    In summary, we call for governments to focus on the rule of law and how they regulate, with a focus on providing definitional clarity to tech companies so that they can improve their action. We believe that designation is a crucial tool that can be used to help provide that clarity, so that small tech platforms get better at dealing with terrorist activity.
    Finally, we would stress that proportionate measures are important. Often, regulation in this area is primarily focused on big tech. We understand the concern here. However, the current threat picture is such that there is a significant amount of terrorist activity from across the spectrum on smaller platforms. Often, regulation fails to take this into account and fails to take into account the nature of adversarial shift—in other words, the way terrorist activity changes or adapts in response to the measures being used against it, in order to evade them.
    In summary, many thanks for the invitation to speak today. I look forward to participating in the session.

  (1105)  

    Thank you.
    Thank you very much.
    I would now invite you, Ms. Ramalingam, to make an opening statement of up to five minutes whenever you're ready.
    Thank you, Chair and members of the committee.
    My name is Vidhya Ramalingam. Eleven years ago, when a far-right terrorist murdered 77 people in Norway, I led the EU's first intergovernmental initiative on far-right terrorism. It's in that role that I first started working with Public Safety Canada and saw first-hand the resilience and strength of Canadian practitioners working to ensure that no more Canadians take a violent path.
    I now lead Moonshot, an organization working with the governments of Canada, the U.S., the U.K., Australia and other global partners to build online prevention capabilities fit for the challenges of the 21st century.
    The threat posed by IMVE actors and groups is undoubtedly growing more sophisticated both online and off. Moonshot started studying Canadian engagement with this content on search engines in February 2019. In a little over a year, we tracked over 170,000 individual searches for IMVE content across Canada. As Canadians spent more time online as a result of the COVID-19 pandemic and lockdowns, the engagement increased. Searches for far-right content increased 19% weekly during lockdown measures. In Ottawa, we tracked a 35% increase after Ontario's state of emergency was declared.
    We have seen greater engagement with conspiracy theories. Over a year we tracked over 25,000 searches across Canada for white supremacist conspiracy theories such as the Kalergi plan, the great replacement and white genocide.
    In partnership with Public Safety Canada, we also produced the first systematic study of the Canadian violent incel community online. The Canadian incel ecosystem is spread across both niche and mainstream platforms, including Twitter, YouTube, Telegram and Reddit. Canadian users on incel sites were 65% more likely than global users to post news stories about incels and were especially celebratory of incel violence that occurred in Canada.
    However, we are not without tools to respond. Perhaps the greatest challenge for governments today is how to bring our prevention models into the 21st century. We have to intervene where extremist groups are seeking to recruit: online. In 2022, every prevention model needs a robust digital component. This must be delivered safely, ethically and responsibly, with user privacy at its heart.
    Our recommendations for Canada are, first, strengthen pre-existing behavioural health and other wraparound services for prevention, specifically mental health support, community outreach as well as adjacent fields such as suicide prevention. Frontline practitioners such as Équipe RAPS and CPN-PREV in Quebec, OPV in Alberta and Yorktown Family Services in Ontario are best positioned to intervene.
    Our second recommendation is to adapt the entire suite of prevention services for online delivery. In a 2017 study, Moonshot found that only 29% of Canadian practitioners were using social media in their prevention work. We need to build the digital literacy and capacity to deliver their work online. There are an abundance of online tools and methodologies we can use. For example, from 2019 to 2020, we worked to ensure that every Canadian searching for extremist content online would be offered a safer alternative to terrorist content. We used advertising tools to safeguard approximately 155,000 violent, far-right searches and around 16,000 Daesh and al Qaeda related searches. The natural evolution of this work should see the use of these tools to connect Canadians with prevention services that can work with them to change their paths.
    Finally, third, signpost terrorism prevention services such as hotlines, counselling and exit offers online. Evidence shows us that this works. Moonshot found audiences at risk of far-right extremism in the U.S. were 48% more likely than the general public to take up offers of psychosocial support services online. In the last year alone, Moonshot has channelled over 150 individuals at risk of violent extremism across the U.S. into text message counselling sessions via online engagement. Now we're working with the U.S. government to launch state-level models to off-ramp at-risk Internet users into local support programs, starting with New York state.
    Here in Canada, we need to signpost local services to Canadians engaging with extremist content online. To do this, local providers and networks like CPN-PREV need sustained investment to run interventions, extend their service hours and support the professional and mental well-being of staff. These organizations fill a critical gap in Canada's public safety infrastructure. The government should invest in these models and support efforts to take their interventions online, where their services are needed the most.
    Thank you for your time today.

  (1110)  

     Thank you very much.
    Both witnesses were right on time.
    I'll now open the floor up to our first round of questions. The first slot goes to Mr. Lloyd.
    You have six minutes, sir, whenever you're ready.
    Thank you, Mr. Chair.
    My first question is for Moonshot.
    Do you receive funding from the Government of Canada?
    Yes, sir. We receive funding from the Government of Canada.
    Do you exclusively focus on the far right, or do you focus on extremism across the political spectrum?
    We focus on extremism across the political spectrum. Our work in Canada has always focused on the violent far right as well as Daesh and al Qaeda inspired terrorism.
    I wouldn't accuse Daesh of being far right or far left.
    Do you focus on anything related to anarchists or environmental terrorism? Last year there were dozens of churches burned down in Canada. Have you done any research on those specific instances?
    Absolutely, sir. We look at domestic violent extremism across the ideological spectrum. As an organization, we follow the evidence. This doesn't mean we're seeing the threat from other kinds of ideological groups diminish. We just look at where the evidence base takes us, and we'll proportionately invest in prevention based on that.
    Can you give any examples of your group's investigation into environmental extremism or anarchist extremism, or into anti-religious groups specifically targeting Christian or Jewish groups?
    We absolutely do. In every country where we deliver both research and interventions, we cover far-left extremism, as well, where there's violence or where violence is incited.
    Can you give some examples of that in Canada?
    In Canada, our funded projects specifically look at the violent far right and al Qaeda and Daesh inspired terrorism. More recently, we've looked at incel-inspired violence. We have not yet done work funded by the Canadian government that looks at far-left extremism, but I would welcome the opportunity to do so.
     You're saying the Canadian government is only funding you to look into Islamic or Daesh related extremism and far-right extremism. Is there no funding from the Government of Canada to deal with environmental extremism or bigotry against Christian or Jewish groups in Canada?
    I can't speak to the full range of funding the Canadian government is currently providing across those issues—
    Speak about your group, specifically.
    In our group, specifically, work has focused on al Qaeda, Daesh, far-right extremism and incel violence. As I mentioned, we would welcome the opportunity to do work across the ideological spectrum.
    Are you aware of any of your peer groups receiving funding from the Canadian government to look at far-left extremism in Canada?
    I'm not personally aware of the full extent of programs that have been funded by Public Safety, but I believe all those projects are publicly listed online. I would welcome questions around that to look at the public releases around funding by the Canadian government.

  (1115)  

    I appreciate that.
    As somebody who works in this field, you're surely familiar with other organizations that also receive government funding and do similar work. To your knowledge, there are no funded studies from the Government of Canada to deal with environmental extremism or far-left extremism.
    I wouldn't have the background to answer that question, sir.
    Answer as far as you know.
    I can only speak for—
    No, you don't know.
    I would not have awareness of the full range of funded projects, so I wouldn't feel comfortable saying one way or the other. Our group has not received funding to work beyond Daesh, al Qaeda, far-right extremism and incel terrorism, but we would certainly welcome it.
    Prevention should be proportionate based on the data.
    Would you say that—if it were proven the government was not funding research in this area—this is a blind spot of the government?
    My belief is that research should span the entire ideological spectrum. We should use data to inform where prevention should be based. I would also mention that most prevention programs should really be cross-ideological. Every prevention program should be equipped to handle any case of violence, whether it's coming from violent far-left groups, violent far-right groups, or al Qaeda and Daesh inspired terrorism.
    I couldn't agree more.
    I'll move on to Mr. Hadley.
    In terms of your work in countering terrorism.... We had a recent case in Montreal. It's still under investigation. A former Conservative cabinet minister and staffer for RBC, Michael Fortier, had his two vehicles torched in Montreal. An anarchist environmentalist group claimed responsibility for the attacks because RBC is funding oil and gas projects and pipeline projects in Canada.
    We've been told that attribution is a key thing we need in order to deal with this. Can you comment on the importance of unmasking who is truly behind these attacks?
     Many thanks.
    Could you clarify what you mean, in terms of importance?
    For the attribution of who is behind the attacks, how important is it to unmask the actual people behind these terrorist attacks? How would you go about doing that?
    I think that's probably a question for law enforcement and intelligence agencies. Certainly the work at Tech Against Terrorism isn't focused on identifying individuals, but rather supporting tech platforms in reducing their activity online. Where appropriate and where there is a realistic threat to life, we ensure that the alert is sent to the relevant authorities, including in Canada.
    As to the attribution, I think that's probably a question for law enforcement in terms of the measures that they may have and the mechanisms they have available under Canadian law in order to conduct surveillance and carry out intelligence operations.
    Thank you very much.
    I would like to invite Mr. Chiang to take the floor for six minutes.
    Go ahead whenever you're ready, sir.
    Thank you, Mr. Chair.
    Thank you to the witnesses for taking the time to be with us today.
    My question is for Mr. Hadley.
    In 2017, your organization launched a knowledge-sharing platform, which was a collection of tools that start-ups and small tech companies can use to better protect themselves from terrorists' exploitation of their services.
    Could you provide this committee with some more in-depth information about how this platform works and some of the results you have seen?
    Of course. Many thanks for that.
    The knowledge-sharing platform is designed as a tool that's free to access for tech platforms. Its objective is to improve the understanding that those running small platforms have of the terrorists' use of the Internet. It spans the spectrum of terrorism and violent extremism. Within the scope are violent Islamist extremism, the extreme far right and a number of other terrorist organizations that are designated by other international organizations.
    In detail, the KSP provides information on logos associated with designated groups, the terminology associated with them and phraseology that may be typical of the content that appears. There's also detail on workflow in order to support platforms in making better content moderation decisions. There is also a significant amount of information about designation lists at the international level and a summary of global online regulatory efforts and many other elements. For more information, the website is ksp.techagainstterrorism.org.

  (1120)  

    In essence, does anybody have access to this website of yours?
    We are careful to vet access in everything that we do. In fact, in everything I will say during this committee meeting, I will assume that terrorists and violent extremists are aware of what we're saying, so there is always concern about not disclosing too much.
    Tech Against Terrorism is distinctive in that much of our work is done confidentially and privately. In order to build trust and confidence with smaller platforms, much of this must be done in private. In particular, there are grave concerns about access to the methodology and information that small platforms have. We know that terrorists and violent extremists are extremely adept at changing their use of the Internet. The more information they have about content moderation, the easier it is to change their methodology and therefore subvert mechanisms designed to stop that activity, so we have to be careful.
    In detail, for every individual who applies for access to the knowledge-sharing platform, we will ensure that they belong to a real platform. We will email them, call them and ensure that the knowledge that's being shared is appropriate for that audience.
    Excellent. Thank you, Mr. Hadley.
    In 2018, your organization launched a data science network, which your website calls “the world's first network of experts working on developing and deploying automated solutions to counter terrorist use of smaller tech platforms whilst respecting human rights.”
    Could you tell this committee more about automated solutions to counter terrorism?
     Of course. Automation can cover a number of separate activities. Often we might discuss algorithms, which certainly are part of automation. However, in our experience, the biggest challenge that small platforms have isn't in the basics but in the workflow. Content moderation automation is a simple mechanism in principle. It's identifying content that may fall afoul of the law or terms and conditions. It's then assessing whether this content does pass those thresholds. It's taking action, recording that action and reporting on it. It's also providing an opportunity for a user to appeal that decision. The workflows are the complex part; with smaller platforms in particular, most of our activity in supporting platforms is focused on that basic infrastructure.
    You could argue that this is all about automation. It's about trying to ensure that small platforms are able to accurately identify and moderate content in a scalable way. Unlike big platforms, smaller platforms have very small teams. They often have no or limited revenue or profitability, and they tend not to have particularly sophisticated technical infrastructure. That explains partly why terrorists and violent extremists will often use smaller platforms, because they know it's so much harder for those smaller platforms to remove the material.
    When we're working with smaller platforms, we provide a number of recommendations about how they can best use technology and automation to make the content moderation process more accurate and more successful as a result. Automation can include various other mechanisms such as hashing or hash-sharing. Potentially it can ultimately include searches of keywords and terminology, and it could involve more sophisticated mechanisms to understand whether a symbol is in an image or a video.
    However, most small platforms rarely have the capacity or capability to build complex automation. The automation that we typically support with is fairly simple and it's about helping them make the right decisions and record the decisions that they're making. An important principle in all content moderation, at least in our view, is transparency. Therefore, we recommend that platforms of all sizes invest in transparency reporting and, for that, automation is required to understand what has been removed and what's been left up.
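    As a rough illustration of the workflow outlined above—identify, assess against a threshold, take action, record the decision, allow an appeal, and report transparently—here is a minimal sketch in Python. It is not Tech Against Terrorism's or any platform's actual tooling; the single decision rule, names and fields are assumptions made for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ModerationRecord:
    item_id: str
    decision: str      # "removed" or "left_up"
    reason: str
    timestamp: str
    appealed: bool = False

class ModerationLog:
    """Records every decision so a transparency report can be produced."""
    def __init__(self):
        self.records = []

    def decide(self, item_id: str, matches_designated_content: bool) -> ModerationRecord:
        # 1. identify and 2. assess against a threshold (here a single boolean rule)
        decision = "removed" if matches_designated_content else "left_up"
        reason = ("matched designated-entity content" if matches_designated_content
                  else "no policy match")
        # 3. take action and 4. record it
        record = ModerationRecord(item_id, decision, reason,
                                  datetime.now(timezone.utc).isoformat())
        self.records.append(record)
        return record

    def appeal(self, item_id: str) -> None:
        # 5. give the user an opportunity to appeal the decision
        for r in self.records:
            if r.item_id == item_id:
                r.appealed = True

    def transparency_report(self) -> dict:
        # 6. report on what was removed and what was left up
        return {
            "removed": sum(r.decision == "removed" for r in self.records),
            "left_up": sum(r.decision == "left_up" for r in self.records),
            "appealed": sum(r.appealed for r in self.records),
        }

log = ModerationLog()
log.decide("post-001", matches_designated_content=True)
log.decide("post-002", matches_designated_content=False)
log.appeal("post-001")
print(log.transparency_report())  # {'removed': 1, 'left_up': 1, 'appealed': 1}
```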
    Thank you, Mr. Hadley.
    I would now like to turn to Ms. Larouche who has a six-minute block.
    Whenever you're ready, please proceed.

[Translation]

    Thank you very much, Mr. Chair.
    I thank both witnesses for taking the time to appear before the committee today.
    My first question is for Ms. Ramalingam.
    Ms. Ramalingam, in your opening remarks, you mentioned the tragic event in Norway. To enlighten the committee, I would like to know what you think about the recently passed European legislation on illegal content online. What can we learn from that?
    I'd also like to hear what you have to say specifically on the issue of liability for technology companies.

  (1125)  

[English]

    Thank you very much for your question. That tragedy, which I referred to, from nearly 11 years ago now was really a wake-up call for European governments. That was really the first moment that European governments realized that there had been a threat that had been completely overlooked.

[Translation]

    I have a point of order, Mr. Chair.

[English]

    Yes.

[Translation]

    I'm sorry, Ms. Ramalingam, I will have to ask you to repeat yourself.
    Mr. Chair, there is no interpretation.

[English]

    We did not have interpretation, so please go back to the beginning of your answer, and let's ensure that we have interpretation.
    Proceed, please.
    Thank you for your question, Madame Larouche.
    I was mentioning that the tragedy you referred to from 2011 was really a wake-up call for European governments. That was the first moment they realized that they had really been overlooking this threat.
    I really welcome the recent legislation in Europe and the efforts to hold tech companies accountable. Tech companies are often very reactive rather than proactive. It often takes a tragedy for the tech sector to be compelled to act. We saw this after the massacre in Christchurch and after January 6. They so often wait either for tragedy or for governments to impose legal and commercial imperatives to act. Legislation works. Legislation is absolutely required to hold the tech companies accountable, and I welcome the recent EU legislation.

[Translation]

    Ms. Ramalingam, in your opening remarks, you mentioned the “incel” movement, meaning involuntary celibate. As the Bloc Québécois critic for the status of women, I am very concerned about the radicalization of this movement, particularly as it pertains to women.
    I'd like to hear a little bit more about what the study of this movement can contribute to the committee's deliberations on the study of online radicalization.

[English]

     Thank you very much for the question.
    We really admire the Canadian government's forward planning around this threat. This is an emerging threat that not only the Canadian government but also global governments need to be concerned about.
    Some of our main findings around the Canadian incel movement I mentioned in my briefing, but I want to talk a bit about prevention here. Some of the main findings we have discovered in the early stages of that work are that incel communities are open to mental health interventions and behavioural health interventions. This is actually no different from other forms of violent extremism—really across the spectrum. Whether we're talking about al Qaeda and Daesh inspired violent extremism or whether we're talking about the far right or the far left, we have consistently found, across the spectrum, that these audiences are open to behavioural health interventions.
     With the violent incel community, in part because we found high levels of discussions around their mental health and well-being already on platforms, there is an opening for us here to use mental health interventions as a way of starting a conversation with people who are at risk of violence.
    We would really encourage the Canadian government to invest heavily, as I mentioned, in behavioural health models, in building on the existing prevention and social service provision organizations across the country, and also equipping them to be able to handle cases coming from this violent misogynistic movement as well.

[Translation]

    Thank you very much, Ms. Ramalingam.
    In your remarks, you also talked about a gap in digital security and the importance of filling it. You touched on this in the answer you just gave us.
    What more can the government do to fill that gap?

[English]

    Thank you for your question.
    I think Canada is very well placed, actually, to take the long-standing programs that the Canadian government has been investing in for the last 10 years and start to build their digital capacity to deliver their work online.
    I mentioned some of our findings from a study that we ran five years ago, which was looking at what was then the current level of digital capacity among Canadian prevention practitioners, and it was very low. We need to work to improve that, so I would suggest that the Canadian government work to deliver training and capacity building to organizations that need to start using social media to signpost their services online.
     I would also suggest that we start to look at large-scale programming across the entire country, and not just focus on the few territories and provinces that have been heavily invested in and already have these programs on the ground but also really start to look at parts of the country that don't have these programs—in particular, Manitoba, Saskatchewan, the Atlantic provinces and the territories. We need to build up specialist teams that can cater to audiences that are at risk in those regions and start to bring services for those audiences online.

  (1130)  

[Translation]

    Thank you very much.
     It is therefore necessary to ensure better distribution of resources across the entire country, because there are still gaps to be filled.

[English]

    Yes, that's correct.

[Translation]

    Thank you very much.

[English]

    Thank you.
    I now invite Mr. MacGregor to take us to the end of this round of questioning with his six-minute block.
    The floor is yours.
    Ms. Ramalingam, I'd like to start with you. Thank you for joining our committee today.
    You were talking about the focus on al Qaeda and Daesh. I'm going to assume that is because over the last couple of decades with those two groups and their perverse interpretation of Islam and their barbaric ways of enforcing that interpretation, there were very clear and worrying growth trends in both of the ideologies. Is that right? I'm assuming that's why we had the focus on them.
    We set out.... The evidence that led to that first project being focused on al Qaeda and Daesh inspired terrorism and far-right terrorism was largely because it came off the back of several worrying events in Canada and globally that had been inspired by Daesh and al Qaeda.
    When we first ran that version of Canadian redirect, we found that the vast majority of searches for extremist content in Canada were for far-right extremist content as opposed to al Qaeda and Daesh related content, but that's not the only—
    Thank you.
     I'm sorry to interrupt, but my time is limited.
    Sure.
    What I guess I'm trying to get at is that there is a reason that far-right extremism and violence is a subject of focus right now. It is manifesting itself physically around the world in many violent acts. Can you expand on that, please?
     Yes, absolutely.
    My career for the last 10 years has been focused on working with governments to reprioritize their prevention funding specifically to take into account the rising threat coming from the far right. This is not a problem that affects only Canada. We only need to look south of the border to the United States to see the clearest-cut evidence of this. In the last several years, the level of attacks coming from violent far-right actors has increased substantially in the United States, as well as across the globe. We would strongly recommend that the Canadian government, as well as global governments, invest in prevention proportionately based on what the data tells us around the growth of far-right terrorism.
    Thank you for that.
    With the January 6 Capitol attack specifically, there was evidence that U.S. federal law enforcement and intelligence agencies knew about the potential violence as early as November 2020. Here in Canada, before the illegal occupation of our capital city and the many examples of violence that came from that, reported by our police agencies, there was evidence that the occupation was coming in early January. We need to learn lessons from our past so we don't repeat the same mistakes in the future.
    Do you have any specific recommendations using those two specific examples of what we really need to be on the lookout for before this manifests itself in a very violent and physical way?
    Yes, absolutely.
    We are in a moment of prolonged crisis. Domestic extremist movements, IMVE movements across the ideological spectrum, thrive on moments of crisis, and they basically use these moments to turn anxiety and fear in society into an opportunity for them to grow. That is what we saw on January 6 in America. We saw extremists grasping onto the insecurity and anxiety following the U.S. presidential election, and that's what we saw with the convoys in Canada. We saw extremist groups taking advantage of social polarization and using that moment to manipulate and to grow in Canada.
    We need to be a bit more front-footed and looking ahead at the crises on the horizon. We need to ensure that our prevention programming is equipped to pre-empt those crises so we're not just reactive and dealing with violence after the fact, but we're pre-emptively going out to individuals who may be at risk in our community and working with them to ensure they know violence isn't the way.

  (1135)  

    Ms. Ramalingam, in the United States, of course, there is a very real attempt at revisionist history of what happened on January 6. I see the same narratives playing out here in Canada, a different storyline compared to what actually happened. How, in your opinion, do we best fight these revisionist histories?
    This is a moment where we're seeing a very worrying blending and metastasizing of disinformation narratives and violent extremist narratives. Some of these narratives have been pushed heavily by violent extremist groups over the last several years, and we're now starting to see the mass movement of disinformation and conspiracy theory narratives.
    To get in front of that, we need to do work not only with the people who are pushing these narratives, but with the wider community that is encountering them online. We need to build their digital literacy and their ability to critically consume media when they come across it online. For that, I would recommend that the Canadian government be thinking about much larger-scale programs to build critical media consumption skills across the entire population.
    Thank you.
    Very quickly, my last minute goes to Mr. Hadley. You said that designation is a critical part of removal; however, what do we do about followers, like former members of Proud Boys who are still using platforms to share content? It may not be “terrorist”, but it still qualifies as violent extremist messaging. I'm worried that we're playing this game of whack-a-mole. How do we effectively deal with that and go beyond relying on just designating groups?
    Certainly, designation is a blunt instrument. What we would recommend is the reform of the designation process. We recognize that legal processes do take a while and require resourcing; however, in many jurisdictions, there is—
    I'm sorry. You have just 10 seconds, please.
    —very limited focus on the designation processes, so we would recommend revisiting those processes so that can be done more quickly and more accurately so it is appropriate for the Internet era.
    Thank you very much.
    We move into our second round of questions. We might be able to get almost all of it in. We'll see how disciplined everybody can be.
    Mr. Shipley, I will start with you with a five-minute slot.
    Go ahead whenever you're ready, sir.
    Thank you, Mr. Chair. We'll try to be disciplined.
    I'd first like to start off with Ms. Ramalingam. I was looking at a brief description of what your organization focuses on, and the analysis of gender-based violence caught my eye. We've heard from other witnesses in this committee that genders participate differently with extremist groups. Could you expand on whether this is true? Does it depend on what type of extremist group we're talking about?
     Thank you for your question.
    Whenever I talk about gender, I think it's really important for us not to go in with assumptions. I tend to hear, and I've often heard across the policy spectrum internationally, this notion that only men are really getting involved here and not women. I do want to say here that we have evidence to show the contrary. Globally, in fact, across the United States, Canada, the U.K., Australia and New Zealand, we tend to find that 25% of the audience engaging with right-wing extremist content is actually women, people who self-identify as being women. That's not to diminish the fact that we do tend to find that on average 75% of the folks who are engaging with this content online are men.
    In addition to that, we need to recognize the real intersections between the misogynistic violent movements—I mentioned violent incels—and far-right extremism communities. We've also seen violent misogyny intersect with other forms of extremism, including al Qaeda and Daesh inspired extremism and across the ideological spectrum.
    I would encourage us to really look at the data here as we're designing prevention mechanisms but to recognize the gender-specific interventions that are required.
    Thank you for that.
    I'd like to go back to your opening remarks. You're going to have to excuse me for these comments, because I am really out of my element. I don't spend a lot of time on the Internet. I don't search a lot of things, so I need some clarification.
    You talked about 170,000 searches for IMVE. Can you explain what that means, please? More specifically, what could some of the searches perhaps entail?
    Sure. What we mean there is individuals who are searching for terrorist content, in some cases. They're searching for terrorist manifestos or propaganda put out by white supremacist groups. They're searching for information about how to join the Base or how to join Atomwaffen. These are people who are indicating intent in some form.
    This does not include people who might have just read about something in the news and are searching for information generally on Atomwaffen or the Base. They need to be actually indicating through their search behaviours that they're taking an active interest and are possibly interested in consuming, because they would like to join or get involved. Those are the sorts of searches that would be included here.
    I hope that answers your question, sir.

  (1140)  

    It does a little bit. Quite frankly, it really raised a flag with me, because over the last few months I've been searching a little bit, obviously, to do research on this. Exactly where do you take into account that someone is just doing research and not actively wanting to join or pursue?
    In part because we are not accessing or engaging with any personally identifiable data when we run these kinds of campaigns, we can't say for certain that the person we're offering a safer alternative to is a researcher or someone who is at risk, but because we are not actually moderating their searches and we're not seeking to remove anything from the Internet, we are simply ensuring that any time someone searches for this content, there is a safer alternative available to them. They're given the option to consume non-terrorist content.
    We're willing to take the risk that some of the individuals we engage with may actually just be researchers. We'll offer them the safer alternative as well.
    Again—I'm sorry I have to belabour this a little bit—someone is doing these searches, you're monitoring these searches and then you're reaching out to them to try to assist them with help. Is that correct?
    When we're running advertising campaigns on search, we're using the same commercial methods that any big brand uses to ensure that their content comes up first—for example, when you're looking for information on how to buy a pair of shoes. If you're looking in Canada for information on how to join Atomwaffen, we would ensure, through advertising, that the very first option you see, which is labelled as an advertisement, is a piece of safer content than Atomwaffen content that might otherwise surface through the search algorithms.
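    The redirect approach described here can be sketched, very loosely, as follows: if a search query matches a known indicator phrase, a clearly labelled safer alternative is surfaced ahead of the organic results, and nothing is removed or blocked. The indicator phrase and support URL below are placeholders; real campaigns run through commercial advertising platforms with carefully curated term lists.

```python
# Minimal sketch of the "redirect" pattern: risky queries trigger a labelled
# safer-alternative result. Indicator phrases and resources are placeholders.
SAFER_ALTERNATIVES = {
    "how to join <extremist group>": {
        "label": "Ad",
        "title": "Talk to someone first",
        "url": "https://example.org/local-support",   # hypothetical support service
    },
}

def annotate_results(query: str, organic_results: list) -> list:
    """Return results with a labelled safer alternative prepended on a match."""
    q = query.strip().lower()
    for phrase, ad in SAFER_ALTERNATIVES.items():
        if phrase in q:
            banner = f"[{ad['label']}] {ad['title']} - {ad['url']}"
            return [banner] + organic_results   # nothing is removed or blocked
    return organic_results

print(annotate_results("How to join <extremist group> near me", ["result 1", "result 2"]))
```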
    Okay. Thank you.
    Mr. Hadley, you too had something interesting in your opening remarks. You mentioned that you monitor over 100 platforms. I have to be honest. I use a couple. I didn't know there was anywhere near that many platforms.
    You mentioned that you are monitoring over 200 terror websites. Why are these websites just not being shut down?
    That is an excellent question, one that we ask ourselves on a daily basis.
     In terms of these 100 platforms, the point to stress is that many of them are very small indeed, the sorts of services that can be created by someone in their own room. Terrorist-operated websites are a significant issue. They remain online for many years, in many cases.
    Thank you very much.
    Ms. Damoff, I will now turn the microphone over to you for five minutes, whenever you're ready.
    Thank you to both of our witnesses for your testimony today.
    My first question is for Ms. Ramalingam from Moonshot.
    We've been trying to get you here for quite some time. I want to thank you for the work you're doing and for being here today.
    Since 2014, CSIS has identified 10 plots—seven attacks and three disrupted plots—that killed 26 people and wounded 40 on Canadian soil. Four of these were incel-related; all of them involved far-right or incel attacks.
     When NSICOP tabled their report, they mentioned that in the last two years, “CSIS has uncovered extensive ideologically motivated violent extremism...(notably right-wing extremist groups)...through online activity and physical attacks. The sizable increase in this activity throughout 2020 suggests [that] the terrorist threat landscape is shifting. The primary physical threat to Canada remains low-sophistication attacks on unsecured public spaces.”
    Given what independent agencies like CSIS are reporting, does it not make sense that the Government of Canada would be funding your research on those threats?
    Thank you very much for the question. I'm very happy to be here.
    We believe the Government of Canada should be not only funding research on these threats but also working to build practitioners' capabilities across Canada to intervene across these ideological spectrums. While I mentioned that the opportunity to intervene is no different with incel communities from what it would be with someone on the far right, or with al Qaeda-inspired or Daesh-inspired terrorism, there are some unique requirements of mental health practitioners and counsellors who are going to be having conversations with someone coming from a violent misogynistic background.
    There is quite a lot of work to be done to equip practitioners in Canada with skills and to build their confidence to deliver interventions across this threat landscape. We would welcome the Canadian government's investment in both research and prevention on these emerging ideologies of concern.

  (1145)  

    Thank you for that response.
    In spite of what Mr. Lloyd is tweeting about Government of Canada investments, it seems that we're investing where the threat actually exists for Canadians.
    In the work that you did with Norway—you mentioned it, and Madame Larouche also asked you about it—were there any recommendations you made, given that investigation, that you think the Government of Canada should be implementing?
    At the time, some of my main recommendations were based on the reality that far-right extremism so often falls in a policy gap between the community safety initiatives and counterterrorism. Counterterrorism practitioners and the counterterrorism community across Canada needed to be equipped at the time with the skills to engage with far-right terrorism. I think that has dramatically improved in the last 10 years, both to Canada's credit as well as that of the international government community.
     That said, I think where the threat has evolved since 2011 is in the online space. There is this worrying risk that members of the wider public are coming into contact with this content that was once relegated to very niche spaces online, or even to niche communities off-line.
     My major concern is that the content that's being pushed by violent far-right groups and also violent incel groups is suddenly emerging into mainstream communities online. This is where we need to invest not only in prevention but in broader programs, to build, as I mentioned, critical media consumption skills amongst the wider public to prepare them for the possibility that they will encounter this.
    I don't have a lot of time left, but you mentioned in your testimony safer content and directing people to safer content. Is there anything the government can do to assist with that, or is that solely within the purview of the companies themselves?
    The government can invest in Canadian practitioners taking their skills from an off-line context and creating digital content that will be those safer alternatives. When Moonshot delivers this work, we are not creating those safer alternatives. We actually want to be directing at-risk audience members towards Canadian practitioner content.
    That's where I would encourage the Canadian government to invest. Help Canadian practitioners create better content that can serve as that compelling counter-narrative and compelling counter-offer to terrorist content online.
     Thank you very much.
    I now move to Ms. Larouche for two and a half minutes.
    Go ahead whenever you're ready.

[Translation]

    Thank you, Mr. Chair.
    For my second round of questions, I would like to return to some of what Mr. Hadley said.
    Mr. Hadley, it has been shown that smaller and medium-sized companies face a bigger challenge in terms of being well protected against online risks and threats. You've explained why very well, but I'd like to know a little bit more about how exploitation of their sites by terrorists affects small technology companies.
    Can you give any other examples to help us to better understand this reality?

[English]

    Many thanks.
    Recognizing the short time available, there's one particular Canadian messaging app, which I won't name, that became totally inundated by terrorist activity. We estimate that at one point, a number of years ago, 80% of its user base was associated with ISIS. As a result, that platform was simply unable to operate in any functional way because it had been taken over by terrorist activity.
    Increasingly, we see that terrorist-operated websites are a big issue. We're talking about hundreds of terrorist-operated websites, the majority of which are owned or operated, based on our assessment, by extreme far-right actors. The reason these stay online for so long is that the legal infrastructure to guide governments in helping them understand how to go about taking down these websites is very unclear.
    The private sector does co-operate to some extent on terrorist-operated websites. I believe that only recently, a website that was highly likely owned or operated by American Futurist, which is an organization closely linked to the designated group NSO and to James Mason, was removed. There are some successful efforts to have terrorist-operated websites removed. However, a lot more needs to be done. It's not just about smaller platforms but also terrorist-operated websites.
    Thank you.

  (1150)  

[Translation]

    In response to a question, you talked about algorithms, which worsen the problem for small and medium-sized companies. What impact can algorithms have?

[English]

    Algorithms are typically not a big part of the terrorist use of smaller platforms. The use case for smaller platforms is typically really simple and straightforward. It tends to be copying links or copying material, or where the extreme far right is concerned, having an alternate site to upload a video or audio.
    Algorithms certainly are of concern. However, where small platforms are concerned, they are a relatively insignificant factor.
    Mr. MacGregor, you have two and a half minutes, sir, whenever you're ready.
    Thank you, Mr. Chair.
    Ms. Ramalingam, I'd like to continue with you.
    I really appreciated your recommendations for our committee about strengthening mental health and community intervention and making sure that we adapt those services for online use. Our committee recently completed a study into gun smuggling and gang warfare. We heard a lot of testimony about the effectiveness of community-based programs to help vulnerable populations avoid a life with gangs. I think we can use the same model on this.
    I want to ask you specifically about the subject of deplatforming.
    We had Mr. Imran Ahmed before our committee last week. He is with the Center for Countering Digital Hate. I'll read a quote from his testimony. He said, “Deplatforming these people and putting them into their own little hole, a little hole of anti-Semites, anti-vaxxers and general lunatics, is a good thing, because [actually] you limit their capacity to infect other people. Also, for trends such as the convergence and hybridization of ideologies”.
    You're proposing a set of recommendations where it's a positive intervention. Do you have any comments on the concept of deplatforming to try to, I guess, cauterize the wound and prevent some of these crazy ideologies and violent extremism from spreading to vulnerable groups?
    Thank you for your question, sir.
    Deplatforming works. There's plenty of evidence to suggest that deplatforming does work in limiting the spread of terrorist content on platforms, but it's not enough on its own. In order to effectively prevent terrorist abuse of online platforms, we need to accept two things. First, there will always be some content that falls in the grey zone and will not be liable for removal and these groups walk the line very carefully.
    Second, there will always be some spaces on tech platforms that are not liable for moderation. I've mentioned “search” a few times now—that's a great example here. Search engines don't prevent you from entering anything you'd like into the search engine box. That search engine box is a great moment to intervene with someone who is searching actively for terrorist content.
    For these kinds of cases, in addition to moderation efforts, we need to be thinking about how we deliver safer alternatives to users who might be at risk of getting involved in violence. You can delete the user and you can delete the account or the video, but that person still exists in the community around us.
    Thank you.
     Thank you very much.
    Mr. Lloyd, I can offer you two minutes. Take full advantage of them. Go ahead.
    Thanks, Mr. Chair.
    For Moonshot, you were talking about the search engine results you track, how to join so-and-so far-right organizations. Do you track any search engine results that you would classify as on the left-wing side of the political spectrum, and can you give examples?
    We do, sir.
    In our international work, we do track search terms that are affiliated with anti-government left-wing extremist movements specifically inciting violence against the government.
    Do you do it in Canada?
    We have not done that work in Canada, not at this stage.
    When you were given funding from the Canadian government and a mandate to look into this, were you asked to look into left-wing extremism or was it specifically mandated to look into far-right wing extremism?
    The funding that we received at the time for Canada redirect was specifically to cover far-right extremism and al Qaeda and Daesh inspired terrorism.
    There was a previous project that we delivered back in 2017 that was about the digital capacity of prevention practitioners across the country, and that was very much cross-ideological in nature—
    I only have a minute left. I'm sorry.
    When you made the claim that the vast majority of the search engine results are for far-right groups, you were talking about that in the context of Canada. Is that correct?

  (1155)  

    Yes, that's correct.
    If you're not measuring left-wing extremism, how can you make the claim that the majority of extremist search results are right wing, if you don't have a mandate and haven't been looking into any left-wing extremism search results?
    You're absolutely right, sir. I would encourage research on far-left search activity or any far-left activity in Canada. We would go where the data leads us. That particular project was an investigation into far-right, al Qaeda and Daesh inspired terrorism. Within that context, the vast majority came up far right.
    Thank you.
    I'm not suggesting that it isn't important research, but I think we can get a recommendation to the committee that we need an objective look at extremism across the political spectrum to be funded by the Canadian government.
    Would you agree with that?
    As I mentioned earlier, sir, I believe every prevention capability should be cross-ideological in nature, so yes, I would welcome that study.
    Thank you very much.
    Ms. Damoff, you will take us to the end of this panel. You have two minutes whenever you're ready.
    Thank you.
    For Moonshot again, last week we heard from Tony McAleer who, as you know, is the co-founder of Life After Hate. I understand he is one of your board members.
    Is that correct, or are you on his board?
     I am personally on the board of Life After Hate. That's correct.
    He talked a lot about how we remove people from extremism, and he mentioned that someone's ideology is intertwined with their identity. I'm just wondering if there's any influence you've seen from your work with Mr. McAleer that you can recommend to us moving forward.
    We tend to find, and some of these findings have been in partnership with Life After Hate over the years, that, while ideology is important, usually when you're delivering interventions, it's most important to get at the underlying drivers, and if you can get at the underlying drivers, the ideology will fall away.
    In our digital work when we're engaging with at-risk audiences online, we tend to find the most effective way to reach out to them online is not through countering the ideology or telling them that they're wrong. It's talking about their emotional state. We found that the most effective ad we ran in the United States last year with white supremacist audiences was one that began with, “Anger and grief can be isolating.”
    I would very much support Tony's statements on this that ideology is important, but ideology will fall away if we can get at the underlying drivers here.
    Chair, I only have 15 seconds left, so I'll give them back to you.
    That gives me a chance to thank them even more robustly.
    To the witnesses, thank you so much for the insight. This is fascinating, timely and so important to the country. On behalf of all members of our committee and all parliamentarians, thank you for sharing this last hour with us. It's been very valuable.
    Colleagues, this is a reminder that the next meeting is the final meeting of the IMVE study. Departmental officials will appear in the first hour, and only two witnesses will appear in the second hour to allow time for instructions to the analysts on drafting the IMVE report. This portion of the meeting will take place in camera.
    Thank you. We will now suspend.
    The clerk will do his magic and line up the witnesses for the next panel. I don't think that's going to take anything more than a minute or two. We're almost there.

  (1155)  


  (1200)  

     I now call this meeting back to order.
    With us on this second round, as an individual, we have Navaid Aziz, imam. From the Canadian Race Relations Foundation, we have Mohammed Hashim, executive director; and from MediaSmarts, Dr. Kara Brisson-Boivin, director of research.
    Each of our guests will have up to five minutes to make an opening comment. I will start with Mr. Navaid Aziz.
    Please, sir, you have five minutes for an opening comment.
    Thank you so much, honourable Chair and members of the committee. I appreciate this opportunity to share with you today.
    As mentioned, my name is Navaid Aziz, and I am a classically trained Muslim scholar. I have served as an imam for over 10 years in Calgary. From 2012 to 2015, we saw a surge of young Muslims travel overseas to join extremist groups and factions, and it was at this time that I began my own personal study of violent extremism to develop as much expertise as I possibly could.
    I have served as an expert witness with the Supreme Court of British Columbia in a terrorism-related case. I have mentored and helped in the rehabilitation of several individuals charged with terrorism offences, and I've published two papers, one on the reintegration and rehabilitation of Canadian and foreign fighters and a second on a brief guide to right-wing extremism in Canada.
    I'm hoping that my perspective today will be unique in the sense that it will come primarily from a community point of view.
    Starting with 2012 to 2015, an immense amount of pressure was applied to the Muslim community as to why these problems were happening in the Muslim community, why Muslims were not better integrated, and what the Muslim community was doing to solve this problem. A community that is not homogeneous or monolithic was asked to deal with an issue that it was not responsible for. It was not given any support beyond being told what to do, and it had no prior experience in dealing with such issues.
    Law enforcement and policy-makers had securitized the relationship with the Muslim community. They infiltrated mosques with informants, which created a sense of distrust. Relationships were built on the basis of collecting information for prosecution, and no support was provided when needed. This also created a perception of good Muslims and bad Muslims. Those who co-operated were good, and those who didn't were bad. The average community member was not afforded any neutrality.
    Multiple experts have also pointed out over the years that there has been a disproportionate number of terrorism-related prosecutions against the Muslim community within Canada.
     I share this introduction, my dear committee, to point out that, in what we have seen from 2016 onwards with the rise of populism and right-wing extremism, the Muslim community has been a primary target. In 2017, we witnessed the Quebec mosque massacre, and in 2021 the Afzaal family in London, Ontario, was murdered in cold blood. May we never forget these people.
    We did not see the same questions being posed to other communities. Why was this happening? What are they doing to solve their own problems?
    We did not see the securitization of relationships in the sense of informants being put forward in very high numbers, nor did we see a dichotomy created in which people were labelled as good people or bad people. This is not to say that this is the response that should be expected, but it is to point out that we have some serious problems at an institutional level that need to be addressed.
    What am I proposing and what do we need to look at? With regard to my proposal, I suggest that when we look at funding, we look at three approaches.
    Number one, with regard to the security infrastructure proposal, we need to understand that not all minority groups will be able to access this grant or bursary, because they have very little history of actually applying for such grants and the support to do so is not provided. It is very difficult for them.
    Number two, with regard to sustainable funding for CVE initiatives across Canada, particularly in the province of Alberta, the Organization for the Prevention of Violence saw an influx in numbers, particularly in March and April of 2020, after January 6, and after the freedom convoy. Oftentimes we may think that such events only increase right-wing radicalization, but they also created an opportunity for introspection, where people sought support for themselves and their family members when they saw them going down a dangerous path. These programs do not have sustainable funding but are dependent on grants and bursaries as well.
    My last proposal for funding is with regard to research that looks deeply into the environments that create such forms of violent extremism. This needs to be the primary focus of the research.
    My last recommendation is that, when we look at relationships, we need a community-focused addition so that, as we pursue equity, diversity and inclusion, we increase not only physical representation but also representation of thoughts, ideas and sources in our infrastructures and on our boards.

  (1205)  

     That is what I wanted to share with you in my five minutes. Thank you so much.
    That was exactly five minutes. Thank you very much.
    Mr. Hashim, I now turn to you and invite you to make an opening statement of up to five minutes, sir.
    Go ahead whenever you're ready.
    Thank you for having me here today. I want to acknowledge that I'm speaking to you from the traditional territories of the Mississaugas of the Credit First Nation in Mississauga, Ontario.
    My name is Mohammed Hashim. I'm the executive director of the Canadian Race Relations Foundation. The CRRF was born out of an apology to Japanese Canadians who were wrongfully imprisoned in internment camps during World War II. Part of their redress agreement involved the creation of the CRRF as an independent federal Crown corporation in 1996, which now lives within the Department of Canadian Heritage.
    Our organization does research and community engagement, hosts policy discussions, provides funding to community groups and is currently supporting the creation of Canada's renewed anti-racism strategy, new anti-hate strategy and the strategy on combatting online harms with the government.
    When we think about the ecosystem around IMVE, what ends in violence isn't always the full story. There was a journey that preceded the violence. We see many actors over time who start by being involved in hate incidents, then move up into hate speech, sometimes go further and commit hate crimes, and even commit violence as part of that journey.
    We are not experts on IMVE, but we think the story starts far before the violence, specifically with hate, and that is where our work is primarily focused. It's work we know we can't do alone, and that's why we, along with the RCMP, are co-chairing a national task force on hate crimes. We are bringing together some of the brightest minds across law enforcement to improve training, increase public awareness and build standards for the police and community.
    Hate is a growing concern in Canada and globally, and its targets are always changing. Racialized communities have been ringing the alarm bells for years. The night the Quebec City massacre happened, I was speaking to a friend who told me she was not surprised by what had happened because of the ongoing hate that had been targeted at Canadian Muslims and other minorities for years in this country.
    The anticipation of violence towards that community was constant and is being felt by many today. There have been consistent failures on the part of institutions to take these harms seriously, which brings us to this moment. While it is crucial that we are here, it is equally important to note that this discussion is long overdue. When we look at hate and the administration of justice, it is hard to have faith that the system will right the wrongs.
    For far too long, online platforms have provided safe environments in which hateful rhetoric has been able to spread without recourse. Those spewing such hate feel powerful, above the law or consequences, and those targeted are left feeling helpless and alone. According to the StatsCan survey on victimization, there were over 200,000 hate incidents, almost half of them of a violent nature. Hate incidents reported to the police over the past few years represent only a fraction—probably about 1%—of that number. There is a major gap between what people are saying they're experiencing and what is actually coming to the justice system's attention. There are real impacts on individuals and communities when there is so little faith in the system, even when the system actually works.
    There was a recent case presided over by Judge Cidalia Faria. In this case, a woman stepped in to intervene in a situation in which another woman and a child were being mistreated by a man. The man then turned on the intervenor, ripped off her hijab and assaulted her by hitting her in the face while yelling hateful rhetoric. The victim, who was known as a strong community volunteer, said her voice was taken away from her and that the man told her there would be horrible consequences if she spoke up. She is a very outspoken person, and she doesn't feel as though she has been herself since then.
    I share this with you because I think we failed the victim. I'm not going to question the judge's decision to let the guilty party off with a suspended sentence because of mitigating factors, but I do know that the victim in this case did not receive adequate support to restore her faith in this community.
    She isn't alone. Victims of hate are often let down in this country, and, by extension, so are their communities. Canada needs a robust system to support victims of hate. We need this system not only to help individuals recover but also to ensure that communities feel supported through the process—from reporting a hate crime to getting support through a trial and afterwards to finding help to get back on their feet. We know that hate crimes are message crimes. It is time we sent a counter-message to the victims that they are seen and heard and will be supported.
    I focused my remarks on victims today because far too often we look at hate crimes and IMVE with a focus solely on the perpetrator, while mostly ignoring victims. We must address prevention, investigation and prosecution as we are doing through our work on the national task force on hate crimes. We must realize what is at stake if we don't address the reverberating harms left on victims. When we leave victims, either individuals or whole communities, without faith that their concerns are being heard, we see people lose faith in democratic systems.

  (1210)  

     You have 10 seconds, please.
     If we want people to feel like they belong to this country, that their safety and well-being matter, that when they are the victims of hate and IMVE they won't be left to fend for themselves—
    Thank you very much.
    I would now invite Dr. Brisson-Boivin to take the floor for an opening comment of up to five minutes.
     Good afternoon, committee members, and thank you for this opportunity to speak with all of you.
     MediaSmarts has been working in the field of online hate for nearly two decades. Our research has consistently found that Canadian youth are frequently exposed to racist and sexist content online and that they feel it is important to do something about it, but also that they are not prepared to critically engage with hate content or to push back when they encounter it.
    Our research with youth examined their attitudes and experiences with hate online—specifically, why they do or don’t intervene. We found that what’s more common than overt hate are cultures of hatred, communities in which racism, misogyny and other forms of prejudice are normalized. When hate online goes unchallenged, users may believe that intervention is overreaction. A community's norms are largely set by the most committed 10% of members.
    When cultures of hatred are masked as consensus and the behaviour is not seen as harmful, the majority of witnesses may not believe intervention is worth the risk of social exclusion. Youth are particularly vulnerable because they are worried about disrupting social harmony, losing their social capital or status with their peers and drawing unwanted attention to themselves.
    Hate groups take advantage of this as well as the digital architecture of online spaces, working to make hate appear more mainstream and acceptable to expand their pools of potential recruits and create an online environment hostile to their targets. Our most recent study with young Canadians shows that 2SLGBTQ+ youth are almost twice as likely to report having been bullied and to have seen racist and sexist content online.
     Our study on algorithmic awareness highlights how design, defaults and artificial intelligence are shaping our online spaces. Recommendation algorithms can diminish our capacity to verify whether or not something is true online, as users may perceive content that is delivered algorithmically and curated for them as more trustworthy.
    Online hate has the power to change what we think we know about scientific and historical facts, social norms and even our shared reality. As youth overwhelmingly turn to the Internet as a source of information, they run the risk of being misled by hate content. If that misinformation is not challenged and users do not have the critical thinking skills to challenge it, some youth may come to hold dangerously distorted views.
    Youth need to be supported in developing the skills and knowledge to be able to recognize online hate. This means learning general critical thinking and digital media literacy skills, as well as the techniques and ideologies of hate. In order to talk about controversial topics and have healthy debate, users need to be able to distinguish between arguments based on facts and those that appeal to dehumanization and fear of the other.
    Youth also need clear examples of how they can respond when they encounter hate and prejudice online. Interventions should emphasize that even small efforts to push back against online hate can have profound impacts on motivating others to intervene. They need to feel that their opinions and experiences matter and will be considered by those with decision-making capacity.
    Youth believe platforms and technology companies have a responsibility to set clear rules and community standards to make it easier for users to report hate and then respond to those reports through publicized enforcement metrics. They also feel that policy interventions should give youth and the trusted adults in their lives more opportunities to learn digital media literacy in Canadian classrooms, homes and communities.
    I'll conclude my comments by expanding on that final point.
     The value of an educational approach to online hate cannot be overstated. While governments and online platforms have important roles to play, we cannot legislate, moderate or design our way out of these challenges. We need to ensure that all people in Canada have the tools and critical capacities to safely and positively engage as ethical digital citizens.
    In this way, digital media literacy is a preventative measure and a harm reduction approach to ideologically motivated violent extremism. This approach does not let either platforms or regulators off the hook by laying the burden of the challenge on the shoulders of individual users. Rather, what’s needed is a whole-of-society approach that holds platforms and governments accountable, both in their role in combatting online harm as well as in supporting digital media literacy.
    MediaSmarts has been advocating for a national digital media literacy strategy for over 15 years, a recommendation consistently endorsed by key stakeholders and community partners and reconfirmed in our report on building a national “Digital Media Literacy Strategy for Canada”, released last month. This strategy would provide experts, advocates and service providers with a unified but flexible approach for preventing and responding to online harm—

  (1215)  

     You have 10 seconds, please.
    —through education and critical skills development, which is at the heart of active and engaged digital citizenship.
    Thank you.
    Thank you very much.
    We will now move to the opening round of questions.
    Leading us off will be Ms. Dancho, for six minutes.
    Thank you, Mr. Chair.
    Thank you to the witnesses for being here today. My questions to start off are for Dr. Kara Brisson-Boivin.
    Thank you very much for your testimony. I found it very interesting. Why do you focus specifically on young adults or youth? Is there a difference between how youth interpret online information in their critical thinking capacity? Can you expand on why youth is your focus?
    Thank you very much for the question.
     MediaSmarts is Canada's centre for digital media literacy. Part of our mandate as an organization and a national non-profit has been to focus on youth. A lot of the work we do is in the K-to-12 sector, although in the last five years we have engaged in much broader public service campaigns for all Canadians.
    As for online hate, in the work we have done, we have focused on the young Canadian experience. That has been part of our mandate. We believe that is a unique experience that deserves to be studied in its own right. The research we have done does suggest that interventions need to be built and designed for young people in particular, because of some of the things I mentioned in my opening remarks, particularly the importance they give to peer supports and their relationships with other young people.
    Would you say that the school environment ups the need for these intervention tools and critical thinking capacity? Is that what you mean?
    Yes, I would say so. The work we were doing was in particular focused on youth aged roughly 12 to 17, which is a critical moment in a young person's life for a variety of different reasons. Often young people are shifting between, in some cases, middle and high school. Also, young people are at a critical point at which they are exploring different identity plays and looking for community in all different sorts of ways, so those critical thinking capacities I was mentioning are absolutely prevalent.

  (1220)  

    Thank you.
    Of course, we see suicides across all ages, but it's not uncommon to hear about suicides among young people, particularly in high school and middle school, as a result of bullying and that peer pressure. Those reasons for suicide are not as common, it would seem, among adults, for example.
    Can you comment on that, the impact of the online universe on the mental health of young people and how your services support that?
    Thank you.
    I would say that neither I nor MediaSmarts is an expert in young people's mental health and well-being. We focus particularly on digital media literacy, although that includes digital well-being and our online relationships. For example, the empathy we need to keep at the forefront of those online relationships is something we address. We work very closely with the K-to-12 education sector, as I mentioned, as well as other community organizations that are working on the front lines of digital media literacy service delivery across the country.
    Thank you very much.
    You alluded a little bit in your opening remarks to content that “others” people.
    Can you comment a little bit more on what you meant in terms of exclusion and how that influences young people being drawn perhaps to extremism or the like?
    Yes, thank you.
    We know that one of the benefits of the online community is also one of our biggest challenges, and that is anonymity.
    I can use the example that we know from our work in the 2SLGBTQ+ community. In that context, young people have told us that the online environment, and in particular being able to remain anonymous in spaces, is a huge benefit as they can, again, engage in identity play and find community in ways that they may not be comfortable doing in a face-to-face context.
    However, we know that it also poses a great challenge, because for many perpetrators, from bullies all the way up to hate groups, anonymity is a huge tool that they can use to their advantage, both to test the waters in various communities that young people are engaging in—for example, gaming communities—and to draw potential new recruits into their movements.
     For young people, in particular—and I think this is probably true across the spectrum—when you're teaching them how to avoid extremism online, is there any consideration for those who are the loners in school or who don't feel included? I don't know the polite way to say that.
     Do you give any specific attention to how to build confidence in digital literacy with those who aren't fitting in and feel excluded?
    The resources we create are not necessarily designed for any particular demographic. For the most part, they are for a young person in the K-to-12 sector, again with the exceptions of some of the broader campaigns we have done for all Canadians.
     However, I would say that one of the key lessons, or moral—if you want to call it that—or ethical objectives of these lessons is to talk about inclusion, healthy relationships and digital well-being in the online space with young people as early as possible. That is something we talk about in a broader context, but our content isn't necessarily designed for a specific demographic of users in that regard.
    Thank you very much.
    Thank you very much.
    Mr. Zuberi, it's always good to see you, sir. You have the next six minutes, whenever you're ready to begin.
    Thank you, Mr. Chair.
    Thank you to all the witnesses for being here.
    I'd like to start with Dr. Brisson-Boivin. What you shared with us is really interesting. You said that digital literacy is important and that we need to educate young people on how to distinguish, online, what's sound and what's not sound and how to respond, which is also quite important. You said that small efforts make profound impacts.
    Can you drill down deeper into that and share with us what you have seen and how we can make that happen?
    In the context of online hate with young people, the biggest factors we found for why young people do not intervene are, one, that they struggle to recognize when something is definitively online hate and, two, that they don't know how to respond. This is compounded by what I mentioned with regard to young people being understandably concerned with maintaining social harmony among their peers.
     However, at the same time, we know that the norms or community morals, if you will, within an online community are typically set and driven by the loudest 10%. What we found was that even a very small action within an online community to demonstrate that there wasn't consensus around, let's say, a particular viewpoint was incredibly motivating and encouraged others to respond as well.
     Young people responded to this sort of peer-to-peer.... They had the opportunity to recognize and realize that other young people—or anyone in the community—were responding to the contrary. That pushed the dial within the community and demonstrated how valuable it is to let the community know that this was not the consensus.
     At the same time, I want to mention that we also want to make it clear to young people that we need to set parameters around what kinds of content we should engage with, because suggesting that a particular subject is worthy of debate is something hate groups can utilize to their advantage as well. Part of the resources, tools, lessons and critical thinking capacities we provide is to help young people distinguish fact from fiction, or to distinguish arguments based on fact from those that are attempting to sow doubt and denialism, for example.

  (1225)  

    Thank you for that.
    The point you raised about the quiet bravery, nudging things forward online and giving some other perspective is really powerful.
    I'd like to shift gears for a moment and go to Imam Aziz. I found your work around deradicalization within the mainstream Muslim community really interesting. I want to touch base on some remarks you made in your opening statement around the narratives and framing of this community in particular.
     I want to put forth this question. Do you find that terms like “Islamic terrorism”, “Islamism” and “Islamist” are accurate? That's number one. Number two, do you find that the use of these terms is harmful in seeking our objective as a country to mitigate and reduce extremism or a movement towards it?
     I'd like your thoughts on those terms in particular, please.
     Thank you so much for your question.
    I'll break things down into two separate parts. With regard to the terms used, such as “Islamic terrorism” and “Islamic extremism”, I believe they're very detrimental to the Muslim community and other minority groups in general. The onus is put on the religion itself. The blame is put on the religion itself, but studies have shown that this couldn't be further from the truth. This has been proven in theory and in practice. The vast majority of Muslims are law-abiding citizens and contributing members of their communities and societies. It's the same thing at a theoretical level. If you study Muslim texts and the literature of Muslim scholars, you see that they are always pushing Muslims toward a balanced way of life.
    The challenge here comes from an academic perspective. For the longest period of time, terms like “Islamism”, “Islamist”, “Islamic terrorism” and “jihadism” have been used. They have become mainstream and a part of the vernacular in this field of study. Trying to change the language is a very uphill battle, but I believe it is detrimental and that an effort should be made to come up with more inclusive language that does not blame a religion or a particular community altogether. As we've seen in previous testimonies, there are underlying issues that need to be addressed, and further research needs to be done on more accurate terms to use.
    Thank you.
    I have two minutes left.
    You have 20 seconds left.
    I'll leave it at that and give it back to you, Mr. Chair.
    Thank you.
    You actually have 30 seconds if you want to revisit the possibility.
    Can you comment briefly and very quickly about the off-ramping of those who have gone into extremism?
    With regard to off-ramping, particularly with IMVE, we've seen, again, underlying factors such as security, education and social inclusion. All those things are very important. As soon as we take care of those issues, the ideology naturally fades and disappears. Rather than focusing on the ideology, we focus on the underlying drivers. When we do, the whole-of-society approach that was recommended previously is very effective.

  (1230)  

    Thank you very much.
    I would now invite Ms. Larouche to begin her six minutes of questioning.
    The floor is yours.

[Translation]

    Thank you very much, Mr. Chair.
    I thank all three witnesses for being with us today.
    My first questions are for Mr. Aziz.
    You spoke a great deal about the importance of rebuilding trust to encourage victims to report online crime. We know that it is important to raise awareness throughout the entire system about these issues.
     What do you think the federal government should do? What steps should it take to restore the confidence of communities and people who want to report these crimes to the police?

[English]

    Thank you so much for your question, and thank you to the interpreter for facilitating that.
    Trust needs to stem from a place of non-heightened emotion. Oftentimes, engagement with law enforcement comes at a time of heightened emotions. My approach to this is to recommend that there should be community advisory boards with law enforcement at all times—when emotions are heightened and when they are not—to guide them and facilitate their conversation with communities. That is the first thing to do.
    The second thing is reconciling and apologizing for mistakes that have been made. We have to understand that communities are constituted of human beings with human emotions. If people are hurt, progress cannot be made. Mistakes that have been made need to be recognized, and apologies should be issued for that.
    The third thing is education. It's very easy to say, “This is what you need to do in order to report a hate crime,” but people need to be guided through the actual process. Training sessions for community members and community leaders on how to report hate crimes should be provided.
    The fourth thing is the soft bedside manner that is needed. Oftentimes, people who have gone through a traumatic experience are unable to articulate what they have gone through, or they may forget what actually happened. Police officers need to remember that they are dealing with someone who has just been through a traumatic experience and may not remember all the details right away. Try your utmost not to treat them like the perpetrator; treat them, rather, like the victim. Oftentimes, because people feel as if they are being treated as the perpetrator when they are the victim, they shy away from reporting. The way they're treated goes a very long way.
    Those are some of the recommendations I would make regarding the law enforcement question. Thank you so much.

[Translation]

    My next question is for you again, Mr. Aziz.
    As well as awareness, it is likely that increased surveillance is needed. Beyond these four recommendations, how should federal police services reinforce their surveillance? Do you have a bit more advice on this?

[English]

     Thank you so much.
    With regard to surveillance, there are two points to keep in mind here. Number one is the high cost of surveillance; it is very expensive and very cost-ineffective. We have to look at other avenues in order to get information when it is needed.
    Number two, when a relationship is built on information being shared both ways and support being provided both ways, information that may be imperative for law enforcement will naturally be provided. Communities will recognize that it is in their best interests to provide information to law enforcement and to agencies. It will only serve their interests and their own protection. That information needs to come from a place of safety and from a place of equal platform.
    One of the examples I like to give in my presentations is of a bus being driven. In a pre-criminal space, the community leads the bus and drives the bus. Law enforcement takes the back seat and just supports the Muslim community, or rather communities in general. In a post-criminal space, or when criminality has taken place, then law enforcement leads the way. They drive the bus. The community is there in a supportive role of what is needed.
    That collaborative approach, where everyone is equal and on the same page, is very important, but that can only be done with relationships being built on an equal platform. The key over here is the collection of information and not so much the focus on surveillance itself.

[Translation]

    Thank you very much.
    To conclude this first round, I'll turn to Mr. Hashim.
    You spoke a great deal about the research you've done. I'd like to know what you've learned from that research, especially how Canadians feel about the spread of online hate speech and radicalization. Do you have any data on manifestations of racism connected to extremism?

  (1235)  

[English]

    Thank you very much for the question.
    Yes, we've done a number of surveys in regard to online hate, in particular on whether Canadians are in favour of online hate legislation and on their experiences of facing online hate. Some of the more striking numbers from that research are around who the victims are. According to our research, the primary victims of online hate are women, women of colour and youth between 18 and 30 years old. They experience more hateful content, more misogynistic comments and more racist comments than anybody else across the spectrum.
    There's also a tremendous sense of disappointment in terms of what our communities expect the online experience to look like and what they are experiencing. There's a lack of confidence that a safe space can be provided. However, there is significant support to see greater legislation in this environment, because people want that space to become safer.
    Mr. MacGregor, I will now turn the floor over to you for six minutes of questions.
    Thank you very much, Mr. Chair.
    Thank you to all of our witnesses for helping guide our committee through this study.
    Mr. Aziz, I would like to turn to you for my first question, if I can. I was present as a member of Parliament in the 42nd Parliament, and I remember the furor over the debate involving motion 103, which was using the term “Islamophobia” and calling it out for what it is. It always struck me as very strange that we have a general acceptance of what the term “anti-Semitism” means, but the word “Islamophobia” created just such an uproar and furor.
    I guess what I want to know from you, sir, is what the legacy has been of that very charged debate on Islamophobia for the Muslim community. Where are we at now in the years that have passed since that debate?
    Thank you so much for your question.
    I think it's important to highlight the different perspectives with regard to this debate. One perspective of this debate is that we need to call it out for what it is, which is anti-Muslim hatred. It's not this fear that people have. It's clearly targeted against the Muslim community and it should be called anti-Muslim hate. Another perspective of this is that there is a fear that if we start deeming things to be Islamophobic, then one cannot criticize the religion or criticize the religious texts, which is a right that people have.
    That being said, I believe this debate is ongoing. I don't see a resolution coming any time soon. That is at a theoretical level. On a practical level, what I think needs to be understood is that all citizens and all human beings deserve those equal rights. They deserve the rights and freedoms that everyone has.
    What we label it in particular is not as relevant. What are we doing to keep everyone safe, to keep everyone included and to make sure that everyone has the opportunity and freedom that everyone is afforded? That's what needs to be looked at.
    Unfortunately, I don't have the good news of sharing that there's a resolution to this debate any time soon.
     Thank you. I appreciate your mentioning the term “anti-Muslim hate” as, I think in your words, a preferable term. Am I getting you right on that?
    I am undecided. I think, depending on the context, both terms, Islamophobia and anti-Muslim hatred, may be relevant and pertinent.
    Thank you.
    In your opening remarks, you were talking about the need for sustainable funding, particularly for community-based programs. We have certainly heard testimony in a previous study about the very real value of funding intervention programs for youth who may be susceptible to joining violent criminal gangs and the incredible success they have had. I want to ensure that this committee comes forward with very sincere recommendations that honour what you and others have presented to our committee.
    Are there any specifics that you would like to see our committee mention in a recommendation to the federal government with respect to that sustainable funding part?

  (1240)  

    The Canada Centre for Community Engagement and Prevention of Violence has funded many programs across the country, particularly a few in each province. Number one, there needs to be a deeper study of the results of this funding and of how many people took part in the programs. Based on that, we can decide which programs need to continue and which programs need to be shut down because they are irrelevant or are not providing results for the money being put forth. That is, I would say, the biggest thing that needs to be looked at.
    Number two is in terms of expanding these programs to remote areas. That has always been the challenge. A lot of these individuals who go down this path of violent extremism will live in remote areas, and they do not have access to these programs. How do we make those programs more accessible in these remote areas and not just in the metropolitan cities or in the larger urban areas?
    Those are the two biggest things that I would suggest on that front.
    I appreciate that. Thank you very much for your answers on that.
    I'd like to turn to MediaSmarts and Dr. Brisson-Boivin.
     I have your printout here, the recommendations for platforms. You mentioned that creating and implementing rules that help to set the values of a community can change how people behave, and that if platforms don't set clear rules and standards, the norms of the community will be set by users.
    Throughout this study, we have found that there's a conflict. Of course, social media platforms make a lot of their money through advertising revenue, which is driven by user-generated content, and the more exciting or extreme it is, the more engagement it gets. There's this conflict. Social media companies say that they have clearly written terms and conditions, but that didn't stop people like Pat King from basically using Facebook to livestream on his way to the occupation of Ottawa.
    I don't have a lot of time. I guess my question to you is this: What's the federal government's clear role here in helping accountability and transparency to be set by these companies for those clear rules and standards?
    Thank you very much for the question.
    Yes, I think there is a role for both government and platforms to work in tandem here, for example, to take up some of the recommendations we made. A regulatory or legislative structure needs to be put in place that holds platforms accountable for setting those community standards. I think the other thing that is equally important is some kind of metric whereby we can see that the standards are actually being enforced.
    Thank you very much.
    We now move into our second round of questions. The leadoff questioner will be Mr. Van Popta.
    Sir, you have five minutes. The floor is yours.
    Thank you very much.
    Thank you to all the witnesses for spending time with us here today and for sharing their wisdom and knowledge with us as we seek to develop a report around ideologically motivated violent extremism.
    I'm going to start with Dr. Brisson-Boivin.
    Thank you for the very important work that your organization MediaSmarts is undertaking.
    I'm reading from a publication by your organization called “From Access to Engagement”. There's a great working definition, which I'm going to read into the record. It says, “Digital media literacy is the ability to critically, effectively and responsibly access, use, understand and engage with media of all kinds.”
    To narrow it down a bit, this is a study about the rise of violent extremism. Your work is particularly with young people and to bring MediaSmarts into their lives. Perhaps you could tie those two together: your research and the rise of violent extremism in our communities.
     Thank you very much for that question.
    I would say that the two are related insofar as we see digital media literacy as the crux. It is oftentimes thought of as an afterthought or a response, but we really do see it as a preventative, harm reduction approach for both young people and the trusted adults in their lives.
    The report you're referencing focuses in particular on how Canada needs to take a stance on digital media literacy, one in which we view it as a lifelong learning process. We are talking about supports from pre-K through to seniors' facilities.
    Many jurisdictions across the world are in the process of developing strategies for digital media literacy. These strategies include some of those key critical thinking skills I was mentioning around authenticating and verifying information, and recognizing online hate in terms of the cultures of prejudice and some of the ideologies and tactics of hate, including the use of misinformation and disinformation.
    The strategy report you're referencing is one in which we are advocating for the federal government to come together to support Canadians in their digital media literacy journey, which is a lifelong journey.

  (1245)  

    Thank you.
    It's a lifelong journey, not just for young people. I think adults as well could benefit a lot from MediaSmarts education.
    You said in your testimony that we can't regulate our way out of dangers on the Internet. The Internet is a great gift, but it's also full of dangers. I'll use the example I've used with my children. We expect our police to keep our streets safe, but at the same time we don't walk down dark alleys on our own because it's dangerous.
    I am looking for your expert opinion on the allocation of responsibilities among schools, educators, parents, communities, government and the individuals themselves.
    Thank you very much for the question.
    I think this is one of those big questions. It's the big messy challenge we are facing today. How do we create that whole-of-society response that I was mentioning?
    First and foremost, we need leadership at the federal level where the federal government takes ownership in building a strategy that will impact other government departments in setting budgets, for example, that would include digital media literacy as a key budget objective.
    We also need to map the field of digital media literacy in Canada. That has yet to be done. There are hundreds of organizations doing this work on the ground, including MediaSmarts. We need to better understand what those are, what they are doing and how we can work together.
    That also informs my comment about budget. We need to create funding for this work that doesn't pit these civil society organizations in competition with one another, but allows us to work together in synergy to combat these various issues, which include the variety of online harms that we've all been talking about as well as ideologically motivated violent extremism.
    We see the education sector as being a player in that but not the only player. We see regulation as being very pivotal and important but not the only solution. It is the same for platform responsibilities and for technological design in that as well. That is part of the solution here but not the only approach.
    Thank you very much.
    Thank you very much.
    I'll give the time back to the chair.
    Thank you.
     I appreciate the 10 seconds because it gives me a chance, on everybody's behalf, to congratulate Mr. Noormohamed on the birth of his boy.
    Voices: Hear, hear!
    The Chair: I think he is three weeks old. Do I have that right?
    It's three weeks tomorrow.
    Congratulations. You look like you're getting some sleep. I don't know why that is, but don't worry, you'll have a chance to lose it.
    For now and for the next five minutes, you have the floor for questions.
    Congratulations on behalf of all of us.
    Thank you very much, Mr. Chair. It's a pleasure to be back with everybody.
    I want to thank the witnesses for the testimony this morning and my colleagues for allowing me to come back for a little bit today.
    I just want to ask Mr. Hashim a few questions. Thank you very much for your testimony, sir.
    I just want to clarify, because there has been a lot of conversation about how government-funded groups do research or work in the world of IMVE. I want to confirm that your organization has been funded by governments of all political stripes—Conservative and Liberal—over the last number of years. Is that correct?
    That's correct. We've been in existence since 1996 as a federal Crown corporation.
    You are non-partisan and you do the work you believe is in the best interests of Canadians in dealing with these issues. Is that correct?
     That's correct.
    A lot of folks have asked questions. There's a lot of whataboutism in these conversations about IMVE and racism, and “fine people on both sides” types of arguments.
    What would you say about the ideologically motivated violent extremism or hate that you see from the left? Do you see a lot of that in the 200,000 cases a year that you see?
    The number I was quoting was from Statistics Canada, and I'm not sure whether they break it down between left or right. I think the threat for extremism comes from many different places.
    However, when we are briefed by the RCMP in terms of some of the focus that they have, right-wing extremism and white supremacist groups definitely are at the top of that list. I'm not too sure about the numbers or proportion around left or right, but in terms of threat assessments, I understand those are the highest.

  (1250)  

    Do you think that the RCMP would have any reason to conflate one side over the other?
    I think you would have to ask the RCMP that.
    Just so I'm clear, from the work that you see, the vast majority that you've described comes from the far right. Is that correct?
    From the work that we're seeing, yes.
    Let's then talk about the victims. You mentioned that the victims are a very important part of this.
    Could you reiterate for us who the victims are of this ideologically motivated violent extremism? Could you take a quick second to describe that to us again?
    To be honest, I think that a lot of the conversation we're having right now is a bit backwards. We hear a lot from academics. We hear a lot from law enforcement. However, the victims are not at the centre of this conversation.
     Twenty years ago, I used to work for Mothers Against Drunk Driving, and I could see how, when you centre victims' voices, that integration supports not only the social services sector but also the policing environment and the justice system. The level of intelligence that the system gains as a whole by centring the voices of victims is pivotally important.
    Who do we see as victims? As I said, in terms of online hate, we've seen women, women of colour and particularly young people being targeted the most, but hate is an evolving target. Sometimes you see Muslims targeted. Sometimes you see members of the Jewish community. The highest numbers have typically involved Black and Indigenous communities, and now you see anti-Asian racism rising to very high levels. I think it evolves over time.
    With that, then, I think it would be great for all of us, as we try to think about how to improve the situation, to focus on victims.
    How do we, as folks in the political conversation, depoliticize this a bit to really focus on and acknowledge that this is a very real problem? What should we be doing, in your view—from the work that you do—to be leaning into trying to address this problem head-on?
    Could you, in a nutshell, describe what we could do, where we could invest, what would help to solve this problem once and for all, so that we can get to the type of society that allows everyone to feel as though they can live their best self, their most authentic self, without worrying about these types of challenges?
    I think we need to stop saying to some people that we believe you and to others that we don't. I think what we need to do is focus strictly on victims and hear what their perspectives are and to actually believe them. That doesn't come from just saying I believe you or I don't believe you. I think we create a system to support victims.
    We've seen really good examples across the world, particularly in Germany, which are now being exported to the OECD. Understandably, their context is significantly different. However, I think there are ways that we can reframe our social services infrastructure to support victims of hate. That could have tangible impacts, not only in determining who a victim is, what a victim feels, what the supports need to be, but in terms of having the system as a whole acknowledge the wrong of what is happening.
    I see a number of components to this. Mr. Aziz pointed towards the sensitive and respectful treatment of hate crime victims. There's a real need to be able to understand that. If you want people to go to the police to report it, there needs to be a specialist to address hate crimes. I think that victim support is a key and pivotal portion of that response.
    Thank you very much.
    I would now invite Ms. Larouche to use her two and a half minutes.
    Whenever you're ready, please go ahead.

[Translation]

    Thank you very much, Mr. Chair.
    I too congratulate my colleague, who is now the parent of a three-week-old baby. I have a three-month-old. We could talk about that and the exhaustion that comes with a new child. Again, congratulations.
     I would like to come back to some of what you said, Mr. Hashim.
    You saw a gap between the urban environment and what happens in rural communities. Do you have any more recommendations for us to bridge that gap between cities and more rural areas?

[English]

     Was that to me or Mr. Aziz?

[Translation]

    Yes, Mr. Hashim, I'd like to know if you have a recommendation.
    Mr. Aziz could answer the question as well.

[English]

    I think there's a real divide. To be frank, when we look at police forces who are responding to hate, because police forces in urban areas have more exposure to racialized communities and have more racialized police officers, their ability to understand the impact of what's happening is typically better than it is for those in rural areas.
    I can give you the example of a rural police agency where an individual was targeted and was murdered. The police within 24 hours said it wasn't a hate crime. They then went back to say, after listening to the community, that they were going to investigate it as a potential hate crime, but the harm was already done in terms of what the community was told in haste, which was “Look, yes, one person was targeted, and yes, one person was killed, but we don't think it was hateful.” That had real repercussions on that community.
    In terms of responses from rural versus urban, I think the urban ones are more developed. Part of the work we're doing with our task force is to create national standards around investigations and help those rural agencies that don't have the resources or hate crimes units. I will give you the example of London, Ontario, where four people were murdered. They today have a one-person hate crime unit. I'm not even sure if that job has been filled yet.
    There's a huge divide across Canada in terms of rural and urban responses to hate. I think creating national standards and being able to support the small local police jurisdictions is an important intervention that hopefully our work will contribute towards.

  (1255)  

    Thank you very much.
    Mr. MacGregor, we have two and a half minutes before the top of the hour, and that's how much time you have.
    You will take us home today. It's all yours.
    I'm taking us all home. Okay. Thank you, Mr. Chair.
    I will direct my last question to you, Dr. Brisson-Boivin. You talked about how it's really important for websites or apps or social media platforms to have those clear and easy-to-use tools for reporting unacceptable behaviour.
    We have also seen examples where, during the height of the COVID-19 pandemic, there was a lot of misinformation being spread about the nature of the pandemic, its source and whether or not it was even a serious pandemic. On Facebook in particular, I remember, whenever COVID-19 was mentioned, a little disclaimer was posted at the bottom of each post that would direct people to factual information.
    Can you comment a little bit on that specific example? Do we need to have similar ones here, for example, if a post is alluding to white supremacy or is anti-Semitic in nature or something? Do we need to have little educational tools that point people to a verified true source on those things? Could you perhaps expand on that point, please?
    Thank you very much for the question. I will try to keep my comment brief.
    Yes, when we think about technological responses, platforms tend to respond in terms of affordances, which is exactly what you mentioned: design fixes such as labelling something as a piece of misinformation. We could entertain the idea of labelling something as hateful or against community standards. I think that's a good starting point. However, we need to ask ourselves this: What then?
    What happens when we say to a person that this is a piece of misinformation or hateful content? The user needs to then have other skills and tools to be able to, for example, find the original source of the piece of misinformation to feel confident in being able to verify. Similarly, with online hate, what then? What happens after we have flagged for them that this is a piece of hateful content?
    From our perspective, again, those are helpful, but they aren't the end of the story. I think we need other tools and critical thinking skills that will allow people to verify and authenticate information and/or respond to online hate.
    Thank you, Mr. Chair.
    Thank you, Mr. MacGregor.
    Thank you to the witnesses for a very interesting and important hour of reflection from your experiences both as individuals and as leaders of organizations who are immersed in this subject. On behalf of my colleagues on the committee, I want to thank you for your insight and your time.
    Colleagues, I will remind you that on Thursday we will have witnesses and a panel. We will go in camera for the last bit of the meeting to give drafting instructions on this study to our analysts.
    Looking forward to that and looking forward to the glorious weather between now and then, this meeting is now adjourned.