SECU Committee Meeting


Standing Committee on Public Safety and National Security


NUMBER 019 | 1st SESSION | 44th PARLIAMENT

EVIDENCE

Tuesday, April 26, 2022

[Recorded by Electronic Apparatus]

  (1100)  

[English]

    I call the meeting to order. Welcome back. I hope everybody took full value of the two weeks back home staying in touch with constituents or finding a few days to do something completely different. Here we are back at work.
    Welcome to the 19th meeting of the House of Commons Standing Committee on Public Safety and National Security. We will start by acknowledging we're meeting on the traditional unceded territory of the Algonquin people.
    Today's meeting is taking place in a hybrid format pursuant to the House order of November 25, 2021. Members are attending in person in the room and remotely using the Zoom application. Members and witnesses participating virtually may speak in the official language of their choice. You have the choice at the bottom of your screen of floor, English or French.
    Pursuant to Standing Order 108(2) and the motions adopted by the committee on Thursday, February 17, 2022, the committee is resuming its study of the rise of ideologically motivated violent extremism in Canada.
    With us today by video conference we have Evan Balgord, executive director of the Canadian Anti-Hate Network; Barbara Perry, director, Ontario Tech University, Centre on Hate, Bias and Extremism; and Dr. Heidi Beirich and Wendy Via, Global Project Against Hate and Extremism.
    Welcome to all. Up to five minutes will be given for opening remarks after which we will proceed with rounds of questions.
    I now invite Mr. Balgord to make an opening statement of up to five minutes.
    Mr. Balgord, the floor is yours.
    My name is Evan Balgord. I'm the executive director of the Canadian Anti-Hate Network.
    We're an anti-fascist and an anti-racist non-profit organization. Our mandate is to counter, monitor and expose hate-promoting movements, groups and individuals in Canada. We focus on the far right because it gives rise to the most incidents of ideologically motivated violent extremism.
    Today, I'm going to give a recent history of the far-right movement to explain in part how it escalated to the convoy and the occupation, and then I will describe the threat we are currently facing today.
    I started doing this work originally as a journalist about five or six years ago. Today's far-right movement was really born out of a racist anti-Muslim movement. We had hate groups spring up that were emboldened by Trump's election and his rhetoric about Muslims, and then they took to the streets to protest against our Motion No. 103, which was to broadly condemn Islamophobia.
    At the time there were groups involved that you might recognize, like the Proud Boys and the Soldiers of Odin, and there were two threats largely emerging out of this space. The first was that they were assaulting people at demonstrations, which could get quite violent. The second was that they were harassing Muslims at their places of worship, which was deeply concerning to those communities.
    Of course, Motion No. 103 passed and the sky didn't fall, so they needed a new issue. They rebranded and started calling themselves Yellow Vests Canada. When they did that, they added new grievances. They said it was not just about Muslims, but also about oil and gas and western separation. But, of course, make no mistake: if you went into the Facebook groups at the time, you would find regular occurrences of largely anti-Muslim racism—although you'll find every form of racism and anti-Semitism present—and you would also find calls for violence, oftentimes towards politicians.
    They also had a convoy, interestingly enough, called United We Roll. A lot of people who organized that convoy would later organize the more successful occupation of Ottawa. You can see how you can draw a straight line from one thing to the other.
    This was also around the time we saw the rise of livestreamers and content creators becoming more important than “hate” groups. These are individuals like Pat King, who would go on to have an outsized impact on the occupation.
    Their convoy, United We Roll, was a bit of a flop. It did not meet their expectations, and the Yellow Vests Canada movement dwindled, although they were still holding weekly demonstrations in most of our cities. Then came the pandemic, which was like manna from heaven for these groups.
    Far-right groups and racist groups are also conspiratorial groups at their core, right? They believe there's this Muslim or this Jewish or this globalist takeover of Canada or of the world. At their core, they are conspiracists. So, when COVID came around, they very genuinely adopted COVID conspiracy theories. This was also very dangerous and led to awful second-order effects, because regular people were being fed misinformation and disinformation about COVID, and they would go out and find groups of like-minded people. Who were those groups of like-minded people? Well, they were started by our right-wing extremists here. We had more normal people coming into contact with our far-right movement. That was bad, because a lot of those people got radicalized, and we started to have marches in the hundreds and the thousands in our cities to protest things like public health measures. That all culminated with the convoy, and we saw that they were now capable of occupying Ottawa.
    One of the things I want to point out moving forward is those people haven't gone anywhere. They're back to their regularly scheduled programming. They are still holding their large demonstrations in various cities and some of them are returning this weekend to Ottawa as part of a Rolling Thunder convoy, which will not be as significant, but the point is that this just continues and it grows.
    I want to describe two threats we're facing today. We are talking about ideologically motivated violent extremism. That means extremism that turns violent or criminal. That's a lot of what we're talking about here. We have threats like the threat of a terrorist attack or the threat of a mass violence incident. Then we have the threat posed by this movement of convoy-supporting COVID conspiracists. They're not all racists; they're not all violent. Not all the people on January 6 were either, but there were groups in their midst that decided they were going to try to do a coup, and they swept up a lot of the other people there.
    The same thing is kind of happening here. We have more extreme elements of our far-right movement than others, but as a whole they are becoming a threat to our democracy. The goal of the whole thing is an undemocratic overthrow of the government so that they can take power and persecute their perceived political enemies. That would mean putting doctors, journalists and politicians on trial and perhaps executing them. That's what a lot of them want to do.

  (1105)  

    That's a pretty significant threat. That's the ecosystem threat, right? We can't just talk about ideologically motivated violent extremists in a vacuum—
    You have 10 seconds left, sir.
    Certainly.
    I'll just end by saying that we need to focus on shrinking the size of that far-right ecosystem, because then we'll have fewer IMVE threats coming out of it.
    Thank you very much.
    I would now like to invite Ms. Perry to speak for up to five minutes in her opening comments.
    Ms. Perry, the floor is yours.
    Thank you very much, and thanks for the opportunity.
     Evan, thanks for providing a good segue for me. I really want to emphasize the lessons we can learn about the far-right movement more broadly from their engagement in the convoys or the occupation.
    There are really four points I want to stress here. One is what it tells us about their organizational capacity. We really saw a capacity to organize, in a Canadian context, unlike anything we've seen before, on a large scale, largely facilitated by both encrypted and unencrypted social media platforms. That theme will run through what I say today, because social media was also the venue through which they displayed their real adeptness at exploiting broader popular concerns, grievances and anxieties and weaving them into their own narratives. As well, there are the implications of social media platforms for the deployment and, disturbingly, the ready acceptance of the sorts of disinformation, conspiracy theories and so on that underlie much of far-right activism, particularly in the context of the convoy and of COVID much more broadly, as Evan suggested.
    The convoy and the occupation also tell us a great deal about the risks and threats associated with the right-wing movement in Canada. Obviously we have the threats to public safety, as we saw in Ottawa in particular, not just in terms of the disruption of the whole downtown community but also in terms of the harassment, the hate crime, the threats, the intimidation of people of colour or LGBTQ+ people or even people who were wearing masks in the downtown area.
    We see threats to national security. Obviously the fact that they occupied that space so close to Parliament Hill is paramount, but also very important to keep in mind is the threat to border security that we saw in the border blockades, especially with the discovery of the cache of weapons in Coutts associated with far-right groups.
    On dangers to democracy, there's obviously the threat that Evan referred to in terms of attempts to overthrow a democratic government, but even more broadly than that, the far right in this context is also very much engaged in accelerating the erosion of an array of key institutions—certainly the state, but also science, media, education and academe.
    The next point, the final key point in terms of the pattern, is the failure of law enforcement in this context to properly evaluate, understand and prepare for the risks associated with the far right and, more broadly again, their failure to intervene and counter right-wing extremism generally. In fact, in the convoy and in other contexts, we've seen sympathy for the far right, as we saw here with fundraising donations coming from members of law enforcement. We've also seen social media platforms and pages devoted to law enforcement sharing some of these conspiracy theories and this disinformation.
    The last point I want to make is about what the points of intervention are, given what I've identified here as some of the key lessons. The first is the need to enhance not just critical digital literacy but civic literacy as well. There was an awful lot of misinformation and misunderstanding about the nature of the charter, about the role of the Governor General, about how governments operate generally. Both of those pieces are important.
    Another point of intervention is around the law enforcement/intelligence community enhancing their awareness, their capacity and their willingness to intervene around right-wing extremism.
    Finally, there is a need to create opportunities and incentives to engage in civil dialogue and engage across partisan sides whether we are talking about the general public or whether we are talking about politics.
    I will end there. Thank you.

  (1110)  

     Thank you very much.
    I would now invite Ms. Via to give us an opening statement for five minutes.
    Good morning, committee members. Thank you for the honour of inviting us to speak today on the important issue of ideologically motivated extremism.
    My name is Wendy Via, and I'm joined by my colleague, Heidi Beirich. We co-founded the Global Project Against Hate and Extremism, an American organization that counters ideologically motivated extremism and promotes human rights that support flourishing, inclusive democracies. We particularly focus on the transnational nature of extremist movements and the export of hate and extremism from the United States.
    The United States, Canada and many other countries are currently awash in hate speech and conspiracy theories like QAnon, anti-vax, election disinformation and “the great replacement” spreading on poorly moderated social media. It is indisputable that social media companies are major drivers of the growth of global hate and extremist movements, conspiracy theories, the radicalization of individuals and the organization of potentially violent events.
    The consequence of this spread is the polarization of our societies and violence in the form of rising hate crimes and terrorist attacks. The tragedies of the Quebec City mosque shooting, the Toronto van attack and others, such as the shootings at the Tree of Life synagogue in Pittsburgh and the mosques in Christchurch, are a horrific reminder of the toll that hate and online radicalization can take. These movements also manifest in direct threats to our democracies, as we've seen so clearly with the January 6 insurrection and the trucker occupation that held Ottawa hostage for weeks.
    Canada and the United States have long had similar and intertwined white supremacist, anti-government and other hate movements. In recent years we have seen American hate and militia organizations, including the neo-Nazi The Base, the anti-government Three Percenters, the misogynistic and racist Proud Boys and others establish themselves on both sides of the border. Because these organizations attempt to infiltrate key institutions, both countries are facing the issue of extremists in the military and the police, though to varying degrees.
    In the U.S. and other countries, political figures and media influencers with tremendous online reach, and in particular, former president Donald Trump, have legitimized hate and other extremist ideas, injecting them into the mainstream political discourse and legitimizing bigoted and fringe ideas across borders. Research shows that Trump's campaign and politics galvanized Canadian white supremacist ideologies and movements, and his endorsement of the trucker convoy, along with media personalities like Tucker Carlson, undoubtedly contributed to the influx of American donations to the trucker siege.
    In addition to the key role of social media, a more systemic driver of extremism is the growing demographic diversity in both countries, which, along with histories of white supremacy, though different in each country, fuels nostalgic arguments that a more successful white past is being erased and intentionally reconstituted with communities who do not belong. The movements pushing these ideas will likely become stronger in the years to come, as they have a historical foundation and sympathy that other extremist movements will never achieve. It is for this reason that countering them is of the utmost importance.
    If I may, I'll offer some recommendations here with a broader list in our written testimony.
    This growing problem will not be solved without taking on the online social media and financial spaces. Absent a domestic law with teeth, tech companies will not reform their practices. Importantly, the tech companies must be held to account in all languages, not just American English. A sovereign democracy cannot thrive when there are massive ungovernable spaces. Most research into the impact of social media on our democracies and societies is generated by civil society and focuses on the U.S.
    Independent research of online harms should be funded. We should improve cross-border co-operation, particularly in terms of transnational travel and sharing of intelligence and threat assessments. We should fully implement the Christchurch Call commitments, of which Canada was an original signatory. We should put in place and enforce strong policies against extremism in the military and police forces, from recruitment to active duty to veteran status.
    Finally, extremist movements are emboldened by endorsement of their ideas from influential people. They can also be diminished by public rejection and publicly and forcefully condemning hate, extremism and disinformation whenever possible.
    I hope these suggestions will be helpful.
    Thank you.

  (1115)  

     Thank you very much.
    We'll now move to our first round of questions from colleagues around the table.
    We will begin with Mr. Lloyd.
    Sir, you have six minutes. The floor is yours.
    Thank you, Mr. Chair.
    My first question is to Mr. Balgord from the Canadian Anti-Hate Network.
     Mr. Balgord, would you say that your organization is an objective organization?
    We wear our biases on our sleeves. We are very proudly anti-fascist, and we focus on the far right. We focus on the far right because, if you speak with anybody who is a researcher of this or an expert in national security threats, they will agree that ideologically motivated violent extremists and threats today are primarily coming out of far-right organizing.
    Thank you.
    I appreciate the honesty, Mr. Balgord. It's important. I'm not diminishing some of the work that you do.
     I come from an area where last summer we had a hundred-year-old church burn to the ground, and dozens of people had to be evacuated from an apartment building close by, which nearly went up in flames and could have killed dozens of people, but you just don't hear it talked about in this country. I understand that it's not your organization's mandate to talk about these things. As you've said, you're clearly focused on the far right.
    During the convoy protests, your executive director—I believe that's his position—Bernie Farber, posted a tweet with a photo of a vile anti-Semitic flyer and claimed that this was a picture of the flyer being circulated in Ottawa among the trucker protesters. Upon further examination, it was proven that this exact same photo was taken in Miami, Florida, weeks before the protests ever began.
    Can you explain why the executive director of your organization was claiming that this photo was being circulated at the protests when, in fact, it was a photo that was from a completely different country weeks before the protests?
    Thank you very much for giving me a way to address this.
    First off, that was our chair. I'm the executive director. I was privy to the email chain that led to him tweeting that out. What had occurred was that somebody in Ottawa had reached out and said that they saw that flyer there, and they provided the photo. At that moment, Bernie was not aware that the photo itself was taken from an American source.
    What the person was trying to communicate to our organization was that they saw the same flyer, but they had attached the photo from the States. It was our error in not communicating that more clearly, where the photo itself originated from. What the person was reporting to us was that they had seen the same flyer in Ottawa.
    Thank you.

  (1120)  

    You have no evidence other than hearsay that the flyer was being distributed in Ottawa, correct?
    That is correct. We took the report from somebody on the ground, and our chair put the information out there.
    I would say that we did see very similar messaging in Toronto. There was somebody who was wearing a billboard with essentially the same messaging.
    Let's move on here. We're talking about the Ottawa protest, but I appreciate your clarification on that matter.
    You've raised some pretty disturbing allegations about the potential for a terrorist attack or a mass violence event. I think we can all be thankful that this didn't happen during the protests. The fact that we didn't see a terrorist attack or a mass violence event undermines the argument that was being made by many, including by organizations such as yours, that this protest had violent motivations and that the protesters had a desire to commit violence.
    You've connected the United We Roll protest, which came to Ottawa in 2018, I believe.... A lot of people from western Canada concerned about the carbon tax, pipelines being blocked.... How do you draw this connection between white supremacy and fascism with people who are concerned about protecting their livelihoods?
    We don't, and I am always very careful at every juncture to point out that not everybody who was involved in the convoy is necessarily racist or necessarily violent.
    There's a broad generalization here about a protest in saying that it was organized by the same people—
    Yes.
    —and that this is a conspiracy to spread white supremacist and fascist views under the guise of pipeline and carbon tax politics. What evidence do you have to back up that claim?
    If you look back at the Yellow Vests Canada movement, and this is well documented, you'll find hundreds of examples of death threats and racist comments towards Muslims. That's the Yellow Vests Canada movement that I just addressed.
    In terms of your earlier comment about the organizers and how we make statements of that nature, I point to one of the key influencers and organizers of the convoy, Pat King, who said the mandate would only “end with bullets”.
    We saw other organizers who had previously made Islamophobic statements—
     Nobody had ever heard of this Pat King fellow before these convoy protests in Ottawa, yet you're saying that these people were involved with the United We Roll protests in 2018.
    That's correct.
    Do you have any evidence of a direct connection? Can you provide that evidence, since you've made this claim?
    Sure.
    Tamara Lich was in fact an organizer of United We Roll and she was one of the key organizers of the convoy. That's one example.
    Pat King [Inaudible—Editor] at the time of the convoy.
    Can you make a submission to the committee and provide us with this evidence? You've said it and I guess I'll take you at your word for now.
    Can you actually provide us with written evidence to back up these claims?
    Yes, we have.
    I'd be happy to share some of the articles we've already written on the subject with the committee.
    Do these articles contain primary sources that back up the evidence or are these opinion articles written by your admittedly not objective organization?
    Give a 10-second answer, please.
    Yes. Everything is demonstrated in the articles.
    Thank you.
    I would now invite Mr. Chiang to begin his six-minute line of questioning.
    The floor is yours, sir.
    Thank you, Mr. Chair.
    I'd like to thank all the witnesses for their time and sharing their expertise with us.
    My question is directed to Mr. Balgord.
    In your opinion, are Canada's national security agencies adequately focused on the far-right threats? If not, what recommendations do you have for these agencies?
    I'm not privy to how they make their decisions, of course.
    From what we can observe from the outside, there certainly seems to be much more of a focus on right-wing extremism and the ideologically motivated violent extremism that comes from it.
    I can't answer that question in depth. You'd have to ask our national security agencies themselves.

  (1125)  

    Thank you so much.
    Does your organization have any sort of tracking for hate-based extremism incidents?
    What are some ways the federal government might improve data collection related to extremism in order to better understand and combat this issue?
    We do not collect that kind of data ourselves. There are two sources of that data in Canada.
    The first is police-reported hate crime statistics. These are flawed because they don't capture a lot of the data.
    The best way we can measure hate crime and hate incidents in Canada is simply by asking Canadians if they've been victims of it. That's done through the General Social Survey. Every five years there is a portion on victimization where we simply ask people if they have been the victim of a hate crime and collect some surrounding information on it. That gives us our best snapshot of where we are at in Canada in terms of hate crime.
    I would respectfully submit that every five years is too infrequent for collecting that data. We've been long advocating that Statistics Canada should be collecting that data on an annual basis.
    Thank you, Mr. Balgord.
    Dr. Perry, your bio describes you as a primary national authority on far-right extremism in Canada.
    Could you elaborate on the work you have done in this field and some of your findings related to the risk of right-wing extremism in Canada?
    I have been studying far-right extremism in the Canadian context since about 2012-13. I had done a little work previously in this space in the U.S. in the mid-nineties or so, but I have been working more broadly in the area of hate studies for about 30 years now.
    In 2015, we published a report coming from a study that was funded by Public Safety Canada, which was really the first comprehensive academic approach to understanding right-wing extremism in Canada. We have just finished another three-year study, which is an update of that.
    What we have found in that report in 2015—and I can share it or the subsequent book that came out of that—was a very conservative estimate of about 100 active groups across Canada. We could document through open-source data that there were over 100 incidents of violence of some sort associated with the far-right in Canada. Just to put that in context, during the same period of time there were about eight incidents of Islamist-inspired extremism, which is what the focus was at the time.
    What else did we find there? In the update, we have found in the last couple of years in particular over 300 active groups associated with the far right and, of course, just in the last seven years or so, we have now seen 26 murders, 24 of those in mass murders, motivated by some variant of right-wing extremism.
    What else are we finding? One of the things that was alluded to earlier was the idea of the shifting demographics within the movement as well. I think that, as we saw with the convoy, it is a much older demographic than what we were seeing previously, when it was a predominantly, though not wholly, youth movement—skinheads, neo-Nazis, those traditional sorts of groups—but we're now seeing an older, better-educated demographic being brought to the movement as well. Certainly, it is a movement that is much more facile and ready to use social media in very ironic, as well as very open, ways to share its narratives.
    Thank you, Dr. Perry.
    Next, do you have any recommendations for this committee regarding the deradicalization of people with extremist views? How can we get people out of extremist groups once they have joined? How can we prevent people from joining these groups in the first place?
    These are the easy questions, I think.
    With respect to deradicalization, there's a lot of controversy about that term. We can bring people out of the movement. It doesn't necessarily mean that if they come out of the movement, they put aside those narratives. Sometimes these narratives stay with them for a long time, but these people at least desist from engaging in spreading those narratives or engaging in any sort of violence or harassment.

  (1130)  

    Wrap it up in 10 seconds, please.
    There are a number of organizations with that task, both to counter mobilization into the movement and to help people come out—Life After Hate and exit programs, those sorts of things.
    Thank you very much.
    I would now like to invite Ms. Michaud to begin her six minutes of questioning.
    The floor is yours.

[Translation]

    I thank the witnesses for joining us.
    I will address Mr. Balgord.
    In an article from September about protests during the election campaign, you said that protest groups were organizing their activities through online groups, including on platforms like Facebook. I assume something similar happened with the “freedom convoy”. You talked a bit about that earlier.
    Do you think platforms like Facebook are doing enough with their service policies to counter those activities? Do you think they are helping hate groups get organized?

[English]

    Through all of the whistle-blower data that has come out and from the whistle-blowers themselves who have told the story of what happens behind the scenes at Facebook, we've seen pretty conclusively that they identify problems like polarization and hate speech. When they propose solutions, they're told by their executives not to do them because it would hurt engagement or they discover that some of the things they do to increase engagement are in fact driving polarization. They move forward with those decisions because engagement is money for them. Platforms like Facebook and Twitter have more of a built-in incentive to drive engagement at all costs.
    No, they are not doing enough to combat things. I know that right now the government is looking at an online safety piece of legislation. That would have been very effective five years ago. It's still going to be effective and it's important because when people get involved in ideologically motivated violent extremism or far-right organizing or COVID conspiracies, they don't start doing that on the weird fringe platforms like Telegram. They start on the Facebooks and the Twitters of the world.
    If we can stop people from connecting with that misinformation and disinformation, we can help a lot of families who are dealing with their grandmother, their uncle or their aunt who's been swept up into this alternate reality that's causing a lot of trouble.
    There's still a lot that we can accomplish with the platforms, but we need to change the incentives. We need to make it so that they act responsibly.
    They've had 10 years to figure out how to do it themselves. Unfortunately, nobody really likes the idea of government having to step in and tell an industry what to do. Everybody rankles at that here and there, but we have to because, quite frankly, the status quo is untenable.

[Translation]

    Thank you.
    I especially like how you concluded your comments. No one likes it when the government interferes in these kinds of things, but we cannot always rely on organizations' good faith.
    What do you think the government should do? Do you think the legislation Europe recently adopted on problematic content on major platforms could be a good solution for Canada? Should we adopt that kind of a model here?

[English]

    As far as I can tell, none of the legislation that has tried to address online harms has made a difference to people who are victimized by it. I mean, platforms may point and say they did this and they did that, but I dare say that if you ask people who use these platforms, they will not perceive that there's much of a difference in their safety or how they perceive these platforms.
    Of course, we run into opposition to doing anything about online harms, so I think we should be moving forward with a different model. I don't think we should have a complicated model that looks at censoring or taking down individual pieces of content. I think that we should have an ombudsperson model.
    The basic idea is that you have an ombudsperson that is a well-resourced regulator with investigatory powers, so they can kick down the door of Facebook and take its hard drives. I'm being a little hyperbolic here, but we know that these platforms hide data from us and lie to journalists, so we do need broad investigatory powers to investigate them.
    I believe that this ombudsperson should be able to issue recommendations to the platforms about their algorithms and things like that. That would be very similar to what their own employees want to do behind the scenes. If they learn that something drives polarization and negative engagement and is leading to hate speech, they might suggest doing something else instead, or putting in a stopgap measure.
    If we had an ombudsperson who could look at what was happening under the hood and make recommendations to the platforms, that's the direction we want to go in. Where the platforms do not take those recommendations, we feel that the ombudsperson should be able to apply to a court. The court can weigh what the ombudsperson is recommending against all the charter implications. If the court decides that it's a good measure and it's charter-consistent, then the court can make it an order. Then, if the platforms don't follow it, they could face a big fine.
    This is a much more flexible way to move forward, because it means that any particular arguments we might have about free speech versus hate speech, etc., are taken out of the hands of government and instead happen with a bunch of intervenors in front of a court and a judge. That's how we would move forward, because it's flexible: we can put it in place now, defer some of those arguments and have them in front of a court, where they belong.

  (1135)  

[Translation]

    I can't help but take the time I have left to ask you a question about Elon Musk's recent purchase of Twitter.
    We know that algorithms play an important role on those types of platforms to spread disinformation and hateful content. This morning, I read in the media that the richest troll on earth has taken over that social media site and wants to make the algorithm public. What do you think about that? Should we be concerned about it?

[English]

    It's just a great example of how a lot of people who do not actually believe in free speech and free expression hide behind those arguments.
    We've seen Elon Musk, on a personal level, try to censor or sue people who say things he doesn't like. It's very concerning when somebody like that has so much power over a social platform that we all use every day and have to use for work reasons.
     You have 10 seconds, please.
    So no, I think it's an incredibly terrible development, but I don't know what we do about it.
    Thank you.
    Thank you very much.
    I would now like to invite Mr. MacGregor to take the last six-minute slice of this round.
    Mr. MacGregor, the floor is yours, sir.
    Thank you to all of our witnesses who are aiding our committee in this study.
    Mr. Balgord, maybe I'll start with you. On the subject of Elon Musk, I was reading some of his tweets. In one that stuck out to me, he likened Twitter to being the next iteration of the “public town square”, and said that in this digital space it was important to protect people's ability to voice their opinions and to enshrine free speech.
    I guess the main issue with social media on a variety of platforms is that it allows users to cloak themselves in anonymity. For example, I can't just go out among the public and start shouting obscenities and directing hate speech against identifiable groups, because I'll be held liable. People will see who I am. I can be held to account for my actions. But the cloak of anonymity is very prevalent on many social media platforms. There have also been problems with fake accounts being set up, and with troll factories, bot farms and so on.
    If social media companies to date have been wildly unsuccessful at tackling that problem, could you perhaps offer some comments on whether or not you foresee the role of the ombudsperson that you mentioned tackling that issue? Perhaps you could expand a bit more on that theme.
    On the issue of anonymity, I think you are entirely correct in how you've kind of diagnosed it. Our public square is more socially located and more democratic, in a sense. If you go spout off in your local Starbucks or Tim Hortons or whatever, you might be held socially responsible for it, whereas you are not online. Of course, now we have the social media companies that are very much not a democratic space. They can make unilateral decisions over who gets to speak, and how and when.
    On the issue of anonymity, I do very much take your point that people are more likely to troll and be abusive anonymously. However, we have to look at the case of perhaps a trans teenager whose parents are not supportive and they're looking to connect with a community online. Anonymity for them is safety, as it is for a woman who is perhaps fleeing a domestic violence situation who wants to engage with a social network online. In some cases, anonymity is absolutely the most valuable thing to people who are vulnerable. In the case of individuals overseas as well, where they face very real and very direct persecution by the government, anonymity is the only thing that keeps them safe.
    So I don't think making the Internet non-anonymous is necessarily the way to go, because there are all these cases where it has unintended consequences for people who do need safety.

  (1140)  

    I appreciate your raising that point. I think that is a very fair consideration. Perhaps the focus should be exclusively on the content.
    I'd like to turn my next question to the Global Project Against Hate and Extremism.
    In your opening remarks, you were talking about the fact that we do have to take social media companies on “with teeth”. In previous testimony from other witnesses in front of this committee, we heard a little bit about how far right and extremist groups are using different avenues to monetize their hate. For example, they may be using platforms like Amazon and Etsy to sell paraphernalia and raise funds that way.
    With the work that your organization does, is there anything on that particular subject you can inform our committee on that would help us produce some recommendations to the federal government?
    Perhaps I could just clarify the question. Are you talking about their ability to fundraise on some of these online platforms?
    Yes. We saw examples of them raising funds through selling paraphernalia on various platforms. Could you help illuminate anything on that particular subject?
     For platforms like Amazon and eBay, they have put in place rules that prohibit items from being sold if they meet a certain threshold in terms of inspiring hate and violence. The challenge is that it's not always well enforced. I think when we're talking about making rules or creating legislation to combat this, it's the enforcement that is the real challenge.
    We see that with the companies themselves. They have these rules. Twitter, Facebook, YouTube—all of them have rules about what can be aired on their platforms, but it's the enforcement. It is unequally enforced. It is inadequately enforced. There is not enough staff. There's not enough cultural and language competency in order for that to happen.
    So I think it is the enforcement. That's the teeth we were talking about in the opening remarks.
    Thank you very much.
    Colleagues, a quick look at the clock tells me that if I cut everybody's allotment in half in the second round, we'll finish exactly on time. Let's proceed with that.
    I would now invite Ms. Dancho to go ahead with a two-and-a-half-minute round.
    Thank you, Mr. Chair.
    Ms. Perry, you were talking about disinformation and conspiracy theories spread by actors online. You mentioned that they're “associated with the right wing” in Canada. Now, I'm a Conservative. I would consider myself on the right wing of the spectrum. I took issue with your characterization of that.
    I'm not sure if you misspoke or if you meant to say that the extremist elements are on the right side. I'd just like you to correct the record, if you wish.
    Well, there are two points there. I think I really was speaking more to the extremist elements. I should also stress that conspiracy theories run the gamut from left to right, but there are some that seem to be particularly associated with the far right in the Canadian context.
    Thank you for that clarification. I would agree that there seem to be conspiracy theories across the spectrum. We need to pay special attention to that, certainly.
    My next question is for you, Mr. Balgord. You mentioned that you have concerns with regard to algorithms and how they drive extremism and what we see or what comes up on our social media platforms. One of the key things that Elon Musk has talked about concerning Twitter is to make the algorithms more public so that we understand why we're seeing what we're seeing. Would you not agree that this is a good idea?
    Yes. That actually is something that I would support.
    Great.
    You mentioned also the town square platform in general and the bots. Elon Musk has also talked significantly about addressing the bots issue. Do you believe that bots drive polarization as well, and that Elon Musk's idea in this regard is a good idea?
    Yes; bots do two things. First, they can be weaponized by state actors to exacerbate social conflict within other countries. That, of course, is not something we want foreign state actors doing. The second thing they do, of course, is more like marketing—hijacking, trying to grift and make money.
    Neither is good. Of course, we would like bots to be removed from the platform. The social media companies today actually try fairly hard to keep bot accounts off and haven't had a lot of success at it. Improvements there would be welcome.

  (1145)  

    You mentioned that you took issue with Elon Musk suing others—I'm not sure of the context—perhaps for defaming him, or perhaps he accused them of libel. I would assume that those were contexts in which people were attacking his company or his reputation personally. Do you see that as different from him protecting free speech on a digital town square? I personally do see it differently.
    Please make it a 10-second answer.
    I would just say this. We see his comments in regard to free speech as maybe concerning because his personal opinion on free speech and the one he's putting forward publicly for the platform seem to be at odds with each other.
    Thank you very much.
    Mr. McKinnon, I will turn to you now for a two-and-a-half-minute round, sir.
    I'll start with you, Ms. Via. A lot of the testimony today has been focused on right-wing extremism. I'm wondering if you can discuss the other areas of extremism that we might be concerned about and that we should be aware of.
    Do you mean areas other than far-right extremism?
    Yes. We've been focused a lot on right-wing extremism, but Dr. Perry mentioned that it's right across the spectrum. I'm wondering if there are other general aspects or categories that we should be aware of and that we should be taking note of.
     There have certainly been incidents of extremism on the far left, particularly related to climate or animal protection. However, that is not as much of an issue today as it was, say, in the nineties. The incidents that we see and the violence that we see today are primarily coming from the far-right extremist element. That is why we focus on it: it is the primary source.
    Okay. Thank you very much.
    I'd like to extend that same question to Dr. Perry as well. Can you please expand on the nature of extremism and whether there are other categories that we should be aware of and taking note of?
    My response will be very similar to Wendy's in that the nature and extent of the violence that we see coming from other sectors in the Canadian context, specifically, are dwarfed by what we see from the far right. I gave some examples earlier on of the mass murders that we've seen in Canada and across the globe.
    Again, specific to the Canadian context, that's really where the predominant threat is in terms of violence, but also in terms of the visibility and extent of their attempts to recruit and to expand their narratives across the nation.
    Thank you. I believe that's my time.
    I'll immediately invite Ms. Michaud, who has all of one and a half minutes.
    Go ahead.

[Translation]

    Thank you, Mr. Chair.
    Ms. Perry, you are part of a group of researchers. Do you have any research data on the social, family or individual factors associated with the emergence of extremist groups in Canada?
    Do you know or are you discovering what the deepest causes of the emergence of such groups are in the Canadian context?

[English]

     I'm a sociologist by training, so most of the work I do is really looking more at the context in which hate crime emerges from extremism. However, I have been working with some colleagues, in particular at Yorktown Family Services, who take a very different approach. It is one that looks at concentric circles of influence.
    What are the individual challenges that those who are vulnerable might be experiencing? What's their family context? What's their broader peer context? What's the broader social context? We're looking at the ways that all of those pieces intersect.
    I think that this organization is one that you might like to connect with.

[Translation]

    Do you think the COVID‑19 pandemic has exacerbated those behaviours among people who may have already been susceptible to getting involved in those kinds of movements?

  (1150)  

[English]

    Again, it's at both ends of the spectrum in terms of increasing individual anxieties—
    Answer in 10 seconds, please.
    —as well as exacerbating the polarization that also feeds into right-wing extremism. They're being fed anti-Asian conspiracy theories.
    Thank you very much.
    Mr. MacGregor, you have a minute and a half, sir.
    Thank you, Mr. Chair.
    Maybe I'll direct the one question I have to Ms. Perry.
    A lot of the things that we're contemplating, policy-wise, are essentially reactive in nature, so I'm more interested in the proactive end of the spectrum. How can we properly address people's legitimate grievances and their frustrations with the way things in life are going right now?
    Also, with respect to our youth, we know education is largely within the provincial domain, but do you have any recommendations that our committee could make about what could be done at the federal level to ensure that young Canadians are aware of the narratives used by radical and extremist groups? Do you have any strategies we can use at the federal level to counteract that?
    Thank you for the opportunity to address that question. That's something I talk an awful lot about: the capacity of the federal government to support the work of grassroots, community-based and civil society organizations that are doing a lot of that work on the ground.
    Whether it's working in partnership with boards of education or even particular teachers to develop curricula, or whether it is developing programs that might be offered in the community through partnerships with other community groups, for me, the key is enhancing the capacity of community-based organizations with expertise in this area.
     Thank you very much.
    Mr. Lloyd, it's over to you, sir, for a two-and-a-half-minute round.
    My question is for Ms. Via. You said something in your recommendations about monitoring and recruitment of members of the armed forces, particularly on the veterans side of the question. Are you recommending that this committee propose that the government proactively monitor the political activities of Canada's veterans?
    Can you clarify what you meant?
    No, I wasn't recommending that.
    What I was trying to say is that the programs that are put into place should address the military and police officers at all stages of their careers, including veterans. Veterans are vulnerable to recruitment by extremist organizations because of their experience and the tactics they have learned, often in weapons and bomb making. They need to be protected from that recruitment.
    I appreciate that clarification.
    I wonder about your term “vulnerable”. It seems to me like the more accurate term is that they would be desirable targets. What would you say makes them vulnerable targets?
    Are you suggesting that veterans have something inherent in them that makes them more susceptible to being recruited by these organizations?
    I think that they are both desirable recruits and vulnerable. Some studies here in the United States, and some of the work we've done here, show that when active duty members separate from the armed forces, there is a transition period, particularly if there has been anything unpleasant about the separation during that transition period. There's also the sense of community, the sense of being a part of something and the sense of protecting your country. These are things that can make a veteran, in this case, vulnerable.
    In my final time, would you suggest a good recommendation would be that the government should seek out better ways to keep veterans integrated in their military communities and to improve their transition to minimize the threat that this recruitment could happen? Is that a recommendation that you would propose?
    Give a 10-second answer please.
    Yes.
    There are nine seconds left on the clock. I'll save them.
    Ms. Damoff, you will take us to the top of the hour and the end of this portion of our meeting. You have two and a half minutes.

  (1155)  

    Thank you so much, Chair.
    Thank you to all our witnesses for being here today.
    Dr. Perry, thank you for coming to our committee again today. It's wonderful to have your insight.
    You spoke at your last appearance at the public safety committee about the rise of the incel movement. I wonder if you could update us on that and the role that the incel movement plays within IMVE.
    If we think about IMVE, it includes gender-motivated extremism, so incel ideology certainly falls under that. However, there's also very often some intersection between incels and what we might think of as more traditional elements of the far right—even white supremacist groups, for example—in that there is also an ingrained misogyny and a fascination with or commitment to traditional gendered roles and gender values within many elements of the far right. They find, I think, a natural affinity with one another.
    We're seeing more activity among incels, whether related to far-right groups or not, but we're, thankfully, also seeing far more research in that space that helps us to understand both the peculiarities and the similarities with the far right.
    Is there anything the government should be doing? You mentioned research, but is there anything that we can be doing to counter the rise of the incel movement?
    Again, it comes back to what I was saying earlier on about supporting some of the great work that is going on at the community level. That's an important area of intervention.
    I am gratified that gender is included in the understanding of IMVE. I think that goes a long way to enhancing our recognition as a society that violence against women and gender non-conforming people is also a part of this continuum of hatred, hostility and violence.
     I have 15 seconds left. I think I'll give them back to you, Chair.
    Thank you very much.
    Everybody has been generous with their time this morning, particularly the witnesses. On behalf of the committee and all parliamentarians, I want to thank you for bringing all of this experience to bear on a very important subject that the committee is now studying. Thank you very much.
    Colleagues, we will now take a five-minute suspension to change panels and take a bit of a break. We'll see everybody in five.

  (1155)  


  (1200)  

    I call the meeting back to order.
    Colleagues, we're ready to resume with our second panel. With us this second hour, we have Ilan Kogan, data scientist at Klackle. From Meta Platforms, we have Rachel Curran, public policy manager of Meta Canada, and David Tessler, public policy manager. From Twitter Inc., we have Michele Austin, director of public policy for the U.S. and Canada.
    I would like to invite our guests to give an opening statement of up to five minutes. I will begin with Mr. Kogan.
    Mr. Kogan, the floor is yours.
     Mr. Chair, members of the committee, I would like to thank you for inviting me today to discuss artificial intelligence and social media regulation in Canada.
    I begin with an oft-quoted observation: “For every complex problem, there is a solution that is clear, simple and wrong.”
    Canada is not the first country to consider how to best keep the Internet safe. In 2019, for instance, the French Parliament adopted the Avia law, a bill very similar to the online harms legislation that the Canadian government considered last year. The bill required social media platforms to remove “clearly illegal content”, including hate speech, from their platforms. Under threat of significant monetary penalties, the service providers had to remove hate speech within 24 hours of notification. Remarkably, France's constitutional court struck the law down. The court held that it overly burdened free expression.
    However, France's hate speech laws are far stricter than Canada's. Why did this seemingly minor extension of hate speech law to the online sphere cross the constitutional line? The answer is what human rights scholars call “collateral censorship”. Collateral censorship is the phenomenon where if a social media company is punished for its users' speech, the platform will overcensor. Where there's even a small possibility that speech is unlawful, the intermediary will err on the side of caution, censoring speech, because the cost of failing to remove unlawful content is too high. France's constitutional court was unwilling to accept the law's restrictive impact on legal expression.
    The risk of collateral censorship depends on how difficult it is for a platform to distinguish legal from illegal content. Some categories of illegal content are easier to identify than others. Due to scale, most content moderation is done using artificial intelligence systems. Identifying child pornography is relatively easy for such a system; identifying hate speech is not.
     Consider that over 500 million tweets are posted on Twitter every day. Many seemingly hateful tweets are actually counter-speech, news reporting or art. Artificial intelligence systems cannot tell these categories apart. Human reviewers cannot accurately make these assessments in mere seconds either. Because Facebook instructs moderators to err on the side of removal, the speech of marginalized groups may, counterintuitively, be censored online by these good-faith efforts to protect them. That is why so many marginalized communities objected to the proposed online harms legislation that was unveiled last year.
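    A rough back-of-the-envelope sketch can make the scale problem concrete. Only the 500-million figure comes from the testimony; every rate below is a hypothetical assumption for illustration, not a measured statistic from any platform:

```python
# Hypothetical illustration of collateral censorship at platform scale.
# Only the 500M daily-post figure is from the testimony; all rates below
# are assumptions made for the sake of the arithmetic.

daily_posts = 500_000_000      # posts per day (from the testimony)
hate_prevalence = 0.001        # assume 0.1% of posts are truly hate speech
true_positive_rate = 0.95      # assume the classifier catches 95% of it
false_positive_rate = 0.005    # assume 0.5% of lawful posts are wrongly flagged

actual_hate = daily_posts * hate_prevalence
lawful_posts = daily_posts - actual_hate

correctly_removed = actual_hate * true_positive_rate
wrongly_removed = lawful_posts * false_positive_rate  # the collateral censorship

error_share = wrongly_removed / (wrongly_removed + correctly_removed)
print(f"Hateful posts removed: {correctly_removed:,.0f} per day")
print(f"Lawful posts removed:  {wrongly_removed:,.0f} per day")
print(f"Share of removals that are errors: {error_share:.0%}")
```

    Under these assumed rates, roughly 2.5 million lawful posts would be removed every day, and about five of every six removals would be errors, simply because hate speech is rare relative to lawful speech. Lowering the removal threshold to catch more hate speech only worsens that ratio.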
    Let me share an example from my time working at the Oversight Board, Facebook's content moderation supreme court. In August 2021, following the tragic discovery of unmarked graves in Kamloops, British Columbia, a Facebook user posted a picture of art with the title “Kill the Indian, Save the Man”, and an associated description. Without any user complaints, two of Facebook's automated systems identified the content as potentially violating Facebook's policies on hate speech. A human reviewer in the Asia-Pacific region then determined that the content was prohibited and removed it. The user appealed. A second human reviewer reached the same conclusion as the first.
    To an algorithm, this sounds like success, but it is not. The post was made by a member of the Canadian indigenous community. It included text that stated the user's sole purpose was to bring awareness to one of the darkest periods in Canadian history. This was not hate speech; it was counter-speech. Facebook got it wrong, four times.
    You need not set policy by anecdote. Indeed, the risk of collateral censorship might not necessarily preclude regulation under the charter. To determine whether limits on free expression are reasonable, the appropriate question to ask is, for each category of harmful content, such as child pornography, hate speech or terrorist materials, how often do these platforms make moderation errors?
     Although most human rights scholars believe that collateral censorship is a very significant problem, social media platforms refuse to share the data that would let us measure these error rates. Therefore, the path forward is a focus on transparency and due process, not outcomes: independent audits, accuracy statistics, and a right to meaningful review and appeal for both users and complainants.
    This is the path that the European Union is now taking and the path that the Canadian government should take as well.
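    As a sketch of what such accuracy statistics could look like, an auditor might compute a per-category wrongful-removal rate from a sample of decisions. The record structure and category names here are hypothetical, since no platform currently publishes data in this form.

```python
# Per-category wrongful-removal rates from an audited sample of decisions.
# Record structure and category names are hypothetical assumptions.

from collections import defaultdict

def wrongful_removal_rates(audited: list[dict]) -> dict[str, float]:
    removals = defaultdict(int)   # removals audited, per category
    wrongful = defaultdict(int)   # removals the auditor judged non-violating
    for d in audited:  # e.g., {"category": "hate_speech",
                       #        "removed": True, "violating": False}
        if d["removed"]:
            removals[d["category"]] += 1
            if not d["violating"]:
                wrongful[d["category"]] += 1
    return {c: wrongful[c] / n for c, n in removals.items() if n}

sample = [
    {"category": "hate_speech", "removed": True, "violating": False},
    {"category": "hate_speech", "removed": True, "violating": True},
    {"category": "child_exploitation", "removed": True, "violating": True},
]
print(wrongful_removal_rates(sample))
# {'hate_speech': 0.5, 'child_exploitation': 0.0}
```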
    Thank you.

  (1205)  

    Thank you very much.
    I would now like to invite Ms. Curran to take up to five minutes for an opening statement.
    The floor is yours.
    We'll start with my colleague, Mr. Tessler.
    Thank you for the invitation to appear before the committee today to talk about the important issue of ideologically motivated violent extremism in Canada.
    My name is David Tessler and I am the public policy manager on Meta's counterterrorism and dangerous organizations and individuals team.
    With me today is Rachel Curran, public policy manager for Canada.
    Meta invests billions of dollars each year in people and technology to keep our platform safe. We have tripled the number of people working on safety and security to more than 40,000 globally. We continue to refine our policies based on direct feedback from experts and impacted communities to address new risks as they emerge. We're a pioneer in artificial intelligence technology to remove harmful content at scale, which enables us to remove the vast majority of terrorism- and organized hate-related content before any users report it.
    Our policies around platform content are contained in our community standards, which outline what is and what is not allowed on our platforms. The most relevant sections for this discussion are entitled “violence and incitement” and “dangerous individuals and organizations”.
    With respect to violence and incitement, we aim to prevent potential offline harm that may be related to content on Facebook, so we remove language that incites or facilitates serious violence. We remove content, disable accounts and work with law enforcement when we believe there's a genuine risk of physical harm or direct threats to public safety.
    We also do not allow any organizations or individuals who proclaim a violent mission or who are engaged in violence to have a presence on our platforms. We follow an extensive process to determine which organizations and individuals meet our thresholds of “dangerous”, and we have worked with a number of different academics and organizations around the world, including here in Canada, to refine this process.
    The “dangerous” organizations and individuals we focus on include those involved in terrorist activities, organized hate, mass or serial murder, human trafficking, organized violence or criminal activity. Our work is ongoing. We are constantly evaluating individuals and groups against this policy as they are brought to our attention. We use a combination of technology, reports from our community and human review to enforce our policies. We proactively look for and review reports of prohibited content and remove it in line with our community standards.
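    As a rough illustration of that combination, detection can come from automated classifiers or from user reports, with uncertain cases routed to human review. This is a hedged sketch of the general pattern described, not Meta's actual systems; the thresholds are invented for illustration.

```python
# Generic enforcement-pipeline sketch: automated detection plus user reports,
# with borderline cases escalated to human review. Thresholds are assumptions.

def enforce(classifier_score: float, user_reports: int) -> str:
    REMOVE_THRESHOLD = 0.95   # high-confidence automated removal
    REVIEW_THRESHOLD = 0.60   # uncertain: send to a human reviewer
    if classifier_score >= REMOVE_THRESHOLD:
        return "removed automatically"
    if classifier_score >= REVIEW_THRESHOLD or user_reports > 0:
        return "queued for human review"
    return "no action"

print(enforce(classifier_score=0.97, user_reports=0))  # removed automatically
print(enforce(classifier_score=0.70, user_reports=0))  # queued for human review
print(enforce(classifier_score=0.10, user_reports=3))  # queued for human review
```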
    Enforcement of our policies is not perfect, but we're getting better by the month. We report our efforts and results quarterly and publicly in our community standards enforcement reports.
    The second important point, beyond noting that these standards exist, is that we are always working to evolve our policies in response to stakeholder input and current real-world contexts. Our content policy team works with subject matter experts from across Canada and around the world who are dedicated to following trends across a spectrum of issues, including hate speech and organized hate.
    We also regularly team up with other companies, governments and NGOs, because we know that those seeking to abuse digital platforms attempt to do so not solely on our apps. For instance, in 2017, we, along with YouTube, Microsoft and Twitter, launched the Global Internet Forum to Counter Terrorism, or GIFCT. The forum, which is now an independent non-profit, brings together the technology industry, government, civil society and academia to foster collaboration and information sharing to counter terrorist and violent extremist activity online.
    Now I'll turn it over to my colleague, Rachel.

  (1210)  

    Thanks, David.
    In Canada, in 2020, in partnership with Ontario Tech University Centre on Hate, Bias and Extremism, led by Dr. Perry, who you just heard from, we launched the Global Network Against Hate. This five-year program will help advance the centre's work and research on violent extremism based on ethnic, racial, gender and other forms of prejudice, including how it spreads and how to stop it.
    The Global Network Against Hate also facilitates global partnerships and knowledge sharing focused on researching, understanding and preventing hate, bias and extremism online and off. Our partnerships with the academics and experts who study organized hate groups and figures help us stay ahead of trends and activities among extremist groups. Our experts are able to share information with us on how these organizations are adapting to social media and to give us feedback on how we might better tackle them.
    Based on this feedback, in Canada we've designated several Canadian hate organizations and figures in recent years, including Faith Goldy, Kevin Goudreau, the Canadian Nationalist Front, Aryan Strikeforce, Wolves of Odin and Soldiers of Odin. They've all been banned from having any further presence on Facebook and Instagram.
    We also remove affiliate representation for these entities, including linked pages and groups. Recent removals include Alexis Cossette-Trudel, Atalante Québec and Radio-Québec—
    Finish in 10 seconds, please.
    —and QAnon-affiliated pages and organizations.
    To sum up, we've banned 250 white supremacist organizations from our platforms. We're constantly engaged with this work in conjunction with Canadian law enforcement and intelligence agencies.
    Thank you very much.
    Ms. Austin, you have five minutes to make your opening comments. The floor is yours.
     Thank you very much, Chair and members of the committee, for the opportunity to be here, and thank you for your service.
    I'd also like to acknowledge the political staff who are in the room and thank them for their service and support.
    Twitter's purpose is to serve the public conversation. People from around the world come together on Twitter in an open and free exchange of ideas and issues they care about. Twitter is committed to improving the collective health, openness and civility of public conversation on our platform. We do this work with the recognition that freedom of expression and safety are interconnected.
    Twitter approaches issues such as terrorism, violent extremism and violent organizations through a combination of interventions, including the development and enforcement of our rules, product solutions and work with external partners such as government, civil society and academia.
    For my opening remarks, I will focus on our work with partners and, in particular, the Government of Canada.
    Twitter shares the Government of Canada's view that online safety is a shared responsibility. Digital service providers, governments, law enforcement, digital platforms, network service providers, non-government organizations and citizens all play an important role in protecting communities from harmful content online. Twitter is grateful for the Government of Canada's willingness to convene honest and sometimes difficult conversations through venues such as the Christchurch call to action and organizations such as Five Eyes.
    Through our joint work on the Global Internet Forum to Counter Terrorism, commonly known as GIFCT, which my colleague Mr. Tessler referred to in his remarks, we have made real progress across a wide range of issues, including establishing GIFCT as an independent, non-government organization; building out GIFCT's resources and impact; forming the independent advisory committee and working groups; and implementing a step change on how we respond to crisis events around the world.
    In Canada, the Anti-terrorism Act and the Criminal Code of Canada provide measures for the Government of Canada to identify and publicly list known terrorist and violent extremist organizations. Twitter carefully monitors the Government of Canada's list, as well as other lists from governments around the world. The last time that list was updated was on June 25, 2021. We also collaborate and co-operate with law enforcement entities when appropriate and in accordance with legal processes. I also want to acknowledge the regular and timely dialogue I have with officials across government working on domestic issues related to these files.
    In addition to governments, Twitter partners with non-government organizations around the world to help inform our work and to counter online extremist content. For example, we partner closely with Tech Against Terrorism, the global NGO, to share information, knowledge and best practices. We recently participated alongside the Government of Canada in the Global Counterterrorism Forum's workshop to develop a tool kit to focus on countering racially motivated violent extremism.
    Our approach is not stagnant. We aggressively fight online violent extremist activity and have invested heavily in technology and tools to enforce our policies. As the nature of these threats has changed, so has our approach to tackling this behaviour. As an open platform for free expression, Twitter has always sought to strike a balance between the enforcement of our own rules covering prohibited behaviour and the legitimate needs of law enforcement with the ability of people to express their views freely on Twitter, including views that people may disagree with or find offensive.
    I would like to end my testimony with a quote from Canada's Global Affairs Minister, the Honourable Mélanie Joly, on March 2 of this year. She said:
More than ever, social media platforms are powerful tools of information. They play a key role in the health of democracies and global stability. Social media platforms play an important role in the fight against disinformation....
    Twitter agrees.
    I'm happy to answer any questions you might have on policies, policy enforcement, product solutions and the ways in which we're working to protect the safety of the conversation on Twitter.
    Thank you.

  (1215)  

    Thank you very much. You won't have long to wait, because the first round of questions will start right now.
    We'll begin by asking Ms. Dancho to take us through the first six minutes of questioning in this round.
     Thank you, Mr. Chair.
    Thank you to the witnesses for being here. My first question is for Twitter.
    Today in committee, as you may have heard, we talked a lot about right-wing and left-wing opinion being shared online, and about the harmful content coming from extreme elements of both. I'm sure you're also aware that Conservatives sometimes comment that they feel unfairly targeted by social media censorship.
     In that same vein, in your joint statement with Elon Musk, he explained his motivation for wanting to buy Twitter and take it private. He said, “Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are [being] debated”. Elon Musk, as you know, has also said he wants to enhance Twitter with new features, “making the algorithms open source to increase [user] trust, defeating the spam bots, and authenticating all [human users]”.
    Do you feel that Mr. Musk can achieve these goals, and do you feel that will ensure all sides of the political spectrum, so to speak, including Conservatives, are better protected to share their opinions freely on your platform?
     Twitter is certainly living up to its moniker. Twitter seems to be what's happening right now. It's a very exciting place to work. Partners can continue to expect our best-in-class customer service, client solutions and our commitment to safety.
    Yesterday, Twitter was a publicly traded company. Today, Twitter is still a publicly traded company. I cannot speculate on what Elon Musk is proposing or what changes he might make. For now, there will be no changes as a result of the announcement. Any changes will be publicly communicated on Twitter. You can actually follow on Twitter the entire company meeting that we had yesterday with regard to this.
    Thank you very much.
    My next question is for Facebook.
    Thank you, Ms. Curran, for being here today.
    I want to talk a bit about what happened in Australia. As you know, the Australian government brought forward legislation that would force Facebook to pay publishers of news media if Facebook hosted, or users shared, news content. Facebook retaliated by banning news links from being shared by Facebook users in Australia and shutting down Australian news pages hosted on the Facebook platform, in protest of the law the government was looking to bring forward. Ultimately, Facebook cut off the ability to share news publications, from users or otherwise. An agreement was reached shortly afterwards, but it did take this extraordinary step of banning the sharing of news publications.
    We know that the Liberal government brought forward a similar bill to what the Australian government did. Bill C-18 has some similarities. It's called, in short, the online news act. You may be familiar with it. There's also Bill C-11, which aims to control what Canadians see when they open their social media apps such as Facebook, Twitter and the like.
    Ms. Curran, is it reasonable to believe that Facebook could do the same thing in Canada as it did in Australia and prohibit the sharing of news, should the Liberal government move forward with bills such as Bill C-18 or other iterations of it?

  (1220)  

    The short answer is that we're still evaluating that legislation. We didn't know the scope of it until it was tabled very recently.
    We have some pretty serious concerns. Our view is that when publishers place links to their content on our platforms, they receive significant value from doing that. We don't actually control when or how or to what degree they post news material on our platforms.
     I will say this. We're committed to fuelling innovative solutions for the news industry and to the sustainability of the news industry in Canada. That's why we've entered into a number of partnerships to support that kind of work.
    I can't comment definitively on our future action with respect to that bill specifically, since we're still evaluating it.
    Thank you, Ms. Curran.
    You would say—perhaps I'm putting words in your mouth, and maybe you could clarify—that it's not off the table that you would take action similar to what Facebook did in Australia in response to Bill C-18.
     I would say that we're still looking at all of the options based on our evaluation of the legislation. We're still going through that in detail. We were not consulted on the content of it, and so we need to review it in pretty close detail before we decide what our future response will be.
    Thank you very much.
    I'll go back to Twitter.
    Perhaps you could comment on Bill C-18 as well. Do you feel that news publications benefit from being shared on Twitter's platform? Do you have any concerns, similar to those of Facebook's, with it?
    I agree with Rachel that we're still in the early stages of analysis.
     There are a couple of things to say with regard to Bill C-18.
     Twitter, like the news industry, does not make a lot of money on news. In fact, we have nobody in Canada who is selling news content. If you see news advertised on Twitter, it is largely self-serve: the news organizations have chosen to advertise on their own.
    We are also what's called a “closed” platform. When you link to news on Twitter, you have to leave the site. That is not necessarily the case with the other platforms.
    The thing we're most concerned about is scope and transparency. The question is whether or not Twitter is scoped in under that bill; that is very unclear. I understand that there will be quite an extensive GIC process coming after the bill is passed.
    I am more than happy to meet with anybody to discuss the content of Bill C-18.
     Thank you very much, Ms. Austin.
    Ms. Curran, if you would like to add anything further on the government's approach to censoring or regulating the Internet, you can have my last 10 seconds.
    Again, I would just reiterate that we have some fairly significant concerns with Bill C-18.
    We think it should take into account the way the Internet actually works when it comes to linking to news on our platforms. We hope we're able to engage in a good conversation with the government about that.
    Thank you very much.
     Ms. Damoff, I will turn the floor over to you for a six-minute block of questions. Go ahead.
    Thank you so much, Chair.
    I'm going to start with Twitter. We have heard a lot in this study about the radicalization of individuals to ideologically motivated violent extremism through social media. You know, you've said that you're grateful to the Government of Canada for having conversations with platforms like yours, and yet you've also compared our draft proposal to regulate online harms to policies in Iran and North Korea. Do you think it's appropriate for a private company that has a financial stake in the legislation to make comments like that?

  (1225)  

    Your question is with regard to the proposal put forward by the Government of Canada to create the position of a digital safety commissioner who would have the ability to block Internet platforms. We made a submission that has been made public—which is great, and I'm very grateful for that access to information request—stating that this kind of activity, as it was proposed, was very similar to the activity we experience in those countries: China, Iran and North Korea.
    I don't think it's irresponsible to make a comparison when we're asked by the Government of Canada to give our input. We tried our best to make a very thoughtful submission and to make the recommendations that are contained in that submission of how to do things differently. Blocking Internet sites is contrary to Twitter's position on the open Internet.
    Your site uses algorithms to drive traffic to information and other tweets, correct? Why, as we've heard from other witnesses, are those algorithms more likely to drive individuals like me to the far right than to the centre or the far left? We know that such content is more likely to go viral and get more engagement, but your algorithms are not public, and yet you're driving people to the far right, which in turn can lead to radicalization.
    Twitter actually has much less algorithmic content than our competitors. The main indicator that we use with regard to our algorithm is who the user chooses to follow. I would also remind you that you can turn off the algorithm on your home timeline on Twitter. You can choose to see tweets in reverse chronological order, or you can turn the algorithm back on and ask us to surface tweets that we think you would be interested in.
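    In rough terms, that toggle amounts to choosing between two sort orders. The following is a simplified sketch with a single assumed ranking signal (follow affinity); Twitter's real ranking uses many more features.

```python
# The two home-timeline modes described above, sketched with one assumed
# ranking signal (follow affinity); real systems use many more features.

from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    timestamp: float  # seconds since epoch
    text: str

def home_timeline(tweets, affinity, now, algorithmic=True):
    if not algorithmic:
        # "Latest tweets": reverse chronological, no ranking model at all.
        return sorted(tweets, key=lambda t: t.timestamp, reverse=True)
    def score(t):
        hours_old = (now - t.timestamp) / 3600.0
        recency = 1.0 / (1.0 + hours_old)             # decays with age
        return affinity.get(t.author, 0.0) + recency  # who you follow matters
    return sorted(tweets, key=score, reverse=True)

now = 1_000_000.0
tweets = [Tweet("a", now - 7200, "older, from a close follow"),
          Tweet("b", now - 60, "newer, from a distant follow")]
print([t.author for t in home_timeline(tweets, {"a": 1.0, "b": 0.1}, now)])
# Ranked mode puts the close follow first: ['a', 'b']
```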
    Open AI, open machine learning—I think that is the future of this policy discussion, and we're very much looking forward to it.
    Thank you.
    I'm going to turn to Facebook and Meta. Last year your revenue was $117 billion U.S. The year before that, it was $86 billion U.S. The company has been quite successful in increasing its revenue. I understand that's mostly through running advertisements. How do you decide what advertisements I see when I go on your platforms?
    Thank you for that question. It's actually a very good question.
    On this question of algorithms, what you see in your newsfeed, including advertising, depends on a number of what we call “signals”. Those signals include what you have liked before, what kinds of accounts you follow, what you have indicated your particular interests are, and any information that you have given us about your location, who you are and your demographic information. Those all act to prioritize, or not, particular information in your newsfeed. That will determine what you see when you open it up. It's personalized for each user.
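    A heavily simplified sketch of that kind of signal-based ranking appears below. The signal names and weights are invented for illustration and are not Meta's actual model.

```python
# Signal-weighted feed ranking, heavily simplified. The signal names and
# weights are illustrative assumptions, not any platform's real values.

SIGNAL_WEIGHTS = {
    "liked_similar_content": 2.0,
    "follows_author": 1.5,
    "matches_stated_interest": 1.0,
    "location_relevant": 0.5,
}

def rank_feed(candidates: list[dict]) -> list[dict]:
    """candidates: [{'id': 1, 'signals': {'follows_author': True, ...}}, ...]"""
    def score(post: dict) -> float:
        return sum(weight for signal, weight in SIGNAL_WEIGHTS.items()
                   if post["signals"].get(signal))
    return sorted(candidates, key=score, reverse=True)

feed = rank_feed([
    {"id": 1, "signals": {"follows_author": True}},
    {"id": 2, "signals": {"liked_similar_content": True,
                          "location_relevant": True}},
])
print([p["id"] for p in feed])  # [2, 1]  (scores 2.5 vs 1.5)
```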
    My understanding from other witnesses who have come forward, though, is that, for example, if I search for coronavirus or COVID-19, I very quickly end up on conspiracy sites. A lot of those conspiracy sites were also linked with the far right. Do your algorithms work quite quickly to be able to direct me to those sites?
     No, that's untrue. If you search for anything about COVID-19 or coronavirus, part of what you will be directed to is our COVID-19 hub, which contains credible information, including from the Public Health Agency of Canada, on the coronavirus and vaccines. We're really thrilled about the fact, actually, that 90% plus of Facebook users in Canada have indicated that they are supportive of vaccination and wish to find out more information about vaccines.

  (1230)  

    I have only 45 seconds left.
    One of the issues with the convoy that happened in Ottawa was these Facebook groups that started up—and remained up, quite honestly. How did you monitor those during the convoy?
    Yes, that's a really good question.
    We had a 24-7 monitoring effort during the convoy protest, which we set up almost immediately. We were looking at groups, accounts and discussions on the platform to monitor them for any breach of our community standards. We removed material that was in violation of our community standards. Again, that was an around-the-clock effort on our part.
    Thank you very much.
    I would now like to invite Ms. Michaud for her six-minute block.
    The floor is yours, Ms. Michaud.

[Translation]

    Thank you, Mr. Chair.
    I thank the witnesses for joining us.
    I will first go to Ms. Austin, from Twitter.
    A little earlier, we discussed Mr. Musk's purchase of Twitter with the previous panel. He ran two polls in March asking users whether they felt that Twitter's algorithm should be open source and whether freedom of expression was respected. Respondents answered yes to the first question and no to the second. Of course, Mr. Musk accused the platform of applying censorship.
    Do you think Mr. Musk's taking over Twitter may lead to changes in some of the platform's policies and ways of operating? The fact that people could speak out more may unfortunately encourage the spread of disinformation and hate speech.

[English]

    I can't speculate on what Mr. Musk will or will not do until that deal closes, which could take months. I can only comment on our current approach, which will continue.
    With regard to open-source code, Mr. Dorsey, the former CEO of Twitter, tweeted extensively yesterday with regard to open-source code and algorithms and his support of those. Twitter has traditionally supported the open Internet and efforts to open-source code. We have a number of experiments under way with regard to that, but I wouldn't be in a position to speculate any more than that.

[Translation]

    Thank you.
    For the benefit of the committee and the people listening to us, could you tell us in more detail what the impact on the dissemination of harmful content would be if Twitter's algorithm were open source? I am not an expert on algorithms. As many people have probably never heard of open source code, I would like you to tell us what would happen, in concrete terms, if Twitter made that change.

[English]

    Just so that people watching and listening understand: as you said, algorithms are used for some of the most basic services by companies across Canada. I would suggest to the committee that, when you speak about open algorithms, you want to think specifically about what the algorithm is trying to solve for, rather than just saying generally, “please open up your algorithms”.
    We also rely on human curation, not algorithms, to produce Twitter Moments. Let me give you an example. We are partnering with OpenMined, an open-source non-profit organization, to explore machine learning and privacy-enhancing technologies, or PETs, to pioneer new methods of public accountability and access to data in a manner that respects and protects the privacy of people who use our service.

[Translation]

    Thank you very much.
    I will now turn to the Meta Platforms representative.
    In October 2021, a former Facebook data scientist told members of the U.S. Congress that Facebook knows the algorithms its platforms use are causing harm, but it refuses to change them because eliciting negative emotions in people encourages them to spend more time on its sites or to visit them more often, which helps sell advertising. To reduce that harm without hurting Facebook's profits, she suggested that posts be displayed in chronological order instead of allowing the algorithm to anticipate what will engage the reader. She also suggested that an additional step be added before people can share content.
    What do you think of those accusations?
    What would be the consequences of removing the engagement prediction function from a platform like Facebook?

  (1235)  

[English]

     The assertion that we algorithmically prioritize hateful and false content because it increases our profits is just plain wrong. As a company, we have every commercial and moral incentive to try to give the maximum number of people as much of a positive experience as possible on the platform, and that includes advertisers. Advertisers do not want their brands linked to or next to hateful content.
     Our view is that the growth of people or advertisers using our platforms means nothing if our services aren't being used in ways that bring people closer together. That's why we take steps to keep people safe, even if it impacts our bottom line and even if it reduces their time spent on the platform. We made a change to News Feed in 2018, for instance, which significantly reduced the amount of time that people were spending on our platforms.
     Since 2016, we've invested $13 billion in safety and security on Facebook, and we've got 40,000 people working on safety and security alone at the company.

[Translation]

    Thank you.
    I have a bit of time left to put a brief question to you.
    Once you have detected potentially problematic content or activities on your platform, approximately how much time do you need to decide to block or hide that content?

[English]

    That's a great question.
     Normally, it takes a matter of hours. If a more nuanced review is required and if it needs to go to one of our human reviewers, it might take a little bit longer, but we normally have material that's in breach of our community standards down within 24 to 48 hours at a maximum.
    Thank you very much.
    Mr. MacGregor, we'll go over to you, sir, for your six-minute block of questioning.
    Thank you very much, Mr. Chair.
    I'll start my line of questioning with Meta.
     Ms. Damoff, my colleague on the Liberal side, already identified the significant profits your company has made, the majority of which come from advertising revenue. With respect to what you've already said about your algorithms, is it also true that they are designed with a profit motive in mind?
    No. That's incorrect. They're designed to give our users and our community the most value possible, the best possible experience. We want them to see things that are useful to them and that are relevant to them. We want them to enjoy their experience on our platform. Otherwise, they're not going to come back and spend time there.
     That's really our priority. It's to make sure that our users—
    Mr. Alistair MacGregor: I'd like to reclaim my time—
    Ms. Rachel Curran: —are enjoying their time spent on our platform.
     With respect, though, those algorithms, while promoting all of these positive things that you've said, have also had the added benefit of raising an obscene amount of money for your company. I guess what I'm trying to figure out here is how much that profit motive and the incredible sums of money that your company is able to make off these algorithms.... We know, from the research that is out there and from what this committee has already heard, that emotionally provocative content that reinforces what we already believe works better than factual information.
     When we as a committee are looking at the increasing ad revenues your company is making, when we know that emotionally provocative content can trump factual information, and when we see the very obvious role that social media has played in increasing misinformation and disinformation, with very real-world consequences, how can we have assurances that your company is actually taking this seriously when all of these competing priorities are grabbing your attention?
    Yes, I understand that. We do make money from advertising. That's true. However, a lot of that money gets reinvested into securing the safety of our community. As I've talked about, we've invested over $13 billion in this area since 2016 alone.
     The other thing I would say is that I know it's sort of superficially attractive to say that social media is kind of the reason for division or polarization or some of these things we've seen. The latest research actually doesn't indicate that. In many countries where polarization is increasing, that started long before the advent of social media. In other countries with really significant or heavy social media use, polarization is lower and actually decreasing. Research doesn't back up the contention that social media is actually the cause of increased polarization or increasing divisiveness.
    That said, all of our work is to amplify the good that comes from these platforms and try to minimize the bad. Maybe my colleague David can weigh in on this a little bit more—

  (1240)  

     My time is limited.
    A lot of the work we do is to minimize the harmful stuff that you've talked about.
    Thank you for that.
    I'd like to go back to the previous conversation you had with respect to the convoy that made its way to Ottawa and then turned into an illegal occupation. When we had GoFundMe before our committee, they pointed out that any fundraising campaigns relating to misinformation, hate speech, violence and the like are prohibited by their terms of service. Yet their crowdfunding platform allowed this convoy to raise money right up until they shut the campaign down on February 4, despite clear evidence that misinformation had been circulating for the previous two weeks.
    I want to know from Meta's perspective what you were doing during the time that you were monitoring these Facebook groups. How did you change tactics when GoFundMe stopped the fundraiser, when Ottawa declared a local state of emergency on February 6, when the Province of Ontario followed suit on February 11, and when finally the federal government was forced to do so on February 14? How did your company escalate its actions in that regard?
    Throughout the convoy protests in Ottawa, we actually saw a very small amount of funds raised in Canada on our platforms. It was under $10,000. So we weren't a big player in the fundraising issue.
    That said, as soon as the Emergencies Act was declared, we started analysis of what we needed to do to comply with that. We have a payment processor called Stripe that works with us. We worked with our legal counsel and with Stripe to figure out what our obligations would be.
    Thank you, but with respect, you're talking about the fundraising aspect. With the misinformation that was being posted on the various pages hosted by your platform, fundraising aside, how did your company escalate its monitoring and intervention when there was a very clear escalation in not only what the convoy was doing to the city of Ottawa, to its residents and its small businesses and workers, but also in the subsequent municipal, provincial and federal responses and interventions?
    We had a 24-7 monitoring effort and operational group internal to Meta that was going right from the moment the protests started.
    You have 10 seconds, please.
    We had eyes on accounts, pages and material related to the convoy protests around the clock. We were also in contact with the Ottawa police and the RCMP—
    Thank you.
    —and were responding to requests from them.
    Thank you very much.
    Colleagues, we will now move into the second round of questions. I did the same calculation: If I cut everybody's time in half, then we'll be right on time.
    Mr. Lloyd, you have two and a half minutes.
    Thank you, Mr. Chair.
    Mr. Kogan, you haven't been asked any questions, so I'll start with you. Do you think the activities of the social media giants to essentially sterilize their platforms of extremist views—I don't disagree, as that is necessary—have the effect of pushing extremist groups onto less regulated or unregulated platforms? What's the impact of that?
    I'm not an expert on terrorism in particular. However, from the empirical research I've seen, there does not seem to be a clearly established causal link between removing such content from social media platforms and improved public safety. There are a few reasons for that. The first is the one you mentioned: these users might move into darker enclaves of the web that are greater echo chambers. In addition, it is more difficult for law enforcement to monitor some other regions of the Internet.
    Finally, one of the issues that has been raised is the idea that if you kick users off of these platforms inaccurately, it might disenfranchise and marginalize those communities, which could lead to violence as well.
    Thanks for that.
    Following up on that, what do you think can be done as a step before possibly sterilizing this content from these platforms? Do you think there are steps that can be taken? I think we can all agree that we want these people to rejoin society, to end their extremist views and to be contributing members of society. What recommendations would you have to help deradicalize potential extremists?

  (1245)  

    I think a lot of the conversation thus far has been about the algorithms. Unfortunately, I don't think changing the algorithms is a silver bullet. Part of the reason for this is that if these platforms were able to identify terrorist content in the first place, they would take it down. It's very clearly against their policies. The problem is that they have a lot of trouble identifying such content.
    What I would suggest instead is more of a focus on due process rights. But if you are interested in modifying the algorithms, I think a digital service—
     Thanks. I appreciate that.
    With my last 30 seconds, I'll go to Ms. Curran and Ms. Austin.
    Do you believe, as has been claimed, that your platforms are driving the growth of far-right extremism in Canada or across the world? Is there any evidence to back up those claims—yes or no, to each of you?
    I'll answer that first. Thank you, Mr. Lloyd.
    No. There is no evidence to back up those claims as far as Meta Platforms is concerned.
    Thank you very much.
    Now I will move to Mr. Zuberi.
     Sir, you have two and a half minutes in this round. The floor is yours.
    Thank you, Mr. Chair, and thanks to the witnesses for being here.
    I'd like to start off with Twitter.
    A December 2018 report by Amnesty International said that Twitter, as a company, is failing in its responsibility to protect women online. I'd like to know if Twitter has adjusted itself after that report, and if so how?
    Thank you very much for that very important question.
    We are constantly updating and changing our policies and our product solutions. I don't have the information with regard to whether or not we changed them specifically after that December 2018 report, but I would be happy, through the clerk, to answer that question in written form after this meeting.
    That would be really appreciated. Thank you for suggesting that. I would have suggested it had you not.
    Shifting to Facebook, the U.S. Congress Committee on Oversight and Reform, in February 2022, asked for information around Facebook profiles, in particular the role of stolen and fake accounts in promoting the large-scale organizing and fundraising of the trucker blockade.
    At the time, the committee's chairwoman asked for information in writing. Did Facebook respond in writing to the chairwoman of the U.S. Congress committee?
    Mr. Zuberi, we can also follow up with a specific answer to that question.
    I will say that we work very hard to protect our platforms for authentic voices. We know that scammers try to use and abuse hot-button issues, like the convoy blockade and protests. In that instance, we took action against groups and pages related to scammers from various countries around the world, who were trying to use abusive tactics to mislead users off our platform—
    I appreciate that.
    If you did table the letter to the chairperson and to the committee, can you also give that to this committee?
    I'm happy to take that back and check.
    So you can give it to us, then. Correct?
    I'll end with a concluding remark.
     I have heard a lot about the extreme material on social media platforms and how algorithms can't capture, for example, hate speech. I didn't have the chance to ask this question, but it boggles my mind that such content remains up.
    Thank you.
    Ms. Michaud, I now turn to you for a one-and-a-half minute question.

[Translation]

    Thank you, Mr. Chair.
    I will close by addressing Ms. Austin.
    You concluded your opening remarks by saying that social media platforms played an important role in the fight against disinformation, and I agree with you. However, a lot of disinformation exists on those platforms.
    Even we, elected members, are facing those kinds of problems. On the one hand, social media are our best friends because they enable us to reach out to people we represent, but, on the other hand, they are our worst enemies because we get bad comments and hate speech, if I may say so.
    Despite everything, you announced something interesting last Friday, to mark Earth Day. You said that misleading advertising on climate change will be prohibited, to prevent the undermining of efforts to protect the environment. That decision came at a time when the platform's content moderation is being roundly criticized on all sides, by those who accuse it of censorship and by those who fault its lax approach. I personally think this is a wonderful announcement and a good decision.
    Can we expect a similar policy from Twitter to counter hate speech and disinformation?

  (1250)  

[English]

    The policy that we announced with regard to climate change advertising is in the spirit of the policies we've also announced that have banned political advertising on Twitter as well as advertising with regard to COVID‑19. So the company certainly is not afraid of making bold—
     In 10 seconds, please.
    —policy statements.
     With regard to misinformation, maybe I can answer that later in another question. There's a lot there.
    I now invite Mr. MacGregor to take his 90 seconds.
    Go ahead. The floor is yours, sir.
    Thank you, Mr. Chair.
    Ms. Austin, I'll ask you my last question. I know that for both Meta and your platform, it is a struggle to.... You do care about your platform, and you want to ensure that there are legitimate users. What I wanted to know is, can you inform our committee of the trend over the last number of years in unverified accounts—the bots, the ones that are pushing extremist content?
     Is it like a game of Whac-a-Mole? How difficult is it, from your company's perspective, to actually verify that an account is a real person? What are some of the ways in which people are finding unique features in your platform to exploit the loopholes that might exist?
    I don't think it's unfair to say that everybody is certainly trying to game the system. We introduced a new product where you could choose who you wanted to have reply, and many people tweeted out, “Reply to this tweet if you want to earn a million dollars”, and of course didn't allow replies. I mean, gaming the system is really a big deal.
    Our verification policy was on hold for two years. It has recently been reintroduced. We are focusing on six areas, which I'm happy to inform the committee about later. It's not perfect. We get a lot of complaints, which are completely justified. We are doing our best to try to make sure that—
    You have 10 seconds.
    —we understand exactly who is tweeting out before we give them the blue check mark.
    Thank you very much.
    I would invite Mr. Shipley to use his two and a half minutes.
    I'd like to start off with Ms. Curran.
     Ms. Curran, earlier in your comments, you mentioned—and correct me if I'm wrong, please—that you have banned over 250 white supremacist groups from Facebook. Is that statement correct? Did I write that down right? Is it 250?
    It is correct, yes.
    Would some of those groups be the same groups that keep re-forming under different names?
    Maybe I'll turn it to my colleague, Mr. Tessler.
    We have, as Ms. Curran said, invested and continue to invest heavily both in terms of people—we have over 40,000 people—and in terms of technology to make sure that we can protect our platform from this harmful content.
    We know that this is an adversarial space. We know that these 250 hate organizations that we've designated and others are trying to evade our enforcement, so we are constantly trying to improve and adjust in order to keep those organizations off our platform. Once an organization is designated—
    Thank you. I only have a short amount of time. I'm sorry to cut you off.
    My next part of that question was going to be, how many groups in total.... We're talking about 250 white supremacist groups. What are some of the other groups? The number of 250 astounds me—good work, obviously, for removing them—but could you tell me how many groups in total have been banned from Facebook and what are some of the other groups?
    We at Meta try to be as transparent as possible, but as I said, we know this is an adversarial space. We know, as our colleague from Twitter said, that they're trying to game the system and avoid enforcement. We also need to be careful and protect the safety of our employees, so we don't publicize the entire list of our dangerous organizations and individuals.
     What I can say is that we've developed definitions, along with experts externally, for terrorism, for organized hate and for organized criminality and other categories under our dangerous organizations and individuals policy. We have a process to designate groups in those categories, and that's a continuous process that we undertake.
    Yes. I will also say that we designated the Proud Boys in 2018, well ahead of the Government of Canada.

  (1255)  

    Mr. Tessler, you mentioned a couple of times that you have 40,000 people globally who are monitoring these posts. How can anybody possibly be monitoring or instructing 40,000 different people on what is acceptable and what is not?
    In 10 seconds, please.
    We've developed very clear definitions, and those are public in our community standards. We use those clear definitions for terrorism, for organized hate, etc., to be a guide for us as to which groups we designate and remove from our platform.
    Thank you very much.
    Mr. Chiang, I turn to you for the last two and a half minutes of this panel.
    Thank you, Mr. Chair.
    My question is directed towards Meta.
     In your opening remarks, you mentioned that Meta aims to prevent potential offline harm that may be related to content on Facebook. How do you square that aim with the fact that the Ottawa convoy blockade, which called for the removal of the democratically elected Canadian government, was able to organize the occupation through Facebook?
     I'll start this one off.
    Expressing opposition to government mandates is not against our community standards, and so we allow that on our platforms.
    Maybe, Mr. Tessler, you could get into a bit more detail about what we saw with respect to the convoy protests.
    Yes, definitely.
    Let me just be clear: there is no place on our platforms for violence or hate. Our policies are clear. We do not allow content that is violent, incites violence or includes hate speech. When we find that content, either through human review or through our investment in technology, we remove it. We did not see significant involvement by dangerous organizations or individuals in the convoy blockade and protests in Canada.
    Thank you so much for your answer.
    I would like to invite Ms. Damoff to make a comment.
    Thank you, Paul.
    To both platforms: as a woman in politics, I am subjected to some of the most vile, misogynistic comments on all of your platforms—Instagram, Facebook and Twitter. Your reporting tool is not effective. If it's a direct message on Facebook, I can't report it at all. You're not doing a good job of monitoring your social media sites. When I'm tagged by a colleague who is a person of colour, the racist comments are absolutely disgusting.
    My comment was that you need to do better. I've brought this up before with these platforms at the status of women committee. It's not acceptable that people should be subjected to these kinds of comments on these platforms.
    Ms. Damoff, you had the last word.
    Thank you, everybody. Thank you to the witnesses. This is current; it's controversial, and it's important. I thank you all for sharing your insights with us.
    Members of the committee, that's it for today. We finished exactly on time. Thank you for your efficiencies.
    With that, I now adjourn this meeting.