Anthony Housefather (Lib., QC):
I wasn't arguing that; I was actually arguing the contrary. I was saying that, beyond illegal content, social media providers will frequently prohibit certain racist posts that are neither illegal nor hate speech. Their actual rules go beyond mere legality. Isn't that correct?
Steven Guilbeault (Lib., QC):
Bill C-10 is not about content moderation. The CRTC, in its last 50 years of existence, has never done content moderation, and Bill C-10 doesn't give the CRTC the ability to do content moderation.
Rachael Harder (CPC, AB):
Two sections of this bill were significant: proposed subsection 2(2.1), which protects individuals, and proposed section 4.1, which protects their content.
Proposed subsection 2(2.1), on individuals, was kept in, but the section that protects their content, what they post online, was taken out. Therefore, they no longer have that protection. Why?
Steven Guilbeault (Lib., QC):
You might have heard, like I did a few minutes ago, Justice Deputy Minister Drouin answer that question very clearly, specifying that the powers given to the CRTC are very narrow and targeted and don't have to do with content moderation.
Martin Champoux (BQ, QC), 2021-03-29 11:28:
I will interpret that response as a no. So I have to conclude that you don't have any francophone moderators in Quebec. It was a simple question that you could have answered with yes or no, but you are telling me that you do not want to disclose this information. That's all right.
Mr. Chan, you remember the sad events in Christchurch. I was asking you if you control the content that goes out on your platform, because we're discussing what information Facebook allows, and you have some control over what is broadcast on your platform. For 17 minutes, the Christchurch killer broadcast his actions live on the Facebook platform.
Do you think you could have stopped that broadcast at that time?
Kevin Chan, 2021-03-29 11:29:
We were able to detect it and remove it, ultimately, as you point out. Of course we regret the tragedy and we regret that we were not even faster. We have obviously learned a lot from that terrible incident, not just at Facebook. To be fair, we've worked across the sector to build systems and protocols—with governments as well—to ensure that the entire system actually works, not just on Facebook, but across companies, across platforms and with governments. We've built these protocols to move much faster should the regrettable and unfortunate thing happen again.
Kevin Chan, 2021-03-29 12:13:
There are two ways of enforcing our systems, to be honest. One is the automated system, as I think one of your colleagues mentioned, which uses artificial intelligence. Some of the technology was developed in Canada: machine learning to go and find all these things.
In fact, I have some statistics here. In terms of hate speech, in the last quarter of 2020, our automated systems found over 97% of hate speech directed at groups automatically, before any human had seen them or reported them. That's where we are. Now, 97% is not 100%, so we still have a ways to go, but we're getting better every day. That's our posture. That's the way we do it right now.
The other piece, though, is that because speech is important from a contextual standpoint, we have to be careful in some of the grey zones to determine whether speech is in fact an attack on a community and not something else, such as spreading awareness about anti-Asian racism. We need humans as well, so part of that 35,000-person team I referred to consists of people who look at the context and ask: this image, this video or this text was shared, but was it shared to attack Asians, or to raise awareness about discrimination and racism? That context matters in determining whether or not we would enforce and take it down.
It is really a parallel process that meets when we need to get more context. We have automated systems that go and find things automatically. We're constantly improving, but we're at about 97% proactive identification, and we need humans to verify some of the more challenging ones, where the speech is grey and we have to be sure of the context. Then the most complicated cases get escalated to people like me and Rachel, who will look at specific pieces of content emanating from Canada, consult with experts and think through whether or not we're drawing the line in the right place.