Just for the record, Mr. Chair, I am but one of many different global policy directors at Facebook, so I'm not “the” director, just “a” director of the company.
Thank you, Mr. Chair, and members. My name is Kevin Chan, and I am the head of public policy at Facebook Canada. I am pleased to contribute to your study of online hate.
We want Facebook to be a place where people can express themselves freely and safely around the world. With this goal, we have invested heavily in people, technology and partnerships to examine and address the abuse of our platform by bad actors.
We have worked swiftly to remove harmful content and hate figures from our platform in line with our policies, and we also remain committed to working with world leaders, governments and across the technology industry to help counter hate speech and the threat of terrorism.
Everyone at our company remains shocked and deeply saddened by the recent tragedies in New Zealand and Sri Lanka, and our hearts go out to the victims, their families and the communities affected by the horrific terrorist attacks.
With regard to the event in Christchurch, Facebook worked closely with the New Zealand police as they responded to the attack, and we are continuing to support their investigation.
In the immediate aftermath, we removed the original Facebook Live video within minutes of the police's outreach to us and hashed it so that other shares that are visually similar to that video are detected and automatically removed from Facebook and Instagram. Some variants, such as screen recordings, were more difficult to detect, so we also deployed additional detection systems, including the use of audio technology.
This meant that in the first 24 hours we removed about 1.5 million videos of the attack globally. More than 1.2 million of those videos were blocked at upload and were, therefore, prevented from being seen on our services.
As you will be aware, Facebook is a founding member of the Global Internet Forum to Counter Terrorism, or GIFCT, which coordinates regularly on terrorism. We have been in close contact since the attack, sharing more than 800 visually distinct videos related to the attack via our collective database, along with URLs and context on our enforcement approaches. This incident highlights the importance of industry co-operation across the range of terrorists and violent extremists operating online.
At the same time, we have been working to understand how we can prevent such abuse in the future. Last month Facebook signed the Christchurch call to eliminate terrorist and violent extremist content online and has taken immediate action on live streaming.
Specifically, people who have broken certain rules on Facebook, including our dangerous organizations and individuals policy, will be restricted from using Facebook Live. We are also investing $7.5 million in new research partnerships with leading academics to address the type of adversarial media manipulation we saw after Christchurch, when some people modified the video to avoid detection in order to repost it after it had been taken down.
With regard to the tragedy in Sri Lanka, we know that the misuse and abuse of our platform may amplify underlying ethnic and religious tensions and contribute to offline harm in some parts of the world. This is especially true in countries like Sri Lanka, where many people are using the Internet for the first time and social media can be used to spread hate and fuel tension on the ground.
That's why in 2018 we commissioned a human rights impact assessment on the role of our services, which found that we weren't doing enough to help prevent our platform from being used to foment division and incite violence. We've been taking a number of steps, including building a dedicated team to work across the company to ensure we're building products, policies and programs with these situations in mind, and learning the lessons from our experience in Myanmar. We've also been building up our content review teams to ensure we have people with the right language skills and understanding of the cultural context.
We've been investing in technology and programs in places where we have identified heightened content risks and are taking steps to get ahead of them.
In the wake of the atrocities in Sri Lanka, we saw our community come together to help one another. Following the terror attacks and up until the enforcement of the social media ban on April 21, more than a quarter of a million people had used Facebook's Safety Check tool to mark themselves safe and to reassure their friends and loved ones. Following the attacks, there were over 1,000 offers or requests for help on Facebook's Crisis Response tool.
These events are a painful reminder that while we have come a long way there's always more we can and should do. The price of getting this wrong can be the very highest.
I'd now like to provide a general overview of how we approach hate speech online. Facebook's most important responsibility is keeping people safe, both online and off, to help protect what's best about the online world. Ultimately, we want to give people the power to build communities and bring the world closer together through a diversity of expression and experiences on our platform.
Our community standards are clear: Hate can take many forms and none of it is permitted in our global community. In fact, Facebook rejects not just hate speech, but all hateful ideologies, and we believe we've made significant progress. As our policies tighten in one area, people will shift language and approach to try to get around them. For example, people talk about white nationalism to avoid our ban on white supremacy, so now we ban that too.
People who are determined to spread hate will find a way to skirt the rules. One area we have strengthened a great deal is the designation of hate figures and hate organizations, based on a broader range of signals, not just their on-platform activity. Working with external Canadian experts has led to six hate figures and hate organizations—Faith Goldy, Kevin Goudreau, the Canadian Nationalist Front, the Aryan Strikeforce, the Wolves of Odin and the Soldiers of Odin—being banned from having any further presence on Facebook and Instagram. We will also remove any praise, representation or support for them. We have already banned more than 200 white supremacist groups worldwide as a result of our dangerous organizations policy.
In addition to this policy change, we have strengthened our approach to hate speech in the last few years, centred around three Ps. The first is people. We have tripled the number of people at Facebook working on safety and security globally to over 30,000.
The second is products. We continue to invest in cutting-edge technology and our product teams continue to build essential tools like artificial intelligence, smart automation and machine learning that help us remove much of this content, often at the point of upload.
The third is partnerships. In addition to the GIFCT, in Canada we have worked with indigenous organizations to better understand and enforce against hateful slurs on our platform. We have also partnered with Equal Voice to develop resources to keep candidates, in particular women candidates, safe online for the upcoming federal election. We have partnered with the Canada Centre for Community Engagement and Prevention of Violence on a workshop on counter-speech and counter-radicalization.
Underpinning all of this is our commitment to transparency. In April 2018, we published the internal guidelines that our teams use to enforce our community standards. We also published our first-ever community standards enforcement report, describing the amount and types of content we have taken action against, as well as the amount of content we have proactively flagged for review. We publish this report on a semi-annual basis, and in our most recent report, released last month, we were proud to share that we are continuing to make progress on identifying hate speech.
We now proactively detect 65% of the content we remove, up from 24% just over a year ago when we first shared our efforts. In the first quarter of 2019 we took down four million hate speech posts and we continue to invest in technology to expand our abilities to detect this content across different languages and regions.
I would like to conclude with some thoughts on future regulation in this space. New rules for the Internet should preserve what is best about the Internet and the digital economy: fostering innovation, supporting growth for small businesses, and enabling freedom of expression while simultaneously protecting society from broader harms. These are incredibly complex issues to get right and we want to work with governments, academics and civil society around the world to ensure new regulations are effective.
As the number of users on Facebook has grown and as the challenges of balancing freedom of expression and safety have increased, we have come to realize that Facebook should not be making so many of these difficult decisions on its own. This is why we will create an external oversight board to help govern speech on Facebook by the end of this year. This oversight board will be independent of Facebook and will be a final level of appeal for what stays up and what comes down on the platform. Our thinking at this time is that the decisions of this oversight board will be public and binding on Facebook.
Even with the oversight board in place, we know that people use many different online platforms and services to communicate, and we would all be better off if there were clear baseline standards for all platforms. This is why we would like to work with governments to establish rules for what is permissible speech online. We have been working with President Macron of France on exactly this kind of project, and we would welcome the opportunity to engage with more countries going forward.
Thank you for the opportunity to present before you today, and I look forward to answering your questions.
Sure. I'm happy to do that.
It's important to stress at the outset that a lot of these different laws—and there aren't that many of them—where they do exist, such as in Germany, obviously reflect the cultural context and the historical context of certain countries. We always should be mindful of that and not necessarily say, “This is the model that exists; therefore, we should just adopt it holus-bolus in other countries, such as in Canada.”
That said, as I mentioned to Madam Raitt earlier, the challenge for us in Germany has been that the law imposes very strict definitions or requirements on what the platform needs to do. You'll forgive me if I don't have the specific time frame, but content has to be removed within a very short period of time, let's say a day of reporting. That obviously doesn't allow a lot of time for people to be certain that this type of content is in fact illegal or otherwise prohibited, and it doesn't allow a lot of time to prevent false positives.
In the last year or so that this law has been in place, if you look at some scholars who have looked at this a bit, what has happened is that platforms are over-rotating. I think there is generally this concern that if something is flagged and we don't take action on it, it's going to be a liability. There has been this general sense that perhaps platforms should be more aggressive in removing that content.
I don't know if that's a desired public policy outcome. In Canada, we've typically thought about these things as trying to create as much space as possible for freedom of expression, not trying to censor people, while also clearly identifying certain speech that should be prohibited. To get at that, we've always thought that measuring prevalence, how much of that content is out there, and holding companies responsible for reducing that amount and ensuring their processes are in place is a better way of thinking about it than focusing on specific pieces of content with a fixed amount of time to take them down, because then you don't actually get at the fundamental challenges of what free speech is about.
I'll just pick up very briefly on what Mr. Boissonnault said. I've been involved for many years in trying to recruit LGBT candidates to run for public office. One of the most frequent reasons, if not the most frequent reason, cited by those people is the online hate they know they will face. It's a very real thing. I think Mr. Boissonnault's suggestion is very good.
There are two things you said in your testimony that I want to come back to. The first one I want to flag is that when you talk about standards for permissible speech, that concerns me. What I'm looking for is grounds for prohibited speech. I think when you stray over that line and start talking about permissible speech, we're into concerns that I would share about free speech. We're talking about what speech should be prohibited because of its real and negative impacts on the community.
I guess I'm cautioning everybody, including myself, not to fall across that line into saying what is permitted, but instead what is prohibited because of its real-life impacts.
The other one, and I'm going to ask you about it, is your talking about over-rotation and false positives. I guess my question really is: is that a real problem? If it's hate speech or the promotion of violence, there is an urgency for its removal. If it's not, it can be reposted at any time. There's no urgency or necessity to respond to that within 24 hours or whatever your standard is. If you find you were wrong, it can be put back up. If someone complains that they've been unjustly banned, you can deal with that.
There's an urgency with the hate and violence piece that concerns me when you say you're worried about false positives, because you can correct those, but once pieces of hate and the promotion of violence are out there, they're very difficult to find and stop, as you know. You can't really get rid of them once they're out there.
I guess I would just ask you about that concern, because I would err on the side of urgency for removal, and you can fix the other things.