Thank you to all members of the committee for the opportunity to speak with you today.
I don't mind the delay. It's the business of Parliament, and I'm just happy to be a part of it today.
As the chair just mentioned, my name is Colin McKay, and I'm the head of government affairs and public policy for Google in Canada.
We, like you, are deeply troubled by the increase in hate and violence in the world. We are alarmed by acts of terrorism and violent extremism like those in New Zealand and Sri Lanka. We are disturbed by attempts to incite hatred and violence against individuals and groups here in Canada and elsewhere. We take these issues seriously, and we want to be part of the solution.
At Google, we build products for users from all backgrounds who live in nearly 200 countries and territories around the world. It is essential that we earn and maintain their trust, especially in moments of crisis. For many issues, such as privacy, defamation or hate speech, local legislation and legal obligations may vary from country to country. Different jurisdictions have come to different conclusions about how to deal with these complex issues. Striking this balance is never easy.
To stop hate and violent extremist content online, tech companies, governments and broader society need to work together. Terrorism and violent extremism are complex societal problems that require a response, with participation from across society. We need to share knowledge and to learn from each other.
At Google we haven't waited for government intervention or regulation to take action. We've already taken concrete steps to respond to how technology is being used as a tool to spread this content. I want to state clearly that every Google product that hosts user content prohibits incitement to violence and hate speech against individuals or groups based on particular attributes, including race, ethnicity, gender and religion.
When addressing violent extremist content online, our position is clear: We agree that action must be taken. Let me take some time to speak to how we've been working to identify and take down this content.
Our first step is vigorously enforcing our policies. On YouTube, we use a combination of machine learning and human review to act when terrorist and violent extremist content is uploaded. This combination makes effective use of the knowledge and experience of our expert teams, coupled with the scale and speed offered by technology.
In the first quarter of this year, for example, YouTube manually reviewed over one million videos that our systems had flagged for suspected terrorist content. We reviewed every one out of an abundance of caution, even though fewer than 90,000 of them turned out to violate our terrorism policy.
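The two-stage process described above, where an automated classifier flags suspect uploads and every flagged item goes to a human reviewer before any enforcement decision, could be sketched like this. This is purely illustrative; the class names, threshold and scoring are assumptions, not YouTube's actual system.

```python
# Illustrative sketch of machine flagging followed by human review.
# All names and the threshold are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Video:
    video_id: str
    risk_score: float  # output of a hypothetical ML classifier, 0.0-1.0

@dataclass
class ReviewQueue:
    items: list = field(default_factory=list)

    def enqueue(self, video: Video) -> None:
        self.items.append(video)

FLAG_THRESHOLD = 0.5  # assumed value; a real system would tune this carefully

def triage(videos: list[Video], queue: ReviewQueue) -> int:
    """Route machine-flagged videos to human review; return the count flagged."""
    flagged = [v for v in videos if v.risk_score >= FLAG_THRESHOLD]
    for v in flagged:
        queue.enqueue(v)  # a human reviewer makes the final enforcement call
    return len(flagged)

queue = ReviewQueue()
uploads = [Video("a", 0.9), Video("b", 0.2), Video("c", 0.7)]
flagged_count = triage(uploads, queue)
print(flagged_count)  # 2 videos sent to human review
```

The point of the split is scale: machines handle the volume, while humans supply judgment on the borderline cases the classifier surfaces.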
We complement this by working with governments and NGOs on programs that promote counter-speech on our platforms—in the process elevating credible voices to speak out against hate, violence and terrorism.
Any attempt to address these challenges requires international coordination. We were actively involved in the drafting of the recently announced Christchurch Call to Action. We were also one of the founding companies of the Global Internet Forum to Counter Terrorism. This is an industry coalition that identifies digital fingerprints of terrorist content across our services and platforms, shares information, and sponsors research on how best to curb the spread of terrorism online.
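The "digital fingerprints" mentioned above are hashes of known terrorist content that member platforms contribute to a shared database so each can check new uploads against it. A minimal sketch follows; note that real coalitions use perceptual hashes that survive re-encoding, whereas the plain SHA-256 shown here matches exact bytes only and is used purely for illustration.

```python
# Illustrative hash-sharing sketch. SHA-256 is a stand-in for the
# perceptual hashing a real fingerprint-sharing system would use.
import hashlib

shared_hash_db: set[str] = set()  # fingerprints contributed by member platforms

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def report_content(content: bytes) -> None:
    """A platform adds a known item's fingerprint to the shared database."""
    shared_hash_db.add(fingerprint(content))

def is_known(content: bytes) -> bool:
    """Any member platform can check a new upload against the shared set."""
    return fingerprint(content) in shared_hash_db

report_content(b"known extremist clip bytes")
print(is_known(b"known extremist clip bytes"))  # True
print(is_known(b"unrelated upload"))            # False
```

Sharing fingerprints rather than the content itself lets platforms cooperate without redistributing the material they are trying to suppress.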
I've spoken to how we address violent extremist content. We follow similar steps when addressing hateful content on YouTube. We have tough community guidelines that prohibit content that promotes or condones violence against individuals or groups, based on race, ethnic origin, religion, disability, gender, age, nationality, veteran status, sexual orientation or gender identity. This extends to content whose primary purpose is inciting hatred on the basis of these core characteristics. We enforce these guidelines rigorously to keep hateful content off our platforms.
We also ban abusive videos and comments that cross the line into a malicious attack on a user, and we ban violent or graphic content that is primarily intended to be shocking, sensational or disrespectful.
Our actions to address violent and hateful content, as is noted in the Christchurch call I just mentioned, must be consistent with the principles of a free, open and secure Internet, without compromising human rights and fundamental freedoms, including the freedom of expression. We want to encourage the growth of vibrant communities, while identifying and addressing threats to our users and their broader society.
We believe that our guidelines are consistent with these principles, even as they continue to evolve. Recently, we extended our policy dealing with harassment, making content that promotes hoaxes much harder to find.
What does this mean in practice?
From January to March 2019, we removed over 8.2 million videos for violating YouTube's community guidelines. For context, over 500 hours of video are uploaded to YouTube every minute. While 8.2 million is a very big number, it's a small fraction of a very large corpus. Now, 76% of these videos were first flagged by machines rather than humans. Of those detected by machines, 75% had not received a single view.
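Putting the quoted percentages together gives a sense of scale. A quick calculation on the figures above (the derived totals are implied by the testimony, not separately reported):

```python
# Arithmetic on the Q1 2019 figures quoted above.
removed_total = 8_200_000                      # videos removed Jan-Mar 2019
machine_flagged = removed_total * 76 // 100    # first flagged by machines
zero_views = machine_flagged * 75 // 100       # removed before a single view

print(machine_flagged)  # 6232000
print(zero_views)       # 4674000
```

In other words, roughly 4.7 million of the removed videos were caught by automated systems before anyone watched them.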
We have also cracked down on hateful and abusive comments, again by using smart detection technology and human reviewers to flag, review and remove hate speech and other abuse in comments. In the first quarter of 2019, we removed 228 million comments that broke our guidelines, and over 99% of them were first detected by our automated systems.
We also recognize that content can sit in a grey area, where it may be offensive but does not directly violate YouTube's policies against incitement to violence and hate speech. When this occurs, we have built a policy to drastically reduce a video's visibility by making it ineligible for ads, removing its comments and excluding it from our recommendation system.
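The "limited features" treatment described above, where a borderline video stays up but loses ads, comments and recommendation eligibility, could be sketched as follows. The field and function names are hypothetical, not YouTube's actual schema.

```python
# Illustrative sketch of demoting a borderline video without removing it.
from dataclasses import dataclass

@dataclass
class VideoFeatures:
    monetized: bool = True
    comments_enabled: bool = True
    recommendable: bool = True

def apply_borderline_policy(features: VideoFeatures) -> VideoFeatures:
    """Drastically reduce a video's visibility while leaving it online."""
    features.monetized = False         # ineligible for ads
    features.comments_enabled = False  # comments removed
    features.recommendable = False     # excluded from recommendations
    return features

demoted = apply_borderline_policy(VideoFeatures())
print(demoted)
```

The design choice here is a middle path: content that is offensive but not policy-violating is starved of reach and revenue rather than deleted outright.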
Some have questioned the role of YouTube's recommendation system in propagating questionable content. Several months ago we introduced an update to our recommendation systems to begin reducing the visibility of even more borderline content that can misinform users in harmful ways, and we'll be working to roll out this change around the world.
It's vitally important that users of our platforms and services understand both the breadth and the impact of the steps we have taken in this regard.
We have long led the industry in being transparent with our users. YouTube put out the industry's first community guidelines report, and we update it quarterly. Google has long released a transparency report with details on content removals across our products, including content removed upon request from governments or by order from law enforcement.
While our users value our services, they also trust them to work well and provide the most relevant and useful information. Hate speech and violent extremism have no place on Google or on YouTube. We believe that we have developed a responsible approach to address the evolving and complex issues that have seized our collective attention and that are the subject of your committee's ongoing work.
Thank you for this time, and I welcome any questions.
I have a two-part response if you'll be patient with me. I think the first is that if we're speaking specifically about YouTube and a platform where you're able to upload information, there isn't a process of verification/authentication, but you do need to provide some reference points for yourself as an uploader. This can be limited to an email address and some other data points, but it does create a bit of a marker, especially for law enforcement who may want to track back the behaviour of a particular video uploader.
One area we focus on, though, is that we're very conscious that many users rely on anonymity or pseudonymity to be able to take positions, especially in politically sensitive or socially heightened environments, particularly if they're advocates of a particular position using our platforms. The process of verification/authentication in those circumstances is actually detrimental to them.
What I will speak to is that in responding to incidents of hate and online violent extremist content, we have made conscious efforts in Google Search, our Google News product and YouTube, especially in the moments after a crisis, when reliable, factual content about the immediate event isn't yet available, to focus, as our responsibility, on the authenticity and authority of the sources reporting and commenting on the crisis.
Within our systems, particularly in YouTube, you will see that if you're looking at a particular incident, the other material that is recommended to you comes from reliable sources that you likely have had contact with before. We try to send those signals. In addition to making information that's relevant to your query available, we're trying to provide that level of reassurance, if not certainty.
First, speaking specifically to Europe, which has in place a code of conduct around hate speech and very clear reporting obligations, we've arrived at a point where 83.8% of the content that has been flagged for review is assessed in less than 24 hours, and 7.9% in less than 48 hours.
That gives you a bit of an idea of the window within which content that deals with hate can be reviewed appropriately. Obviously, from our point of view, we're trying to improve on that.
The way we work within this broad organization of 10,000 is that we have very clear-cut internal review processes, and established guidelines for those review teams, around what the expectations and obligations are within each jurisdiction: what is explicitly illegal, and what we would consider borderline content that requires some level of intervention on YouTube to restrict access to it.
Internal to the company, like any multinational, we have a team that's dedicated to identifying both the differences and the similarities, and ensuring that we are in compliance.