Thank you, Mr. Chair, for inviting me to appear today to discuss this study on online hate.
On behalf of Twitter, I'd like to acknowledge the hard work of all committee members and witnesses on this issue. I apologize; my opening remarks are long. There's a lot to unpack. We're a 280-character company though, so maybe they aren't. We'll see how it goes.
Twitter's purpose is to serve the public conversation. Twitter is public by default. When individuals create Twitter accounts and begin tweeting, their tweets are immediately viewable and searchable by anyone around the world. People understand the public nature of Twitter. They come to Twitter expecting to see and join public conversations. As many of you have experienced, tweets can be directly quoted in news articles, and screen grabs of tweets can often be shared by users on other platforms. It is this open and real-time conversation that differentiates Twitter from other digital companies. Any attempts to undermine the integrity of our service erode the core tenet of freedom of expression online, the value upon which our company is based.
Twitter respects and complies with Canadian laws. Twitter does not operate in a separate digital legal world, as has been suggested by some individuals and organizations. Existing Canadian legal frameworks apply to digital spaces, including Twitter.
There has been testimony from previous witnesses supporting investments in digital and media literacy. Twitter agrees with this approach and urges legislators around the world to continuously invest in digital and media literacy. Twitter supports groups that educate users, especially youth, about healthy digital citizenship, online safety and digital skills. Some of our Canadian partners include MediaSmarts—and I will note that they just yesterday released a really excellent report on online hate with regard to youth—Get Cyber Safe, Kids Help Phone, We Matter and Jack.org.
While we welcome everyone to the platform to express themselves, the Twitter rules outline specific policies that explain what types of content and behaviour are permitted. We strive to enforce these rules consistently and impartially. Safety and free expression go hand in hand, both online and in the real world. If people don't feel safe to speak, they won't.
We put the people who use our service first in every step we take. All individuals accessing or using Twitter services must adhere to the policies set forth in the Twitter rules. Failure to do so may result in Twitter's taking one or more enforcement actions, such as temporarily limiting your ability to create posts or interact with other Twitter users; requiring you to remove prohibited content, such as removing a tweet, before you can create new posts or interact with other Twitter users; asking you to verify account ownership with a phone number or email address; or permanently suspending your account.
The Twitter rules enforcement section includes information about the enforcement of the following Twitter rules categories: abuse, child sexual exploitation, private information, sensitive media, violent threats, hateful conduct and terrorism.
I do want to quickly touch on terrorism.
Twitter prohibits terrorist content on its service. We are part of the Global Internet Forum to Counter Terrorism, commonly known as GIFCT, and we endorse the Christchurch Call to Action. Removing terrorist content and violent extremist content is an area in which Twitter has made important progress, with 91% of what we remove being proactively detected by our own technology. Our CEO, Jack Dorsey, attended the Christchurch Call meeting in Paris earlier this month to reiterate Twitter's commitment to reduce the risks of live streaming and to remove viral content faster.
Under our hateful conduct policy, you may not “promote violence against or directly attack or threaten” people on the basis of their inclusion in a protected group, such as race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability or serious disease. These align with the protected categories identified in United Nations human rights instruments.
The Twitter rules also prohibit accounts that have the primary purpose of inciting harm towards others on the basis of the categories I mentioned previously. We also prohibit individuals who affiliate with organizations that—whether by their own statements or activities, both on and off the platform—“use or promote violence against civilians to further their causes.”
Content on Twitter is generally flagged for review for possible Twitter rules violations through our help centre found at help.twitter.com/forms or in-app reporting. It can also be flagged by law enforcement agencies and governments. We have a global team that manages enforcement of our rules with 24-7 coverage in every language supported on Twitter. We have also built a dedicated reporting flow exclusively for hateful conduct so it is more easily reported to our review teams.
We are improving. During the last six months of 2018, we took enforcement action on more than 612,000 unique accounts for violations of the Twitter rules categories. We are also taking meaningful and substantial steps to remove the burden on users to report abuse to us.
Earlier this year, we made it a priority to take a proactive approach to abuse in addition to relying on people's reports. Now, by using proprietary technology, 38% of abusive content is surfaced proactively for human review instead of relying on reports from people using Twitter. The same technology we use to track spam, platform manipulations and other violations is helping us flag abusive tweets for our team to review. With our focus on reviewing this type of content, we've also expanded our teams in key areas and locations so that we can work quickly to keep people safe. I would note: We are hiring.
The final subject I want to touch on is law enforcement. Information sharing and collaboration are critical to Twitter's success in preventing abuse that disrupts meaningful conversations on the service. Twitter actively works to maintain strong relationships with Canadian law enforcement agencies. We have positive working relationships with the Canadian Centre for Cyber Security, the RCMP, government organizations and provincial and local police forces.
We have an online portal dedicated to law enforcement agencies that allows them to report illegal content such as hate, emergency requests and requests for information. I have worked with law enforcement agencies as well as civil society organizations to ensure they know how to use this dedicated portal.
Twitter is committed to building on this momentum, consistent with our goal of improving healthy conversations. We do so in a transparent, open manner with due regard to the complexity of this particular issue.
Thank you. I look forward to your questions.
I think you're also focusing specifically on terrorist and violent extremist content, which is a part of GIFCT as well. I believe we achieve a standard of two hours to try to take that content down.
As I stated in my remarks, with proprietary technology, 91% of that content doesn't make it to the platform. We now have a better understanding of where it's being posted and who is posting it. Between the time you hit post and the time it comes through our servers, we can tag it.
It's a very interesting and important question that you ask because we're very proud of the work that we've done with regard to terrorism and violent extremist groups, but when we go to conferences like the Oslo Freedom Forum or RightsCon—I don't know if you know that conference; it happened in Toronto two years ago—we get feedback from groups like Amnesty International, which is here in Canada, and Witness, which is not, that are a little worried that we're too good at it. They want to see insignias on videos. They want to see the conversation that is happening in order to be able to follow it and eventually prosecute it.
We're trying to find this balance between the requests of governments to stop this kind of hate and these terrorist actions from happening, which, again, we've been very successful at, and the requirements of civil society to track them and prosecute.
Thank you very much for your question. It's something, again, that we consider thoughtfully and often.
There are two parts to the answer. The first one is our transparency report, which I would urge you to take a look at. It's published twice a year at transparency.twitter.com. In it we report, by government, what kinds of requests we have for takedowns, be they for information purposes or emergency takedowns. We report on how often we have completed them.
I think—and I could be wrong—in the previous report we complied with 100% of requests from the Canadian government, and it was a small number, like 38.
You can go and check that resource to see how we interact with governments. Of course, we're also governed by international rules and agreements. MLAT, the mutual legal assistance treaty process, would be the law enforcement one that would govern how we work with other governments through Homeland Security in the U.S.
Finally, with regard to law enforcement, we work with them consistently. I was at RCMP headquarters yesterday to discuss whether they are getting the information they need from us, whether our reporting is helpful and sufficient in cases like child sexual exploitation, whether they have enough of what they need to prosecute, and whether they understand what our limitations are and how we go through and assess reports.
I think, generally speaking, we believe that only about 1% of accounts on Twitter make up the majority of the accounts reported for abuse.
Focusing specifically on abuse, again, those statistics are published in our transparency report.
We action unique accounts under the six rules categories I mentioned, including abuse, child sexual exploitation and hateful conduct. Of the accounts we actioned from July to December 2018, 250,000 were suspended under our hateful conduct policies, 235,000 under abuse, 56,000 under violent threats, 30,000 under sensitive media, 29,000 under child sexual exploitation and about 8,000 under private information.
It's very difficult to compare those numbers year to year, country to country, because context matters. There could have been an uprising in a country or something could have happened politically to spur on some sort of different conversation or different actions. But of the 500 million tweets per day, those are the numbers of accounts we actioned.
Context matters with regard to actions that we take. We have a number of signals that we measure and take a look at before we take action. Let me break that into two pieces: The first would be the reporting and the second would be review.
We publicly acknowledge that there's too much burden on victims to report to Twitter, and that is why we are trying to do a better job. We now have a dedicated hateful conduct workflow so that we can raise those issues for review faster. As I mentioned, we're working with proprietary technology. We realize we have to do a better job with how abuse is reported.
In reviewing those accounts or those tweets flagged for action, it's extremely important for us to try to get it right. There are a number of behavioural signals we get, so if I tweet something and you mute me, you block me and you report me, clearly something's up in the quality of the content. Further, we take a look at our rules and we take a look at the laws where that tweet came from.
The other part of context is that there are very different conversations happening on Twitter. Often, the example we use is gaming. It is perfectly acceptable in the gaming community to say something like, “I'm coming to kill you tonight, be prepared”, so we would like to make sure we have that context as well.
These are the things we consider when we make that review.
Ms. Austin, thank you for coming here before the committee.
Facebook banned a series of white nationalist groups and individuals: Faith Goldy, the Soldiers of Odin, Kevin Goudreau and the Canadian Nationalist Front, among others. Twitter chose to ban only the Canadian Nationalist Front from its own platform. Faith Goldy used Twitter to direct her followers to her website after her ban on Facebook.
When Twitter banned the Canadian Nationalist Front, you said you did so because the account violated the rules barring violent extremist groups, but there was no further elaboration, and a multitude of other white nationalist groups still have a presence on the platform. Facebook has said that the removal of these groups stemmed from its policy of not allowing anyone engaged in offline organized hate to have a presence on the platform.
When Facebook took that step, why did Twitter only ban the Canadian Nationalist Front? Why is your threshold different from Facebook as to what can be allowed on your platform?
There are two things. First of all, there are no content reviewers in Canada, but we do have a really excellent Moments team here. I don't know if you use the search function, but when you look through the trends we highlight, that Moments team in Canada looks at content from a Canadian perspective.
Canadian voices are well represented across the country. There's bias training, and a large number of training sessions occur.
I have personal experience with policy changes where we've been asked to provide Canadian context, which is something I have done. We also have an internal appeal function in order to provide more context.
With regard to First Nations, I don't know if you have heard from indigenous groups at this committee, but if you have not, I strongly encourage you to do so. They experience hate differently from us, not just in terms of the hate itself but also the language used. This is something they have brought to my attention, for which I am very grateful. A word like “savage” is one we would use very differently than they would, and this is something we are looking at going forward. Dr. Jeffrey Ansloos from the University of Toronto is a really excellent resource with regard to this.
These are open conversations that we're having. We look forward to hearing more from the indigenous community. They are doing a great job, for me at least, of highlighting the different kinds of issues they face, as they often run toward conversations to try to correct the speech or tell their stories.
Does it matter if it's complex? We have to respect it, so that would be my point.
Are we taking into account what we're doing in other jurisdictions? Absolutely. I'll give you the GDPR as an example; that's a privacy example. When the GDPR was implemented in Europe, we took those best practices and applied them globally across our company.
We are absolutely open to having discussions on best practices from around the world. Some things work very well, and some laws work very well in other countries.
If you're asking me how to change Canadian law, or that kind of thing, I wouldn't comment on that. If you're asking me about regulation, I would say that you should consider clear definitions for us to be measured against. Develop standards for transparency; we really value transparency with our users. The focus should be on systematic, recurring problems, not just individual one-off issues. Co-operation should be fostered.
I want to talk more about anonymity. I take the point you made to Mr. Barrett about it being sometimes quite necessary. I would also like to suggest to you that perhaps there could be a user option on the account to authenticate or not, and then there would be a flag that shows up on a tweet that says whether this person is authenticated or not. That might help because I do think anonymity is a factor in bad behaviour online.
I would also like to include in such a mechanism the prospect of pseudonymity. I think that where people are operating in situations where they don't want to identify themselves to authorities, including authentication authorities such as VeriSign, there's room for a relative authentication mechanism such as the webs of trust that PGP offers, so that groups can identify themselves among themselves and use whatever names they like.
That would be a very great thing, I think, if that could be arranged.
I'm not sure if the second part of that fits the Twitter paradigm; I'm thinking more in terms of Facebook. It would be nice if interactions could be filtered based on whether or not people are authenticated, relative perhaps to my web of trust, or to a whitelist of authentication authorities, or to not being on a blacklist of authorities. Is that something that Twitter might be able to contemplate?
Let me take the second half of that first. You can mute words and accounts on Twitter, so if you were interested in the Raptors game tonight and couldn't watch it, you could mute #WeTheNorth and #TorontoRaptors. You can filter that out of your conversations right now if you want to, in case you missed the game.
We are working on that, but right now, if you look on your account, you'll see “mute words” or “mute accounts”. You can do that to help filter through what you are and are not seeing. It's the same with the sensitive media setting.
With regard to information integrity and trusting who you're interacting with, this is actually something I testified on at the Senate last year. It is something that we are studying very carefully. Our verification process is on hold because we were unhappy with how it was being applied. We couldn't find a consistent policy that would communicate clearly to our users what verification was about. It's something we are working on.
We're also making product changes daily. We have something now called the profile peek. As you're scrolling through, you can just hover over the image of the person you're interacting with and their profile will pop up. Further, we now also tell you where they're tweeting from and when, so that you can understand whether they're tweeting from their iPhone, from Hootsuite or from some other third-party application.
I take your question and your comment seriously. It is something that we are working on to improve the user experience.
Thanks. I have a question that I didn't get to ask in my time.
Touching a bit on authentication, which has been raised a couple of times, I'm obviously very concerned about the amount of disinformation on social media platforms that is propagated easily. Unfortunately, too many people aren't using their critical thinking skills when they see this disinformation. It allows these sorts of stories to spread like wildfire. People believe them and then they comment, and it almost becomes a mob mentality online.
It's important to respect journalistic standards for news items that are shared on social media platforms. I know that you have the blue check mark by which certain organizations can be authenticated, to say that this is a legitimate entity putting forward this information.
Is there another way to identify to individuals using your platform that this is a trustworthy news source that uses journalistic standards? This isn't necessarily about getting into the content itself, but about saying that this is something your users could rely upon.
I used to work for Preston Manning and he said to me not too long ago, “Michele, fake news is not new. You just have to go back to Genesis, and what the serpent told Eve was 100% fake news.” So it's not a new phenomenon. It's something, though, that we are very, very aware of and actioning. In 2018, we identified and challenged more than 425 million accounts that were suspected of engaging in platform manipulation.
With regard to context, we do context differently than our competitors and other digital companies. As I mentioned, the Moments team, which has a really wonderful foothold in Canada, is now adding more information to the stories it curates. We curate stories using individual editors, if you will. We put tweets together, and now we're adding more context.
I'll take the example of the Alberta Speech from the Throne. In that Moment, we explained what a Speech from the Throne was, how it worked and what happened.
Certainly, fake news is something we're keeping a very close eye on. It's particularly concerning during elections. We have an election integrity team dedicated to it. It's a cross-functional team during elections. The Alberta election—knock on wood—went very smoothly. We had that team completely involved in that election.
However, it's something we're concerned about.