Thank you, Mr. Erskine-Smith.
Are there any further comments on the motion?
Frankly, to answer your question, being the chair of this committee on both levels, the international and our ethics committee, it's abhorrent that he's not here today and that Ms. Sandberg is not here today. It was very clearly communicated to them that they were to appear today before us. A summons was issued, which is already an unusual act for a committee. I think it's only fitting that there be an ongoing summons. As soon as either Mr. Zuckerberg or Ms. Sandberg sets foot in our country, they will be served and expected to appear before our committee. If they choose not to, then the next step will be to hold them in contempt.
I think the words are strong, Mr. Angus, and I applaud you for your motion.
If there is not any further discussion on the motion, we'll go to the vote.
(Motion agreed to)
The Chair: Thank you, Mr. Angus.
Next, we'll go to the platforms. We'll start with Facebook, go to Google, and then....
I'll mention the names. With Facebook Inc., we have Kevin Chan, Global Policy Director for Canada, and Neil Potts, Global Policy Director. With Google LLC, we have Derek Slater, Global Director of Information Policy; and with Google Canada, Colin McKay, Head, Government Affairs and Public Policy. From Twitter Inc., we have Carlos Monje, Director of Public Policy, and Michele Austin, Head, Government and Public Policy, Twitter Canada.
I would like to say that it wasn't just the CEOs of Facebook who were invited today. The CEOs of Google were invited. The CEO of Twitter was invited. We are more than disappointed that they as well chose not to show up.
We'll start off with Mr. Chan, for seven minutes.
Thank you very much, Mr. Chair.
My name is Kevin Chan, and I am here today with my colleague Neil Potts. We are both global policy directors at Facebook.
The Internet has transformed how billions of people live, work and connect with each other. Companies such as Facebook have immense responsibilities to keep people safe on their services. Every day we are tasked with the challenge of making decisions about what speech is harmful, what constitutes political advertising and how to prevent sophisticated cyber-attacks. This is vital work to keeping our community safe, and we recognize this work is not something that companies like ours should do alone.
New rules for the Internet should preserve what is best about the Internet and the digital economy—fostering innovation, supporting growth for small businesses, and enabling freedom of expression—while simultaneously protecting society from broader harms. These are incredibly complex issues to get right, and we want to work with governments, academics and civil society around the world to ensure new regulations are effective.
We are pleased to share with you today some of our emerging thinking in four areas of possible regulatory action: harmful content, privacy, data portability and election integrity.
With that, I will turn it over to my colleague Neil, who would love to engage with you about harmful content.
Chair, members of the committee, thank you for the opportunity to be here today.
I'm Neil Potts. I'm a Director with oversight of the development and implementation of Facebook's community standards. Those are our guidelines for what types of content are allowed on our platform.
Before I continue, though, I'd just like to point out that Kevin and I are global directors, subject matter area experts, ready to engage with you on these issues. Mark and Sheryl, our CEO and COO, are committed to working with government in a responsible manner. They feel that we have their mandate to be here today before you to engage on these topics, and we are happy to do so.
As you know, Facebook's mission is to give people the power to build community and to bring the world closer together. More than two billion people come to our platform every month to connect with family and friends, to find out what's going on in the world, to build their businesses and to help those in need.
As we give people a voice, we want to make sure that they're not using that voice to hurt others. Facebook embraces the responsibility of making sure that the tools we build are used for good and that we keep people safe. We take those responsibilities very seriously.
Early this month, Facebook signed the Christchurch Call to Eliminate Terrorist and Violent Extremist Content Online, and we have taken immediate action on live streaming. Specifically, people who have broken certain rules on Facebook, which include our “dangerous organizations and individuals” policy, will be restricted from using Facebook Live.
We are also investing $7.5 million in new research partnerships with leading academics to address the adversarial media manipulation that we saw after Christchurch—for example, when some people modified the video to avoid detection in order to repost it after it had already been taken down.
As the number of users on Facebook has grown, and as the challenge of balancing freedom of expression and safety has increased, we have come to realize that Facebook should not be making so many of these difficult decisions alone. That's why we will create an external oversight board to help govern speech on Facebook by the end of 2019. The oversight board will be independent from Facebook, and it will be a final level of appeal for what stays up and what comes down on our platform.
Even with the oversight board in place, we know that people will use many different online platforms and services to communicate, and we'd all be better off if there were clear baseline standards for all platforms. This is why we would like to work with governments to establish clear frameworks related to harmful online content.
We have been working with President Macron of France on exactly this kind of project, and we welcome the opportunity to engage with more countries going forward.
In terms of privacy, we very clearly understand our important responsibility as custodians of people's data and the need for us to do better. That is why, since 2014, we have taken significant measures to drastically reduce the amount of data that third party applications can access on Facebook, and why we're putting together a much bigger and more muscular privacy function within the company. We've also made significant advancements to give people more transparency and control over their data.
We recognize that, while we're doing much more on privacy, we're all better off when there are overarching frameworks to govern the collection and use of data. Such frameworks should protect your right to choose how your information is used, while enabling innovation. They should hold companies such as Facebook accountable by imposing penalties when we make mistakes and should clarify new areas of inquiry, including when data can be used for the public good and how this should be applied to artificial intelligence.
There are already some good models to emulate, including the European Union's General Data Protection Regulation and Canada's Personal Information Protection and Electronic Documents Act. Achieving some degree of harmonization around the world would be desirable and would facilitate economic growth.
We also believe that the principle of data portability is hugely important for consumer choice and for ensuring a dynamic and competitive marketplace for digital services. People should be able to take the data they have put on one service and move it to another service. The question becomes how data portability can be done in a way that is secure and privacy-protective. Data portability can only be meaningful if there are common standards in place, which is why we support a standard data transfer format and the open source data transfer project.
Finally, Facebook is doing its utmost to protect elections on our platform around the world by investing significantly in people, technology and partnerships. We have tripled the number of people working on security matters worldwide from 10,000 to 30,000 people. We have developed cutting-edge AI technology that allows us to detect and remove fake accounts en masse.
Of course, we cannot achieve success working only on our own, so we've partnered with a wide range of organizations. In Canada we are proud to be working with Agence France-Presse on third party fact checking, MediaSmarts on digital literacy and Equal Voice to keep candidates, in particular women candidates, safe online.
Facebook is a strong supporter of regulations promoting the transparency of online political advertising. We think it is important that citizens should be able to see all the political ads that are running online, especially those that are not targeted at them. That is why we support and will comply with Canada's Elections Modernization Act, which this Parliament passed, and why we will be engaging in the weeks ahead with Canadian political advertisers, including the federal political parties represented here today, on important changes for political advertising that will come to the platform by the end of June.
Finally, Mr. Chair, if I may, as you will know, Facebook is part of the Canada declaration on electoral integrity online, which sets out 12 commitments that the Government of Canada and certain online platforms agree to undertake together in the lead-up to the October federal election. This is a strong expression of the degree to which we are taking our responsibilities seriously in Canada, and we look forward to working in lockstep with officials to guard against foreign interference.
Thank you for the opportunity.
We look forward to taking your questions.
Thank you for the opportunity to appear before you today.
My name is Derek Slater, and at Google I help shape the company's approach to information policy and content regulation. I'm joined here by my colleague Colin McKay, who's the head of public policy for Google in Canada.
We appreciate your leadership and welcome the opportunity to discuss Google's approach to addressing our many shared issues.
For nearly two decades, we have built tools that help users access, create and share information like never before, giving them more choice, opportunity and exposure to a diversity of resources and opinions. We know, though, that the very platforms that have enabled these societal benefits may also be abused, and this abuse ranges from spam to violent extremism and beyond. The scrutiny of lawmakers and our users informs and improves our products as well as the policies that govern them.
We have not waited for government regulation to address today's challenges. Addressing illegal and problematic content online is a shared responsibility that requires collaboration across government, civil society and industry, and we are doing and will continue to do our part.
I will highlight a few of the things we're doing today. On YouTube, we use a combination of automated and human review to identify and remove violative content. Over time we have improved, removing more of this content faster and before it's even viewed. Between January and March 2019, YouTube removed nearly 8.3 million videos for violating its community guidelines, and 76% of these were first flagged by machines rather than people. Of those detected by machines, over 75% had never received a single view.
When it comes to combatting disinformation, we have invested in our ranking systems to make quality count; in developing policies, threat monitoring and enforcement mechanisms to tackle malicious behaviours; and in features that provide users with more context, such as fact check or information panels on Google Search and YouTube.
Relatedly, in the context of election integrity, we've been building products for over a decade that provide timely and authoritative information about elections around the world. In addition, we have devoted significant resources to help campaigns, candidates and election officials improve their cybersecurity posture in light of existing and emerging threats. Our Protect Your Election website offers free resources like advanced protection, which provides Google's strongest account security, and Project Shield, a free service designed to mitigate the risk of distributed denial of service attacks that inundate sites with traffic in an effort to shut them down.
While industry needs to do its part, policy-makers, of course, have a fundamental role to play in ensuring everyone reaps the personal and economic benefits of modern technologies while addressing social costs and respecting fundamental rights. The governments and legislatures of the nearly 200 countries and territories in which we operate have come to different conclusions about how to deal with issues such as data protection, defamation and hate speech. Today's legal and regulatory frameworks are the product of deliberative processes, and as technology and society's expectations evolve, we need to stay attuned to how best to improve those rules.
In some cases, laws do need updates, for instance, in the case of data protection and law enforcement access to data. In other cases, new collaboration among industry, government and civil society may lead to complementary institutions and tools. The recent Christchurch Call to Action on violent extremism is just one example of this sort of pragmatic, effective collaboration.
Similarly, we have worked with the European Union on its hate speech code of conduct, which includes an audit process to monitor how platforms are meeting their commitments, and on the recent EU Code of Practice on Disinformation. We agreed to help researchers study this topic and to provide a regular audit of our next steps in this fight.
New approaches like these need to recognize relevant differences between services of different purpose and function. Oversight of content policies should naturally focus on content sharing platforms. Social media, video sharing sites and other services that have the principal purpose of helping people to create content and share it with a broad audience should be distinguished from other types of services like search, enterprise services, file storage and email, which require different sets of rules.
With that in mind, we want to highlight today four key elements to consider as part of evolving oversight and discussion around content sharing platforms.
First, set clear definitions.
While platforms have a responsibility to set clear rules of the road for what is or is not permissible, so too do governments have a responsibility to set out the rules around what they consider to be unlawful speech. Restrictions should be necessary and proportionate, based on clear definitions and evidence-based risks, and developed in consultation with relevant stakeholders. These clear definitions, combined with clear notices about specific pieces of content, are essential for platforms to take action.
Second, develop standards for transparency and best practice.
Transparency is the basis for an informed discussion and helps build effective practices across the industry. Governments should take a flexible approach that fosters research and supports responsible innovation. Overly restrictive requirements like one-size-fits-all removal times, mandated use of specific technologies or disproportionate penalties will ultimately reduce the public's access to legitimate information.
Third, focus on systemic recurring failures rather than one-offs.
Identifying and responding to problematic content is similar, in a way, to having information security. There will always be bad actors and bugs and mistakes. Improvement depends on collaboration across many players using data-driven approaches to understand whether particular cases are outliers or representative of a more significant recurring systemic problem.
Fourth and finally, foster international co-operation.
As today's meeting demonstrates, these concerns and issues are global. Countries should share best practices with one another and avoid conflicting approaches that impose undue compliance burdens and create confusion for customers. That said, individual countries will make different choices about permissible speech based on their legal traditions, history and values consistent with international human rights obligations. Content that is unlawful in one country may be lawful in another.
These principles are meant to contribute to a conversation today about how legislators and governments address the issues we are likely to discuss, including hate speech, disinformation and election integrity.
In closing, I will say that the Internet poses challenges to the traditional institutions that help society organize, curate and share information. For our part, we are committed to minimizing that content that detracts from the meaningful things our platforms have to offer. We look forward to working with the members of this committee and governments around the world to address these challenges as we continue to provide services that promote and deliver trusted and useful information.
Chairman Zimmer, Chairman Collins and members of the committee, my name is Carlos Monje. I'm Director of Public Policy for Twitter. I'm joined by Michele Austin, who's our Head of Public Policy for Canada.
On behalf of Twitter, I would like to acknowledge the hard work of all the committee members on the issues before you. We appreciate your dedication and willingness to work with us.
Twitter's purpose is to serve the public conversation. Any attempt to undermine the integrity of our service erodes the core tenets of freedom of expression online. This is the value upon which our company is based.
The issues before this committee are ones that we care about deeply as individuals. We want people to feel safe on Twitter and to understand our approach to health and safety of the service. There will always be more to do, but we've made meaningful progress.
I would like to briefly touch upon our approach to privacy and disinformation and I look forward to your questions.
Twitter strives to protect the privacy of the people who use our service. We believe that privacy is a fundamental human right. Twitter is public by default. This differentiates our service from other Internet sites. When an individual creates a Twitter account and begins tweeting, their tweets are immediately viewable and searchable by anyone around the world. People understand the default public nature of Twitter and they come to Twitter expecting to see and join in a public conversation. They alone control the content that they share on Twitter, including how personal or private that content might be.
We believe that when people trust us with their data, we should be transparent about how we provide meaningful control over what data is being collected, how it is used and when it is shared. These settings are easily accessible and built with user-friendliness front of mind. Our most significant personalization and data settings are located on a single page.
Twitter also makes available the “your Twitter data” toolset. Your Twitter data provides individuals with insight on the types of data stored by us, such as username, email address, phone numbers associated with the account, account creation details and information about the inferences we may have drawn. From this toolset, people can do things like edit their inferred interests, download their information and understand what we have.
Twitter is also working proactively to address spam, malicious automation, disinformation and platform manipulation by improving policies and expanding enforcement measures, providing more context for users, strengthening partnerships with governments and experts, and providing greater transparency. All of this is designed to foster the health of the service and protect the people who use Twitter.
We continue to promote the health of the public conversation by countering all forms of platform manipulation. We define platform manipulation as using Twitter to disrupt the conversation by engaging in bulk, aggressive or deceptive activity. We've made significant progress. In fact, in 2018, we identified and challenged more than 425 million accounts suspected of engaging in platform manipulation. Of these, approximately 75% were ultimately suspended. We are increasingly using automated and proactive detection methods to find abuse and manipulation on our service before they impact anyone's experience. More than half the accounts we suspend are removed within one week of registration—many within hours.
We will continue to improve our ability to fight manipulative content before it affects the experience of people who use Twitter. Twitter cares greatly about disinformation in all contexts, but improving the health of the conversation around elections is of utmost importance. A key piece of our election strategy is expanding partnerships with civil society to increase our ability to understand, identify and stop disinformation efforts.
Here in Canada, we're working with Elections Canada, the Commissioner of Canada Elections, the Canadian Centre for Cyber Security, the Privy Council Office, democratic institutions and civil society partners such as the Samara Centre for Democracy and The Democracy Project.
In addition to our efforts to safeguard the service, we believe that transparency is a proven and powerful tool in the fight against misinformation. We have taken a number of actions to disrupt foreign operations and limit voter suppression and have significantly increased transparency around these actions. We released to the public and to researchers the world's largest archive of information operations. We've provided data and information on more than 9,600 accounts, including accounts originating in Russia, Iran and Venezuela, totalling more than 25 million tweets.
It is our fundamental belief that these accounts and their content should be available and searchable, so that members of the public, governments and researchers can investigate, learn and build media literacy capabilities for the future. They also help us be better.
I want to highlight one specific example of our efforts to combat disinformation here in Canada.
Earlier this spring we launched a new tool to direct individuals to credible public health resources when they searched Twitter for key words associated with vaccines. Here we partnered with the Public Health Agency of Canada. This new investment builds on our existing work to guard against the artificial amplification of non-credible content about the safety and effectiveness of vaccines. Moreover, we already ensure that advertising content does not contain misleading claims about the cure, treatment, diagnosis or prevention of any disease, including vaccines.
In closing, Twitter will continue to work on developing new ways to maintain our commitment to privacy, to fight disinformation on our service and to remain accountable and transparent to people across the globe. We have made strong and consistent progress, but our work will never be done.
Once again, thank you for the opportunity to be here. We look forward to your questions.
Over the years, the digital platforms you represent have developed very powerful, even overly powerful, tools. You are in the midst of a frantic race for performance. However, that race does not necessarily serve the well-being of humanity, but rather the personal interests of your companies.
Let me make an analogy. You have designed cars that can travel up to 250 kilometres an hour, but you rent them to drivers who travel at that speed in school zones. You have developed tools that have become dangerous, that have become weapons.
As a legislator, I cannot accept that you reject your responsibility in this regard out of hand. These tools belong to you; you equipped them with their functions, but you do not necessarily choose the users. You thus rent your tools commercially to people who misuse them.
In the election we'll have in Canada in a few months, will you have the technical ability to immediately stop any fake news, any form of hate advertising or any form of advertising that would undermine our democracy? Will you be able to act very quickly? At the very least, can you stop all advertising during elections in Canada and other countries, if you cannot guarantee us absolute control over the ads that can be placed on your platforms?
We'll start with the representatives from Facebook, then I'd like to hear from Google and Twitter.
Yes, thank you for that. Sorry, I only have a few minutes.
I appreciate that work. I'm a big fan of Facebook.
Mr. Kevin Chan: Thank you, sir.
Mr. Charlie Angus: I've spoken greatly about the powerful tools it has in the indigenous communities I represent.
My concern is this idea of opt-in, opt-out that Facebook has when it comes to national law. First of all, you ignored a summons by Parliament because Mr. Zuckerberg may be busy. It may be his day off. I don't know.
You were recently found guilty by our regulator in the Cambridge Analytica breach. Our regulator, Mr. Therrien, said:
Canadians using Facebook are at high risk that their personal information will be used in ways they may not be aware of, for purposes that they did not agree to and which may be contrary to their interests and expectations.
This could result in real harms, including political...surveillance.
What was striking was that Facebook didn't concede that we have jurisdiction over our own citizens. If you're saying you're willing to work with parliamentarians, I don't get this opt-in when it works for Facebook and opt-out when....
Can you give me an example of any company saying that they just don't recognize whether or not we have jurisdiction over our citizens?
I have limited time, so I would appreciate it if you just focus on my questions and give the answers directly.
We've heard a lot about what you wish to do, who you are engaged with, who you wish to see and how you're going to work on your policies, but let's just see what actually appears and continues to appear on your platforms.
To do that and to save some time, I have put together a little handout that summarizes several cases, which I have no doubt you're familiar with. Just thumb through them quickly. These are all cases that were sensational. They all went viral quickly. They were probably amplified by trolls and bots—fake accounts. They incite fear, they cause disaffection and tensions, and they prey on divisive social fault lines: race, religion, immigration.
One key fact is that they're all false information as well, and all resulted in real world harm: physical injuries, deaths, riots, accentuating divisions and fault lines between religions and races, causing fear.
Just go to the very last page of the handout and look at Sri Lanka, April 2019. The leader of the Easter terrorist bombings in Sri Lanka had posted videos that were on your platforms—Facebook and YouTube—for at least six months prior to the bombing itself. In the videos, he says, “Non-Muslims and people who don't accept Muslims can be killed, along with women and children.” Separately, he says, “We can kill women and children with bombs. It is right.”
This is clear hate speech, is it not?
Thank you, Mr. Chairman. We're delighted to be here this afternoon.
I'll start by listing out my questions for you, and the social media companies can answer afterwards. I have about three questions here. The first is in relation to data protection.
It's very clear that you're all scrambling to figure out how to make privacy rules clear and how to protect users' data. The inception of GDPR has been a sea change in European data protection. The Irish data protection commissioner now has the job of effectively regulating Europe, given the number of social media companies who have their headquarters based in Ireland.
In the 11 months since GDPR came into force, the commissioner has received almost 6,000 complaints. She has said that her concentration on Facebook is because she didn't think that there would be so many significant data breaches by one company, and at one point, there were breaches notified to her under GDPR every fortnight, so she opened a consolidated investigation to look at that. I want to ask Facebook if you can comment on her remarks and why you're having such difficulty protecting users' data.
Also, for this next question, I might ask Google and Facebook to comment. I and my colleague James Lawless and deputy Eamon Ryan met earlier this year with Mark Zuckerberg in Ireland, and he said that he would like to see GDPR rolled out globally. Some of Facebook's biggest markets are in the developing world, such as in Asia and Africa, and out of the top 10 countries, there are only two in the developed world, the United States and the United Kingdom. Some experts are saying that a one-size-fits-all approach won't work with GDPR, because some regions have different interpretations of the importance of data privacy.
I would like to get Google's viewpoint on that. What is your view in relation to the rollout of GDPR globally? How would that work? Should it be in place globally? I would also like to hear your views on the concerns around the different interpretations of data privacy.
Finally, due to the work of our communications committee in the Oireachtas—the Irish parliament—the Irish government is now going to introduce a digital safety commissioner who will have legal takedown powers in relation to harmful communication online. Given that Ireland is the international and European headquarters for many social media companies, do you think that this legislation will effectively see Ireland regulating content for Europe and possibly beyond?
Whoever would like to come in first, please comment on that if you could.
Thank you, ma'am. I want to indicate that Neil and I spent some time with our Irish counterparts, and they have nothing but complimentary things to say about you, so it's nice to meet you.
With respect to the question of breaches that you mention, obviously we are not aware of the specifics that would have been sent to the Irish data protection authority, so I wouldn't be able to comment specifically on that, but I would say our general posture is to be as transparent as we can.
You—and various members at this committee—will probably know that we are quite forward-leaning in terms of publicly revealing where there have been bugs, where there has been some information we have found where we need to pursue investigations, where we're made aware of certain things. That's our commitment to you but also to users around the world, and I think you will continue to hear about these things as we discover them. That is an important posture for us to take, because we do want to do what is right, which is to inform our users but also to inform the public and legislators as much as possible whenever we are made aware of some of these instances.
We want to be very transparent, which is why you will hear more from us. Again, unfortunately, I cannot speak to the specifics that the DPA was referring to.
I will also focus first on Facebook. It's great to get to know the Canadian public policy team. I know the German public policy team.
First, I would like to comment on what my colleague from Singapore asked. From my experience in Germany, I have a relatively simple answer. It is simply that many companies, also present today, do not have enough staff to work on all these issues and all these complaints. As has already been mentioned, AI is not always sufficient to work on this. What we've learned in Germany, after the introduction of the NetzDG, is that a massive increase in staff, which is needed to handle complaints, also increases the number of complaints that are handled. I don't know the situation in other countries, but this is definitely an important aspect.
I want to ask about the antitrust ruling in Germany on the question of whether the data from Facebook, WhatsApp and Instagram should be combined without the consent of the users. You are working against that ruling in Germany, so obviously you don't agree, but maybe you can be a bit clearer on your position.
I would also like to thank the kind team doing us the honour of being here today: Kevin Chan, Derek Slater, Neil Potts, Carlos Monje and Michele Austin. We would have liked Mark Zuckerberg to be with us, but he let us down. We hope he will return some other time.
I listened very attentively to two propositions from Mr. Chan. I would like to make a linguistic clarification for the interpreters: when I say “proposition” in French, I mean “proposition” in English, not “proposal”.
In presenting the issues raised by his company, Mr. Chan said that it was not just Facebook's responsibility to resolve them. We fully agree on this point.
And then, again on these issues, he added that society must be protected from the consequences. Of course, these platforms have social advantages. However, today we are talking about the social unrest they cause; this is what challenges us more than ever.
Facebook, Twitter and YouTube were initially intended to be a digital evolution, but they have turned into a digital revolution. Indeed, they have led to a revolution in systems, a revolution against systems, a revolution in behaviour, and even a revolution in our perception of the world.
It is true that today, artificial intelligence depends on the massive accumulation of personal data. However, this accumulation puts other fundamental rights at risk, as it is based on data that can be distorted.
Beyond the commercial and profit aspect, wouldn't it be opportune for you today to try a moral leap, or even a moral revolution? After allowing this dazzling success, why not now focus much more on people than on the algorithm, provided that you impose strict restrictions beforehand, in order to promote accountability and transparency?
We sometimes wonder whether you are as interested when misinformation or hate speech occurs in countries other than China, or in places other than Europe or North America, among others.
It isn't always easy to explain why young people, or even children, can upload staged videos containing obscene scenes, insulting comments or swear words. We find this unacceptable. At times this deviates from the purpose of these tools, from common rules and from accepted social norms.
We aren't here to judge you or to conduct your trial, but much more to implore you to take our remarks into consideration.
Thank you very much, Mr. Ouzzine.
Again, please allow me to answer in English. It isn't because I can't answer your question in French, but I think I'll be clearer in English.
I'm happy to take the first question with respect to what you were talking about—humans versus machines or humans versus algorithms. I think the honest truth on that is that we need both, because we have a huge amount of scale, obviously. There are over two billion people on the platform, so in order to get at some of the concerns that members here have raised, we do need to have automated systems that can proactively find some of these things.
I think to go back to Mr. Collins's first question, it is also equally important that we have humans that are part of this, because context is ultimately going to help inform whether or not this is malicious, so context is super important.
If I may say so, sir, on the human questions, I do think you are hitting on something very important, and I had mentioned it a bit earlier. There is this need, I think, for companies such as Facebook not to make all of these kinds of decisions. We understand that. I think people want more transparency and they want to have a degree of understanding as to why decisions were arrived at in the way they were in terms of what stays up and what goes down.
I can tell you that in the last few months, including in Canada, we have embarked on a global consultation with experts around the world to get input on how to create an external appeals board at Facebook, which would be independent of Facebook and would make decisions on these very difficult content questions. One question we have put out there, at least in our current thinking, is whether those decisions should be publicly binding on Facebook. That is how we have imagined it; we are receiving input and will continue to consult with experts. Our commitment is to get this done by 2019.
Certainly, on our platform, we understand that this is challenging. We want a combination of humans and algorithms, if you will, but we also understand that people will have better confidence in the decisions if there is a final board of appeal, and we're going to build that by 2019.
Of course, we're all here today to discuss the broader question of regulatory frameworks that should apply to all services online. There, once again obviously, the human piece of it will be incredibly important. So thank you, sir, for raising that, because that's the nub, I think, of what we're trying to get at—the right balance and the right framework per platform but also across all services online.
[Delegate spoke in Spanish, interpreted as follows:]
Thank you very much.
I want to talk about some of the concerns that have already been mentioned at this meeting, and also express great concern regarding Twitter, where false accounts proliferate undetected. They definitely remain active for a very long time on social networks and generate, in most cases, messages and trends that are negative and directed against different groups, both political and those linked to businesses or unions in many different areas.
I don't know what mechanisms you have decided to choose to verify the creation of these, because these are accounts that have to do with troll centres or troll farms, which in the case of Ecuador have cropped up very frequently and continue to do so. They have been spreading malicious messages on a massive scale that counteract real and true information and really twist the points of view.
More than continuing to mention the issues that have already been mentioned, I would urge you to think about fact-checking mechanisms that can detect these accounts in a timely manner, because definitely you do not do it quickly enough or as quickly as is necessary. This allows damaging messages to proliferate and generate different thoughts, and they distort the truth about a lot of subjects.
I don't know what the options are, in practice, or what you're going to be doing in practice to avoid this or prevent this, and to prevent the existence of these troll centres and the creation of accounts that are false, of which there are many.
Thank you. That is exactly the right question to ask, and one that we work on every day.
I'll just note that our ability to identify, disrupt and stop malicious automation improves every day. To correct a figure I misstated earlier: we challenged 425 million accounts in 2018.
Number one is stopping the coordinated bad activity that we see on the platform. Number two is working to raise credible voices—journalists, politicians, experts and civil society. Across Latin America we work with civil society, especially in the context of elections, to understand when major events are happening, to be able to focus our enforcement efforts on those events, and to be able to give people more context about people they don't understand.
I'll give you one example because I know time is short. If you go onto Twitter now, you can see the source of the tweet, meaning, whether it is coming from an iPhone, an Android device, or from TweetDeck or Hootsuite, or the other ways that people coordinate their Twitter activities.
The last piece of information or the way to think about this is transparency. We believe our approach is to quietly do our work to keep the health of the platform strong. When we find particularly state-sponsored information operations, we capture that information and put it out into the public domain. We have an extremely transparent public API that anybody can reach. We learn and get better because of the work that researchers have undertaken and that governments have undertaken to delve into that dataset.
It is an incredibly challenging issue, I think. One of the things you mentioned is that it's easy for us to identify instantaneous retweets and things that are automated like that. It is harder to understand when people are paid to tweet, or what we saw in the Venezuelan context with troll prompts, those kinds of things.
We will continue to invest in research and in our tooling to get better.
Thank you, Mr. Co-chair.
My questions are to Neil Potts, Global Policy Director. I have two questions.
The first one is that I would like to understand and to know from him and from Facebook, generally, whether or not they understand the principle of “equal arms of government”. It would appear, based on what he said earlier in his opening remarks, that he is prepared and he is willing to speak to us here, and Mr. Zuckerberg will speak to the governments. It shows a.... I do not understand...not realizing the very significant role that we play as parliamentarians in this situation.
My next question is with reference to Speaker Nancy Pelosi's video, as well as to statements made by him with reference to Sri Lanka. He said that the videos would only be taken down if there were physical violence.
Let me just make a statement here. The Prime Minister of Saint Lucia's Facebook account has been hacked, or replicated if you prefer, and he is now struggling out there to try to inform persons that this is a fake video or a fake account. Why should this be? If it is highlighted as fake, it is fake and it should not be....
Let me read something out of the fake...and here is what it is saying, referring to a grant. I quote:
It's a United Nation grant money for those who need assistance with paying for bills, starting a new project, building homes, school sponsorship, starting a new business and completing an existing ones.
the United nation democratic funds and human service are helping the youth, old, retired and also the disable in the society....
When you put a statement out there like this, this is violence against a certain vulnerable section of our society. It must be taken down. You can't wait until there is physical violence. It's not only physical violence that's violence. If that is the case, then there is no need for abuse when it is gender relations, or otherwise. Violence is violence, whether it is mental or physical.
That is my question to you, sir. Shouldn't these videos, these pages, be taken down right away once it is flagged as fake?
Mr. Chair, thanks for including us here today. I'm glad to be here.
We've had engagement, obviously, at our Irish committee with the companies and we met Mr. Zuckerberg in Dublin recently.
I have to say that I welcome the engagement of the representatives from the tech companies that are here, but I do find extraordinary some of the statements that have been made, such as the statement made by Mr. Potts a few minutes ago that he wasn't familiar with the parliamentary procedure, and that was maybe to explain some gap in the evidence.
I also find it extraordinary that some of the witnesses are unfamiliar with previous hearings and previous discourse on these matters in all of our parliaments. I would have thought that was a basic prerequisite before you entered the room, if you were qualified to do the job. I caveat my questions with that. It is disappointing. I want to put that on record.
Moving on to the specifics, we've heard a lot of words, some positive words, some words that are quite encouraging if we were to believe them, both today and previously, from the executive down. However, I suppose actions speak louder than words—that's my philosophy. We heard a lot today already about the Cambridge Analytica-Kogan scandal. It's worth, again, putting on record that the Irish data protection commissioner actually identified that in 2014 and put Facebook on notice. However, I understand that it wasn't actually followed up. I think it was some two or three years, certainly, before anything was actually done. All that unfolded since could have been avoided, potentially, had that actually been taken and followed up on at the time.
I'm following that again to just, I suppose, test the mettle of actions rather than words. These first few questions are targeted to Facebook.
We heard Mr. Zuckerberg say in public, and we heard again from witnesses here today, that the GDPR is a potential gold standard, that the GDPR would be a good model data management framework and could potentially be rolled out worldwide. I think that makes a lot of sense. I agree. I was on the committee that implemented that in Irish law, and I can see the benefits.
If that is so, why is it that Facebook repatriated 1.5 billion datasets out of the Irish data servers the night before the GDPR went live? Effectively, we have a situation where a huge tranche of Facebook's data worldwide was housed within the Irish jurisdiction because that's the EU jurisdiction, and on the eve of the enactment of GDPR—when, of course, GDPR would have become applicable—1.5 billion datasets were taken out of that loop and repatriated back to the States. It doesn't seem to be a gesture of good faith.
Perhaps we'll start with that question, and then we'll move on if we have time.
I'm running out of time, so I want to talk about liability and the responsibility for content on your platforms. I understand that for very harmful content...and we can talk about the nature of the content itself. If it's very harmful, if it's child porn or terrorism, you will take it down. If it's clearly criminal hate speech, you take it down, because these are harmful just by the nature of the content. There would be liability in Germany, certainly, and we've recommended at this committee that there be liability. If it's obviously hateful content, if it's obviously illegal content, there should be liability on social media platforms if they don't take it down in a timely way. That makes sense to me.
The second question, though, is not about the nature of the content. It's about your active participation in increasing the audience for that content. Where an algorithm is employed by your companies and used to increase views or impressions of that content, do you acknowledge responsibility for the content? I'm looking for a simple yes or no.
Let's go around, starting with Google.
One of the things we've heard from many experts is that a lot of the issues that have happened with data were when the machine learning really came into force. There was an inflection point. Many experts agree that self-regulation is not viable anymore. Rules have to be in place. The business model just can't regulate itself, and it doesn't align with the public interest.
I have two points. My concern is that right now, we're a mature democracy. A lot of the countries represented around this table are mature democracies. My worry is for nascent democracies that are trying to elevate themselves but don't have the proper structure, regulation, education and efficiency in place, or a free or advanced press. There has been some suggestion that maybe this self-regulation should be internationalized, as with other products. Even though some countries may not have the ability to effectively regulate certain industries, the mature democracies could set a standard worldwide.
Would that be something that would be accepted, either through a WTO mechanism or some other international institution that's maybe set apart? Part of the conversation has been around the GDPR, but the GDPR is only for Europe. There are the American rules, the Canadian rules, the South Asian rules.... If there were one institution that governed everybody, there would be no confusion, wherever the platforms were doing business, because they would accede to one standard worldwide.
Thank you for that question.
I followed that as well from the U.S., the disinformation reporting flow that we launched in the EU and in India. I think this is one of the challenges and blessings of being a global platform—every time you turn around there's another election. We have Argentina coming up. We have the U.S. 2020 elections to follow, and Canada, of course, in October.
What we learned is that, as always, when you create a rule and create a system to implement that rule, people try to game the system. What we saw in Germany was a question of how and whether you sign a ballot. That was one of the issues that arose. We are going to learn from that and try to get better at it.
What we found—and Neil mentioned the GIFCT—was that our contribution to the effort, or what Twitter does, is to look at behavioural items first, which is not looking at the content but how are different accounts reacting to one another. That way we don't have to read the variety of contexts that make those decisions more complicated.
I want to make a confession. I'm a recovering digital utopian. I came here as a young democratic socialist and I fought hard against regulation. Imagine that, because we saw all the start-ups and we saw a great digital future. That was 2006. Now in 2019, I have conservative chairs here who are pushing for government regulation. That's the world we're in with you folks.
It's because we're talking about democratic rights of citizens, re-establishing the rights of citizens within the realm that you control. We're talking about the power of these platforms to up-end our democratic systems around the world, which is unprecedented. We're talking about the power of these platforms to self-radicalize people in every one of our constituencies, which has led to mass murder around the world. These are serious issues. We are just beginning to confront the issues of AI and facial recognition technologies and what that will mean for our citizens.
It's what our Privacy Commissioner has called the right of citizens to live free of surveillance, which goes to the heart of the business model, particularly of Facebook and Google, and it came up yesterday and today from some of the best experts in the world that the front line of this fight over the public spaces and the private lives of citizens will be fought in the city of Toronto with the Google project.
Mr. McKay, we asked you questions on Sidewalk Labs before, but you said you didn't speak for Sidewalk Labs, that it was somehow a different company.
Mr. Slater, we had experts say this is a threat to the rights of our citizens. Mr. McNamee said he wouldn't let Google within 100 miles of Toronto.
How is it that the citizens of our country should trust this business model to decide the development of some of the best urban lands in our biggest city?
I want to thank everybody for your testimony today.
I applaud Mr. Chan on some of the changes that you say are coming. We've heard this story many times, so I guess we will wait to see what we get at the end of the day.
It goes to what we were asking for in the first place. In good faith we asked your CEO and your COO to come before us today to work together for a solution to what these problems are that a whole bunch of countries and a whole bunch of people around the globe see as common issues. To me, it's shameful that they are not here today to answer those specific questions that you could not fully answer.
That's what's troubling. We're trying to work with you, and you're saying you are trying to work with us. We just had a message today that was forwarded to me by my vice-chair. It says, “Facebook will be testifying at the International Grand Committee this morning. Neil Potts and Kevin Chan will be testifying. Neither of whom are listed in this leadership chart of the policy team's 35 most senior officials”.
Then we're told you're not even in the top 100. No offence to you individuals; you're taking it for the team for Facebook, so I appreciate your appearance here today, but my last words before the committee are: shame on Mark Zuckerberg and shame on Sheryl Sandberg for not showing up today.
That said, we have media availability immediately following this meeting to answer questions. We're going to be signing the Ottawa declaration just over here so we're going to have a member from each delegation sitting here as representatives.
After that, all the members of Parliament visiting from around the world are invited to attend our question period today. I'm going to point out my chief of staff, Cindy Bourbonnais. She will help you get the passes you need to sit in the House if you wish to come to QP today.
Thank you again for coming as witnesses. We will move right into the Ottawa declaration.
The meeting is adjourned.