Thank you very much, Mr. Chair and members of the committee. Thank you for the invitation to appear today to speak again about these important topics.
At Google we believe in ensuring that our users have choice, transparency and control. These values are built into every product we make. We build products for everyone and make most of them free.
Privacy is also for everyone, and should be simple to understand and control. At Google we combine cutting-edge technology with data to build products and services that improve people's lives, help grow the economy, and make the web safer. With partners, we are working to tackle big challenges and enable scientific breakthroughs.
We understand that our users trust us with the information they share so that we can build better products that serve them and improve their lives, but we also know that with that trust comes great responsibility. That's why we handle their data with strong privacy and security practices.
Before I provide an update on important work that's happened on our end since I appeared before this committee in May, I want to briefly outline the four key pillars that underpin how Google thinks about protecting user data—transparency, control, data portability and security.
First, on transparency, our approach to privacy stems directly from our founding mission, which is to organize the world's information and make it universally accessible and useful. That's why we build for everyone. Providing most of our products for free is a key part of meeting that mission. Ads help us make that happen. With advertising, as with all our products, users expect us to keep their personal information confidential and under their control.
Second, with regard to user control, our privacy tools are built for everyone. Different people have different ideas about privacy, so we must build with that in mind. Our “My Account” centre puts these privacy and security settings in a single place, making it easy to find and set preferences.
I want to call attention to our security checkup and privacy checkup tools that we regularly promote to users. These tools help users identify and control the apps that have access to their Google account data and guide users to review and change their security and privacy settings, such as deleting their Google activity, disabling personalized ads, or downloading a copy of their information.
Third, we believe portability is a key way to drive innovation, facilitate competition and best serve our users. That's why we've been working on it for over a decade. If users want to try out or even switch to another product or service, they should be able to do so as easily as possible, without being locked in to a service. In 2011 we launched Google Takeout, allowing users to download their data from Google and use it with a different service. This year we announced an important update to that service: the Data Transfer Project, which we developed and are now working on with leading industry partners to facilitate transfers directly between services.
Fourth, security considerations are paramount in all of these efforts. Securing the infrastructure that provides Google services is critical in light of the growing and sophisticated nature of many threats directed at our services and users. Google products are built at their core with strong security protections, including continuous efforts to block a range of security threats. We make technology like Safe Browsing available for free to other technology providers. This helps to protect Internet users on and off Google services.
With that in mind, our privacy work is never finished. We try to learn from our mistakes and don't take our success for granted. Our goal is to be best in class and to continually raise the bar for ourselves and the industry.
This committee's current inquiry stems from the breach of personal information associated with Cambridge Analytica, a breach that Facebook first reported earlier this year. When the news broke, Google proactively embarked on an analysis of our products and services to further improve their privacy, particularly in relation to developer access to user data. This effort, known internally at Google as “Project Strobe”, has so far resulted in several important insights about our platforms and actions to improve them, with more to come. We announced some earlier this month, but in the interest of updating this committee on what's happened since we last spoke, let me offer a quick overview of the actions we've recently taken.
The first update is with regard to app permissions. We announced an update earlier this month outlining how consumers will now get more fine-grained control over what account data they choose to share with each app.
We launched more granular Google account permissions that will show in individual dialogue boxes when you download an app and when that app is updated.
People want fine-grained controls over the data they share with apps, so instead of seeing all requested permissions on a single screen, these apps will have to show you each requested permission, one at a time, within its own dialogue box. For example, if a developer requests access to both calendar entries and Drive documents, you will be able to choose to share one but not the other.
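From a developer's point of view, this unbundling corresponds to incremental authorization: an app asks for each OAuth scope separately, at the moment it needs it, rather than all up front. Below is a minimal sketch of that pattern, assuming Google's google-auth-oauthlib Python client; the client_secrets.json path and the specific Calendar and Drive scopes are illustrative assumptions, not details drawn from this testimony.

```python
# Minimal sketch of requesting Google account scopes one at a time
# (incremental authorization) with google-auth-oauthlib. The file path
# and scope choices are illustrative placeholders.
from google_auth_oauthlib.flow import InstalledAppFlow

CALENDAR_SCOPE = "https://www.googleapis.com/auth/calendar.readonly"
DRIVE_SCOPE = "https://www.googleapis.com/auth/drive.readonly"

def request_scope(scope: str):
    """Prompt the user for a single permission in its own consent step."""
    flow = InstalledAppFlow.from_client_secrets_file(
        "client_secrets.json", scopes=[scope]
    )
    # run_local_server opens the consent screen in a browser; the user
    # can decline this scope without affecting any other grant.
    return flow.run_local_server(port=0)

# Ask for calendar access when the calendar feature is used, and Drive
# access only when a Drive feature is used; the user may grant one and
# refuse the other.
calendar_creds = request_scope(CALENDAR_SCOPE)
drive_creds = request_scope(DRIVE_SCOPE)
```

The design point is that each grant is an independent decision, so refusing the Drive permission leaves the calendar feature working.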
The second update concerns Gmail. We understand that when users grant apps access to their Gmail, they do so with certain use cases in mind. We're limiting the types of apps that can request access to Gmail to those that directly enhance email functionality, such as mail merge services or send-delay products.
Moreover, these apps will need to agree to new rules on handling Gmail data and will be subject to security assessments. People can always review and control which apps have access to their Google account data, including Gmail, within our security checkup tool.
The third update concerns restricting apps that can request call log and SMS permissions on Android devices. When users grant SMS, contacts and phone permissions to Android apps, they do so with certain use cases in mind.
We now limit apps' ability to receive call log and SMS permissions on Android devices, and we are no longer making contact interaction data available via the Android Contacts API. Many third-party apps, services and websites build on top of our various services to improve everyone's phones, working lives and online experience. We strongly support this active ecosystem, but we understand that its success depends on users knowing that their data is secure and on our providing clear tools and guidelines for developers.
From unbundling the permissions shown to users when they decide to give access to their sensitive data, to limiting developer access to Gmail, to requiring developers to undertake security enhancements, we continue to make securing user data a top priority while supporting a wide range of useful apps.
When it comes to protecting their data and giving them more control over it, Canadian consumers and businesses are counting on us. Google's search tools help Canadians find information, answers and even jobs, and our advertising products help Canadian businesses connect with consumers and customers around the globe.
This brings me to one more update since I last appeared before this committee. In September, Deloitte released a report looking at the economic impact of Google Search and Google Ads on Canadian businesses. Deloitte estimates that our ads services supported between $10.4 billion and $18.5 billion in economic activity by those businesses and partners, which translates to the equivalent of 112,000 to 200,000 full-time jobs. The Android ecosystem in Canada helped generate $1.5 billion in revenue within Canada's app economy, supporting 69,000 jobs.
The web is at the heart of economic growth, both here in Canada and globally. That's why Google invests in building products and services that help businesses, entrepreneurs, non-profits, developers, creators and students succeed online. Hundreds of thousands of Canadians are using Google's tools to grow global audiences and enterprises, and we're proud to support Canadian businesses in making the most of the web.
We at Google remain committed to continuing to develop products and programs to bring this opportunity to everyone.
Our privacy and security work is never finished. We will continue to do this, and we stand ready to do our part and to help build a better ecosystem for Canadians and Canadian businesses.
Thank you again for the opportunity to be here today. I look forward to continuing to work with you on these important issues, and I welcome any questions you may have.
Earlier this year, at the same time that the Facebook and Cambridge Analytica story came out, we launched an internal process to verify that our APIs and internal data protection processes weren't allowing similar lapses in information sharing.
Through that process, what we discovered in Google Plus was a bug, not a breach. The bug allowed apps that had access to a user's public data—data the user had chosen to share on their profile—to also access fields of that profile that the user hadn't granted permission for. It also allowed the app to access information that friends had shared with the user within that same subset of data.
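Google has not published the faulty code, but the class of flaw described, an API returning profile fields beyond what the granted permission covers, can be illustrated with a purely hypothetical sketch; every field name and function here is invented for illustration.

```python
# Hypothetical sketch of the bug class described above; this is not
# Google's code. A profile endpoint should return only the fields
# covered by the permission the user granted.
PUBLIC_FIELDS = {"name", "photo"}
NON_PUBLIC_FIELDS = {"email", "occupation", "birthday"}

def correct_response(profile: dict, granted: set) -> dict:
    # Correct behaviour: filter the profile strictly to granted fields.
    return {k: v for k, v in profile.items() if k in granted}

def buggy_response(profile: dict, granted: set) -> dict:
    # Bug class: the filter is skipped, so non-public fields leak to an
    # app that was only granted access to the public ones.
    return dict(profile)

profile = {"name": "A. User", "photo": "...", "email": "a@example.com"}
print(correct_response(profile, PUBLIC_FIELDS))  # name and photo only
print(buggy_response(profile, PUBLIC_FIELDS))    # email leaks as well
```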
Because Google Plus was designed as a privacy-protective tool, we keep very limited logs of behaviour on Google Plus; we don't keep a record of what our users do there. We had a two-week window of data in which to evaluate whether developers were aware of that bug in the API, whether they could have accessed this additional information, whether they had acted on it, and whether any information had been collected. Our internal data protection office reviewed that and could find no evidence that anyone was aware the bug existed or that the data had been accessed or misused.
Once that had been identified, they then evaluated the potential harm and whether or not users should be notified that this bug had existed and that the potential for such access had existed. What they determined was that there was no sign the information had been accessed by developers through that bug, and no sign that any information had been shared in a manner users did not expect. Also, there was really no action to ask of developers regarding their access to data, because as soon as we noticed the bug, we closed it.
As for notifying users, we could not identify any set of users affected by the bug, because in the data available to us there were none. Therefore, there was no behaviour we could ask them to change to address any possible harm from that bug. That was the rationale behind the decision.
Colin, it's good to see you. I'm a little bit of a visitor to this committee, but since I'm here and you're here, I thought we'd proceed on that basis.
You may have covered this at your previous appearance, but I wanted to go through the hacking incidents that have occurred, particularly what we saw in the U.S. presidential campaign. Maybe you've covered this ground already.
I don't even know whether it was Gmail or some other service, but my knowledge of the hack on the Democratic national campaign, the Hillary Clinton campaign, is that it was a kind of sad story.
They had rules and so on, and just one human error—and unless we're all taken over by robots tomorrow, human error is going to continue—created the opportunity for the Russians to gain access to every single email of Hillary Clinton's national campaign chair. That was on the basis of a Bitly link. In a case like that, what these hackers like to do, if they're phishing or spear phishing, is give you a sense of urgency: if you don't do something right away, if you don't click right away, your credit card will be compromised, or access to your bank account will be compromised, or what have you.
There was a Bitly link attached to the email that went to John Podesta. He quite rationally flipped it to the campaign's director of IT, asking whether it was legitimate, which was the right thing to do.
The director of IT in the campaign figured out that it was suspect and flipped the email back—but he forgot a word. He said, “This is legit” rather than “This is not legit.” As soon as Podesta saw that, he clicked on the Bitly link, and the rest is part of the history books now.
I'm just asking a question. Based on that, we know there is human error. You can have all the systems in the world, but human error does take place. How do we...? Maybe it's a combination of education and better systems, and maybe there's AI involved. I wanted your take on this, because we're all coming up to an election campaign and we're all susceptible to hacks. I'd be very surprised if there were no hacking attempts in the next federal election campaign in Canada.
Let me get your side of this issue.
This is certainly something we recognize because of the billions of users we have, particularly in Gmail, where we started. We've attacked this problem in multiple ways over the years.
To start, as early as 2010 we were notifying Gmail users when we saw attempts to access their accounts, whether attempts to crack an account by force or spoof emails designed to push users into a decision much like the one you described. We built on those notifications with security protections that now alert you when we see an attempt to access your account from an unusual location or geography, so that if someone outside your normal space, or even you while travelling, logs in to your account from elsewhere, you'll get a notification on your account, or on your phone if you've enabled two-factor authentication. We've also implemented two-factor authentication across most of our products, so that someone can't hack into your account simply by having the account name and the password; you now need a physical token of some kind.
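As a minimal illustration of that second factor, the sketch below uses the open pyotp library to show a time-based one-time password (TOTP), the standard mechanism behind authenticator apps; this is a generic illustration, not Google's internal implementation.

```python
# Minimal sketch of a time-based one-time password (TOTP) second
# factor using pyotp. The secret would normally be provisioned once,
# at two-factor enrolment (for example, via a QR code).
import pyotp

secret = pyotp.random_base32()   # shared between server and device at enrolment
totp = pyotp.TOTP(secret)

code = totp.now()                # what the user's device displays
assert totp.verify(code)         # what the server checks at sign-in

# A stolen password alone is no longer enough: without the device
# holding the secret, an attacker cannot produce a valid current code.
```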
However, we also recognize that attackers can force their way into a system through sheer volume. Jigsaw, which is an Alphabet company, has developed a service called Project Shield, available to non-profits, political parties and others, to help them counter denial-of-service attacks, in which a brute-force flood of traffic is used to make a service fail.
As well, earlier this year we introduced our Advanced Protection Program, particularly for elected officials, so they could put in place the security controls we have within the company. These not only require two-factor authentication but also place specific restrictions on unusual log-in attempts and on attempts to access information within your Google account services; you are forced to provide additional verification. It's an inconvenience for the user, but it provides greater peace of mind that you have security protection able to identify those sorts of flagrant attempts.
For the general user, I mentioned Safe Browsing in my remarks. Safe Browsing was developed specifically for that concern. When people click on a link and use the Chrome browser to go to a page, we can see whether landing on that page causes unusual behaviour, such as immediately hitting the back button, trying to leave the page, or shutting down the browser altogether. Over billions and billions of interactions, we can recognize the pages that are generating suspicious or harmful content and causing our users to behave that way. We then make that data available through an API that we share with Microsoft and Firefox, allowing them to flag those URLs as well, so that when you click on a link to one of those pages, you get a red warning box telling you it's a security threat, and in most cases you will not be allowed to advance to the page. In that way, we're taking insights from behaviour to try to eliminate the concern as well.
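The behavioural analysis described here is internal to Google, but the resulting threat list is queryable through the public Safe Browsing Lookup API (v4). Below is a minimal sketch of checking a URL against it with the requests library; the API key and the URL being checked are placeholders.

```python
# Minimal sketch of a Safe Browsing Lookup API (v4) query. Replace
# YOUR_API_KEY with a real key; the URL checked is a placeholder.
import requests

API_KEY = "YOUR_API_KEY"
ENDPOINT = f"https://safebrowsing.googleapis.com/v4/threatMatches:find?key={API_KEY}"

def url_is_flagged(url: str) -> bool:
    """Return True if Safe Browsing lists the URL as a known threat."""
    body = {
        "client": {"clientId": "example-client", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }
    resp = requests.post(ENDPOINT, json=body, timeout=10)
    resp.raise_for_status()
    # An empty JSON object means no match; a "matches" list names the threats.
    return bool(resp.json().get("matches"))

if __name__ == "__main__":
    print(url_is_flagged("http://example.com/"))
```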
I'd like to go back to the part you discussed at the beginning, when you outlined the key areas on controls, particularly, in this case, not the Google search engine but YouTube. Obviously, certain things on YouTube that go against the user policy are automatically removed, or the person is informed that their content has been removed because it violates the policy.
I would be very interested in knowing how those determinations are made. For instance, is it an individual person who looks at those? Is it an algorithm? Are you using some sort of AI? Are there keywords that you're looking at?
There are a couple of things I'm a bit concerned about. After the testimony of Mr. Vickery at this committee, I posted my questions, back and forth just like you and I are doing right now. It's televised; it's on ParlVU. One of the questions I asked was about the fact that some of this data had been found on a Google drive. When I went to post that intervention, which was taken from a parliamentary television site, it was found to violate the YouTube.... The only caption was the name of our study, which is the breach of personal information involving Cambridge Analytica and Facebook. It was removed, and I was told that I would face penalties. I went for a review, and of course, after the review, it was posted back up again.
I know of another member of Parliament who asked a question in question period about cannabis, and that was removed because it was said that he was promoting the use of drugs.
How are these determinations made? What are the algorithms or terms, or how do you do that with YouTube? There are, at the same time, an awful lot of things on YouTube that promote hate, that are anti-democratic, that are perhaps even put there by interests that have links to international crime.
I worry that the way these algorithms are being used might not be capturing what we really want to remove, while free speech in an environment like this, a parliamentary committee, has actually been caught in the net.