We'll call to order the Standing Committee on Access to Information, Privacy and Ethics for meeting 154 and, by extension, the international grand committee on big data, privacy and democracy.
I don't need to go through the list of countries that we have already mentioned, but I will go through our witnesses very briefly.
From the Office of the Privacy Commissioner of Canada, we have Mr. Daniel Therrien, the Privacy Commissioner of Canada.
As an individual, we have Joseph A. Cannataci, special rapporteur on the right to privacy for the United Nations.
We are having some challenges with the live video feed from Malta. We'll keep working through that. I'm told by the clerk that we may have to go to an audio feed to get the conversation. We will do what we have to do.
Also we'd like to welcome the chair of the United States Federal Election Commission, Ellen Weintraub.
First of all, I would like to speak to the meeting's order and structure. It will be very similar to that of our first meeting this morning. We'll have one question per delegation. The Canadian group will have one from each party, and we'll go through until we run out of time with different representatives to speak to the issue.
I hope that makes sense. It will make sense more as we go along.
I would like to thank the members who came to our question period today. I personally thank the Speaker for recognizing the delegation.
I'll give Mr. Collins the opportunity to open it up.
Members of the grand committee, thank you for the invitation to address you today.
My remarks will address three points that I think go to the heart of your study: first, that freedom and democracy cannot exist without privacy and the protection of our personal information; second, that in meeting the risks posed by digital harms, such as disinformation campaigns, we need to strengthen our laws in order to better protect rights; and third, drawing on my expertise in Canadian privacy regulation, I will share suggestions on what needs to be done in Canada so that we have 21st century laws in place to protect the privacy rights of Canadians effectively.
I trust that these suggestions made in a Canadian context can also be relevant in an international context.
As you know, my U.K. counterpart, the Information Commissioner's Office, in its report on privacy and the political process, clearly found that lax privacy compliance and micro-targeting by political parties had exposed gaps in the regulatory landscape. These gaps in turn have been exploited to target voters via social media and to spread disinformation.
The Cambridge Analytica scandal highlighted the unexpected uses to which personal information can be put and, as my office concluded in our Facebook investigation, uncovered a privacy framework that was actually an empty shell. It reminded citizens that privacy is a fundamental right and a necessary precondition for the exercise of other fundamental rights, including democracy. In fact, privacy is nothing less than a prerequisite for freedom: the freedom to live and develop independently as individuals, away from the watchful eye of surveillance by the state or commercial enterprises, while participating voluntarily and actively in the regular, day-to-day activities of a modern society.
As members of this committee are gravely aware, the incidents and breaches that have now become all too common go well beyond matters of privacy, as serious as I believe those to be. Beyond questions of privacy and data protection, citizens' very faith in our democratic institutions and electoral process is now under a cloud of distrust and suspicion. The same digital tools, such as social networks, which public agencies like electoral regulators thought could be leveraged to effectively engage a new generation of citizens, are also being used to subvert, not strengthen, our democracies.
The interplay between data protection, micro-targeting and disinformation represents a real threat to our laws and institutions. Some parts of the world have started to mount a response to these risks with various forms of proposed regulation. I will note a few.
First, the recent U.K. white paper on digital harms proposes the creation of a digital regulatory body and offers a range of potential interventions with commercial organizations to regulate a whole spectrum of problems. The proposed model for the U.K. is to add a regulator agency for digital platforms that will help them develop specific codes of conduct to deal with child exploitation, hate propaganda, foreign election interference and other pernicious online harms.
Second, earlier this month, the Christchurch call to eliminate terrorist and violent extremist content online highlighted the need for effective enforcement, the application of ethical standards and appropriate co-operation.
Finally, just last week here in Canada, the government released a new proposal for an update to our federal commercial data protection law as well as an overarching digital charter meant to help protect privacy, counter misuse of data and help ensure companies are communicating clearly with users.
Underlying all these approaches is the need to adapt our laws to the new realities of our digitally interconnected world. There is a growing realization that the age of self-regulation has come to an end. The solution is not to get people to turn off their computers or to stop using social media, search engines, or other digital services. Many of these services meet real needs. Rather, the ultimate goal is to allow individuals to benefit from digital services—to socialize, learn and generally develop as persons—while remaining safe and confident that their privacy rights will be respected.
There are certain fundamental principles that I believe can guide government efforts to re-establish citizens' trust. Putting citizens and their rights at the centre of these discussions is vitally important, in my view, and legislators' work should focus on rights-based solutions.
In Canada, the starting point, in my view, should be to give the law a rights-based foundation worthy of privacy's quasi-constitutional status in this country. Many countries already take this approach, framing certain privacy rights explicitly as rights in law, with practices and processes that support and enforce them.
I think Canada should continue to have a law that is technologically neutral and principles based. Having a law that is based on internationally recognized principles, such as those of the OECD, is important for the interoperability of the legislation. Adopting an international treaty for privacy and data protection would be an excellent idea, but in the meantime, countries should aim to develop interoperable laws.
We also need a rights-based statute, meaning a law that confers enforceable rights to individuals while also allowing for responsible innovation. Such a law would define privacy in its broadest and truest sense, such as freedom from unjustified surveillance, recognizing its value in correlation to other fundamental rights.
Privacy is not limited to consent, access and transparency. These are important mechanisms, but they do not define the right itself. Codifying the right in its broadest sense, in keeping with the principles-based and technologically neutral nature of the current Canadian law, would ensure that it endures over time despite the certainty of technological developments.
One final point I wish to make has to do with independent oversight. Privacy cannot be protected without independent regulators and the power to impose fines and to verify compliance proactively to ensure organizations are truly accountable for the protection of information.
This last notion, demonstrable accountability, is a needed response to today's world, where business models are opaque and information flows are increasingly complex. Individuals are unlikely to file a complaint about a practice they are unaware of, even when it may harm them. This is why it is so important for the regulator to have the authority to proactively inspect the practices of organizations. Where consent is not practical or effective, as many organizations argue is the case in this day and age, organizations are expected to fill the protective void through accountability, and they must be required to demonstrate true accountability upon request.
What I have presented today as solutions are not new concepts, but as this committee takes a global approach to the problem of disinformation, it's also an opportunity for domestic actors—regulators, government officials and elected representatives—to recognize what best practices and solutions are emerging and to take action to protect our citizens, our rights, and our institutions.
Thank you. I look forward to your questions.
Thank you very much, Mr. Chair, and members of the grand committee, for the invitation to speak.
I will try to build on what Mr. Therrien has said in order to cover a few more points. I will also make some references to what other witnesses presented previously.
First, I will try to take a more international view, since the themes covered by the committee are global in nature. The previous witness spoke about an international treaty. One of the reasons, as I will explain, that I decided in my United Nations mandate to work through a number of priorities on privacy is that the general legal framework for privacy and data protection, insofar as an international treaty is concerned, does not happen to be a UN treaty. It is convention 108, or convention 108+, which has already been ratified by 55 nations across the world. Morocco was the latest, presenting its instrument of ratification yesterday.
When people meet in Strasbourg or elsewhere to discuss action and interoperability within an international framework, there are already 70 countries, ratifying states and observer states, that discuss the framework afforded by that international legal instrument. I would indeed encourage Canada to consider adhering to this instrument. While I am not an expert on Canadian law, I have been following it since 1982, and I think Canadian law is pretty close in most cases. Canada would be a welcome addition to that growing group of nations.
As for the second point I wish to make, I'll be very brief. I share the concerns expressed about the effects on democracy and the fact that the Internet is increasingly being used to manipulate people's opinions through the monitoring of their profiles in a number of ways. The Cambridge Analytica case is, of course, the classic case before us, but there are others in a number of countries around the world.
I should also explain that the six or seven priorities that I have set for my United Nations mandate to a certain extent summarize maybe not all, but many of the major problems that we are facing in the privacy and data protection field. The first priority should not surprise you, ladies and gentlemen, because it relates to the very reasons that my mandate was born, which is security and surveillance.
You would recall that my United Nations mandate was born in the aftermath of the Snowden revelations. It won't surprise you, therefore, that we have dedicated a great deal of attention internationally to security and surveillance. I am very pleased that Canada participates very actively in one of the fora, which is the International Intelligence Oversight Forum because, as the previous witness has just stated, oversight is a key element that should be addressed. I was also pleased to see some significant progress in the Canadian sphere over the past 12 to 24 months.
There is a lot to be said about surveillance, but I don't have much left of my 10 minutes so I can perhaps respond to questions. What I will restrict myself to saying at this stage is that globally we see the same problems. In other words, we don't have a proper solution for jurisdiction. Issues of jurisdiction and definitions of offences remain some of the greatest problems we have, notwithstanding the existence of the Convention on Cybercrime. Security, surveillance and basically the growth of state-sponsored behaviour in cyberspace are still a glaring problem.
Some nations are not very comfortable talking about their espionage activities in cyberspace, and some treat it as their own backyard, but in reality, there is evidence that the privacy of hundreds of millions of people, not in just one country but around the world, has been subjected to intrusion by the state-sponsored services of one actor or another, including most of the permanent powers of the United Nations.
The problem remains one of jurisdiction and defining limits. We have prepared a draft legal instrument on security and surveillance in cyberspace, but the political mood across the world doesn't seem conducive to major discussions on those points. The result is that we have seen some unilateral action, for example, by the United States with its Cloud Act, which has not seen much take-up at this moment in time. However, regardless of whether unilateral action would work, I encourage discussion even on the principles of the Cloud Act. Even if it doesn't lead to immediate agreements, the very discussion will at least get people to focus on the problems that exist at that stage.
I will pass quickly to big data and open data. In the interests of time, I refer the committee to the report on big data and open data that I presented to the United Nations General Assembly in October 2018. Quite frankly, I would advise the committee to be very wary of joining the two, because open data continues to be treated a bit like motherhood and apple pie when politicians proclaim all the good it's going to do for the world. The truth is that with big data and open data we are looking at fundamental issues of privacy and data protection.
In Canadian law, as in the law of other countries, including the laws of all those countries that adhere to convention 108, the purpose specification principle that data should be collected and used only for a specified or compatible purpose lives on as a fundamental principle. It also lives on as a principle in the recent GDPR in Europe. However, we have to remember that in many cases, when one is using big data analytics, one is seeking to repurpose that data for a different purpose. Once again, I refer the committee to my report and the detailed recommendations there.
At this moment, I have a document on health data out for consultation. We expect to debate this document, together with recommendations, at a special meeting in France on June 11 and 12, and I trust there will be a healthy Canadian presence at that meeting too. We've received many positive comments about the report and are trying to build on an emerging consensus on health data. I'd also like to direct the committee's attention to how important health data is: growing amounts of it are being collected each and every day through smartphones, Fitbits and other wearables, and used in ways that really weren't thought about 15 or 20 years ago.
Another consultation paper I have out, which I would direct the committee's attention to, is on gender and privacy. I'm hoping to organize a public consultation. It has already started as an online consultation, but I am hoping to have a public meeting, probably in New York, on October 30 and 31. Gender and privacy continues to be a very important yet controversial topic, and it is one in which I would welcome continued Canadian contribution and participation.
I think you would not be surprised if I were to say that among the five task forces I established, there is a task force on the use of personal data by corporations. I make it a point to meet with the major corporations, including Google, Facebook, Apple, Yahoo, but also some of the non-U.S. ones, including Deutsche Telekom, Huawei, etc., at least twice a year all together around a table in an effort to get their collaboration to find new safeguards and remedies for privacy, especially in cyberspace.
This brings me to the final point I'll mention for now. It's linked to the previous one on corporations and the use of personal data by corporations. It's the priority for privacy action.
I have been increasingly concerned about privacy issues, especially those affecting children, who are online citizens from a very early age. As the previous witness noted, we are looking at some leading new and innovative legislation, such as that in the United Kingdom, not only the legislation on digital harms but also legislation on age-appropriate behaviour and the liability of corporations. I will broach these subjects formally with the corporations at our September 2019 meeting. I look forward to achieving progress on the subject of privacy and children, and on greater accountability and action from the corporations, in a set of recommendations that we shall be devising over the next 12 to 18 months.
I'll stop here for now, Mr. Chair. I look forward to questions.
Thank you, Mr. Chair and members of the committee.
I am the chair of the Federal Election Commission in the United States. I represent a bipartisan body, but the views that I'm going to express are entirely my own.
I'm going to shift the topic from privacy concerns to influence campaigns.
In March of this year, special counsel Robert S. Mueller III completed his report on the investigation into Russian interference in the 2016 presidential election. Its conclusions were chilling. The Russian government interfered in the 2016 presidential election in sweeping and systemic fashion. First, a Russian entity carried out a social media campaign that favoured one presidential candidate and disparaged the other. Second, a Russian intelligence service conducted computer intrusion operations against campaign entities, employees and volunteers, and then released stolen documents.
On April 26, 2019, at the Council on Foreign Relations, FBI director Christopher A. Wray warned of the aggressive, unabated, malign foreign influence campaign consisting of “the use of social media, fake news, propaganda, false personas, etc., to spin us up, pit us against each other, sow divisiveness and discord, and undermine Americans' faith in democracy. That is not just an election cycle threat; it's pretty much a 365-days-a-year threat. And that has absolutely continued.”
While he noted that “enormous strides have been made since 2016 by all different federal agencies, state and local election officials, the social media companies, etc.,” to protect the physical infrastructure of our elections, he said, “I think we recognize that our adversaries are going to keep adapting and upping their game. And so we're very much viewing 2018 as just kind of a dress rehearsal for the big show in 2020.”
Last week, at the House of Representatives, a representative of the Department of Homeland Security also emphasized that Russia and other foreign countries, including China and Iran, conducted influence activities in the 2018 mid-terms and messaging campaigns that targeted the United States to promote their strategic interests.
As you probably know, election administration in the United States is decentralized. It's handled at the state and local levels, so other officials in the United States are charged with protecting the physical infrastructure of our elections, the brick-and-mortar electoral apparatus run by state and local governments, and it is vital that they continue to do so.
However, from my seat on the Federal Election Commission, I work every day with another type of election infrastructure, the foundation of our democracy, the faith that citizens have that they know who's influencing our elections. That faith has been under malicious attack from our foreign foes through disinformation campaigns. That faith has been under assault by the corrupting influence of dark money that may be masking illegal foreign sources. That faith has been besieged by online political advertising from unknown sources. That faith has been damaged through cyber-attacks against political campaigns ill-equipped to defend themselves on their own.
That faith must be restored, but it cannot be restored by Silicon Valley. Rebuilding this part of our elections infrastructure is not something we can leave in the hands of the tech companies, the companies that built the platforms now being abused by our foreign rivals to attack our democracies.
In 2016, fake accounts originating in Russia generated content that was seen by 126 million Americans on Facebook, and another 20 million Americans on Instagram, for a total of 146 million Americans; and there were only 137 million voters in that election.
As recently as 2016, Facebook was accepting payment in rubles for political ads about the United States elections.
As recently as last year, in October 2018, journalists posing as every member of the United States Senate tried to place ads in their names on Facebook. Facebook accepted them all.
Therefore, when the guys on the other panel keep telling us they've got this, we know they don't.
By the way, I also invited Mark Zuckerberg and Jack Dorsey, all those guys, to come and testify at a hearing at my commission when we were talking about Internet disclosure of advertising, and once again, they didn't show up. They didn't even send a surrogate that time; they just sent us written comments, so I feel for you guys.
This is plainly really important to all of us. In the United States, spending on digital political ads went up 260% from 2014 to 2018, from one mid-term election to the next, for a total of $900 million in digital advertising in the 2018 election. That was still less than was spent on broadcast advertising, but obviously digital is the wave of the future when it comes to political advertising.
There have been constructive suggestions and proposals in the United States to try to address this: the honest ads act, which would subject Internet ads to the same rules as broadcast ads; the Disclose Act, which would broaden the transparency and fight against dark money; and at my own agency I've been trying to advance a rule that would improve disclaimers on Internet advertising. All of those efforts so far have been stymied.
Now, we have been actually fortunate that the platforms have tried to do something. They have tried to step up, in part, I'm sure, to try to ward off regulation, but in part to respond to widespread dissatisfaction with the information and the disclosure they were providing. They have been improving, in the United States at least, the way they disclose who's behind their ads, but it's not enough. Questions keep coming up, such as about what triggers the requirement to post the disclaimer.
Can the disclaimers be relied upon to honestly identify the sources of the digital ads? Based on the study about the 100 senators ads, apparently they cannot, not all the time, anyway. Does the identifying information travel with the content when information is forwarded? How are the platforms dealing with the transmission of encrypted information? Peer-to-peer communication represents a burgeoning field for political activity, and it raises a whole new set of potential issues. Whatever measures are adopted today run the serious risk of targeting the problems of the last cycle, not the next one, and we know that our adversaries are constantly upping their game, as I said, and constantly improvising and changing their strategies.
I also have serious concerns about the risks of foreign money creeping into our election system, particularly through corporate sources. This is not a hypothetical concern. We recently closed an enforcement case that involved foreign nationals who managed to funnel $1.3 million into the coffers of a super PAC in the 2016 election. This is just one way that foreign nationals are making their presence and influence felt even at the highest levels of our political campaigns.
These kinds of cases are increasingly common, and these kinds of complaints are increasingly common in the United States. From September 2016 to April 2019, the number of matters before the commission that include alleged violations of the foreign national ban increased from 14 to 40, and there were 32 open matters as of April 1 of this year. This is again an ongoing concern when it comes to foreign influence.
As everything you've heard today demonstrates, serious thought has to be given to the impact of social media on our democracy. Facebook's originating philosophy of “move fast and break things”, cooked up 16 years ago in a college dorm room, has breathtaking consequences when the thing they're breaking could be our democracies themselves.
Facebook, Twitter, Google, these and other technology giants have revolutionized the way we access information and communicate with each other. Social media has the power to foster citizen activism, virally spread disinformation or hate speech and shape political discourse.
Government cannot avoid its responsibility to scrutinize this impact. That's why I so welcome the activities of this committee and appreciate very much everything you're doing. It has carryover effects in my country: even when we can't adopt our own regulations, when you adopt regulations in other countries, the platforms sometimes maintain the same policies throughout the world, and that helps us. Thank you very much.
Also, thank you very much for inviting me to participate in this event. I welcome your questions.
Thank you very much for being here and for your very informative testimony.
I'd like to focus my questions on the foreign threats to democracy and what I call the enabling technologies: the social media platforms, the “data-opolies” that are allowing for those foreign threats to actually take hold.
I note that, as legislators, we are the front lines of democracy globally. If those of us who are elected representatives of the people are not able to tackle this and do something about these threats.... This is really up to us and that's why I'm so pleased that the grand committee is meeting today.
I also note the co-operation that even our committee was able to have with the U.K. committee on AggregateIQ, which was here in Canada when we were studying Cambridge Analytica and Facebook.
However, we do have a problem, which is that individual countries, especially smaller markets, are very easily ignored by these large platforms, because simply, they're so large that individually there is not much we can do. Therefore, we do need to work together.
Ms. Weintraub, do you believe that right now you have the tools you need to ensure that the 2020 U.S. election will be free, or as free as possible, of foreign influence?
Thank you for appearing, Ms. Weintraub.
You Americans are like our first cousins, and we love you dearly, but we're a little smug, because we look over the border and see all this crazy stuff and say we'd never do that in Canada. I will therefore give you the entire history of electoral fraud and interference in Canada in the last 10 years.
We had a 20-year-old who was working for the Conservatives who got his hands on some phone numbers and sent out wrong information on voting day. He was jailed.
We had a member of this committee who got his cousins to help pay for an electoral financing scheme. He lost his seat in Parliament and went to jail.
We had a cabinet minister who cheated on 8,000 dollars' worth of flights in an election and went over the limit. He lost his position in cabinet and lost his seat.
These situations have consequences, and yet we see wide open data interference now for which we don't seem to have any laws, or we're seemingly at a loss and are not sure how to tackle it.
I can tell you that in 2015 I began to see it in the federal election, and it was not noticed at all at the national level. It was intense anti-Muslim, anti-immigrant women material that up-ended the whole election discourse in our region. It was coming from Britain First, an extremist organization. How working-class people in my region were getting this stuff, I didn't understand.
I understand now, however, how fast the poison moves in the system, how easy it is to target individuals, how the profiles and the data profiles of our individual voters can be manipulated.
The federal government has new electoral protection laws, and they may have been the greatest laws for the 2015 election, but that was like stagecoach robberies compared with what we will see in our upcoming election, which will probably test some of the ground for the 2020 election.
In terms of this massive movement in the tools of undermining democratic elections, how do we put in place the tools to take on these data mercenaries, who can target us right down to individual voters, each with our own individual fears?
Thank you for the question, Mr. Lucas.
The answer is that national practices vary. Some countries go more towards the United States model. Others go towards the United Kingdom model. In truth, though, we are finding that in many countries where the law is more restrictive, in practice many individuals and political parties are using social media to get around the law in a way that was not properly envisaged in some of the actual legislation.
With the chair's permission, I'd like to take the opportunity, since I've been asked a question, to refer to something that I think is transversal across all the issues we have here. It goes back to the statement made by Ms. Weintraub regarding who is going to be the arbiter of truth. In a number of countries, that value is still very close to our hearts. It is a fundamental human right, which happens to reside in the same article 17 of the International Covenant on Civil and Political Rights, of which many of the countries around the table, if not all, are members.
In the same section that talks about privacy, we have the provision on reputation, and people [Technical difficulty—Editor] care a lot about their reputation. So in terms of the arbiter of truth, essentially, in many countries, including France—I believe there was some discussion in Canada too—people are looking at having an arbiter of truth. Call him the Internet commissioner or call him the Internet ombudsman, call him what you will, but in reality people want a remedy, and the remedy is that you want to go to somebody who is going to take things down if something is not true.
We have to remember—and this applies also to online harm, including radicalization—that a lot of the harm that is done on the Internet is done in the first 48 hours of publication, so timely takedown is the key practical remedy. Also, in many cases, while freedom of speech should be respected, privacy and reputation are best respected by timely takedown. Then, if the arbiter in the jurisdiction concerned deems that it was unfair to take something down, it can go back up. Otherwise, we need the remedy.
Thank you, Chair. I have another couple of questions and observations.
Just picking up on something that I think Nate was saying earlier in his other points, the question often is whether these platforms are legally publishers or dumb hosts, terminals that display the content that gets put in front of them. I think one argument to support the fact that they're publishers and therefore have greater legal responsibilities is that they have moderators and moderation policies, with people making live decisions about what should and shouldn't be shown. On top of that, of course, are the targeting algorithms. I think that's something that's of interest, just as an observation.
On my other point, before I get into questions, we were talking about nation-states and different hostile acts. One thing that's the topic of the moment, I suppose, is the most recent revelation in terms of the Chinese government and the Huawei ban, and the fact that Google, I think in the last few days, announced a ban on supporting Huawei handsets. But it strikes me that Google is tracking us through Google Maps and everything else as we walk about with our phones. I think I read that there are 72 million different data points in a typical year consumed just by walking about town with a phone in your hand or in your pocket. Maybe the difference is that somewhere Google has terms and conditions that we're supposed to accept, and Huawei doesn't, but both are effectively doing the same thing, allegedly. That's just a thought.
On the legislative framework, again, as I mentioned earlier, I've been trying to draft some legislation and track some of this, and I came to the honest ads act. One of the issues we've come across, and one of the challenges, is balancing free speech with, I suppose, voter protection and protecting our democracies. I'm always loath to criminalize certain behaviours, but I'm wondering what the tipping point is.
I suppose that, in the way I've drafted it initially, what I've considered is that you can post whatever you want as long as you're transparent about who actually said it, who is behind it, who is running it or who is paying for it, particularly if it's a commercial, if it's a paid-for post. In terms of the bots and the fake accounts, and what I would call the industrial-scale fake accounts, where we have a bot farm or where we have multiple hundreds or thousands of accounts actually being manipulated by maybe a single user or single entity for their own ends, I think that probably strays into the criminal space.
That's one question for Ms. Weintraub.
I suppose another question, a related question, is something that we struggle with in Ireland and that I guess many jurisdictions might struggle with. Who is responsible for policing these areas? Is it an electoral commission? If so, does that electoral commission have its own powers of enforcement and powers of investigation? Do you have law enforcement resources available to you? Is it the plain and simple police force of the state? Is a data protection commissioner in the mix as well? We have different types of regulators, but it can be a bit of an alphabet soup, and it can be difficult to actually pin down who is in charge. Also, even when we do have somebody in charge, they don't always have the resources to follow through.
That's my first question. In terms of criminalization, is that a bridge too far? Where do you draw the line? Second, if there is criminalization and there's an investigation required, what kind of resourcing do you have or do you think is needed?
Actually, that's a great line. I'm going to use that again myself.
I think I still have time for my next question. There is another way around this that we've seen in Ireland and, I guess, around the world. We've heard it again today. Because of the avalanche of fake news and disinformation, there is a greater onus on supporting the—dare I say—traditional platforms, the news media, what we'd call independent, quality news media.
There is a difficulty in terms of who decides what's independent and what's quality, but one of the approaches that we've been looking at—I think I heard it in the Canadian Parliament when we watched the question period a few hours ago, where I heard similar debates—is the idea of giving state subsidies or state sponsorship to independent media, not to any particular news organization but maybe to a broadcasting committee or a fund that is available for indigenous current affairs coverage, independent coverage.
That could be online, or in the broadcast media, or in the print media. It's a way to promote and sustain the traditional fourth estate and the traditional checks and balances of democracy, but in a way that I suppose has integrity and is supported, asks questions of us all, and acts as a foil to the fake news that's doing the rounds. However, it's a difficult one to get right, because who decides who's worthy of sponsorship and subsidy and who isn't? I guess if you can present as a bona fide, legitimate local platform, you should be entitled to it. That's an approach we're exploring, one that has seemed to work in other jurisdictions.
My question will focus on a word that seemed to me to be important just now, the word “responsibility”.
This morning, we heard from representatives from digital platforms. They seemed to brush away a major part of their responsibility.
That shocked me. I feel they have a responsibility for their platforms and for the services they provide. Despite very specific questions, they were not able to prove that they are in control of their platforms in terms of broadcasting fake news or hate propaganda that can really change the course of things and influence a huge number of people.
The users, those who buy advertising, also have a responsibility. When you buy advertising, it must be fair and accurate, in election campaigns especially, but also all the time.
If there were international regulation one day, how should we determine the responsibility of both parties, the digital platforms and the users, so they can be properly identified?
That question goes to anyone who wants to answer.
I certainly share Mr. Therrien's opinion, but I would like to add something else.
If I could in this questioning just pick out one thing, it is to say that for whoever is going to control whether something should be taken down or not, or whether it's true or not—whatever—it requires effort, and that requires resources. Resources need to be paid for, and who is collecting the money? It's largely the companies.
Of course, you can have somebody for whom you can genuinely say, “Okay, this was the party, or the sponsor, or whoever who paid for the ad.” Otherwise, when push comes to shove, I think we're going to see a growing argument and a growing agreement in a lot of jurisdictions, which will say that the companies are collecting the money and, therefore, they have the means to control things. We've seen that to be the case when, for example, Facebook needed to have people who spoke the language of Myanmar in order to control hate speech in that country. I think we're going to see an increasing trend in many national jurisdictions, and probably in international agreements, of attributing accountability, responsibility and fiscal liability for what goes on the platforms to the people who collect the money, which is normally the platforms themselves, to a large extent.
Thank you, Mr. Chair.
We heard an extraordinary statement from Google today that they voluntarily stopped spying on our emails in 2017. They did that in such a magnanimous manner, but they wouldn't agree not to spy on us in the future because there may be nifty things they can do with it.
I can't even remember 2016—it's so long ago—but 2018 changed our lives forever. I remember our committee was looking at consent and whether the consent language should be clearer, or bigger, with less gobbledygook.
I don't ever remember giving Google consent to spy on my emails or my underage daughters' emails. I don't ever remember that it came up on my phone that, if I wanted to turn my tracking location on so I could find an address, they could permanently follow me wherever I went and know whatever I did. I don't remember giving Google or any search engine the consent to track every single thing I do. Yet, as legislators, I think we've been suckered—Zuckered and suckered—while we all talked about what consent was, what consumers can opt in on, and if you don't like the service, don't use it.
Mr. Therrien, you said something very profound the last time you were here about the right of citizens to live free of surveillance. To me, this is where we need to bring this discussion. I think this discussion of consent is so 2016, and I think we have to say that they have no consent to obtain this. If there's no reason, they can't have it, and that should be the business model that we move forward on: the protection of privacy and the protection of our rights.
As for opt-in and opt-out, I couldn't trust them on anything on this.
We've heard from Mr. Balsillie, Ms. Zuboff, and a number of experts today and yesterday. Is it possible in Canada, with our little country of 30 million people, to put in a clear law that says you can't gather personal information unless there's an express, clear reason? It seems to me that's part of what's already in PIPEDA, our information privacy laws, but can we make it very clear with very clear financial consequences for companies that ignore that? Can we make decisions on behalf of our citizens and our private rights?