Hello. Thank you for inviting me to speak today.
I am an assistant professor at the University of Ottawa. I completed my doctoral work at the University of Oxford. My research focuses on political communication in a digital media environment. I've examined issues such as the political uses of artificial intelligence and political bots, echo chambers, and citizens' perceptions of social media data use by third parties, such as government, journalists and political parties.
My research has been conducted in Canada and internationally, but today I want to speak about four things: first, analog versus digital voter-targeting strategies; second, changing definitions of political advertisements; third, self-regulation of platforms; and fourth, artificial intelligence.
I have one quick note. I'll use the term “platform” throughout my testimony today. When I do, I'm referring to technology platform companies, including social media, search engines and others.
Let's start with voter targeting. This is by no means a new phenomenon. It's evolving at a spectacular rate, though. It is typical and in fact considered quite useful for a political party to collect information by going door to door in a community and asking people if they plan to vote and who for. In some cases, they may also ask what issues a citizen cares about. This helps political parties learn how to direct their limited resources. It also helps citizens connect with their political system.
However, even with this analog approach, there are concerns, because disengagement of voters and discrimination can be exacerbated. For example, if certain groups are identified as unlikely voters, they are then essentially ignored for potentially the remainder of the campaign.
Digital data collection can amplify these issues and present new challenges. I see four key differences in the evolving digital context as opposed to that analog one I briefly outlined.
First, there are meaningful differences between digital and analog data. The speed and scope of data collection is immense. While data collection used to require a lot of human resources, it now can be done automatically through sophisticated tools. I believe that last week you heard from a number of people who described the ones that political parties are using currently.
Similarly, this data can now more easily be joined with other datasets, such as credit history or other personal information that citizens may not want political parties or political entities to be using. It can also be shared, transported and searched more easily, and predictive analytics can be employed, because there is so much more data, and so many more kinds of data, that can be collected together and analyzed very quickly.
Second, citizens may no longer be aware when their data is being collected and used. Unlike when they had to answer the door to give out personal information, this now can be done without their knowledge. They may not even know what is technically possible. In a study of Canadian Internet users, my colleagues at Ryerson University and I found that most Canadians are uncomfortable with political uses of even publicly available social media data. For me, this signals a need to really think about what kinds of data citizens would actually want their political representatives to have and to be using.
Third, the uses of data are evolving. Since online advertisements, for example, can now target niche audiences, personal data has become more useful to political entities. At the same time, these uses are less transparent to regulators and less clear to citizens. This means that emerging uses could be breaking existing laws, but they're so hard to trace that we don't know. We need to have increased transparency and accountability in order to respond adequately.
Fourth, political entities are incentivized to collect data continually, not solely during an election campaign. This means that existing elections laws could be insufficient. I should note that it is not just political parties that are collecting this kind of data, but also non-profits, unions and other third parties, so the questions about how this data is collected and what is the responsible use have to be broader than simply political parties writ large.
These changes are particularly concerning, then, because many of these uses aren't covered by existing privacy laws, and the Privacy Commissioner doesn't have the tools needed to make sure those laws are enforced the way they were intended.
This data use is not all bad. There are a lot of positive uses, including increasing voter turnout and trying to combat voter apathy. That said, to balance things we need to make sure we include political parties under the personal data use laws that we have, PIPEDA being the main one. We need to create provisions that ensure transparency and accountability for political uses of data, and we need to ensure that citizens are literate, which includes things like having better informed-consent statements and other media and digital literacy initiatives.
With the few minutes I have left, I want to talk about a few issues that stem from this targeted voter behaviour. First is political advertisement. It's no longer quite as clear-cut as it once was. In addition to the placement cost for what platforms might call advertisements, there are a number of other ways that political entities can have paid content show up in somebody's newsfeed or as a recommended video, and ways that algorithms can be gamed to make sure that certain pieces of content show up on people's screens.
Those might include something like sponsored stories, using brand ambassadors, renting social media accounts that already have a big following, or employing political bots to help disseminate information more widely. All of these could be done potentially for free but they could also be done on a paid basis, and when they're paid, that comes awfully close to advertising, under the spirit of the law.
In response, we need to redefine what constitutes a political advertisement in order to continue enforcing these existing laws and their intended outcomes. It's particularly important that we consider this when we look at the worldwide increase in instant messaging platform use. The ways that political parties and other political entities are using instant messaging platforms is a lot harder to track than the ways social media platforms are used, and we can expect that is going to increase.
Second, I want to talk about self-regulation and how it is insufficient when we're talking about the big platform companies. While they have been responding, these are reactionary responses. These are not proactive responses to the threat that we see when digital data is being collected and personal information is being stored. These companies need to be responsible for the content that shows up, what they allow to show up, on their platforms. We also need to make sure that any interactions they have with those data are transparent and accountable. Right now there is a black box. We don't know how Facebook or Google decides what shows up and what doesn't, and we can't allow that to continue when things like personal privacy, hate speech, and free speech are being called into question.
Finally, the use of artificial intelligence is already complicating matters. The typical narrative at the moment is that when learning algorithms are used, it is impossible to open that black box and unpack what's happened and why. While this may be true if you take a very narrow technical perspective, there are in fact steps we can take to make the use of AI more transparent and accountable.
For example, we could have clearer testing processes, where data is open for government and/or academics to double-check procedures. There could be regular audits of algorithms, the way financial audits are required, and documented histories of the algorithm development, including information about how decisions were made by the team and its members and why. We also need things like clearer labelling of automated accounts on social media or instant messaging applications, and registrations of automated digital approaches to voter contact. You could imagine a voter contact registry being modified to include digital automated approaches. As well, we need widespread digital literacy programs that really dig into how these digital platforms work so that citizens can be empowered to demand the protection they deserve.
Ultimately I see a lot of value in political uses of digital data, but those uses must be transparent and accountable in order to protect the privacy of Canadians and the integrity of Canadian democracy. This requires privacy laws to be updated and applied to political parties, the Privacy Commissioner to have increased power to enforce regulations, and platforms to be held responsible for the content they choose to allow and the reasons for that.
Thank you very much for having me today.
I'm an associate professor in the faculty of law at the University of Ottawa, where I teach election law and constitutional law. I am also the director of the public law group there, although today I speak only for myself. I work on matters including voter privacy, campaign finance laws applied online and social media platform regulation, in addition to election cybersecurity. Today I'd like to speak to you a little bit about political parties, which I know you've heard a lot about; about social media platform regulation; and then, briefly, about cybersecurity, given what you've heard in the last few rounds of testimony.
Some of this material I had the opportunity to present to your colleagues in the procedure and House affairs committee in their study of Bill , so I also have a few comments about that bill.
The first issue, which I know you've heard about, is voter privacy as it relates to political parties. As my colleague Professor Dubois mentioned, political parties are among the few major Canadian institutions and entities not covered by meaningful privacy regulation. They are not government entities under the Privacy Act, and they are not engaging in commercial activity under PIPEDA. They fall into a gap between the two major pieces of federal privacy legislation.
Very recently, all of the privacy commissioners across Canada—the federal commissioner and the provincial ones—issued a statement saying this was an unsatisfactory state of affairs and something needed to be done about it. Only in B.C. are political parties covered by provincial privacy laws. There was a bill in Quebec, as I know you've heard, which was not passed before the recent election.
Bill would address these measures to some extent. Mainly, though, it would require political parties to have privacy policies and set rules on which particular issues the policies must address. All the major registered parties already do have privacy policies. The bill might change some of the issues that they address, because they're not consistent across all parties, but it would not actually clearly give oversight authority to either the federal Privacy Commissioner or Elections Canada. It would not actually require specific content in privacy policies. It wouldn't provide an enforcement mechanism. Therefore, I think, it's a good first step. It's the biggest step that's been made in terms of political parties and privacy, but it doesn't go far enough.
What would regulation of political parties to protect voter privacy look like? Voters should have the right to know what data political parties hold about them. Voters should have the right to correct incorrect information, which is pretty common under other privacy regimes. Voters should have comfort that political parties will use the data they collect only for legitimate political purposes. As Professor Dubois mentioned, it's a good thing that political parties collect information about voters—you can find out what voters actually want and you can learn more about them—but that data should only be used for political purposes, electoral purposes.
One place where I think some of the generally applicable privacy rules would not work is, say, a “do not call” list. Political parties should be able to contact voters, and it would be a problem, I think, for democratic electoral integrity if 25%, 30% or 40% of voters were simply uncontactable by political parties. We have to adapt the content of the rules that are out there to the specific context of political parties and elections.
The second big issue I wanted to address is social media platform regulations. I know you've heard a lot about Facebook. A lot of this is contained in a paper I gave recently at MIT, which I'm happy to share with the committee if it's useful. The Canada Elections Act and related legislation governs political parties, leadership candidates, nomination contestants and third parties, as you well know. Social media platforms and technology companies need to be included under the set of groups that are explicitly regulated by electoral legislation and the legislation that is under the purview of this committee. How so? Platforms should be required to disclose and maintain records about the source of any entities seeking to advertise on them.
Bill does take some positive measures there. It would prevent, say, Facebook from accepting a foreign political advertisement for the purpose of influencing a Canadian election. That's a good step forward. It only applies during the election campaign, as I read it, and I would like to see a more robust rule that requires due diligence on the part of the social media companies. Is there a real person here? Where are they located? Are they trying to pay in rubles or dollars? Do they have an address? These are basic checks we would all pretty logically think of doing if we cared about the source of the donation.
That relates to foreign interference. It also relates to having a clean domestic campaign finance system, given all the advertising that happens online.
Another issue that I think requires further regulation is search terms. You can microtarget ads to particular users of a social media platform. If there's a political election ad on Hockey Night in Canada, we get to see the content of the ad. As members of the public, we don't necessarily get to see an ad that's microtargeted at an individual or a group of individuals and those individuals might not even know why they were targeted.
There are certain kinds of searches that we may think have no place in electoral policy. For instance, searching for racists is something you can do, potentially, and there's been a lot of media discussion about that and whether that did happen in the last U.S. election. I don't think we have concrete information about particular instances, but we know enough to know that search terms might be used in a way that we find objectionable, in broadly understood terms about how democracy should operate in Canada.
Therefore, there's public value in disclosing search terms, and value as well for the individuals who have been targeted and may not know why.
Another issue is that there should be a public repository of all election-related ads. Facebook has voluntarily done some of this. That decision could be rescinded at any point by people sitting in California. That's not an acceptable state of affairs to me, so that should be legally mandated.
A very interesting precedent has been raised about political communication on WhatsApp. There's even less publicity about what is sent by text messaging, especially on end-to-end encrypted applications like WhatsApp. It came out in the media recently that, in the Ontario provincial election, there were political communications on Xbox. I don't use the Xbox. I don't play a lot of video games, but people who do can be targeted and have election ads directed to them. In the public, we have no way of knowing what the content of those ads is, so public disclosure of election ads on an ongoing basis, not just during the election campaign, on all the relevant platforms is something that I would like to see.
Another matter is social media platforms and whether they should be treated as broadcasters. I'm not an expert in telecommunications law. I don't make any claims about whether, say, Facebook should count as a broadcaster, like CTV or CBC, generally. However, there are provisions in the Elections Act related to broadcasters, in particular section 348, which says that the broadcaster must charge the lowest available rate to a political party seeking to place an ad on its platform. This ensures that political parties have access to the broadcasting networks, but it also ensures that they're charged substantially the same rate. Therefore, CTV cannot say, “We like this party, so we're going to charge them less. We don't like that party, so we're going to charge them more”.
Facebook's ad auction algorithm potentially introduces a lot of variation in the price that an advertiser might pay to reach the exact same audience. That is something that I think is unwelcome, because it could actually tilt the scale in one direction or another.
We have a bit of a black box problem with the ad auction system. Facebook doesn't tell us exactly how it works because it's their proprietary information, but on the basis of the information we know, I think that there is something there for regulation under section 348, even if we don't treat Facebook like a broadcaster more generally.
The second last thing is liability. One way to incentivize compliance with existing laws is imposing liability on social media platforms. Generally, they're not liable for the content posted on them, so one of the big questions, before this committee and the House in general, is whether there should be liability for repeated violations of norms around elections. I think that's something that we may need to consider.
The last point I wanted to make is simply on election cybersecurity, because I understand that's something of interest to the committee. Cybersecurity costs a lot of money. For example, I think that Canadian banks spend a lot of money trying to ensure cybersecurity. That kind of spending may be difficult for political parties or other entities involved in the electoral sphere. Political parties receive indirect public subsidies through the rebate system, say, for election expenses. One way to incentivize spending on cybersecurity is to have a rebate for political parties or other entities that spend money on cybersecurity. That's an idea that I've been trying to speak about quite a bit lately.
The last issue is that the U.S. has come out with very detailed protocols on what should happen among government agencies in the event of a cyber-attack, an unfortunate potential event, say, in the middle of the October 2019 election. What would the protocols be? There may be discussions that I'm not privy to between Elections Canada and the new cybersecurity agency. I hope there are, but the public needs to have some confidence about what procedures would be followed, because if they don't know what the procedures are, there is a risk that an agency is seen as favouring one side or another, or of foreign interference, potentially, on behalf of one party or one set of entities. I think that's pretty self-evident based on what has happened in the U.S.
Some more publicity around those protocols, I think, would be very welcome.
Thank you very much for your attention. I look forward to your questions in either official language.
Thanks for having me today.
My name is Samantha Bradshaw. I'm a researcher on the Computational Propaganda Project at the University of Oxford. I'll shorten that to Comprop.
On the Comprop project, we study how algorithms, big data and automation affect various aspects of public life. Questions around fake news, misinformation, targeted political advertisements, foreign influence operations, filter bubbles, echo chambers, all these big questions that we're struggling with right now with social media and democracy, are things that we are researching and trying to advance some kind of public understanding and debate around.
Today I'm going to spend my 10 minutes talking through some of the relevant research that I think will help inform some of the decisions the committee would like to make in the future.
One of our big research streams has to do with monitoring elections and the kinds of information that people are sharing in the lead-up to a vote, and we tend to evaluate the spread of what we call “junk news”. This is not just fake news and not just information that is false or misleading; it also includes a lot of highly polarizing content—the hate speech, the racism, the sexism—the highly partisan commentary that's masked as news. These are the kinds of junk information that we track during elections. The United States was one of our most dramatic examples of the spread of junk news around elections. We found about a 1:1 ratio of junk information being shared to professionally produced news and information.
What's really interesting here is that if you look at the breakdown of where this information was spreading most, you see it tended to be targeted to swing states, and to the constituencies where 10 or 15 votes could tilt the scale of the election. This is really important because content doesn't just organically spread, but it can also be very targeted, and there can be organized campaigns around influencing the voters whose votes can turn an election.
The second piece of research that I'd like to highlight for everyone here today has to do with our work on what we call “cyber troops”. These are the organized public opinion manipulation campaigns. These are the people who work for government agencies, political parties or private entities. They have a salary, benefits. They sit in an air-conditioned room, and it's part of their job to work on these influence operations. Every year for the last two years we've done a big global inventory to start estimating some of the capacities of various governments and political party actors in carrying out these manipulation campaigns on social media.
There are a few interesting findings here. I'm not going to talk about all of them, for sake of time, but I'd like to highlight what we're seeing in democracies and what some of the key threats are. For democracies, it tends to be the political parties who are using these technologies, such as political bots, to amplify certain messages over others and maybe even spreading misinformation themselves in some of the cases we've seen. They tend to be the ones who use these organized manipulation tactics within their own population.
We also tend to see democracies using these techniques as part of more military psychological or influence operation activities. For the most part, it's the political parties who tend to focus domestically. We also see a lot of private actors involved in these sorts of campaigns around elections. While a lot of the techniques around social media manipulation were developed in more military settings for information warfare back in 2009 or 2010, now it tends to be private companies or firms that offer these as services. Companies such as Cambridge Analytica are the biggest example, but there are so many different companies out there working with politicians or with governments to shape public discussions online in ways that we might not consider healthy for democracy and for democratic debate.
I guess the big challenge for me when I'm looking at these problems is that a lot of the data that goes into the targeting is no longer held by the government, by Statistics Canada, which holds the best information about Canadian public life. Instead it's held by private companies such as Facebook or Google that collect personal information and then use it to target voters around elections.
In the past, it was all about targeting us commercially to sell us shampoo or other kinds of products. We knew it was happening and we were somewhat okay with it, but now when it comes to politics, selling us political ideologies and selling us world leaders, I think we need to take a step back to critically ask to what extent we should be targeted as voters.
I know that a lot of the laws right now are around transparency and explaining why we're seeing certain messages, but I would take that a step further and ask whether I should even be allowed to be targeted because I'm a liberal, or on an even more micro scale than that.
I know one of my colleagues earlier talked about targeting because you are identified as being a racist. At those much deeper levels as to who we are as individuals that really get to the core of our identity, I think we need to have a serious debate about that within society.
In terms of some of the future threats we're seeing around social media manipulation, disinformation and targeted advertisements, there are big questions around deep fakes, and around artificial intelligence making political bots a lot more conversational, so that the bot behind the account appears human and more genuine. That might make it harder for citizens and also for the platforms to detect the fake accounts that are spreading disinformation around election periods. That's one of the future threats on the horizon.
Professor Dubois talked about messaging platforms, things like WhatsApp and Telegram. A lot of these encrypted channels are incredibly hard to study because they are encrypted. Of course, encryption is incredibly important, and there's a lot of value in having these kinds of communication platforms, but the way they are affecting democracy by spreading junk information raises serious questions that we need to tackle, especially when you look at places like India or Sri Lanka where this misinformation is actually leading to death.
The third point on the horizon is regulation. I think there is a real risk of over-regulation in this area. Take Europe, for example, and Germany's NetzDG law. I applaud them for trying to take some of the first steps to make this situation better by placing fines on platforms, but there have been a lot of unintended consequences to that law, and we can expect to see more.
To use a good example, as soon as that law was put into place, someone from the alt-right party made some horribly racist comments online, and they got taken down, which is good. What also got taken down, though, was all the political satire, all the people calling that comment out as racist. You lose a lot of that really important democratic deliberation if you force social media companies to take on the burden of making all of those really hard decisions about content.
I do think one of the threats and one of the challenges in the future is over-regulation. As governments, we need to find a way to create smart regulations that get to the root of the problem instead of just addressing some of the symptoms, such as the bad content itself.
I will end my comments there. I look forward to your questions.
You make a very good point that the scenario in 2019 is very different from what it was in 2015. Things move very quickly, and the risk is that you could put in regulation that is overly intrusive, or that doesn't actually achieve what you want or has the wrong consequences. I'm always aware of that, but I think we have a lot of good information. Political parties are collecting enormous amounts of data, personal data, sensitive data. Parties have always done so, but it's just reached another level. To me this is a non-partisan issue. It doesn't affect one party more than the other, but currently all political actors have an incentive to up their data operations and their data game.
The pitch that I try to make in this space is that actually privacy rules are to the benefit of political parties. No one wants to be regulated, and it may seem onerous and it may cost money, but imagine what would happen if there was a hack of one of Canada's major political parties, similar to what happened with the Democratic Party in the United States. It wouldn't take many hacks, or many instances of personal information being disclosed by, say, a malicious foreign actor for the public to potentially lose faith or trust in that political party or the system as a whole. I think we are at a moment where it's very important to address the privacy issues, and doing so is in the interest of the political parties themselves.
I tried to suggest a few areas in terms of content, such as the right to know what data a political party holds about you, and the right to correct incorrect information. A lot of hard work is done by volunteers, as you all know, and when you're entering information on an app or on a piece of paper, it's very possible for information to be incorrect, and that may be something the voter, the individual, doesn't want. I think rules on who gets access to political party databases or at least disclosure about that might be helpful as well.
I understand those may at times seem onerous to political parties, but I think they go a long way toward instilling confidence in voters that the parties have their best interests in mind. The worst-case scenario is a hack. We've seen denial-of-service attacks on political parties. I believe the Communications Security Establishment report said there were low-level attacks in the 2015 election. CSE said there were over 40 incidents of interference around the world, so we shouldn't see Canada as isolated from that. I have a lot of concerns about 2019, and I think privacy addresses some of those.
Ms. Bradshaw, I'm very interested in your analysis of junk news. You talk about “cyber troops”, psy-ops, the weaponization of AI.
In 2015, in my region, because I basically live on Facebook, according to my wife, I saw a completely different narrative than what was in the national media. I saw deeply racist posts, mostly from Britain First, anti-Muslim posts, posts attacking immigrants and attacking refugees. They were very targeted. They were targeting on Facebook in key areas of my region among key voters.
It was a completely different message than anything that was happening nationally. It wasn't really noticed, because we still pay attention to what Peter Mansbridge says at six o'clock.
I always felt that out of that there had to have been a better or clearer type of targeting, such that these Facebook users who were not normally political were suddenly repeating this type of message. This seemed to be what we saw out of Brexit: the idea that groups such as Cambridge Analytica can specifically target the pivotal voters, the voters who are actually going to influence the result, and keep pushing this message at them.
You talked about how this was used in swing states with certain swing voters. I'd like you to elaborate on that.
I'd also like you to elaborate...because we keep talking about the third party actors as though it's just the bad, hired mercenary guns. You talked about the political influencers who actually are in the parties. Can you talk about the connections between people in the parties, these third party operatives, and how they're using this misinformation online?
If we look at the U.S. and junk news being spread in swing states, this is just based on Twitter. It wasn't a Facebook analysis, but just Twitter and what people were sharing as news and information.
We analyzed a couple of million tweets in the 11 days leading up to the vote. If you looked on average at the URLs that users were sharing in swing states, they tended to point to higher rates of junk news and information, compared with uncontested states. Therefore, part of this is the somewhat organic drive of spreading misinformation. It's not necessarily coming through the advertisements but it's being organically spread through the platforms by users, or maybe by bots, who did play somewhat of a role in amplifying a lot of those stories.
The way we measured where the accounts were coming from was by using geo-tagged data. If a user reported being in Michigan, for example, which was one of the swing states, that's how we determined where the information was coming from and where the junk news was concentrated.
There's the organic side of it, but there's also the targeted advertisement side of things. We have a lot of information on Russia, thanks to Facebook's disclosures around Russian operatives buying political advertisements and targeting them to voters based on their identities or values. They homed in on groups such as gun-rights activists and the Black Lives Matter movement.
They also tended to play both sides of the political spectrum. It wasn't only about supporting Trump. They also supported candidates such as Jill Stein and Bernie Sanders. They never supported Clinton, though. They would always launch attack ads against her.
The stuff that comes from the political parties themselves is really hard to trace. That relates back to the question you asked before on laws and what we can do to improve some of this targeting stuff.
We talked to and interviewed a lot of the bot developers who worked on campaigns for various parties. They were the ones who created the political bots to amplify certain messages. It's hard to trace their work back to a political party because of the campaign finance laws that only require reporting up to two levels. Generally how these contracts go out is that there will be a big contract to a big strategic communications firm, which will then outsource to maybe a specialized Facebook firm, which will then outsource work to a bunch of independent contractors. As you go down the list, you eventually get to the bot developer, who we interviewed.
We don't have any specific data on exactly which parties these groups worked for, at least none that I can share because of our ethics agreements with these developers. The big problem here is that we're unable to actually trace this work back to the parties because of campaign finance laws.
I need an introduction that may be long, so bear with me.
In my comments, I will set aside disinformation such as calls that send someone to the wrong poll, or anything else covered by the Criminal Code.
In talking about junk news, fake news, whatever you want to call it: during the Cold War, and especially in wartime, propaganda was one of the best tools in town to make sure that your message, whatever it was, went through. The same was true of magazines sent into Middle East countries, with photos of food, everyone at the table, big cars, well-dressed people, just to turn the population against their own government....
At the time, sending a thousand letters of publicity, whatever it was, cost a fortune, and you could only send one or two pages. Today you can send five million to 10 million emails in one click, at no cost.
I will submit for your consideration that you are looking at the problem from the wrong end. That's my hypothesis. We try to focus on those who provide this information and bad content, and we try to regulate the social media companies because they do things that are not good, probably because people are too lazy to do their own cross-checking and verification of information. By the way, these companies don't prevent people from seeing specific information. They simply pour out a huge amount of information, and you no longer know where you are.
From a regulatory standpoint, how do you expect me as a government to act on those companies that are sending this kind of content without touching their freedom of speech?
I understand that you want to separate things like voter suppression tactics from junk news and junk content. However, when we're thinking about how to deal with one of those things and not the other, it's very difficult to say that we would regulate the platforms only for one thing and we're going to have a completely different solution for the other. The conversations go together, because the mechanisms for getting the information to the front page of somebody's newsfeed are the same.
There's that sort of technical challenge there, but then I think the idea of how we balance this against questions about free speech is a really important one. We don't want to have a democracy where there are people who don't get to share their opinions, where certain views are silenced. That is certainly a problem.
We have to think about the changing media system, though. We have to think about the fact that it used to cost a lot more money and take a lot more resources to spread disinformation. Now it's very easy to spread it.
We also know that people used to not have a whole lot of choices in terms of what content they were getting. For them to be media literate, and not be lazy, in your terms, was a lot simpler. There were fewer checks that they needed to do. There was less work that they had to do to make sure they knew what content was showing up and who created it. There were only so many people who could afford a broadcast licence.
The expectations we put on the citizens in that context are very different from the expectations we would put on citizens now, in saying, “Look, we can't regulate platforms. This is the responsibility of citizens.” In the media environment that we have, I think it's unreasonable to expect citizens to be able to discern the different sources of content, what is true and what isn't, without some support.
I don't want to let citizens off the hook. I think that digital literacy and media literacy are very important, but I think it's one piece of a larger puzzle.
I thank the witnesses for being here this morning.
I'd like to go back to the roots of the communication problem.
In an election, candidates have to reach between 90,000 and 110,000 electors in a short period of time, in approximately two months. We, the members, have more time because we are already in our riding. If we want to be fair to everyone, that period lasts a maximum of two months.
It is a big challenge, because it's difficult to reach people. Elections Canada provides us with an address and a name, period.
There are two ways to reach the electors, and that is to go door to door or use their phone number, but there are fewer and fewer landlines. All we can do next is try to find cell phone numbers, which is more or less legal, because those are considered confidential in Canada.
Using digital platforms has become a necessary evil for all future politicians if they want to reach a large number of people in a very short period of time. We have less than 1,000 hours to reach 100,000 people, and that may not add up to a lot of minutes per person. That could be why targeting becomes very interesting to politicians.
Do you think it would be possible to create legislation for these platforms, so as to restrict access to some information, or give politicians more access to these platforms in order to reach people?
If we can't get the phone numbers, we turn to digital platforms. If politicians had access to cell numbers, they could call people and speak to them to at least have some direct contact.
Currently, we are doing politics through indirect contacts. To reach an elector, we are using machines and artificial intelligence, and that is not the essence of democracy. We aren't electing robots, but members of political parties and a prime minister.
The time we have to reach electors is very limited, and I see a problem there for democracy in the short, medium and long term, and that is the root of the problem. Everything else is related to the lack of time and information.
Should Elections Canada provide us with landline telephone numbers and cell phone numbers?
What started this whole investigation was the massive Facebook breach that led to Cambridge Analytica and the potential that that information undermined the Brexit vote. There was another Facebook breach of 50 million users. We have no idea what happened. We're told not to worry; as far as they can tell, everything's fine.
As soon as I heard that, I thought, “Wow, thank God we have Facebook on the case. There's nothing to worry about here.”
When we had Facebook here, we were asking about the mass murders that happened in Myanmar. It's not the responsibility of Facebook that there were mass murders, but Facebook was accused time and time again of not responding to the misuse of its platform. Their response was something like, “We admit it, we're not perfect.” We're talking about the power of a platform to be used to incite mass killing.
We're talking about a lot of tweaks to a system that suddenly seems more powerful, more encompassing than domestic law, than anything we've dealt with in the past, and that seems to be moving beyond many jurisdictions with very little regard. Do you believe that platforms like Facebook, like Google, need to be regulated, or can we trust them to respond when there's enough outrage? Does there need to be antitrust action taken to break them up, since Facebook now controls Instagram, WhatsApp, and many other platforms? Google is the same.
What do you see in terms of holding these companies to account? Is it self-regulation? Is it antitrust? Is it some form of national or international regulation? I put that open.
Thanks very much, Chair.
I've been impressed that all of you have said in different ways today that we should be cautious about over-regulating with regard to the digital world, social media and accumulated individual data in the elections process. I'm wondering how much transparency should be made available to individual voters about what political parties have on them or what political parties consider their voting inclination to be.
After all, door-knocking, face-to-face contact, is still here today. It used to be telephone as well, but with the absence of landlines that's pretty much gone the way of the dodo. When we knock on doors throughout one riding or another, we find out who is inclined to vote for the party during the writ period or during the entire parliamentary session.
On election day, we're interested in getting out the vote, so our encouragement, our messaging one way or another, is to those who we know are likely to support whichever of our parties exist. It's not that we're discouraging others who at the door have told us that they're not voting for us but for party X or party Y. It's simply that we go where the votes are. We don't waste our energy trying to encourage people at the moment of decision to go to the polls and vote for us.
Again, coming down to the thorny concept of who owns my identity in the digital world or the accumulated data world, would it be necessary to tell a voter that we would consider them to be unenthusiastic about supporting me as a candidate or perhaps even hostile and very unlikely to ever vote for me or my party? How would one divulge that information? Also, wouldn't there be an awful lot of make-work if everyone is demanding to know what the party thinks of them or how they consider them?