Thank you very much for your invitation to appear today. My apologies for not being able to attend in person.
I am Dr. Claire Wardle. I'm a research fellow at the Shorenstein Center on Media, Politics and Public Policy at Harvard's Kennedy School.
I'm also the executive chair of First Draft. We are a non-profit dedicated to tackling the challenges associated with trust and truth in a digital age. We were founded three years ago specifically to help journalists learn how to verify content on the social web, particularly images and videos. That remains my research speciality.
In 2016, First Draft began focusing on mapping and researching the information ecosystem. We designed, developed and managed collaborative journalism projects in the U.S. with ProPublica, and then in 2017 ran projects in France, the U.K. and Germany during their elections. This year we're running significant projects in the U.S. around the mid-terms and in Brazil around its elections, so we have a lot of on-the-ground experience of information disorder in multiple contexts.
I'm a stickler for definitions and have spent a good amount of time working on developing typologies, frameworks and glossaries. Last October, I co-authored a report with Hossein Derakhshan, a Canadian, which we entitled “Information Disorder”, a term we coined to describe the many varieties of problematic content, behaviours and practices we see in our information ecosystem.
In the report, we differentiated between misinformation, which is false content shared without any intention to cause harm; disinformation, which is false content shared deliberately to cause harm; and malinformation, a term we coined to describe genuine content shared deliberately to cause harm. An example of that would be leaked emails, revenge porn or an image from a previous natural disaster that recirculates during a hurricane. Our point is that the term “fake news” is not helpful and that in fact a lot of this content is not fake at all. It's how it's used that's problematic.
The report also underlined the need for us to recognize the emotional relationships we have with information. Journalists, researchers and policy-makers tend to assume a rational relationship. Too often we argue that if only there were more quality content we'd be okay, but humans seek out, consume, share and connect around emotions. Social media algorithms reflect this. We engage with content that makes us laugh, cry, angry or feel superior. That engagement means more people see the content and it moves along the path of virality.
Agents of disinformation understand that. They use our emotional susceptibilities to make us vulnerable. They write emotion-ridden headlines and link them to emotional images, knowing that it is these human responses that drive our information ecosystem now.
As a side note, in our election projects we use the tool CrowdTangle, which has now been acquired by Facebook, to search for potentially misleading or false posts. One of the best techniques we have is filtering our search results by Facebook's angry face reaction emoji. It is the best predictor for finding the content that we're looking for.
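To illustrate that technique, here is a minimal sketch of ranking an exported set of posts by the share of angry reactions. It assumes a generic CSV export with hypothetical column names (angry_count, total_reactions, post_url); it is not CrowdTangle's actual export schema or API.

```python
import csv

# Minimal sketch: rank exported posts by the share of "angry" reactions.
# Column names below are hypothetical, not CrowdTangle's real schema.
def rank_by_anger(path, min_reactions=100):
    rows = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            total = int(row["total_reactions"])
            if total < min_reactions:
                continue  # skip low-engagement posts
            angry_share = int(row["angry_count"]) / total
            rows.append((angry_share, row["post_url"]))
    # Posts with the highest proportion of angry reactions surface first.
    return sorted(rows, reverse=True)

if __name__ == "__main__":
    for share, url in rank_by_anger("posts_export.csv")[:20]:
        print(f"{share:.0%}  {url}")
```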
I have three challenges that I want to stress in this opening statement.
First, we need to understand how visuals work as vehicles for disinformation. Our brains are far more trusting of images, and it takes considerably less cognitive effort to analyze an image than a text article. Images also don't require a click-through. They sit already open in our feeds and, in most situations, on our smartphones, with which we have a particularly intimate relationship.
Second, we have an embarrassingly small body of empirical research on information disorder. Much of what we know has been carried out under experimental conditions with undergraduate students, and mostly U.S. undergraduate students. The challenges we face are significant and there's a rush to do something right now, but it's an incredibly dangerous situation when we have so little empirical evidence to base any particular interventions on. In order to study the impact of information disorder in a way such that we can really further our knowledge, we need access to data that only the technology companies have.
Third, the connection between disinformation and ad targeting is the most worrying aspect of the current landscape. While disinformation itself at the aggregate level might not seem persuasive or influential, targeting people based on their demographic profile, previous Internet browsing history and social graph could have the potential to do real damage, particularly in countries that have first-past-the-post electoral systems with a high number of close-fought constituencies. But again, I can't stress enough that we need more research. We simply don't know.
At this stage, however, I would like to focus specifically on disinformation connected to election integrity. This is a type of information disorder that the technology companies are prepared to take action around. Just yesterday, we saw Facebook announce that around the U.S. mid-terms, they will take down, not just de-rank, disinformation connected to election integrity.
If disinformation is designed to suppress the vote, they can take action, whereas with other forms of information disorder, which require external context, they are less willing to act, and right now that is actually the right thing.
In 2016 in the U.S., visual posts were micro-targeted at minority communities, suggesting they could stay at home and vote for Hillary Clinton by SMS, and giving a short code to text. Of course, this was not possible. As a minimum, we need to prioritize these types of posts. At a time when the whole spectrum is so complex, that's the type of post we should be taking action on.
In terms of other types of promoted posts that can be microtargeted, there is a clear need for more action; however, the challenge of definitions returns. If any type of policy or even regulation applies simply to ads that mention a candidate or party name, we would be missing the engine of any disinformation campaign, which is messages designed to aggravate existing cleavages in society around ethnicity, religion, race, sexuality, gender and class, as well as specific social issues, whether that's abortion, gun control or tax cuts, for example.
When a candidate, party, activist or foreign disinformation agent can test thousands of versions of a particular message against endless slices of the population, based on the available data on them, the landscape of our elections looks very different very quickly. These marketing tools were designed for toothpaste manufacturers wanting to sell more tubes, or even for organizations like the UNHCR, where I used to do that type of microtargeting to reach people who were more likely to support refugees. When those mechanisms have been weaponized, what do we do? There is no easy solution to this challenge. Disinformation agents are using these companies exactly as they were designed to be used.
If you haven't read it already, I recommend you read a report just published by the U.K.'s leading fact-checking organization, Full Fact. They lay out their recommendations for online political advertising, calling for a central, open database of political ads, including their content, targeting, reach and spend. They stress that this database needs to be in machine-readable formats, and that it needs to be provided in real time.
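The database Full Fact describes amounts to a structured, machine-readable record per ad covering its content, targeting, reach and spend, disclosed in real time. A sketch of what such a record might look like follows; the field names are my own assumptions for illustration, not Full Fact's specification.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative sketch of a machine-readable political-ad record of the kind
# Full Fact describes. Field names are assumptions, not Full Fact's spec.
@dataclass
class PoliticalAdRecord:
    ad_id: str
    advertiser: str
    content: str          # full ad text or a reference to the creative
    targeting: dict       # e.g. {"age": "18-34", "region": "Ontario"}
    reach: int            # impressions delivered
    spend_cents: int      # amount spent, in cents
    first_shown: datetime
    last_shown: datetime
    # Real-time disclosure: record when the entry was published to the database.
    disclosed_at: datetime = field(default_factory=datetime.utcnow)
```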
The question remains how to define a political ad, and whether we should even try to define it publicly, since doing so allows agents of disinformation to find other ways to disseminate their messages effectively.
I look forward to taking your questions on what is an incredibly complex situation.
Focusing on elections, we wish to highlight that Parliament has been forward-thinking: in 2014 it introduced a provision to the Elections Act directed at the impersonation of certain kinds of people in the election process. While such provisions are not specifically targeted at deepfake videos, such videos may very well fall within the scope of this section.
In addition, there have been examples in our Canadian case law where social media platforms have been compelled through what courts call Norwich orders to assist in the investigation of a crime committed on that social media platform. For example, a social media platform may be compelled by a court to reveal the identities of anonymous users utilizing the services of that social media platform. That is to say that legal mechanisms already exist and, in our experience, law-abiding third parties subject to such orders generally comply with the terms thereof.
There is also room for our courts to expand on common law torts and for governments to codify new ones.
In general, laws exist in common law and statute form. It is important not to lose sight of the fact that governments have the ability to create law; that is, governments are free to come up with laws and pass them into force. Such laws will be upheld, assuming that they comply with certain criteria. Even if they do not necessarily comply with those criteria, there are certain override provisions that are available.
An example of the codification of torts is British Columbia's Privacy Act, which essentially sets out in statute the cause of action for appropriation of personality.
Today we are flagging two other torts for discussion: unjust enrichment and the tort of false light.
With regard to unjust enrichment, such tort has generally been upheld in cases involving economic loss suffered by the claimant. However, it is reasonable to argue that the concept of losses should be expanded to cover other forms of losses that may not be quantifiable in dollars and cents.
Regarding the tort of false light, it exists in some states of the United States. Canada does not yet recognize this tort. However, the impact of deepfake videos may cause Canadian courts to rethink their position on it. Even if the tort of false light does not exist at common law, it is well within the power of a provincial government to enact it in statute, thereby creating it in statutory form.
In our article, we explore copyright, tort and even Criminal Code actions as potential, if sometimes imperfect, remedies. We note that deepfake technology, impressive and game-changing no doubt, is likely overkill for manipulating the public. One certainly would not need complex computer algorithms to fake a video of the sort that routinely serves as evidence or is newsworthy.
Think back to almost any security footage you have ever seen in a news story. It's hardly of impressive fidelity. It's often grainy or poorly angled, and usually only vaguely resembles the individuals in question.
While deepfake might convincingly place a face or characteristics into a video, simply using angles, poor lighting, film grain, or other techniques can get the job done. In fact, we've seen recent examples of speech synthesis seeming more human-like by actually interjecting faults such as ums, ahs, or other pauses.
For an alternative example, a recent viral video purportedly showed a female law student pouring bleach onto men's crotches on the Russian subway to deter the microaggression of manspreading, or men sitting with their legs splayed too widely apart. This video triggered predictably positive and negative reactions across the political spectrum. Reports later emerged that the video was staged with the specific intent of provoking a backlash against feminism and furthering social division in western countries. No AI technology was needed to fake the video, just some paid actors and a hot-button issue that pits people against each other. While political, it certainly didn't target Canadian elections in any meaningful way.
Deepfake videos do not present a unique problem but rather another aspect of a very old one. It is certainly worthy of consideration, but we do have two main concerns about any judicial or legislative response to deepfake videos.
The first is overspecification or overreaction. We have long lived, in the realm of photography, with the threat that deepfakes now pose for video. I'm no visual effects wizard, but when I was an articling student at my law firm more than a decade ago, as part of our tradition of roasting partners at our holiday parties, I very convincingly manipulated a photograph of the rapper Eminem, replacing his face with that of one of our senior lawyers. Most knew it was a joke, but one person did ask me how I got the partner to pose. Thankfully, he did not feel that his reputation was greatly harmed, and I survived unscathed.
Yes, there will come a time when clear video is no longer sacred, and an AI-assisted representation of a person's likeness will be falsified convincingly enough to be newsworthy. We've seen academic examples of this already, so legislators can and should ensure that existing remedies allow the state and victims to pursue malicious deepfake videos.
There are a number of remedies already available, many of which are discussed in our article, but in a future of digitally manipulable video, the difference between a computer simulation and the filming of an actual physical person may be a matter of content creator preference, so it may, of course, be appropriate to review legal remedies, criminal offences and legislation to ensure that simulations are just as actionable as physical imaging.
Our second concern is that any court or government action may misplace the burden of responsibility by attacking the wrong target. Pursuing a civil remedy through the courts, particularly over the borderless Internet, will often be a heavy burden to place on the victim of a deepfake, whether that's a woman victimized by deepfake revenge pornography or a politician victimized by a deepfake controversy. It's a laborious, slow and expensive process. Governments should not leave remedies entirely to the realm of victim-pursued legislation or litigation.
Canada does have experience in intervening in Internet activity, with varying degrees of success. Our privacy laws and spam laws have protected Canadians, and sometimes burdened platforms, but in the cybersecurity race among malicious actors, platforms and users, we can't lose sight of two key facts.
First, intermediaries, networks, social media providers, and media outlets will always be attacked by malicious actors just as a bank or a house will always be the target of thieves. These platforms are, and it should not be forgotten, also victims of malicious falsehood spread through them just as much as those whose information is stolen or identities falsified.
Second, as Dr. Wardle alluded to, the continued susceptibility of individuals to fall victim to fraud, fake news, or cyber-attack speaks to the fact that humans are inherently not always rational actors. More than artificial intelligence, it is the all too human intelligence with its confirmation bias, pattern-seeking heuristics, and other cognitive shortfalls and distortions that will perpetuate the spread of misinformation.
For those reasons, perhaps even more than rules or laws that ineffectively target anonymous or extraterritorial bad actors, or unduly burden legitimate actors at Canadian borders, in our view governments' responses must dedicate sufficient resources to education, digital and news literacy, and skeptical thinking.
Thanks very much for having us.
I am Tristan Harris. It's a pleasure to be with you today. My background was originally as a Google design ethicist, and before that I was a technology entrepreneur. I had a start-up company that was acquired by Google.
I want to mirror many of the comments that your other guests have made, but I also want to bring the perspective of how these products are designed in the first place. My friends in college started Instagram. Many of my friends worked at the early technology companies, and they actually have a similar basis.
What I want to avoid today is getting into the problem of playing whack-a-mole. There are literally trillions of pieces of content, bad actors, different kinds of misinformation, and deepfakes out there. These all present this kind of whack-a-mole game where we're going to constantly search for these things, and we're not going to be able to find them.
What I'd like to do today is offer a diagnosis that is really just my opinion about the centre of the problem, which is that we have to basically recognize the limits of human thinking and action. E.O. Wilson, the great sociobiologist, said that the real problem of humanity is that we have paleolithic emotions, medieval institutions and god-like technology. This basically describes the situation we are in.
Technology is overwriting the limits of the human animal. We have a limited ability to hold a certain amount of information in our head at the same time. We have a limited ability to discern the truth. We rely on shortcuts like what other people are saying is true, or the fact that a person who I trust said that thing is true. We have a limited ability to discern what we believe to be truthful using our own eyes, ears and senses. If I can no longer trust my own eyes, ears and senses, then what can I trust in the realm of deepfakes?
Rather than getting distracted by hurricane Cambridge Analytica, hurricane addiction and hurricane deepfakes, what we really need to do is ask what the generator function is for all these hurricanes. The generator function is basically a misalignment: technology is not designed to accommodate what you might call the ergonomics of the human animal.
Think of ergonomics: I can pick up a pair of scissors and use it a few times, and it will get the job done. However, if it's not geometrically aligned with the way the muscles work, it starts to stress the system. If it's highly misaligned, it causes enormous stress and can break the system.
Much like that, the human mind and our ability to make sense of the world and our emotions have a kind of ergonomic capacity. We have a situation where hundreds of millions of teenagers, for example, wake up in the morning, and the first thing they do when they turn off their alarm is turn their phone over. They are shown photo after photo after photo of their friends having fun without them. This is a totally new experience for 100 million teenage human animals who are waking up in the morning every day.
This is ergonomically breaking our capacity for getting an honest view of how much our friends are having fun. It's sort of a distortion. However, it's a distortion that starts to bend and break our normal notions and our normal social construction of reality. That's what's happening in each different dimension.
If you take a step back, the scale of influence that we're talking about is unique. This is a new form of psychological influence. Oftentimes what is brought up in this conversation is, “Well, we've always had media. We've always had propaganda. We've always had moral panic about how children use technology. We've always had moral panic about media.” What is distinctly new here? I want to offer four distinct new things that are unprecedented and new about this situation.
The first is the embeddedness and the scale. We have 2.2 billion human animals who are jacked into Facebook. That's about the number of followers of Christianity. We have 1.9 billion humans who are jacked into YouTube. That's about the number of followers of Islam. The average person checks his or her phone 80 times a day. Those are Apple's numbers, and they are conservative. Other numbers say that it's 150 times a day. From the moment people wake up in the morning and turn off their alarms to the moment they set their alarms and go to sleep, basically all these people are jacked in. The second you turn your phone over, thoughts start streaming into your mind that include, “I'm late for this meeting”, or “My friends are having fun without me.” All of these thoughts are generated by screens, and it's a form of psychological influence.
The first thing that's new here is the scale and the embeddedness, because unlike other forms of media, by checking these things all the time, they have really embedded themselves in our lives. They're much more like prosthetics than they are like devices that we use. That's the first characteristic.
The second characteristic that's different and new about this form of media and propaganda is the social construction of reality. Other forms of media, television and radio did not give you a view of what each of your friends' lives were like or what other people around you believed. You had advertising that showed you a theoretical couple walking on a theoretical beach in Mexico, but not your exact friends walking on that specific beach, and not the highlight reels of all these other people's lives. The ability to socially construct reality, especially the way we socially construct truth, because we look at what a lot of other people are retweeting, is another new feature of this form of psychological manipulation.
The third feature that's different is the aspect of artificial intelligence. These systems are increasingly designed to use AI to predict the perfect thing that will work on a person. They calculate the perfect thing to show you next. When you finish that YouTube video, and there's that autoplay countdown five, four, three, two, one, you just activated a supercomputer pointed at your brain. That supercomputer knows a lot more information about how your brain works than you do because it's seen two billion other human animals who have been watching this video before. It knows the perfect thing that got them to watch the next video was X, so it's going to show another video just like X to this other human animal. That's a new level of asymmetry, the self-optimizing AI systems.
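To make that self-optimizing objective concrete, here is a deliberately simplified sketch of the kind of selection being described: autoplay whichever candidate video a model predicts will keep this particular viewer watching longest. The function and field names are hypothetical illustrations, not YouTube's actual system; the predictor stands in for a model fitted to billions of prior viewing sessions.

```python
# Deliberately simplified illustration of an engagement-maximizing autoplay
# choice. Not YouTube's actual system: predict_watch_minutes is a stand-in
# for a model trained on billions of prior viewing sessions.
def choose_autoplay(viewer_profile, candidate_videos, predict_watch_minutes):
    return max(candidate_videos,
               key=lambda video: predict_watch_minutes(viewer_profile, video))

# Toy usage with a dummy predictor that happens to favour sensational clips.
if __name__ == "__main__":
    videos = [{"id": "calm-doc", "sensationalism": 0.1, "length": 12},
              {"id": "outrage-clip", "sensationalism": 0.9, "length": 8}]
    dummy_predictor = lambda viewer, v: v["sensationalism"] * v["length"]
    print(choose_autoplay({"user": "example"}, videos, dummy_predictor))
```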
The fourth new distinct thing here is personalization. These channels are personalized. Unlike forms of TV, radio or propaganda in the past, we can actually provide two billion Truman Shows or two billion personalized forms of manipulation.
My background in coming to these questions is that I studied at the Persuasive Technology Lab at Stanford, which taught engineering students essentially how to apply everything we knew about the fields of persuasion, Edward Bernays, clicker training for dogs, the way slot machines and casinos are designed, to basically figure out how you would use persuasion in technology if you wanted to influence people's attitudes, beliefs and behaviours. This was not a nefarious lab. The idea was could we use this for good? Could you help people go out and get the exercise they wanted, etc.?
Ultimately, in the last class at the Persuasive Technology Lab at Stanford, someone imagined the use case of, what if in the future you had a perfect profile of what would manipulate the unique features, the unique vulnerabilities, of the human being sitting in front of you. For example, the person may respond well to calls from authority, that the Canadian government's summoning the person would be particularly persuasive to his or her specific mind because the person really falls for authority, names like Harvard or the Canadian government, or is really susceptible to the fact that all of his or her friends or a certain pocket of friends really believed something. By knowing people's specific vulnerabilities, you could tune persuasive messages in the future to perfectly manipulate the person sitting in front of you.
This hypothetical was presented by one of the groups in the last class of my persuasive technology course, on the future of the ethics of persuasive technology, and it horrified me. That hypothetical experiment is basically what we live inside of every single day. It's also what was more popularly packaged up by Cambridge Analytica, where, by having the unique personality characteristics of the person you're influencing, you could perfectly target political messaging.
If you zoom out, it's really all about the same thing, which is that the human mind, the human animal is fundamentally vulnerable, and there are limits to our capacity. We have a choice. We either redesign and realign the way the technology works to accommodate the limits of human sense making and human choice making or we do not.
As a former magician, I can tell you that these limits are definitely real. What I hope to accomplish in this meeting today is to make the case that we have to bring technology back inside those limits. That's what we work on with our non-profit group, the Center for Humane Technology.
Good morning, Mr. Chairman. It's a privilege to appear before your committee. Thank you for the opportunity.
My name is Vivian Krause. I'm a Canadian writer and I have done extensive research on the funding of environmental and elections activism. My understanding is I have been asked to speak to you today on the topic of elections integrity and specifically about issues related to social media.
Based on my research, Mr. Chairman, it is clear to me that the integrity of our 2015 federal election was compromised by outside interests. Furthermore, our federal election was compromised because the charities directorate at the CRA is failing to enforce the Income Tax Act with regard to the law that all charities must operate for purposes that are exclusively charitable.
I'll get to the CRA in a minute, but first I'd like to speak briefly about the non-Canadian organizations that intervened in the 2015 election and why. As evidence, Mr. Chairman, I would ask your committee to please take a look at the 2015 annual report of an American organization called the Online Progressive Engagement Network, which goes by the acronym OPEN. This is an organization based in Oakland, California. I have provided a copy to the clerk. In the annual report the executive director of OPEN writes that his organization based in California ended the year 2015 with “a Canadian campaign that moved the needle during the national election, contributing greatly to the ousting of the Conservative Harper government.”
Who is OPEN, and how did it involve itself in the 2015 federal election? OPEN is a project of the strategic incubation program of an organization called the Citizen Engagement Laboratory, CEL. The Citizen Engagement Laboratory has referred to itself as the people behind the people. It says on its website that it is dedicated to providing best-in-class technology, finance, operations, fundraising and strategic support.
What does OPEN do exactly? According to OPEN, it provides its member organizations with financial management, protocols, and what it calls surge capacity in the early days of their development. OPEN helps “insights, expertise and collaboration flow seamlessly” across borders, adding that this helps new organizations to “launch and thrive in record time”.
Indeed, that is precisely what Leadnow did in the 2015 federal election. As part of his job description for OPEN, the executive director says he was employed to “advise organizations on every stage of the campaign arc: from big picture strategy to messaging to picking the hot moments”.
OPEN is funded, at least partially, by the Rockefeller Brothers Fund based in New York. Tax returns and other documents, which I have also provided to the clerk, state that since 2013 the Rockefeller Brothers Fund has paid at least $257,000 to OPEN. In its literature, OPEN describes itself as a B2B organization with “a very low public profile”. It says this is intentional, as the political implications of an international association can be sensitive in some of the countries in which it works. In his Facebook profile, the executive director of OPEN says of himself that he can see the Golden Gate from one house (in other words, from San Francisco) and the Washington Monument from the other (in other words, from Washington, D.C.), and he adds that he has spent a lot of time interloping in the affairs of foreign nations.
What did OPEN do exactly in the 2015 federal election? OPEN helped to launch Leadnow, a Vancouver-based organization. We know this because OPEN's executive director tweeted about how he came to Canada in 2012, stayed at a farmhouse near Toronto and worked with Leadnow. Other documents also refer to OPEN's role in launching and guiding Leadnow.
We know for sure that Leadnow was involved with OPEN because there's a photo of Leadnow staff in New York attending an OPEN meeting with the Rockefeller Brothers Fund in 2012. Another photo of Leadnow is at an OPEN meeting in Cambridge, England, and there is a photo of Leadnow staff in Australia in January 2016, shortly after the federal election, winning an award from OPEN, an American organization, for helping to defeat the Conservative Party of Canada.
Leadnow claims credit for helping to defeat 26 Conservative incumbents. That's a stretch, I would guess, but in a few ridings I think it stands to reason that Leadnow may have had an impact on the vote.
For example, in Winnipeg's Elmwood—Transcona riding, where Leadnow had full-time staff, the Conservative incumbent lost by only 61 votes. Leadnow has presented itself as a thoroughly Canadian youth-led organization, the brainchild of two university students, but as we now know, that is not the whole story.
I think it is important to note that this Rockefeller-backed effort to topple the Canadian government did not emerge out of thin air. This effort to influence Canada's federal election was part and parcel of another Rockefeller-funded campaign called the tar sands campaign, which began in 2008, 10 years ago. Indeed, the tar sands campaign itself has also taken credit in writing for helping to defeat the federal government in 2015.
For many years, the strategy of the tar sands campaign was not entirely clear, but now it is. Now the strategy of the tar sands campaign is plenty clear, because the individual who wrote the original strategy and has been leading the campaign for more than a decade has written, “From the very beginning, the campaign strategy was to land-lock the tar sands so their crude could not reach the international market where it could fetch a high price per barrel.”
Now, turning to the CRA, I'll be brief. As an example of what I regret to say I think is a failure on the part of the charities directorate to enforce the Income Tax Act, I refer the committee to three charities. These are the DI Foundation, the Salal Foundation and the Tides Canada Foundation. As I see it, the DI Foundation and the Salal Foundation are shell charities that are used to Canadianize funds and put distance between the Tides Canada Foundation and the Dogwood initiative. The DI Foundation, a registered charity, has done absolutely nothing but channel funds from the Tides Canada Foundation to the Dogwood initiative, which is one of the most politically active organizations in our country.
In the 2015 federal election, the Dogwood initiative was a registered third party, and it reported, for example, that it received $19,000 from Google. The Dogwood initiative is also one of the main organizations in the tar sands campaign, as it received more than $1 million from the American Tides Foundation in San Francisco. One of its largest funders, in fact, I believe its single largest funder, is Google.
According to U.S. tax returns for 2016, Google paid Tides $69 million. The Tides Foundation in turn is one of the key intermediary organizations in the tar sands campaign, and has made more than 400 payments by cheques and wire transfers to organizations involved in the campaign to landlock Canadian crude and keep it out of international markets.
Mr. Chairman, in conclusion, I think it's important to note that the interference in the 2015 federal election was done with a purpose. It was done as part of a campaign to landlock one of our most important national exports. I hope that my remarks have given you a glimpse of some of the players that were involved, the magnitude of the resources at their disposal, and perhaps also some actionable insights about what your committee could do to better protect the integrity of our elections in the future.
Thank you very much.
Let's start with Tides Canada. The American Tides Foundation, based in San Francisco, incorporated in British Columbia in the late 1990s and then changed its name to become the Tides Canada foundation. The American Tides Foundation, I think it would be fair to say, is the parent organization of Tides Canada.
The Dogwood initiative was initially created out of the American Tides Foundation. Initially it was called Forest Futures, and then it changed its name around 2004 to become Dogwood.
Leadnow, if I'm not mistaken, began around 2010 as a not-for-profit. Dogwood itself is also a not-for-profit, but it has been funded by at least 10 registered charities over the years. As I mentioned, one of the charities that funds it is the Salal Foundation. It was created by the same people, including the former chairman of the board of the Tides Foundation. For 12 years, it was dormant. It was inactive. Then, in 2012, it basically sprang to life, and Salal's revenues have now gone from about $200,000 to more than $1 million. In fact, last year, the number one top recipient of funds from Tides Canada, if I'm not mistaken, was Salal, which got $488,000.
I think what we're seeing is that in the tar sands campaign, the campaign to landlock the crude from western Canada, more than 100 organizations have been funded in the U.S., Canada and Europe. The top two recipients are the Sisu Institute Society, which funds Leadnow, and Dogwood.
This has been a fascinating study, because we're trying to look at protection of the integrity of the electoral system, but we're starting to, I think, deal with much larger issues that are going to be much more complex for parliamentarians to consider.
Mr. Harris, I am a digital addict. My wife has called me out on that many times, especially Friday nights. I'm not allowed to go on Facebook and Twitter when I get home after a week, just to try to civilize me. I've checked my phone probably 12 times since you were talking. But I did spend half my life without digital—as a kid reading comic books, climbing trees, listening to vinyl, spending time outside the principal's office without a phone—and I'm addicted, and I accept it.
I'm concerned about the picture you're painting of the massive level at which we are jacked into these systems that are growing stronger all the time. I look at young people, and I look at kids I see in the grocery store whose mothers have given them a phone to play with. What do you think the larger long-term impacts are on brain development, on the ability to have young people develop internal spaces, about the ability to imagine and the ability to remember? Are you concerned that, as we're jacked into these much larger systems, we're actually rewiring our internal spaces?
Yes. I'm so glad you brought this up.
There are a number of issues to be concerned about, so I'm going to try and figure out how to formulate my response.
One way to look at this is to think about protecting children. Marc Andreessen, the founder of Netscape, has this insight that software is eating the world. That means that in every single industry or domain, whether it's the way children consume media or the way we get around in Ubers versus taxis, if you throw technology into that domain, it will do the thing more efficiently. So software will continue eating the world. However, we don't regulate software, so what that really means is that “deregulation is eating the world”.
I don't know how it works in Canada, but in the United States I think we still have protections about Saturday morning cartoons. We recognize there is a particular audience, which is to say, children, and we want to protect them. We don't want to let advertisers do whatever they want during the Saturday morning cartoon period.
As soon as you basically offload that regulated channel of television and formal Saturday morning programming, and say let's just let YouTube Kids handle it, then you get algorithms, just machines, where the engineers at YouTube have no idea what they're putting in front of all of those 2.2 billion channels, of which several hundred million are for children.
That's how to see the problem. We have a five-second delay on television for a reason. There are 100 million people or 50 million people on one side of the screen and a couple of people who are monitoring the five-second delay, or the editorial. If some gaffe happens, or there is profanity or something like that and you want to protect...you have some kind of filtering process.
Now we have 2.2 billion channels. This is the same whether on the other side of that channel is a child or a vulnerable person in Myanmar who just got the Internet and is exposed to these things. The unified way of seeing this problem is that there is a vulnerability in the audience, whether that audience is a child, someone in Myanmar, or someone in an election. If we don't acknowledge that vulnerability, then we're going to have a huge problem.
The last thing I'll say, just to your point about children, is that when the engineers at Snapchat or Instagram—which, by the way, make the most popular applications for children—go to work every day, these are 20- to 30-year-olds, mostly male, mostly engineers, computer science or design-trained individuals, and they don't go to work every day asking how they protect the identity development of children. They don't do that. That's not what they do. The only thing they do is go to work and ask, “How can we keep them hooked? Let's introduce this thing called a “follow button”, and now these kids can go around following each other. We've wired them all up on puppet strings, and they're busy following each other all day long because we want them just to be engaged.”
Obviously, people have some amount of free choice to double-confirm everything that they're reading and things like that. I try to look, as a sort of a behavioural scientist, at just the reality of human behaviour. What do most people do most of the time? The challenge is that when we are so overloaded and our attention is so finite and we're constantly anxious and checking things all the time, there really isn't that time to realistically double-check everything.
There are two kinds of persuasion. There's persuasion where, if I, the magician, tell you how the trick works, suddenly it doesn't work anymore, because you know that it's a technique. There are forms of advertising where that's happened. The second kind of persuasion is that even if I tell you what I'm doing, it still works on you. A good example of this is what Dan Ariely, the famous behavioural economist, says about flattery: if you tell someone, “I'm about to flatter you and I'm making it up,” it still feels really good when you hear it.
A second example of this is if you put on a virtual reality helmet. I know that I'm here in San Francisco in this office, but in the virtual reality helmet, it looks like I'm on the edge of a cliff. If you push me, even though my mind knows that I'm here in San Francisco, millions of years of evolution make me feel like I should not fall over.
What we have to recognize is that the socio-psychological instincts, such as those that arise when children are shown an infinite set of photos of their friends having fun without them—“I know that is a highlight reel; I know that is a distortion”—still have a psychological impact on people. The same thing is true of the kinds of toxic information or malinformation that Claire is talking about.
You have described it. We've decentralized vulnerabilities so that now, instead of waiting to pay to publish something, I just basically ride on the waves of decentralized chaos and use people's socio-psychological vulnerabilities to spread things that way.
In terms of regulation, one thing we need to think about is at what point a publisher is responsible for the information it is transmitting. If I'm The New York Times and I publish something, I'm responsible for it because I have a licence and I've trained as a journalist and could lose the credibility of being a trusted organization.
One thing the technology companies do is make recommendations. We've given them a safe harbour provision so that they're not responsible for the content that people upload, because they can't know what people are uploading. That makes sense, but increasingly, what people are watching is driven by recommendations; with YouTube, for example, 70% of what people watch is driven by the recommendations on the right-hand side. Increasingly, the best way to get your attention is to calculate what should go there.
If you're making recommendations that start to veer into the billions, for example, Alex Jones' infowars conspiracy theory videos were recommended 15 billion times, at what point is YouTube, not Alex Jones, responsible for basically publishing that recommendation? I think we need to start differentiating when you are responsible for recommending things.
Thank you very much, Chair.
Just to respond briefly to Mr. Picard's quibble, I think that whenever a foreign organization moves foreign funds into the Canadian electoral process through shell companies or confected Canadian companies to misrepresent the source of that money, the term “money laundering” is quite appropriate.
Mr. Harris, I'd like to come back to you. In a profile in The Atlantic magazine, you were described as “the closest thing Silicon Valley has to a conscience”. There has been an awful lot of discussion of the social responsibility of what one of our witnesses called the “data-opolies” with regard to the imbalance between the search for revenue and profit and growing the companies versus responsible maintenance and protection of individual users' privacy.
I'm just wondering what your thoughts are on whether the big data companies do, in fact, have a conscience and a responsibility and a willingness, a meaningful willingness, to respond to some of the things we've seen coming out of, principally, the Cambridge Analytica, Facebook, AggregateIQ scandal. We know, and we've been told many times, that it's only the tip of the iceberg in terms of the potential for gross invasion of individual users' privacy.
Yes, we have to look at their business models and at their past behaviour. It wasn't until the major three technology companies were hauled to Congress in November 2017 that we even got the honest numbers about how many people, for example, had been influenced in the U.S. elections. They had claimed it was only a few million people. Claire and I both know many researchers who did lots of late work until three in the morning, analyzing datasets and saying it had to be way more people than that. Again, we didn't get the honest number that more than 126 million Americans, 90% of the U.S. voting population, were affected until after we brought them to testify.
That's actually one of the key things that caused them to be honest. I say this because they're in a very tough spot. Their fiduciary responsibility is to their shareholders, and until there's an obvious notion that they will be threatened by not being honest, we need that public pressure.
There are different issues here, but when I was at Google I tried to raise the issue of addiction. It was not taken as seriously as I would have liked, which is why I left, and it wasn't until there was more public pressure on each of these topics that they actually started to move forward.
One last thing I will say is that we can look to the model of a fiduciary. We're very worried about privacy, but we just need to break it down. I want to hand over more information to my lawyer or doctor because with more information, they can help me more. However, if I am going to do that, we have to be bound into a contract where I know for sure that you are a fiduciary to my interests. Right now, the entire business model of all the data companies is to take as much of that information as possible and then to enable some other third party to manipulate you.
Imagine a priest in a confession booth, except instead of listening carefully and compassionately and caring about keeping that information private, the only way the priest gets paid for listening to two billion people's confessions is when they allow third parties, even foreign state actors, to manipulate those people based on the information gathered in the confession booth. It's worse, because they have a supercomputer next to them calculating two billion people's confessions so when you walk in, they know the confessions you're going to make before you make them.
It's not that we don't want priests in confession booths; it's just that we don't want priests with the business model of basically having an adversarial interest manipulating your vulnerable information.
You gave an example in one of your articles about YouTube, and you've mentioned it here also. I'm just going to tell you about something that happened to me.
Last week, I went to a grade 5 civics class and I was speaking with them. There was a Q and A after, and some of the students in grade 5, who are 10 years old, asked me what my favourite YouTube channel or video was. When I go on YouTube, I have an interest in TED Talks, or something politically related where you're watching a speech or something, but I'm also fascinated by how quickly the right side of the screen fills up with suggested topics.
If I'm watching that stuff and I don't have an awareness, either I'm young or maybe not as knowledgeable, I'm technically being hacked. I'm being injected with information that I didn't seek. I might have tried to find something that I found of interest, through an article or an ad or something, and all of a sudden all these videos are appearing, which are furthering the original premise.
If you don't have the ability to differentiate between what is right and what is wrong, then technically that's a hack. But if you look at the amount of information that's being uploaded on any given day, how would...? You talked about regulating the information. How is it possible that YouTube can regulate that information when you have so much information being uploaded? What kind of advice could you give us as lawmakers? How would you even contemplate regulating that information?
This is why I said.... The advertising business model has incentivized them to have increasing automation and channels that are doing all this. They want to create an engagement box—it's a black box; they don't know what's inside it—where more users keep signing up, more videos keep getting uploaded, and more people keep watching videos. They want to see all those three numbers going up and up.
It's a problem of exponential complexity that they can't possibly hire trillions of staff to look at and monitor and moderate the—I forget what the number is—I think billions of hours or something like that are uploaded now every day. They can't do it.
They need to be responsible for the recommendations, because if you print something in a newspaper and you reach 10 million people, there's some threshold by which you're responsible for influencing that many people. YouTube does not have to have the right-hand side bar with recommendations. The world didn't have a problem before YouTube suddenly offered it. They just did it only because the business model of maximizing engagement asked them to do it. If you deal with the business model problem, and then you say they're responsible for those things, you're making that business model more expensive.
I think of this very much like coal or dirty-burning energy and clean-burning energy.
Right now we have dirty-burning technology companies that use this perverse business model that pollutes the social fabric. Just as with coal, we need to make that more expensive, so you're paying for the externalities that show up on society's balance sheet, whether those are polarization, disinformation, epistemic pollution, mental health issues, loneliness or alienation. That has to be on the balance sheets of companies.
Ms. Wardle, I want to talk about the expanse and the changing nature of disinformation. My region, my constituency, is bigger than Great Britain, so one of the easiest ways to engage with my voters is through Facebook. In my isolated indigenous communities, Facebook is how everyone talks.
There are enormous strengths to it, but I started to see patterns on Facebook. For example, there was the Fukushima radiation map showing how much radiation was in the Pacific Ocean. It was a really horrific map. I saw it on Facebook. People were asking what I was going to do about it. I saw it again and again, and I saw people getting increasingly agitated. People were asking how come no newspaper was looking at it and why the media was suppressing it, and they were saying that Obama had ordered that this map not be talked about. I googled it. It's a fake. It didn't do a lot of damage, but it showed how fast this could move.
Then there was the burka ad of the woman in the grocery store. It appeared in America, then in England, and then in Canada during the 2015 election. It was deeply anti-Muslim. People I knew who didn't know any Muslims were writing to me, growing increasingly angry, because they saw this horrific woman in a burka abusing the mother of a soldier. That also was a fake, but where did it come from?
Now we have Myanmar, where we're learning how the military set up the accounts to push a genocide. When we had Facebook here, they kind of shrugged and said, “Well, we admit we're not perfect.”
We're seeing an exponential weaponization of disinformation. The question is, as legislators, at what point do we need to step in? Also, at what point does Facebook need to be held more accountable so that this kind of disinformation doesn't go from just getting people angry in the morning when they get up to actually leading to violence, as we've seen in Myanmar?
A big part of our focus ends up being on technology, but we also need to understand what this technology sits on top of, and if we don't understand how societies are terrified by these huge changes we're seeing, which we can map back to the financial crisis.... We're seeing huge global migration shifts, so people are worried about what that does to their communities. We're seeing the collapse of the welfare state. We're also seeing the rise of automation, so people are worried about their jobs.
You have all of that happening underneath, with technology on top of that, so what is successful in terms of disinformation campaigns is content that reaffirms people's world views or taps into those fears. The examples that you gave there are around fears.
Certainly, when we do work in places such as Nigeria, India, Sri Lanka and Myanmar, you have communities that are much newer to information literacy. If we look at WhatsApp messages in Nigeria, we see that they look like the sorts of spam emails that were circulating here in 2002, but to Tristan's point, in the last 20 years many people in western democracies have learned how to use heuristics and cues to make sense of this.
To your point, this isn't going anywhere, because it feeds into these human issues. What we do need is to put pressure on these companies to say that they should have moderators in these countries who actually speak the languages. They also need to understand what harm looks like. Facebook now says that if there's a post in Sri Lanka that is going to lead to immediate harm, to somebody walking out of their house and committing an act of violence, they will take it down. What we don't have as a society is a way to say what harm looks like over a 10-year period, or what long-term impact memes full of dog whistles actually have.
I'm currently monitoring the mid-term elections in the U.S. All of the stuff we see every single day that we're putting into a database is stuff that it would be really difficult for Facebook to legislate around right now, because they would say, “Well, it's just misleading” and “It's what we do as humans”. What we don't know is what this will look like in 10 years' time when all of a sudden the polarization that we currently have is even worse and has been created by this drip-feed of content.
I'll go back to my point at the beginning and say that we have so little research on this. We need to be thinking about harm in those ways, but when we're going to start thinking about content, we need to have access to these platforms so we can make sense of it.
Also, as society, we need groups that involve preachers, ethicists, lawyers, activists, researchers and policy-makers, because actually what we're facing is the most difficult question that we've ever faced, and instead we're asking, as Tristan says, young men in Silicon Valley to solve it or—no offence—politicians in separate countries to solve it. The challenge is that it's too complex for any one group to solve.
What we're looking at is that this is essentially a brains trust. It's cracking a code. Whatever it is, we're not going to solve this quickly. We shouldn't be regulating quickly, but there's damage.... My worry is that in 20 years' time we'll look back at these kinds of evidence proceedings and say that we were sleepwalking into a car crash. I think we haven't got any sense of the long-term harm.
I can only speak to the particular area that I am familiar with, which is the use of funds via charities.
When you look at the reporting in the 2015 federal election, the top advertisers, the ones that were all funded as part of the tar sands campaign, were, if you group them together, the single biggest advertiser. If you take those top six groups, they reported more than half a million dollars. That was more than even the United Steelworkers. That's why I looked at that. They weren't way down the list; they were at the top of the list.
In terms of recommendations, yes, ironically it seems to me that the problem and the solution start at the CRA, not Elections Canada.
A couple of other things would help, too. One of them is in the Elections Act, where there is a section that lists things a third party advertiser needs to report their spending on, and a list of things that they don't need to report.
Right now, for instance, the creation of websites is on the list of expenditures they don't need to report. My understanding is that this is because that part of the act was written more than 10 years ago, when expenditures on that were small and not very relevant. I think we need to update and remove that. It is now not a small part of the election spending budget, but in fact the main part.
That would be one thing that could be done.