Thank you to the chair, the vice-chair, and the committee members for the opportunity to give evidence today. I have followed the work of this committee as it relates to Cambridge Analytica fairly closely, especially as it has intersected with investigations in the United Kingdom and the United States. I have been impressed with the committee's unwavering efforts to fact-find and truth-tell as it has probed the company entangled in the transnational election and data crimes investigations of 2016: AggregateIQ, an exclusive vendor of SCL Elections Limited, which is the registered data controller of Cambridge Analytica in the United Kingdom.
Kindly allow me to offer a brief chronology of my personal effort to win full disclosure of an SCL Elections voter profile generated from the 2016 presidential election cycle, under the U.K. Data Protection Act of 1998, in the courts and through enforcement actions of the U.K. Information Commissioner's Office.
In January 2017, I filed a subject access request at cambridgeanalytica.org to request my voter file, after being advised this was possible. I was instructed to pay SCL Elections Limited a £10 fee and provide copies of government ID and a utility bill to validate residency.
In March 2017, I received from firstname.lastname@example.org a response attempting compliance with the U.K. Data Protection Act of 1998, which included a letter signed by SCL Group chief operating officer Julian Wheatland and an Excel spreadsheet containing voter registration data and an ideological model of 10 political topics ranked with partisanship and participation predictions. I expected to receive much more data, as Alexander Nix, Cambridge Analytica's CEO, had frequently boasted of collecting up to 5,000 data points on each U.S. voter.
In July 2017, I filed a complaint with the Information Commissioner's Office under section 7 of the U.K. Data Protection Act that SCL Elections Limited had refused to answer any questions or respond to any concerns regarding the data provided.
In October 2017, I launched a crowdfunding campaign to file a claim in the High Court of Justice against SCL Elections and related companies.
In February 2018, I gave evidence to the U.K. House of Commons select committee on digital, culture, media and sport when it convened hearings in Washington, D.C.
In March 2018, I filed and served SCL Group and Cambridge Analytica with a section 7 Data Protection Act claim demanding full disclosure of my voter profile, supported by expert witness statements evaluating how the data provided could not possibly be complete.
In May 2018, the Information Commissioner's Office issued an enforcement notice to SCL Elections Limited, to comply with its order to fully disclose my voter data file, under criminal penalty.
In June 2018, while I was giving evidence to the LIBE committee in the European Parliament, with the information commissioner and deputy information commissioners present on the dais, the deadline for SCL Elections to respond to the enforcement order expired as we sat there in Brussels.
In December 2018, I instructed an insolvency barrister to challenge the administrators attempting to liquidate most of the SCL Group companies, and I won a court order for disclosure of the complete administrators' filings, which they had refused to share with us.
In January 2019, the ICO prosecuted SCL Elections for failing to respond to its enforcement order to disclose my data. Despite publishing an intent to plead not guilty in its public filings, the joint administrators entered a surprise plea of guilty and then paid court fines and costs. It was reported at this trial that the ICO finally received passwords to servers seized from Cambridge Analytica/SCL under criminal warrant in March 2018. According to court-ordered disclosures I obtained in December 2018, the ICO had been seeking these passwords potentially as early as May 2018.
In March 2019, the high court in the U.K. will hear our challenge to the joint administrators' proposal to liquidate the SCL Group companies. Evidence will be presented highlighting concerns that the administrators and directors have misled the court on critical matters. In addition, the high court has been notified of evidence discovered by Chris Vickery, another panellist today, indicating how former Cambridge Analytica and SCL employees had been building new companies while accessing databases of CA/SCL that remain in the cloud.
We will continue to pursue complete disclosure of my data file and won't give up until fully vindicated. Both the ICO and the DCMS committee have repeatedly expressed the clear understanding that because U.S. voter data was processed in the U.K. by SCL, the Data Protection Act applies and the ICO has jurisdiction.
The quest to repatriate my voter file from the U.K. teaches us so much about the fundamental data rights that the United States and Canada have not yet assigned and protected for their citizens. We can now clearly understand how the right of access underpins the essence of data protection as a key to sustaining democracy in the 21st century.
I look forward to being able to answer the committee's questions about my journey in reclaiming my Cambridge Analytica data and what it might portend for the future of our digital democracy.
Hello. It is a pleasure to appear once again before the committee. I've always enjoyed speaking with you, and feel that I can bring a lot to the table.
I reviewed the recordings of this subcommittee's previous meetings, which discussed data and privacy along the pathway of moving Canada to digital online government services: where things stand now, where they are headed, and the committee's various concerns. While I am open to answering questions about the AggregateIQ/Cambridge Analytica situation, I will not focus on that in my opening remarks. I am going to address some issues that were brought up in those previous meetings, which I listened to and reviewed just recently.
Right now, it feels as though Canada needs to decide which direction its tech strategy needs to go in, or wants to go in. There is an opportunity to jump headlong into the game with all of the other big players and try to be on the leading edge of the government digital crossroads. But from what I heard in the previous discussions, the most natural position is to let the other guys make the mistakes and do the advance running, sprinting at the head of the crowd, and then incorporate into your systems the things that work and not the things that don't. That seems to be the most advantageous position.
Another contention was whether it should be mandatory to bring people into this digital environment, and whether Canadians feel wary, or trusting enough to hand all of their personal data and medical data over to a Big Brother type of situation. If you make it mandatory, then when there is, or if there is, any sort of data breach or vulnerability that's taken advantage of, you risk a huge hit to public confidence in the system.
I would recommend that Canada try to have it be adopted by success, rather than being forced upon people, so that if a neighbour, by word of mouth, tells somebody else, “I made an appointment with my doctor; it was so easy, you should get online and do this, too”, that would be a lot better than if there were a data breach and those two neighbours were then talking about how much they hated being forced into the situation.
I heard a lot of discussion about blockchain, and some people trying to float the opinion that blockchain is going to solve things. I would be very wary of blockchain technology in its current state, and even going forward. A blockchain is basically a distributed ledger where everybody has everything; it's not inherently a secret-key technology. I believe the many failures of various coins on blockchains have demonstrated the somewhat inevitable issues that can crop up. It's just not mature enough to be handling medical data and personal data, and especially not voting. That's a nightmare.
Another issue that was brought up was anonymizing data: how important it is to have these pools of data so they can be studied and shared among government departments and easily ported from one database to another, and how great that can be. Yes, you can get some great insights from that sort of study, looking at everything from a meta, overall angle, but there really is no such thing as anonymized data. It's a bit of a misnomer. You can redact certain elements from data, or drop certain things to try to make it hard to re-identify people, but all you actually do when you anonymize data is make it harder and harder for the little players to re-identify folks. I guarantee that the big data brokers, the banks and the insurance companies can re-identify the data in most anonymized datasets simply based upon what they already have and are able to reference. It's just a matter of how much data an entity holds that determines how long it takes to re-identify, so be very careful about thinking anonymized data is foolproof.
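To make the re-identification point concrete, here is a minimal sketch of the linkage attack described above. All records, names, postal prefixes and field names are invented for illustration; the point is only that "anonymized" rows still carry quasi-identifiers that a data holder can join against records it already has.

```python
# Hypothetical "de-identified" dataset: names removed, but quasi-identifiers
# (postal prefix, birth year, gender) remain attached to sensitive fields.
deidentified = [
    {"postal": "K1A", "birth_year": 1975, "gender": "F", "diagnosis": "asthma"},
    {"postal": "V6B", "birth_year": 1988, "gender": "M", "diagnosis": "diabetes"},
]

# What a large data broker might already hold about identified people.
broker_records = [
    {"name": "Alice Example", "postal": "K1A", "birth_year": 1975, "gender": "F"},
    {"name": "Bob Example",   "postal": "V6B", "birth_year": 1988, "gender": "M"},
    {"name": "Carol Example", "postal": "V6B", "birth_year": 1990, "gender": "F"},
]

def reidentify(anon_rows, known_rows):
    """Link anonymized rows whose quasi-identifiers match exactly one known person."""
    keys = ("postal", "birth_year", "gender")
    results = []
    for row in anon_rows:
        matches = [k for k in known_rows
                   if all(k[q] == row[q] for q in keys)]
        if len(matches) == 1:  # a unique match means the person is re-identified
            results.append((matches[0]["name"], row["diagnosis"]))
    return results

print(reidentify(deidentified, broker_records))
# Both "anonymous" patients are named, with their diagnoses attached.
```

The larger the broker's reference dataset, the more of these unique joins succeed, which is exactly the "how much data the entity has" point made above.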
I don't just want to bring up issues or problems. I also want to bring forward some ideas, some brainstorming, of different ways to implement secure data-sharing among various government departments. The idea that privacy and security is built in by design is very powerful.
I think there is an opportunity for you to take the mindset of asking, if you were creating all of existence and you could create the laws of physics, the fundamental building blocks of the ecosystem that your data is going to live in, how you would do it so that it's secure.
I would do it in a way that database A and database B don't even speak the same language, can't communicate with each other, cannot pool data together, and I'd have a translator in the middle that they pass the data to, which would then translate it to each other.
That's just an idea I had. The advantage there is that you can have the translator be not available 24 hours a day, seven days a week, so that when everybody is asleep on a Saturday night, you don't have to worry about a bad guy getting into one and being able to access all of the others. It's all about segmentation, breaking things into pieces, compartmentalizing. Even though that makes it a little bit harder on the programming end of things, I think you'll get a much better outcome if you plan this sort of thing ahead of time, do it the right way and make sure everybody involved is of the right mindset.
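The translator idea above can be sketched in a few lines. The department names, schemas and identifiers here are all invented for illustration: the two stores share no common schema or IDs, only the translator knows the mapping between them, and the translator can be taken offline on a schedule, as suggested above.

```python
class HealthDept:
    """Stores records keyed by an internal patient code (invented schema)."""
    def __init__(self):
        self._rows = {"p-001": {"patient_code": "p-001", "allergy": "penicillin"}}

    def fetch(self, code):
        return self._rows.get(code)

class LicenceDept:
    """Uses a completely different schema and identifier scheme."""
    def __init__(self):
        self.notes = {}

    def attach_note(self, licence_no, text):
        self.notes.setdefault(licence_no, []).append(text)

class Translator:
    """The only component that knows both schemas and the ID mapping."""
    def __init__(self, health, licence):
        self.online = True                    # can be scheduled offline
        self._id_map = {"p-001": "DL-9931"}   # patient code -> licence number
        self._health = health
        self._licence = licence

    def forward_allergy_flag(self, patient_code):
        if not self.online:
            # When the translator is down, neither department can reach the other.
            raise RuntimeError("translator offline")
        record = self._health.fetch(patient_code)
        licence_no = self._id_map[patient_code]
        self._licence.attach_note(licence_no, f"medical flag: {record['allergy']}")

health, licence = HealthDept(), LicenceDept()
broker = Translator(health, licence)
broker.forward_allergy_flag("p-001")
```

The design choice is the one described above: compromising one department yields data in that department's format only, and with the translator offline there is no path between the two at all.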
Finally, I want to say that if there's one thing that needs to be done the old-fashioned way, it's voting. Digital voting is laden with all sorts of problems and opportunities for corruption. If there's one thing we need to do with hand-marked paper ballots, it is voting. I'm very disappointed in how the United States has come to handle voting, and I wish much better for your country.
Thank you for the opportunity to speak before your committee today. I have closely followed the work of this committee, including its superb representation by the chair and vice-chairs in last November's International Grand Committee on Disinformation and ‘fake news’. I am honoured to be here, and I appreciate your overall interest in consumer privacy.
I am the CEO of DCN. Our mission is to serve the unique and diverse needs of high-quality digital content companies. This includes small and large premium publishers, both young and centuries old. To be clear, our members do not include any social media, search engine or ad tech companies. Although 80% of our members' digital revenues are derived from advertising, we are working with our members to grow and diversify.
DCN works as a strategic partner for its membership by advising and advocating with a particular eye on the future.
As you are aware, there are a wide variety of places where consumers can find online content. In light of this dynamic, premium publishers are highly dependent upon maintaining consumer trust. As an organization, DCN has prioritized shining a light on issues that erode trust in the marketplace, and I'm happy to do so today. This makes enhancing consumer privacy while also growing our members' interests a critical strategic issue for DCN.
Over the past decade, there has been a significant increase in the automation of content distribution and monetization, particularly with advertising. We've shifted to a world where the buying, the bidding, the transacting, and the selling of advertising happens with minimal human involvement. We do not expect nor do we seek to reverse this trend, but today I hope to explore with you a few of the major challenges impacting the industry, the public and democracy.
The first area I would like to explore is the rise of what your December report aptly labels “data-opolies”. Unfortunately, an ecosystem has developed with very few legitimate constraints on the collection and use of consumer data. As a result, personal data is more highly valued than context, consumer expectations, copyright or even facts.
Today, consumer data is frequently collected by unknown third parties without any consumer knowledge or control. Data is then used to target users across the web, without any consideration of the context, and as cheaply as possible.
In our mind, this is the original sin of the web—allowing for persistent tracking of consumers across multiple contexts. This dynamic creates incentives for bad actors and sometimes criminal actors, particularly on unmanaged platforms like social media where the bias is for a click, whether it's from a consumer or a bot.
What is the result? A massive concentration of who is benefiting from digital advertising, namely Google and Facebook. Three years ago, DCN did the original analysis, including giving them the label of “the duopoly”. The numbers are startling. In the $150 billion-plus digital ad market across North America and the EU, 85% to 90% of the incremental growth and over 70% of the total ad spend is going to just these two companies.
Then we started digging deeper and, as in your report, we connected their revenue concentration to their data practices. These two companies are able to collect data in a way that no one else can. Data is the source of their power. Google has tracking tags with which it collects data on users across approximately 75% of the top one million websites. We also learned, thanks to evidence provided in the U.K. to the DCMS committee, that Facebook has tracking tags on over eight million sites. This means that both companies see much of your browsing and location history.
Although your work is mostly focused on Facebook, we would strongly encourage you to also review the role of Google in the digital ad marketplace. DCN recently helped distribute research conducted by Dr. Doug Schmidt of Vanderbilt University, which documented the vast data collection of Google.
Google has used its unrivalled dominance as a browser, operating system and search engine to become the single greatest beneficiary in the provision of ad tech services. Google has no peer at any stage of the ad supply chain, whether buying, selling, transacting or measuring advertising. In any other marketplace, this would be illegal. In the financial world, it is akin to being the stockbroker, the investment banker, the stock exchange and the stock itself.
Therefore, we believe that recommendations 12 and 13 in your report are important as you seek to understand the clear intersection between competition and data policy. The emergence of these data-opolies has created a misalignment between those who create the content and those who profit from it. It has also allowed a vicious cycle in which the industry rules and the consumer privacy bar are set to protect incumbent industry interests rather than consumer trust.
We would also encourage you to further explore law professor Maurice Stucke's arguments, along with those of Anthony Durocher of your Competition Bureau, recommending a shift beyond price-centric analysis as companies offer free products to exploit consumer data. Given the U.K. ICO's findings regarding Facebook's privacy practices from 2007 to 2014, which your own report labels as “severe”, I would call attention to a research paper published last week by Dina Srinivasan, titled “The Antitrust Case Against Facebook”. In it, Ms. Srinivasan documents a bait and switch by Facebook in its early years: the company originally used privacy protection as a paramount differentiator in a very competitive set of free social network products forced to compete on quality, and then, over time, lowered the quality of its privacy.
Finally, the scandal involving Facebook and Cambridge Analytica underscores the current dysfunctional dynamic. Under the guise of research, GSR collected data on tens of millions of Facebook users. As we now know, Facebook did next to nothing to ensure that GSR kept a close hold on that data. Facebook's data was ultimately sold to Cambridge Analytica to target political ads and messaging, including in the 2016 U.S. elections.
With the power Facebook has over our information ecosystem, our lives and our democracy, it's vital to know whether or not we can trust the company. Many of its practices prior to reports of the Cambridge Analytica scandal clearly warrant significant distrust. Although there has been a well-documented and exhausting trail of apologies, it's important to note there has been little to no change in the leadership or governance of the company. With this in mind, there is an acute need for a deeper probe, only made more apparent by the company's repeated refusals to have its CEO offer evidence to DCMS and your grand committee. They've said the buck stops with CEO Mark Zuckerberg, but at the same time he's avoided the most difficult accountability questions. There is still much to learn about what happened and how much Facebook knew about the scandal before it became public. The timeline is troubling to me.
We learned from Mr. Zuckerberg's testimony to the U.S. Senate judiciary committee that a decision was made not to inform Facebook users that their data had been sold to Cambridge Analytica after The Guardian reported it in December 2015. The Guardian reporter had said he reached out to GSR as early as late 2014, nearly a year before reporting on it. GSR co-founder Aleksandr Kogan testified to Senator John Thune that he and his partner had met with Facebook several times throughout 2015. Even more incredible to me was that Facebook hired Kogan's so-called equal partner at GSR, Joseph Chancellor, onto its staff on November 9, 2015, an entire month before The Guardian reported. Time and again, Facebook has been asked when exactly Mr. Zuckerberg became aware of Cambridge Analytica, yet Facebook only offers a non-answer by replying that he became aware in March of 2018 that the data had not been deleted. On a personal note, I find this answer offensively obtuse.
Considering that the FTC has a consent decree with Facebook to report any wrongful uses of data, it's incredibly relevant to know when its CEO was first aware of Cambridge Analytica. We now know Facebook spent significantly more time and resources in 2016 helping Cambridge Analytica buy and run ad campaigns than they did trying to clean up their self-titled “breach of trust”. Although Facebook's CEO testified to the U.S. Congress in April 2018 that they immediately worked to have the data deleted upon being made aware in 2015, Facebook has already submitted evidence to DCMS that no legal certifications happened with Cambridge Analytica until well into 2017 when its CEO returned a fairly useless piece of paper.
Finally, Facebook disclosed in September 2018, without any explanation, that Mr. Chancellor no longer worked at Facebook, after a nearly six-month-long investigation, which began only after the TV show 60 Minutes drew further scrutiny to his role.
Equally troubling in all of this, other than verbal promises from Facebook, is that it's not clear what would prevent this from happening again. Moving forward, we urge policy-makers and industry to provide consumers with greater transparency and choice over data collection when using practices that go outside consumer expectations. Consumers expect website or app owners to collect information about them to ensure that the site or app works. Indeed, data collection used within a single context tends to meet consumers' expectations, because there is a direct relationship between these activities and the consumer experience, and because the consumer's data is collected and used transparently within the same context. However, as happened in the case of Facebook and Cambridge Analytica, data collected in one context and used in another tends to run afoul of consumer expectations.
Also, it is important to note that secondary uses of data usually do not provide a direct benefit to consumers. We would recommend exploring whether service providers that are able to collect data across a high threshold of sites, apps and devices should even be allowed to use this data for secondary uses, without informed and specific consent. A higher bar here would solve many of the issues previously mentioned.
Finally, it is important to shed light on these practices and understand how best to constrain them going forward. I appreciate your efforts to better understand the digital landscape. By uncovering what happened and learning from it, you are helping to build a healthy marketplace and to restore consumer trust.
I'd like to start with you, Mr. Vickery. Certainly, the establishment of digital government in Canada will be very different from Estonia's, given that we have provinces and territories, municipal governments, regional governments and the federal government and there are quite clearly defined lines of authority in terms of who has jurisdiction or not.
Even in the establishment of early forms of limited digital government.... Let's say the Canadian government were to look at only the areas of its jurisdiction in relation to the entire population of the country. One would expect that there would be something of a gold rush by companies looking to be the creators, the administrators or the partners, if you will, in creating such a huge digital operation.
The Canadian Bankers Association, or at least the president of the association, has suggested that banks are the most trusted handlers of personal data. They have two-factor logins and they're more responsible, say, than the Equifaxes or other collectors of data, the data brokers, and certainly more responsible than companies such as Alphabet, Google, Facebook and so forth.
I'm just wondering what sorts of guidelines you would suggest to the Government of Canada if it were to set up digital government. What sorts of companies would you recommend to be on the inside in the creation and the maintenance and guarantor of security?
You can decide after my round of questioning whether it was a graceful presence, but thank you, Chair.
Thank you to our witnesses for being here.
There's a lot I want to explore, but the time is limited so I'll try to keep it tight and bright and follow the chain of events of, say, the interference or the attempted corruption—or successful corruption—of the U.S. election and the Brexit vote.
You talked about accountability, Mr. Kint, in your last piece. Does the chain start with access, illegal or otherwise, to the databases that parties hold on citizens? Parties collect an enormous amount of information about voters, voting intent and location, potentially income and preferences, and that information, once hacked—because there was not sufficient security there—was then allowed to be weaponized through the social media platforms. You talked, in your last comment, about accountability toward Facebook.
This is a life-threatening event for that company. Trust is important to any company, particularly social media. What has the response been like since that chain was proven: the hack of the DNC and the Republicans, the targeted lies that were then spread through that election, and Facebook's failure to be accountable to its users for its security?
With these behemoths—Facebook and Google—two things have happened. You mentioned one, Mr. Kint, which is the profit. There's a phenomenal profit, but they get that profit by making use of copyright that does not belong to them. They take a music video they know I'll like, and they show it to me. They'll put an ad beside it, and they'll keep the money; the musician gets nothing. Or they'll take a wonderful photo that was captured by a photographer who could have sold it to newspapers and such before. They'll take it, digitize it, and then someone will look for that photo. The company takes it and profits.
They do it to journalists, to writers, musicians, artists—all types. I'm not searching for any content that Google's made. I'm not interested. Facebook doesn't make any content.
I want to talk about the money, the profit motive, first of all. They've been protected by something called “safe harbour”, which means they can say, “Hey, you wanted to see this. I just showed it to you. I'm clean here.” Here in Canada, for example, many of our media outlets are suffering tremendously. They've lost all their ad revenue as well. It doesn't mean that people aren't reading their newspaper articles. They are reading them, but they're reading them through a Google aggregate or something like that, and again, Google's taking the profit.
Do you see a way for Canada to deal with that? If not Canada alone, should we be working with our allies to say that's enough profiteering off of all these people? That's what's given this phenomenal power.
I want to go back to digital services, and I will ask my questions in French.
When I buy merchandise at a store, I'm not required to provide my email address or any other information, no matter how confusing it may be for the person at the cash register, who wonders what to do on the machine. I'm able to buy something without providing personal information. I shouldn't need to provide information to buy sports equipment.
However, I believe that when I'm dealing with the government, I'm required to provide personal information. I'll be given a social insurance number if I can at least provide my name and some references. It's the same for my driver's licence. If I don't provide references, I can't obtain a driver's licence or social insurance number. As a result, I can't find legitimate work because the employer needs my social insurance number. I'm required to provide personal information to the government.
In order to provide optimal and more effective service, the government can't help but turn to digital services and the Internet. It must develop techniques, ways and tools to provide more effective service. I'm of the school of thought that no system is 100% secure, simply as a result of the human factor or the possibility of an inside job. These are the worst threats that can't be controlled. Therefore, the government is forced to design a service that will be vulnerable.
How far can it go? How far should it go? Should it consider that, in spite of everything, it must provide digital services?