Thank you very much for having me today.
I am here on behalf of Tech Reset Canada. We are an advocacy organization looking at the innovation economy, the public good and the impacts of the innovation economy on our society. I am really happy to get to talk to you today, because it means we're talking more about the issues related to technology.
The Facebook and Cambridge Analytica case has been one I have used often when speaking and doing public education and community events to highlight one core truth right now—there are a lot of unintended consequences coming out of the use of technology. Framing that as the reality we're dealing with, I'm just going to share some remarks regarding our work, what we have found in it and how it ties into this particular issue and, more broadly, data governance and technology and society.
Having said that, I spent some years running public consultations. I am currently living in Toronto, and one of the projects that is front and centre for me is Sidewalk Toronto. Is everyone in the room familiar with this project? It's a subsidiary company of Alphabet, a sister company to Google. It's investing up to $50 million to create a plan for a smart city on Toronto's waterfront. It's just a plan. There's no real estate transfer. It's about a year old now. What it has given us in Toronto, and I think others, is a very focused view of the level of education we have as people in this country to engage in this discourse around technology and society.
What I would like to say about all of that is that a lot of us have no idea what is going on, what data is, where our data goes, who has our data or how our data can be used. We do not have a good handle on any of these issues, which are fundamental and central to the decisions we are being asked to make.
I'm at almost the year mark of watching a company have consultations with the public while knowing that nobody understands what anybody is truly talking about. As someone who has done public consultation and who holds the profession and the practice dear to my heart—and I think it is central to democracy—I am extremely troubled at the state of that project and also about the idea that we should be making any kind of quick decision or policy. If we do that right now, I can tell you for sure that it will not be inclusive of the people who live in this country and what they want to do about some of the issues related to Cambridge Analytica and to any sort of tech company and its relationship to people. I just want to set up that this is one big thing, starting at a high level.
Another theme related to this that I think is really important to consider, whether it's Facebook, Google or any other large company, is that we're beginning to blur the line between the market and the state in this country. We're beginning to lose track of who's in charge of what, who's responsible for what, and the implications of data being used by non-government actors.
In this country, we work from a social contract. People give us data—us in terms of government—and people generally understand what government does with their data. We are introducing corporate actors into social situations, whether it's using Facebook to communicate and organize in a community and do many things, or maybe existing in a city. This blurring of the line, I should hope, is becoming more visible to the people in this room. I think it is a matter of grave concern, and we need to delineate and understand who is in charge of this whole....
What's happening now is this enthusiasm for technology, and it's somehow making everybody forget what their roles are, that we have rules and laws, and that those are things that help us determine how our society looks. I don't think it was ever the intention to be enthusiastic about the innovation economy and have that then become governance of social impacts. I really don't think that was something that happened on purpose, and I think we need to be very aware of the fact that this is now happening regardless.
There is an article written in 1998 by a scholar named Lawrence Lessig that said “code is law”. Software code is, in some cases, determining.... These are not “law laws”, but they are determining social norms and the ways we interact with each other. I just think these are things we might not have understood as this began. I do not want to ever think—and I don't want anyone here to think—that even the people who are technologists have a handle on the implications of all of this.
Having said those things, I have just a couple more points.
One of them is that democracy moves slowly. This is good. This stuff is hard. I would really caution everyone in this room to consider how much education we need to be doing before we can even be making decisions that are informed by people who live in this country.
I know there's a lot of enthusiasm, and everybody says tech moves incredibly quickly. We have agency over technology. Technology is not something that just pops up and doesn't exist because of humans and their agency, so we need to remember some of those facts.
Another thing to be very clear about is that we are blurring the lines between procurement and the influence of purchasing products, or using products, and how that trickles down to the people who live here.
In my opinion, what is happening in Toronto is problematic because you should not be making policy with the vendor. This is essentially what we're doing. We are allowing someone who is going to be a vendor to influence how the policy for said vendor's work will go. I do not understand how anyone could have thought this was a good idea to begin with. I don't think we should continue this for much longer. In these cases, we really need to be aware of the ways these two issues are linked to each other.
Another thing that relates to this is that we've been thinking about technology as an industry. I see that in this country, a lot of the narrative is about wanting to do well, wanting to be innovative, wanting to do the things that make us leaders in technology, and there being a lot of opportunity for prosperity and wealth development. This is true. However, there's also a much larger narrative about what it means to lead in the governance of technology and the governance of data, and Canada has an opportunity right now to lead.
You have probably heard a lot of good things about the General Data Protection Regulation in Europe. It's not perfect but it is definitely moving towards some of the things we should be thinking about. I am confident that if we really take this seriously, if we look at impacts and engage people better, we can lead.
This is an opportunity. There's a lot of fear and anxiety about what to do. If we don't go fast and we are very considerate in what we're doing, I see a great opportunity here for the country to show global leadership in what to do with data governance and governance around technology. I don't want us to miss that in this need to react to fear, anxiety or issues that are quite complicated. I really don't want to miss that point.
I also want to talk about opportunity as a technologist. I think it is something we need to think more about. How do we develop social and public structures that use all the wonderful things that technology can produce, more for public good and more within government? We need to look at our academic institutions and ask ourselves why we're not developing technology that we are using.
If you go out into our communities where people are talking about digital rights and digital justice, they are wondering why we aren't building tools that we could be using for community organizing, or for social good—lots of the ways people use Facebook or other things.... Why aren't we doing better at building systems and building competency so that we can build those products, figure out different models, and think about how we can use these things within government?
I really want to stress this. The idea that government can't keep up with tech, or that there's a problem here because people in government don't.... This is not my belief. I'm telling you what I hear a lot. We really need to shut that down and start to show that if there is an interest in really using technology well across the board in our society, we can be intentional and make investments to make sure that happens. These are all opportunities for the country.
Again, when you respond to fear, you respond quickly, and I don't think that will be a good response. I think this case is a very good one to watch, as is the Sidewalk Toronto example. There are big issues coming out of this. There is nothing wrong with slowing down. I will say this as a technologist: Everybody will think we are doing wonderful things for technology if we take it slow and figure out what to do.
This includes industry. It is not helpful to industry if you are not clear with them as to what the guardrails are, how their operations have to be law-abiding and how they can be encouraged to reflect some of the values that we as technologists think should be there in terms of sharing values, being open with things and considering things that aren't necessarily proprietary.
There are lots of ways to use technology. There are lots of ways to use math. We shouldn't think this is only a business thing. This is a social thing. There are a lot of really exciting things to do in there.
I'm trying to end on a hopeful note here because I truly believe there is great opportunity. I want to make sure we follow processes that ensure people are engaged in the development of what we're going to do next, and that we do not rush that. There is no need to rush. If anything, the urgency lies in deciding, quickly, that we are not going to go fast, and in being thoughtful about the process we follow from here.
I recently co-authored two books on the data-driven economy. The first, with Allen P. Grunes, is Big Data and Competition Policy, and the second, with Ariel Ezrachi, is Virtual Competition. In both books we discuss some of the benefits of a data-driven economy. We also discuss some of the risks, including algorithmic collusion, data-driven mergers and behavioural discrimination. I won't touch on that.
I'd like to talk to you today about the risks if a few powerful firms monopolize our data. I'd like to break it up into four parts. First, what are data-opolies? Second, how have competition officials in the EU and U.S. viewed them? Third, from an antitrust perspective, do these data-opolies pose any risk of harm to consumers? Finally, I will have some final thoughts.
First, what are data-opolies?
Data-opolies control a key platform through which a significant volume and variety of personal data flows. The velocity of acquiring and exploiting this personal data can help these companies obtain significant market power. In Europe, they're known as GAFA—Google, Apple, Facebook and Amazon. As these firms have grown in size and power, they have also attracted significant antitrust scrutiny, particularly in Europe.
In the United States, it's been relatively quieter. I'll give you a couple of stats. From 2000 onward, the U.S. Department of Justice brought only one monopolization case, in total, against anyone. In contrast, the DOJ, between 1970 and 1972, brought 39 civil and three criminal cases against monopolies and oligopolies.
One question is this: Is there a difference in the perception of harm between the U.S. and the EU over these data-opolies? In the U.S., antitrust plaintiffs must allege actual or potential harm to competition. Ordinarily when we think of harm, we think of a cable company—higher prices, reduced output, lower quality. Superficially, it appears that data-opolies pose little if any risk of these traditional harms. Ostensibly, Google's and Facebook's services are free. Amazon is heralded for its low prices. Because of network effects, the quality of the products can improve.
If you have low or free prices and better quality, what's the problem? Some, such as the late antitrust scholar Robert Bork, have argued that there “is no coherent case for monopolization”.
One factor for this divergence may be the perceived harm. If there is a consensus over the potential harms, then the debate can switch to the best policy measures to address these harms. I've identified at least eight potential antitrust harms from these data-opolies.
The first is degraded quality. Companies can compete on multiple dimensions, including price and quality as well as privacy, so a data-opoly can depress privacy protection below competitive levels and collect personal data above competitive levels. The data-opoly's collection of too much personal data can be the equivalent of charging an excessive price. Data-opolies can also fail to disclose what data they collect and how they'll use the data, and they face little competitive pressure to change their opaque privacy policies. Even if the data-opoly were to provide better disclosure, so what? Without a viable competitive option, the notice and consent regime is meaningless when the bargaining power is so unequal.
A second concern involves surveillance. In a monopolized market, data is concentrated in a few firms and consumers have limited outside options that offer better privacy protection. This has several implications. One is government capture. The fewer the firms that control the personal data, the greater the potential risk that a government can capture the firms, using its many levers.
One risk is covert surveillance. Even if the government cannot obtain the data directly, it can try to get the data indirectly. The data-opoly's rich data trove increases a government's incentive to circumvent the data-opoly's privacy protections to tap into the personal data. This is what happened with Cambridge Analytica. There are several implications of a security breach or a violation of a data-opoly's data policies. A data-opoly has a greater incentive to protect its data, but hackers also have a greater incentive to tap into this data, because of the sheer vastness of what it holds. While consumers may be outraged, a dominant firm has less reason to worry about consumers switching to rivals.
A third concern involves the wealth transfer from consumers to data-opolies. Traditionally, you'd think of a monopoly taking money out of your pocket. Even though the product may be free, data-opolies can extract significant wealth through several levers. The first is not paying for the data's fair value. The second is that data-opolies can get creative content from users for free, for example, from YouTube videos or contributions on Facebook. The third is that data-opolies can extract wealth from suppliers upstream. This includes scraping content from photographers, authors, musicians and newspapers, and posting it on their own websites. Finally, data-opolies can engage in what's called “behavioural discrimination”. Basically, this is getting us to buy what we would not otherwise want to buy, at the highest price we're willing to pay. It's a more pernicious form of price discrimination.
A fourth concern is the loss of trust. We can view this as a dead-weight welfare loss. Some consumers will simply forgo the technology out of privacy concerns.
A fifth concern is that a data-opoly can impose significant costs on third parties. In our work, we talk about the frenemy relationship that data-opolies have with app makers. They need these app developers in order to attract users to their platform, but once they start competing with them, the relationship can turn into an enemy one. There are various anti-competitive practices they can engage in, including degrading the app's functionality. What is particularly important for you is that data-opolies can impose costs on companies seeking to protect our privacy interests. One example, which our book Virtual Competition explores, is how Google kicked the privacy app Disconnect out of its Android app store.
A sixth concern involves less innovation in markets dominated by data-opolies. Here we point out how data-opolies can promote innovation but also hinder it. One tool they possess that earlier monopolies did not have is what we call “nowcasting radar”. They can perceive nascent competitive threats well in advance of, let's say, the government antitrust enforcer, and they can squelch those threats by either acquiring them or engaging in anti-competitive tactics.
A seventh concern is the social and moral concerns of data-opolies. A historical concern of antitrust was about individual autonomy. Here, a data-opoly can hinder the individual autonomy of those who want to compete on their platform. A related concern is data-opolies making their products intentionally addictive. Here you have an interesting interplay between monopoly and competition. Ordinarily, a monopolist doesn't have to worry about consumers going elsewhere. Here, however, the data-opolies can profit by getting users addicted to spending more time on their platform. They can thereby obtain more data, target them with advertising and increase their profits.
The eighth concern is the political concerns of data-opolies. Economic power often translates into political power, and here data-opolies have tools that earlier monopolies didn't—namely, the ability to affect the public debate and our perception of right and wrong. Data-opolies, as shown in the Facebook emotional contagion study, can affect how we think and feel, particularly as we migrate to digital personal assistants and much greater interaction with the data-opolies' products. You have several risks. One of them is bias. The news we receive will be more filtered, creating echo chambers and filter bubbles. The second risk is censorship. A third is manipulation.
Several themes, in conclusion, run through my papers.
The first theme is that the potential harms from data-opolies can exceed those from monopolies. They can affect not only our wallets. They can affect our privacy, autonomy, democracy and well-being.
Second, markets dominated by these data-opolies will not necessarily self-correct.
Third, global antitrust enforcement can play a key role, but antitrust here is a necessary but not sufficient condition to spur privacy competition. There really needs to be coordination with privacy officials and consumer protection officials.
Let's start off with privacy protection. There's a perception that consumers aren't concerned about their privacy, but if you look at the data, it actually shows that consumers are resigned about privacy. They want greater privacy protection—this goes across age groups, not just the older group—but they don't really feel they have any power to get it.
Then think about Facebook and Cambridge Analytica. There was this whole “delete Facebook” movement. Nonetheless, when Facebook reported its first quarterly earnings after the scandal broke, it did not take a hit on either the number of users or its revenues. In a competitive marketplace, you would think that consumers would get products and services tailored to their privacy interests, but they don't.
The other thing is to look at the EU and the Google shopping case. There you can see the power that a platform can have in promoting a product. According to the European Commission, Google recognized that its product was subpar, yet by allocating traffic so as to promote its own products, putting its own product on the first page of the search results and hiding competitors' products on the fourth or later pages, it had a significant impact on rivals.
That's a concern. I mean, we went through the annual reports of companies, and one of the things they identified as a risk was their dependency on these super-platforms, and how these super-platforms, by hindering functionality and the like, can really adversely affect them. We have the example of the Google comparison shopping case.
I could go through all eight that are in my paper, which was published by Georgetown University, and give specific evidence for each of those eight.
Thanks to you both for very helpful, very informative presentations today.
Just to start, Ms. Wylie, with regard to your concern about Sidewalk Labs and the Toronto waterfront revitalization partnership, the Auditor General of Ontario has actually launched a value-for-money study to find out exactly what the details are that she is unaware of. She has questions about some of the issues that you raised, not really understanding whether the assignment of control over a very large and valuable part of downtown Toronto to the Google sister company for $50 million was a deal worth the value that they've placed on it.
I'd like to start first with one of the little-explored areas of the new U.S.-Mexico-Canada trade agreement announced this week. We're still waiting for details on specific points with regard to digital data from the Canadian government. There are translation issues to be resolved. From the office of the U.S. trade commissioner, under what he considers to be the key highlights of the digital trade chapter, there is, to me, a very concerning point, which says:
||The new Digital Trade chapter will....
||Limit the civil liability of Internet platforms for third-party content that such platforms host or process, outside of the realm of intellectual property enforcement, thereby enhancing the economic viability of these engines of growth that depend on user interaction and user content.
This would seem to be a strengthening of the data-opolies' rush, as you say, Professor, for revenue-generating profit, as opposed to concerns for protecting individual privacy. It's been suggested by some tech commentators here in Canada that this digital trade chapter will in fact make it much more difficult for governments like ours to set new standards that may or may not be closer to the GDPR protection regulations, and would basically allow Facebook to remain aloof from and above any investigation of Cambridge Analytica's bad or illegal practices.
Professor, could you respond first.
One of the things that Google, among others, has argued is that data is non-rivalrous. Basically what that means is that other people can use data and it doesn't really devalue the data itself. Google has argued that this is why it doesn't have any market power.
I think that's no longer its position, but it is true that multiple entities can use data and derive value from data. One concern then is that if one entity then hoards that data, it's not shared with others who can derive benefits from it. That's one concern.
The other concern is this. Let's go back to this frenemy dynamic. I remember when we were writing our book, Virtual Competition, Uber's concerns were with the local taxi commissions. Its concern was how it could get itself into the various cities, but we pointed out that one of the overarching concerns was that, to survive, it had to be on a smartphone platform. There are two: there's Apple and then there's Android. What Uber needs to exist, its oxygen supply, is basically controlled by these platforms. Then you can see that if the platform starts to go into, let's say, the mapping technology and also the self-driving technology, eventually there can be a collision. When there is a collision such as that, the powerful platform will promote its own interests and not necessarily the interests of others with whom it competes on the platform.
That could be another concern. If you have this platform with all this data, the platform can then promote innovation, but innovation that, for example, is complementary to its current products and the like. What happens then to the [Technical difficulty—Editor] companies that compete against the platform? How are they going to be able to survive? The concern is how the platform can tailor things in such a way as to promote its own interests and hinder the interests of, let's say, technologies that might pose a threat to its business model.
When I mentioned Disconnect earlier, there was a privacy app that was going to help us reduce tracking, and Google kicked it out of its app store. When we presented our research, someone in the audience had a really good quote: to the platform, an app like Disconnect trying to promote privacy is like inviting an arsonist into one's home. That's the perception—that anything that might be a threat to this data-opoly could be kicked off. That could have a significant chilling effect on innovation, so that's a risk that needs to be taken into account.
It's true that we don't know that much.
In fact, I'll bring this out for you. Facebook did a study called the emotional contagion study. It altered its algorithm so that it gave some users more positive news and other users more negative news. It wanted to see what impact that had on individuals' behaviour. The users who got more positive news were more positive in their posts, and the ones who got more negative news had more negative reactions.
It was only because this study was published that it created such an outcry. You realize, then, the power that these companies might have to affect the public discourse.
This lack of transparency will only increase as we migrate from a phone world to the world of the digital personal assistant, a world that perhaps one or two of these data-opolies could very much control: Google with its Home, and Amazon with Alexa.
Now you're going to have, in orders of magnitude, a greater amount of data and greater interaction with the digital assistant, in the home, in the car, on the phone and elsewhere. There's going to be very little transparency on how that digital assistant is going to recommend the products and services it provides—what it features, what it says, what it does and the like.
We're really moving into an unexplored terrain.
In the past 10 years we've had a natural experiment in relying on market forces. The belief was that if we leave it to the free market, the free market forces will allocate data and privacy in ways that promote our needs. The problem—even with the market fundamentalists—is that we didn't appreciate these barriers to entry and these network effects, which are unique in this data-driven market.
One thing is that market forces will not necessarily provide the solution. We should not rely on that. We can have very powerful firms that can dominate an industry for years and could adversely affect innovation as well.
Given that, there is a role for the government. What type of role should the government play? Up to this point the government has more or less taken a “notice and consent” standpoint, which is that the company just has to provide a privacy statement and that, as a result, will be sufficient.
I was at a conference last weekend. Joseph Turow from the University of Pennsylvania does a study every few years. What he has found is that when you tell someone a company has a privacy statement, they assume the company is protecting their privacy, even though the privacy statement could say the contrary. Putting this much on the consumer to read and to navigate is simply too much.
I would argue instead to look at some good privacy-by-design or privacy-by-default mechanisms to make it easier on the consumer so they don't have to read these privacy notices. Even when they read the privacy notices, many of them say there is no ability to negotiate. What would be an alternative to this scenario? Here, it might be data minimization—that a company can't collect data if it's not necessary for them to provide that product, and the individual can say no. They have universal opt-out. They would expressly have to opt in for particular instances, and it would be well explained to them.
That's a little something that I would encourage you to explore.
Let me start by saying that I think the mega-data companies have provided a multitude of very meaningful benefits over the last couple of decades. Artificial intelligence is spectacularly beneficial in some areas, as are algorithmic programs and so forth.
However, we were surprised to learn in this committee, back in April, when a senior Canadian Facebook executive and Facebook's deputy chief privacy officer from the west coast of the United States appeared, that although they and others had had many meetings with senior ministers of the Canadian government and senior decision-making officials, there wasn't a single registered Facebook lobbyist on the Commissioner of Lobbying of Canada's registry site.
It's good to note that about a month later, Facebook registered one official lobbyist, though we don't know whether these executives are still operating unofficially. They explained their meetings with government officials as helping those officials understand the capabilities and processes of Facebook in, I guess, governmental terms.
Professor Stucke, you wrote an article entitled, “Should We Be Concerned About Data-opolies?” It is a very detailed article. You made one point, saying:
||Companies need things from the government; governments often want access to data. When there are only a few firms, this can increase the likelihood of companies secretly cooperating with the government to provide access to data. Moreover, a dominant firm is likely to lobby the government on many more fronts.
Could you elaborate on that a bit with regard to perhaps potential compromises of government regulation when it comes to the consideration of a Cambridge Analytica-AggregateIQ-Facebook scandal?
It's interesting because we've written a couple of books and we were ready to defend our thesis. I remember there was this one head of a competition agency who then looked at us and said, “Okay, so what are we going to do about it?” It kind of caught us flat-footed because we were just identifying the problem without necessarily having the solution.
What I would encourage would be, basically, threefold. First is to ask what your competition authority is doing about the market power problem. Marshall Steinbaum and I wrote a piece that just came out from the Roosevelt Institute on reinvigorating antitrust.
To what extent is the Canadian competition authority prepared for the digital economy? I think that's an important issue for you to.... Should the standards change to make it easier to go after these anti-competitive restraints?
The second would be this: What are the necessary preconditions for effective privacy competition? Some of the themes you've heard today already touch on this: GDPR-like provisions on data portability, and issues of who owns the data. What I would encourage, then, is to bring together scholars on what measures are necessary to put in place so that we don't have to regulate, so that we can allow market forces to provide optimal privacy by design.
The third component, which we really haven't touched on, would be consumer protection. Here would be both before and after. What is it that we can do to simplify it for consumers so it's not like surrender, so that they actually have the ability to choose and feel comfortable in using this data?
The risks that I hope I have identified show that the problem is really multifold. You have concerns about journalism right now that the ACCC is looking into. You have concerns about the addiction of young individuals and the effects it has on their well-being.
There are other important implications that these data-opolies will have. I just identified those three.
I'm wary of relitigating PIPEDA. We did a fulsome study on privacy protection. We made recommendations, and a lot of what both of you have said would be answered to varying degrees if those recommendations were adopted.
On the competition question, this is new territory for us in many ways, so I want to visit that in more detail.
You, Mr. Stucke, identified eight potential antitrust harms. A number of those were related to privacy and over-collection of data, surveillance and implications of security breaches. Let's bracket all that relates to data protection and privacy, because we've had that conversation at length.
Let's talk instead about innovation, the other potential harms and the tools that are required to address those potential harms.
Let's take one other item off the table, which I think is pretty obvious. If a company is using data to prefer its own product, we already have rules that preclude that from happening, so let's take that off the table as well. We heard from the CRTC that when certain platforms, certain ISPs, prefer their own video or streaming platform over others, it's contrary to the law, so let's bracket that.
As for the other potential antitrust harms, in your view, what are the tools required to address them?
I just have one final question, and it's about this elusive question of who owns my data, who owns the citizen's data, who owns my browsing history.
In terms of full disclosure, I have two Facebook websites, as a politician. I post content.
Mr. Charlie Angus: I visit them all the time.
Hon. Peter Kent: Thank you, Mr. Angus, for visiting the website.
I encourage relationship building with those who come to the website. I encourage feedback. As a politician I gather data from that website to be responsibly used. I use Google perhaps as many as 50 times a day. As Mr. Erskine-Smith said, DuckDuckGo and Mozilla Firefox are good, but Google is much better for my applications.
I was struck—and I'm assuming that you were, too, and I'd like your comments—when Facebook's Mr. Zuckerberg appeared before the congressional committee and would not address the question of who owns the browsing history of those who use his platforms. After all, the Cambridge Analytica-Facebook-AggregateIQ scandal is based on the fact that improperly harvested data, including the vulnerabilities and very personal aspects of users' browsing histories, among other things, came together for this phenomenon that we've come to know as “psychographic microtargeting” and attempts to influence electoral processes.
I'm just wondering if I could have final comments from you, Professor, and then Ms. Wylie, on who owns my data.