Skip to main content
;

ETHI Committee Meeting

Notices of Meeting include information about the subject matter to be examined by the committee and date, time and place of the meeting, as well as a list of any witnesses scheduled to appear. The Evidence is the edited and revised transcript of what is said before a committee. The Minutes of Proceedings are the official record of the business conducted by the committee at a sitting.

For an advanced search, use Publication Search tool.

If you have any questions or comments regarding the accessibility of this publication, please contact us at accessible@parl.gc.ca.

Previous day publication Next day publication
Skip to Document Navigation Skip to Document Content






House of Commons Emblem

Standing Committee on Access to Information, Privacy and Ethics


NUMBER 119 
l
1st SESSION 
l
42nd PARLIAMENT 

EVIDENCE

Thursday, October 4, 2018

[Recorded by Electronic Apparatus]

(1105)

[English]

     Welcome, everybody, to the Standing Committee on Access to Information, Privacy and Ethics. This is meeting number 119. Pursuant to Standing Order 108(3)(h)(vii), we are doing a study of the breach of personal information involving Cambridge Analytica and Facebook.
    The witnesses are Maurice Stucke as an individual and Bianca Wylie from Tech Reset Canada.
    We'll start off with Ms. Wylie for 10 minutes.
     Thank you very much for having me today.
    I am here on behalf of Tech Reset Canada. We are an advocacy organization looking at the innovation economy, the public good and the impacts of the innovation economy on our society. I am really happy to get to talk to you today, because it means we're talking more about the issues related to technology.
     The Facebook and Cambridge Analytica case has been one I have used often when speaking and doing public education and community events to highlight one core truth right now—there are a lot of unintended consequences coming out of the use of technology. Framing that as the reality we're dealing with, I'm just going to share some remarks regarding our work, what we have found in it and how it ties into this particular issue and, more broadly, data governance and technology and society.
    Having said that, I spent some years running public consultations. I am currently living in Toronto, and one of the projects that is front and centre for me is Sidewalk Toronto. Is everyone in the room familiar with this project? It's a subsidiary company of Alphabet, a sister company to Google. It's investing up to $50 million to create a plan for a smart city on Toronto's waterfront. It's just a plan. There's no real estate transfer. It's about a year old now. What it has given us in Toronto, and I think others, is a very focused view of the level of education we have as people in this country to engage in this discourse around technology and society.
    What I would like to say about all of that is that a lot of us have no idea what is going on, what data is, where our data goes, who has our data or how our data can be used. All of these issues, which are fundamental and central to making decisions about them, we do not have a good handle on.
     I'm at almost the year mark of watching a company have consultations with the public while knowing that nobody understands what anybody is truly talking about. As someone who has done public consultation and who holds the profession and the practice dear to my heart—and I think it is central to democracy—I am extremely troubled at the state of that project and also about the idea that we should be making any kind of quick decision or policy. If we do that right now, I can tell you for sure that it will not be inclusive of the people who live in this country and what they want to do about some of the issues related to Cambridge Analytica and to any sort of tech company and its relationship to people. I just want to set up that this is one big thing, starting at a high level.
    Another theme related to this that I think is really important to consider, whether it's Facebook, Google or any other large company, is that we're beginning to blur the line between the market and the state in this country. We're beginning to lose track of who's in charge of what, who's responsible for what, and the implications of data being used by non-government actors.
    In this country, we work from a social contract. People give us data—us in terms of government—and people understand what government does with their data generally. We are introducing corporate actors into social situations, whether it's using Facebook to communicate and organize in a community and do many things, or maybe existing in a city. This sort of blurring of this line, I should hope, is becoming more visible to the people in this room. I think it is a thing of grave concern, and we need to delineate and understand who is in charge of this whole....
    What's happening now is this enthusiasm for technology, and it's somehow making everybody forget what their roles are, that we have rules and laws, and that those are things that help us determine how our society looks. I don't think it was ever the intention to be enthusiastic about the innovation economy and have that then become governance of social impacts. I really don't think that was something that happened on purpose, and I think we need to be very aware of the fact that this is now happening regardless.
    There is an article written in 1998 by a scholar named Lawrence Lessig that said “code is law”. Software code is, in some cases, determining.... They are not “law laws”, but they are determining social norms and the ways we interact with each other. I just think these are things we might not have understood as this began. I do not want to ever think—and I don't want anyone here to think—that people who are technologists even have a handle on the implications of all of this.
    Having said those things, I have just a couple more points.
    One of them is that democracy moves slowly. This is good. This stuff is hard. I would really caution everyone in this room to consider how much education we need to be doing before we can even be making decisions that are informed by people who live in this country.
     I know there's a lot of enthusiasm, and everybody says tech moves incredibly quickly. We have agency over technology. Technology is not something that just pops up and doesn't exist because of humans and their agency, so we need to remember some of those facts.
    Another thing to be very clear about is that we are blurring the lines between procurement and the influence of purchasing products, or using products, and how that trickles down to the people who live here.
    In my opinion, what is happening in Toronto is problematic because you should not be making policy with the vendor. This is essentially what we're doing. We are allowing someone who is going to be a vendor to influence how the policy for said vendor's work will go. I do not understand how anyone could have thought this was a good idea to begin with. I don't think we should continue this for much longer. In these cases, we really need to be aware of the ways these two issues are linked to each other.
    Another thing that relates to this is that we've been thinking about technology as an industry. I see that in this country, a lot of the narrative is about wanting to do well, wanting to be innovative, wanting to do the things that make us leaders in technology, and there being a lot of opportunity for prosperity and wealth development. This is true. However, there's also a much larger narrative about what it means to lead in the governance of technology and the governance of data, and Canada has an opportunity right now to lead.
    You have probably heard a lot of good things about the General Data Protection Regulation in Europe. It's not perfect but it is definitely moving towards some of the things we should be thinking about. I am confident that if we really take this seriously, if we look at impacts and engage people better, we can lead.
    This is an opportunity. There's a lot of fear and anxiety about what to do. If we don't go fast and we are very considerate in what we're doing, I see a great opportunity here for the country to show global leadership in what to do with data governance and governance around technology. I don't want us to miss that in this need to react to fear, anxiety or issues that are quite complicated. I really don't want to miss that point.
    I also want to talk about opportunity as a technologist. I think it is something we need to think more about. How do we develop social and public structures that use all the wonderful things that technology can produce, more for public good and more within government? We need to look at our academic institutions and ask ourselves why we're not developing technology that we are using.
    If you go out into our communities where people are talking about digital rights and digital justice, they are wondering why we aren't building tools that we could be using for community organizing, or for social good—lots of the ways people use Facebook or other things.... Why aren't we doing better at building systems, at building competency so that we can be building those products, figuring out different models, and thinking about how we can use these things within government.
    I really want to stress this. The idea that government can't keep up with tech, or that there's a problem here because people in government don't.... This is not my belief. I'm telling you what I hear a lot. We really need to shut that down and start to show that if there is an interest in really using technology well across the board in our society, we can be intentional and make investments to make sure that happens. These are all opportunities for the country.
    Again, when you respond to fear, you respond quickly, and I don't think that will be a good response. I think this case is a very good one to watch, as is the Sidewalk Toronto example. There are big issues coming out of here. There is nothing wrong. I will say this as a technologist: Everybody will think we are doing wonderful things for technology if we take it slow and figure out what to do.
    This includes industry. It is not helpful to industry if you are not clear with them as to what the guardrails are, how their operations have to be law-abiding and how they can be encouraged to reflect some of the values that we as technologists think should be there in terms of sharing values, being open with things and considering things that aren't necessarily proprietary.
    There are lots of ways to use technology. There are lots of ways to use math. We shouldn't think this is only a business thing. This is a social thing. There are a lot of really exciting things to do in there.
    I'm trying to end on a hopeful note here because I truly believe there is great opportunity. I want to make sure we follow processes that ensure people are engaged in the development of what we're going to do next, and we do not rush that. There is no need. There is a lot of urgency in terms of not going fast. We need to really quickly decide that we are going to not go fast and be thoughtful about the process we follow from here.
    Thank you.
(1110)
    You finished within 10 seconds, so that's pretty good.
    Next up is Mr. Stucke, for 10 minutes, please.
    I recently co-authored two books on the data-driven economy. The first, with Allen P. Grunes, is Big Data and Competition Policy, and the second, with Ariel Ezrachi, is Virtual Competition. In both books we discuss some of the benefits of a data-driven economy. We also discuss some of the risks, including algorithmic collusion, data-driven mergers and behavioural discrimination. I won't touch on that.
    I'd like to talk to you today about the risks if a few powerful firms monopolize our data. I'd like to break it up into four parts. First, what are data-opolies? Second, how have competition officials in the EU and U.S. viewed them? Third, from an antitrust perspective, do these data-opolies pose any risk of harm to consumers? Finally, I will have some final thoughts.
    First, what are data-opolies?
    Data-opolies control a key platform through which a significant volume and variety of personal data flows. The velocity of acquiring and exploiting this personal data can help these companies obtain significant market power. In Europe, they're known as GAFA—Google, Apple, Facebook and Amazon. As these firms have grown in size and power, they have also attracted significant antitrust scrutiny, particularly in Europe.
    In the United States, it's been relatively quieter. I'll give you a couple of stats. From 2000 onward, the U.S. Department of Justice brought only one monopolization case, in total, against anyone. In contrast, the DOJ, between 1970 and 1972, brought 39 civil and three criminal cases against monopolies and oligopolies.
    One question is this. Is there a difference in the perception of harm across the Atlantic between the U.S. and the EU over these data-opolies? In the U.S., antitrust plaintiffs must allege actual or potential harm to competition. Ordinarily when we think of harm, we think of a cable company—higher prices, reduced output, lower quality. Superficially, it appears that data-opolies pose little if any risk of these traditional harms. Ostensibly Google's and Facebook's services are free. Amazon is heralded for its low prices. Because of these network effects, the quality of the products can improve.
    If you have low or free prices and better quality, what's the problem? Some, such as the late antitrust scholar Robert Bork, have argued that there “is no coherent case for monopolization”.
    One factor for this divergence may be the perceived harm. If there is a consensus over the potential harms, then the debate can switch to the best policy measures to address these harms. I've identified at least eight potential antitrust harms from these data-opolies.
    The first is degraded quality. Companies can compete on multiple dimensions, including price and quality as well as privacy, so a data-opoly can depress privacy protection below competitive levels and collect personal data above competitive levels. The data-opoly's collection of too much personal data can be the equivalent of charging an excessive price. Data-opolies can also fail to disclose what data they collect and how they'll use the data, and they face little competitive pressure to change their opaque privacy policies. Even if the data-opoly were to provide better disclosure, so what? Without a viable competitive option, the notice and consent regime is meaningless when the bargaining power is so unequal.
    A second concern involves surveillance. In a monopolized market, data is concentrated in a few firms and consumers have limited outside options that offer better privacy protection. This has several implications. One is government capture. The fewer the firms that control the personal data, the greater the potential risk that a government can capture the firms, using its many levers.
    One risk is covert surveillance. Even if the government cannot obtain the data directly, it can try to get the data indirectly. The data-opoly's rich data trove increases a government's incentive to circumvent the data-opoly's privacy protections to tap into the personal data. This is what happened with Cambridge Analytica. There are several implications of a security breach or violation of data-opolies' data policies. A data-opoly has greater incentive to protect its data, but hackers also have a greater incentive to tap into this data, because of the vastness that it has. While consumers may be outraged, a dominant firm has less reason to worry about consumers switching to rivals.
(1115)
    A third concern involves the wealth transfer from consumers to data-opolies. Traditionally, you'd think of a monopoly taking money out of your pocket. Even though the product may be free, data-opolies can extract significant wealth through several levels. The first is not paying for the data's fair value. The second is that data-opolies can get creative content from users for free, for example, from YouTube videos or contributions on Facebook. The third level is that data-opolies can extract wealth from suppliers upstream. This includes scraping content from photographers, authors, musicians and newspapers, and posting it on their own website. Finally, data-opolies can engage in what's called “behavioural discrimination”. Basically, this is getting us to buy what we would not otherwise want to buy, at the highest price we're willing to pay. It's a more pernicious form of price discrimination.
    A fourth concern is the loss of trust. We can view this as a dead-weight welfare loss. Some consumers will simply forgo the technology out of privacy concerns.
    A fifth concern is that the data-opoly can impose significant costs on third parties. Here in our work, we talk about the frenemy relationship that data-opolies have with app makers. They need these app developers in order to attract users to their platform, but once they start competing with them, they can then have an enemy relationship. There are various anti-competitiveness practices they can engage in, including degrading the app's functionality. What is particularly important for you is that data-opolies can impose costs on companies seeking to protect our privacy interests. One example, which our book Virtual Competition explores, is how Google kicked the privacy app Disconnect out of its Android app store.
    A sixth concern involves less innovation in markets dominated by data-opolies. Here we point out how data-opolies can promote innovation, but also hinder innovation. One tool they possess that earlier monopolies did not have is what we call “nowcasting radar”. They can perceive trends well in advance of, let's say, the government antitrust enforcer—nascent competitive threats—and they can squelch those threats by either acquiring them or engaging in anti-competitive tactics.
    A seventh concern is the social and moral concerns of data-opolies. A historical concern of antitrust was about individual autonomy. Here, a data-opoly can hinder the individual autonomy of those who want to compete on their platform. A related concern is data-opolies making their products intentionally addictive. Here you have an interesting interplay between monopoly and competition. Ordinarily, a monopolist doesn't have to worry about consumers going elsewhere. Here, however, the data-opolies can profit by getting users addicted to spending more time on their platform. They can thereby obtain more data, target them with advertising and increase their profits.
    The eighth concern is the political concerns of data-opolies. Economic power often translates into political power, and here data-opolies have tools that earlier monopolies didn't—namely, the ability to affect the public debate and our perception of right and wrong. Data-opolies, as shown in the Facebook emotional contagion study, can affect how we think and feel, particularly as we migrate to digital personal assistance and much greater interaction with the data-opolies' products. You have several risks. One of them is bias. The news we receive will be more filtered, creating echo chambers and filter bubbles. The second risk is censorship. A third is manipulation.
    Several themes, in conclusion, run through my papers.
(1120)
     The first theme is that the potential harms from data-opolies can exceed those from monopolies. They can affect not only our wallets. They can affect our privacy, autonomy, democracy and well-being.
    Second, markets dominated by these data-opolies will not necessarily self-correct.
    Third, global antitrust enforcement can play a key role, but here, antitrust is a necessary but not sufficient condition in order to spur privacy competition. There really needs to be coordination with the privacy officials and the consumer protection officials.
    Thank you.
    Thank you.
    First up is Mr. Erskine-Smith for seven minutes.
    Mr. Stucke, you talked a lot about potential harms. Facebook, Google, Amazon, Apple—they've all existed for quite some time, or in my lifetime. Show me some actual harms.
    Let's start off with privacy protection. There's a perception that consumers aren't concerned about their privacy, but if you look at the data, it actually shows that consumers are resigned about privacy. They want greater privacy protection—this goes across age groups, not necessarily just the older group—but they don't really feel they have any power to do so.
    Then think about Facebook and Cambridge Analytica. There was this whole “delete Facebook” movement. Nonetheless, when Facebook reported its first quarterly earnings after the scandal broke, it did not take a hit on either the number of users or their revenues. In a competitive marketplace, you would think, then, that consumers would get products and services that would tailor to their privacy interests, but they don't.
    The other thing is just look at the EU and the Google shopping case. There you can see the power that the platform can have in promoting a product. According to the European Commission, Google recognized that its product was subpar, yet by its ability to allocate traffic in such a way as to promote its own products, where it put its own product on the first page of the search results and hid the competitors' products on the fourth or later pages, that had a significant impact on rivals.
    That's a concern. I mean, we went through the annual reports of companies, and one of the things they identified as a risk was their dependency on these super-platforms and how these super-platforms, in hindering the functionality and the like, can really adversely affect them. We have the example with the Google comparison shopping case.
    I could go through all eight that are in my paper, which was published by Georgetown University, and give specific evidence for each of those eight.
(1125)
    I have only four minutes left, so rather than doing that, let's turn to solutions.
    We tabled a report in the House in February of this year recommending changes to PIPEDA. We tabled an interim report in June recommending some additional changes to better protect privacy. I'm not sure whether you've read those reports.
    Where do you see the answers?
    One thing is that there is not a simple answer. The way I look at it, you can look at ex post, after-the-fact measures, such as increased antitrust enforcement. That would be one thing.
    What does that look like in this case—that Facebook can't acquire Instagram? What are we talking here?
    That would be one. Right now, looking at the Canadian competition officials as well as the U.S. competition officials, there's very much a price-centric focus on mergers, so it's improving, then, their tools for non-price effects, including data-driven mergers.
    One way would be more informed antitrust enforcement. That's ex post. Then you would have, ex ante, GDPR-like requirements that could help kick-start privacy. That might be greater data portability so that users can transfer their data. Another might be greater resolution on who owns the data and on the property rights an individual has with regard to personal data.
    I would look at it from both an ex post and ex ante perspective.
     With the rest of the time, Ms. Wylie, you in general terms set out some of the big picture problems. If we were to get more granular at some of the solutions this committee should be looking at.... In February we recommended data portability. We've recommended privacy by default. We've recommended not quite GDPR-level standards in some cases because we didn't recommend such a strong version of the right to be forgotten and suggested further study, but certainly well above where we're at right now.
    Are we missing anything, and if so, what are we missing?
    The issue is bigger than privacy. You need to go up a notch and get into ownership and control because we're talking about issues of power. Privacy is definitely an issue.
    I'm going to give you the example from the Sidewalk Toronto project that concerns me and might indicate a solution. One track...and I totally agree. I think we're looking for a bundle of solutions here, not one. In city planning people are looking for more data. They say they need better data, more data, and they need to use that data to inform public service delivery. We should not be losing control of the inputs to our policy creation, whether it's in vehicles or ways of getting at data or ownership or access to data. This is just one little example that applies across every piece of every policy in this country. Honestly, we cannot lose access to data that we need to make policy and that is what is further down the line. When you lose control of data, that for me is terrifying.
(1130)
    I read about the café where they give the coffee to students for free. Students have to make sure they share their information about what they're studying, I think what year they're in, and then they get free coffee as long as they're in university. It's an exchange. They're giving out something that isn't particularly meaningful to them, but when aggregated for the company is quite useful. It is a market exchange. How do you get away from that?
    To put it more bluntly Google is giving me a service for free and I'm giving them data and now we want to say that data has to be used for a public good, but I've given it to them and I've gotten an exchange of value.
    This is early days thinking of this for me, but we are talking about ownership, control of data. We need to start thinking about usage. We need specificity from people as to what they're doing with data and to start to negotiate at a more granular level what can be done with people's data, because right now people are getting open-ended access. At the other end of the contract, the person who is getting the coffee is not being given any real insight into what might be done with that data after the coffee, shifting thinking to usage and being clearer about how people's data are applied to particular individual cases and what the true exchange is. It's not to open it all up and then you've completely lost track of how your data is being used.
    I'm out of time. I hope to come back though.
    Thanks, Nate.
    Next up for seven minutes is Mr. Kent.
     Thank you, Chair.
    Thanks to you both for very helpful, very informative presentations today.
    Just to start, Ms. Wylie, with regard to your concern about sidewalk labs and the Toronto waterfront revitalization partnership, the Auditor General of Ontario has actually launched a value-for-money study to find out exactly what the details are that she is unaware of. She has questions about some of the issues that you raised, but not really understanding whether the assignment of a very large and valuable part of downtown Toronto to the Google sister company's control for $50 million was a deal worth the value that they've placed on it.
    I'd like to start first with one of the little-explored areas of the new U.S.-Mexico-Canada trade agreement announced this week. We're still waiting for details on specific points with regard to digital data from the Canadian government. There are translating issues to be resolved. From the office of the U.S. trade commissioner, under what he considers to be the key highlights of the digital trade chapter, is to me a very concerning point, which says:
The new Digital Trade chapter will....
Limit the civil liability of Internet platforms for third-party content that such platforms host or process, outside of the realm intellectual property enforcement, thereby enhancing the economic viability of these engines of growth that depend on user interaction and user content.
     This would seem to be a strengthening of the, as you say Professor, data-opolies' rush for revenue-generating profit, as opposed to concerns for protecting individual privacy. It's been suggested by some tech commentators here in Canada that in fact this digital trade chapter will make it much more difficult for governments like ours to set new standards that may be closer or not to the GDPR protection regulations, and would basically allow Facebook to remain aloof and above any investigation of Cambridge Analytica's bad practices or illegal practices.
    Professor, could you respond first.
(1135)
     I'm unfamiliar with that provision, so I can't speak directly on that point. More generally, the point is well taken that if these companies have very little to fear in terms of liability, their incentives then can be askew. To Bianca's point earlier, there was discussion about a fair exchange. The point that she raises is correct that users may get a free cup of coffee, but they don't necessarily know, first, what the value of their data is; second, who else can have access to that data; or then third, how that data could be used to profile them. That could have significant implications, not only in terms of economic implications but also implications on governance, implications on start-ups and the like.
    Any sort of limitation of liability of these data-opolies should be something that should be scrutinized quite carefully to ensure that the incentives that the data-opolies have are aligned with the citizens' interests.
    Would that apply also to the responsibilities for those who use their platforms? In other words, it would apply to third parties as in the case of Cambridge Analytica, AggregateIQ, Facebook, and that interaction where there seem to be claims of plausible deniability among all of the players in this scandal because data came and it went and it was manipulated or processed and applied.
    Exactly. Here you have consumers whose data was being used and they could never envision I think Cambridge Analytica. I think it's telling, because if you've had companies come to you and say, look, we're going to promise greater transparency and the like, but they're not going to hold others accountable who have access to that data, then that's a real problem.
     Ms. Wylie, what are your thoughts?
    To build on what Maurice opened with, the major theme is power asymmetry. Knowing that we're sitting with this big power asymmetry, technology entrenches those things. Anything that's going to take what exists now and just entrench it further will only accelerate all of these negative impacts. I'm the same. I'm not familiar with the specifics, but if what this is doing is trying to hold onto the status quo, that's not a good thing.
    Again, the U.S. trade commissioner calls this an unprecedented accomplishment in the area of digital trade, and it provides for the movement of data across borders, which in some cases would concern Canadians and Canadians' privacy.
     I have another key question that I'd like to pose to you, Professor, but I'll save it for my next round when I have a little more time.
    I'll yield. Thank you.
    Thank you, Mr. Kent.
    Next up, for seven minutes is Mr. Angus.
    Thank you. This has been a fascinating discussion. Again, I have to confess I'm a former digital idealist. I thought Google were the most wonderful people on the planet. They used to wine and dine me because they were young upstarts and we needed an innovation economy. That was 2007. Since 2015, we feel like the world has completely disintegrated around our feet in terms of what we thought we knew, politically, with the power of these technology giants. This Cambridge Analytica scandal has opened our eyes on how we need to put more focus on this.
    I think the idea of this smart city is a fascinating example, because we're talking about public spaces and the right of citizens to travel in a public space and also to have private lives. Suddenly, it's a really cool idea to turn that over to a private company with enormous and unprecedented international power, without scrutiny.
    Eric Schmidt, from Google, said he's over the moon with the government's deal, because he said they finally got their wish for someone to give them a city and put them in charge. Then he said that the application, the project, might require substantial forbearances from existing laws and regulations.
    Ms. Wylie, number one, has Google earned that trust? Number two, why should we give them any forbearance from existing laws and regulations?
(1140)
    Absolutely not, and we should not.
    It's that simple.
    Yes.
    Okay.
    One of the outstanding issues with this deal is the ownership of the IP and the data. Again, we've said in Canada that we want to be an innovative economy and data is the new oil, but that agreement doesn't have any guarantee as to who owns the data. Is that something that needs to be examined before we go into any further details with this agreement with Google?
    Yes, it does.
    Just to expand a bit, this is not one data that I'm holding here. I understand that we like to talk about data because it helps us get a handle on what we're talking about, but we need to realize how far down the road some of these companies are with the technologies that they're developing. When you start to hear corporations saying that they're going to open all the data too, so don't worry, you start to think that maybe it's not just the data. Maybe it's the usage. Maybe it's the data mixed with other things.
    This is why we really need to start thinking about this. This data is a primary input to our knowledge about ourselves and how that plays out in policy. We cannot lose access to it. I know this gets dystopian, but if you stop understanding whether what someone is telling you is a pattern or is a fact, that is very dangerous, and that is what this blending idea is potentially leading to. We need to keep control of data, products, IP and everything that we're going to be using.
    I'll end on why this is a smart city topic. This is the governance of a neighbourhood. This is part of a city. We're not doing corporate governance in Canada by accident or because we wanted to do something innovative.
     Professor, I'd like to bring you into this discussion about what they're planning in downtown Toronto.
    My grandfathers were miners and they didn't have much education, but they swore that they would never live in a company house. They would never shop at a company store. The mining communities fought like hell to have independence from the company, yet we're being told that it's going to be amazing to build a company-run city and that it will be all in our favour.
     Some of the stuff that Google, or Alphabet, is offering is that it's going to have innovative transportation like Uber, Lyft and self-driving cars, as well as cloud-enabled smart sensors. Alphabet has an interest already in self-driving cars through its Waymo subsidiary. It has a financial stake in both Uber and Lyft through two VC funds. It has unit building sensors like Nest. It has its own cloud platform.
    Are we basically saying to Google, to Alphabet, “Come in. Set up all your products that will benefit you, and our citizens will like it or lump it. If they're going to live there, they're going to live in the Google company town”? What does that mean from a social perspective and an economic perspective, and does there need to be a serious antitrust provision put in if we're going to deal with these kinds of projects?
    One of the things that Google, among others, has argued is that data is non-rivalrous. Basically what that means is that other people can use data and it doesn't really devalue the data itself. Google has argued that this is why it doesn't have any market power.
    I think that's no longer its position, but it is true that multiple entities can use data and derive value from data. One concern then is that if one entity then hoards that data, it's not shared with others who can derive benefits from it. That's one concern.
    The other concern is this. Let's go back to this sort of frenemy dynamic. I remember when we were writing our book, Virtual Competition, and Uber's concerns were with the local taxi commissions. Its concern was how it could get itself into the various cities, but we pointed out that one of the overarching concerns was that, to survive, it had to be on a smart phone platform. There are two: there's Apple and then there's Android. What Uber needs to exist, its oxygen supply, is basically controlled on this platform. Then you can see that if the platform is starting to go into, let's say, the mapping technology and also the self-driving technology, eventually there can be a collision. When there is a collision such as that, the powerful platform then will promote its interest and not necessarily the interests of others with whom it competes on the platform.
    That could be another concern. If you have now this platform and have all this data, this platform can then promote innovation but innovation that, for example, is complementary with its current products and the like. What are then the [Technical difficulty—Editor] companies that compete against the platform? How are they going to be able to survive? Then the concern is how the platform can tailor it in such a way to promote its interests and hinder the interests of, let's say, technologies that might pose a threat to its business model.
    When I mentioned Disconnect earlier, there was a privacy app that was going to help us reduce tracking. Google kicked it out of its app store. When we presented our research, someone in the audience had a really good quote that said that in trying to promote privacy, Disconnect is like inviting an arsonist into one's home. That's the perception—that anything that might be a threat to this data-opoly could then be kicked off. That could have a significant chilling effect on innovation, so that's a risk that needs to be taken into account.
(1145)
    Thank you, Mr. Angus.
    Next up is Ms. Vandenbeld for seven minutes.
    Thank you very much, both of you, for being here.
    My first question is for Ms. Wylie.
    You mentioned in your remarks that we should not go fast, that we need to consider this, that we can't just react. Now a lot of what we've heard is that this is something that is moving very quickly. We're already very much behind in terms of responding as a government. Can you talk a little bit about, first of all, what you mean by not going fast? Also, is there a danger in not moving quickly?
     This is informed by the fact that I have the Sidewalk Toronto project in my neighbourhood. Can you imagine that after a year of these kinds of concerns on the clarity—nobody understands what this project even is—it would still be going forward?
     What would be sensible would be to shut it down and retender it. There's no rush for this. You have to look at these places where you say, “This is clearly very complicated. It's going fast and we're not getting the information we need to make decisions as a public about it”. That one is an example.
    In other places, I think it's pretty clear. Maybe think about it as staunching serious bleeding but leaving a lot of room for what else to do.
    Thank you very much, because I know that there is the GDPR legislation, for instance, and I know there is a sense of urgency for Canada to do something similar to that.
    If I'm understanding you correctly, it is on the things that we already know, and where there's precedent and maybe examples internationally, that we should be moving quickly. But on the areas where even technologists don't understand it, as I think you said, those are the areas where we really need to take our time and understand before we legislate.
    Absolutely, and to say you have these things.... Let's say you have GDPR. Maybe six months ago everybody told me only good things about GDPR. In the last six months, I've had conversations about where there might be some challenges and where we can do better.
     Look at what's working for sure and then also consider where we might be able to do better.
    Can you elaborate on that?
    Certainly. In terms of being clear, I think one of the things that this need to act does to people is that then everybody leaves everything too ambiguous. When you leave things ambiguous, it's not good for.... If I'm a company and it's not clear to me what is legal or what is the direction, you're basically ramping up concern that you don't know how to interpret something.
    It might let you legislate quickly, but then you're shoving the burden of not really being specific onto people who are going to feel those implications. This is across the board. You have to think about everybody who's going to have to deal with that. I think that's a major one: to be clear.
    Also, it's the same as in my initial answer around contracts between people and the companies. Again, we need more clarity. We need to push these sorts of improvements into places where people feel it, where people have the confidence that this is getting better. We also need to be exploring those things at the same time.
     I think those are two very clear opportunities.
(1150)
    What are the things that you think we're now seeing with GDPR and that, as you said, you would be a little bit cautious about?
    It's just lack of clarity.
    Okay. The ambiguity you're talking about is in relation to that.
    Yes.
    I'd like to ask our other witness the same question.
     What do you think of GDPR and the urgency with which we should act?
     One important thing is that privacy is only one component. There's also market power. Even if you have GDPR, you're not necessarily going to address all the risks involving these data-opolies. That's number one.
    Number two, to follow up on Bianca, is that there is some uncertainty, for example, when it's necessary to get the data in order for the provider to provide you the service. Greater clarity on that would be helpful.
    Third, there are some measures in the GDPR that look hopeful—such as data portability—and can address some of the competition concerns, but one thing to consider is that data portability may not necessarily be helpful when the velocity of the data is at stake. Here's a good example: mapping apps. You can port your data for Google Maps, let's say, but that's not going to be helpful to a navigation app that needs to know where you are at this very moment. The fact that you can port data from six months ago is not going to help that new navigation app compete against Waze, which Google owns, and Google Maps.
    Thank you.
    In terms of jurisdiction, this question is for you, Mr. Stucke. A lot of these large, what you call data-opolies are based in the U.S. In terms of our being able to legislate, to what extent can Canada legislate on our own when we have these platforms that are not necessarily based here in Canada?
     I believe Canada could currently prosecute a price-fixing cartel in the United States that was harming Canadian citizens. Under that logic then, if there are anti-competitive harms that are affecting citizens of your jurisdiction, you could reach out, just as the United States, I believe, did with the uranium cartels and prosecuted those.
    There are issues of comity and the like that need to be taken into account but, no, now you're looking at the Australians with the ACCC. They're looking at these digital platforms and the impact they have on news. There is a very important study that is going to come out from them. The Germans, the Bundeskartellamt is looking at Facebook. You have the European Commission looking at Amazon and how it's using data to perhaps favour itself.
    We live in a global economy, and if companies in one jurisdiction are harming citizens in another jurisdiction, then just as the United States can prosecute those cartels, so too can another jurisdiction prosecute anti-competitive behaviour in our jurisdiction.
    Thank you.
    My next question is for Ms. Wylie.
    We're looking at the risks here. You mentioned in your remarks that there are also a lot of opportunities. Could you elaborate on the opportunities?
    Certainly.
    This will tie in and build off of this.
    I think one thing to remember for the Canadian context is that the fact that we have the Privacy Act and PIPEDA helps us to delineate using data within government versus in private. I don't want us to start losing track of that difference. A consumer is one thing and a resident citizen is another thing. I think we need to hold onto that delineation, perhaps, when we imagine what's next.
    That's where I'm going to say the opportunity is as well. There are opportunities. Again, I keep coming back to the smart city context. We cannot lose control of stuff that is input to policy. To say in procurement that, if it's data related to these things, that ownership should fall to a city or to the country.... If these are inputs, these are things you can update in procurement, rather than trying to manage this all through privacy. I cannot agree enough. There are so many things here that go outside of privacy.
    In terms of controlling inputs and then using data in our government better, there is lots of opportunity there—ample opportunity there—and it means we have to make sure it doesn't get privatized.
(1155)
    Thank you.
    It's basically our digital governance study.
    Yes.
    Ms. Vandenbeld, that's time.
    Next up for five minutes is Monsieur Gourde.

[Translation]

    Thank you, Mr. Chair.
    I thank the witnesses for being here this morning.
    There seem to be some grey areas in the new digital environment that has been prevalent over the past two or three years. I think Canadians feel that they are seeing only the tip of the iceberg.
    Ms. Wylie, you talked about the importance of understanding what is happening. Unfortunately, I don't know whether Canadians are aware of everything that is happening. What do you think?

[English]

    I completely agree with you.

[Translation]

    Okay.
    Mr. Stucke, what do you think?

[English]

     It's true that we don't know that much.
    In fact, I'll bring this out for you. Facebook did a study. It was called the emotional contagion study. It altered its algorithm, so that it gave some users more positive news and other users more negative news. They wanted to see what impact that had on individuals' behaviour. The ones who got more positive news were more positive in their posts, and the ones who got more negative news had more negative reactions.
    It was only because this study was published that it created such an outcry. You realize, then, the power that these companies might have to affect the public discourse.
    This lack of transparency will only increase as we migrate from a phone world to the world of a digital personal assistant, in which perhaps one or two of these data-opolies could very much control, with Google with its Home, and Amazon with Alexa.
    Now you're going to have, in orders of magnitude, a greater amount of data and greater interaction with the digital assistant, in the home, in the car, on the phone and elsewhere. There's going to be very little transparency on how that digital assistant is going to recommend the products and services it provides—what it features, what it says, what it does and the like.
     We're really moving into an unexplored terrain.

[Translation]

    Ms. Wylie, Mr. Stucke, in your opinion, whose responsibility is to denounce that situation and to educate Canadians on the new reality of personal information getting around and being used to profile our behaviour, economically speaking or otherwise?
    Ms. Wylie, what do you think about that?

[English]

    I think a lot of people are sort of culpable in where we are. As a citizen, I have perhaps legitimized the last 10 years of how the government has been acting by not being more vocal. As a technologist, I saw hints of some of this, but in that time I also wanted to believe that action was starting to occur within government to take some more control of the situation.
    That didn't happen. To me, this is also where that shift.... There's a responsibility on behalf of government to protect the people who live in this country. We talk about the trust in government being low sometimes. What's happening now is completely destabilizing the legitimacy of the government. To be acting as though this isn't a big deal—we're just going to do little tweaks here or there—when we don't have fundamental, big changes under way, as well as big questions around the asymmetrical power that impacts the people who live here, a lot, is very problematic.
     At this point in time, this falls to government. I don't want people to have to become hyper data literate in their time off right now. This is why we have laws and policies, to make sure that people don't have to be overly engaged in this. That's where we are now. People are having to get super engaged in this because they realize how vulnerable they are. They're not protected at this point in time. This is problematic to me, and this really falls on government.
    If you're a company and you're allowed, of course you're going to do it.
(1200)

[Translation]

    Mr. Stucke, what is your opinion?

[English]

     In the past 10 years we've had a natural experiment in relying on market forces. The belief was that if we leave it to the free market, the free market forces will allocate data and privacy in ways that promote our needs. The problem—even with the market fundamentalists—is that we didn't appreciate these barriers to entry and these network effects, which are unique in this data-driven market.
    One thing is that market forces will not necessarily provide the solution. We should not rely on that. We can have very powerful firms that can dominate an industry for years and could adversely affect innovation as well.
    Given that, there is a role for the government. What type of role should the government play? Up to this point the government has more or less taken a “notice and consent” standpoint, which is that the company just has to provide a privacy statement and that, as a result, will be sufficient.
    I was at a conference last weekend. Joseph Turow from the University of Pennsylvania does a study every few years. What people have found is that when you say to someone that a company has a privacy statement, they assume the company is protecting privacy, even though the privacy statement could be to the contrary. Putting too much on the consumer to read and to navigate this.... It is too much.
    I would argue instead to look at some good privacy-by-design or privacy-by-default mechanisms to make it easier on the consumer so they don't have to read these privacy notices. Even when they read the privacy notices, many of them say there is no ability to negotiate. What would be an alternative to this scenario? Here, it might be data minimization—that a company can't collect data if it's not necessary for them to provide that product, and the individual can say no. They have universal opt-out. They would expressly have to opt in for particular instances, and it would be well explained to them.
    That's a little something that I would encourage you to explore.
    We had better move on to Mr. Picard, for five minutes.
    Professor, you talked about establishing the value of the data owned by me or by anyone who supplies personal data. Let's say I'm a company that invests a huge amount of money—billions—in software. My deal is that you give me your name and number and address, and I'll give you access to billions of dollars of research for you to have live, and write anything you want on a blackboard, and that's it. In your wildest dreams, you would never be able to access or work with this technology if someone didn't give it to you. That's a fair deal for a few names and numbers compared to the price of the investment I made to develop the software.
    How do you evaluate the data then?
    Right now there really is no way to.... There are instances where data is valued and where data is bought and sold. There, you can ask what the market value of that data is and how it's used, but often there is no sort of market value for that data. It's hard, because you don't know exactly who is using that data for what purpose, and how that data may be used later on when it's merged with other datasets. I really don't know the value of my data, and I don't know if I'm going to get a fair deal, much like the earlier example of getting a cup of coffee. I don't know if that's a fair exchange.
    The other thing is that in a competitive market I would have alternatives and I could see how much they would pay me for my data. One thing is, how do you unilaterally assess the value of the data? Second, is there sufficient, robust competition so I can get a true market price for my data?
(1205)
    Would you expect me to ask Facebook for royalties every time they use my data with someone else?
    What would be the market clearing price in that instance? If you ask Facebook, they have to provide you, first of all, with the data, but also your content that you post on Facebook. You're ostensibly working for free. You're a free labourer providing content on a platform that's used to attract other people. Now if you were to say, “You need to compensate the individual for the data and the content”, what would then be the number? It could be quite arbitrary because you don't have a robust market right now. You have one dominant social platform.
     With my personal data, if I decide to share with the people, with the world, what I ate for breakfast, that's my own business. They know my name because I'm on Facebook. They know my face because my face is on Facebook, except for those who put the face of their dog on it. Anyway, they know who I am, what I eat, what I wear and where I go, because I want to share that with the world.
    In order to be able to do that, I use a software that I don't have the means to develop myself. I would never be able to use such a tool. They give it to me for free, practically. In fact, there's no such thing as a free ride in this world. In exchange, as with any market value, because market value is not a dollar sign but what you get in return for your offer, I have billions of dollars of researched software to make my life funnier and more exciting. That's a fair deal, don't you think?
    Is it a fair deal for the consumer whose data is being collected and tracked?
    That is something that the Germans' Bundeskartellamt is looking at. They're investigating Facebook for, not so much the data that Facebook collects when you're on Facebook, but when you're then going to any other website in which there's a Facebook “Like” button. Their consumers were not aware that their data was being collected. Whenever they went to the website of, let's say, The New York Times, or the Wall Street Journal, or any other website, Facebook was collecting data on them.
    The other point is that, yes, there are benefits—and I've testified before the European Commission on this—in accessing data, but now you can control the terms. The data-opoly can then determine with whom it's going to share the data, and under what terms and under what conditions, and then with whom it won't. Now you're putting a lot of faith in one particular firm. Look at AT&T and Bell Labs, back in the seventies, which had lots of innovations. Now you're relying on that company and what innovations it wants to promote and not promote.
    I have 30 more minutes of questions. How much time do I have?
    You're 20 seconds over, but we're going to have some more time at the end.
    Okay. Thank you.
    I'm usually pretty lax with the time. I like to see themes follow through, so rather than cut them off, we want to hear what they have to say.
    We'll go to Mr. Kent for another five minutes.
    Thanks again, Chair.
    Let me start by saying that I think the mega-data companies have provided a multitude of very meaningful benefits over the last couple of decades. Artificial intelligence is spectacularly beneficial in some areas, algorithmic programs, and so forth.
    However, we were surprised to learn in this committee, back in April, when a Facebook senior Canadian executive and the deputy chief privacy officer for Facebook from the west coast of the United States appeared, and although they and others had many meetings with senior ministers of the Canadian government and senior decision-making officials, there wasn't a single registered Facebook lobbyist on the Commissioner of Lobbying of Canada's registry site.
    It's good to note that about a month later, Facebook has registered one official lobbyist, and we don't know whether these executives are still unofficial. They explained their meetings with the government officials as assisting them to understand the capabilities and processes of Facebook in, I guess, governmental terms.
    Professor Stucke, you wrote an article entitled, “Should We Be Concerned About Data-opolies?” It is a very detailed article. You made one point, saying:
Companies need things from the government; governments often want access to data. When there are only a few firms, this can increase the likelihood of companies secretly cooperating with the government to provide access to data. Moreover, a dominant firm is likely to lobby the government on many more fronts.
    Could you elaborate on that a bit with regard to perhaps potential compromises of government regulation when it comes to the consideration of a Cambridge Analytica-AggregateIQ-Facebook scandal?
(1210)
     Right. It has been a historical concern of antitrust that once you have significant economic power, that could translate into political power, and you then create policies that promote the dominant firm.
    Yesterday in class we talked about the DuPont case, which is an antitrust case. There, DuPont was very successful in erecting tariffs on cellophane to protect its dominant share in the market. There is no doubt that you could probably cite examples of that as well. There was concern that you could then start creating policies that help protect the dominance.
    The unusual thing about these data-opolies is when it starts coming to surveillance. I was recently in Hong Kong giving a similar talk, and one of the concerns there was how the government is getting data from these private data-opolies to help create a credit score on consumers—like a general sort of citizen score—to better monitor and track them and the like. They're basically co-operating with these data-opolies. You have a relationship whereby they're providing data to the government, and then the government is providing favours to the company. This is a historical concern with a new twist.
    Are there comments?
    From being in communities and talking to people...and maybe this just needs to be said because I'm running community meetings and talking to people about data and technology. Some of the sitting people in government seem to think that it's cool to be doing things with technology leaders, that somehow that's making the country appear to be progressive, and that's great for....
    Whatever this enthusiasm is, it sounds ridiculously simple but this is why I find this embarrassing. It is not cool to be hanging out with companies when you know the impacts of these companies, and that how this rolls down into life in Canada is affecting things regarding our sovereignty, democracy and privacy.
    People on the ground are not amused, and this is a message I would like to deliver on their behalf. This is not funny. This is not fun and cool. This is part of how Silicon Valley does its work. It makes things seem cool, friendly and easy to use, but magical. Don't ask how it really works because you're not a technologist. You see how this all plays out. It scares people out of feeling able to challenge what's going on. You fear that you're going to look like a Luddite, you're going to look stupid or your question might not make sense. These are all pieces that, when you posed that question, came to mind.
(1215)
    Thank you, Mr. Kent.
    Next up is Mr. Baylis.
    Assume that you had the right to write the laws right now. We have a moving target, obviously. As Charlie pointed out, what we thought was great 10 years ago doesn't look so great now. With the rules we put in 10 years ago to allow the Internet and data to elaborate and grow, we're now starting to say, "Whoa, what have we done here?" I imagine 10 years from now we might be doing the same thing.
    If you could frame out for us, in a big picture, from a philosophical and actual point of view, what would be the three things you would do right now if you were the government and you had that choice? This is in terms of our privacy, the use of this data and the control of it.
    I'll start with you, Ms. Wylie.
    Thank you.
    This is hard for me because I don't think we know the answers, so I'm just going to riff a bit on some areas.
    One of them is that we need to be having conversations with the people in this country about some of the things we're talking about in terms of privacy. Because I think there are also a lot of opportunities with data, we need to talk about trade-offs.
    The first thing you'd say is that.... I have to admit that I was one these people. As we got into the study, I started saying, “Oh, my God.” Then we scratched the surface a bit more and I said, “Oh my gosh, they're doing this too.”
    You're saying that we should go out and educate our population about what's actually happening right now with their data and privacy.
    Yes.
    You would make the contention that the population is not up to speed.
    I would, and that's me included, even though I'm trying. There were times when we did nuclear public safety campaigns. We talked about fire. We talked about risks to public health and risks to the country. I think it's time for that again. That's one piece.
    Secondly, I'm drawing a little diagram here. I think laws should be made slowly. I think we need to understand.... I don't know if you've heard of it. It's called the pathetic dot. Lawrence Lessig had a model. There are four forces that shape our lives. We are the dot at the centre of these: norms, markets, architecture and law. It is extremely important to remember that, with what we're dealing with now, to do better is going to require work on all four of these fronts and the understanding that this is a confluence discussion. This is not a linear one.
    I understand your first point, which is that we're going to go out and we're going to inform the public. We've done that. Then they're going to cry, “Do something.” What would you want us to do now that they know? What should we do?
    I'm not trying to be evasive. I really don't think what I think is that important here. I need to understand what we all think about some of this stuff when we talk to each other. If I hear other stories from other people what I think today might start to change. I think that is all the way up....
    Do you have a philosophical perspective on, say, “If you want to collect my data you have to ask for my permission,” or “I don't mind you collecting my data so long as it's not used”? Do you have some line or anything when it comes to the acquisition and use of data?
    Yes. I think clarity in what is being exchanged is important. I also think, and this goes back to the first point, that limiting.... It is more of a single-use idea rather than “just have it” and then it can keep evolving in terms of what you're doing with it. Clarity in that language in a terms-of-service situation, so that people aren't.... Teresa Scassa is a legal scholar. She describes how it's not consent right now; it's surrender. I think that's quite an accurate way to put it, so some focus over there would be helpful.
    You would say inform the public, and once you've informed the public let there be.... I might say, “Hey, I don't mind you knowing all this stuff about me because it makes my life easier,” but you might say, “I don't want you to know any of this stuff about me, because I find it intrusive.” In a philosophical sense, say, inform the public and then have maybe a sliding scale of what can and can't be done. We could implement this on things like terms of use and so on.
    Yes, and also on things like standards. Explore other mechanisms that aren't just the law. I don't think we can pull this all off within law, particularly not privacy law. There are too many other things going on.
    I also think just because you ask specifics, I know in this city case I'm looking at, identifying the data that is critical to public service delivery and planning, that should be hived off as something that we pay attention to and understand how that works.
    Thank you.
    Professor.
    It's interesting because we've written a couple of books and we were ready to defend our thesis. I remember there was this one head of a competition agency who then looked at us and said, “Okay, so what are we going to do about it?” It kind of caught us flat-footed because we were just identifying the problem without necessarily having the solution.
    What I would encourage would be, basically, threefold. First is to ask what your competition authority is doing about the market power problem. Marshall Steinbaum and I wrote a piece that just came out from the Roosevelt Institute on reinvigorating antitrust.
    To what extent is the Canadian competition authority prepared for the digital economy? I think that's an important issue for you to.... Should the standards change to make it easier to go after these anti-competitive restraints?
    The second would be, then, what are the necessary preconditions for effective privacy competition? Some of the themes you heard from today already touch on this: GDPR-like provisions on data portability, issues on who owns the data. What I would encourage then is really to bring together scholars on what some of the things that are necessary that we could put in so that we don't have to regulate, so that we can allow then the market forces to provide optimal privacy by design.
    The third component, which we really haven't touched on, would be consumer protection. Here would be both before and after. What is it that we can do to simplify it for consumers so it's not like surrender, so that they actually have the ability to choose and feel comfortable in using this data?
    The risks that I hope that I identify show that it's really multifold. You have concerns about journalism right now that the ACCC is looking into. You have concerns about addictions of young individuals and the effects they have on well-being.
    There are other important implications that these data-opolies will have. I just identified those three.
(1220)
     Thank you.
    Next up for three minutes is Mr. Angus.
    We're going to go through another round after this and then we have some committee business at the very end. We need to discuss some things in camera.
    Go ahead Mr. Angus. You have three minutes.
    Thank you, Mr. Chair.
    Have I said lately what a great chair you are? You're going to give me a few extra minutes if I keep ragging the puck here. I want to thank you for your excellent work.
    You're about to get another seven.
    Voices: Oh, oh!
    Professor, I'm really interested that you nailed the question about the Competition Bureau in Canada. Our Competition Bureau is a great model for the 1980s and 1990s. They deal with price regulation and have a very narrow frame. This is not the kind of thing they step into. We have a Privacy Commissioner who is now stepping into all manner of issues, but some of this does lead to antitrust and he doesn't have the authority.
    We need to be looking at these.... Our domestic models are based on very 20th-century problems, and we're moving into a 21st-century world very rapidly.
    I'd like to talk about this need for antitrust. We can understand our own data. I could put it on Facebook and an old high school friend could meet me. Someone could sell me something. Where could my data go wrong? Someone could defraud me. We have no ability to comprehend mass data and the power that say Google or Amazon has.
    The Bank of Canada, which is not a radical organization, has spoken up about the danger of the innovation economy in Canada suffering because of the power of these data-opolies. The Economist talks about the creation of these innovation kill zones. They can anticipate where new start-ups are coming and they can put them out of business. We have not seen the kind of competition we expected in the market in the digital economy.
    In terms of antitrust, how important is it that we have some kind of antitrust mechanisms in place to protect not only the rights of citizens but also—here I am a socialist, talking about the market—making sure that we have a good market of competition?
    I'm not going to say antitrust is.... I'm from antitrust. I worked at the Department of Justice for many years before teaching. I could see the power antitrust can have. It's not the silver bullet. It's necessary, but it's not sufficient.
    You do need to re-amp the tools, the way you point out. We talk about this in our book, Big Data and Competition Policy, and there was a report that came out by the Canadian Competition Bureau recently on big data.
    I've seen the work the European Commission is doing, and what the French and German competition authorities, the CMA from the U.K. and Australia are doing. Now the United States is starting to have hearings on this as well. This is a key component that any competition authority needs to ramp up to better understand the risks in this economy. There are multiple risks that we didn't even talk about today.
(1225)
    Thank you.
    Ms. Wylie, you mentioned earlier about dystopian realities. We are now in a dystopian reality. We have Google dropping its “Don't be evil” motto, working on censored search engines in China.... We've had some of these big data firms tying in with military operations. We've also seen the slide of democratic regimes in Europe, in eastern Europe, to much more authoritarian models.
    My concern about a company like Google Alphabet having so much control over public space is that we do have the potential—we can see it around the world—for these powers to be misused.
    How important do you think it is that in any of these smart city models we have citizen engagement, citizens on the boards and citizen rights to ensure that public space is still protected as public?
    It's extremely important.
    I think the challenge at this point in time is that technology often works to the end-user, to me. I have an app. I use a product.
    When you have companies starting to set up projects like Sidewalk Toronto, where every product line is almost a parallel government line of business. It might work a bit better than a city website. Who knows? In these cases, this is getting extra dangerous, because we're having this opportunity to hide what's going on, to hide it behind the technology and start to confuse people. Is this a corporation or is this a government?
    I think that's really dangerous. Beyond having engagement and representation is also making sure that we're doing this fundamental education around who does what. What do we want to keep protected in terms of who does what?
    This blur is real and this blur can happen through data. Education is very important to do.
     Thank you, Mr. Angus.
    We'll go through another round.
     I just want to challenge you, Ms. Wylie. You said that you weren't going to respond to Mr. Baylis's question because you thought you weren't qualified to, but I think, for people who see the fire, it's important that you either put it out or you give people who can put it out your information on how to do it. I challenge you. Don't feel like you can't advise us on what it should look like, how to protect our data or how to use our data, etc. You're here for a reason, so feel free to give us your opinions. We think you're qualified to be here.
    Next up for seven minutes is Mr. Erskine-Smith.
    Thanks very much.
    I'm wary of relitigating PIPEDA. We did a fulsome study on privacy protection. We made recommendations, and a lot of what both of you have said would be answered to varying degrees if those recommendations were adopted.
    On the competition question, this is new territory for us in many ways, so I want to visit that in more detail.
    You, Mr. Stucke, identified eight potential antitrust harms. A number of those were related to privacy and over-collection of data, surveillance and implications of security breaches. Let's bracket all that relates to data protection and privacy, because we've had that conversation at length.
    Let's talk instead about innovation, the other potential harms and the tools that are required to address those potential harms.
    Let's take one other item off the table, which I think is pretty obvious. If a company is using data to prefer their own product, we already have rules that preclude that from happening, so let's take that off the table as well. We heard from the CRTC that, when certain platforms, certain ISPs, prefer their own video platform over others, a streaming platform over others, it's contrary to the law, so let's bracket that.
    As for the other potential antitrust harms, in your view, what are the tools required to address them?
(1230)
    I'll start off first with data-driven mergers. Let's say that Facebook were to acquire IAC, which is the largest dating platform. It has Match.com and the like. Under the competition authority, you would look at that. It's not necessarily a horizontal merger, because they don't directly compete. It's not a vertical merger, because it's not like a supplier, manufacturer or distribution chain, and it's not really a conglomerate merger, although you argue that maybe Facebook might be a perceived potential entrant. Under that, you wouldn't really have any antitrust significance, but now the issue is whether the acquisition of that data will help Facebook attain or maintain its dominance in other markets. That's one issue.
    How do you assess these data-driven mergers, and how do you assess whether or not Google is even dominant or has monopoly power when you're relying largely on these small but significant non-transitory increases in price standards? You have very much price-centric tools to assess dominance....
    That's fair, but let's get to an alternative to that, then. Let's also address the fact that network effects not only benefit the company, they benefit the consumer at various points. I use Google Maps, and Google Maps is a better product because more users use it. If it were just me and Peter Kent using it, I wouldn't use the app. It wouldn't be particularly useful to me.
     I think we've missed this conversation in many respects all the way around the table. That cup of coffee, I get it. It's more valuable to me than the fact that someone now knows what I study. The fact that someone knows what I study is not valuable to me on an individual basis, but aggregated, it's very useful to the company. The company is able to create value by combining all of our collective data together. I think there is a good exchange in certain respects, and maybe a bad exchange in other respects in different contexts.
    How do we empower the Competition Bureau to address this problem?
    Just on network effects, there's good but there's also bad.
    Of course.
    It could help powerful firms become even more powerful until they're entrenched in the marketplace. You have multiple—
    Of course, and in other cases you can provide better product.
    Yes, but you also have multiple network effects. I would point out that DuckDuckGo has a much better privacy policy, but it doesn't have as good a search engine, and it might be just disadvantaged by these network effects. It's a doubled-edged sword.
    How do we empower the competition authority? I think it's in multiple ways. One of them is to go away from price-centric tools when you're dealing with markets that are ostensibly for free. Second is to look at the importance of data—
     We move away from price-centric tools to what?
    One thing is to have, then, alternatives such as a small but significant non-transitory decrease in privacy protection, and there would be coordination with the privacy official and the competition official.
    All right.
    The EDPS is working toward that effect.
    The other thing would be looking at data as an important mechanism, even when it's not bought and sold. I would look at Apple's acquisition of Shazam. There, the European Commission, I think for the first time, looked at that merger like this: Could the data itself help Amazon maintain or increase its market power? Those are the types of questions. Before, that wasn't really asked.
    Then the other thing would be to look at the abuses that these data-opolies could have, and how they can, in many different ways, deprive data to companies that previously might have had the data, or to somewhat disadvantage them. Now you're saying we have the tools for that. That's great, but then I would ask if they are necessarily working to the extent that we would expect. Look to see the enforcement actions that are being taken elsewhere. Is that happening in Canada? If it is, how come our tools aren't necessarily deterring that behaviour?
(1235)
    Then here's the last question I have for you. You've mentioned Germany and you've mentioned the EU. Are those the two jurisdictions you would point us to and say this is the model that Canada and the United States should pursue in relation to antitrust?
    I'm not saying it's the model, but they're starting to ask the right questions.
    Is there no model you would point to?
    No. We are now in the new frontier, whereby the tools that we have don't necessarily translate well into this new.... There's not a well-established model. We're now starting to find out what we should do to address this sort of behaviour. It might be you're not necessarily going to rely on an effects-based standard, but you'll have simpler presumptions—for example, what a dominant firm can or can't do—and just put greater limits on their ability to engage in certain behaviour.
    I'm out of time, so all I would say is that I really appreciate your answers. Where you have specific examples of tools that you think the Competition Bureau should have, if you could follow up in writing, it would be much appreciated.
    Okay. Thank you.
    Thank you, Mr. Erskine-Smith.
    Next up, for seven minutes, is Mr. Kent.
    Thank you, Chair.
    I just have one final question, and it's about this illusive question of who owns my data, who owns the citizen's data, who owns my browsing history.
    In terms of full disclosure, I have two Facebook websites, as a politician. I post content.
    Mr. Charlie Angus: I visit them all the time.
    Hon. Peter Kent: Thank you, Mr. Angus, for visiting the website.
    I encourage relationship building with those who come to the website. I encourage feedback. As a politician I gather data from that website to be responsibly used. I use Google perhaps as many as 50 times a day. As Mr. Erskine-Smith said, DuckDuckGo and Mozilla Firefox are good, but Google is much better for my applications.
    I was struck—and I'm assuming that you were, too, and I'd like your comments—when Facebook's Mr. Zuckerberg appeared before the congressional committee and he would not address the question of who owns the browsing history of those who use his platforms. I'm just wondering. In light of the fact that the Cambridge Analytica-Facebook-AggregateIQ scandal is based on the fact that improperly harvested data, including the vulnerabilities or the very personal aspects of users' browsing history, among other things, came together for this phenomenon that we've come to know as “psychographic microtargeting” and attempts to influence electoral processes.
    I'm just wondering if I could have final comments from you, Professor, and then Ms. Wylie, on who owns my data.
    In the United States that's the great unknown. It came up in a Supreme Court case, where there was geolocation tracking of an individual. During the oral arguments before the U.S. Supreme Court there was precisely this question: Who owns the geolocation data? Here it's unclear. Then to what extent do you own all of the property rights or only some of the property rights? What would then be the bundle of the property rights that the consumer owns, that the company owns, and the like?
    There, it's unknown. It's not yet been legally resolved.
     Ms. Wylie.
    Mr. Zimmer, I want to thank you for checking on me, in terms of my confidence to respond but the reason I'm saying I don't know is because of what I know. I'm saying this in a professional capacity of knowing that the people who tell me they know what to do right now are the ones that I run from the fastest—honestly.
    I want that to be a thing everyone hears me say. I've worked a lot with this and there are a lot of unknowns. That's why I don't want to say I have the answers. I know it's frustrating that we're not sure, but it's because we need to get into more of this stuff. I would just posit that part of the problem is that, when I talk to economists, they have one language, and when I talk to lawyers, they have another language. We need to be working together more on all of these issues, to get us to the next level.
    In terms of the idea of who owns my data, I've been in some interesting conversations about this. One helpful thing I heard was that data is a representation of a fact. No one owns facts. Sometimes it's about the quality of the capture of a fact, and then maybe you can go and say that you've not captured my fact correctly. This is difficult. I would point to Teresa Scassa, who has written a paper recently about data ownership, which gets into the reasons why this question is difficult.
    It's the nature of data that it's not just one, and not finite in its existence. It is challenging in what it is. I think we're still trying to figure all of that out, and the different relationships between what we consider our data and others who are using it.
(1240)
    Thank you, Chair.
    Thank you, Mr. Kent.
    Next up is Mr. Angus.
    Thank you.
    This has been very fascinating. Ms. Wylie, it's been great to have you bring a citizen's lens to this discussion.
    One of the things I'll say, on the positive side of being a Canadian politician in Parliament during the last 14 years, is that we've had some really interesting examples of civic engagement with digital issues. The neutrality battle was very much driven by consumers and citizens, and I think it helped frame the policy in this country. Citizen engagement with copyright influenced two governments to withstand heavy U.S. corporate pressure on the DMCA and notice and take down. We have notice and notice, and we've received all kinds of fist-waving from the Europeans and the Americans, but we've held to a distinctly Canadian position on where we are in the digital realm.
    What surprises me about the Cambridge Analytica-Facebook scandals is that we haven't seen as much in the way of grassroots civic engagement, but you were involved in community discussions and you were out having these discussions. Do you believe, from what you're hearing, that this is an issue that citizens are becoming more engaged with, an issue they want to have a voice in and be heard on?
    Absolutely. The predominant emotion is fear. We need to get in there and make that not the only part of the discussion. Fear can often introduce nostalgia, which is just going to bring us back when we need to go forward. We need to be talking about what is not working and is scaring people, as well as what is an opportunity and how to make the best use of it.
    In terms of this idea of Google's smart city, one of the concerns we're seeing is that we're talking about massive data monopolies. When we developed 20th-century cities, we didn't have private companies setting up electric grids block by block. We moved towards public utilities. If Google does really well, are we going to see Amazon set up one neighbourhood over, or someone else?
    There aren't really other options available for someone big enough to do that. How do we engage in building smart cities, urban centres that offer the absolute optimum of digital engagement, but all within a public space? Do we need to refocus this whole conversation?
    I think we do. I think where the Sidewalk Toronto project went off the rails was that the DRP didn't create conditions. The DRP didn't say the data and the digital infrastructure would be public. When you use procurement as an option, you can set terms within it. It doesn't require master forms. If we want to shape some of these things, we can do it within some of the existing legislation. Are we being clear, however, about the business and responsibility of the state, about where and how the market can help? The market can definitely be a part of it, but those requirements need to be written by the government.
     Professor, in our country we have about the population of California spread out over the area of the second largest country in the world. Most of our population lives within a stone's throw of the U.S. border. We're not like the Europeans where they can establish separate, complete standards because they have such a large population. We are very interdependent with the U.S. on trade, on everything. They're like our cousin, so it's usually a pretty good relationship.
    In terms of our establishing an innovative economy, we've talked a lot here about data sovereignty and its importance, yet we are now getting more and more politically tied in with the big data-opolies.
    In terms of the power to utilize data to drive an innovation economy, how important is it to limit that relationship with the data-opolies?
(1245)
    It's a great question. We've talked mostly about data and personal data, but one thing is to allow the free flow of non-personal data that can also help in innovation—like the workings of a car, like the data that comes from your car that could then go to the car manufacturer and the like—and enable that data as well.
    One thing is that you need data in some industries to innovate, so you need access to that data. But then you also need the ability to compete if you're going to exist on a super-platform such as Amazon or Facebook or Apple or Google or any other one of these super platforms.
    The third thing is...and this is just to follow up on a point that Bianca made. I was at a conference and what we talked about was that the market will not always provide services. In the United States, when we started off, we felt it was a fundamental right for every citizen to get mail. If you left it to market forces then some remote regions might not necessarily get mail. We didn't say that the market would provide it. No, that was a service that the government provided. I think that we have lost that in the last 30 or 35 years, that there are some essential services that the government has to play a key role in providing, like the mail, like other things that maybe market forces, even in a competitive market, may not provide.
    I think that's an important role here so that we get the benefits of a data-driven economy, but in a way so that the economy is inclusive, protects our democracy and also can protect our privacy and improve our well-being.
    Thank you.
    Thank you, everybody on the committee, but thank you especially, Maurice and Bianca, for coming today to be here with us. We appreciate your efforts on the file and we look forward to talking to you again.
    Please send any submissions to our office that you would have for the committee afterwards, any ideas you might not have thought of when you were sitting here. We'd be glad to have them as part of our study.
    We'll move to suspend for about a minute until we clear the room. We'll go into committee business quickly to talk about a few things.
     [Proceedings continue in camera]
Publication Explorer
Publication Explorer
ParlVU