Thank you for the opportunity to speak to the Standing Committee on Access to Information, Privacy and Ethics during your hearings on the Personal Information Protection and Electronic Documents Act.
My name is Donna Bourne-Tyson. I am the university librarian at Dalhousie University; president of the Canadian Association of Research Libraries, known as CARL; and a board member of the Canadian Federation of Library Associations. Joining me today is Susan Haigh, executive director of CARL. We are pleased to be here to share the research library perspective on the right to be forgotten.
CARL is the national voice of Canada's 31 largest research libraries, 29 of which are located in Canada's most research-intensive universities. CARL also represents Canada's National Science Library and Library and Archives Canada. CARL members' parent universities attract over $6 billion in research funding annually, and our member libraries spend over $285 million annually on information resources to support learning, teaching, and research.
CARL members act as a foundation for Canadian-led innovation by providing access to knowledge as well as preserving vital information required to support Canada's research community. Academic research libraries are at the vanguard of technology as the sharing and dissemination of information shifts to digital environments. In this light, CARL has watched the emergence of the right to be forgotten with great interest.
Our position is that there are important rights and freedoms to be weighed, respected, and judiciously balanced in any legislation or regulatory approach to the right to be forgotten. As we noted in our short submission to this committee in April, we are guided by the “Statement on the Right to be Forgotten” issued by the International Federation of Library Associations, IFLA, in February 2016.
CARL has elected to focus comments on the right to be forgotten, but we do support the perspectives on PIPEDA more broadly that will be outlined today by our colleagues from the archival community. Research libraries play an increasing role in research data management, and we are very engaged in defining and practising what we might call the ethical management of data. The library and archival communities see data management as key to ensuring appropriate protection of individual privacy while, at the same time, enabling more data to be openly accessible and allowing technology-based research that mines anonymized or aggregated datasets.
Now I will turn to our views on the right to be forgotten.
In 1987, CARL adopted a freedom of expression statement that confers responsibility on Canadian research libraries to “facilitate access to...expressions of knowledge, opinion, intellectual activity and creativity from all periods of history to the current era including those which some may consider unconventional, unpopular, unorthodox or unacceptable”. This statement echoes the fundamental right to expressions of knowledge, creativity, and intellectual activities as embodied in the Canadian Charter of Rights and Freedoms.
At first reading, the right to be forgotten appears to run counter to this responsibility. This is not to say that libraries do not believe in protecting the right to privacy. Rather, as I will discuss here, the right to be forgotten is a complex and emerging ethical and technological issue that demands a careful balancing of fundamental rights that, at times, can appear to be in conflict.
Libraries are, by their very mission, upholders of the public interest and are sensitive to the concerns around personal privacy on the Internet. The library community recognizes that information on the Internet can cause harm, particularly in cases where the information is false or defamatory. The right to be forgotten can be a legitimate means for individuals to address these situations.
Libraries are also the preservers of the public record and defenders of freedom of speech and access to information. The research library community has identified three dangers to be avoided by any legislation or regulatory approach to the right to be forgotten.
First of all, privacy, however important, must always be weighed against other rights, such as freedom of access to information and freedom of expression. These freedoms are not honoured when information is removed from access or is destroyed. While content can be removed from the Internet by its owners, a “right to be forgotten” approach must ensure that the privacy rights of an individual who is the subject of the content do not unduly impinge on the expression rights of creators of the content, such as authors and publishers.
Another danger of the right to be forgotten is the potential for the over-removal of content. If a right to be forgotten is encoded in PIPEDA or another piece of legislation, lawmakers and regulators must be proactive in reducing the incentives of platforms like Google or Facebook to simply delist information upon any request. According to the section of Google's transparency report that addresses "right to be forgotten" search removals in Europe, current as of May 28, 2017, Google has evaluated over two million URLs for removal and has removed 750,487 of them.
While Google does appear to be attempting to balance competing public interests in its decisions, it is important to remember that each time an individual's privacy is protected through a right-to-be-forgotten request, the speech of those whose content is being delisted may be muffled, raising the spectre of censorship.
Another closely related issue is the integrity of the historical record. Information on the Internet may have future value, both for the public and for researchers. We believe an expert assessment of the impact on the historical record, preserved for future generations of Canadians, and ways to mitigate that impact should form part of every decision to remove information. In recommending this, research librarians recognize that the digital age has increased the accessibility of historical records that might otherwise have persisted only in physical libraries or archival repositories.
In that light, an approach to the right to be forgotten that downplays visibility by suppressing access through search engines seems marginally more acceptable than outright removal. In effect, delisting removes information from the public view obtained through a simple keyword search, but does not actually remove it from the reach of the more skilled and persistent researcher, who may also search for repositories that are not indexed by search engines.
Therefore, in our view, a limited and nuanced application of the right to be forgotten is appropriate. Removing links to references to a minor juvenile crime, or to sexually explicit photographs of a private citizen, is a proper application of the right to be forgotten. But what of removing links to references to a business failure, an injudicious statement by a corporate CEO, or public records that have not been sealed by court order or judicial practice?
To cite a recent specific example, a request was made to remove from the Internet a thesis that contained a chapter relating to organized crime activities by a named person who had since changed his life. The request was not acceded to because it was determined that the work was valid research and because the request was not supported by the thesis author and copyright holder. In that example, CARL would say that the correct decision was made; the thesis should not have been removed from the Internet simply because the person did not want any references to his criminal past to be on record.
The right to be forgotten should not be too casually invoked by individuals, nor should their requests be too readily acceded to by search engines. If implemented, such a right must have limited application, with clarity as to the conditions under which it may apply. There are complex considerations to be weighed and rights to be balanced, very likely requiring case-by-case assessment. In most cases, a review by an informed but impartial party is essential. A right-to-be-forgotten regime that requires a judicial order for any information or data removal seems merited, rather than leaving companies like Google, or indeed research libraries, with the task of deciding on sensitive, ethical situations pertaining to individual Canadians.
In closing, CARL, on behalf of the research community that its library members serve, calls for a very constrained approach to the right to be forgotten, one that will generally require a judicial order, and will not apply where retaining the links in search engines is necessary for historical, statistical, or research purposes; for reasons of the public interest; or for the exercise of the right of freedom of expression.
Good afternoon, and thank you for the opportunity to speak to the committee today. My name is Greg Kozak, and I'm here today speaking on behalf of the Association of Canadian Archivists. I am a professional records manager and I also teach as an adjunct professor at UBC's School of Library, Archival and Information Studies, focusing on access to information and privacy legislation.
The ACA is a national association of professionals who work in the public and private sectors. We have close to 500 individual members and 200 institutional members across the country. Our scope of interest spans the entire life cycle of records, both digital and physical, from their creation to their final disposition, whether that is destruction or permanent retention.
We're also advocates for consistent, accurate, and transparent information management practices that respect national and international standards. Our membership thus includes records managers who deal with current records within their organizations and archivists who deal primarily with historical records in archival institutions or programs. Sometimes, both responsibilities overlap.
We are interested in providing comments on existing or proposed legislative or regulatory texts that may affect our ability to manage trustworthy records and preserve, control, and provide access to authentic records over the long term. It is on these points that we would like to focus our remarks.
Trustworthy records are records that are created in a way that ensures accuracy, completeness, and reliability and that are then maintained and preserved so that their identity and integrity—their authenticity, that is—are unquestionable. Trustworthy records can be used as evidence of the facts and acts they attest, and can be relied upon for both legal and research purposes.
In our increasingly digital and connected world, keeping trustworthy records has become more complex. Much of this complexity relates to privacy issues and to the management of personal information.
Specifically, we see two areas related to privacy in which trustworthiness of a record is challenged. The first is the processing of the data in the creation and maintenance of records.
In his letter to the committee, the Privacy Commissioner of Canada stated that “it is no longer entirely clear who is processing our data and for what purposes”. To add to this point, we would like to note that we do not know how our data is being processed or by what means. The growth of visual analytics as a method of analysis and a reliance upon complex algorithms mining various datasets for decision-making result in a complex web of interactions whose outcome is likely to infringe on the privacy of the people whose information was collected.
In such situations, good records management is a prerequisite to the protection of privacy, as it would control the processing of individuals' data while ensuring the creation of a reliable record of the actions of those entrusted with that data.
The second area in which trustworthiness of records is challenged is in the use of certain security measures to de-identify personal information contained in records. An example of this is tokenization, whereby a known individual's identity is replaced with another unique, non-obvious identifier. The controlling agency retains a table of concordance that permits it to match a unique identifier with the known individual.
The issue here is that such security measures are creating records that are difficult to manage over the long term. Again we can see a convergence between records management and the privacy requirements. In order to establish a level of trust over de-identified records, we still need to know what actions were performed on them.
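To make the tokenization mechanism described above concrete, here is a minimal sketch. The class and method names are purely illustrative, not drawn from any actual system: identities are replaced with unique, non-obvious tokens, the controlling agency alone retains the table of concordance, and an audit log records what actions were performed on the de-identified records, which is the trust requirement noted above.

```python
import secrets

class TokenVault:
    """Illustrative tokenization sketch: replace known identities with
    unique, non-obvious tokens, retain a table of concordance, and log
    every action so de-identified records remain auditable over time."""

    def __init__(self):
        self._concordance = {}  # token -> original identity (held by controlling agency)
        self._reverse = {}      # identity -> token (so re-tokenizing reuses the token)
        self.audit_log = []     # record of actions performed on the data

    def tokenize(self, identity: str) -> str:
        """Return the non-obvious token standing in for this identity."""
        if identity not in self._reverse:
            token = secrets.token_hex(8)  # unique, non-obvious identifier
            self._reverse[identity] = token
            self._concordance[token] = identity
        self.audit_log.append(("tokenize", self._reverse[identity]))
        return self._reverse[identity]

    def re_identify(self, token: str) -> str:
        """Match a token back to the known individual via the concordance table."""
        self.audit_log.append(("re_identify", token))
        return self._concordance[token]
```

The point of the sketch is the pairing: the concordance table is what makes the records re-identifiable by the controlling agency, and the audit log is what lets anyone later establish trust in what was done to the de-identified records.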
Considering the challenges described above, it is clear that solid information management practices are a foundational element to effective privacy management. The ACA thus recommends that organizations be required to include records management capabilities within processes and systems that encompass privacy needs. This aligns with the direction of the European Union's general data protection regulation, which requires privacy by design and default; in other words, records systems designed with privacy in mind.
Our next comments deal with the preservation of records, which is the second hat that we wear.
Archivists acquire records that stand as testimony of human action. These records, created by public and private organizations and individuals, span all fields of endeavour—administrative, scientific, legal, financial, and cultural. Archives acquire records that show humanity at its best, its most ordinary, and its worst.
Preserving records is a societal good that ensures the historical accountability of one generation to another and permits the public to access unique sources of information for a broad range of purposes, such as historical research, scientific inquiry, and addressing past injustices through reconciliation efforts.
In this regard, we recommend preserving PIPEDA's existing mechanisms that permit private organizations to donate records containing personal information to archives for long-term preservation, allowing archival institutions or programs falling under PIPEDA to acquire records containing personal information, and carefully considering the implications of introducing a right to be forgotten or a right to erasure.
At the moment, PIPEDA permits organizations to donate records containing personal information of long-term value to an archival institution for preservation. This mechanism should be maintained to ensure archives are able to acquire and maintain records of private organizations. It is vital that private organizations be able to donate their records, to ensure the all-of-society representational nature of archival holdings.
One area where PIPEDA could be improved is allowing archival institutions covered by it to acquire records that fall under the archive's mandate. Currently, such archives need consent from the data subjects to acquire records containing personal information. In practice, it is very unlikely that organizations would seek consent to allow records containing personal information to be donated to a third party.
Therefore, the ACA recommends that archival preservation of records be recognized as consistent with the initial purpose for which personal information was collected. This reflects the approach adopted by the EU's regulations, where further processing for archival purposes is not considered to be incompatible with the initial purposes of collection. However, the organization must have a bona fide archival mission consistent with ACA's code of ethics and professional conduct, and not have been set up as an archives for the purposes of avoiding the act.
Third, the ACA believes that if a right to be forgotten or erasure were introduced, it would impact the ability of archives to preserve records. It is essential to ensure a careful balance between protection of an individual's reputation and the integrity and authenticity of the public record. PIPEDA is already based on the principle that personal information needs to be kept accurate, complete, and up to date. A wider application of this principle could help rectify instances where incorrect or inaccurate personal information may result in reputational harm, reducing the need for a right to be forgotten.
Regardless, the test to determine reputational harm must be clear, and the bar should be set high enough to remove frivolous or inconsequential requests.
We should also view such a right to be forgotten from a historical perspective. Specifically, personal information generally becomes less sensitive over time. This is already acknowledged in PIPEDA, where it is established that information about someone who has been dead for more than 20 years, or in a record that is over 100 years old, can be disclosed freely.
Similarly, the EU's regulations do not apply to a deceased person. Reputational harm thus diminishes over time, and there will come a point when it causes no harm at all. The legislator should therefore be mindful of introducing any measure that may irreversibly remove or conceal records.
I'll make one final comment on cloud environments and privacy.
Increasingly, records are created, maintained, and preserved in cloud environments that are characterized by location independence. This type of environment was in fact the catalyst for the European data protection regulation, and is a strong aspect of the drive in several countries towards jurisdictional location requirements for the data related to their citizens.
In Canada, some provinces require that public bodies ensure that personal information under their care or control is stored and accessed only in Canada, subject to legislative exceptions. The Canadian government does not prohibit government institutions under the Privacy Act or organizations under PIPEDA from using cloud service providers that store personal information outside Canada but recommends that the privacy risk be identified, including the need for transparency, consent, and notification of the individual the personal information is about.
The ACA believes that PIPEDA should make a definite statement on the issue of the jurisdictional location of private individuals' data; otherwise, what happens to that data will be decided mostly by legal opinion rather than by clear, consistent rules.
That concludes our submission. Thank you very much.
Thank you, Mr. Chair and esteemed members of the committee, for allowing us the opportunity to provide our comments on the review of PIPEDA from a retail perspective.
The Retail Council of Canada, RCC, has been the voice of retail in Canada since 1963. A not-for-profit, industry-funded association, we represent over 45,000 storefronts of all retail formats, including department, specialty, discount, and independent stores; grocers; and online merchants. Retail employs approximately 2.2 million Canadians, and as such is the largest private sector employer in the country.
I am the vice-president of the grocery division and regulatory affairs for RCC. This means that I am responsible for coordinating a range of regulatory files that impact retailers as sellers of products, as private label owners, or as employers. I manage files from food safety to consumer product safety, from drug labelling to regulatory co-operation. This includes matters such as anti-spam regulations, as well as digital privacy and security.
While we are not in a position to comment on the intricacies of PIPEDA, we are pleased to offer some general observations from a retail perspective. Generally speaking, in our view PIPEDA strikes the right balance between taking actions to protect digital privacy and taking a forward-thinking, technology-neutral approach.
As you know, a core concept in the legislation is that of consent. This is a very valid principle. We understand that the Office of the Privacy Commissioner held consultations on the issue and will be releasing a report later this year, and we would be pleased to participate in any consultations the commissioner may consider on guidance around valid consent.
Another core principle of PIPEDA is the mediator/conciliatory partner approach. This approach has a proven track record of working very well. In fact, our members have indicated that they can be and indeed are much more forthcoming in this context than they could be in a more formal, legal context. After all, we are all seeking the same goal: customer trust. Consumer trust is the core incentive to strong privacy protections, not expanded legislative powers and penalties.
RCC members are very aware of privacy issues and take their consumers' information very seriously. From our perspective, additional prescriptive requirements or enforcement powers would accomplish little in this regard, except to add to compliance costs.
RCC members spend a lot of time and effort trying to ensure that their systems are safe. However, the sophistication of hackers and scammers knows no limits and, despite best efforts, they will continue to find ways to circumvent the security systems that lawful businesses have put in place.
Unfortunately, it is easy to blame businesses that try to protect the information they have, because in most instances they can be located and the scammers cannot. Creating stricter requirements and broadening enforcement powers would unfortunately do little to change this situation, except to increase the cost of doing business in Canada.
RCC supports the current collaboration and communication between the Office of the Privacy Commissioner and provinces that have their own privacy legislation, and would hope that this continues as other jurisdictions consider legislating in this area. This would avoid the potential for uncoordinated and inconsistent reporting requirements.
Finally, it is important to remember that consumer data benefits consumers and Canadian businesses alike. Consumer data allows companies to understand what makes individual consumers tick and enables them to tailor and offer products that consumers may want to buy. It shows societal trends, which allows them to adapt their businesses and product offerings. It may indicate where bricks-and-mortar locations might be appropriate. It is useful for feedback on their business: where it went wrong and where it went right. Consumers can benefit through the steps companies take to improve the products they offer based on information they gather. Targeted advertising, when appropriately consented to, can reduce the time consumers spend looking for products by focusing on the things of most interest to them.
To conclude, retailers are supportive of PIPEDA and its technology-neutral approach. It has a proven track record.
Thank you again, Mr. Chair and members of the committee, for the opportunity to be here today.
Thank you very much, Mr. Chair.
Members of the committee, thank you for the invitation to appear today to speak to you on such an important subject.
We, meaning Google and I, haven't had the opportunity to appear before this committee in quite a while, so I'd like to take a few brief moments to tell you about Google in Canada.
In 2002, Google opened its doors in Toronto. It was one of our first offices outside the United States. After 15 years of growth, we now have more than 1,000 Googlers working across four offices: in Toronto, Kitchener-Waterloo, Montreal, and right here in Ottawa. We are excited about Canada. We are excited about the way we've been able to build world-class engineering teams that work on products used by billions of people every day.
Those products are being worked on in the four offices I just mentioned. Our products are being used to map northern communities, to make national parks more accessible to all, and to make our morning commute as painless as possible.
We are also increasingly working with Canada's community of artificial intelligence and machine learning researchers in both Toronto and Montreal. Canada, as we all know, is a world leader in this field, and the opportunity for scientific breakthrough, practical innovation in consumer and business products, and industry-wide growth bodes well for the Canadian economy.
I will turn to the subject under discussion today, PIPEDA. I've been in this field for more than 10 years, and I've always debated how to say it, so I'm glad to hear that there's a mixture.
As a principles-based privacy framework, PIPEDA is as relevant today as when it was first introduced. The broad principles that underpin privacy and data protection regulation have held fast through many cycles of technological change. We expect that the same will hold true as we see mobile devices gain in popularity and as machine learning gains wider use.
Of course, the specific application of these privacy principles will change and evolve, as it always has. At Google, we believe that data-driven innovation is compatible with a commitment to privacy. Our commitment focuses on four elements.
The first is choice. We provide users with meaningful privacy choices throughout the lifespan of their Google account: when creating their account, as they use our services, and when they abandon or delete their account.
The second is transparency. We help users make good privacy decisions by making it easy to see what data Google collects to power the personalization of their services and the advertising they may see.
The third is control. We provide our users with powerful, meaningful privacy controls, ensuring that they are experiencing Google on their own terms.
Finally, and I would say importantly, comes security. We invest heavily in keeping users' data accessible to them and only to them.
At Google we know that there is no “one size fits all” approach to protecting user privacy. Privacy means different things to different people, and we want to help our users to feel comfortable and confident about the information they share with us, even as they interact with our products on desktop, tablet, phone, or home devices.
We place value on being upfront and transparent with our users and speaking to them about privacy in clear language that they understand. In 2015, we introduced a site, privacy.google.com, that answers some of our users' biggest questions, such as what data Google holds or collects and what we do with that data. We've also made users' settings easier to find, understand, and manage, putting it all together in one place called My Account.
I want to underline that while I'm listing websites and URLs, the effort that has been put into experimentation and user experience design to make these useful has been a decade-long investment and process of refinement.
We're not stopping there. We continue to innovate and to improve users' access and control over their account data. For example, we are giving users unprecedented transparency through a site called My Activity, where they can see and manage the information used by Google resources.
How are they reacting? The My Account site saw 1.6 billion unique users in 2016, and importantly, given how we all use devices and access the Internet nowadays, more than 50% of that traffic came from mobile devices. Users have questions about their privacy and their security, and they're getting those answers relatively easily on a device that is really quite small.
With a focus on data security and access control, reasonable user awareness and empowerment, and data portability, we—both Google and the industry writ large—can ensure both privacy and innovation. It's the misuse of data, not its collection, that should concern us most. Let's consider the application of machine learning and the use of algorithms.
These techniques are already deployed in many features that Google's users know and love, such as Google Translate, spell-checking, or spam filtering, and within products such as Gmail, for instance.
Those of you who use our email products may be familiar with something called Smart Reply, which is generated by machine learning and uses our latest neural nets to suggest short responses relevant to incoming email, like “sure, I'll jump on that” or “that looks good to me”. People use it for 10% of all replies in our mobile mail products, so when you see that next time you'll know it might not be that genuine.
Google Home, which is a stand-alone device that provides access to our services, is also screenless and voice-controlled. We had to think of a new way to deliver our privacy notice to users, designing a specific sign-up and user consent flow for this product through the Home mobile app, and making users aware that they can access their privacy controls through their Google account. You've had conversations around this sort of subject in your previous meetings, and it is truly a complex area.
At Google, we feel well positioned as we transition to a new era of computing in which people will experience computing more naturally and seamlessly in the context of their lives, powered by intelligent assistance and the cloud. This transition is as significant as the move over the last decade from desktops to mobile devices.
I'll just touch on two specific points that came up in your previous meetings, and we can follow up in the questions, if you like. You've heard from several witnesses about the challenges of maintaining children's privacy online. We are acutely aware that all our users need to understand the technology they use every day. We invest in making information available to parents. Through tools like the Safety Center, Family Help Centers, and in-product notifications, we work to provide parents and families the information they need to guide decisions about their children's use of technology. We want to provide parents with the tools and information they need to make their own choices regarding their children's online activity. We have built features into our Family Link app, which at the moment is only available in the United States, and our YouTube Kids app to enable parents to decide what is right for their family. The goal is to give kids an experience, guided by their parents, where they can build the skills to engage in smart and responsible online practices as they grow as individuals.
Finally, you've asked previous witnesses, and you've heard from Ms. Bourne-Tyson, about Europe's right to be forgotten.
Information-finding services like search engines are critical for sifting through the vast amount of information online. Many have likened the ruling by the Court of Justice of the European Union to removing cards from a library card catalogue but leaving the books on the shelf. However, on the Internet there are no shelves to browse, no way to walk through the stacks and follow the alphabet to the information you seek. Decisions to delist URLs can affect users' access to media properties, past decisions by public figures, and information about many other topics.
Of course, we at Google understand that there are instances where it's appropriate to remove content from search results because, for example, it's been deemed illegal under local laws. Our products have well-established systems for users to flag content that violates our policies. Authorities may also submit requests to locally block content that is deemed illegal under local laws, including laws about privacy. We have worked hard to be a responsible actor. A crucial aspect—which has been mentioned already today—of this responsibility means balancing privacy with other values, specifically the right to free expression.
While the CJEU may have established a right to be forgotten in Europe under European laws, it is important to note that freedom of expression is a broadly recognized, and passionately defended, right here in Canada and across the Americas. Any framework that has such significant implications for the freedom of expression must be accompanied by transparency, accountability, and recourse mechanisms. And any discussion of the possible application of a right to be forgotten in Canada should recognize and address the complex dialogue around this issue that continues to exist today in Europe.
Thank you for this time, and I look forward to your questions.
I would like to thank everybody for appearing here today. I'll start off with Colin from Google.
I think it was Mr. Long who asked you the question about children and so on, and your answer was quite good, but I want to elaborate on that. It's up to the family unit and so on, but in today's age, I look at latchkey kids and at single-parent families. I look at how busy families are and at basic skills. We're not teaching basic skills like riding bikes, swimming, and things like these, and then I hear that come out.
I know Google is a gold standard and things like that, but should there not be some investment in technology, or are you guys investing in something? I know we control alcohol and things like that, which we don't give to our children, but then all of a sudden we give them a box that can.... I don't want to say it can cause more damage, because alcohol and drugs can, but you can cause damage in seconds, whether it's by predators or whether it's by information getting out.
I just wonder, technology-wise, when we've advanced so far.... You go to some websites and they say, “click here if you're 18.” You just put in the thing and that's good enough. I don't think that's anywhere near a gold standard, and I really feel that if that's the standard we're leaving to parents, then professionals, people like you, should have some technology or some investment in that.
I'll leave that with any of you for an answer.
Going back to Jason, my background is and was sometimes retail. I appreciate being here representing large businesses, as well as small business. When you look at large businesses and whether it's a Winners or TD bank, whether it's a business that is incorporated and has a board, and whether that's traded or not, we look at diversity of boards and whether we can be populated with more women.
But I don't really hear a lot of talk about diversification with IT people. When you set up governance boards—you know, we want a banker, we want a lawyer, we want an accountant, we want a former business owner—there doesn't seem to be that stress around the corporate board that makes decisions. Unfortunately, in business some of it is driven by profit so you say, reputation, reputation. If you look at the case of Winners, I think it's a landmark case: they stored credit card numbers on the same server. I don't know if they were fined in the end, but what they did for their customers was to say, “We'll take any returns back without a receipt”.
When I go back to fines and I look at whether my credit card was breached or my information..., I have to change my credit card for safety. I have to take some time, and that time I consider valuable. I could be doing other things.
Individual fines, things like that.... There are a lot of good corporations that keep having breaches. If you have a good brand, then your reputation comes back better. You look at the case of Maple Leaf; it's a whole different case, but again they got out of that.
When I say the retail side, I've built corporate boards, and they were driven on profit, but there's a new age of reputation and branding. You say you teach best practices. How do we integrate more IT people who are making these corporate decisions? Would it be safe to say that you see boards moving that way in your organization, or is this something that is always going to be more lawyers and accountants?