:
Mr. Chair and honourable members, thank you very much for the opportunity to speak to you today.
With me today are Caitlin Lemiski and Helen Morrison, senior policy analysts with my office.
I first appeared before this committee in my previous role as assistant privacy commissioner of Canada. In February of this year, I also appeared before you in my capacity as registrar of lobbyists for British Columbia.
As assistant privacy commissioner of Canada, I led the first investigation by a data protection authority of a social media platform. As information and privacy commissioner for British Columbia, I conducted the first investigation in Canada of the use of a social media site by a political party. Following that investigation, we issued guidelines on social media background checks.
Today I would like to provide you with an overview of British Columbia’s privacy oversight model, followed by a review of some of our recent work related to social media. I will then offer my views on the ways in which Canada’s privacy laws are meeting the challenges posed by social media and how governments could strengthen enforcement of our laws.
In terms of regulating the private sector, the Office of the Information and Privacy Commissioner monitors and enforces B.C.’s Personal Information Protection Act, known as PIPA. PIPA determines how organizations may collect, use, or disclose personal information. We share the regulatory space with the federal privacy commissioner because B.C.’s PIPA has been declared substantially similar to PIPEDA. PIPA has wide application, though, including coverage of non-profits. It also applies to employee personal information.
PIPA provides the commissioner with order-making powers. For example, I can order an organization to stop collecting, using, or disclosing personal information. I can also require an organization to destroy personal data collected in contravention of the law. In my experience, order-making power provides me with the authority necessary to ensure that businesses are meeting their statutory obligations.
The purpose of PIPA is to govern the personal information practices of businesses and organizations in a manner that recognizes both the privacy rights of individuals and the need of organizations to collect and use personal data for reasonable purposes. Recognizing this balanced approach, privacy laws do not and should not prevent organizations from developing and using technologies that benefit our digital economy.
I fully appreciate the innovation and value of social media. It allows human expression to manifest in new and exciting ways, and it facilitates public participation. Social media also allows people to connect with family and friends, to follow the latest news, and to build online communities.
That said, I share the privacy commissioner of Canada’s concerns that social media companies may not be giving Canada’s privacy laws enough attention. All organizations, including social media companies, must follow the rules around knowledge and consent and limiting collection, use, and retention of personal data. These rules are particularly significant given the speed with which information on social networks can move and replicate.
I also acknowledge that the international context in which these companies operate can be a complicating factor. Canada has a very different statutory framework for privacy than the United States, where most of the world's most popular sites are based. However, this does not absolve social media companies from complying with Canada's privacy laws. All organizations doing business within our borders are accountable for their personal information management practices. They must follow the law.
Some of the recent investigative work undertaken by Canadian commissioners demonstrates that Canada is able to address some concerns with social media and privacy. However, it's an uphill battle.
In British Columbia, my office recently investigated the collection of Facebook passwords and profile information by a political party that used this information to vet potential leadership candidates. What we found was that although the political party obtained consent from the leadership candidates, the collection of passwords and profile information contravened the act. Under PIPA, an organization may collect personal information only for the purposes a reasonable person would consider appropriate in the circumstances.
We also found that in viewing the candidates’ social media profiles, the political party collected information about the candidates' friends and the friends of friends, without their knowledge or consent. As a result of our investigation, the party agreed to stop collecting passwords and adopted the guidelines issued by our office on social media background checks.
In another investigation, we examined the Insurance Corporation of British Columbia’s offer to the Vancouver Police Department of the use of its facial recognition database to identify possible suspects from the 2011 Stanley Cup riot. The relationship between social media companies and facial recognition technology is very significant, as many of these companies integrate this technology into their services. For example, last year, Facebook integrated facial recognition into its photo services, allowing for the automatic tagging of persons in uploaded photos. Facebook chose not to roll out this functionality for its Canadian users.
Indeed, ICBC's offer to the Vancouver police heightened our awareness of the power of facial recognition technology and of how attractive it may be for law enforcement. Law enforcement's use of social media is a particular concern, because social media companies possess some of the largest corporate collections of photographs of individuals.
There are important questions about whether individuals actually provide meaningful informed consent for the collection of their biometric information for facial recognition. If social media companies collect this information without proper authority, then any subsequent use of that information by law enforcement may not be authorized. Moreover, tests have called into question the reliability of this technology. For example, at one U.S. airport, a facial recognition pilot project correctly identified volunteers only 61% of the time. Based on this low success rate, the airport abandoned plans to use facial recognition. Yet, those issues remain, because technology will improve, and law enforcement will want to use it. The relationship between law enforcement and social media, particularly in relation to facial recognition software, is an area that would benefit from greater attention and study.
Statutory requirements, regardless of their content, can have little effect unless organizations actually follow them. In my view, the greatest challenge to privacy and social media is a lack of awareness among businesses of their obligation to limit the type of personal information they collect. For example, in British Columbia, many organizations do not understand, and are surprised to learn, that PIPA does not allow them to collect personal information just because it may be publicly available on the web.
In the context of pre-employment screening, an organization’s casual approach to collecting personal information online can lead to unsettling results. For example, although it would normally be inappropriate and illegal for an employer to collect information about a prospective employee’s age, sexual orientation, or the fact that they may or may not have children, an employer may learn these details by accessing a social media profile. Personal information on these sites is prone to inaccuracies. In addition, like a dragnet, organizations may catch far more than they intended when collecting personal information from these websites.
Some say that individuals just have to take responsibility for what they post online. While it is true that we should think before we post, this doesn’t mean that we should refrain from reasonable opportunities to express ourselves. In the end, it's all about context, and Canada’s privacy laws recognize this by limiting collection and use to what is reasonable in the circumstances.
As Canadians’ views about communication and expression evolve, the challenge for commissioners and governments is to help organizations understand these new distinctions. Mothers should not refrain from posting information about their parenting experiences for fear of repercussions from their employers, and friends should be free to make comments about products and services to each other without unreasonable market surveillance and profiling.
These observations are consistent with a 2010 report by the Office of the Privacy Commissioner of Canada, which states that “traditional notions of public and private spaces are changing. Canadians continue to consider privacy to be important, but they also want to engage in the online world.” Sustained public education and engagement will be necessary to promote awareness and compliance with Canada’s privacy laws in the world of social media.
In conclusion, social media companies should use the innovations that make them so popular to uphold the values of privacy that are important to Canadians. Protecting privacy is about more than obtaining an individual's informed consent. It is about what is appropriate in the circumstances.
Although principle-based, technology-neutral laws adapt to new technology, in my view, strong enforcement tools, such as order-making power and mandatory breach reporting, are critical for the federal privacy commissioner to regulate the space.
Thank you very much for the opportunity to appear before you today. I'd be pleased to respond to any questions.
:
Good morning, ladies and gentlemen. My thanks to the chair and members of the committee for inviting me to speak to you today.
I'm not going to speak to you about privacy regulatory matters and existing statutes. The reason for that is you've heard from Commissioner Stoddart on that subject. You've heard from legal scholars like Michael Geist. You've just heard from Commissioner Denham. There's very important work that needs to be done in the regulatory and legislative space.
The reason I'm not talking to you about that today is not because I do not have strong regulation in my own jurisdiction; I have order-making power, and I cannot emphasize enough how important order-making power is to a regulator. I also have, under PHIPA, the Personal Health Information Protection Act, a wonderful ability in terms of mandatory breach notification. We have these tools at our disposal, and they're excellent, but I'm not going to be talking to you about that today.
I'm going to talk to you about the future of privacy. I'm going to take the next 10 minutes to talk to you about something called privacy by design. Before I start that, though, please allow me to introduce my colleagues. I'm joined by Michelle Chibba, my director of policy, and David Goodis, my director of legal services.
Privacy by design is all about ensuring that the user has control of their data. Increasingly, what we are experiencing all around the world is that with the enormous growth of mobile technologies, ubiquitous Wi-Fi, and online social media, and with the growth of information sharing and availability, it is becoming extremely difficult to regulate this information strictly after the fact—meaning you allow the privacy harm to arise, someone complains, we investigate, and then we offer a system of redress. That's very valuable and must continue, but using those tools, we only catch, in my view, the tip of the iceberg in terms of the potential pool of privacy infractions and privacy-invasive activities. Privacy by design is all about being proactive and trying to prevent the privacy harm from arising to begin with.
You'll see that privacy by design was adopted as an international standard two years ago in Jerusalem by the international community of privacy commissioners and data protection authorities. It was unanimously passed as an international standard and has since been reflected in work coming out of both the United States and the EU. In January of this year, the FTC, the Federal Trade Commission in the United States, put out its report on how it sees privacy moving forward in terms of regulatory structures and private sector self-regulation. It recommended three practices, the first of which is following privacy by design.
If you look at the regulation put out by the EU on data protection earlier this year, you'll see the language of privacy by design, and privacy as the default permeates the entire regulation. You may be interested to know that privacy by design has now been translated into 25 languages. I assure you this is no small feat. It is reflected in all of the major languages around the world. I just want to give you an idea of the import of privacy by design and how seriously it's being taken all around the world.
Now I'm going to walk through, very quickly, the seven foundational principles of privacy by design. Let me try to summarize this for you. The essence of privacy by design is to embed privacy into the design of not only information technologies but accountable business practices, policies, and procedures in a proactive way, in an effort to prevent the privacy harm from arising as opposed to reactively offering a system of redress after the fact.
The essence of privacy by design is being embedded as what we call the default setting. By that I mean that when privacy is the default condition, you, as the user, the data subject, can be assured of privacy. You don't have to look for the privacy. It's guaranteed. It's automatic. It's embedded in the system as the default setting. That is key, and that is an integral part of privacy by design.
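The idea of privacy as the default setting can be pictured as a settings object whose protective values require no action from the user. The following is a minimal, hypothetical sketch; the class and field names are illustrative and not drawn from any real platform:

```python
from dataclasses import dataclass

@dataclass
class ProfileSettings:
    """Privacy-protective values are the defaults; any sharing is opt-in."""
    profile_public: bool = False          # profile not publicly visible by default
    searchable: bool = False              # not indexed by search engines by default
    photo_tagging_enabled: bool = False   # facial tagging requires explicit opt-in
    share_with_third_parties: bool = False

# A new user gets full privacy automatically, without asking for it.
settings = ProfileSettings()
assert not settings.profile_public
assert not settings.share_with_third_parties
```

The user never has to hunt for the protective option: doing nothing is the private choice, and only a deliberate act widens exposure.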
The other essential feature is that it talks about operating in a positive-sum, not a zero-sum, environment. Zero-sum means that you can have one or the other of two interests. You can have privacy versus security, privacy versus social media, or privacy versus biometrics. Get rid of the versus.
Positive-sum means privacy and other functionalities. You have to have privacy functioning in an environment in which it can operate in unison with other interests, as it must. The future is all about creativity and innovation. Who knows what's around the corner in terms of the next technology and the next development? We welcome that. We insist upon privacy being part of the package.
You've all heard a great deal about big data. I'm not going to talk to you about that today, because there is no time. Just for your information, here's a little teaser. Tomorrow we're launching a paper we did jointly with IBM called “Privacy by Design in the Age of Big Data”. We're releasing this tomorrow morning at conferences in Washington, D.C., and Toronto. If you look at our website tomorrow, please take a look at our paper on how you can have privacy and big data.
I'm going to talk to you for the remaining four minutes I have about an example of how privacy by design actually works on the ground. I don't want you to think this is simply a theoretical formulation or some academic construct. It's real. It's operating right now on the ground.
Let me tie this to Facebook and other social media. As Commissioner Denham mentioned, Facebook has facial recognition capability, so photographs that are uploaded to Facebook can be tagged with an identity through facial recognition technology. You can imagine what a treasure trove this will be for law enforcement and other interests: the faces of potentially 900 million users being tagged using facial recognition technology and matched, for example, with pictures of faces taken from a crime scene. The police would come knocking on the door of Facebook with a warrant. Of course, Facebook would have to give them the information.
I'm going to tell you about a technology we've introduced here in Ontario that would not allow that to happen, even though it would allow facial recognition technology to happen. It is facial recognition technology using privacy by design biometric encryption.
Let me just tell you very briefly what this is. In Ontario, the OLG, the Ontario Lottery and Gaming Corporation, is the corporation that runs our casinos in this province. We have 27 casinos in Ontario. They're run by the OLG.
They came to me a few years ago and said that they had a problem. They have an addicted gamblers program, a problem gamblers program, called the self-exclusion program. Quite simply, if you are an addicted gambler, and you're going through the equivalent of a 12-step program, such as Gamblers Anonymous, and you go through the entire program, the last thing they'll ask you to do is go to the casino of your choice and ask to be placed on the self-exclusion program. That means that you want to give up gambling, have gone through the whole program, but know that you might fall off the wagon and try to go back into that casino and gamble, and you don't want to do that.
The self-exclusion program is completely opt-in. It's voluntary. You go to the casino of your choice and you say, “Sign me up. I want you to keep me out. If you see me trying to enter your premises, I'd like you to ask me to leave, please.” You fill out the form. They take your picture. You sign it, and this is completely your choice.
The problem was that this program wasn't working very well. In the past, the form you filled out with your picture on it and all that would live in some back office somewhere in a file cabinet.
In the meantime, these addicted gamblers who fell off the wagon would try to sneak back into a casino. They would go to the front of the casino—there are 27 of them across the province—and they would sneak back in. They were very good at it and they would successfully get back in. Unfortunately, many of them would lose their life savings. They would lose their families. They would lose their jobs. It was terrible. Then they would sue the Ontario government—the casino—for not honouring this program and keeping them out. It was a lose-lose.
So when the OLG, the Ontario Lottery and Gaming Corporation, came to us looking for a solution, they said, "Here's what we can do." They had cameras at the front of all casinos—casinos all around the world have cameras at the front for security purposes. They said that if they were to match the faces captured by the cameras at the front with the faces in their backroom files, they could identify these self-excluded gamblers and keep them out.
Here's the problem with that—facial recognition technology can pick up the faces of a lot of people entering the casino, not just the problem gamblers. Plus, this could then be made available to others for secondary uses like law enforcement. I wanted to ensure that wouldn't happen.
So we asked them to use a program called biometric encryption. Very simply, this is a system that captures facial recognition data in a way that cannot be used for any other purpose. When you use biometric encryption, no biometric template—as it is called, a digital representation of the face or the finger—is retained in the database.
Quite simply, all that means is that if law enforcement comes knocking on the door and wants to access your database of biometric templates to see if there's a match to a crime scene, you can't give them the biometric template because it doesn't exist. The only use that can be made of this information is for this particular purpose, the primary purpose which is intended.
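To make the "no template retained" idea concrete, here is a heavily simplified, hypothetical sketch. Real biometric encryption uses error-tolerant cryptographic schemes that cope with the natural variation between biometric readings; this toy version ignores that and simply stores a salted one-way digest bound to a single stated purpose:

```python
import hashlib
import secrets

def enroll(feature_vector, purpose):
    """Store only a salted one-way digest bound to one purpose.
    The raw features (the 'template') are discarded after enrollment."""
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + purpose.encode() + bytes(feature_vector)).hexdigest()
    return {"salt": salt, "digest": digest, "purpose": purpose}

def verify(record, feature_vector, purpose):
    """Matching succeeds only for the exact purpose the record was bound to."""
    if purpose != record["purpose"]:
        return False  # e.g. a law-enforcement query cannot reuse the record
    digest = hashlib.sha256(record["salt"] + purpose.encode()
                            + bytes(feature_vector)).hexdigest()
    return digest == record["digest"]

record = enroll([12, 200, 37, 91], "self-exclusion")
assert verify(record, [12, 200, 37, 91], "self-exclusion")
assert not verify(record, [12, 200, 37, 91], "law-enforcement-match")
```

Because only the purpose-bound digest is stored, there is no reusable template in the database for a secondary use to draw on.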
I can explain to you later, if we have time in questions, how this works. But this has been tested in other jurisdictions. In the Netherlands, a company called priv-ID has done this, and I can give you other examples.
This is a wonderful, privacy-protective biometric solution: it allows the particular biometric problem to be addressed but does not allow the information to be used for any other purpose.
:
Thank you very much for that question.
I think there's one way that we can do it. I'll refer you to a paper that we released this past summer—I'm trying to remember the name of it—“Privacy by Design in Law, Policy and Practice”. The idea for the paper came from Commissioner Pamela Jones Harbour, who is a former commissioner with the Federal Trade Commission. When she was talking to me about privacy by design, she said we could impose it as a requirement, a condition, in our consent decrees, in decisions that the FTC issues upon completion of an investigation, and we could include it as something on a go-forward basis that a company would have to follow proactively from that point on.
Justice La Forest kindly reviewed the paper that I just mentioned, which you can find on our website, and he said that privacy by design is an excellent idea that should be incorporated into administrative means of law addressing privacy on a go-forward basis.
One way we could do it—I know that Bill is looking at changes to PIPEDA—would be to have some way of saying that on a go-forward basis, at the conclusion of an investigation, a company would be required to follow privacy by design in any particular area that was problematic.
The other thing about privacy by design is that it's not a punishment. We always say privacy is good for business. There should be a privacy payoff for businesses that follow good privacy practices. Consumer confidence and trust are being eroded very quickly in this day and age, and you can strengthen them on the part of your customers. Privacy by design is not a stick; it is a carrot, an inducement to introduce privacy protections in a way that will ultimately save the company resources, because it will be able to avoid privacy infractions, privacy investigations, and, potentially, the class-action lawsuits that are now emerging.
There's so much happening on the privacy front that when we talk to companies about privacy by design we do it because they invite us to tell them how to do it. They want to do it, not only for the right reasons but for business-related benefits as well.
I think there is a way forward by embedding it into new regulatory structures.
:
Mr. Angus, I love that you refer to “unintended consequences”. I didn't use the slide here, but I have a big slide when I talk to, especially, tech companies. My slide just reads: beware of unintended consequences. That is always the fear.
I called last year the year of the engineer, because I talked to engineers at all of the leading companies around the world. I talked to Adobe, to Intel, to HP, and to Google. I've talked to Facebook and others, but I specifically wanted to talk to their engineering and computer science teams to translate, if you will, or operationalize the principles of privacy by design into code.
Of course, we've been talking to lawyers for years, so I'm not worried about lawyers and policy writers understanding how to translate the policy requirements into policies, codes of practice, and so on, but the engineers and the computer scientists were being left out. When I talked to them, I said, this is very simple: I can't write the code for you, but I can translate this into what "primary purpose" means and how you ensure that data minimization principles are reflected in your operational procedures.
Privacy as the default is such a critical feature. We try to explain this not only to engineers—and they get it, of course—but to laypeople. I always have what I call my neighbours' test. I have very clever, smart neighbours, but they're not in the privacy field. So I try to explain it to my neighbours, and if they grasp the concept, which they will, then we're off and running. It has to be accessible to the public and to engineers alike, and the notion of privacy as the default resonates. As one of my neighbours said, "Does this mean I get it for free? I don't have to ask for it? I don't have to scour the privacy policy to find it? I just get privacy for free?" I said, yes, it would be embedded in the system by default as an automatic feature. She said, "Sign me up. That's what I want."
That's the kind of discussion we have. As I said, we talked to all of the major companies. For Google+, when they were doing their beta test for Google+, their new online social media, we participated in the beta. They're very interested in the privacy issues. They've come up with this concept of circles and restricting privacy and sharing within a given circle. So you could have one for your workplace colleagues, one for your neighbours, one for your family, etc.
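The circles concept amounts to scoping each post to a named audience. A minimal sketch of that data structure might look like this; the class and method names here are hypothetical, not Google's API:

```python
from collections import defaultdict

class CircleSharing:
    """Posts are visible only to members of the circle they were shared with."""

    def __init__(self):
        self.circles = defaultdict(set)  # circle name -> member ids
        self.posts = []                  # (author, circle, text) tuples

    def add_member(self, circle, member):
        self.circles[circle].add(member)

    def post(self, author, circle, text):
        # The author chooses the audience at posting time, not after the fact.
        self.posts.append((author, circle, text))

    def feed_for(self, viewer):
        # A viewer sees only posts shared with a circle they belong to.
        return [text for author, circle, text in self.posts
                if viewer in self.circles[circle]]

s = CircleSharing()
s.add_member("family", "mom")
s.add_member("colleagues", "boss")
s.post("me", "family", "holiday photos")
assert s.feed_for("mom") == ["holiday photos"]
assert s.feed_for("boss") == []
```

The data-minimizing design choice is that audience scoping happens at write time, so nothing is broadcast by default and later narrowed.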
We've talked to all of the major companies about privacy by design, and I would hazard a guess that if you went to any of them they would know about it.
:
Thank you, Mr. Chair, and thank you to our witnesses today.
It's great to have two provincial commissioners here. There's certainly a lot of experience in your presentations.
I'm going to come at this from the perspective of one of those computer programmers who dealt and struggled with some of these issues in a life prior to becoming a member of Parliament.
Mr. Michael Geist was before our committee here a few days ago—and I think Mr. Angus was going down this road—but I'm deeply concerned, not only as a personal consumer and person who's making his way through the Internet these days but because I have young children at home whose situation I'm worried about. Of course, as a parliamentarian, I'm always worried about issues pertaining to the privacy of my constituents and so on. So this is quite a timely and interesting study that we're doing right now.
I agree wholeheartedly with the premise that defaults.... Mr. Geist's comments were that the “devil is in the defaults”. It would seem to me that some of the default settings that we have, whether they're at the operating system level, whether they're at the browser level or the interface level, whether they're at the data level, are somewhat concerning. I would just like to give both witnesses an opportunity to talk about that a little bit more.
I certainly do agree that item 2 in your privacy by design presentation, "Privacy as the Default Setting", is something that most Canadians, if they were given an opportunity to have it reasonably explained to them, would enjoy.
I also believe fully that I should be explicitly asked, as a consumer, whether any of my personal information may be collected. It should not be buried in some 15-page legal document where, unwittingly, with the press of one little button, I must accept the entirety of the document. I have no ability to parse out and accept the parts I agree with and reject the parts I disagree with; I must accept the entire document when signing on to an account, or whatever the case might be, in order to partake in whatever transaction I'm doing.
I just wonder if there are some practices out there or some recommendations you have that would help consumers navigate through this ever-increasingly complicated web.
:
In terms of ideas for limiting use by social media companies, obviously, people are voluntarily putting information online on their profiles. The company should only use that data for the purposes that are clearly stated, and that's the whole principle of transparency.
If the company then wants to use the information for a new purpose, it has to go back to the users, explain the new purpose, and get their consent. A really good example: you're a Facebook user, and all of a sudden Facebook rolls out its facial recognition software. That's a new use of the data. It's a more precise use of the data. It can lead to all kinds of function creep. I think in that case the company needs to go back, explain the new uses—the shiny new toys that are available to users—and get their consent.
That's really important. If they have new partners, if there are more third-party applications that are using the data, again, let users know and make it easy for them to say no or to control the use of their data.
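The consent flow described above can be sketched as a simple registry: every use of personal data is checked against the purposes a user has explicitly agreed to, so a new purpose is blocked until fresh consent is obtained. The names here are illustrative:

```python
class ConsentRegistry:
    """Data may be used only for purposes the user has explicitly consented to;
    a new purpose (or a new third party) requires going back for fresh consent."""

    def __init__(self):
        self.consents = {}  # user id -> set of consented purposes

    def grant(self, user, purpose):
        self.consents.setdefault(user, set()).add(purpose)

    def revoke(self, user, purpose):
        # Making it easy to say no, or to withdraw consent, is part of the design.
        self.consents.get(user, set()).discard(purpose)

    def may_use(self, user, purpose):
        return purpose in self.consents.get(user, set())

registry = ConsentRegistry()
registry.grant("alice", "photo-sharing")
assert registry.may_use("alice", "photo-sharing")
# Rolling out facial recognition is a new purpose: blocked until consent is sought.
assert not registry.may_use("alice", "facial-recognition")
```

The point of the check is that consent is purpose-specific, not a one-time blanket grant covering whatever the company invents next.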
The second question you asked me is about the number of investigations we have done of social media sites versus investigations that involve social media. I gave you the example of our investigating, really, the employment situation and how employers or third parties are making use of social media. I wanted to draw that investigation to the attention of this committee because I think it's really important to look at how social media is used by litigants, by law enforcement, by employers, by post-secondary institutions, because I think that's part of your study as well.
We've done several of those investigations, and I will share our social media background check guidelines with you. I'll send them to the clerk of the committee for your review.
:
I've met with people and spoken at both Facebook's headquarters and Google's, and there is considerable interest in privacy by design. With regard to Facebook, if I had to guess their position, I would say they view privacy by design as incompatible with their business model. That model is essentially to use as much information for as many purposes as you can, and then, if you go too far—as they did with the news feed—to pull it back in line with people's privacy preferences.
I have the greatest respect for Mark Zuckerberg. I've spoken to him. He totally gets that privacy is all about control, and I would suggest that he certainly values his privacy and controls it. But in terms of the business model, I think they would not be interested in it.
Google, on the other hand, is interested. If you look at Google+, which is their online social media, they have tried to incorporate privacy by design features. They invited me to speak to their head engineers, who were designing it, about privacy by design and how you incorporate this in terms of data minimization and making privacy the default. That was the concept behind “circles” and trying to minimize data collections.
I'm not going to oversell this. I think businesses will come to this gradually, if the business model is predicated on reaching as many people as possible.
Having said that, there is a way you can have online social media and privacy, and that's the Google+ experience with circles. I know many people who are on it. I don't know what the numbers are right now—I think they've exceeded 50 to 60 million, but we'd have to confirm that. It gives you the ability to restrict the information you share to the narrow audience you want to share with or speak to.
If I may, sir, I want to add one comment relating to your first question. With regard to the notion of minimizing data and collections and how you restrict it to the primary purpose, one example we did in my jurisdiction involved the creation of an enhanced driver's licence that could be used across the border instead of a passport.
They, of course, have to collect information. We put directly into the regulation what information, what personal identifiers, could be collected: the name, one's address. We said they should identify the fields specifically, as opposed to leaving it open-ended. We were able to do that. One way of trying to restrict the collection of personal information is by identifying specifically, very narrowly, that which you are permitting.
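Enumerating permitted identifiers in the regulation is, in effect, an allow-list. Here is a minimal sketch of how a system might enforce such a list; the field names are hypothetical:

```python
# Hypothetical allow-list of the only fields the regulation permits collecting.
ALLOWED_FIELDS = {"name", "address", "date_of_birth"}

def validate_collection(submitted: dict) -> dict:
    """Reject any submission containing fields outside the permitted list."""
    extra = set(submitted) - ALLOWED_FIELDS
    if extra:
        raise ValueError(f"collection not permitted for fields: {sorted(extra)}")
    return submitted

validate_collection({"name": "Jane Doe", "address": "123 Main St"})  # accepted
try:
    validate_collection({"name": "Jane Doe", "religion": "..."})
except ValueError:
    pass  # over-collection is rejected at the point of intake
```

Naming the fields closes the open-ended "and anything else useful" door: anything not on the list simply cannot be collected.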
I concur with my colleague, Commissioner Denham.
We have order-making power in Ontario, and I'm telling you it would not be the same without it. But let me be clear—it is a last resort. The order-making power, which gives you the teeth, is the stick. We rely on it infrequently.
I'll give you the example of PHIPA, the Personal Health Information Protection Act, which applies to both public and private sector health organizations in Ontario, of which there are many. That was introduced in 2004. I've only issued 11 orders under that—so in something like eight years, 11 orders—because there is enormous incentive on the part of organizations to work collaboratively with us early on, and we always try to do that. We work very collaboratively. We strive to reach informal resolutions to investigations and problems, and we've had hundreds, thousands of them. It works very well. The carrot is a much better inducement when they know the stick is there.
On occasion we've had to issue an order and we do it not gladly but certainly willingly, if necessary. Often the order serves the purpose of an educative tool. It sends out a very clear message to everyone of what the standard of practice is now, and what our expectations are in this area. So order-making power is absolutely essential.
We have mandatory breach notification under PHIPA. That is also very important because it informs the people affected by the breach. It gives them the openness and transparency of knowing what is taking place.
We've also had, through the Regulatory Modernization Act in this province, a policy-led hook, if you will, in terms of looking closely at how you embed privacy-type solutions into regulatory activities. So it's very important to have that cooperation.
I should also tell you, though, that my staff and all of us are out there regularly meeting with organizations. So not only is public education very important, but you have to meet with the organizations that fall under your jurisdiction so that they gain a better understanding of what your expectations are and how they embed privacy by design into their practices, into their technologies, and into their day-to-day activities.
They need to learn that from us, and we do this regularly. I think that allows us to minimize the number of orders we issue, but everyone knows the order-making power is there. It's a very powerful tool.
:
There are several things we can do. Obviously, raising awareness and education is our job, and we're getting the word out there strongly. You should know that internationally, word about privacy by design is growing. As I mentioned, in 2010 it was made an international standard. If you go to our website, www.privacybydesign.ca, there's a lot of information that we share regularly.
Most organizations do PIAs, privacy impact assessments, when a new technology or business practice or process is introduced. You can require, or certainly request, that privacy by design be reflected in the PIA process. If I can again encourage you to go to our website, last year we released a PbD PIA, PbD being privacy by design, a privacy impact assessment specifically developed to reflect the requirements of privacy by design. It's one of the essential tools in any practice. When you have a new technology or business practice, you do a PIA to identify the privacy risks and address them before the program or business practice becomes operational.
By requiring the seven foundational principles of privacy by design to be reflected in the PIA, and thereby reflected in the new program or business practice, you can at least be assured that the issues are being addressed. The kind of data minimization you were speaking to earlier, the prevention of unintended access to data used for other purposes, the harms that arise when data are used in ways that were never intended, all the problems we are so concerned about, can be addressed right from the beginning. That's the beauty of privacy by design. It tries to identify the privacy harms right at the initial stages, when the technology is emerging or the program is just being developed.
If you embed privacy protective features at the nascent stage, right at the beginning, it's much easier to minimize the harm and address it before the program is operational or the technology is fully operational. It makes a big difference. I would point you to the PIA process as an ideal place. Also, we have it on a CD. I can send it to anyone who's interested.
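The PIA gating idea described above can be sketched in a few lines of code. This is purely an illustrative sketch, not the office's actual PbD PIA tool: the seven principles are from the published privacy by design framework, but the `pia_gaps` function and its `assessment` structure are assumptions introduced here to show how a review could be gated on all seven principles being addressed before a program goes operational.

```python
# Illustrative sketch only: the principle list is from the published
# privacy by design framework; the gating function is hypothetical.
PBD_PRINCIPLES = [
    "Proactive not reactive; preventative not remedial",
    "Privacy as the default setting",
    "Privacy embedded into design",
    "Full functionality: positive-sum, not zero-sum",
    "End-to-end security: full lifecycle protection",
    "Visibility and transparency",
    "Respect for user privacy: keep it user-centric",
]


def pia_gaps(assessment):
    """Return the principles a draft PIA has not yet addressed.

    `assessment` maps a principle to True once the PIA documents how
    the new program or technology satisfies it.
    """
    return [p for p in PBD_PRINCIPLES if not assessment.get(p, False)]
```

Under this sketch, a new program or business practice would proceed only once `pia_gaps` returns an empty list, so every privacy risk area is considered before launch rather than after.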
How do you do privacy by design? I was asked in 2010, when privacy by design was made an international standard, if my office could offer some assistance to other regulators around the world on how to do this privacy by design thing. How do you actually operationalize it?
We developed a curriculum that I think is very accessible. It walks you through the various steps of the principles and how you would do it. I make that available to anyone who's interested. We've shared it with many universities and Intel and other companies. All the tech companies have it. It basically walks you through how you do privacy by design.
Thank you.
:
Okay, I'd love to do that. It's really quite simple, though it sounds very complex.
Imagine your pictures being taken or your fingerprints being taken. The normal process involved in facial recognition programs or biometric programs is, as I said, to capture what is called a biometric template, which is a digital representation of the essential features of your face or your finger. That template is what is captured in the database and that is what is used for purposes of comparison.
The problem, as I said, arises if the police come knocking on the door with a court order. You have to give them access to the database, and they will be able to match that template of your face, the digital representation of your face, with a face they might have photographed at a crime scene. They get a match, and boom, your information is used for another purpose that was never intended.
Au contraire, with biometric encryption, what does it do that's different? It uses the unique features of your face or your finger to encrypt or code some other data: a PIN, an alphanumeric string, something meaningless, a nonsense number; it doesn't matter. And that biometrically encrypted data, this other data, is what's kept in the database.
So there are two things. If the police come knocking on the door, you have to open the database to them, but what do they get? Nothing, because without your actual face present, one can't decrypt or decode what is in the database. They can't get access to it even though you've opened the doors.
Okay. What if there's a brute force attack? This happens. There are great hackers out there. What if they break into the database? What do they get? Nothing of value. They don't get your face or your finger. They get this other meaningless nonsense number that was encrypted using the unique features of your face or finger, so they get garbage. The beauty of it is that, for the purpose for which it was intended, it works perfectly. And if you go to our website, you'll see that the University of Toronto worked with the OLG, the Ontario Lottery and Gaming Corporation, to perfect the system. They reached levels not only of privacy but of security and accuracy that were unprecedented for biometrics.
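The contrast the testimony draws, a stored database entry that reveals nothing without the live biometric, can be illustrated with a toy fuzzy-commitment construction. This is a minimal sketch of the general idea only, not the OLG/University of Toronto system: the repetition code, the bit-vector "biometric", and the `enroll`/`verify` names are all assumptions introduced here, and a real deployment would use a proper error-correcting code over real feature extraction.

```python
import hashlib
import secrets

REP = 5  # toy error-correcting code: repeat each secret bit 5 times


def ecc_encode(key_bits):
    # Repetition code: each secret bit becomes REP identical codeword bits.
    return [b for b in key_bits for _ in range(REP)]


def ecc_decode(codeword_bits):
    # Majority vote within each group of REP bits corrects small errors,
    # which is what tolerates natural variation between biometric readings.
    return [int(sum(codeword_bits[i:i + REP]) > REP // 2)
            for i in range(0, len(codeword_bits), REP)]


def enroll(biometric_bits):
    """Bind a random secret to the biometric; store only helper data and a hash."""
    key = [secrets.randbelow(2) for _ in range(len(biometric_bits) // REP)]
    helper = [c ^ b for c, b in zip(ecc_encode(key), biometric_bits)]
    tag = hashlib.sha256(bytes(key)).hexdigest()
    return helper, tag  # neither value reveals the face/finger or the secret


def verify(helper, tag, biometric_bits):
    """Only a live biometric close to the enrolled one recovers the secret."""
    key = ecc_decode([h ^ b for h, b in zip(helper, biometric_bits)])
    return hashlib.sha256(bytes(key)).hexdigest() == tag
```

The point of the sketch is that the database holds only `helper` and `tag`: a police query or a hacker's dump yields nothing useful, because XORing the helper data without the enrolled biometric produces noise, while a fresh reading from the right person, even a slightly noisy one, still verifies.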
The large company Morpho, out of Paris, France, is the leading biometric company in the world; it just bought Sagem, which used to be the leading company, and is now Morpho. It is looking at biometric encryption and will be working this fall on a prototype, a pilot, exploring how biometric encryption can be incorporated into a hardware device. So people are looking at this around the world. It's in its infancy.
But the beauty of the OLG example is that I can guarantee to all the regular patrons of the casinos in Ontario that they don't have to worry about their facial images being captured when they go out for an evening's recreation. I can also assure the addicted gamblers who want to be kept out that there is a much greater likelihood their wishes will be respected through this program.
The success rate, if you will (it's called the hit rate), of the program for self-excluded people has tripled and quadrupled. Before, we had very little success identifying these poor individuals. Now the success rate is through the roof, and something like 15,000 addicted gamblers in the province have signed up for this program. We can help them do what they want us to do and keep them out, while not impacting the privacy of anyone else. And we've also told these individuals that, while they will be kept out through this program, their information will not be used for any other purpose whatsoever: no secondary use, full stop.
:
Thank you, Commissioner Denham. Like you, we have order-making power, and we can order the cessation or the destruction of collections of personal information that have been collected contrary to the act.
I did that a few years ago with the Ottawa police, believe it or not. They had collected information that I ordered destroyed. I had the pleasure of meeting Vern White, who was then the police chief in Ottawa and is now Senator White.
So we do have, in terms of what comes under our jurisdiction, the ability to order the destruction of these collections. Then we can ask for third-party audits to ensure that the data has been destroyed, although I had no concerns with the Ottawa police doing so.
As Commissioner Denham mentioned, the right to be forgotten is extremely important. It features prominently in the new EU data protection regulation that has been drafted.
Also, it is becoming more and more important because of the limited control you have over your information in online social media and other forums. Is it really being destroyed? Is it merely being deactivated? How long...? What assurances do you have?
I'm going to suggest to people that you have very few, or virtually no, assurances in terms of private sector information, which certainly goes beyond my jurisdiction and may go beyond others'. Even our ability to audit is very difficult to exercise. It takes a lot of effort. What the FTC and other organizations are doing now is building in the requirement for independent third-party audits, so that if the destruction of records has been ordered or required, it can be confirmed after the fact.
But I just want to point you to one thing, and I'll say this as my final comment. Over time, I think it's going to become increasingly difficult if companies and governments don't follow privacy by design and proactively offer privacy as the default. You're not going to be assured of privacy or of the destruction of your records. It's going to be a free-for-all.
We've been working with the University of Toronto to develop a new concept called SmartData. If you go to our website, you'll see that we just had an international symposium on SmartData, which involves developing virtual tools that work for the data subject: your virtual agent online, protecting your data and acting on your behalf in a contextual way.
I'm not going to take any more of the committee's time, but I just wanted to point you to SmartData. You can find it on our website, or we can send you some information. Again, we're calling it the embodiment of privacy by design: giving consumers, the users, tools that enable them to protect their own data.
Thank you.