
ETHI Committee Meeting








House of Commons Emblem

Standing Committee on Access to Information, Privacy and Ethics


NUMBER 057 ● 1st SESSION ● 41st PARLIAMENT

EVIDENCE

Tuesday, November 27, 2012

[Recorded by Electronic Apparatus]

  (1530)  

[Translation]

    I call the meeting to order.
    Pursuant to the orders of the day, we are continuing our study on privacy and social media, which we started several months ago.
    Today, we are privileged to have a witness representing Facebook. As usual, there will be a 10-minute presentation, then a period when the witness can be asked questions.
    Without further delay, I will give the floor to Robert Sherman.
    The floor is yours, Mr. Sherman.
    Mr. Chair, committee members, my name is Rob Sherman, and I am Facebook's manager of privacy and public policy. Thank you for giving me the opportunity to speak to you about Facebook's commitment with respect to protecting personal information.
    I will address the committee today in English.

[English]

     At Facebook our mission is to make the world more open and connected. We're committed to providing an innovative, industry-leading service, helping people to connect and share with each other online. We're equally committed to providing privacy tools that enable people to control the information they share and the connections they make through our platform. The trust of our users is fundamentally important to us at Facebook.
    Thanks to the transformative effects of social technology, people can enjoy constant connectivity, personalized content, and interactive social experiences across a range of devices. On Facebook, for example, people have a highly individualized experience that's based on information that their own unique circle of friends has shared. Canada, with 18 million monthly active users, is among the most engaged Facebook populations in the world. Four of five Internet users in Canada are on Facebook.
    The growth of this interactive social web has brought tremendous social and economic benefits to society, and we're heartened to see the growing use of Facebook in Canada. Members of Parliament use Facebook to reach their constituents, and small businesses in Canada increasingly are relying on Facebook and other social media to generate exposure for their companies, increase sales, and obtain new business partnerships.
    As an example, Shopify, an Ottawa-based e-commerce software company, has seen a 31% increase in referral traffic coming from Facebook since June of this year. The online retailer eLUXE increased newsletter subscriptions by 37%, again through Facebook.
    Facebook provides a platform for thousands of active developers in Canada to build applications, products, and games. Through our preferred marketing developer program, Facebook offers support and resources to Canadian companies that are building these products and these companies in turn are able to provide highly skilled jobs in technology and generate millions of dollars in revenue in Canada.
    While economic development and social engagement are critical benefits of the Facebook service, we believe trust is the foundation of the social web. People will only feel comfortable sharing online if they have control over who will see their information and if they have confidence in the people who will receive it. Facebook builds trust first and foremost through the products and services that we provide.
    We realize that people have different approaches to sharing information on our service. For example, some people want to share everything with everyone. Some people want to share very little with a small audience, and most people are somewhere in between.
    So a one-size-fits-all approach to privacy would never satisfy every person's expectations. Instead, we strive to create tools and controls that help people understand how sharing works on Facebook, so they can choose how broadly or narrowly they wish to share their information. A key focus of our business is our commitment to basic principles of transparency and control.
    I want to highlight our work in these areas and provide an overview of the steps we've taken to demonstrate our accountability. With respect to transparency, our goal is to be transparent and open with our users about how their data may be used. We recognize that long and complex privacy policies can make it difficult for people to understand how their information is being used, but we also believe it's important to provide people with specific and concrete information about our data management practices. For these reasons, we designed our data use policy to be both easy to understand and comprehensive. The policy, which is accessible from almost every page on our website, describes in plain language our data use practices and includes a straightforward guide to privacy on Facebook.
    We use a layered approach, summarizing our practices on the front page, and then allowing people to click through the policy for more details. Content is organized by topic, which lets people find exactly what they're looking for quickly and easily. People who want to read the entire policy on one page can do that as well. If they have questions about specific issues, they can find an answer by conducting a search within our help centre.
    We wanted to provide the information people want to know in the way they want to receive it, so we designed Facebook's data use policy based on feedback from users, regulators, and other stakeholders. When we announce proposed changes to our data use policy or our statement of rights and responsibilities, we give people the ability to comment on changes before they take effect. Our choice to give users a significant role in how Facebook operates, and to seek their input before we make these policy changes, reflects a leading best practice in our industry.
    With regard to control, in addition to our commitment to transparency, we continue to find new and innovative ways to build individual control into the user experience. Over the past year and a half, for example, we've launched more than 20 new privacy-enhancing tools that empower people to control their information. Whenever people post on Facebook, our inline audience selector enables them to determine the audience with whom the post will be shared. Importantly, these controls are available at the exact moment and in the exact context in which the person is making a decision about his or her data. In other words, if I post a picture of my family on Facebook, I can decide then and there who will see that photo.
    Facebook's activity log allows people to see all their posts in one place. They can review privacy decisions they've made, change the audience for their posts, and delete posts altogether. We also inform people when someone else has identified them in a post. This is a process we call “tagging”.

  (1535)  

    Tagging is an innovative privacy-enhancing technology, giving people control over information that's shared about them on Facebook. If people don't like a post they're tagged in, they can take action. For example, they can remove the tag, report it to Facebook, or send a message directly to the person who posted it. We're proud to give users this control, because we value their privacy and their trust.
    In November we launched more prominent and detailed privacy information, presented to new users during the sign-up process on Facebook.
    Another tool we offer is “download your information”, a place where people can download an archive of information associated with their Facebook accounts, including photos, posts, and messages. This tool makes it easy for people to take their information with them if they want to use it elsewhere.
    Finally, we offer an application dashboard so people can review the specific kinds of information each application can access on Facebook and make choices about what access apps should have to their Facebook accounts going forward.
    Transparency and control don't effectively promote trust unless we're accountable to our users and to our regulators for honouring the commitments that we make. To that end, we implemented a comprehensive privacy program that incorporates privacy by design. This program involves a broad cross-functional privacy review of products at all stages of development and before they're released.
    The Irish Data Protection Commissioner recently completed a comprehensive audit of Facebook's privacy practices and indicated that he “found a positive approach and commitment on the part of Facebook to respecting the privacy rights of its users”. The audit report described Facebook practices in detail, and summarized additional ways we're working to improve privacy protections that we offer.
    Following guidance from the Federal Trade Commission, we've established a biennial independent audit to ensure we're living up to our privacy commitments.
    Finally, a word about family safety. As we work each day to earn the trust of our users, we recognize that we must focus our efforts on the interests of the entire Facebook community, including the teens who use our service. To properly educate and engage young people on how to safely use the Internet, communication between parents, teachers, and teens is vital. To facilitate this conversation, we provide resources on security awareness and online safety. Our family safety centre, for example, contains specific content for parents, teens, educators, and law enforcement. A Facebook safety page provides dynamic safety content that people can import directly into their newsfeeds. We've also established a safety advisory board, an expert organization with many internationally recognized safety experts who provide us with advice on products and policy.
    In Canada, Facebook has taken the initiative to address local safety issues. During bullying awareness week, for example, we partnered with Canadian non-profits to launch the “Be Bold: Stop Bullying” campaign. This campaign centres around an interactive social pledge app and a resource centre that contains educational materials on bullying prevention.
    Facebook is always striving to develop better tools to keep and build the trust of those who use our services. We look forward to continuing our dialogue with the special committee, the privacy commissioner, Parliament, and other stakeholders about how government and industry can work together to best promote economic development in Canada while protecting the privacy of Canadians.
    Thank you again for the opportunity to testify today.

  (1540)  

[Translation]

    Thank you for your presentation and for making yourself available to us.
    Ms. Borg…
    Mr. Angus will start.
    Mr. Angus, you have seven minutes.
    Thank you, Mr. Chair.

[English]

    Thank you very much for coming. I'm very pleased to have Facebook's participation in this study. Clearly Facebook has become the centre for social media around the world, certainly here in Canada. I can say as a heavy Facebook user—my wife would probably say addict—it has transformed how I do business in a riding bigger than Great Britain. It's allowed us to communicate with people. It's allowed us to hear stuff that's happening on the ground. It's allowed us to build communities. So we're fascinated by the work Facebook does.
    I'm interested in the word you used from the beginning, “trust”. When I talk with students, they're all heavy Facebook users, but the sense of trust is something they're concerned about. The issue of privacy they see as being very important.
    I'm seeing now, all across Facebook, people posting their own personal copyright statements, because they're afraid that, the way they read Facebook's guidelines, Facebook owns their property, not them. Is it necessary for people to post their own copyright provisions?
    Sir, first I should say thank you for your interest in Facebook and for your use of our community. We appreciate it when people are engaged with our service and use it to communicate.
    As you note, we have recently seen a number of people posting comments on their Facebook timelines that say essentially, “I don't want Facebook to own my content”. The concern is that if you don't post this statement on your timeline, then Facebook will own your content.
    That's not true. We say in our data use policy that the users who post content own it. They give us a limited right to use it in connection with Facebook, while it's on Facebook, but they own the content and they have privacy settings that control how it's used.
    We've tried to engage, over the past day or so when this has come up, in communications with our users in a number of different ways, to help them understand that this is the case and isn't something they have to worry about.
    For example, we have a Facebook and privacy page where we've posted some information about this and a link to our policy so that people can read the statement for themselves. We have a fact-check section that we've launched on our website, so where there are rumours that people want to know more about they can go to that place and find it. Obviously, we've talked to the media as well. We hope that people will feel comfortable sharing on Facebook.
    Thank you.
    Facebook has a number of services, but on the page there's the wall, where people post comments and post their pictures. Then there's Facebook messages, where people make comments to each other or pass information.
    Many people actually don't use the Net. They just go onto Facebook, using Facebook messages as an e-mail service. How secure is that data? Or is that just information like all the other information?
    We take all of the data that's stored on Facebook incredibly seriously. We have a dedicated team of professionals working to promote and protect the security of all data that our users store.
    In that regard, we treat Facebook messages the same way we treat other data. We protect it; it's stored in dedicated data centres that have access controls, procedural controls, to prevent people from getting access to them.
     While nothing is entirely secure on the Internet, we hope that people feel very confident in communicating on the platform.
    Well, here's the thing. My 14-year-old daughter told me, when I was home last time—I can't even remember when I get home any more—that her private messages appeared on her wall.
    I'd find that pretty shocking. It seemed to me quite a data breach that what was passed between friends....
    I asked around, and I had other people confirm the same thing. This was from people who were much older, people who actually said they had to go in and delete their messages.
    How is it possible that this kind of data breach occurred, that private messages were posted in public for anyone to see?
    The issue you raise is an incredibly important one. When we first heard about it, we took it very seriously. We had a dedicated team of staff look into the issue.
    What we concluded was that no private messages were being posted publicly—
    Well, I just told you that my 14-year-old daughter went on and had to take them off, and other adults I talked to told me the same thing happened to them. So it did happen.
    Maybe it would be worth talking after the hearing. We could get some more information on those specific instances. If that did happen, we certainly don't want it to happen. We want to do what we need to do to stop it.
    With regard to the situations that we investigated—and we investigated all of the situations brought to our attention—we found that these were older public messages, where people had communicated before they were using private messages separately. So these were just where people had communicated back and forth on each other's walls—this was being shown in “timeline”—but they weren't private messages. We were able to confirm in a number of different ways that this was the case.
    But if you think your particular situation is different from that, we should certainly follow up on it.

  (1545)  

    Yes, certainly, because it would seem to me pretty surprising that you did an investigation and didn't find this. I had adults tell me they were called by their friends, who said “You better get on your timeline, because right in your timeline are private messages that you and I sent to each other as private messages. Now they're appearing on a public face.” It seems to me that would be a major data breach.
    I'm glad you tell me that it didn't happen, but when people tell me to my face that it did happen to them, and that they had to go back and find those private messages and remove them, it seems to me that the private message line isn't that secure and there needs to be a discussion about this.
    I agree, and certainly we've spent a lot of time thinking about it. If these things happen, I agree with you, it's a very serious issue, and it's something that we need to take steps to look at.
    Just technologically, the way Facebook operates is that the private messages and the timeline are on different systems, so it actually would take a fair amount of work for us to integrate them. That is one other reason why we have some confidence that this hasn't happened. But again, we want to be exhaustive in making sure this hasn't happened. We can follow up and make sure that we've looked into this upsetting situation.
    I look forward to following that up with you.
    I know that police and authorities will sometimes go to Google, Twitter, or Facebook, because there's all kinds of stuff happening on there, to ask for information to be handed over.
     Do you do transparency reports, like Google or Twitter, to say how many requests there are in a given...? How do you deal with law enforcement?
    As you point out, there are law enforcement agencies that do seek to get access to information on Facebook. We try to be, one, incredibly protective of our users in a way that balances the needs for law enforcement to conduct legitimate investigations against users' privacy; and two, transparent with our users with regard to the policies we use when responding to those requests.
    Do you have transparency reports? In the case of cyber-bullying or cyber-threats, the police are going to have to go to you. You're going to have to be able to get that information. It's the only way to deal with it.
    Google has told us that they have transparency reports. Do you have those kinds of reports?
    We don't publish transparency reports in the same way that Google does. We publish our law enforcement guidelines on the web. Anybody, whether a law enforcement officer, a citizen, or a user of Facebook, can take a look at what standards we use to decide how to respond to law enforcement requests, in what circumstances we'll disclose information, and in what circumstances we won't. We hope that people will feel comfortable knowing the process we use to make those judgments.

[Translation]

    Thank you, Mr. Angus. Your time is up.
    Mr. Dreeshen now has the floor for seven minutes.

[English]

    Thank you very much, Mr. Chair.
    Thank you very much, Mr. Sherman, for being here today.
    As the largest social media site in the world, certainly Facebook's willingness to come here today and take part in our study is something that's commendable and certainly of interest to all of us who use Facebook.
    I'd like to start by commenting on your statement of rights and responsibilities and your data use policy. My comment is that Facebook, as I say, deserves some recognition here, because these two documents are actually written in plain language and they don't read the way most terms-of-use documents do. We've had an opportunity to go through a lot of different types of businesses to see what they offer for the customer to look at, so I think that's significant. If people haven't taken the time to read through them, I think they should, and they shouldn't be intimidated or expect some legal document that's going to be confusing to them. That's certainly not what they are.
    I do have a specific question about the data use agreement. Under the section “Other information we receive about you”, it says that Facebook collects data about the activities of you as a user “whenever you interact with Facebook, such as when you look at another person's timeline, send or receive a message, search for a friend or a Page, click on, view or otherwise interact with things”, and so on.
    My question is what does Facebook use the data for? Is it stored indefinitely? For example, a user's list of all the names that they've ever searched for on Facebook, or all of the pages they've viewed: what is this used for, and is it stored indefinitely?
    There's another point I want to ask about. So often when people look at this, they have the idea that what they are using is free. But you wouldn't have the value of the company that you do if everything were truly free. I'm just wondering if you can give us a bit of a concept of what your business model is as well, so that people can put the two thoughts together.

  (1550)  

    I appreciate your comments with regard to the data use policy and the statement of rights and responsibilities. We take very seriously the obligation to be transparent with our users. We try to present information about our data use practices in a number of different ways that are easy for people to understand. So I'm glad to hear you've found that the data use policy falls into that category.
    The provision of the data use policy that you mentioned talks about the information we receive. Largely this is consistent with the way most websites on the Internet operate. Whenever you click on something on Facebook, whenever you interact with something, your Web browser sends a message to Facebook that says, send me back this information. So we keep records of those interactions. Those are retained on an ongoing basis. We have, for different kinds of information, different retention periods, so in some cases information will roll off, and by “roll off” I mean either be deleted entirely or be rendered anonymous by removing personal identifiers on a rolling basis, typically every 90 days for social plug-in impressions, for example. With regard to other data, there are different retention periods.
    You mentioned search information specifically. When people search on Facebook, we collect that information, as I've said. We store it in an activity log, which is one of the tools I've talked about. That allows you to go back and look at all the things you've searched for. You can delete those any time just by clicking the delete button that appears next to each search. The goal there is, again, to be transparent with people about the information we have. That information is used right now to improve the service, so we can make our search functionality better by knowing what people are searching for and what they're clicking on. Those are the main purposes for which we use that information. There are also some technical and debugging uses.
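
To illustrate the kind of rolling retention described above, here is a minimal Python sketch. This is not Facebook's code: the record fields, the retention categories, and every window except the 90-day figure quoted for social plug-in impressions are assumptions made for illustration only.

```python
from datetime import datetime, timedelta

# Illustrative per-category retention windows. Only the 90-day figure for
# plug-in impressions comes from the testimony; the rest are hypothetical.
RETENTION = {
    "plugin_impression": timedelta(days=90),
    "search_log": timedelta(days=180),
}

def roll_off(records, now=None):
    """Drop or anonymize log records once their retention window has passed.

    Each record is assumed to be a dict with a "category", a naive UTC
    "created_at" datetime, and optionally a "user_id".
    """
    now = now or datetime.utcnow()
    kept = []
    for rec in records:
        window = RETENTION.get(rec["category"])
        if window is None or now - rec["created_at"] < window:
            kept.append(rec)             # still inside its retention window
        elif rec["category"] == "plugin_impression":
            rec.pop("user_id", None)     # anonymize: strip the personal identifier
            kept.append(rec)
        # any other expired category is dropped entirely (deleted)
    return kept
```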
    You also raised a second question with regard to our business model and how Facebook makes money. I think it's an important point that we try to stress to our users and to make sure people understand. The main Facebook business model is that we operate Facebook and offer it for free to users who want to use it. In exchange, we pay for it by showing advertising on Facebook. We have a page called “Ads on Facebook” that provides information about how this works. In general, when you post information on Facebook, for example, information about your interests, or when you like a page relating to a particular topic, that's information we might use to decide which ads to show you.
    Advertisers will come to us and say, “I'd like to show this ad to people who are interested in a particular topic.” We'll show the advertising to those users. Obviously we don't provide individual information back to the advertiser about who's seeing the ad, but we will provide general information, such as the fact that a certain number of people have seen the ad. In that way, we hope to give people control over the information they've given to us, while also being able to use that information to show them advertising that's more relevant to them than what they otherwise would receive.
    I think that's important, because we hear that if the advertisers are in there they have access and they know about an individual. That's a critical aspect of this. It's just an opportunity to get ads out to people, so that these products will be front of mind and so on, which I think is significant.
    I would also like to talk about the default settings. What is the reasoning behind Facebook's default settings being wide open, or public, on virtually all of the Facebook features, requiring the user to restrict all aspects of their account as they wish, rather than having the default setting set to friends only?
    With regard to the settings on Facebook, we try to be very clear with people about the way that settings work. A centrepiece of the way in which our service operates is what we call “inline” privacy controls. That means if you put a piece of information on your timeline, right next to it you'll have a button that will allow you to choose who will be able to see that information. In some cases, the default, meaning the setting that it's at when you first create your Facebook account, is public. Then, too, there are other situations where it might be something other than that.
    In general, our view is that providing information publicly helps people to communicate and connect. We think there's real value in enabling people to share. When you look at other social services on the Internet, you see that many of them are generally public by default. We think encouraging people to engage in a public discussion is helpful and promotes our community.
    That said, we think it's also important for people to make their own decisions about what information they want to share and with whom, which is why we see a lot of use of that setting. We see people who choose to share their information with friends, or with a more narrow group in some cases. Some people choose to post things to “only me”, which is the setting we use to suggest that only you will receive that information. You can remember it for later and have access to it, but it won't be shared with other users on Facebook.
    We think providing a platform that enables social integration but that also empowers people to make their own choices is the right approach.

  (1555)  

[Translation]

    Thank you. Unfortunately, your time is up.
    Mr. Andrews now has the floor for seven minutes.

[English]

    Thank you very much. Welcome.
    First I would like to have a little chat about what you mentioned at the end of your presentation with regard to the Irish Data Protection Commissioner. How many of these privacy commissioners or government agencies is Facebook dealing with around the world?
    Our services are operated in Canada and the U.S. by Facebook, Inc., which is based in Menlo Park, California, and by Dublin-based Facebook Ireland Limited in the rest of the world. Our primary regulators are the Federal Trade Commission in the U.S. and the Irish Data Protection Commissioner in Ireland. Those are the relationships we primarily rely on. We spend a lot of time with those organizations.
    We also have other relationships because we operate all over the world. We have relationships with regulators and policy-makers in various countries. For example, we have a robust relationship with the Privacy Commissioner here in Canada. We have found that we have had a very positive relationship with her office, and have been able to discuss many of our emergent issues and products with her office to get their feedback. That has been a very positive relationship. Many of our innovations in privacy have come out of our discussions with her.
    Between the Irish and the FTC, if one of the two of them tells Facebook to attain a certain level of privacy, does it go throughout the whole organization in every country? If the Irish set the bar here, and the FTC's bar is there, do you raise the global bar to the most up-to-date and what's requested?
    In general, we try to operate our service in a way that is consistent globally. We want everybody on Facebook to have the same experience. When we make privacy decisions, we try to make them in a way that works for all of our users in all the jurisdictions where we have relationships. In general, when we receive feedback from either regulator, we take that feedback seriously. There may be instances where we make a decision that certain features will work differently in some jurisdictions, but we prefer to avoid that where possible and maintain a consistent experience for everybody.
    When speaking about advertising with the last member, you said that “general information” is provided to advertisers. Do you want to elaborate a little on what general information is provided? Do you have a list—I think you may have a list on Facebook—of all your advertisers and who you share this information with?
    When I referred to general information, what I meant was aggregated information. When we show advertisements to large groups of users, we may tell the advertiser general information about the people in that group. For example, we might say this advertisement was shown to 100,000 people, or we might say 50% of the people who saw your advertisement told us they were male. We're not providing information that is specific to an individual, just information that will give an advertiser a sense of the population they are addressing. That's what I meant by general information.
    We don't have a specific list of all of the advertisers on Facebook, because that set of companies is constantly changing. The information they receive is not individually identifiable to users. When there are advertisements that appear on Facebook, there's an “x” that appears next to each ad. You can click “about this ad” to learn about Facebook advertising generally. There's also an indication there in most cases of the identity of that specific advertiser.
    We've also heard during our committee testimony that some of the advertisers are now linking up information off-line and trying to link the information they get with actual users. Are you familiar with this? Is this a concern of Facebook? Is it going on with some of the data you're providing to the advertisers?
    With regard to the information that we provide to advertisers, again, there's no information specific to individuals in an identifiable way. I'm aware that there are some advertisers who have a practice of linking up this information. We do provide some analytics in a general way across large groups of people about off-line purchasing behaviour, and that's something we've talked about on our privacy page and we've explained to people how we do that in a privacy-sensitive way.
    With regard to the entities that are able to receive information on Facebook, we have agreements with them that restrict their ability to leverage the information and use it in ways that we haven't authorized.

  (1600)  

    Talking about wanting out, if someone wants out of Facebook, is there that option? I think you mentioned it. If they pull all their data out, once it's out, is it gone? Do you retain it for a certain period of time?
    There are two different processes that we allow people to use in addition to downloading their information, which is the process that allows them to gain access to their information. You can do that without deleting your Facebook account. But assuming you've decided to sign off of Facebook, we provide two options. One is called deactivation and one is called deletion. We describe those together and give people the choice.
    Deactivation is for when you want to temporarily suspend your Facebook account but leave it intact, so that you can come back to it later and have access to all of your content and all of your friends. That's one option that we provide.
    The second option is deletion, which is what it sounds like. People can come to us and say, “I don't want to be on Facebook anymore and I want you to get rid of my account”. When that happens we tell them that there's a 14-day period during which they can change their mind. We instituted that because a lot of people started to delete their accounts and then came back later and said they'd changed their mind, and we weren't able to recover their data. So we now have a waiting period that we tell people about.
    After that period, we begin the process. I should say, on the date that you delete your account, it's deactivated, so it no longer appears on the Facebook service. Fourteen days later we begin the process of deleting your data or anonymizing it in every place it exists on Facebook.
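
A minimal sketch of the deactivate-then-delete flow just described, assuming hypothetical state and method names; the 14-day change-of-mind window is the one figure taken from the testimony.

```python
from datetime import datetime, timedelta

GRACE_PERIOD = timedelta(days=14)  # the change-of-mind window described above

class Account:
    """Illustrative lifecycle: a deletion request deactivates immediately, purges later."""

    def __init__(self, user_id):
        self.user_id = user_id
        self.state = "active"
        self.deletion_requested_at = None

    def request_deletion(self, now=None):
        # The account stops appearing on the service right away.
        self.state = "deactivated"
        self.deletion_requested_at = now or datetime.utcnow()

    def cancel_deletion(self, now=None):
        # Within the grace period the user can change their mind and come back.
        now = now or datetime.utcnow()
        if self.state == "deactivated" and now - self.deletion_requested_at <= GRACE_PERIOD:
            self.state = "active"
            self.deletion_requested_at = None

    def purge_if_due(self, now=None):
        # After the grace period, the delete-or-anonymize pass begins.
        now = now or datetime.utcnow()
        if self.state == "deactivated" and now - self.deletion_requested_at > GRACE_PERIOD:
            self.state = "deleted"  # a real system would fan out delete commands here
```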
    You mentioned those retention periods. Do you have different retention periods for different items? Perhaps you could elaborate on that.
    Thank you for the question.
    When I said that we have different retention periods for different items, the reason is that we keep information for different purposes. In general, we want to have information in our records for only as long as it's needed to provide services. For example, if you post something on your timeline, there's no fixed retention period that's associated with that. We leave it on your timeline as long as you leave it there, and if you choose to delete that content or delete your account, then we begin the process of removing it from the various places on our service that it exists.
    There are other pieces of information for which we have a more routinized data retention process. I referred earlier to social plug-ins, which are the “like” buttons that you see in various places on the web. In those cases, for logged-in Facebook users we store the impression data for 90 days, and after that period we either delete or anonymize those pieces of information as well.
    So different pieces of information are subject to different processes, but we try to be thoughtful about the way in which we retain the information.

[Translation]

    Thank you, Mr. Andrews. Your time is up, unfortunately.
    Mr. Butt now has the floor for seven minutes.
    Thank you, Mr. Chair.

[English]

    Thank you very much, Mr. Sherman, for being here today. I don't think there's a single member of Parliament—I would be surprised if there is one—who isn't a Facebook subscriber, a Facebook customer. I could be wrong. I know Mr. Angus quit Twitter, but I think he still likes his Facebook page.
    I'm glad you're here to give us a better sense of what you are trying to do. I'm fairly sure that your company's view is to have corporate responsibility and to make sure you're doing the best job you can do.
    I come at this as a father of a 13-year-old and an 8-year-old daughter. My initial question would be whether you've taken any additional measures as they relate to minors who are Facebook subscribers and are participating. Do you do any monitoring of content within your organization to, let's just say, protect young people against themselves to some degree? I realize that when you post something, you've made a conscious decision to go and do that. But Mr. Angus is talking about private messages showing up, and other things going on.
    Are you doing anything special, out of the ordinary, for underage users of the system, rather than for adults of the system, where we would assume that with adults, cooler heads would prevail when they're participating in Facebook?
    Do you have anything special or specific that you do around underage users of Facebook?
    We do. We take the safety of underage users, minor users, of Facebook very seriously. We actually have a dedicated manager on my team who focuses exclusively on those issues. The reason is that it's an important issue, not just to us who work on privacy, but to everyone at the company.
    One example of a way that we try to create a safe environment for teenagers who use Facebook is the default settings. We talked earlier about that. The default settings in general are more limited for teenagers. The thinking is that adults should make their own decisions about who they want to share with, but we want to put minors in a place that's a bit more limited, speaking in a smaller community.
    We don't monitor the content of our users in general, but we do have reporting functionality. We try to use a tool called “social reporting”, for example, which allows people who are concerned with Facebook content to engage in a conversation. For example, if you see content that you're concerned about as a user, you can report it to the user who posted it, to a trusted third party—for example, an adult you know. You can also report the content directly to Facebook. We have a team of professionals who review reported content and make judgments about what steps we should take. There are also some technological measures that we use, independent of teenagers' communications, just to look at the ways that adults communicate in order to help keep teenagers safe when they use Facebook.

  (1605)  

    If someone feels they're being bullied or stalked or in any way inappropriately contacted through Facebook, what's your mechanism to deal with that? If I report as a Facebook user.... Maybe you could also explain the process or how it works with...what do you call it? It's “defriending”, I guess, or getting someone off your site. You may have accepted them as a friend, but you find out they're actually an abusive friend. They've tried to befriend you on Facebook for a malicious reason. They essentially are there to cause difficulty.
    Can you explain how that system works—one, how someone is going to report if they feel they're being inappropriately contacted or abused on Facebook; and secondly, the process for defriending somebody?
    I haven't figured out how to defriend yet. I'll have to take a lesson. You can tell me today and I'll learn.
    An hon. member: I do it all the time.
    Mr. Brad Butt: You do it all the time? I don't know how to do it yet. I haven't figured it out yet. I need to get a lesson today.
    I do very little defriending, but I'm familiar with the process.
    I haven't done any defriending.
    In general we hope that people have a positive experience on Facebook and want to communicate. But we know there are situations in which people want to stop the communication, so we have a number of mechanisms in place to address that situation.
    The first is the ability to unfriend somebody, which essentially is when you've engaged in a relationship with them on Facebook and you decide you want to terminate that relationship. Either party in the relationship can stop a friendship, and there are a number of different ways to do it.
    The easiest way to do it is to go to their page, to their timeline, and there will be a button that will allow you to remove that friend relationship. That will still allow that person to see you on Facebook. They'll still see things you post publicly, or in groups or things like that, but they won't see things you share specifically with friends.
     If you want to go a step further because there's somebody who is concerning you, you can block that person, which is a stronger mechanism. That prevents that person, for example, from creating a message to you. If they've been sending you private messages that you feel are inappropriate, you can prevent them from creating messages to you by using that block functionality.
    There are other situations that may come up. When it goes beyond simple contact that you may find objectionable, we also want to know about it and to take steps where appropriate. On our “help centre” page, there's a button at the top right corner that says “report an issue”. That gives you information on how to contact Facebook when you have this kind of problem, and other problems with content that you see on Facebook as well.
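
The unfriending and blocking behaviour described above, together with the per-post audience selector mentioned earlier in the testimony, can be sketched as a simple visibility check. The data model and field names below are illustrative assumptions, not Facebook's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Author:
    user_id: str
    friends: set = field(default_factory=set)   # emptied of a person by unfriending
    blocked: set = field(default_factory=set)   # populated by blocking

@dataclass
class Post:
    author: Author
    audience: str  # "public", "friends", or "only_me"

def can_view(post: Post, viewer_id: str) -> bool:
    """Blocking overrides everything; otherwise the per-post audience decides."""
    author = post.author
    if viewer_id in author.blocked:
        return False                    # a blocked user sees nothing from the author
    if post.audience == "public":
        return True                     # public posts stay visible even after unfriending
    if post.audience == "friends":
        return viewer_id in author.friends
    if post.audience == "only_me":
        return viewer_id == author.user_id
    return False
```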
    Thank you very much.

[Translation]

    Thank you very much, Mr. Chair.
    We are starting the five-minute round of questions with Ms. Borg.
    Thank you, Mr. Chair.
    Thank you, Mr. Sherman, for being here today to answer our questions. We have been studying this matter for some time. And it's good to hear from you directly about what is being done with our personal information and what measures you are taking to protect it.
    I carefully read your most recent data use policy, published on November 21, 2012. Unless I'm mistaken, this is what you know about us: our GPS coordinates, our friends, our interests, our family members, the people we went to elementary and secondary school with. You can share that information, this portrait you have of us, with our friends, our partners, advertisers who buy ads on the site and developers who design the games, the applications and the websites we use. So you are sharing this information with a lot of people.
    Do you get express and informed consent to share this information?

[English]

    Thank you very much for your question.
    As a preliminary matter, you listed a number of pieces of information that Facebook receives, and we describe in our data use policy the various ways that we may receive information from our users.
    It's important to point out that we don't have that information about all of our users, so we rely on the information that people choose to give to us.
    As an example, you mentioned GPS coordinates. We receive GPS coordinates from your mobile phone when you use Facebook, but we ask for permission to do that first. So you will specifically authorize your phone to give us your GPS coordinates if you need to do that in order to use location-enabled features within Facebook.
    You can also choose, for example, on Apple's iOS platform, when asked, not to allow us to see your location. That will prevent you from using the location-enabled feature but will still allow you to use other aspects of Facebook.
    I think it is important to point out that we list all of the categories of information that we may receive, but it's not the case that we receive that information about everyone.
    With regard to the ways in which we share the information, different categories of information are shared in different ways. In general—we talked about advertisers—with regard to applications, we have a process that we discussed in detail with the Privacy Commissioner's office when we came up with it. That process tells the user what information the app would like to receive about them and it asks for permission before the person gets to that app.
    There are other situations in which we may receive consent that is not through a specific dialogue but through users' acceptance of our data use policy. For example, we have service providers that help us provide the Facebook service. They provide technical services, for example, for us. Those entities may have access to Facebook data, but they are subject to contracts that restrict their use of it. In those instances we rely on our users' acceptance of the data use policy as consent to allow those entities access for that limited purpose.

  (1610)  

[Translation]

    I am a Facebook user, and I find it very useful for being in contact with my constituents. I chose a public profile because I am a public figure.
    Let's say I want to choose something more limited, that I only want Mr. Boulerice to have access to what I am going to put on my wall because he's my friend. Can I do the same thing, for example, with respect to the websites, the applications? Can I choose a closed circle? It seems to me to be a little contradictory that I can choose who of my friends will have access to it, but not which of the big companies will have access to it.

[English]

    That's an important point. I think the relationship you have with your friends is different from what Facebook has with service providers and other kinds of entities with which we interact. We do provide controls in many cases. For example, we have application controls that let you choose the circumstances under which applications and websites can get access to your information.
    You may be in a situation where you would like your friends to see information but would like only certain apps to see it. Those are things we enable you to choose on an app-by-app basis.
    When I say “apps” I am really referring to mobile applications and also web-based applications.

[Translation]

    With respect to advertisers, can I choose who will get my information? If, for example, I absolutely do not want someone to know my music preferences and which bands I like, can I say that my information and my consumer profile cannot be disclosed to that person or company?

[English]

    With regard to your music interests, for example, we allow advertisers to make a judgment that they want to show advertisements to people who like a particular type of music. In those instances we may show the advertisement; we may identify that you like that music because you have told us on your timeline, but we won't then tell the advertisers “this particular person likes this kind of music” unless you have chosen to make that information public.

[Translation]

    Thank you.
    Unfortunately, your time is up, Ms. Borg.
    Mr. Calkins has the floor for five minutes.

[English]

    Thank you, Chair. I very much appreciate it.
    Thank you, Mr. Sherman, for being here today. We've had a really good study in regard to this particular issue. I don't expect that you've made any efforts to follow along with the lines of questioning, but I think you are getting a sense of where we are with our committee.
    Essentially our Privacy Commissioner is looking for expanded powers and authorities in order to deal with some of these particular issues. I am just wondering how you would feel about that.
     I apologize if I am asking you a question that has already been asked. I had some urgent business that I had to attend to earlier.
    Could you tell us about the relationship Facebook has with...or do you have a relationship with the Canadian Privacy Commissioner?
    We do have a relationship with the Canadian Privacy Commissioner. In fact, we find that relationship to be very productive and positive. We're able to talk with them about decisions that we make from a privacy standpoint and get their feedback, which I think helps us make a better product and helps us better protect the privacy of Canadians.
    I think when you look back at the relationship that we've had with the Privacy Commissioner's office over the years, you see that many of the innovations that we've had are on privacy. Many of the things that are hallmarks of the way privacy works on Facebook came out of those consultations, so I think it's been an incredibly positive relationship.
    I'm not an expert on Canadian privacy, but I'm familiar with the study, and I should say that I appreciate, and Facebook appreciates, the work the committee is doing to study these issues.
    With regard to the question of whether the Privacy Commissioner's power needs to be expanded, I think my sense is that if you look at a company like Facebook, we're a good example of the fact that the existing regime works quite well. We've had consultations with the Privacy Commissioner on an ongoing basis and we've made changes to our product, in fact, in response to her feedback. We've made those judgments based on the fact that the Privacy Commissioner has suggested ways that we can better protect the privacy of Canadians.
    I think those are things that, if you were designing a privacy regulatory regime, would be the outcome that you would seek to create.

  (1615)  

    I appreciate that.
    I'm going through the information on your sign-up page. I've never directly, personally, used Facebook. I think I've had, through some work that I've done, a page or a place that folks can visit through some outreach that I'm trying to do, and it works quite well in that respect.
    I do have some questions for you that I've asked others in the past insofar as deleting and deactivating information goes. From what I read here, Facebook seems to be able to clearly differentiate between deleting and deactivating, which I think is very important.
    People post a lot of personal information on Facebook. I mean, that's what it's all about. That's the raison d'être of the site. What assurances can you give to me and to this committee that when users want that information pulled down from the site and choose to deactivate or delete their account, the information is actually deleted, if they do choose to delete it? How far does that go insofar as your backups and any information that might have been collected or disseminated for either reuse or for marketing purposes?
    We think user control, user trust, is an essential part of the Facebook experience, and it's an issue that we spend a lot of time thinking about. We understand that people won't feel comfortable using our site if they don't trust us, so we want to do everything we can to be transparent about how Facebook works and how people can have control over their data.
    So when you use the deletion function, whether it's the account deletion function or just the function to delete a particular piece of content, that starts what we call an “active deletion” process, which immediately removes the content from accessibility on the site and from our active servers. The process then goes to the various backups and alternative servers where we keep the information and sends a command to those servers indicating that the information should be deleted.
    That process takes a bit of time, because we do have backups and so on, but we do try to have a process in place to make sure that information is deleted in a way that's reasonable and consistent with the instructions we receive from users.
    You mentioned deactivation, and I do want to distinguish between deactivation and deletion. Deactivation is not a situation in which a user requests deletion of their information. We actually just suspend their account, but maintain it.
    No, absolutely; the information is there so it can be reactivated in the event that a user wants to. I understand the practicality of that. I'm just worried that some consumers might think they're actually deleting something when in fact an account is deactivated.
    The question I want to ask you right now is this. I don't know if you have any survey information. Obviously, when I look at the site, and it's very typical of what most sites are, it says, “By clicking Sign Up, you agree to our Terms and that you have read our Data Use Policy, including our Cookie Use.” That's what it says on the page that I have open here in front of me. The terms that you have are 19 clauses long, and it's written largely in legal jargon. That's fine; it's a binding agreement. Your data use policy is quite broad and would take an educated reader or user quite some time to read and discern, particularly the part about cookie use. For those who aren't very familiar—even though the younger generation has grown up with computers, I didn't have that luxury, but I've figured it out since—all of that is a single yes-or-no agreement by the end user who wants to use your product.
    First of all, how many people do you think will actually read all of that before they click on it? Do you have any indication from your users of how many of them actually have done so, even though they're responsible to do so?
    As well, would you ever give any thought to having a situation whereby a user would have options to agree to certain terms and certain conditions and give them the option, depending on their feedback, of tailoring your site and the services that you offer to them, based on their preferred level of user interaction and user interface with your company?

  (1620)  

[Translation]

    Thank you. I will give the witness about 30 seconds or a minute to answer your question.

[English]

     Thank you very much.
    I think with regard to all of these tools, we try to be transparent with users and provide them with information. We hope they are written in a way that's reasonably easy to understand, and we've received feedback from people in a number of different ways that they are able to understand it.
    We try to present the data use policy in what we call a layered format. Essentially what that means is if you go to our privacy page, you can get the high-level information and drill down if you want to do that. With regard to cookies specifically, which you mentioned, one of the pieces of information we provide to people is a special “frequently asked questions” about cookies, which is written in plain English and includes detailed information about how we use cookies and the purposes for which we use them.
    So we try to make that information accessible. I don't have statistics on how many people read it or don't read it, but we think, through the feedback we have received, that people understand that.

[Translation]

    Thank you.
    Mr. Boulerice has the floor for five minutes.
    Thank you, Mr. Sherman, for being here today. I must admit that I am one of those politicians who uses Facebook quite a lot most of the time.
    I will seize this opportunity. So along the same lines as Mr. Calkins' questions on your data use policy, I'll read you an excerpt from the chapter that deals with the use of information you receive. I would like you to tell me if I have understood correctly. It reads as follows:
    While you are allowing us to use the information we receive about you, you always own all of your information. Your trust is important to us, which is why we don't share information we receive about you with others unless we have:
    received your permission;
    given you notice, such as by telling you about it in this policy; ...
    In other words, people who do not read the policy and click on "I agree" are basically giving you the right to use personal information, such as their photos.

[English]

    We try to provide information about how Facebook works, how information is shared in a number of different ways, in addition to the data use policy. Obviously we provide the data use policy to every user of Facebook before they can access our site. We require them to accept it, and we hope they do read it.
    We also provide information about how information is shared in a number of different ways, including in our interface. For example, we have the inline privacy controls, which, if you have used our site, you're familiar with: when you're posting, you get to make a judgment about who will see that information.
    We think there are a number of different ways that we provide information to people. We also have a help centre where you can search for information if you want to know about how we do a particular thing and hope that makes the information accessible. We do try to provide a readable data use policy.

[Translation]

    But am I correct in saying that you consider the simple fact that they have clicked on the words "I agree" to mean that people have been informed that they are authorizing you to use their personal information?

[English]

    Yes. When people agree to the data use policy, we understand that they have agreed.

[Translation]

    Thank you.
    You collect information about people's interests, their age, where they live. In that way, you can target the applications, the games and the advertising they will receive on their page. And that is the business model for most social media sites.
    I would like to know if there is a specific code of ethics, especially with respect to adolescents. For example, do you prevent weight loss products from being advertised on the pages of adolescent girls who are 13, 14 or 15 years of age?

[English]

    We do have a set of advertising guidelines, which I think cover the areas you're referring to. You can read them on our site: if you go to the bottom of any page, there's a link called “terms”, which leads you to a page that provides all of our governing documents, including our advertising policies. There is also additional information about that in our help centre.
    We do have policies that restrict advertisers' ability to target based on certain sensitive characteristics—for example, based on race or ethnicity—and we limit them on that basis. There are other content-based restrictions as well on how advertising can work.

[Translation]

    Suppose I'm the parent of an 8-year-old and a 12-year-old—which is actually the case—and I learn that they have created a Facebook profile. Should I want to delete it, but they stubbornly refuse to give me their password in order for me to do so, who could I talk to?

  (1625)  

[English]

    One of Facebook's policies is that you have to be age 13 or older to gain access to our site. That's because our view is that our site is designed for people who are above that age. So we take a number of different steps to prevent children, including children aged 8 and 12, as you mentioned, from gaining access to the site.
    Those tools aren't perfect. One of the things we do is delete the accounts of children under 13 once we've verified that they in fact belong to people who are under 13. If you go into our help centre, you can find information on how to contact us. We would obviously want to verify that you are the parent of the person who created the account before deleting it.

[Translation]

    Thank you.
    With respect to deleting or removing something from your site, the policy says, "We store data for as long as it is necessary to provide products and services to you and others, including those described above. Typically, information associated with your account will be kept until your account is deleted."
    What do you mean by "typically"? What are the exceptional circumstances in which you would not delete the information when I delete my profile?

[English]

    I think the portion of the policy that you're referring to is information that's received in connection with advertising. In that case, we say we receive information in connection with advertising, and then we delete it when we no longer need it. That is a general policy that applies regardless of whether you delete your account.
    When you delete your account on Facebook, we will delete the data we've collected or remove any personal identifiers from it, so that you're not identified by any information we have.

[Translation]

    Thank you for your answers. The time is up.
    Mr. Carmichael now has the floor for five minutes.

[English]

    Thank you, Chair.
    Thank you, Mr. Sherman, for appearing today.
    I have two areas that I'd like to address in the time we have. First, I'd like to go back to the deletion element. You've talked about that a couple of times.
     I think I clearly understand the difference between deactivation and deletion. The concern I have is on the deletion aspect. What happens to the data?
    If I, as a consumer or a user, choose to delete—we bypass the 14 days, I want out—do I have my information fully deleted at Facebook and there's no more record of me, other than what's already been shared, I would think, through the communities?
    When you choose to delete your information after that period, we begin a process that we call active deletion. That sends a message to the various places on Facebook that store information about you so that we can provide service to you. What happens is that the content is deleted, or identifying information is removed from logs. We may keep logs that no longer identify you beyond that time, but the idea is generally that the information will get deleted. No system is perfect, but we do everything we reasonably can to make sure we honour that commitment.
    You said you start a timeline. What's the timeline?
    The timeline is the 14 days after you submit a deletion request.
    Okay, so I've done that, I've bypassed that, I've come back to you, and I've said I want out. I can push a button and effective at that point you will delete all the information I have put on your system over the course of our relationship?
    We'll delete or anonymize that information. In some cases we're not able to find all the places and delete the content, but we can remove links to you individually, so that when we delete your account we won't know who that information is associated with, and it won't be accessible on our site.
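As a rough illustration of the "delete or anonymize" distinction described in this exchange, the sketch below shows one way a service might remove a deleted user's own content while stripping identifiers from retained logs. It is an assumption-laden toy example; the field names, salt handling, and helper functions are invented for illustration and are not Facebook's systems.

```python
import hashlib

def anonymize_logs(logs, deleted_user_id, salt):
    """Strip direct identifiers for one deleted user; keep the rest of each record."""
    scrubbed = []
    for entry in logs:
        entry = dict(entry)  # work on a copy
        if entry.get("user_id") == deleted_user_id:
            # Replace the identifier with a one-way token and drop direct PII
            # fields, so the retained record can't be tied back to the person.
            token = hashlib.sha256((salt + str(deleted_user_id)).encode()).hexdigest()
            entry["user_id"] = token[:12]
            for pii in ("name", "email", "ip_address"):
                entry.pop(pii, None)
        scrubbed.append(entry)
    return scrubbed

def process_account_deletion(user_id, content_store, logs, salt="one-off-random-salt"):
    """Delete the user's own content outright; anonymize any logs that are kept."""
    remaining_content = [c for c in content_store if c.get("author_id") != user_id]
    return remaining_content, anonymize_logs(logs, user_id, salt)

# Example
content = [{"author_id": 42, "text": "hello"}, {"author_id": 7, "text": "hi"}]
logs = [{"user_id": 42, "email": "x@example.com", "action": "login"}]
print(process_account_deletion(42, content, logs))
```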
    Right. Okay, thank you.
    I was just chatting with my colleague here, and we talked about the privacy and user agreement. One of the concerns we've had, as we've participated in this study, is that these user agreements seemingly are not at all user-friendly. You talk about them being layered. I understand that. It appears to me that it's simply an all-or-nothing proposition: either I accept it or I don't. If I accept it, I'm in. If I don't, I go away.
    I'm wondering, with technology today, why Facebook, as the leader in this industry, wouldn't provide levels of agreement for users who want to take the time to work through an agreement with you to participate in your programs.
    It's an important point, and it goes to one of the issues we spend a lot of time thinking about, which is user control, and making sure people have the ability to make the choices that are right for them about how their information is used.
    We don't give people the choice to accept certain portions of our terms of service but not others, largely because it would be very burdensome for us. It wouldn't be efficient for us to give people those options to negotiate and to provide different versions of Facebook for different people. Given the fact that we're now at a billion users, it would be prohibitively difficult to do something like that. That said, we build robust controls into our product that allow all our users to make decisions on how they would like specific pieces of information used.

  (1630)  

    Madame Borg talked about her music. She doesn't want to share that particular interest with a particular advertiser. You have a section here that says you're going to share that data with your advertisers within....
    Now, I understand that business concept. I would consider myself more of a nominal user, but I would think that younger folks today are going to have very definite preferences about what they do or don't want to share when you get into that type of agreement.
    I'm wondering why, number one, you wouldn't allow that type of.... I hear you on efficiency, but technology today surely allows the agility within your architecture for an individual who says she doesn't want to be advertised to for her music; she doesn't want to share that.
    So why not?
    I should clarify that if Madame Borg decides to share her music interests publicly, then obviously anybody in the public can see it.
    Then it's gone, yes.
    If she doesn't make that decision, if she chooses to share it with friends or something more narrowly, that's not information we will give to advertisers in an identifiable way unless she separately agrees to allow us to do that.
    That's within the privacy framework.
    That's within the privacy framework. What we might do is use that information to determine that she might be more interested in seeing ads for classical music than rock music, for example.
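To illustrate the distinction drawn here, the following sketch shows how interest signals could be used internally to choose an ad category without handing the advertiser any identifiable data. The categories, keywords, and function names are hypothetical and do not describe Facebook's actual ad system.

```python
from collections import Counter

# Hypothetical categories and keywords, invented for illustration.
AD_CATEGORIES = {
    "classical music": {"classical", "opera", "symphony"},
    "rock music": {"rock", "guitar", "concerts"},
}

def pick_ad_category(user_interests):
    """Return the best-matching ad category for these interests, or None."""
    scores = Counter()
    for category, keywords in AD_CATEGORIES.items():
        scores[category] = len(keywords & set(user_interests))
    best, score = scores.most_common(1)[0]
    return best if score > 0 else None

# The advertiser would only learn that an unnamed user in a given segment saw
# the ad; the underlying interests themselves are never shared with them.
print(pick_ad_category({"opera", "hiking"}))  # classical music
```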
    I hear you, but—

[Translation]

    Mr. Carmichael, your time is up. I would ask that you wrap up quickly.

[English]

    I just got started.
    Thank you.

[Translation]

    That's the rule for everyone.
    So, Ms. Borg has the floor for five minutes.
    Thank you very much.
    I am pleased to be able to ask another question. We obviously have several, but we need to be very selective.
    You have changed your data use policy frequently; the most recent change was on November 21, so quite recently. There is clearly an effort to keep things up to date with the current context.
    Do you think the most recent policy reflects the concerns and many complaints from users and the international community about Facebook? I'm talking about legal action, complaints to the Office of the Privacy Commissioner of Canada, to the Federal Trade Commission, and others.

[English]

    As you point out, we make efforts to update our data use policy on a regular basis. We consider it to be a guide to privacy on Facebook, so we want to provide information to people that's current and that reflects the way our site currently works.
    Many of the changes we do make to the data use policy are in response to feedback we've received from policy-makers. As an example, the most recent round of changes, the one you referred to that's dated November, has not been adopted yet; it's a proposal that we've made to our users, and we're seeking comment on it. The version that's currently in effect, however, reflected a number of specific pieces of feedback from the Irish Data Protection Commissioner's office. There were areas where they were comfortable with our practices but wanted us to be very explicit in our data use policy about how those practices worked, so we updated the policy specifically in response to that feedback.

[Translation]

    Thank you.
    Once again, I would like to thank you for being here and agreeing to testify as part of this study.
    We also had an opportunity to ask questions of a Google representative. Unfortunately, the people from Twitter have turned down our request to have them come and appear before us.
    So I will take the opportunity to present the following motion:
    That the Standing Committee on Access to Information, Privacy and Ethics request that the committee Chair write an official public letter inviting Twitter to testify before the committee, within the scope of its study on privacy and social media, at its earliest possible convenience.
    Thank you, Ms. Borg.
    First, I will say that the motion is in order and that it is related to what we are currently talking about. I don't know if you want to discuss this. I must also mention that Twitter was raised when we were in camera.
    I am sorry for this interruption, Mr. Sherman.
    I don't know whether the committee members want to discuss this motion in public right away or wait until after Mr. Sherman's testimony. I think waiting would be most appropriate, since we have not discussed it publicly, unless we have unanimous consent to deal with it otherwise.
    Mr. Angus, you have the floor.

  (1635)  

[English]

    Thank you.
    I certainly want to thank Facebook for coming. I think it's been excellent.
    We've had Google here. We're almost at the end of our study, and—I don't think this is controversial—we can't really say we've done a full study unless we've heard from Twitter.
    I don't think we're suggesting that Twitter isn't going to come, but I think we won't have a full sense unless we have the main players. We really appreciate the support we've had so far, so we'd just like to end by saying that we need Twitter, and then I think people will know that we have a study that's done its work. We've heard from a good variety of voices. We should be putting this study to bed.
    So I'd like to hear from Twitter. I invite my colleagues to just say let's invite them. Then it's on the record that we've asked them, and we can finish our study.

[Translation]

    I see that no one else wants to speak. Are you ready to vote on the motion?
    Mr. Warkentin, you have the floor.

[English]

    Out of respect to our witness here, I think it would be helpful if we completed the testimony. I don't think there's any disagreement generally about this, but I think we'd like to have a longer discussion about it in terms of what we'd be instructing the chair to do, at which time I think we could deal with it.
    Out of respect to our witness, I think we need to finish up and then we'll move on.

[Translation]

    Mr. Angus, please be brief.

[English]

    I'm sorry. We were under the understanding that it was finished. That's why we brought it up.
    If you want to continue the round of questioning, we can.
    We have a witness still sitting.
    Okay, well, we don't mind. We trust that we'll deal with this motion.
    So yes, we're perfectly fine with that.

[Translation]

    Perfect. I think that is the easiest way to proceed. I will let Mr. Carmichael finish. Then, we will deal with the motions once the witness has left.
    Mr. Carmichael, go ahead.

[English]

    I'll be brief, but I just want to follow up with you, Mr. Sherman. We talked about a couple of issues relevant to privacy and data management. My colleague talked about how some of the data, obviously, has ended up where it shouldn't have.
    My concern is that, with technology and social media today, we're not dealing with a perfect science. There's still a lot of flux, if you like, in the development and growth of your technology and others.
    We visited Washington, and we've talked to other regulators, such as the FTC. We heard that the EU has just established a new framework, and that the U.S., under President Obama, has established a new framework of guidelines and controls. My concern is that we're hearing from other leading jurisdictions saying we've got to tighten the regulations and put tougher rules out there. Now, Facebook is a leader, but you've got a lot of competition out there, smaller organizations that I think operate on a push-the-envelope-and-apologize-later approach.
    Do you have a comment on the actual regulations? We're looking at the possibility that one of our recommendations may well be to give our commissioner more authority and greater control over the environment that she has to deal with, and I wonder if you have a comment on that relative to the more global network you're planning. We heard about Ireland. We heard about some of the other areas where you have to be conscious of it. You don't operate from Canada at this point; it just comes across the border out of your U.S. operations.
    What would be your recommendation relative to governance of technology through our privacy commissioner in terms of what you're seeing with other jurisdictions, and should we be providing more stringent governance for our commissioner to operate under?
    Thank you for the question. I think it's an important one and I appreciate that the committee is taking the time to think through that issue, which is a critical one. You mentioned enforcement regimes in the U.S. and Europe in comparison to Canada's, and as you point out, they're very different.
    I think when you look at each of those regimes, although they're different, they've all been effective. As you know, we're based in Menlo Park, California, but we have a robust relationship with the Privacy Commissioner's office. I think that's a reflection of the fact that the existing regime works quite well. We're able to have consultations with her office in a way that's productive, enables us to get to good results, and allows us to make decisions that are best for Canadians. That is not adversarial in the way that you might see if the regime were different.
    So I think we're actually quite a good example of how the Privacy Commissioner has used her authority well, has created robust privacy change, and has improved the service that we provide, all under her existing authority.

  (1640)  

    Good. Thank you.

[Translation]

    Thank you, Mr. Carmichael.
    Once again, on behalf of the committee, thank you very much for appearing before us to help us with our study on privacy and social media.
    We will suspend the meeting for a few moments so you can leave the room. We will then continue to discuss the motion that was just tabled.

  (1640)  


  (1640)  

    As agreed earlier, we are going to continue discussing the motion tabled by Ms. Borg. Do you want to discuss it or are you ready to vote? It's up to you.

[English]

    Call the vote.

[Translation]

    We are ready to vote.
    (Motion agreed to)
    The Chair: I will be pleased to write this official and public letter to invite representatives from Twitter to appear before the committee. Thank you.
    Now, I would like to discuss the schedule for the next few days.
    Mr. Warkentin, you have the floor.

[English]

    If we're going to go into committee business, I think it would be important to go in camera so we can speak openly and freely about where we want to take this.

[Translation]

    Do you have a motion?

[English]

    I make a motion to move in camera for those reasons.

[Translation]

    So we have a motion to move in camera.

[English]

    Could we have a recorded vote?

[Translation]

    There is a request for a recorded vote.
    (Motion agreed to: yeas 7; nays 4) [See the Minutes of Proceedings]
    The Chair: We will suspend the meeting for a few minutes to give the technician time to take the meeting in camera.
    [Proceedings continue in camera]