Colin Fraser (Lib., NS)
2019-06-06 9:10
I understand that, but it sends an important signal about the seriousness with which Facebook takes this issue when Mr. Zuckerberg is saying that he looks forward to discussing it with lawmakers around the world, and when they take him up on that, he refuses to come.
I want to move on to another point, and that's the upcoming federal election. You mentioned that in your presentation. What is Facebook doing to monitor and track foreign interference, generally speaking, when it comes to democratic elections but, in particular, the Canadian federal election coming up?
Kevin Chan
2019-06-06 9:12
Yes, I think we've been very public with our coordinated inauthentic behaviour takedowns. At times we've traced these things back to the Internet Research Agency in Russia. We've traced some of these back to Iran. We've traced some of these back to domestic operations in the countries in which we're doing the takedowns.
Iqra Khalid (Lib., ON)
Thank you, Chair.
Thank you, Mr. Chan, for coming in today.
Mr. Chan, I want to start by going back to something you said in your opening remarks about protecting women candidates in the upcoming federal election. Do you have the same policies also for the LGBTQ2 candidates who are running? There aren't many of them.
Kevin Chan
2019-06-06 9:20
That's a very good question, Madam.
We do have the same policies, in the sense that they would apply to any “protected group”, if we could call it that. It would be gender. It would be race. It would be sexual orientation. We do have the same policies that apply to these protected groups or protected characteristics.
However, in the specific case of women's safety, what I was referring to is the partnership work we're doing with Equal Voice, which is to ensure that we have as many tools as possible to lean in with and to provide to as many women candidates as possible.
We started this last month. We're going to continue on our journey next week, a cross-country tour. We'll be engaging with as many political candidates and activists as possible across the country. We were in Ottawa, Toronto, Halifax, Oshawa and Montreal earlier. We're going to be out in B.C. and Alberta next week. In all these cases, we do hold events with Equal Voice, as well, where we bring out as many people as possible in their national network to engage with them on the tools and the products we have to keep women candidates safe as they campaign.
Michael Barrett
Mr. McKay, Mr. MacKenzie satisfied my curiosity as it relates to this, but I do have a question. If Google has said this year that it is not planning to allow political ads, is it the intent of Google to allow political ads in the next election? Or is the current plan just not workable ever?
Colin McKay
2019-06-04 16:20
We were faced with a difficult decision. The legislation was passed in December, and we had to have a system in place for the end of June. We went through the evaluation internally as to whether or not we could take political ads in Canada within that time frame, and it just wasn't workable.
The reality around transparency in political ads is that we already have products in the United States (and we're rolling them out elsewhere) that provide transparency around political advertising. Those products are evolving as we go through election after election. Europe just had one. India just had one. Brazil just had one. Our goal is to continue developing those products to the point where, we hope, they reach parity with what's identified in the Elections Act.
Michael Barrett
Do you have an expectation for a timetable as to when, based on the current legislation, you think Google would be able to comply?
Mr. Colin McKay: No.
Mr. Michael Barrett: Okay.
My next question has to do with public safety. Rural Canadians have expressed concerns that mapping software that often relies on Google Maps doesn't identify rural streets. That can pose problems for emergency services. Is there a mechanism or a plan for a mechanism to be made available to rural Canadians to be able to identify to Google either missing streets or missing mapping data for instances like the one I mentioned?
Marlene Floyd
2019-05-29 8:45
Microsoft was also proud to be a signatory to the Paris call for trust and security in cyberspace announced in November by French President Emmanuel Macron at the Paris peace summit. With over 500 signatories, it is the largest ever multi-stakeholder commitment to principles for the protection of cyberspace.
Another focus of your committee has been the increasing interference by bad actors in the democratic processes of numerous countries around the world. We fully agree that the tech sector needs to do more to help protect the democratic process. Earlier this week, we were pleased to endorse the Canada declaration on electoral integrity announced by Minister Gould.
Microsoft has taken action to help protect the integrity of our democratic processes and institutions. We have created the Defending Democracy program, which works with stakeholders in democratic countries to promote election integrity, campaign security and disinformation defence.
As part of this program, Microsoft offers a security service called AccountGuard at no cost to Office 365 customers in the political ecosystem. It is currently offered in 26 countries, including Canada, the U.S., the U.K., India, Ireland and most other EU countries. It's currently protecting over 36,000 email accounts. Microsoft AccountGuard identifies and warns individuals and organizations of cyber-threats, including attacks from nation-state actors. Since the launch of the program, it has made hundreds of threat notifications to participants.
We have also been using technology to ensure the resiliency of the voting process. Earlier this month, we announced ElectionGuard, a free, open-source software development kit aimed at making voting more secure by providing end-to-end verification of elections, opening results to third party organizations for secure validation, and allowing individual voters to confirm that their votes were counted correctly.
At Microsoft, we're working hard to ensure that we develop our technologies in ways that are human-centred and that allow for broad and fair access by everyone. The rapid advancement of compute power and the growth of AI solutions will help us be more productive in nearly every field of human endeavour and will lead to greater prosperity, but the challenges need to be addressed with a sense of shared responsibility. In some cases this means moving more slowly in the deployment of a full range of AI solutions while working thoughtfully and deliberately with government officials, academia and civil society.
We know that there is more that we need to do to continue earning trust, and we understand that we will be judged by our actions, not just our words. Microsoft is committed to continuing to work in deliberate and thoughtful partnership with government as we move forward in this digital world.
Thank you, and we're happy to receive your questions.
Alan Davidson
2019-05-29 8:49
Members of the grand committee and the standing committee, thank you.
I'm here today because all is not well with the Internet. For sure, the open Internet is the most powerful communications medium we've ever seen. At its best, it creates new chances to learn, to solve big problems and to build a shared sense of humanity, and yet we've also seen the power of the Internet used to undermine trust, magnify divisiveness and violate privacy. We can do better, and I'm here to share a few ideas about how.
My name is Alan Davidson. I'm the vice-president for policy, trust and security at the Mozilla Corporation. Mozilla is a fairly unusual entity on the Internet. We're entirely owned by a non-profit, the Mozilla Foundation. We're a mission-driven open-source software company. We make the Firefox web browser, Pocket and other services.
At Mozilla we're dedicated to making the Internet healthier. For years we've been champions of openness and privacy online, not just as a slogan but as a central reason for being. We try to show by example how to create products to protect privacy. We build those products not just with our employees but with thousands of community contributors around the world.
At Mozilla we believe the Internet can be better. In my time today, I would like to cover three things: first, how privacy starts with good product design; second, the role of privacy regulation; and third, some of the content issues that you folks have been talking about for the last few days.
First off, we believe our industry can do a much better job of protecting privacy in our products. At Mozilla we're trying to do just that. Let me give you one example from our work on web tracking.
When people visit a news website, they expect to see ads from the publisher of that site, the owner of that website. Yet visitors to the top news sites, at least in the U.S., encounter dozens of third-party trackers, trackers from sites other than the one they're visiting, sometimes as many as 30 or 40. Some of those trackers come from household names, and some are totally obscure companies that most consumers have never heard of.
Regardless, the data collected by these trackers is creating real harm. It can enable divisive political ads. It can shape health insurance decisions and is being used to drive discrimination in housing and jobs. The next time you see a piece of misinformation online, ask yourself where the data came from that suggested that you would be such an inviting target for that misinformation.
At Mozilla we've set out to try to do something about tracking. We created something we call the Facebook container, which greatly limits what Facebook can collect from you when you're browsing on Firefox. It's now, by the way, one of the most popular extensions that we've ever built. Now we're building something called enhanced tracking protection. It's a major new feature in the Firefox browser that blocks almost all third party trackers. This is going to greatly limit the ability of companies that you don't know to secretly track you as you browse around the web.
We're rolling it out to more people, and our ultimate goal is to turn it on by default for everybody. I emphasize that because what we've learned is that creating products with privacy by default is a very powerful thing for users, along with efforts like our lean data practices, which we use to limit the data that we collect in our own product. It's an approach that we hope others adopt, because we've learned that it's really unrealistic to expect that users are going to sort through all of the privacy policies and all the different options that we can give them to protect themselves. To make privacy real, the burden needs to shift from consumers to companies. Unfortunately, not everybody in our industry believes that.
Let me turn to my second point, which is that we believe that regulation will be an essential part of protecting privacy online. The European Union has been a leader in this space. Many other companies around the world are now following suit and trying to build their own new data protection laws. That's important because the approach we've had for the last two decades in our industry is clearly not working anymore. We've really embraced in the past this notion of notice and choice: If we just tell people what we're going to collect and let them opt out, surely they'll be fine. What we found is that this approach is really not working for people. We've been proponents of these new data protection rules, and we hope you will be too.
We believe that a good privacy law should have three main components. It needs clear rules for companies about what they can collect and use; it should have strong rights for individuals, including granular and revocable consent for specific uses; and it should be backed by an effective and empowered enforcement agency, which is not always the case. We think that's an important component.
Critically, we believe that you can build those laws and you can include those components while still preserving innovation and the beneficial uses of data. That's why we're supporting a new federal privacy law in the U.S. and we're working with regulators in India, Kenya and in other places to promote those laws.
My third point is that given the conversation you have all had for the last few days, I thought it would be useful to touch on at least some of our views on the big issues of content regulation. Of all the issues being examined by the committee, we believe that this is the most difficult.
We've seen that the incentives for many in the industry encourage the spread of misinformation and abuse, yet we also want to be sure that our reactions to those real harms do not themselves undermine the freedom of expression and innovation that have been such a positive force in people's lives on the Internet.
We've taken a couple of different approaches at Mozilla. We're working right now on something we call “accountability processes”. Rather than focusing on individual pieces of content, we should think about the kinds of processes that companies should have to build to attack those issues. We believe that this can be done with a principles-based approach. It's something that's tailored and proportionate to different companies' roles and sizes, so it won't disproportionately impact smaller companies, but it will give more responsibility to larger companies that play a bigger role in the ecosystem.
We've also been really engaged in the issues around disinformation, particularly in the lead-up to the EU parliamentary elections that just happened. We're signatories to the EU Code of Practice on Disinformation, which I think is a very important and useful self-regulatory initiative with commitments and principles to stop the spread of disinformation. For our part, we've tried to build tools in Firefox to help people resist online manipulation and make better choices about and understand better what they're seeing online.
We've also made some efforts to push our fellow code signatories to do more about transparency and political advertising. We think a lot more can be done there. Candidly, we've met with mixed results from some of our colleagues. I think there is much more room to improve the tools, particularly the tools that Facebook has put out there for ad transparency. There is maybe some work that Google could do, too. If we can't do that, the problem is that we'll need stronger action from government. Transparency should be a good starting point for us.
In conclusion, I'd say that none of these issues being examined by the committee are simple. The bad news is that the march of technology—with artificial intelligence, the rise of the Internet of things and augmented reality—is only going to make it harder.
A concluding thought is that we really need to think about how we build our societal capacity to grapple with these problems. For example, at Mozilla we've been part of something called the responsible computer science challenge, which is designed to help train the next generation of technologists to understand the ethical implications of what they're building. We support an effort in the U.S. to bring back the Office of Technology Assessment to build out government's capacity to better understand these issues and work more agilely. We're working to improve the diversity in our own company and our industry, which is essential if we're going to build capacity to address these issues. We publish something every year called the “Internet Health Report”, which just came out a couple of weeks ago. It's part of what we view as the massive project we all have to help educate the public so that they can address these issues.
These are just some of the examples and ideas we have about how to work across many different levels. It's designing better products, improving our public regulations and investing in our capacity to address these challenges in the future.
We really thank you for the opportunity to speak with you today and we look forward to working with you and your colleagues around the world to build a better Internet.
Daniel Therrien
2019-05-28 15:32
Thank you, Mr. Chair.
Members of the grand committee, thank you for the invitation to address you today.
My remarks will address three points that I think go to the heart of your study: first, that freedom and democracy cannot exist without privacy and the protection of our personal information; second, that in meeting the risks posed by digital harms, such as disinformation campaigns, we need to strengthen our laws in order to better protect rights; lastly, as an expert in Canadian privacy regulation, I will share suggestions on what needs to be done in Canada so that we have 21st-century laws in place to protect the privacy rights of Canadians effectively.
I trust that these suggestions made in a Canadian context can also be relevant in an international context.
As you know, my U.K. counterpart, the Information Commissioner's Office, in its report on privacy and the political process, clearly found that lax privacy compliance and micro-targeting by political parties had exposed gaps in the regulatory landscape. These gaps in turn have been exploited to target voters via social media and to spread disinformation.
The Cambridge Analytica scandal highlighted the unexpected uses to which personal information can be put and, as my office concluded in our Facebook investigation, uncovered a privacy framework that was actually an empty shell. It reminded citizens that privacy is a fundamental right and a necessary precondition for the exercise of other fundamental rights, including democracy. In fact, privacy is nothing less than a prerequisite for freedom: the freedom to live and develop independently as individuals, away from the watchful eye of surveillance by the state or commercial enterprises, while participating voluntarily and actively in the regular, day-to-day activities of a modern society.
As members of this committee are gravely aware, the incidents and breaches that have now become all too common go well beyond matters of privacy, as serious as I believe those to be. Beyond questions of privacy and data protection, citizens' very faith in our democratic institutions and electoral processes is now under a cloud of distrust and suspicion. The same digital tools, such as social networks, that public agencies like electoral regulators thought could be leveraged to engage a new generation of citizens are also being used to subvert, not strengthen, our democracies.
The interplay between data protection, micro-targeting and disinformation represents a real threat to our laws and institutions. Some parts of the world have started to mount a response to these risks with various forms of proposed regulation. I will note a few.
First, the recent U.K. white paper on digital harms proposes the creation of a digital regulatory body and offers a range of potential interventions with commercial organizations to regulate a whole spectrum of problems. The proposed model for the U.K. is to add a regulator agency for digital platforms that will help them develop specific codes of conduct to deal with child exploitation, hate propaganda, foreign election interference and other pernicious online harms.
Second, earlier this month, the Christchurch call to eliminate terrorist and violent extremist content online highlighted the need for effective enforcement, the application of ethical standards and appropriate co-operation.
Finally, just last week here in Canada, the government released a new proposal for an update to our federal commercial data protection law as well as an overarching digital charter meant to help protect privacy, counter misuse of data and help ensure companies are communicating clearly with users.
Underlying all these approaches is the need to adapt our laws to the new realities of our digitally interconnected world. There is a growing realization that the age of self-regulation has come to an end. The solution is not to get people to turn off their computers or to stop using social media, search engines, or other digital services. Many of these services meet real needs. Rather, the ultimate goal is to allow individuals to benefit from digital services—to socialize, learn and generally develop as persons—while remaining safe and confident that their privacy rights will be respected.
There are certain fundamental principles that I believe can guide government efforts to re-establish citizens' trust. Putting citizens and their rights at the centre of these discussions is vitally important, in my view, and legislators' work should focus on rights-based solutions.
In Canada, the starting point, in my view, should be to give the law a rights-based foundation worthy of privacy's quasi-constitutional status in this country. Many countries already take this approach: their laws frame privacy explicitly as a right, with practices and processes that support and enforce it.
I think Canada should continue to have a law that is technologically neutral and principles based. Having a law that is based on internationally recognized principles, such as those of the OECD, is important for the interoperability of the legislation. Adopting an international treaty for privacy and data protection would be an excellent idea, but in the meantime, countries should aim to develop interoperable laws.
We also need a rights-based statute, meaning a law that confers enforceable rights to individuals while also allowing for responsible innovation. Such a law would define privacy in its broadest and truest sense, such as freedom from unjustified surveillance, recognizing its value in correlation to other fundamental rights.
Privacy is not limited to consent, access and transparency. These are important mechanisms, but they do not define the right itself. Codifying the right, in its broadest sense, along the principles-based and technologically neutral nature of the current Canadian law would ensure it can endure over time, despite the certainty of technological developments.
One final point I wish to make has to do with independent oversight. Privacy cannot be protected without independent regulators and the power to impose fines and to verify compliance proactively to ensure organizations are truly accountable for the protection of information.
This last notion, demonstrable accountability, is a needed response to today's world, where business models are opaque and information flows are increasingly complex. Individuals are unlikely to file a complaint about a practice they are unaware of, even one that may harm them. This is why it is so important for the regulator to have the authority to proactively inspect the practices of organizations. Where consent is not practical or effective, a point made by many organizations in this day and age, and organizations are expected to fill the protective void through accountability, they must be required to demonstrate that accountability upon request.
What I have presented today as solutions are not new concepts, but as this committee takes a global approach to the problem of disinformation, it's also an opportunity for domestic actors—regulators, government officials and elected representatives—to recognize what best practices and solutions are emerging and to take action to protect our citizens, our rights, and our institutions.
Thank you. I look forward to your questions.
Ellen Weintraub
2019-05-28 15:55
Thank you, Mr. Chair and members of the committee.
I am the chair of the Federal Election Commission in the United States. I represent a bipartisan body, but the views that I'm going to express are entirely my own.
I'm going to shift the topic from privacy concerns to influence campaigns.
In March of this year, special counsel Robert S. Mueller III completed his report on the investigation into Russian interference in the 2016 presidential election. Its conclusions were chilling. The Russian government interfered in the 2016 presidential election in sweeping and systemic fashion. First, a Russian entity carried out a social media campaign that favoured one presidential candidate and disparaged the other. Second, a Russian intelligence service conducted computer intrusion operations against campaign entities, employees and volunteers, and then released stolen documents.
On April 26, 2019, at the Council on Foreign Relations, FBI director Christopher A. Wray warned of the aggressive, unabated, malign foreign influence campaign consisting of “the use of social media, fake news, propaganda, false personas, etc., to spin us up, pit us against each other, sow divisiveness and discord, and undermine Americans' faith in democracy. That is not just an election cycle threat; it's pretty much a 365-days-a-year threat. And that has absolutely continued.”
While he noted that “enormous strides have been made since 2016 by all different federal agencies, state and local election officials, the social media companies, etc.,” to protect the physical infrastructure of our elections, he said, “I think we recognize that our adversaries are going to keep adapting and upping their game. And so we're very much viewing 2018 as just kind of a dress rehearsal for the big show in 2020.”
Last week, at the House of Representatives, a representative of the Department of Homeland Security also emphasized that Russia and other foreign countries, including China and Iran, conducted influence activities in the 2018 mid-terms and messaging campaigns that targeted the United States to promote their strategic interests.
As you probably know, election administration in the United States is decentralized. It's handled at the state and local levels, so other officials in the United States are charged with protecting the physical infrastructure of our elections, the brick-and-mortar electoral apparatus run by state and local governments, and it is vital that they continue to do so.
However, from my seat on the Federal Election Commission, I work every day with another type of election infrastructure, the foundation of our democracy, the faith that citizens have that they know who's influencing our elections. That faith has been under malicious attack from our foreign foes through disinformation campaigns. That faith has been under assault by the corrupting influence of dark money that may be masking illegal foreign sources. That faith has been besieged by online political advertising from unknown sources. That faith has been damaged through cyber-attacks against political campaigns ill-equipped to defend themselves on their own.
That faith must be restored, but it cannot be restored by Silicon Valley. Rebuilding this part of our elections infrastructure is not something we can leave in the hands of the tech companies, the companies that built the platforms now being abused by our foreign rivals to attack our democracies.
In 2016, fake accounts originating in Russia generated content that was seen by 126 million Americans on Facebook, and another 20 million Americans on Instagram, for a total of 146 million Americans; and there were only 137 million voters in that election.
As recently as 2016, Facebook was accepting payment in rubles for political ads about the United States elections.
As recently as last year, in October 2018, journalists posing as every member of the United States Senate tried to place ads in their names on Facebook. Facebook accepted them all.
Therefore, when the guys on the other panel keep telling us they've got this, we know they don't.
By the way, I also invited Mark Zuckerberg and Jack Dorsey, all those guys, to come and testify at a hearing at my commission when we were talking about Internet disclosure of advertising, and once again, they didn't show up. They didn't even send a surrogate that time; they just sent us written comments, so I feel for you guys.
This is plainly really important to all of us. In the United States, spending on digital political ads went up 260% from 2014 to 2018, from one mid-term election to the next, for a total of $900 million in digital advertising in the 2018 election. That was still less than was spent on broadcast advertising, but obviously digital is the wave of the future when it comes to political advertising.
There have been constructive suggestions and proposals in the United States to try to address this: the Honest Ads Act, which would subject Internet ads to the same rules as broadcast ads; the DISCLOSE Act, which would broaden transparency and fight against dark money; and at my own agency I've been trying to advance a rule that would improve disclaimers on Internet advertising. All of those efforts so far have been stymied.
Now, we have been actually fortunate that the platforms have tried to do something. They have tried to step up, in part, I'm sure, to try to ward off regulation, but in part to respond to widespread dissatisfaction with the information and the disclosure they were providing. They have been improving, in the United States at least, the way they disclose who's behind their ads, but it's not enough. Questions keep coming up, such as about what triggers the requirement to post the disclaimer.
Can the disclaimers be relied upon to honestly identify the sources of the digital ads? Based on the study about the 100 senators ads, apparently they cannot, not all the time, anyway. Does the identifying information travel with the content when information is forwarded? How are the platforms dealing with the transmission of encrypted information? Peer-to-peer communication represents a burgeoning field for political activity, and it raises a whole new set of potential issues. Whatever measures are adopted today run the serious risk of targeting the problems of the last cycle, not the next one, and we know that our adversaries are constantly upping their game, as I said, and constantly improvising and changing their strategies.
I also have serious concerns about the risks of foreign money creeping into our election system, particularly through corporate sources. This is not a hypothetical concern. We recently closed an enforcement case that involved foreign nationals who managed to funnel $1.3 million into the coffers of a super PAC in the 2016 election. This is just one way that foreign nationals are making their presence and influence felt even at the highest levels of our political campaigns.
These kinds of cases are increasingly common, and these kinds of complaints are increasingly common in the United States. From September 2016 to April 2019, the number of matters before the commission that include alleged violations of the foreign national ban increased from 14 to 40, and there were 32 open matters as of April 1 of this year. This is again an ongoing concern when it comes to foreign influence.
As everything you've heard today demonstrates, serious thought has to be given to the impact of social media on our democracy. Facebook's originating philosophy of “move fast and break things”, cooked up 16 years ago in a college dorm room, has breathtaking consequences when the thing they're breaking could be our democracies themselves.
Facebook, Twitter, Google, these and other technology giants have revolutionized the way we access information and communicate with each other. Social media has the power to foster citizen activism, virally spread disinformation or hate speech and shape political discourse.
Government cannot avoid its responsibility to scrutinize this impact. That's why I so welcome the activities of this committee and appreciate very much everything you're doing, which has carry-over effects in my country: even when we can't adopt our own regulations, it helps us when you adopt regulations in other countries, because sometimes the platforms maintain the same policies throughout the world. Thank you very much.
Also, thank you very much for inviting me to participate in this event. I welcome your questions.
Damian Collins
2019-05-28 16:04
Thank you. My first question is for Ellen Weintraub.
You mentioned dark money in your opening statement. How concerned are you about the ability of campaigns to use technology, particularly blockchain technology, to launder impermissible donations to campaigns by turning them into multiple, small micro-donations?
Ellen Weintraub
2019-05-28 16:05
I'm very concerned about it, in part because our entire system of regulation is based on the assumption that large sums of money are what we need to worry about and that this is where we should focus our regulatory activity. On the Internet, however, sometimes very small amounts of money can be used to have vast impact, and that doesn't even get into the possibility of Bitcoin and other technologies being used to entirely mask where the money is coming from.
So yes, I have deep concerns.
Damian Collins
2019-05-28 16:05
Have there been any particular examples that have come to the awareness of your commission?
Ellen Weintraub
2019-05-28 16:05
The problem with dark money is that you never really know who is behind it. There has been about a billion dollars in dark money spent on our elections in the last 10 years, and I cannot tell you who is behind it. That's the nature of the darkness.