Joël Lightbound
Lib. (QC)
Thank you.
I have one last question. The last time you appeared before this committee, you said that people from China or Hong Kong were getting arrested for social media posts.
Do you have more to tell us about it? Do you have any more information about the kinds of arrests? We'd like that information to be in the report.
Chemi Lhamo
2021-05-31 19:15
I'm sure you'll get more information in the testimony that follows from the Hong Kongers, because there have been many arrests of Hong Kongers.
For Tibetans, Tashi Wangchuk is a case that many people know about. He is a language rights activist. He was actually just a shopkeeper in China, and he was imprisoned for five years simply because he wanted his niece and nephew to learn Tibetan at their school. As I said in my opening remarks, all of this is being erased for Tibetans.
Cherie Wong
2021-05-31 20:22
Social media has always been a tool of authoritarian regimes, which are able to use bots to spread misinformation. This is where, as you said, we should be doing this kind of education, because everyday Canadians are the individuals who can choose between a product made with slave labour and one made without it. They are the people who are going to be investing, and they should know whether or not the companies are associated with foreign actors.
Jean Yip
Lib. (ON)
Do Facebook and other social media platforms alert CSE at the same time as the public, then?
Shelly Bruce
2021-04-12 20:23
It takes a very broad group of different players in the private sector and in government to monitor all of this space and to work together. Facebook plays a very specific role, and when it takes down these kinds of campaigns, we learn about it usually at the same time as everybody else.
Jean Yip
Lib. (ON)
Shelly Bruce
2021-04-12 20:24
CSE is not a regulator. We do not comment, endorse or ban specific technologies or specific companies, but we publish advice that helps Canadians to choose wisely and to understand how an app works, where their data resides, how to turn on the security features, how to update those apps when prompted and how to delete them when they're no longer used.
We do not comment specifically on apps, but we encourage very safe and responsible use and the best cyber-hygiene.
Christopher Parsons
2021-03-22 19:53
Thank you.
Good evening. My name is Christopher Parsons. As mentioned, I'm a senior research associate at the Citizen Lab. I appear before this committee in a professional capacity, representing my own views and those of the Citizen Lab. My comments are based on our research into Chinese technology companies. The Citizen Lab is an academic institution, and our work operates at the intersection of technology and human rights.
In my time today, I want to point to some of the ways by which we can develop trust in the products and services that are manufactured in, transited through or operated from China. I do so by first turning to the issue of supply chain dependencies.
A rising concern is the extent to which Canadian companies, such as our telecoms, might become dependent on products made by Chinese companies, inclusive of Huawei. Dependency runs the risk of generating monocultures, or cases in which a single company dominates a Canadian organization's infrastructure. In such cases, three risks can arise.
First, monocultures can enable foreign governments to leverage dependencies on a vendor to apply pressure in diplomatic, trade or defence negotiations. Second, monocultures can create a path dependency, especially in 5G telecommunications environments, where there's often a degree of vendor lock-in into vendors' telecom equipment. Third, monocultures risk hindering competition among telecommunications vendors, to the effect of increasing capital costs to Canadian telecommunications providers.
All of these challenges can in part be mitigated by requiring diversity in Canadian telecommunications companies' networks, as has been recommended in the past by CSE's deputy chief of information technology security, Scott Jones. In this case, trust would come from not placing absolute trust in any single infrastructure vendor.
I now turn to building trust in software and hardware systems more generally. Software and hardware errors are often incidentally placed into digital systems. Some errors are egregious, such as including old and known vulnerable code in a piece of software. Others are more akin to spelling or grammar errors, such as failing to properly delimit a block of code. There are also limited situations where state agencies compel private companies to inject vulnerabilities into their products or services to enable espionage or attack operations.
No single policy can alleviate all of the risks posed by vulnerabilities. However, some can enhance trust by reducing the prevalence of incidental vulnerabilities and raising the cost of deliberately injecting vulnerabilities into digital systems. Some of these trust-enhancing policies include, first, requiring companies to provide a bill of goods that declares their products' software libraries and dependencies, as well as their versions. This would help ensure that known deficient code isn't placed in critical infrastructure and also help responders identify vulnerable systems upon any later discovery of vulnerabilities in the libraries or dependencies.
Second, Canada and its allies can improve on existing critical infrastructure assessments by building assessment centres that complement the U.K.'s, which presently assesses Huawei equipment. Working collectively with our allies, we'd be better able to find incidental vulnerabilities while raising the likelihood of discovering state adversaries' attempts to deliberately slip vulnerabilities into systems' codebases.
Third, Canada could adopt robust policies and processes to ensure that government agencies disclose vulnerabilities in critical infrastructure to appropriate vendors and communities, as opposed to potentially secretly hoarding them for signals intelligence or cyber-operations.
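The "bill of goods" in the first recommendation above is what industry now calls a software bill of materials (SBOM): a declaration of a product's libraries, dependencies, and their versions. A minimal sketch of why this helps incident responders follows; the product name, component versions, and the advisory identifier are all hypothetical, invented for illustration only.

```python
# Minimal sketch: matching a declared bill of materials against
# later-disclosed vulnerability advisories. All names and versions
# below are hypothetical examples, not real products or CVEs.

# An SBOM declares each product's libraries, dependencies, and versions.
sbom = {
    "product": "example-router-firmware",
    "components": [
        {"name": "openssl", "version": "1.0.2k"},
        {"name": "zlib", "version": "1.2.11"},
        {"name": "busybox", "version": "1.31.0"},
    ],
}

# When a vulnerability is later disclosed in a library, responders can
# check deployed SBOMs against the advisory instead of auditing every
# system by hand.
advisories = {
    ("openssl", "1.0.2k"): "CVE-XXXX-YYYY (hypothetical)",
}

def affected_components(sbom, advisories):
    """Return this product's components that appear in known advisories."""
    hits = []
    for comp in sbom["components"]:
        key = (comp["name"], comp["version"])
        if key in advisories:
            hits.append((comp["name"], comp["version"], advisories[key]))
    return hits

print(affected_components(sbom, advisories))
```

The lookup is deliberately trivial; the point is that the matching is only possible at all because the versions were declared up front, before any vulnerability was known.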
I will now briefly turn to increasing trust in Chinese social media platforms. Citizen Lab research has shown that WeChat has previously placed Canadians' communications under political surveillance to subsequently develop censor lists that are applied to China-registered WeChat accounts. Our research into TikTok, released today, revealed there's no apparent or overt political censorship or untoward surveillance of Canadians' communications on that platform.
Based on our findings, we suggest that social media companies be required to publish more information on their activities to enhance trust. This would include publishing detailed content moderation guides, publishing how and why companies engage in monitoring and censoring behaviours, publishing how organizations interact with government agencies and address their corresponding demands, and publishing annual transparency reports that detail the regularity and effects of state and non-state actors who make requests for users' data.
Platforms could also be compelled to make available algorithms for government audit where there is reason to suspect they're being used to block or suppress lawful communications in Canada or where they're being used to facilitate influence operations. Platforms could also be compelled to disclose when user data flows through or is accessible by parts of their organizations that have problematic human rights, data protection or rule of law histories.
To conclude, we at the Citizen Lab believe that the aforementioned sets of recommendations would ameliorate some of the cyber-related risks linked with the Chinese supply chain management issue, and social media platform issues more broadly. However, we also believe these policies should be applied in a vendor- and country-agnostic way to broadly improve trust in digital systems.
I would just note to the committee that the brief we have submitted provides additional details and recommendations, especially as applied to Internet standards, which I have not gone into here.
Thank you for your time, and I look forward to your questions.
Jean Yip
Lib. (ON)
In regard to your recommendations on Chinese social media, are there any western social media platforms that operate at this level of transparency, and if so, what actions were taken by the governments to have them give up this information?
Christopher Parsons
2021-03-22 20:09
There aren't currently any in North America that adhere to all of the recommendations we have. We are certainly trying to advocate for increasing trust writ large, so not just in Chinese social media but also companies that we're very familiar with, such as Facebook, Twitter and the rest.
There are some elements on which we're seeing movement in North America. As an example, we have more robust transparency reports than are available in other jurisdictions. Facebook and others do disclose their lawful access handbooks, which are quite useful and quite accessible. However, we don't have things like algorithmic transparency or accountability, nor do we necessarily have awareness of how companies interpret the law, which is almost more important than anything else, because how a company interprets the law and how the law is written are often not one to one.
Janis Sarts
2021-03-22 20:13
Thank you for the question.
We've been looking at this. Most of the social media companies that we use in everyday life have become the agora for the democratic process; most elections actually play out on these platforms. We've detected that most (basically all) of those platforms can be manipulated by robotic networks to push messages and game the algorithms, including during election processes, to advance particular interests, including those of hostile actors.
We've been measuring, every year, how well the platforms do at removing these robotic networks, and the results have been very disappointing. Back in 2019, during the European Parliament elections, we bought 55,000 different actions through robotic accounts on social media (with neutral content, of course) for 300 euros. During the EU parliamentary elections, 90% of that got delivered.
We repeated the same experiment during the U.S. presidential election, once again in a neutral manner. We were able to buy likes, shares, views, custom-made comments and all of that, but this time 300,000 actions for $300. About 70% of that got through. Basically, there was an option for outside actors to influence the discourse.
Most of the companies were incapable of eradicating that process. If I had to measure the companies, typically Twitter is the best at it. Facebook is less so. Last year, we measured TikTok for the first time. TikTok is basically defenceless. You can do any gaming of that system that you wish. Of course, the more potential electors there are out there, the more malign things can be happening.
Clearly, that goes back to Mr. Parsons's point that there is no way to oversee what the social media companies are doing. They're declaring great success, but when we turn to the vendors of these manipulations, it's cheap, available and effective. We have to have oversight to make sure that it is neither simple nor easy.
Thank you.
Janis Sarts
2021-03-22 20:20
First, public awareness campaigns are important, because if we don't do them, then we are even more vulnerable. But on the particular issue, obviously part of the defence is within the social media companies.
In our assessment, they are not doing [Technical difficulty—Editor] public discourses that are happening on these platforms, and therefore some kind of regulatory framework on our side would be necessary.
Janis Sarts
2021-03-22 20:21
Actually, it is very simple. You can create algorithms that see these things for what they are. For instance, when we buy these robotic account effects, we can see the accounts, and we report those accounts to Facebook or Twitter.
Most of the time, their algorithms don't detect them. It's just a matter of the platforms upgrading their algorithms and being better at their jobs. That is not the case at this point.