Good afternoon, Mr. Chair and members of the committee. I am Michele Mosca, a professor of mathematics and cryptography at the University of Waterloo in the Institute for Quantum Computing.
It's an honour to be speaking to you today.
When I started my research career at Waterloo and Oxford, I believed my fields would have important implications for the world and offer Canada great economic opportunities, though decades in the future. A quarter century later, it's showtime.
Of course, Canada should proactively seize the great opportunity for economic prosperity created by the decades of work and billions of dollars that we've invested in making Canada a world leader in quantum technologies. However, before we unleash all the wonderful powers of quantum technologies, we have the responsibility to first prepare ourselves to be safe in a world with these technologies. Right now, we are tremendously and dangerously vulnerable. I'll explain briefly what I mean.
First, our economy depends on digital technologies, and their security relies fundamentally on cryptography. Cryptography is perhaps best known for providing confidentiality, which is critical for financial transactions and protecting intellectual property. Cryptography is also what allows our devices to know whom to trust when we engage in transactions on the Internet. For example, you want to make sure you're downloading legitimate software updates and not malware. If you're transferring money to your bank, you want to know that's really your bank and not someone pretending to be your bank. Robust cryptography is absolutely necessary for the proper functioning of our digital economy, which now is pretty much synonymous with our economy.
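The trust checks described here (knowing a software update is legitimate) rest on cryptographic primitives. As a simplified illustration only — real update systems verify digital signatures rather than bare hashes, and the function name here is invented for the sketch — an integrity check on a downloaded file might look like this in Python:

```python
import hashlib

def verify_download(data: bytes, expected_sha256_hex: str) -> bool:
    """Compare a downloaded file against a digest published out of band.

    This is a stand-in for the signature verification real update
    systems perform; it only shows why cryptography underpins trust:
    any tampering with the bytes changes the digest and the check fails.
    """
    return hashlib.sha256(data).hexdigest() == expected_sha256_hex
```

If the cryptography itself is broken — as quantum attacks threaten for today's public-key signatures — no amount of careful checking downstream restores that trust.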
I'll explain in a minute how quantum computing seriously threatens all of this, but first let me point out one of the biggest challenges. Because the threat may be 10 or more years in the future, there's a natural human tendency to simply ignore it for now. But procrastinating any further and managing this as a crisis will have devastating consequences for our safety and our economy.
First, it will take more than a decade to prepare our economy and our critical systems to be resilient to quantum attacks. This is a very fundamental retooling. We're not talking about patch management and bad passwords. There's no quick remediation and fix. We're talking about systemic collapse with, again, no remediation in place.
Second, a loss of confidence in our cyber-resilience and the economic impact of that may happen much sooner, even in the next two to five years, as key quantum computing milestones are achieved. The quantum threat itself is simple. We don't need Schrödinger's equation to understand it. A quantum computer is a powerful new type of computer that will be able to perform previously impossible calculations. However, it will also decimate today's cryptography, which of course must be dealt with in order for the advent of the quantum computer to be a positive milestone in Canadian history—not just in Canadian history, but in human history.
The impact on our financial industry and economy will include the following: first, a direct attack on the financial services sector—money stolen, legitimate activities impeded, loss of confidence in the Canadian financial sector; second, cyber-attacks on other sectors driving our economy, where much of our money is invested—most importantly, critical infrastructure such as government services, power and other utilities, transportation systems and smart cities; third, theft of strategic intellectual property that is protected by quantum-vulnerable cryptography; and fourth, disruption of Canadian jobs, today's and tomorrow's, that produce or rely on technologies that are not resilient to quantum attacks and don't have a plan to become quantum-safe.
These are four distinct and very serious risks to the financial services sector and our economy as a whole.
We know what the threat is, and we have a good idea of the tools we'll need and how to use them to protect against those four risks to our economy. But this is not an academic exercise. This is where our species does not always shine, because we have to work together across multiple departments and multiple sectors. None of us can do this on our own, and we have to work proactively to get the job done, starting as soon as possible.
It's very challenging, very hard, but the potential silver lining for Canadians at least is that Canada is actually a world leader in quantum science, in cryptography, in quantum-safe cryptography (by which I mean cryptography designed to be safe against quantum attacks), in cybersecurity and in financial services. This is our opportunity to lose, basically.
Given our stature and resources, we should be able to move relatively quickly to deploy new quantum-safe tools and to develop the workforce needed to do the work.
If managed proactively, the quantum threat can be turned into great economic opportunities for Canada. We know how to make ourselves quantum-safe, and we can do that and then export our quantum-safe tools and know-how abroad.
On the other hand, if managed reactively, if we choose to do that—which is human nature—we'll be susceptible to quantum attacks. We'll also be susceptible to mundane attacks, the everyday attacks we see today that simply exploit the mistakes intrinsic in a rushed crisis response, and we'll be importing potentially backdoored implementations of our own innovations. That's what will happen if we manage this reactively. Not responding proactively means that new opportunities that we've invested in over decades will be lost, and much of our existing economy will be at risk.
In closing, our recommendations to the committee are as follows.
First, please urge the government to move quickly to put in place the elements needed for Canada to become quantum-safe from a technology and human resources perspective, in particular including support for targeted research into quantum-safe cryptography, the rollout of a Canadian quantum key distribution network—a Canadian invention, by the way—via satellite and fibre systems, and the creation of a robust pipeline of expertise in quantum-safe cybersecurity.
Second, please urge the government to use the policy levers at its disposal, including approval, planning, procurement and funding powers, to ensure that new digitally enabled infrastructure is designed and built to be quantum-safe, rather than left to be decimated as quantum computers become available. In other words, let's create a pull for the technology and workforce needed to make Canada and the world quantum-safe.
Third, to make all this work, given the broad multisectoral, proactive effort needed—again, no one entity can pull this off on its own—please urge the government to provide suitable funding to a not-for-profit entity such as ours, Quantum-Safe Canada, to help co-ordinate the multi-faceted work needed for Canada to implement a robust quantum-safe strategy.
Thank you for listening.
I'd like to give my colleague Brian O'Higgins the rest of the time to say a few words. He is the chair of Quantum-Safe Canada and a world-renowned cryptographer and security entrepreneur.
My name is Christopher Parsons. I am a Research Associate at the Citizen Lab, which is part of the Munk School of Global Affairs and Public Policy at the University of Toronto. I appear at this committee in a professional capacity that represents my views and those of the Citizen Lab.
My comments today focus on a range of security practices that, if adopted, would mitigate some of the contemporary risks that participants in the financial sector face.
Canadian government agencies, private businesses and financial institutions, as well as private individuals, rely on common computing infrastructures. We use the same iPhone and Android operating systems, the same customer service interfaces and e-commerce platforms, the same underlying code bases and largely identical third party cloud computing infrastructures.
The shared nature of these platforms means that efficiencies can be leveraged to improve productivity, but these benefits are predicated on the overall security of the shared products. To be blunt, the state of computer insecurity is profound, and a large number of vulnerabilities in these shared products, writ large, threaten the financial sector to the detriment of Canada's national security interests.
In my remaining time, I want to point to four issues in particular that I believe need to be taken up to ensure that Canada's national interests are better secured in the future than they are today. These issues include the need for Canada to formally establish a responsible national encryption policy, update Canada's vulnerability equities programs, develop a vulnerability disclosure program framework and promote two-factor authentication.
I now turn to the issue of responsible encryption policies. Given the state of computer insecurity, it is imperative that the Government of Canada adopt and advocate for responsible encryption policies. Such policies entail commitments to preserving the rights of all groups in Canada to use computer software using strong encryption.
Strong encryption can be loosely defined as encryption algorithms for which no weaknesses or vulnerabilities are known or have been injected, as well as computer applications that do not deliberately contain weaknesses designed to undermine the effectiveness of the aforementioned algorithms.
The benefits of strong encryption cannot be overstated. In a technological environment marked by high financial stakes, deep interdependence and extraordinary complexity, ensuring digital security is of critical importance and extremely difficult. The cost of a security breach, theft or loss of customer data or corporate data can have devastating impacts for the private sector and individuals' interests. Any weakening of the very systems that protect against these threats would represent irresponsible policy-making. Access to strong encryption encourages customer confidence that the technology they use is safe.
It is important to recognize that there are risks to the availability of strong encryption. As an example, one of Canada's closest allies, Australia, has adopted irresponsible encryption policies, which may introduce systemic vulnerabilities into code used by the financial sector, as well as other sectors of the economy. Once introduced, such vulnerabilities may be exploited by actors holding adversarial interests toward Canada or Canadian interests. Threat activities might be carried out against the SWIFT network, as just one of many examples, should any element of that network rely on cryptographic products made vulnerable by Australian demands.
Furthermore, strong encryption prevents our closest allies from monitoring Canada's financial activities beyond the above-the-board processes associated with a program such as FINTRAC.
As an example, The Globe and Mail revealed that the United States' National Security Agency was monitoring the Royal Bank of Canada's virtual private network tunnels. The story suggested that the NSA's activities could be a preliminary step in broader efforts "to identify, study and, if deemed necessary, 'exploit' organizations' internal communication networks."
In light of these kinds of threats, we would suggest that the Government of Canada adopt a responsible encryption policy. Such a policy would entail a firm and perhaps legislative commitment to require that all sectors of the economy have access to strong encryption products, and it would also stand in opposition to irresponsible encryption policies, such as those calling for back doors.
I now turn to the management of computer vulnerabilities of the Government of Canada itself. Vulnerabilities in computer code are acquired by Canada's Communications Security Establishment, or CSE. Thereafter, the CSE determines whether to retain or disclose the vulnerabilities. The CSE is motivated to retain vulnerabilities to obtain access to foreign systems as part of its signals intelligence mandate and also to disclose certain vulnerabilities to better secure government systems.
To date, the CSE has declined to make public the specific processes by which it weighs the equities in retaining or disclosing vulnerabilities. In contrast, the United States publishes how all federal government agencies evaluate whether to retain or disclose the existence of a vulnerability.
CSE's stockpiles of vulnerabilities could potentially be uncovered and used by adversaries, and this has happened to both the United States' National Security Agency and the Central Intelligence Agency. Such leaks can cost billions in direct economic damage.
The ongoing presence of these stockpiles, and the lack of clarity concerning which vulnerabilities are retained, have reduced the confidence of businesses and private individuals in the reliability and security of products needed to enhance Canada's economic efficiency and productivity, and have prospectively slowed Canadians' adoption of contemporary and next-generation software platforms and infrastructure.
To alleviate these concerns, we would suggest that the Canadian government publicize its existing vulnerabilities equities programs and hold consultations on their effectiveness in protecting the software and hardware that is used in the course of financial activities. Furthermore, the government could include the business community and civil society stakeholders in the existing, or reformed, vulnerabilities equities programs. Including these stakeholders would encourage heightened disclosures of vulnerabilities and thus improve the availability of well-written software and reduce threats faced by the financial sector.
Now, it is also important to recognize that security researchers routinely discover vulnerabilities in hardware and software that are used in all walks of life, including in the financial sector. Relatively few organizations, however, have explicit procedures that guide researchers in how to responsibly disclose vulnerabilities to the affected companies. Disclosing computer insecurities absent a vulnerability disclosure program can lead companies to inappropriately threaten white hat security researchers with litigation. That potential reduces researchers' willingness to disclose such vulnerabilities.
Beyond studying the laws around unauthorized access to computer code, I would recommend that this committee, and this government, create a draft policy for the financial sector companies to adopt. Such a disclosure policy should establish to whom vulnerabilities are reported, how reports are treated internally and how long it takes for the vulnerability to be remediated. It should also insulate security researchers from legal liability, so long as they do not publicly disclose the vulnerability ahead of the established delimited period of time. Moreover, the government should move to develop and adopt a similar disclosure program for its own departments so that the government can benefit from researchers reporting vulnerabilities in government systems.
Finally, I turn to the topic of two-factor authentication, or 2FA, which refers to an individual being in possession of at least two factors to obtain access to their accounts. The factors most typically used for authentication include something that you know, such as a PIN or a password; something that you have, such as a hardware token or a software token; or something that you are, such as a biometric like a fingerprint or an iris scan. These multiple factors mean that losing a log-in and password pair does not necessarily enable third parties to access a protected system or data store.
It is important for customer-facing systems to have strong 2FA to preclude unauthorized parties from obtaining access to personal financial accounts. Such access can reveal whether persons are vulnerable to targeting by foreign adversaries for espionage recruitment, can be used to cause personal financial chaos designed to distract a person while a separate cyber-activity is undertaken, or can be used to direct money to parties on terrorist or criminal watch lists.
Admittedly, some Canadian financial institutions do offer 2FA but often default to a weak mode of second-factor authentication that relies on SMS or text messages. This is problematic, because SMS is a weak communications medium and can easily be subverted by a variety of means. It is for this reason that entities such as the National Institute of Standards and Technology in the United States no longer recommend SMS as a two-factor authentication channel.
To improve the security of customer-facing accounts, I would recommend that financial institutions be required to offer 2FA to all clients, and that the 2FA utilize hardware and/or software tokens. Implementing this recommendation would reduce the likelihood that unauthorized parties can obtain access to accounts for the purposes of recruitment or disruption activities.
To conclude, Canadian businesses and private individuals rely on digital tools for all aspects of their lives, including activities that intersect the financial sector. To be clear, the proposals I have outlined will not solve all of the computer insecurity problems that threaten Canada's national security interests and the financial sector, but we believe these proposals do represent a good effort in resolving the most basic threats and would also serve to build trust in the security of our digital tools and the governance of security.
Thank you for your time. I look forward to your questions.
I should also mention our colleagues at SERENE-RISC. Their driving force, the head of SERENE-RISC, is on our governing board as well. That's another venue with a number of workshops that try to bring together these various stakeholders.
Organizations like SERENE-RISC and ours are the few that actually step up to do more than just focus on.... The thing with cybersecurity is that we're all over-employed. We're super busy. For everything we choose to do, there's something else that's really important we're choosing not to do. We're not bored. It's not that we don't have anything to do and so we think maybe we can address this quantum threat. We're way too busy with too many things. There needs to be some encouragement. The thankless work that Benoit and the SERENE-RISC network do, for example.... They hardly get any money and they still do amazing work. I think these people need to be encouraged, thanked and supported.
Part of it is funding. We say “funding”, but when you're a professor and you ask for funding, people assume you want more undirected research money. Canada's already great at that. I'm talking about very focused, mission-oriented support to achieve these very important objectives for Canada, and working backwards from there.
There is a small, committed group of people across Canada who would help with that. They need to be proactively encouraged to do this. Right now, what they're told is that they have to keep advocating, but they don't have time and resources to do this. We, as a country, need to recognize the value they bring to us, the citizens, and tell them to keep up the great work and help them do more.
I also think there are not enough of us. Another thing we need, as part of developing the brain trust, is the intellectual capital and the workforce needed for Canada to even survive in the cyber world a decade from now. We're way behind. Two to five years ago, looking ahead a decade, I said that there's no way we're going to have a fighting chance if we don't have 20 new positions targeted in cybersecurity, with at least five of those in the social and human sciences, because that's a really important part of this equation.
Of course, now the number I see is 50. Our friends in Germany were talking about 50 faculty positions in applied cybersecurity at Saarbrücken, and I don't know how many more at the new Max Planck Institute. We're talking about over 200 serious faculty positions in this targeted area, because it's really important to their economy and security. In Canada, there are zero—not even a CERC, or a Canada 150, nothing. I think there's a huge catch-up there to build up our brain trust in these targeted areas.
I think that, currently, there are challenges within the Five Eyes countries: Canada, the U.S., New Zealand, Australia and the U.K.
The United States, outside of its law enforcement discussions, has showcased a strong desire to support strong encryption. The National Security Agency, the Central Intelligence Agency and all parties outside of the FBI, actually, are strong advocates for unvarnished, strong encryption for intelligence purposes, because they need it themselves in order to efficiently conduct their business. So I think we can turn to our ally to the south to actually derive some inspiration from their intelligence services.
With regard to vulnerability disclosure programs, certain companies have good models for this. HackerOne, a United States firm, has worked with the Department of Defense, and legislation has recently been discussed, if not quite passed, that would also authorize vulnerability disclosure programs covering the State Department.
I think that's how it works on the government side. I think it's a good, strong initiative, and it's leading to substantive patches of major vulnerabilities. You're also seeing, through HackerOne, a large volume of private companies slowly move towards more holistic disclosure programs. In both cases, it means that the infrastructure of government and of private business is secured, and it's often done at a low cost.
I share your enthusiasm for identifying challenges in a sector that is so unknown to us. This is Quantum-Safe Canada's area of expertise, so I'm going to tell you what I think, and you can correct me if I'm wrong.
You consider the threat to be very serious, and it is clear that Canada is at the back of the pack as far as its ability to defend against outside threats is concerned. The threat is not exaggerated per se, but is certainly more serious than people in general realize.
The solutions you are proposing focus on mechanics, techniques and technology. Given your extensive expertise, we can assume those solutions address the problem that lies before us. I don't necessarily think the threat has been exaggerated, but I do think the level of confidence in the proposed solutions is very high. The more, however, we talk about the technical dimension, the less we consider one specific element. I'm talking about the only risk you have no control over: the human element. No one has been able to come up with a satisfactory solution to that problem thus far.
Even if you have the best, most ironclad system in the world, the unpredictability of the human element makes it impossible to control the situation. The system can fall apart like a house of cards, because of the psychological element, or social engineering. I don't think, though, that AI is the way to manage the human risk. I'd like to hear your thoughts on that.
Thank you for the question.
You're absolutely correct. The human factor is one of the greatest, if not the greatest, vulnerabilities, and that's not going to fundamentally change. New mathematics, quantum entanglement, is not going to change our fallibility as humans and our corruptibility as humans, but good cryptography does reduce our dependence on trustworthy individuals. We still need some, but it reduces our dependence, which is a really important thing.
Second, the vulnerabilities intrinsic in human mistakes and human compromise tend to be more ephemeral and fixable. If there is a corrupt individual, if somebody uses a bad password or clicks on something they shouldn't click, you detect and you remediate. That's sort of at the top of the stack in terms of stuff that's hurting.... It's very common. It's not going away, but we have a fighting chance if we adopt better discipline and better detection mechanisms and, again, reduce our dependence on smart—not smart; we're all smart—but on people who are not making mistakes, because of course we're going to make mistakes. We can reduce that vulnerability, but not to zero.
Further down the stack, for broken crypto, there is no quick remediation there.
You're absolutely right—you can't just deal with one solution in isolation, because it's the whole ecosystem that works together. Definitely that's why I wanted to advocate for these 20 senior research chairs for Canada. Now it's 50, because we have to catch up. About a quarter of those need to be in the social and human sciences to help us get around the best way to handle all those aspects.
I'm going to be sharing my time with my colleague Steve Masnyk from SkyBridge Strategies.
My name is Normand Lafrenière, and I am the President of the Canadian Association of Mutual Insurance Companies, or CAMIC for short.
CAMIC represents 79 mutual insurance companies across Canada that insure people's cars, homes, farms and businesses.
Mutual insurers were formed over a period of 100 years, beginning in 1836. They were formed because farmers could not find farm insurance or find it at a fair price.
Mutual insurers are owned by their policyholders. There are no stockholders or share capital, and they aren't on the stock market. Policyholders elect their company's board of directors and vote on the major orientations taken by their company.
The premiums of the many serve to pay the losses of the few. When a profit is generated, that profit is transferred to the surplus of the company to be better able to pay future claims, is refunded to the members or is used for the betterment of the community.
Canadian mutual insurers have formed two mutual reinsurance companies—their own reinsurers—to share risks amongst mutual insurers and access reinsurance in the international market.
They have also created guarantee funds to fully compensate policyholders should an insolvency occur. In passing, I would like to mention that, over the past 60 years—ever since guarantee funds have been in place—no mutual insurance company has gone under.
Today, CAMIC member companies have a 15% market share of the non-governmental Canadian property and casualty insurance market. Being especially present in rural Canada, mutual insurers insure 75% of Canadian farms.
We are here today to address the issues of cyber-risks and threats to the financial system in Canada and, in particular, how open banking could possibly increase the risk of cyber-attacks.
Generally speaking, the insurance sector is not a likely target of cyber-hacking. Apart from insureds' credit card and debit card numbers, mutual insurers generally keep very little information of interest to cyber-hackers.
We do, however, have serious concerns about the discussion at hand today, especially as it pertains to open banking. This is a concept that began in Europe, the U.K., Australia and Japan. The concept was put in place only recently in those jurisdictions, so there is very little evidence yet on how well it is or is not working.
We can, however, offer thoughts about the discussion points raised by the government when it began its recent open banking consultation.
CAMIC is particularly concerned that the open banking concept will undermine the long-standing prohibition barring banks from engaging in the insurance sector. This long-standing prohibition, supported by governments of all stripes, is in place to protect consumers of insurance from credit-granting institutions coercing them into buying an insurance product that is not appropriate for them. We hope that any open banking framework would not undermine this legislative prohibition.
I would now like to ask my colleague, Steve Masnyk, to touch on other concerns related to open banking and the cyber risks.
Thank you, Mr. Lafrenière.
Thank you, Mr. Chair. Good afternoon, committee members.
I'm not sure if this little diagram has been distributed to everybody. You may have it in front of you. I hope it will guide the discussion, because rather than my talking in the abstract, the concept is a bit easier to understand once you have the diagram in front of you.
I'd like to explain the concept of open banking and the cyber risks it poses to the Canadian financial services sector. I'm sure that many members are not aware of what open banking is all about.
It's a concept where a consumer can request that all their data held by their bank—their chequing account, credit card transactions, debit card transactions, investments, RRSPs, mortgage, insurance or any other loan—be transferred to third parties who are in financial services. By third parties, we mean financial technology firms, also known as fintechs.
These fintechs will then be able to underwrite you a financial service product that you may or may not already have, based on the banking data your bank has about you. This transfer would happen via a middleman called an API, which stands for application programming interface.
APIs are essentially platforms or apps that act as a conduit among the customer, the bank's data and all the fintech entities they're associated with. Once a customer submits a request authorizing an API to gather and disseminate their data from their bank, the API follows through and disseminates the data to the fintechs affiliated with it.
The fintechs would have your banking history and, using this data, underwrite you a product to outbid something you already have, or offer you something you do not yet have. Based on the data, they would pretty much know everything about you: what products you have, what products you don't have and what products you might need.
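The consent chain just described — a customer authorizing an API intermediary, which then shares bank data with affiliated fintechs — can be sketched in a few lines. This is a purely hypothetical model for discussion (the class and method names are invented, not any real open banking framework), and it makes concrete one of the questions that follows: what a revocation of consent should actually do.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    """Hypothetical record of which fintechs a customer has authorized.

    Tracks grants only; it deliberately leaves open the harder policy
    questions (what happens to data already disseminated, who audits
    the registry, which privacy regime governs it).
    """
    grants: dict[str, set[str]] = field(default_factory=dict)

    def authorize(self, customer: str, fintech: str) -> None:
        # Customer consents to the API sharing their bank data.
        self.grants.setdefault(customer, set()).add(fintech)

    def revoke(self, customer: str, fintech: str) -> None:
        # Consent withdrawn: future sharing stops, but note that
        # data already transferred is beyond this registry's reach.
        self.grants.get(customer, set()).discard(fintech)

    def may_share(self, customer: str, fintech: str) -> bool:
        # The API must check this before each dissemination.
        return fintech in self.grants.get(customer, set())
```

Even this toy model shows why the committee's questions matter: the registry can stop future sharing on revocation, but nothing in the mechanism itself recalls data a fintech already holds.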
This is the essence of the concept of open banking. As you can imagine, the risks and threats surrounding open banking are many: Who regulates the APIs, and under which privacy standards, provincial or federal? Who regulates the fintechs? Which privacy rules do they follow? How does a consumer authorize these players to disseminate their banking data? Once a consumer has given consent, can they revoke it? What happens to the data once a consumer has withdrawn their consent? How does a consumer know which players are holding their data?
Some of the bigger questions on cyber risks and hacking also apply: How easily can a fintech get hacked? What rules do they follow, and who enforces these rules?
Banks are highly regulated players with tremendous privacy standards in place in Canada, as are insurance companies. Where do fintechs fall into that hierarchy of standards? Canada's banks spend millions, if not billions, on technology to protect their customers' data, and even they get hacked. How about these fintech firms, which spend very little? These are a few of the big-picture issues that I will leave for this committee's consideration.
With respect to the insurance sector, as Mr. Lafrenière mentioned, with threats of cyber risks, we can say that, when it comes to mutual insurance companies, we believe there is minimal risk. Insurance companies do not hold valuable financial data and, as such, are not as exposed to hacking as banks, for example, which hold much more valuable data.
I will leave you with an example. Of course, an insurance company insuring your home or car could be hacked; however, I am not sure a hacker would find it worth his while to know how old your car is or how many washrooms you have in your basement. Of course the risk of hacking exists; however, it is a question of degree.
With that, we're pleased to take any questions you may have.
That's fair enough, Mr. Chair. I respect your ruling, but certainly, when we shout down members with points of order as the point tries to get made, the chair has the right to rule on that.
Gentlemen, thank you for being here. Forgive me for my layperson's understanding. When we talk about apps, I'm wondering if we're also talking about applications through social media and things like that. What I'm getting at is, when we look at the Cambridge Analytica situation, part of what was at stake there was the fact that there was a legal grey zone with regard to data that was collected when a Facebook user would do one of these personality quizzes, or whatever. They were sort of clicking "Okay" and signing away a bunch of data they weren't aware of.
Is there a concern that by opening the floodgates for third party applications with regard to banking, someone could, say, log on to an application with the good intention of using it for a credit check or things like that—we see a lot of these services being offered—and then just scan through, as a lot of people do, and click "Okay", and then they've basically sold away a bunch of very private financial information?
In and of itself, this may not be bad; it may be used in the right way by the application user, but then if you get a breach, as with Equifax, the next thing you know, that data is being used for nefarious purposes—especially given that the third party app may or may not have the same type of security protocols in place as a large institution like one of the banks, which have been at this much longer in some cases.
That's probably a long-winded, convoluted way of getting to the question. What are some of the ramifications of where this could go, potentially?