Skip to main content
Start of content

INDU Committee Meeting

Notices of Meeting include information about the subject matter to be examined by the committee and date, time and place of the meeting, as well as a list of any witnesses scheduled to appear. The Evidence is the edited and revised transcript of what is said before a committee. The Minutes of Proceedings are the official record of the business conducted by the committee at a sitting.

For an advanced search, use Publication Search tool.

If you have any questions or comments regarding the accessibility of this publication, please contact us at accessible@parl.gc.ca.

Previous day publication Next day publication
Skip to Document Navigation Skip to Document Content






House of Commons Emblem

Standing Committee on Industry and Technology


NUMBER 102 
l
1st SESSION 
l
44th PARLIAMENT 

EVIDENCE

Thursday, December 7, 2023

[Recorded by Electronic Apparatus]

  (1535)  

[Translation]

    Colleagues, I call this meeting to order.
    Welcome to meeting No. 102 of the House of Commons Standing Committee on Industry and Technology. Today's meeting is taking place in a hybrid format, pursuant to the Standing Orders.
    Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming consideration of Bill C-27, an act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts.
    I'd like to welcome our witnesses this afternoon. With us is Ana Brandusescu, AI governance researcher with McGill University.
    Good Afternoon, Ms. Brandusescu.
    I would also like to welcome Alexandre Shee, industry expert and incoming co‑chair of Future of Work, Global Partnership on Artificial Intelligence.
    Good Afternoon, Mr. Shee.
    From Digital Public, we have Bianca Wylie.
    Thank you for being with us, Ms. Wylie.
    Lastly, from International Association of Privacy Professionals, we have Ashely Casovan, managing director of the AI Governance Centre.
    I'd like to thank you, too, Ms. Casovan.

[English]

     Without further ado, I will yield the floor for five minutes to Ms. Brandusescu.
     Good afternoon. Thank you for having me here today.
    My name is Ms. Ana Brandusescu. I research the governance of AI technologies in government.
    In my brief, with public participation and AI expert, Dr. Renee Sieber, we argue that the AIDA is a missed opportunity for shared prosperity. Shared prosperity is an economic concept where the benefits of innovation are distributed equitably among all segments of society. Innovation is taken out of the hands of the few—in this case, the AI industry—and put in the hands of the many.
    Today, I will present four problems and three recommendations from our brief.
    The first problem is that AIDA implies but does not ensure shared prosperity. The preamble of the bill states, “Whereas trust in the digital and data-driven economy is key to ensuring its growth and fostering a more inclusive and prosperous Canada”. However, what we see is a concentration of wealth in the AI industry, especially for big tech companies, which does not guarantee that the prosperity will trickle down to Canadians. Being “data-driven” can just as easily equal mass data surveillance and more opportunities to monetize data.
     Trust, too, can be easily conflated in Canada with social acceptance of AI, telling people over and over that AI is invariably good. You may have heard the phrase “show, don't tell”. Repeating that AI is beneficial will not convince marginalized people who are subject to AI harms, such as false arrests. AI harms are extensively covered by the Canadian parliamentary study titled “Facial Recognition Technology and the Growing Power of Artificial Intelligence”.
    The second problem is the AIDA's centralization of power to ISED and the Minister of Industry. The current set-up is prone to regulatory capture. We cannot trust ISED—an agency placed in the position of both promoting and regulating AI, with no independent oversight for the AIDA—to ensure shared prosperity. Agencies placed in these dual roles with dual responsibilities, such as nuclear regulatory agencies, are often incompatible, so it will inevitably favour commercial interests over accountability of AI development.
    The third problem is that public consultation is absent. To date, there has been no demonstrable public consultation on AIDA. Tech policy expert Christelle Tessono and many others have raised this concern in their briefs and in articles. ISED's consultation process thus far has been selective. Many civil society and labour organizations were largely excluded from consultation on the drafting of the AIDA.
    The fourth problem is that the AIDA does not include workers' rights. Workers in Canada and globally cannot share in the prosperity when their working conditions to develop AI systems include surveillance in the workplace and mental health crises. Researchers have extensively documented the exploitative nature of AI systems development on data workers. For instance, there is huge toll on their mental health, even leading to suicide.
    In 2018, I learned from digital governance expert Nanjira Sambuli about Sama, which is a Silicon Valley company that works for big tech and hires data workers all over the world, including in Kenya. The contracts that Sama held with Facebook/Meta and OpenAI have been found to traumatize workers.
    We have also seen many cases of IP theft from creators, as AI governance expert Blair Attard-Frost has written about in their brief on generative AI.
    To share in the prosperity promised by AI, we propose three recommendations.
    First, we need a redraft of the AIDA outside of ISED to ensure public and private sector accountability. Multiple departments and agencies that are already involved in work on responsible AI need to co-create the AIDA for the private and the public sector and prevent the use of harmful technologies. This version of the AIDA would hold companies like Palantir, as well as national security and law enforcement agencies, accountable.
    Second, we need AI legislation to incorporate robust workers' rights. Worker protection means unions, lawsuits and safe spaces for whistle-blowers. Kenyan data workers unionized and sued Meta due to the company's exploitative working conditions. The Supreme Court ruled in their favour. Canada can follow the lead of the Kenyan government in listening to its workers.
    Similarly, in the actors' union strike, American workers prevented production companies from deciding when they could use and not use AI, showing that workers can indeed drive regulations. Beyond unions and strikes, workers need safe and confidential channels to report harms. That is why whistle-blower protection is essential to workers' rights and responsible AI.
    Third and lastly, we need meaningful public participation. Government has a responsibility to protect its people and ensure shared prosperity. A strong legislative framework demands meaningful public participation. Participation will actually drive innovation, not slow it down, because the public will tell us what's right for Canada.
    Thank you.

  (1540)  

[Translation]

    Thank you very much, Ms. Brandusescu.
    I'll now give the floor to Mr. Shee for five minutes.
    Go ahead, Mr. Shee.

[English]

    My name is Alexandre Shee. I'm the incoming co-chair of the future of work working group of the Global Partnership on AI, of which Canada is a member state. I'm an executive at a multinational AI company, a lawyer in good standing and an investor and adviser to AI companies, as well as the proud father of two boys.
    Today, I'll speak exclusively on part 3 of the bill, which is the artificial intelligence and data act, as well as the recently proposed amendments.
    I believe we should pass the act. However, it needs significant amendments beyond those currently proposed. In fact, the act fails to address a key portion of the AI supply chain—data collection, annotation and engineering—which represents 80% of the work done in AI. This 80% of the work is manually done by humans.
     Failing to require disclosures on the AI supply chain will lead to bias, low-quality AI models and privacy issues. More importantly, it will lead to the violation of the human rights of millions of people on a daily basis.
    Recent amendments have addressed some of the deficiencies in the act by including certain steps in the AI supply chain, as well as requiring the preservation of records of the data used. However, the law does not consider the AI development process as a supply chain, with millions of people involved in powering AI systems. No disclosure mechanism is put in place to ensure that Canadians are able to make informed decisions on the AI systems they choose, ensuring that they're fair and high-quality, and that they respect human rights.
     If I unpack that statement, there are three takeaways that I hope to leave you with. The first is that the act as drafted does not regulate the largest portion of AI systems: data collection, annotation and engineering. The second is that failing to address this fails to protect human rights for millions of people, including vulnerable Canadians. In turn, this leads to low-quality artificial intelligence systems. The third is that the act can help protect those involved in the AI supply chain and empower people to choose high-quality and fair artificial intelligence solutions if it is enacted with disclosure requirements.
     Let me dive deeper into all of these three points, with additional detail on why these considerations are relevant for the future iteration of the act.
    Self-regulation in the AI supply chain is not working. The lack of a regulatory framework and disclosures of the data collection, annotation and engineering aspects of the AI supply chain is having a negative impact on millions of lives today. These people are mostly in the global south, but they also include vulnerable Canadians.
     There is currently a race to the bottom, meaning that basic human rights are being disregarded to diminish costs. In a recent well-documented investigative journalism piece featured in Wired magazine, entitled “Underage Workers Are Training AI” and published on November 15, 2023, a 15-year-old Pakistani child describes working on tasks to train AI models that pay as little as one cent. Even in higher-paying jobs, the amount of time he needs to spend doing unpaid research means that he needs to work between five and six hours to complete an hour of real-time work—all to earn two dollars. He is quoted as saying, “It’s digital slavery”. His statement echoes similar reporting done by journalists and in-depth studies of the AI supply chain by academics from around the world, and international organizations such as the Global Partnership on Artificial Intelligence.
    However, while these abuses are well documented, they are currently part of the back end of the AI development process, and Canadian firms, consumers and governments interacting with AI systems do not have a mechanism to make informed choices about abuse-free systems. Requiring disclosures—and eventually banning certain practices—will help to avoid a race to the bottom in the data enrichment and validation industry, and enable Canadians to have better, safer AI that does not violate human rights.
    If we borrow from recently passed legislation Bill S-211, Canada’s “modern slavery act”, creating disclosure obligations helps foster more resilient supply chains and offers Canadians products free from forced or child labour.
    Transparent and accountable supply chains have helped respect human rights in countless industries, including the garment industry, the diamond industry and agriculture, to name only a few. The information requirements in the act could include information on data enrichment and specifically how data is collected and/or labelled, a general description of labelling instructions and whether it was done using identifiable employees or contractors, procurement practices that include human rights standards, and validating that steps have been taken so that no child or forced labour was used in the process.
    Companies already prepare instructions for all aspects of the AI supply chain. The disclosure would formalize what is already common practice. Furthermore, there are options in the AI supply chain that create high-quality jobs that respect human rights. The Canadian government should immediately require these disclosures as part of its own procurement processes of AI systems.

  (1545)  

    Having a disclosure mechanism would also be a complement to the audit authority bestowed on the minister under the act. Creating equivalent reporting obligations on the AI supply chain would augment the current law and ensure that quality, transparency and respect of human rights are part of AI development. It would allow Canadians to benefit from innovative solutions that are better, safer and aligned with our values.
    I hope you will consider the proposal today. You can have a positive impact on millions of lives.
    Thank you.
    Thank you, Mr. Shee.
    I'll now yield the floor to Ms. Wylie for five minutes.
    My name is Bianca Wylie. I work in public interest digital governance as a partner at Digital Public. I've worked at both a tech start-up and a multinational. I've also worked in the design, development and support of public consultations for governments and government agencies.
    Thank you for the opportunity to speak with you today about AIDA. As far as amendments go, my suggestion would be to wholesale strike AIDA from Bill C-27. Let's not minimize either the feasibility of this amendment or the strong case before us to do so. I'm here to hold this committee accountable for the false sense that something is better than nothing on this file. It's not, and you're the ones standing between the Canadian public and further legitimizing this undertaking, which is making a mockery of democracy and the legislative process.
    AIDA is a complexity ratchet. It's a nonsensical construct detached from reality. It's building increasingly intricate castles of legislation in the sky. It's thinking about AI that is detached from operations, from deployment and from context. ISED's work on AIDA highlights how open to hijacking our democratic norms are when you wave around a shiny orb of innovation and technology.
    As Dr. Lucy Suchman writes, “AI works through a strategic vagueness that serves the interests of its promoters, as those who are uncertain about its referents (popular media commentators, policy makers and publics) are left to assume that others know what it is.” I hope you might refuse to continue a charade that has had spectacular carriage through the House of Commons on the back of this socio-psychological phenomenon of assuming that someone else knows what's going on here.
    This committee has continued to support a minister basically legislating on the fly. How are we writing laws like this? What is the quality control at the Department of Justice? Is it just that we'll do this on the fly when it's tech, as though this is some kind of thoughtful, adaptive approach to law? No. The process of AIDA reflects the very meaning of law becoming nothing more than a political prop.
    The case to pause AIDA and reroute it to a new and separate process begins at its beginning. If we want to regulate artificial intelligence, we have to have a coherent “why”. We have never received a coherent why for AIDA from this government. Have you, as members of this committee, received an adequate backstory procedurally on AIDA? Who created the urgency? How was it drafted, and from what perspective? What work was done inside government to think about this issue across existing government mandates?
    If we were to take this bill out to the general public for thoughtful discussion, a process that ISED actively avoided doing, it would fall apart under the scrutiny. There is use of AI in a medical setting versus use on a manufacturing production floor versus use in an educational setting versus use in a restaurant versus use to plan bus routes versus use to identify water pollution versus use in a day care—I could do this all day. All of these create real potential harms and benefits. Instead of having those conversations, we're carrying some kind of delusion that we can control and categorize how something as generic as advanced computational statistics, which is what AI is, will be used in reality, in deployment, in context. The people who can help us have those conversations are not, and have never been, in these rooms.
    AIDA was created by a highly insular, extremely small circle of people—tiny. When there is no high-order friction in a policy conversation, we're talking to ourselves. Taking public engagement on AI seriously would force rigour. By getting away with this emergency and urgency narrative, ISED is diverting all of us from the grounded, contextual thinking that has also been an omission in both privacy and data protection thought. That thinking, as seen again in AIDA, continues to deepen and solidify power asymmetries. We're making the same mistake again for a third time.
    This is a “keep things exactly the same, only faster” bill. If this bill were law tomorrow, nothing substantial would happen, which is exactly the point. It's an abstract piece of theatre, disconnected from Canada's geopolitical economic location and from the irrational exuberance of a venture capital and investment community. This law is riding on the back of investor enthusiasm for an industry that has not even proven its business model out. On top of that, it's an industry that is highly dependent on the private infrastructures of a handful of U.S. companies.

  (1550)  

    Thank you.

[Translation]

    Thank you very much.
    I'll now give the floor to Ms. Casovan for five minutes.

[English]

    Thank you for inviting me here to participate in this important study, specifically to discuss AIDA, a component of the digital charter implementation act.
    I am here today in my capacity as the managing director of IAPP's AI governance centre. IAPP is a global, non-profit, policy-neutral organization dedicated to the professionalization of the privacy and AI governance workforces. For context, we have 82,000 members located in 150 countries and over 300 employees. Our policy neutrality is rooted in the idea that no matter what the rules are, we need people to do the work of putting them into practice. This is why we make one exception to our neutrality: We advocate for the professionalization of our field.
    My position at IAPP builds on nearly a decade-long effort to establish responsible and meaningful policy and standards for data and AI. Previously, I served as executive director for the Responsible Artificial Intelligence Institute. Prior to that, I worked at the Treasury Board Secretariat, leading the first version of the directive on automated decision-making systems, which I am now happy to see included in the amendments to this bill. I also serve as co-chair for the Standards Council of Canada's AI and data standards collaborative, and I contribute to various national and international AI governance efforts. As such, I am happy to address any questions you may have about AIDA in my personal capacity.
    While I have always had a strong interest in ensuring technology is built and governed in the best interests of society, on a personal note, I am now a new mom to seven-month-old twins. This experience has brought up new questions for me about raising children in an AI-enabled society. Will their safety be compromised if we post photos of them on social media? Are the surveillance technologies commonly used at day cares compromising?
    With this, I believe providing safeguards for AI is now more imperative than ever. Recent market research has demonstrated that the AI market size has doubled since 2021 and is expected to grow from around $2 billion in 2023 to nearly $2 trillion in 2030. This demonstrates not only the potential impact of AI on society but also the pace at which it is growing.
    This committee has heard from various experts about challenges related to the increased adoption of AI and, as a result, improvements that could be made to AIDA. While the recently tabled amendments address some of these concerns, the reality is that the general adoption of AI is still new and these technologies are being used in diverse and innovative ways in almost every sector. Creating perfect legislation that will address all the potential impacts of AI in one bill is difficult. Even if it accurately reflects the current state of AI development, it is hard to create a single long-lasting framework that will remain relevant as these technologies continue to change rapidly.
    One way of retaining relevance when governing complex technologies is through standards, which is already reflected in AIDA. The inclusion of future agreed-upon standards and assurance mechanisms seems likely, in my experience, to help AIDA remain agile as AI evolves. To complement this concept, one additional safeguard being considered in similar policy discussions around the world is the provision of an AI officer or designated AI governance role. We feel the inclusion of such a role could both improve AIDA and help to ensure that its objectives will be implemented, given the dynamic nature of AI. Ensuring appropriate training and capabilities of these individuals will address some of the concerns raised through this review process, specifically about what compliance will look like, given the use of AI in different contexts and with different degrees of impacts.
    This concept is aligned with international trends and requirements in other industries, such as privacy and cybersecurity. Privacy law in British Columbia and Quebec includes the provision of a responsible privacy officer to effectively oversee implementation of privacy policy. Additionally, we see recognition of the important role people play in the recent AI executive order in the United States. It requires each agency to designate a chief artificial intelligence officer, who shall hold primary responsibility for managing their agency's use of AI. A similar approach was proposed in a recent private member's bill in the U.K. on the regulation of AI, which would require any business that develops, deploys or uses AI to designate an AI officer to ensure the safe, ethical, unbiased and non-discriminatory use of AI by the business.

  (1555)  

     History has shown that when professionalization is not sufficiently prioritized, a daunting expertise gap can emerge. As an example, ISC2's 2022 cybersecurity workforce study discusses the growing cyber-workforce gap. According to the report, there are 4.7 million cybersecurity professionals globally, but there is still a gap of 3.4 million cybersecurity workers required to address enterprise needs. We believe that without a concerted effort to upskill professionals in parallel fields, we will face a similar shortfall in AI governance and a dearth of professionals to implement AI responsibly in line with Bill C-27 and other legislative objectives.
    Finally, in a recent survey that we conducted at IAPP on AI governance, 74% of respondents identified that they are currently using AI or intend to within the next 12 months. However, 33% of respondents cited a lack of professional training and certification for AI governance professionals, and 31% cited a lack of qualified AI governance professionals as key challenges to the effective rollout and operation of AI governance programs.
    Legislative recognition and incentivization of the need for knowledgeable professionals would help ensure organizations resource their AI governance programs effectively to do the work.
    In sum, we believe that rules for AI will emerge. Perhaps, more importantly, we need professionals to put those rules into practice. History has shown that early investment in a professionalized workforce pays dividends later. To this end, as part of our written submission, we will provide potential legislative text to be included in AIDA, for your consideration.
    Thank you for your time. I am happy to answer any questions you might have.

  (1600)  

    Thank you very much.
    To start the discussion, I'll yield the floor to MP Perkins, for six minutes.
    Ms. Wylie, the minister talked a lot about 300 consultations after he tabled the bill, not before. Looking at the list that he provided after we asked for it, I see that 28 were with academics and 216 were basically with big business and not really with people who are impacted, so it was sort of the converted talking to the converted.
    I'd like you to talk a little more, if you could, to expand on your belief about why you think a proper consultation, with this bill defeated and reintroduced in a new format, would produce a better result.
    Certainly. Thank you.
    I think, even with academics, they're not working in operations. The reason I listed the examples I gave is that I think AI starts to make sense when we talk about it in a specific context: as mentioned, in manufacturing, in health care, in dentists' offices. We could go through all of society here. We need to talk about people who are working in those spaces, not general specialists.
    This is what I mean. Even within the critics, people have a vested interest in going way down into the complexity instead of zooming out and looking at this to ask why we are doing this. What are we trying to accomplish? The answers to those questions are going to be very different per sector. What looks beneficial and harmful per sector is a totally different thing.
    I think that's why we need to restart the conversation from the point of what we are trying to do here, and then we can talk about how we would do it. You can't start the “how” before you get your “why” clear.
    What this bill outlines—which was a bolt-on to a previously failed privacy bill—is driven by trying to imitate what's going on in Europe, but it basically says that we're going to legislate harms, the highest level of harms in AI. It has already failed to define it well, because the minister has already had to revise the definition.
    Are the highest risks or harms the only harms that are potential out there, and what are all the levels? There are various levels of AI that can impact people, besides the highest level that they're legislating.
    Absolutely.
    There are two things on this point. One of them is that harm is always contextual. Something can seem absolutely safe in terms of, say, data collection your doctor has, and you turn around and someone else has it. It's dangerous. It's never absent context and use, ever, so I would argue that structural categorization is incorrect.
    The reason we look to Europe all the time and ask what Europe is doing.... I know it's appealing to say that what they are doing over there may be thoughtful, but geopolitically, from an economic perspective, they want their own Google, Amazon and Microsoft. When you gin up all this complexity, you protect your national industry. This is a way to enable the economy to grow, based on domestic rules.
    There is, then, that broad harmonization conversation you're hearing. How well has that worked to date globally with data protection law? It has not. It has not worked with privacy either.
    Those are the two pieces of a response to that.
     We've had a lot of discussion here about the first two parts of the bill, about whether or not privacy is a fundamental human right and whether or not this bill, in spite of the late-stage, eleventh-hour conversion of the minister in recognizing that, still has a lot of exceptions in it that give the paramount authority to business to override the fundamental right.
    In the AIDA bill, there's no mention of human rights, personal privacy or anything else, but there is mention of creating a super ministry of undefined power and undefined regulation at ISED to rule it all. What's an alternative to having one major Ottawa super agency that thinks it can rule the entire AI world in Canada? What's the alternative?

  (1605)  

    There is at least one alternative, which is why I keep going back.... The groundwork, the homework for this bill was not done. Even before you go out to the public, you go within the government and ask if this is the problem we're seeing in banking, in health care and in automobiles. We start from there, and then we think, “What do we do next from an adaptive perspective?”
    We don't reinvent the world in the name of artificial intelligence. It's disrespectful to the existing status of the government, of democracy and of accountability. I think you at least start there. When things don't fall into there, then let's get specific and regulate. Let's get specific and talk about accountability. We don't start building the world around artificial intelligence here and ignore everything else that came before.
    Should any future legislation outline what all the levels, as we know them, of artificial intelligence are, from the repetitive task stuff that gets done in a business right through to computer efficiency?
    I genuinely don't think this is the right approach from a structural question perspective, because artificial intelligence, if we break it down, is pattern matching and advanced statistics. We didn't regulate mathematics. We didn't regulate statistics. We didn't regulate databases. We didn't regulate general software. I don't think the software industry did badly without general regulation.
    It's just to say—
    That reminds me of a question I've been meaning to ask and haven't been able to ask anyone yet.
    We have not yet regulated any level of computing power in the world, but we are here trying to. Why?
    It's industry. Capital is looking for the next place to go. I'm only saying this because the business model isn't even proven yet. Do you know who knows they're making money? It's Google, Microsoft and Amazon. For every other start-up that is riding on the back of those companies, let's talk about where they are in two years. We're legislating for that context, which is novel and has not arrived yet, and that's the driving feature here. Make it make sense, please.
    I know that—
    Thank you very much, MP Perkins.
    I'll now yield the floor to MP Van Bynen for six minutes.
    Thank you very much, Mr. Chair.
    One thing that I'm enjoying very much about this committee is the divergent perspectives that we're hearing, the level of engagement and the level of intelligence in approaching the issue.
    The reality is that the genie is out of the bottle. My concern is that we're not going to go back to where we were before.
    My first question is for Ms. Casovan.
    In April 2023, you and 75 other researchers co-signed a letter calling on the government to move forward with the artificial intelligence and data act and saying that further postponing the act would be out of sync with the speed at which technology is being developed. Is your position the same today as when you co-signed that letter?
    I would note that I did that in my former capacity as the executive director of the Responsible AI Institute. I still continue to serve on the board of the Responsible AI Institute, so I'll share this in that capacity, given the policy neutrality of my current position.
    That said, yes, I definitely do believe that. As you mentioned, the genie is out of the bottle. I have a lot of respect for Bianca and her perspective. One thing I want to focus on is the role that.... Ana spoke to the harms and the challenges that exist from these systems. I do think that there is a fundamental delta between AI technologies and other types of ways in which we've provided regulation in certain sectors previously. I agree that we need to augment or look at existing legislation and figure out how AI impacts those industries: instead of having legislation that is specific to AI, figure out how we augment that and how that's complementary to this work. That does leave a lot of systems and different types of contexts that don't get resolved through that process.
     Thank you.
    That turns me over to Mr. Shee.
    According to the website, the Global Partnership on Artificial Intelligence is “a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.” It includes 29 countries.
    How could the work of the Global Partnership on Artificial Intelligence working group provide a framework for the implementation of the laws that will be governing artificial intelligence, such as the artificial intelligence and data act?

  (1610)  

    It's an excellent question.
    The purpose of the group is to bring world-renowned experts and policy-makers together around the table to actually think about the practical applications of artificial intelligence.
    One of the artifacts that recently came out from the working group on the future of work was 10 policy recommendations about what we have identified with the International Labour Organization as the “great unknown”, the idea that 8% of the working population, going forward, will be impacted in an unknown way by artificial intelligence, and there is an opportunity to act.
    It's an incredible organization that brings stakeholders from around the world. We discuss, in very practical terms, the way to apply legislation. It would be very open to continuing to be consulted in this process, and it can help give concrete examples of how AI can be built responsibly and benefit humanity.
    I'm interested in your comments. Some of the notes I made include “digital slavery” and your concerns about that.
    How do you think the impacts of AI on work should be regulated in Canada?
    There are two aspects to consider.
    The first one is how it's impacting work today in Canada and beyond. That's the first element. Then, how will it impact society and the place of work, going forward?
    If we think about today, we see there are millions of people who actually work behind the scenes in AI systems to make them operate effectively. They are not protected under this law, nor are they protected under any legislation that's coming out on AI; therefore, there's an opportunity to legislate the AI supply chain for what it is, a supply chain with millions of people working on it.
    In the second phase—the impact on workers going forward—there are a lot of unknowns around what will happen to workers and how their work will be influenced.
     One of the advantages of the Global Partnership on Artificial Intelligence is that we have representatives from academia, industry and worker unions, as well as governments. The statement that was put out was essentially that we need to put in place studies on the impact of AI on future work. We need to invest in retraining. We need to invest in making sure we're transitioning some roles. We need to be aware, even most recently with the advent of generative AI, that there already are economic impacts on low-skilled workers, who will need to be retrained and given other opportunities.
    The future of work needs that, and the Global Partnership on AI has a policy brief that is available online.
    I think I have about 15 seconds left.
    You provide an overview of regional and national initiatives. Which countries have the most robust approaches? Would you recommend amendments to the artificial intelligence legislation that we have here?
    The first amendment that I would recommend is to have a disclosure on the supply chain to ensure that we can decide on the usage of ethical AI that does not have forced labour or child labour in it. Right now the leading jurisdiction is the EU, where we see that companies we're working with actually have, in practice, higher standards than anywhere in the world, and they are forcing disclosure mechanisms in the private sector.
    I would say that's where we should be looking right now. We should be looking at the EU right now for legislation.
    Thank you, Mr. Chair.

[Translation]

    Thank you, Mr. Van Bynen.
    Mr. Lemire, you have the floor.
    I'd like to thank all the witnesses.
    I'll start with Ms. Casovan.
    Ms. Casovan, during your time in the Government of Canada, you led the development of the first‑ever artificial intelligence policy, namely, the directive on automated decision‑making. This directive imposes a number of requirements on the federal government's use of technologies that assist or replace the judgment of a human decision‑maker, including the use of machine learning and predictive analytics. These requirements include the requirement to provide notice when the automated decision‑making system is being used, as well as the existence of recourse methods for those who wish to challenge administrative decisions.
    In your opinion, should this type of notice or recourse provision be included in the Artificial Intelligence and Data Act?

  (1615)  

[English]

     I believe this type of notification is required.
    One thing that we did with the directive on automated decision systems was recognize that there are multiple different types of contexts in which these systems are being used and that those have different types of categories of harms. If you have a reference in the legislation like appendix C in the directive, then you'll see that there are different requirements that exist for those different types of harms.
    One of the challenges we had when looking to implement it was that people were looking for the acceptable standards or the bar that they'd need to meet. Unfortunately, that wasn't developed. That's what needs to happen now in order to address some of the concerns that you've raised—notification and other types of documentation requirements. That type of additional context is required through additional regulations that support the broader framework of AIDA, and then you need to look at what you do in those contexts for different degrees and categorizations of risk.

[Translation]

    In the case of a remedy, who should the consumer turn to if they want to challenge an automated decision‑making process or provide clarification?

[English]

    When consumers are looking to make a challenge, again, not only do they need the notification in order to understand that an AI system is even being used, but they should also have appropriate recourse for that. Again, these are different types of mitigation measures that will be context-specific and that should be included based on what the type of system is—again, following subsequent rules that should be made.

[Translation]

    As I understand it, the directive requires an algorithmic impact assessment for each automated decision-making system. Based on various specific criteria, this assessment will lead to a classification ranging from level 1, which is the lowest incidence, to level 4, which is the highest incidence. The results of that evaluation must be made public and updated if there are any changes to the functionality or the scope of the system.
    Why is it important that automated decision‑making systems undergo an algorithmic impact assessment?

[English]

    The key issue that we were trying to address is not to over-regulate or create more oversight than is required. We want there to be this balance of innovation in using these systems and also protection of the people who are using them. By breaking it down and recognizing that different types of issues and harms occur with the different types of systems, we wanted to address the effort that is required to ensure that appropriate mitigation measures are put in place.

[Translation]

    Can you give us some examples of some of the criteria used to determine the level of impact of each system? Would it be a good idea to add this type of requirement to Bill C‑27?

[English]

    I would love to see that. I think we see that in the amendments, with the different types of classes. We have the seven classes of potential impacts. I think there's recognition in that. There are different levels of harm that can exist within that. I would definitely recommend adding something almost like a matrix—to say that you have these different types of impacts that could occur in hiring or health, and these are the different types of harms that could exist, so therefore these are the mitigations needed. Most importantly, it's about matching that to industry-developed standards.
    One thing that Bianca was referencing—and other witnesses have too—is the need for increased public participation in this process. Standards development processes do allow for that and accommodate that. That's why I think this is really important.
    Again, it's recognizing that we're not going to be able to put everything in black and white in any sort of legislation. Having people trained to understand what those harms are, and how to look for them and mitigate them, is the point of having somebody responsible, like a chief AI officer.

[Translation]

    One of the criteria in the algorithmic impact assessment is the level of impact on the rights not only of individuals but also of communities. We have heard the call from marginalized communities that Bill C‑27 must go beyond individualized harms and include harms that disproportionately affect certain groups.
    Can you explain to us why we need to change some individualized language and ensure that the government directive will be as specific and inclusive as possible?

  (1620)  

[English]

     Different types of mitigation, as you're mentioning, depend on the use of the system. Both the technology and the context within which it's being used will change. The harms will change, from an individual to a group to the organization itself. Therefore, first of all, it's understanding what the harms are.
    The work I did at the Responsible AI Institute was really building on the work I did at Treasury Board: This is what the scope of a system is, and we need to put something like a certification mark on it, like a good housekeeping or LEED symbol. That type of acknowledgement would require you to be able to identify what those harms are, first and foremost, and therefore identify the different types of criteria or controls you would need to go through in order to mitigate them for the individual or the group or the organization.

[Translation]

    Thank you very much.
    You're welcome.
    Thank you, Mr. Lemire.
    Mr. Masse, the floor is yours.

[English]

    Thank you, Mr. Chair.
    Maybe I'll start with Mr. Shee, because he's virtual.
    There have been suggestions, not only by this panel but others as well, that we scrap this and start all over. I'm wondering if you have an opinion with regard to the content related to the Privacy Commissioner. If there is a separation of the two major aspects of the bill here, would you support continuation of the privacy work or should that be potentially looked at as well?
    Then I'll go to the witnesses in person.
    I would say that the AI act in itself is extremely important. As was mentioned by other witnesses today, AI systems already have an impact on people's lives, and their development is just increasing in speed. I would be very favourable to seeing legislation that at least sets the base framework. From there, we can move forward.
    Right now the legislation is not complete. It needs work and it needs to have additional amendments to ensure that it touches the whole AI supply chain and protects people throughout, both while it's being built and when it's being deployed.
    I'll move to Ms. Casovan, please, and then across the table.
    Again, what I'm looking for is this: If we do end up not getting enough fixes to the AI component, and that starts over or is delayed, should we continue to progress with the privacy and the potential changes that are suggested there?
    I'm a huge fan of the fact that this bill has.... I know that some people have said it's a bolt-on, as was announced today, but I think it's an important bolt-on. If AIDA does not continue, there are privacy implications and consumer protection implications in relation to the use of AI.
    I would like to see at least those two components strengthened.
    Thank you.
    I'll go to our next witness, please.
    I'm not going to respond to that. I'm not well located to comment on the privacy pieces of the bill.
    Okay.
    I'll go to our final witness, please.
    Just in terms of AIDA, AIDA should be separate.
    In terms of privacy, that's not my expertise either. I just stand by my comment to remove AIDA and proceed with the other two. Whether other amendments are needed for that is for somebody else.
    This is interesting.
    I do want to ask about the protection of labour law. If you could continue with regard to that, how would that best be done? Would that be through a commissioner or a special component in the labour ministry? I'm just throwing this out there. What are some mechanics we have around it that you're seeking to change?
     Thank you for that.
    As Ms. Wylie said, I could give you so many examples, right now, of specific types of harms, real-world implications and everything that's changing all the time, but I want to zoom out a little and talk about why labour is important to look at.
    Before getting into who can do this, it seems paradoxical to me to want agility in technologies that are so complex. We don't understand them. Most people don't. The black box is still there. Engineers don't understand them still, to this day. Workers are being continuously impacted. When I say “impacted”, I mean negative impacts and harms. I submitted a brief to your committee with Dr. Renee Sieber, and we discuss those at length. You have multiple studies to look at, from multiple years. I've been following Sama for five years now, the company that is a self-proclaimed “ethical AI” company. When we look at who says they're ethical, and what ethical is, we should really question that, as well.
    In my first five minutes, I said that AI being a societal benefit is being shoved down our throats. That is the case. “We need digital literacy. We need AI literacy. We know it's good and it's here to stay.” I'm here to sometimes reject that. We should be able to ban AI when we need to. We should be able to listen to the workers and see what they want and what they think. What does their day-to-day job look like? Do they have enough breaks? Look at what Amazon is doing, micromanaging every millisecond of their lives. The factory workers are living in a limbo space. I wouldn't even say “a limbo space”. They're in hell.
    How do we prevent that? Why not go to labour departments that know those strengths? This is why ISED is not fit to do this alone. Earlier, I was asked what other agency could do this. It cannot just be one. It has to be multiple. This is a team effort. This goes back to democracy. Slow it down a bit and listen to the public. We don't know what the public wants, because the public wasn't involved. We need to listen to labour organizations, departments that deal with labour everywhere in this country, and the workers themselves. This is why we cannot just have people in these rooms. We cannot just have this televised. We need to have people come to you. We need you to come to the people. We need to look at town halls. We need to look at off-line methods. We need to look at different times and places to do public participation, because we live in a digitized world.
    You're saying we need to change everything for AI. No. As Ms. Wylie said before, AI needs to change for us.

  (1625)  

    Thank you.

[Translation]

    Thank you very much.
    Go ahead, Mr. Vis.

[English]

    Thank you to all the witnesses here today.
    I'm very concerned about this broken bill. As legislators, we around this table understand what's at stake here, but it's very disconcerting. For the second time since we started doing this bill, we received massive packages of information from the minister that completely changed the bill in front of us. I'm saying, “Minister, why did you screw up so badly, and where the heck was your department for years? Where were you?”
    In the last meeting, I asked a number of experts whether Industry Canada or the Government of Canada even has the capacity. This was one of the first things I raised in Parliament when I got elected. I was on the HUMA committee reviewing data systems for the Department of Human Resources, because they were still using a binary code method from the 1970s. I think that's still in effect today. The Government of Canada has proven that, generally, they get a lot of things wrong and they're not up to date in the 21st century. I am so apprehensive about giving this department any more power over something most experts are still contemplating how to get right.
    That said, I think that, despite the minister's incompetence in this, his heart may be partly in the right place. He's trying to bring forward amendments and do something to fix his own mess. However, it is very scary that he's so incompetent that we're just getting thrown this information.
    I'm sorry for that rant, but part of me is thinking now—
     Tell us what you really think.
    Tony, you and I both come from the Dutch community, and in our culture, it's about being direct. I know you appreciate that as well. Thank you, my friend.
    That's true. Dutch people are direct. Tony was even born in Holland.
    We talked a lot about enshrining a fundamental right to privacy for children in the first part of the bill. We got from the minister seven areas where he doesn't believe that AI should be used now. I don't see anything in there related to children. That's kind of concerning.
    Have any of you followed the debates that we've had so far about a fundamental right to privacy for kids?
    Ms. Casovan, you're nodding “yes” in response.

  (1630)  

     I heard the debates, yes.
    I'm in a position where the Liberal members of this committee may make a decision with the Bloc Québécois to support this going through. I'm not sure where we're going to land on that. We're openly having this deliberation about whether this part of the bill deserves to go forward. That's where we are right now, in good faith.
    That said, if it does go through, is it worth it for committee members to look at some of the other amendments that we'll be putting forward in the first part of the bill, like really enshrining some protections for kids?
    I am so concerned about the innocent. I have a 10-month-old daughter, a four-year-old son and an eight-year-old son. I'm so concerned about their innocence and the manipulation. The bill, I will admit, does address psychological harms, but I don't think one or two clauses are good enough when it relates to a data-driven economy that impacts kids from birth to death in today's day and age.
    Could you comment on that a bit?
    Sure. Actually, the reason I included my personal note was that I heard your line of questioning. It is concerning. It is not something that I typically speak to, but it was quite surprising, having the experience of working in this space for almost a decade—which is scary—to really think about the evolution of different types of technologies and therefore the societal impacts they have.
     I was also nodding my head when you were mentioning some of the challenges that exist internally. Working inside government, I saw them up close and personal. Definitely, as with all organizations, there are concerns when we're using old technologies to try to fix modern problems. That said, the reality is that it does take a significant amount of time.
    On the children's perspective, the fact that I had kids recently completely opened my aperture in terms of the harms. It made it more real and visceral than I could have ever imagined. Everything was abstract before.
    I not only think that this should be included, but I think that when we see potential new classes of high-impact systems get added into these amendments, it would be nice to see something related to the protection of youth, similar to what we're seeing south of the border in the U.S.
    Okay.
    Mr. Shee mentioned in his comments earlier the relationship between generative AI models and child labour.
     If we had, say, a clause in the AI portion of the bill that excluded any data that was created by children in third world countries, what impact would that have?
    It would have a—
    I was actually asking Ms. Casovan.
    I think it would be not only nice to see.
    One challenge, though, with all of these systems is that they're trained on data. I know you've talked about this lots in this committee, so I won't regurgitate it too much, but what's important to note is that often the supply chain is not transparent. Knowing where that data comes from is quite difficult. To know that it comes from or was collected by children, I think you need to solve the more fundamental problem of transparency in the supply chain of data collection practices, which I think should be addressed with deeper concern in this bill as well.
    Mr. Chair, do I have any more time?
    Thank you, Mr. Vis. That's all the time.
    Mr. Shee, I will just allow you to add to this, if you had something.
    Yes. I would just add that it is common practice within the AI development world to actually detail instructions for both data collection and data annotation. Including any reference to child labour or forced labour would have a tremendous impact on making sure that that would be eradicated, given that it would be included specifically in the instructions given to companies that are operating around the world.

  (1635)  

     Thank you.
    Mr. Sorbara, you have the floor.
    Welcome, everyone.
    Thank you for your respective testimonies on AI. It's fascinating. It's very complex, and it's given a lot of us as MPs and not specific subject matter experts a lot to chew on.
    I do wish to go to the gentleman who is here virtually, Alexandre.
    You mentioned several times the AI continuum and the idea of data collection, engineering and annotation in the AI supply chain. Can you elaborate on that point? Your first point was that we should go forward with the bill. If you can comment on both aspects, that would be great.
    Essentially, when we look at artificial intelligence, there are many steps in that.
    The first step is collecting data for an AI system. The second step is annotating that data. For example, if you have an image where you see a nose and eyes, there is somebody annotating that. Then there is the feedback loop where that data is enriched, so it goes through a software model, and ultimately the outputs of that are revalidated by a human. That's packaged into a proof of concept that's oftentimes launched, and then it becomes a product that's used by consumers or in the business context. That's the whole supply chain.
    Right now, this legislation is geared only around the outputs, so we're missing all of the work done by humans to create the AI systems. I think it's important to have a law in place, because we need to start regulating the outputs as much as we need to regulate the supply chain.
    My recommendation [Technical difficulty—Editor].
    I'm afraid it's the whole system, because it's not just Mr. Shee.
    Mr. Shee, I will ask you to go back one minute in time. The system froze.
    Essentially, I think it's important to have legislation in place, because we need to start protecting the citizens who are interacting with AI systems.
    We also need to hold accountable companies that are building AI systems and ensure that they're not using practices that are against Canadian values in their supply chain.
    You did say one thing that I found fascinating. You made the linkage between the AI supply chain and human rights, and you also mentioned the race to the bottom on the lack of worker rights when it comes to the AI supply chain. I would love to follow up in a more in-depth conversation on that, but I am going to move on to another witness.
    Ashley, you commented on what compliance would look like in this AI world. Can you elaborate on that? We know governance within any type of organization is very important, and any type of service or product that's provided is important. When I think of compliance, I'm trying to wrap my head around compliance in an AI world. What is that, and what should it look like?
     You're not the only one. It's something that I think is quite complicated.
    One note that came in the amendments was related to the role of auditing within the commissioner's office. Something I'd like to see is more proactive use of auditing to ensure compliance, as opposed to the powers of the commissioner to require an audit when there is something that percolates that's problematic enough. It would be good to see that. That is done typically like a financial audit. You require those proactively every year with companies.
     In this case, one thing we need to understand better is the scope of an AI system and, based on that, what those harms are and how you comply with that. What does that “good” look like, again, doing that through a public process? From there, you would require third party audits in a similar way that we have professional auditors in financial services to do the same thing.

  (1640)  

     As someone who has spent many years in financial services, domestically and globally, I know we depend on audited financial statements to do our job. Hopefully 99 times out of 100 they're accurate.
    Are we looking at the same type of world as we go forward?
     If I have my way, yes, I would love that.
    However, there's one addition that I'd like to note here. One of the things that people talk about—as you would know—is that financial audits are lengthy and very expensive. However, there are a lot of tools we can use to expedite the evaluation of these systems now. Recognizing that they're changing so rapidly, it's really important for us to use and leverage those tools so that those audits are not only expedited, but also accurate at the time of that use and also for the purposes of ongoing monitoring.
    You're out of time, Mr. Sorbara.
    I'll just yield myself a little bit of time for a follow-up question to Mr. Shee.
    I'm just trying to understand what's the scale of the issue you're hoping for Parliament to address when it comes to the exploitative labour used in the AI supply chain.
    I'm thinking out loud. Just today, I watched the Google DeepMind Gemini prototype that came out. It seems to me like maybe that ship has sailed and AI has already gotten to the point where you would think it's not that labour-intensive.
    I'm just trying to understand what the scale is.
    It's a great question.
    What I would say is that, first, while AI systems look very impressive to consumers, millions of people on a daily basis are working behind the scenes to make them work. That spans from our interactions with social media to automated decision-making systems.
    The scope of what I'm asking for is very simple. By having a disclosure mechanism in the law that requires companies to give information about the data they've collected and how they collected it, we essentially ensure that millions of people around the world who are annotating daily and interacting with AI systems in the back end are protected from exploitative processes and procedures.
    Right now, nothing is in place in any jurisdiction in the world. Right now, this is a wild west and nobody is protecting these people. These are youth in Pakistan and women in Kenya. These are vulnerable Canadians who are trying to have a side job to make a bit more money. In all of these circumstances, they have nothing protecting them.
    Thank you.

[Translation]

    Mr. Lemire, you have the floor.
    Thank you, Mr. Chair.
    Mr. Shee, I'd like to continue with you.
    Yesterday, CBC presented a report on artificial intelligence in the service of war. He was referring to the use of artificial intelligence and Gospel software by the Israeli army to better target the facilities assigned to Hamas. However, this technology increases the number of civilian casualties, according to experts, because there is less human interaction behind every decision made before going on the offensive.
    In that case, is there some slippage in artificial intelligence? How can we regulate these practices to save human lives?
    That's a great question.
    I have no experience with artificial intelligence in war or defence situations. I can just comment on that as a sophisticated citizen.
    I think we need a very clear framework that takes into account the rules of war that have already been established. Unfortunately, AI systems are used in war situations and they kill a lot of people. We have to be aware of the risk and take measures to manage it.
    Very humbly, this is a bit outside my area of expertise. However, I think you raise an important point. Indeed, artificial intelligence will be used in war situations and systems [Technical difficulty—Editor].
    We still have problems with the system. It looks like the sound has stopped working.
    Mr. Shee and Mr. Masse, can you hear us?

  (1645)  

    Yes, I can.
    Okay.

[English]

     The sound is back.
    Yes. It just started working again.

[Translation]

    Okay.
    Mr. Lemire, you may continue.
    Based on your expertise and your involvement with the Global Partnership on Artificial Intelligence working group, I think you will be able to help us demystify all the pitfalls caused by artificial intelligence, in particular.
    I would like you to give us another type of example in terms of protecting our democratic institutions. For example, this week, 19,600 amendments were proposed in a very short time at the Standing Committee on Natural Resources by the Conservative Party, not to mention its name. Since the amendments were made in a very short period of time, I think that they were necessarily generated by artificial intelligence. So they want to bog down institutions using artificial intelligence.
    In that case, is there also a risk of slippage? What can we do to protect our democratic institutions from these attempts that could be called "Trumpists"?
    Without commenting specifically on what came out, I can mention that generating artificial intelligence, which is taking up more and more space in the current conversation, can generate texts as plausible as those that human beings would write. It certainly puts our democracy at risk, and it also puts people's interactions with different systems at risk. Will people be able to be assured that this is a human being? The answer is no.
    You raise an extremely important question. You have to have a marker to determine whether something is produced by an AI system as well as a way for the consumer or the person interacting with the system to know that they are speaking with a system based on artificial intelligence and not with a human being.
    These are essential elements to protect our democracy from the misinformation that can emerge and will grow exponentially with new systems. We're in the early days of artificial intelligence. We absolutely have to have ways of identifying artificial intelligence systems and determining whether we are in the process of interacting with a system or a person.
    Thank you very much.
    Thank you, Mr. Lemire.
    Mr. Masse, you have the floor.

[English]

    Thank you, Mr. Chair.
    Ms. Wylie, you didn't get a chance to get into the last conversation, so let me ask you this. If we had an AI commissioner or data commissioner, whatever it might be called, would the model of the Privacy Commissioner, an independent model like that, be something we should be looking toward?
    Second to that, maybe you have another suggestion. How do we bring some independence and accountability to the table here that would also be empowered?
    I just want to go back to my remark about making the same mistake for the third time. It's the same mistake that we saw with privacy and data protection, which is to treat these topics as objects that are independent from the rest of the world as it exists. We've seen the failure that thinking like this has gotten us to. While we talk about privacy a lot, what we're dealing with is a deeply privatized space where the control and power of the infrastructures—particularly with AI, never mind with data and software—are privately held.
    If we think about our failures in access to justice for things like privacy and data protection, and we think about the failures of this sort of model, with privacy or data protection it's never about whether we should do it; it's always about “how”. If we want to turn the corner into a different world so that we have control over technologies, we have to talk about them in context.
    For me, I go back to this. Who is the minister in charge of X, Y or Z sector? Who is in charge of making sure forestry is operating in a certain way, environmental protections are operating in a certain way and cars are operating in a certain way? Go from there every time. If we keep scaffolding more and more complexity, more and more compliance, and more and more of these sorts of complexities out into the sky, it doesn't serve justice. We have a fundamental access to justice problem as it stands right now. How many people have the time and energy to file a complaint with the Privacy Commissioner? What is the profile of someone or the demographic of someone who can bring that kind of a complaint forward?
    In the same way that we're talking today about how you would even know if you were harmed by artificial intelligence, I recently heard the concept that in some cases it's like asbestos: It's in things and you don't know it's there. Whom will you go to and ask to hold them accountable? If you get hit by a car, there is a clearly accessible track of where you go to deal with that problem. I do not understand why we think it's a good idea to build an entirely new construct when we have a perfectly good physical and material world and a perfectly good set of governance standards. That's a place where we have public power. To me, the only people who benefit from scaffolding all this additional complexity are those with private interests. In a democracy—at this point in time we're 30 years in—public power has to be increased.
    Do I want to see a commissioner for AI? No. I don't want to see a new regime for AI.

  (1650)  

     You want it built within the actual departments. Is that correct?
    That's correct. Guess what's going to happen. It will surface the harms that right now we're talking about in abstractions.
     I'm sorry, everyone. I cannot believe we keep doing this. This is not how the world works. You have to talk about specificity. That is how the law works. The law is about where, when, who and what happened. That's how justice works. You don't work in the abstract.
    I'm sorry to have to keep bringing us back to this point, but why don't we build out from what we have functioning? The majority of our government is pre-existing. Work from there.
    Thank you.

[Translation]

    Thank you very much.
    Mr. Généreux, you have the floor.
    Thank you to all the witnesses.
    As they say in Quebec, I am “sur le cul”.

[English]

    I don't know if you know what that means. It means “I'm on my ass.”

[Translation]

    I don't know if that translates into that.
    I apologize to the interpreters.
    Ms. Wylie, you're giving us a particularly interesting lesson.
    Bill C‑27 has been on the table for almost two years. It has been evaluated. It was created by public servants, obviously, in Ottawa. Some politicians have done some work to try to put in place legislation that would frame a problem that you don't really see. In fact, you are saying that all the legislation we need already exists. We simply have to proceed by sector to correct the elements that will be related to artificial intelligence.
    At the committee, we have heard from people. Over the past few years, we have conducted studies on blockchain, the automotive industry, the right to repair, and so on.
    Today, you are telling us that what we are doing is not working at all. You are telling us to take back the studies we have conducted and the existing legislation and to correct what will affect artificial intelligence, because it is already in all these sectors, let's face it.
    My question is still for you, Ms. Wylie, but I would also like to know what Ms. Brandusescu and Ms. Casovan think of your position.

[English]

    There's nothing wrong with supporting the industry of AI. I want to be very clear about that. However, to me, it is stunningly disingenuous to use fear, safety, harm reduction, human rights protection and more to say that's the reason for this bill, which is why I was asking what this bill is actually doing.
    If we were to stop and go back to the start, we could ask, “What are the sector-specific harms we're seeing? How did we deal with them in software and banking?” Take any sector. They're not starting from scratch. They've had to deal with data. They've had to deal with privacy. They've had to deal with software. There are harms all over the place with software. We're not looking at those. This is also not even coherent with the last 30 years of tech harms.
    What I'm saying is that you should go to the people. Again, the only people we should be talking to haven't been included in this process. They're the ones who could tell you about the problems, because right now, everybody's talking in generic terms.

[Translation]

    I agree with you. Moreover, we were told on Tuesday that the third world war will be technological.
    To avoid potential abuses, should we still have something like what is about to be implemented in Europe and around the world?

  (1655)  

[English]

     Thank you.
    To build on Bianca's point, I think we need to regulate AI. We need to slow down. We can't move fast and break things with regulation. Again, AI is being regulated, but it's private regulation. It's self-regulation, and that's not working. Mr. Shee already said that in his first five minutes.
    We need something different. We need it to be like the EU in the way that it needs to be for both the public and the private sector, and it cannot be centralized. I insist, because there's too much at stake to keep all of the power in one agency. I'm going to move on to also say that it can't just be the OPC. It cannot just be the Privacy Commissioner, because AI is more than privacy. AI is also about privatization.
    What we see right now is the risk of regulatory capture, because every time there's a new summit being done, as in the U.K., at Bletchley Park, the major governments, including ours, get together and announce collaborations with a top firm. Now, we have the usual suspects—Amazon, Google and Microsoft—and then the new kids on the block, but it cannot be that.
    Again, this isn't about perfection at all; it's that the process to get here was one and a half years of almost no public consultation, participation or understanding, even when, as Bianca said, we do have specific examples of harms over and over again. We do need to make sure that AI is regulated. We can use our imagination to do that with law.
    You have 30 seconds, Ashley.
    I think I've shared repeatedly that I don't think AI is one monolithic thing. I do think that it needs to be broken down into sector-specific regulation.
    I think what AIDA does is provide a framework that is then dependent on other types of sector-specific regulation. There is no contesting that how this was done is problematic. There needs to be more public consultation. I was really happy to see in the amendments that at least it speaks to what was heard and then how that's being addressed.
    I think if we just put that aside—the process is for you guys to debate—it's very important to have regulation of AI systems. I've seen and experienced, by doing a lot of interventions with civil society organizations, harms that are occurring. I don't think that having rules or just leaving it up to self-regulation from companies to say, “We're doing the best we can do” is going to prompt the appropriate behaviour. I think legislators need that.
    We need to be able to set the homework, too. We can't say, “You go and write your test, and then you mark it yourself.” I think it's very important that we as civil society organizations, in combination with industry and with government and academics, write what those tests are, the standards that I'm talking about, and then use that to assess industry.
    Thank you very much.

[Translation]

    Thank you very much, Mr. Généreux.

[English]

    Mr. Turnbull, the floor is yours.
     Thanks, Chair.
    Thanks to all of the witnesses for being here today. We have a great juxtaposition of perspectives. We've been hearing a diverse cross-section of perspectives during this undertaking.
    I think we can all admit that this is a very big and important piece of legislation that is complex and challenging for all of us, both as legislators and as.... I'm not sure that any one stakeholder has the full view on how this should move forward. I think it's good to have conversations like this that are push-and-pull. There are lots of challenges here. I appreciate that.
    I wanted to just say, first off, that this bill was initiated due to recommendations from the minister's AI advisory committee, which consisted of industry experts. The Facebook whistle-blower was also part of the context that led to this work.
    I'd also say that, from my perspective, there were consultations of over 300 stakeholders, which included universities, institutes, companies, industry groups, associations, privacy experts and consumer protection groups. I think there are some other categories, but those are the ones that I can see. I have the list here. It has been provided publicly and to committee members.
    I would also say, in terms of the way that parliamentary practice goes, that usually amendments aren't provided in advance, during a study where you hear from witnesses. The government has provided the amendments in advance. We've also heard from some witnesses.
    There are varying perspectives on what the process should look like. We've heard from some witnesses that tabling a framework piece of legislation was a good way to get something on the Order Paper and then undertake a lot of consultation to inform amendments to that. Some people feel like that process is very justified.
    I just wanted to make those statements off the hop.
    Ms. Casovan, we've heard the point that you made, about balancing innovation and protection, from some other witnesses. What I've heard is that having responsible guardrails for AI will allow people to benefit from it while protecting them at the same time. I know that's a challenge. Like any legislation that we work on, it is a balancing act that we're constantly confronting.
    Could you speak to how we will know if we get that balance right, from your perspective?

  (1700)  

     It would be if no one is harmed.
    It's really difficult to address that. I think that, first, we need to try. We need to recognize that just leaving it to the free market is probably not going to result in the conclusion we want to see.
    There's an amazing resource called the AI Incident Database. I don't know if you've seen it. It tracks different types of harms that exist. I'd love for that to be compiled and then we'd understand better, so we can articulate in more common ways what those are.
    It's a difficult question to answer in the absence of having any of these in place. I think the requirement for collecting data through a commissioner's office that would have those use cases reported is important.
    Ms. Casovan, from your perspective, are we moving too fast on this legislation and this work?
    We heard from quite a few witnesses earlier this week that we're in fact behind and we need to move faster. That's what I've been hearing a lot from stakeholders. Some would maybe disagree with that.
    What would you say?
    With all respect to my fellow witnesses, I think we're moving way too slowly on this.
    I understand the gravity of this. There are many different risks of harm, and it's hard to understand those without contextualizing them. I think Ms. Wylie made that point quite well. I heard her points, which essentially seem to be leaning towards a really decentralized approach to this, whereas I think the approach we're opting to take is to have a very central piece of legislation that is going to regulate all activity to some degree. Obviously, that will need to evolve and change. We know that the pace AI is evolving at is so quick that it's hard to keep up with.
    What is your perspective? It's a tough question to answer.
     I think there are two key points here.
    One is that we really need to have one point of accountability. There's a lot of interoperability between different types of AI systems, so knowing exactly.... If it's an automated vehicle, it might be very clear that this is going to fall into transportation, but if it's a health care system, it might have issues related to consumer protection or it might have issues related to the health and safety of somebody. Breaking those apart is difficult, so what I think this bill does is require those different types of regulators and regulations to work hand in hand with each other.
    There are also gaps that exist.
    Maybe, third, I would add—as I said in my opening statements—having the professionalization of an individual who would be responsible and accountable for the governance of these systems. You would then have some consistency across all of these different regulations.

  (1705)  

    It's interesting, because it's not uncommon for us these days to talk about the big overarching issues and wanting to take an all-of-government or all-of-economy or all-of-society approach, and I think most people understand that governments have to integrate across ministries and really tackle these problems together. We see that with the fight against climate change.
    However, a lot of the legislation still sits within a ministerial accountability and falls within a minister's mandate and role. I think it's not uncommon to have central legislation that is in one ministry but still impacts the work right across government ministries. I think that's what we might see in this process.
    Is that what you're hoping to see?
    The requirement of harmonization across different ministries, I think, is really important. I would also flag the requirement of harmonization within Canada—interprovincially, as well as provincial to national government and local government, as well—which I think is quite important.
    Also, this bill, as we know with the amendments, addresses international harmonization with Canada playing a crucial role with the EU—which we've heard a lot about today—but we haven't talked about the U.S. executive order and the implications of that.
    Chair, I think I'm out of time, but thanks for your leniency.

[Translation]

    Thank you very much.
    I now give the floor to Mr. Williams for five minutes.

[English]

    Thank you very much, Chair.
    Ashley, I want to follow up with you on a couple of things.
    This has been a great discussion, by the way, especially on AIDA today.
    We talk about the value of public and private data, especially for AIDA, and where this bill right now exempts that. Right now, under this bill, DND, CSIS and CSE are exempt from AIDA and there's provision for any federal or provincial department or agency to be exempted via regulation. That's the entire federal government and Crown corporations that are exempt.
    When we talk about AIDA as a whole in this bill, in your opinion, is it right that we've exempted all of the public government from AIDA as a whole?
    That's why we worked on the directive on automated decision-making systems at Treasury Board Secretariat. That's the purview of management systems that Treasury Board is responsible and accountable for. Should that be raised to an act level, similar to where we see PIPEDA and a Privacy Act that governs how public sector services work? Yes.
    One thing I would like to see is alignment of requirements between AIDA and the directive, or a subsequent type of policy that would come out from TBS recognizing that automated decision-making systems aren't the only types of AI.
    One of the things that it doesn't address, or things that are out of scope, is national security systems, as you mentioned, so I do think that additional provisions would need to be made for that.
    I guess the premise of this.... Just for everyone listening right now, the first part of Bill C-27 does not cover the public sector, but to the point that you brought up, we have the Privacy Act, which, it could be argued, we should have been studying at the exact same time. The point I'm making is that there is nothing out there that exists, especially not in AIDA, that addresses AI in the public sector, and we've talked a lot about that.
     I'm trying to get a better handle on your recommendation. Should this have been included with AIDA right now, or is this a whole other act that you're looking at that we should have included with this?
     The directive on automated decision-making systems does, though, oversee government's use of AI systems.
    One other additional thing is that we should, again, ensure alignment between these two due to the fact that most government departments aren't actually developing their own AI systems. They're purchasing them. I think that ensuring that procurement rules are the same as AIDA is quite important.
    However, privacy and looking at an act that would govern data of AI and AI as a whole would certainly look over that. Procurement would only look at other sections, like the Investment Canada Act or other acts.
    It's interesting to me that that's not in there. I think that is a glaring hole that I've just noticed today.
    I want to switch to either Ms. Wylie or Ms. Brandusescu.
    I really focus a lot on opposition to competition. We look at big, bossy conglomerates that exist within the system.
    Ms. Wylie, you made an interesting comment that this seems to be going forward only for industry, because capital is looking for a place to go. The examples you gave are that it seems to be benefiting Amazon, Microsoft and Google. They're big, bossy conglomerates. They're huge companies that are only looking to get bigger, and obviously to benefit from this.
    When it comes to competition, as the industry committee, we want small, scrappy competitors and companies to be able to enter the space and to ensure that they can compete and enter the market.
     I agree with your arguments on where we are with AIDA. Let's talk about if we started anew. How do we create competition? Where do we start in terms of making sure that we get all the players in, not just the big ones but some of the smaller ones included within the discussions?

  (1710)  

    I have just two comments on this.
    One, it's partially why, if we had a proper public engagement and started from the beginning, you'd have to map the infrastructural assets that make up artificial intelligence. There is no AI without big tech, full stop. You can't spin it up in your garage. You can't go and do your little software company because code is available to you. That's not how this industry works. This is what I mean. I'm concerned about the lack of homework that has been done to make sure we're starting from a place of material, physical, infrastructural reality, and how it relates to this industry. That's one thing.
    The second thing I want to say, which relates back to the conversation we were having about centralization or decentralization, is that not only does the Canadian government not have much clout in terms of telling what the heart of this infrastructure can and can't do.... When we think about privacy legislation, if we start up here with an umbrella called “privacy”, and then we look at how that works in different sectors, we might know what that looks like sector to sector. If our umbrella is called “artificial intelligence”, it's artificial intelligence what? What exactly are we trying to do if our umbrella is called “artificial intelligence”? Are we trying to use it everywhere?
    I just want to keep returning us to the fact that we're having a conversation within a frame that does not track to the reality of how this industry is set up, nor how our pre-existing legislation is set up.
     I just want to say how little companies might come in on this. The start-ups are hoping no one is going to ask about their two- or three-year revenues, because all start-ups have to do is show scale. That's how the venture capital industry works. You just have to show that your thing is getting big; you don't have to show that it's making money. That's how similar it is to a casino.
    That's why I think the fact that we're building into this sector without looking at the consequences on the rest of our whole economy is also a grave error.
     To add to Bianca's point, I want to take us back four years ago, when Element AI was heavily invested in by the public and the private sector. It's a case that we just do not speak about anymore in Canada and Quebec. This is to Bianca's point about who owns the infrastructure and who owns the data centre versus the datasets. Again, without big tech, there may not be AI, but I would argue that without the military there would be no AI, because that's where it comes from, like most technology.
    Element AI was a darling of Canada. In the end, the space that we had in the regulatory framework for competition did not allow it to survive. What happened? It was acquired by ServiceNow, a Silicon Valley company that does, frankly, worker surveillance.
    I would like to know exactly, when we move on to this new ideation, what more shared prosperity in competition looks like across SMEs and big companies. I would like to reflect on the failures of AI in Canada within the industry space, and see where we went wrong and what happened to the massive amount of funding and government spending to prop up our industry with all the AI research expertise we have, with all of the centres of excellence. We should reflect on this before we even go and ideate on how competition should look. We should reflect on what happened, especially with Element AI.

  (1715)  

    You're out of time, Mr. Williams.
    Before I turn to Mr. Gaheer, I'll give myself one small question.
    Ms. Brandusescu, you just mentioned something we've never heard so far on the committee. You said there would be no AI without the military. Would you mind explaining that?
    Certainly. I've heard over and over again witnesses talk about scale, but not violence at scale. That's what we see—how AI is being used in the military. We have to go back to something I spoke about when Parliament did a study on facial recognition technology—that's companies that are defence contractors, which are now spun up as AI and data analytics firms. A famous one is Palantir. You may know of them.
    Palantir is interesting, because it started in defence, but now it's everywhere. The NHS in the U.K. just gave them a contract of millions of dollars, despite so much opposition to it. Palantir promised that the U.K. government would be in charge of the data of the people, but in the end it is not so. We have past examples of Palantir abusing human rights. Let's bring that into context. For example, an Amnesty U.S.A. study showed how, in the U.S., government planned mass arrests of nearly 700 people and “the separation of children from their parents...causing irreparable harm”.
    I'll go back to the military. What does this mean? The military is the biggest funder of AI. We see rapid, exacerbating killing at scale. When we are racing to move forward with making more AI, making it faster and creating faster regulation just so we can justify to ourselves that we use it, we are not thinking about what should be banned, what should be decommissioned—
    Thank you, Ms. Brandusescu. I'll have to cut you off here. I was just interested in more information on that. To my knowledge, most of the biggest players in AI remain in the private sector, but thank you for the examples you provided.
    We have bells ringing, colleagues, which means we do need unanimous consent to continue. I'm looking around the room to see if we have it, given that we're going to about 35 hours of voting, thanks to our friends to my left, but definitely to my right politically.
    Do I have unanimous consent to continue for 10 more minutes?
    Some hon. members: Agreed.
    The Chair: I'll now yield the floor to MP Gaheer.
     Thank you, Chair, and thank you to all the witnesses for their testimony before the committee.
    My first question is for Ms. Casovan.
    We know that the minister has provided recent amendments to the committee to clarify the definition and scope of “high-impact systems” by outlining seven distinct classes of such systems. Do you think that's a good way of proceeding? Does it provide sufficient clarity, or do you think there would be a better model?
    As we've discussed a lot today, I do think it's a good start to understand that AI is not one thing. Breaking it down into different types of contexts and use is important.
    I think, though, that it's a limited list. I get that the concept is to continue to add to it and to have a process. I do think that maintaining an inventory of such classes could be difficult, as I mentioned earlier, recognizing that there are different degrees of risk that could exist within those classes and trying to identify a way...similar to what we did with the directive to break that down into what we are actually trying to achieve from each of the mitigation measures for those classes of systems.
    Do you have a proposed system that would be better?
     As I mentioned, I think it could be a matrix of both the contexts that are being used and the recognition of what a standard high-risk assessment would be.
    Again, I would draw your attention to appendix C of the directive on automated decision-making systems, where that is broken down into four different types of impact, as we called it, but then different types of compliance requirements would be related to that.
    I also think that the key word there is a “standard” for an impact assessment, to understand what that risk would actually be.

  (1720)  

    Sorry, I didn't mean to put you on the spot.
     No, no. I have lots of opinions about this.
    This is generally for everyone, and maybe Mr. Shee can answer this one. We also know that the government-proposed amendments to AIDA include a series of tasks to be completed before a general purpose or high-impact AI system can be made commercially available, including an assessment of adverse effects and a test of the effectiveness of measures to mitigate the risk of harm or biased results.
    What do you think about these new obligations that the government wants to impose on people who want to make AI systems available?
     Maybe I'll answer really quickly.
    Go ahead.
     I think that what's really important is that there is a governance process put in place before those systems are developed. As I mentioned, that's part of this assurance or audit function that would exist.
    I also think, as I mentioned in my opening statements, that having an accountable person, something like a chief AI officer, would help work through that process in a consistent and therefore meaningful way.
    Mr. Shee, do you want to add anything?
    I would just add that I think it's a good starting place, but especially in proposed paragraph 11(1)(a), respecting the usage of data, I think there would be advantages to including a disclosure mechanism to be able to understand how the data was labelled and how it was used. I think that would be something that would have an incredibly positive impact, both on the creation of the models and on their implementation.
    I think it's a good starting place, but I would include, specifically in that paragraph, the amendments that were proposed with a specific disclosure requirement around data labelling and annotation.
    We can't know how these things will be used. We can write systems all day where we say, “This is where we think it will be used. This is what we think the risks and harms could be.” It's a tool. You can't tell anybody how to use a tool. If they use it a certain way that's not in your categorization, you have a problem.
    This model, to me.... I'm going to keep bringing us back to deployment. We can write beautiful laws with intricacy all day long, but you can't control the use of these products in operations and deployment. I don't want us to talk as though how we think we should organize it is the most important thing. The most important thing is what's going to happen in reality.
    Ms. Casovan, do you think there should be a compliance audit before the AI systems are placed on the market?
     Yes, I do, and I think, too, a certain specification. That's why a standard would be good. The analogue could be a fair trade symbol or LEED, as I mentioned previously. Thinking about different types of standards that one would need to meet in order for that to go on the market should be a precondition for high-risk systems.
     Thank you, Chair.
    Thank you to the witnesses.

[Translation]

    Thank you, Mr. Gaheer.
    Mr. Lemire, you have the floor.
    Thank you, Mr. Chair.
    Ms. Brandusescu, last year, when you appeared before the Standing Committee on Access to Information, Privacy and Ethics, you talked about the procurement of artificial intelligence systems by the public sector. You were saying that facial recognition technologies and other artificial intelligence technologies highlight the need for a discussion on private sector participation in public governance.
    Can you elaborate on what you mean by private sector involvement in public governance when it comes to facial recognition technologies and other artificial intelligence systems?

[English]

    Facial recognition technology, as we know, hopefully is the low-hanging fruit of dangerous AI. It seems like harm is getting out of context. I will call it dangerous because that's what it is. Yet, we need to have these levels of imagination of banning certain technologies, and facial recognition technologies should be banned.
    The public sector can make that choice because it is responsible to the public in the end. The private sector, as it stands, is responsible to the shareholder and to the business model of making more money. This is how capitalism works. This is what we're seeing.
    That's not the job of the government. Again, when I say that AIDA should be out and reflected upon as public and private, that is exactly what I'm thinking about. I'm thinking about facial recognition technology used by law enforcement, national security, in IRCC and in immigration. Now it can be used maybe in Service Canada, or maybe in the CRA the way the IRS wanted to use facial recognition for doing taxes. Again, these technologies aren't domain-bound. Just like Palantir went from the military to health, FRT, facial recognition technology, works the same way. The public sector needs to be involved and to be publicly accountable to its people.
    I really am coming back to Bianca's points about democracy. Participation is messy, but we need to participate in a way that there is dissent, discussion, non-compliance across the board and consensus, because it is important to make sure that these technologies will no longer be used because they are too dangerous. We saw what happened with Clearview AI. That is a privacy case, but it is also a mass surveillance case, besides the obvious, which are the dangers and harms it has done to so many marginalized groups.

  (1725)  

[Translation]

    Thank you.
    We see all the abuses that are happening in Ireland and China, among others.
    Thank you.
    Thank you, Mr. Lemire.
    Normally, Mr. Masse would now have the floor, but I think he had to leave to vote. That will conclude the last round of questions, and since we have little time left to head to the House, that will end today's meeting.
    Mr. Masse is back. I thought we lost him.

[English]

    The floor is yours, Brian.
    Mr. Chair, perhaps we should wrap up. It's getting tight.
    I agree.
    I want to thank all our witnesses for enlightening us this afternoon.
    I want to thank the analysts.

[Translation]

    I also want to thank the interpreters and the clerk.
    The meeting is adjourned.
Publication Explorer
Publication Explorer
ParlVU