
Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities


NUMBER 088 | 1st SESSION | 44th PARLIAMENT

EVIDENCE

Wednesday, November 8, 2023

[Recorded by Electronic Apparatus]

  (1630)  

[English]

    I will call the meeting to order.
    Welcome to meeting number 88 of the House of Commons Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities. Pursuant to Standing Order 108(2), the committee is resuming its study on the implications of artificial intelligence technologies for the Canadian labour force.
     Today's meeting is taking place in a hybrid format, pursuant to the Standing Orders. Members are appearing in person and remotely using the Zoom application.
    For the benefit of everyone, I would ask that, before you speak, you wait until I recognize you. For those appearing virtually, use the “raise hand” icon to get my attention. For those in the room, simply raise your hand.
     You have the option of speaking in the official language of your choice. If translation services are interrupted, please get my attention, and we'll suspend while the issue is corrected. For those appearing virtually, use the globe symbol at the bottom of your screen to select translation. For those in the room, translation is provided via the earpiece.
    I would also remind those in the room to please keep your earpiece away from your microphone for the protection of the translators, who can incur hearing injuries from the feedback.
    All of our witnesses are appearing virtually today. We have James Bessen, professor of technology and policy—
    He's not with us. He's the one who's not here.
     We have Morgan Frank, professor in the department of informatics and networked systems at the University of Pittsburgh, by video conference. We have Fenwick McKelvey, associate professor of information and communication technology policy at Concordia University, by video conference.
    With that, we will have five-minute opening statements, beginning with Mr. Frank.
    You have five minutes for an opening statement, please.
     Thank you very much for this opportunity to share my thoughts with all of you.
    Generative AI renews concerns about job stability, education and the future of work, because it is capable of things that were unimaginable from AI systems just 10 years ago. The conventional wisdom from labour economics recognizes that technology does not automate occupations wholesale, but instead automates specific activities within a job.
    The challenge is that workplace activities and AI applications vary across the entire economy. Therefore, efforts to predict automation and job stability need to rely on simplifying heuristics. Cognitive, creative and white-collar workers are assumed to be safe from automation, for example, because creativity is difficult to assess objectively and because the creative process is difficult to describe algorithmically.
    However, generative AI tools, including large language models like ChatGPT and image generators like Midjourney, are doing creative work when they write essays, poetry or computer code, or when they generate novel images from just a prompt. This means that today's AI shatters the conventional wisdom that has been used to inform economic policy and economic research.
    For example, unlike past automation studies, a recent report from OpenAI and the University of Pennsylvania found that U.S. occupations with the most exposure to large language models tended to be the occupations requiring the most education and earning the highest wages. Departing from a heuristic-based approach to predicting automation will require some new data that reflects the more direct implications of generative AI.
    However, just like past technologies, generative AI performs specific workplace activities, which means that AI's most direct impact on occupations is a shift in workers' skills and activities toward skills that complement AI. If workers fail to adapt, then a job separation can occur. These separations include workers quitting or being fired by their employer. Job separations will lead workers to seek new employment, but if they struggle to find a job, then they may receive unemployment benefits to support them while they continue job seeking.
    This lays out a pipeline of AI impact, from the most direct implications to the least direct, and highlights that better data on shifting skill demands, on job separations by region, industry or occupation, and even on the unemployment risk experienced by occupations across the economy will improve efforts to predict AI's impact on workers.
    There are some emerging data sources, including job postings, workers' resumés and data from unemployment insurance offices, that offer new options for describing these details of the labour market, which are often missed by traditional government labour statistics.
    Finally, because shifts in skills are the most direct consequence from exposure to generative AI, prudent policy should focus on the mechanisms for skill acquisition. If generative AI will mostly impact white-collar jobs, then we should focus on the skills taught during a college education since a college education is the typical mechanism for getting students into those white-collar jobs.
    While labour statistics abound, insight into college skills is more difficult to find. If college skills are quantified, then, just as we study generative AI in the workforce, we can also assess the colleges, students and major areas of study with the greatest exposure to AI. However, educational exposure to generative AI should not be shied away from. Recent case studies find that generative AI tools do not out-compete or significantly improve the performance of experts, but they do make a big difference in raising the performance of non-experts to be more comparable to that of experts in those applications.
    If this observation holds across contexts, then incorporating generative AI into learning curricula has the potential to improve learning objectives, especially for underperforming students, and therefore could strengthen educational programs.
    In summary, generative AI is new and exciting and will impact the workforce in ways that differ from previous technologies. In fact, generative AI shatters the conventional wisdom used to predict automation from AI in the past, because it does the work of occupations that were previously thought to be immune to automation.
    A better path forward would focus on the data and insights reflecting what AI can actually do from the perspective of workplace skills and activities as well as the sources of those skills among workers in the workforce.
    With that, thank you.

  (1635)  

    Thank you, Mr. Frank.
    We'll now go to Mr. McKelvey for five minutes.
     Mr. McKelvey, go ahead please.
    I'm an associate professor in information and communication technology policy at Concordia University. My research addresses the intersection of algorithms and AI in relation to technology policy. I submit these comments today in my professional capacity, representing my views alone.
    I'm speaking from the unceded indigenous lands of Tiohtià:ke or Montreal. The Kanienkehaka nation is recognized as the custodians of the lands and waters from which I join you today.
    I want to begin by connecting this study to the broader legislative agenda and then provide some specific comments about the connections between foundation models trained on public data or other large datasets and the growing concentration in the AI industry.
    Canada is presently undergoing major changes to its federal data and privacy law through Bill C-27, which grants greater exemptions for data collection classified as being for legitimate business purposes. These exemptions enable greater use of machine learning and other data-dependent classes of AI technologies, putting tremendous pressure on a late amendment, the artificial intelligence and data act, to mitigate high-risk applications and plausible harms. Labour, automation, workers' privacy and data rights should be important considerations for this bill, as seen in the U.S. AI executive order. I would encourage this committee to study the effects of Bill C-27 on workplace privacy and the consequences of a more permissive data environment.
    As for the relationship between labour and artificial intelligence, I wish to make three major observations based on my review of the literature, and a few recommendations. First, AI will affect the labour force, and these effects will be unevenly distributed. Second, AI's effects are not simply about automation but about the quality of work. Third, the current arrangement of AI is concentrating power in a few technology firms.
    I grew up in Saint John, New Brunswick, under the shadow of global supply chains and a changing workforce. My friends all worked in call centres. Now these same jobs will be automated by chatbots, or at least assisted through generative AI. My own research has shown that a driving theme in discussing AI in telecommunication services focuses on automating customer contact.
    I begin with call centres because, as we know through the work of Dr. Enda Brophy, that work is “female, precarious, and mobile.” The example serves as an important reminder that AI's effects may further marginalize workers targeted for automation.
    AI already seems to be affecting precarious outsourced workers, according to reporting from Rest of World. Understanding the intersectional effects of AI is critical to understanding its impact on the workforce. We are only beginning to see how Canada will fit into these global shifts, how Canada might export more precarious jobs abroad and how it might find new sources of job growth across its regions and sectors.
    Finally, workers are increasingly finding themselves subjected to algorithmic management. Combined with a growing turn toward workplace surveillance, as is being studied by Dr. Adam Molnar, there is an urgent need to understand and protect workers from invasive data gathering that might reduce their workplace autonomy or even be used to train less-skilled workers or automated replacements. According to the OECD, workers subjected to algorithmic management report a greater loss of autonomy.
    All the promises of AI hinge on being able to do work more efficiently, but who benefits from this efficiency? OECD studies have found that “AI may also lead to a higher pace and intensity of work”. The impact seems obvious and well established by past studies of technology like the BlackBerry, which shifted workplace norms and encouraged an always-on expectation of the worker. Other research suggests that AI has the biggest benefits for new employees. The presumed benefit is that this enables workers to make a contribution more quickly, but the risk is that AI contributes to a devaluing or deskilling of workers. These findings emphasize the need to consider AI's effects not just on jobs but on the quality of work itself.
    The introduction of generative AI marks a change in how important office suites like Microsoft Office, Google Docs and Adobe Creative Cloud function in the workplace. My final comment here is less about AI's particular configuration now than about a growing reliance on a few technology platforms that have become critical infrastructure for workplace productivity and are rapidly integrating generative AI functions. AI might lock in these firms' market power, as their access to data and cloud computing might make it difficult for rivals to compete and for workers to opt out of these products and services. Past examples demonstrate that communication technology favours monopolies without open standards or efforts to decentralize power.
    I am happy to discuss remedies and solutions in the question and answer period, but I encourage the committee to do a few things.
    One, investigate better protection of workers and workers' rights, including greater data protection, and safeguards and enforcement against invasive workplace surveillance, especially to ensure workers do not end up training themselves out of a job.

  (1640)  

     Two, consider arbitration and greater support in bargaining power, especially for contracts between independent contractors and large technology firms.
    Three, ensure that efficiency benefits are fairly distributed, such as by considering a four-day workweek, raising the minimum wage and ensuring a right to disconnect.
    Thank you for the time and the opportunity to speak.
    Thank you, Mr. McKelvey.
    We will begin the first round of questioning with Ms. Ferreri for six minutes, please.
    Thank you to our witnesses who are here today to discuss the impacts of AI, in particular, on labour but also where Canada sits on this.
    Mr. Frank, are you Canadian? I know you're working out of an American university. Are you Canadian?
     I'm not Canadian, no. I'm American.
    What do you know about Canada's current productivity in terms of AI and where we stand?
    I know that Canada is very active in innovating in this space, mostly through my exposure to academic activity in the area of machine learning, computer science and data science as well.
    Actually, “Canada is 29th out of 38 countries in the Organization for Economic Co-operation and Development, based on GDP per hour worked—to the low[est] rates of new-technology adoption in our private sector.” We're actually doing really poorly in this area. We're really behind in our productivity, which has really plummeted in the last eight years as well. It says, “out of 35 OECD countries whose national statistical agencies have conducted similar business surveys, Canada ranks 20th in AI adoption.” That's 20th out of 35.
    You had some really interesting points that you were talking about. I'd like you to expand on them. You talked about the creative ability of AI. In particular, you said something about not being “immune to automation”. What do you mean by that?

  (1645)  

    What I mean is that, because the nature of workplace activities, or the skills you would need to perform one job compared to another, is so particular and so diverse, it's been difficult to find data that reflects all this variability, so researchers have relied on simple heuristics: for example, that if you get a college degree, you don't have to worry about automation.
    What's interesting about generative AI is that it's doing work that would have been assumed to be safe from automation just a few years ago. That means there are new parts of the economy—in particular, high-skilled, white-collar jobs—where generative AI is doing some of the workplace activities we would expect from these workers. That is something new.
    Where do you see closing the gap so that the employee learns how to operate AI rather than being replaced by AI?
    This is a very good question. It's not exactly clear how to incentivize this among employers. I think workers recognize that they need to upskill to work with whatever the new technology is in their domain, but I feel that employers and HR don't make enough space for this, or they don't see the value.
    What's a lot more common, generally, is that it's sometimes easier to separate from a worker who's been with you for a while and has a higher wage when you can hire somebody out of college much more cheaply, somebody who's just entering the workforce and is already prepared to work with new technology. Exactly how to do this reskilling on the fly for folks who are already in the middle of their careers is an open question.
    Thank you for that.
    As somebody who's sitting on the other side of the border, looking in at Canada, how do you think we're doing on a productivity level? I know that I gave you some stats, but what is the general consensus? Is that something that's talked about amongst your peers and colleagues?
    What I see is that, as I mentioned, the research activity around AI in Canada is very strong. I feel that, given the statistics you pointed out, there could be an improvement in capturing some of that talent in the economics of Canada.
    My understanding is that there are tech companies that are interested in places like Vancouver, for example, but this population of companies and workers is far outstripped by areas in the U.S., including New York and Silicon Valley outside San Francisco. Maybe that type of critical mass hasn't quite found a home in Canada yet.
    You just touched on a very critical point. We have the potential to attract the talent, to attract the work and to increase our GDP, but there are barriers to what brings people to Canada. I don't know if you want to expand on that, but I'm curious to know whether you've heard about our housing crisis, our inflation or these kinds of issues.
     Yes. I don't have too much to say about that, although we have many of the same issues in the U.S. too.
    Thank you for that.
    If I can go to you, Mr. Fenwick, your testimony seemed almost anti-AI. Even when I see what you have published in Internet Daemons: Digital Communications Possessed, it seems that you have a more negative spin on something that can be used as a tool.
     It is important. As I say, the horse is out of the barn. How do we use this to our benefit rather than being afraid of something that is inevitable?
    I would say that I don't have a negative view of artificial intelligence. What I would say is that I am cautious. I think my responsibility is to identify gaps between the development and deployment of artificial intelligence and the current regulatory environment in Canada, and, in particular, some of the ways we talk about how Canada fits into a global political economy around artificial intelligence.
    I think some of my concerns around generative AI specifically hinge upon its impacts on and relationship to Canadian privacy law. I think what we've seen—and I think what's quite significant—is that we're undergoing a kind of classic procurement hack, in which technology like ChatGPT has been released to the public and workers are adopting it and scrambling to use it without adequate time to address how it is being integrated into the workforce.
     This is a strategy similar to what's been used by companies like Clearview AI in trying to drive the adoption of AI tools by police forces through a similar mechanism of circumventing classic procurement processes. I think these types of strategies are part of what I hope to call out. I have less concern about the technology itself, necessarily, than about whether its delivery and development happen with a clear sense of its social impacts.

  (1650)  

    Thank you for that, and just to correct, I called you “Mr. Fenwick” earlier and it's “Mr. McKelvey”. I'm sorry.
    Thank you, Ms. Ferreri.
    Mr. Coteau, you have six minutes.
    Thank you, Mr. Chair.
    I want to thank our witnesses for joining us today. It's pretty exciting to hear from experts on this very interesting subject matter. Thank you for your time.
    Maybe I'll start with Mr. Frank.
    You suggested that generative AI may impact white-collar workers more than those we traditionally refer to as blue-collar workers, and I'm assuming that automation would affect white-collar workers less than AI would.
    Can you explain a bit more about those two technologies? I know that this is a study on AI, but I think it's important, because I guess the second part to the question is this: How is the integration or the intersection between those two technologies going to impact both workforce sectors?
    To elaborate, yes, workers across the economy in white-collar roles or blue-collar roles face a threat of automation from technology, although usually the technologies are really quite different. The go-to example for blue-collar workers might be thinking about robotics in manufacturing, while until recently the example for white-collar workers was to think about things like computer programming and machine learning.
     It seems that blue-collar workers are at greater risk of being completely substituted by things like robotics—imagine a conveyor belt with a robot arm completely replacing somebody who would otherwise have to move things around—while white-collar workers are made more productive, because machine learning makes it easier to analyze data and to focus more on interpreting results rather than actually crunching numbers.
    Generative AI is different because it seems that it's actually doing the more cognitive part of that white-collar work. It's able to interpret results in addition to doing things that standard machine learning can already do—like crunch numbers. This makes it fundamentally different and fundamentally within the domain of work that usually describes a white-collar job rather than a blue-collar job.
    Yes.
    You spend a lot of time in this area of study. I'm sure you see things that just amaze you in the workforce, things that may even catch you off guard, things that are disruptive. Can you give us an example of something you may have seen in the last year that caught even you a bit off guard in regard to the disruption it's having within a sector? Any examples...?
     Image generation has really picked up very quickly in just the last two years. It's performing very well compared to five years ago. It's getting better at a faster rate, and it's generalizing into other modalities as well. We're starting to see things like not just creating a single image based on a prompt but also creating several images together, based on a prompt, that are all coherent. You could think about a page from a comic book, for example, and even creating whole videos based on descriptions of the features of the objects in the image and also how they're going to interact over time.
    This is really a space with a lot of expansion. This has created a lot of uncertainty for creative workers in the economy. The most obvious example is with tools like Midjourney or OpenAI's DALL-E, which are image generator platforms. What do these tools mean for the future of work for graphic designers? It seems there's a risk that graphic designers could be completely automated, but I actually don't think that's what will happen. What I expect is that these tools will make ideation, which is just one step in the creative process, much faster and much more scalable. Graphic designers will become more efficient. Maybe they can offer their services for a lower price per contract, because they're able to do this ideation so much faster. From the economics literature, we know that when there's this scaling in productivity, the scaling of demand doesn't always have to be linear. It's sometimes the case that you can get more demand as a good or product becomes easier to produce.

  (1655)  

    Thank you so much for that response.
    Mr. McKelvey, I have a quick question for you. You talked about protecting workers' rights or just protecting workers in general when it comes to the adoption of AI as it's integrated more and more into the space that workers operate within.
    How do we as policy-makers, as folks who build regulation, develop policy, regulations and legislation that keep up with the changes that are happening so quickly? Even in your space and with your expertise, I would say that you probably couldn't predict with accuracy what's going to happen even next year.
    How do we get ahead of it by making sure that the policy, legislation and regulations we put in place actually are aligned with where we're going? Do you have any thoughts on that when it comes to workers' rights?
    Honourable member, I can assure you that I can't predict what will happen next, nor when the light will go out in my office.
    I will tell you that I think there are actually long-term trends. I feel that one thing that is important to recognize is that generative AI is arriving in a pretty well-established policy context when you have growing debate and concerns across the government about the influence of large technology firms.
    Really, two things come to mind as key points. One is an approach that governments elsewhere have been looking at around arbitration: allowing for and supporting collective bargaining power when there's such asymmetry between a large platform and the workers on that platform. I would add that many of the creative sectors working online are now front and centre on the impacts of algorithms and how they will affect content creation.
    I would think that one part is trying to figure out how, in places, to step in to alleviate bargaining asymmetries. The second is trying to deal with the contracts and contract law, because in many ways you're dealing with service arrangements with large institutions and cloud providers. This is another key point where we need to put symmetry in place. I think those are two key sites to identify.
    I think the third thing is just being mindful of the changes that are taking place in workplace surveillance. This is a long-standing trend. Certainly things like the turn toward algorithmic management and employee monitoring programs are not going away. I think sustained attention could be dedicated there.
    Thank you.
    Thank you, Mr. Coteau.

[Translation]

    Ms. Chabot, you have the floor for six minutes.
     Thank you, Mr. Chair.
    I'd like to thank the witnesses.
    I was pleased to see Quebec hosting an important forum on framing artificial intelligence last week, with a number of players in attendance. Even though the data is lacking, we're starting to see some interesting impact studies. I wanted to point that out.
    My first question is for you, Mr. McKelvey.
    In your speech, you talked about Bill C‑27. I should point out that our committee is not studying this bill. Another committee is studying it. One of my colleagues told me that the committee had only reached data protection in its study of the bill. Therefore, the committee hasn't yet gotten into the real challenges posed by artificial intelligence.
     You have made us aware that the Standing Committee on Human Resources, Skills Development and the Status of Persons with Disabilities could study the effects of Bill C‑27. In your opinion, should the two committees do it simultaneously rather than one after the other? Can you tell us more about that?

[English]

     First, some of my comments were also drawn from the AI forum presentation in Montreal and some of the panel discussions around labour. I actually think it is important to recognize Quebec's leadership in addressing the social impacts of artificial intelligence. That forum was an important milestone in pushing an agenda of thinking about AI not simply as economic policy but also as social policy.
    The challenge, presently, with Bill C-27 is that it's complex enough in itself, and then there is the added AIDA amendment. It's a really challenging moment to make very important legislation work, so having more eyes on it, particularly attention from your committee on the labour impacts of Bill C-27, would be welcome.
    Given the time that this committee will have to investigate the multitude of changes, I don't think there is going to be enough time to address those effectively. This is an important way of coordinating AI policy across the government, which in my own research I found lacking.

  (1700)  

[Translation]

    Thank you. That's a very good argument.
    You mentioned three things about the workforce. One thing you said was that the effects would be unevenly distributed. You also talked about quality of work, which is something I'm very interested in. Earlier, my Conservative colleague spoke about productivity. In my opinion, productivity is not only related to the number of hours worked; quality of work is also important.
    In your opinion, what effects will AI have on quality of work?

[English]

     I want to first clarify that artificial intelligence is a complicated term presently.
     I appreciate Dr. Frank's work in distinguishing between the present discussions of generative AI and the broader term that we use for artificial intelligence. Certainly, there is a wholesale conversation about AI's impact, but I think in this moment right now what we're talking about is generative AI.
    The two parts that stand out to me are, one, that Canada's position, at least in the generative AI landscape, is different from its position in the broader AI ecosystem. You've really seen movement from a few large American firms to launch some of the main products—you hear about ChatGPT and the others—which I think are not necessarily part of the Canadian ecosystem. That, I think, raises the first question about where we fit, in terms of our own workplace autonomy, what tools we are able to use and how much we are simply following. I think that's an important shift.
    The second thing is that my background is largely in studying media systems. My closest proxy to understanding the distributive effects of artificial intelligence is looking at creators online and around platform regulation. I would say that a lot of the impacts of artificial intelligence are around automated ad generation.
    Facebook is launching new features to auto-generate ads with AI. A lot of the content is this kind of high-level creative stuff, and I think the daily churn of information production is an important place where this impact is going to take place. Partially, I think our information systems are really primed for high-volume, low-quality content. That's been a wide concern, and certainly one of the impacts we see in journalism presently is workers attuned to generating press and stories for the algorithm.
    My first concern is that you could see a kind of devaluing of the type of labour that's being done, because it could be done more quickly or more efficiently. My second comment is that I think—and this is from my read of the OECD literature—there is also this potential for deskilling as we automate certain types of tasks. I think that's specific to generative AI, and to generative AI that's being approached in a top-down way. It's being embedded in key productivity suites and rolled out with the expectation that people will figure out how to use it.
    I think an important point to make is how OpenAI, which launched ChatGPT, has been deliberately trying to hack and disrupt the workplace. That open demo—what ChatGPT was—demonstrates a business strategy we want to attend to.

  (1705)  

[Translation]

    Thank you, Mr. McKelvey and Ms. Chabot.

[English]

    Ms. Zarrillo, you have six minutes, please.
    Thank you to the witnesses.
     I'm going to ask my question initially of Mr. McKelvey.
    We're obviously in the very early stages of this, as legislators, and I'm sure that it's going to evolve over time. Right now I'm wanting to focus on the obvious traps that we should be legislating. I really appreciate the three that you brought forward.
    I'm interested in your expanding a little bit more on this “efficiency benefits are fairly distributed”. With the intersectional lens that you brought to this discussion, which is gender.... There could be other intersections, of course. This committee looks at disability inclusion, so I'm also very interested in how that would benefit or harm persons with disabilities and bring them into the workforce.
    With that lens, I'm wondering if you could explain a little bit how workers can be protected and benefit from the obvious evolution.
     I do want to acknowledge that there are opportunities here. One of the parts that I think is important with generative AI in these opportunities is thinking about how they're changing the barriers to access, particularly when it comes to things like passing as a native English speaker.
     If we're adapting and trying to understand the multiple layers, I think one part is acknowledging one of the potential benefits: recognizing that some of the proxies we have for workplace competency, like English writing, are something generative AI might ultimately make more accessible, allowing people who are non-native speakers to actually access those skills. That kind of goes back to things like grammar.
    Part of what we're looking at here is attending to the different.... There are two parts that I think are coming up. One deals with change in the precarious workforce, when you're talking about more contract work, shift work or gig work. AI doesn't change that, but I think AI adds to the importance of studying the shifts in the labour market towards more platform-based arrangements, like what we see with Uber.
     That's really where I feel there's going to be one potential point of impact: whether you're going to see AI as part of what we call the “algorithmic management” of those platforms. Those are often people turning to those as jobs of last resort or jobs that they're looking to.... I think that in one sense it's an important way of protecting workers who are in those kinds of gig jobs.
    The second part, then, I think, is to look more broadly at the way we have this silent arrangement with a few large technology firms that are providing critical infrastructure, and at how conscious they are of the ways their data collection practices are affecting the workforce.
    I think those are my best guesses as remarks. I think there is a challenge here about, really, this deeper question: Is the driving force of this kind of productivity just going to be something...? Where is it going to be adopted and what are the drivers here? Part of what I see is that generative AI is incentivizing further automation in places that already seem automatable, like content creation. There is, I think, a way of saying that jobs that have already been deskilled or marginalized are going to see those pressures exacerbated by this turn towards generative AI.
    Thanks for that.
    I was hoping to get a bit of a further look into your comments around the gender split and how we need to look at this data—and also for disability. Have there been conversations around how data needs to be split? Have there been conversations around the disproportionate amount of data that's already in these larger systems that didn't look at women's voices, that didn't look at racialized voices and that didn't look at disability voices at the same percentage or to the same degree?
    In preparation for this, I was trying to look for evidence of where these impacts would be coming from. I wasn't able to find anything published that talks about the gender-based impacts in Canada specifically. I would add that this is really an important part of what's going on in these discussions about artificial intelligence, especially with generative models: the biases they embed and reproduce.
     I would like to acknowledge that, when you're talking about what voices, it's also important to recognize what voices these systems reproduce. There is really fantastic work on this. When you ask a generative AI model to depict a doctor, is it more likely to depict a man than a woman? It's the same thing when it comes to depicting people from other countries: how do these models reproduce certain key stereotypes?
     I clearly agree with you that there is a need to identify how automation and generative AI will impact jobs from an intersectional framework; that is clear investigative work that needs to be done. There is also a clear concern about the biases built into and baked into these technologies that are being rolled out as solutions to workplace productivity.

  (1710)  

    Thank you so much. I know that this is going to be a long conversation over time.
    I want to go back to protecting workers' incomes. We all know that the writers were on strike, and now it's the film industry. It's really impacting communities north and south of the border. I want to talk about those workers, those creative workers who have already seen the impacts. Perhaps you could share a bit about how incomes need to be protected and privacy and how this data collection now matters, but really, I want to know how we protect workers' incomes.
     Please give us a short answer, Mr. McKelvey.
    I think there's a question about ensuring that there's proper taxation, which is another discussion around large platforms, and making sure they are contributing. I know there's some stalled movement at the OECD. Certainly you're talking about ensuring that the benefits or the profits of a lot of these key platforms aren't leaving Canada. I think that's part of ensuring that there are strong social safety nets to support workers in general, whether that requires an expansion of the minimum wage....
    There's a lot of discussion. It's a bit hard because it's so fraught. When you're talking about universal basic income, that is often trotted out. Sometimes I think it undermines actually getting strong worker protections. I think there's a host of things that can be in place.
    I would say the one thing is that I've heard from creators and opened up some productive dialogues. There is concern there. There's definitely concern in the creative sector about what's taking place around generative AI.
    Dr. McKelvey—
     I think it kind of demonstrates to me what's important about the arrangement here. This technology is not coming into a neutral place; it's coming in where you're talking about large studios, creators and the relationships between them.
    I think—
    Dr. McKelvey, perhaps you could conclude your thought. I'm sure other questioners will get to you. I do have to keep close to the schedule, so I will move on to Mr. Aitchison.
    Dr. McKelvey, you can certainly conclude your thought process in response to another question. Thank you.
    Mr. Aitchison, go ahead for five minutes.
    I do have a couple of questions, but I am curious to hear the rest of your thought, Mr. McKelvey, if you would like to finish it.
    Thank you so much. I appreciate the time.
    I just wanted to say that I think what's interesting is that, in the writers strike at least, what was being negotiated was access to data and trying to ensure that workers were able to understand their place in the organization, which I think is an important thread.
    There was also a concern that, if you're talking about franchise models and about generating the next Marvel movie, then you're talking about a type of cultural production that is really oriented towards churning out the same type of content. I think that's where workers were concerned that their scripts or their content would be used to train models that would ultimately either undermine their bargaining power or replace them. That's important if only to point out the context and where there is a benefit or perceived value in this kind of automated content generation.
    The third thing is what actors are negotiating for—and this seems like a clear split—which is whether they have a right to their face and whether studios have access, in perpetuity, to modify their images. I think that all speaks to the idea that workers need to have data rights and privacy rights. I think the actors guild and the writers guild have really been at the forefront of demonstrating what is a broad concern, not just in Hollywood.
    Okay. Thank you for that.
    I'm one of those people who are still sort of struggling to grasp what exactly the scope of AI is, I guess, but there's no question that, in the last 50 years, technology has advanced and changed in an exponential fashion.
    I'm wondering if you can give me an example of a technology from even a generation ago that had a similar kind of impact on our labour markets and on our society, about which there was this level of concern or caution or interest.
    The question is for both witnesses.

  (1715)  

    Sure. I'll start.
    The Internet comes to mind. I don't know that it was similar in terms of concern, but it was certainly similar in terms of being ubiquitous across many domains and really shifting the nature of many jobs. However, it did also create a lot of new jobs that were unimaginable before the Internet. I would say the Internet is a comparable example, even if the conversation at the time when the Internet was young had different tones to it.
    Mr. McKelvey, go ahead.
    I'm going to give a Canadian example: the BlackBerry. I actually remember being a worker, my bosses having a BlackBerry and walking around, and how cool they were. Really, I think the shift was towards an expectation of being connected and a change in the dynamic and pace of the work, which is what I was getting at in my comments. I also think there's been a shift in some ways, because I often feel as though our discussions of the Internet imagine us sitting in front of a computer and being thoughtful, whereas so much of what we turn to now is a mobile environment.
    I'm thinking about, say, the impacts of artificial intelligence and something like Google's new camera and how it allows you to delete people from pictures. The debates about what and how much you should be able to do with that are a good reminder about the way in which mobile technology and mobile phones have had a really important impact on the workforce. That has been studied in Canada.
     Mr. McKelvey, further to your points, the Internet and certainly the BlackBerry are two examples. I think they're both great examples. Was there a similar level of concern at the dawn of those technologies about privacy rights, for example, compared to what we're discussing now with AI?
    I was just thinking back about that. When we were talking about the early days of the dot-com boom and stuff like that, we weren't talking about companies of the same magnitude or influence. If anything, we've learned, and I think we can.... Partially, what I'm here for is to try to be more conscious about how those technologies are rolled out, in a more thoughtful way.
    When the Internet was coming about, I think there was this idea that it was connectivity and it was going to bridge digital divides, and some of those privacy concerns fell by the wayside.
    What has really become more prominent, at least with mobile technology and the ways mobile phones are really part of a fairly elaborate ad-tracking and surveillance network, is that those concerns have become more visible. Where we are now is that I hope we have learned from our debates and from the challenges we have now about platform governance, and know that, when I'm talking about a procurement hack with OpenAI, to me, it's the type of strategy we've seen companies use time and time again. I hope we're better and quicker at raising concerns about privacy and users' data than we were in the past.
    I think that's something I'd give back at least to the BlackBerry. It was a cool gimmick, but now I have to check my email all the time because I've been trained to do it, and I regret, in some way, that I didn't think about that sooner.
    Thank you very much.
    Thank you, Mr. Aitchison.
    Next we'll go to Mr. Van Bynen for five minutes.
     Thank you, Mr. Chair.
    Perhaps I'm starting to reflect on my age, but building on the transformation of technology that Mr. Aitchison referenced, I think about Netflix and how that has eliminated video stores. I think about Apple Music and how that has eliminated record stores and tapes. I think about how Uber has transformed the taxi business. I think about Zoom as opposed to a phone call. The technology has changed consumer behaviours and consumer demands, so it will have a very dramatic impact, I think, on the labour force.
    My first question is for Mr. Frank. I've often been concerned that regulatory bodies regulate through the rear-view mirror as opposed to through the windshield, which is where we should be focusing our attention. That raises the dilemma of what we can predict with a reasonable degree of certainty and what we cannot.

  (1720)  

    That's a very difficult question of predicting how emergent technologies will look in the future. Of course, if I were very good at this, maybe I would be playing the stock market instead of being here talking with all of you.
    It's a bit of a mystery. You can look towards recent shifts and recent dynamics to try to predict what will come next. In terms of regulating an area where technology is so new and we're discovering new capabilities and applications—it seems like every couple of weeks something new and exciting is highlighted—having voices from industry and from researchers as part of the regulatory conversation would be a good approach.
    I would recommend doing that in a way that allows those folks who are experts to share their views in a protected way so that there's a public-facing and also a private way for them to communicate with legislators. That would give you the best opportunity to understand what's happening next and to attempt to be ahead of it.
    You recommended the development of a decision framework given the fundamental uncertainty of being able to predict technological change. What considerations or principles should be part of that framework?
     I think having expert opinions from the bodies that are developing and deploying platforms will be essential. On the other hand, having folks who are informed, based purely on the empirics of how those tools are being used, which is sometimes out of the control of the developers, out of the control of the companies, is equally important.
    In my view, it's very difficult to get that type of data, but there are some options that might be helpful in seeing how workers are changing their use of technology in real time and also in seeing how employers are changing their demands around technology in real time. This would be a departure from what I have seen from official government statistics about the workforce.
     Like the United States, Canada is a very diverse country. Would you see a significant difference between the impact on workforces in large urban areas as compared with smaller towns or smaller communities? What would be the factors to consider as that rolls out?
    If we focus on generative AI, then I expect that there will be a lot of positive implications for the workers currently residing in cities. There will be a challenge to make sure that some of that economic benefit trickles down to workers in rural areas. This is because a lot of the work for tech companies or the work with data—the work that would be involved in innovating generative AI technologies but also benefiting from the tools you can build with these AI tools—is done by workers who tend to be in cities.
    The access to data and computing and these AI services also requires a lot of infrastructure. For example, access to high-speed internet is, of course, abundant in cities. It's better and better in rural communities, but it is not great everywhere. This is just one basic way to see that, even if the brightest minds were living in the most rural communities, there could still be infrastructural barriers in their way.
    Great.
    Thank you, Mr. Chair.
    Thank you, Mr. Van Bynen.

[Translation]

    Ms. Chabot, you have two and a half minutes.
    I really wish I had five minutes' speaking time, Mr. Chair.
    My next question is a short one and it's for Mr. McKelvey.
    In your opening remarks, you gave the example of call centres. Personally, I am in contact with many unions, and I have to say that when it comes to telecommunications, it's pretty appalling. I wasn't aware of some of the current realities. We don't need to look any further than Bell, Videotron or Telus; call centres are being relocated around the globe. That's causing a fairness issue for working relationships and working conditions.
    What difficulties will generative AI add to all that?

[English]

    I'll say that my expertise has historically been in the telecommunications sector. In looking at discussions of artificial intelligence there, I have seen a real turn towards automation. I thought there would be more debate, but in my review of the trade literature, there was a focus on automation, and automation in all parts. I think automation in the call centre with chatbots is a really immediate part of what's already taking place.
    I think, partially, it's important to look at the voices from below, the voices of those who are working and who experience the lived impact of these AI systems in their day-to-day workplace. I think that was an important part of my focusing on the call centre. For me, at least, that was the job of my future.

  (1725)  

[Translation]

    Let's hope the future is one of quality.
    Mr. Frank, in your presentation, you not only spoke about challenges and skills, but also about struggling students. I feel it's important to understand whether artificial intelligence is going to be an asset or a risk, particularly for struggling students. As we know, we will need to count on humans to support these students in terms of their skills, abilities and struggles.
    How will AI affect these students?

[English]

    Sure. That's a wonderful question.
    The same volume of research that exists on AI and its implications for skills in the workforce hasn't been carried out on the mechanisms by which workers get skills. Education would be one of the major mechanisms by which people get skills before entering the workforce, but I think AI is a tool that will really help educate students today.
    I'll give you a simple example. I'm a professor. Right now I have to field every email from every student when they have questions that need clarification from me. You could imagine that having an AI tool available to them instantly, in real time, at any hour of the day, to provide some of the clarifications that I or my teaching assistant provide could help them get an understanding. If there's still confusion, then they could submit a question to me or their TA.
    The other thing we see, at least in the few studies I've seen that are actually randomized controlled trials, where some workers have access to generative AI compared to those who do not, is that generative AI's biggest effect is in bringing non-expert performance up to the level of expert performance. If this observation holds in a variety of cases, what it could mean in the classroom is that underperforming students are able to reach the levels of high-performing students with access to these tools. That could be a great dynamic or a great result that helps everyone reach the same bar in higher education.

[Translation]

    Ms. Chabot, did you want to make a comment?
    I'd really like to continue this discussion, but I only have 15 seconds left. I don't think I can make use of them.
    Thank you.

[English]

     We'll get back to you.
    We have Ms. Zarrillo for two and a half minutes, please.
    Thank you.
    I am going to follow up on this topic or vein with Mr. Frank. We've talked for decades about intellectual property and how the intellectual property belongs to the company. It doesn't necessarily belong to the worker. It belongs to the company. We're now having conversations about cognitive property. A lot of the data that's already captured by large organizations came from someone's ideas, their education, their thoughts and their opinions, and it's now being monetized by someone else.
    I'm very interested in how we protect workers' cognitive property, especially now, in situations where we're starting to build a lot of that cognitive property into AI tools. Do you have some thoughts on how we can protect workers, Mr. Frank, when it comes to their opinions, education, skills, knowledge and talents?
     Sure. I just want to make sure I understand your question, though. Do you mean protecting the IP of workers who help to build these AI algorithms or the IP of the folks who generated data—and who may or may not be employed at the company—that was used to train the AI algorithm?

  (1730)  

    It's the first one.
     I'm not necessarily seeing it as algorithms, because the workers are just doing their jobs. They're answering phone calls or doing their job the way they always did, but it's now being captured in a way such that generative AI can answer with it later. Corporations are basically taking people's thoughts, opinions, intellectual property and cognitive property and monetizing them. How do we protect that for workers?
    All right. I understand better.
    I would say that this is not new to AI, this dynamic where the ideas, the thoughts and the perspectives of workers are getting coded into AI, just as the perspectives of programmers who build social media websites get encoded into the programming and the code behind the website.
    I would say that this maybe isn't a new topic. I think that having workers who are thinking about these issues—for example, representation and how we account for different viewpoints—and having people with those ideas embedded into the engineering side of these tools is really powerful for exactly that reason.
    Another thought that comes to mind is that the generative AI tools we're seeing now that are making big waves, things like ChatGPT and Midjourney for image generation, are not things that I could produce here with my laptop or even with the computers I have in my lab at the university. These really are things that require collaboration between smart people who can write very effective code and huge amounts of resources on the computing and training side of these AIs. I don't think that something like ChatGPT would have emerged without a collaboration between the smart people who do the coding and the resources that the company can put behind a project like that.
    Thank you, Ms. Zarrillo. We will get back to you, I'm sure.
    Mrs. Gray, you have five minutes.
     Thank you, Mr. Chair.
    Thank you to all the witnesses for being here today.
     Before I get into my lines of questioning, I would like to move the following motion:
The committee immediately undertake a five-meeting review on the disproportionate impact the carbon tax has on low-income individuals.
     This has been circulated to committee members.
    We know that the carbon tax is impacting vulnerable Canadians by raising the cost of basic goods like gas, home heating and groceries. The Liberal government has admitted that it's doubling down on its carbon tax plan, including quadrupling the carbon tax on Canadians. The temporary pause the Liberal government has announced for the carbon tax on home heating oil won't help 97% of Canadians. The committee needs to study how proceeding with the government's carbon tax policy adds costs to the lives of the most vulnerable.
    This is relevant to this committee specifically, because the committee's mandate speaks to the studies it can do and should prioritize, and that mandate includes income security and disability issues. The carbon tax affects income security by raising the price of basic necessities. As well, the carbon tax's increasing costs impact the most vulnerable in our society, especially persons with disabilities. We heard a lot of testimony at this committee during our study of the Canada Disability Benefit Act, where persons with disabilities were finding it hard to pay for basic necessities. We even heard of people considering medical assistance in dying, MAID, because they couldn't afford to live. All of that testimony came before the most recent carbon tax increase that happened this summer.
    I have moved this motion. I hope the committee will support it.
    Thank you very much, Mr. Chair.
    Thank you, Mrs. Gray.
    Just for the benefit of witnesses, a member of the committee has moved a motion, which is the prerogative of the committee member. We have to deal with it before we get back to the continuation of the testimony on the study we're doing.
    It's my understanding that the motion is in order.
    Go ahead, Mr. Coteau and then Mr. Fragiskatos.

  (1735)  

    Can I say one thing, Mr. Chair? Maybe we can get an indication of how long this debate is going to be. I just don't know if it's going to take a long time.
    The witnesses are very busy people. I don't want them to have to sit here for 15 minutes to half an hour and then for us not to have the opportunity to finish what we're doing.
    Mr. Chair, we should just go to a vote.
     I cannot answer that. That's totally the prerogative of the committee.
    The discussion is on the motion that is now before the committee. It is in order.
    Mr. Fragiskatos, go ahead with discussion on the motion.
    Mr. Chair, just out of respect for our colleague Ms. Chabot, who had stepped out, I think she knows where we're at, but I wonder if you could just make it clear that a motion has been presented.
    It's just so that we're on the same page. She was away.
    Yes.
    Committee members, Mrs. Gray has moved a motion. I will ask the clerk to read the motion as currently on the floor for debate.
    The motion is as follows:
The committee immediately undertake a five meeting review on the disproportionate impact the carbon tax has on low income individuals.
    Is there any discussion?
    Go ahead, Mr. Fragiskatos.
    I have just a quick comment, Mr. Chair.
    This issue, along with others relating to the carbon tax, has been debated at length in the House of Commons, and it will continue to be debated at length in the House and at the committees that are the relevant venues for it. Because of that, I think we should allow those conversations to continue in those forums.
    For that reason, our side will not be supporting Mrs. Gray's motion.
     Seeing no further—
    Ms. Zarrillo, go ahead on the motion.
    Thank you, Mr. Chair.
    I actually appreciate the comments around the mandate of this committee.
    We do know that many families are struggling and many people are struggling, and the Canada disability benefit is something we'd all like to see advanced much more quickly.
    I want to discuss something. In March of this year I brought forward a motion that I didn't table. I just sent it out to committee. Really, I'm interested in tax credits. What are the tax credits like? How can we increase income for people?
    I know one thing for sure: seniors and persons with disabilities often don't file their taxes. They don't get their taxes in on time, and then they lose their GIS and some of their income supports and entitlements. I've also found out over the past two years that there are students coming out of school who don't understand what entitlements and income supports they have.
    Although I'm all for trying to understand how we can increase income for people, I'm concerned that this motion is too narrow in its scope, in that it looks only at the carbon tax. I would like this committee to sit down together and maybe have a discussion about really taking a look at the income supports that vulnerable people need, the supports they haven't accessed, and the entitlements they deserve but haven't been able to access because of different barriers, maybe even because they haven't filed their taxes.
    Something I am thinking about is automatic tax filing. It would be a great opportunity to increase income.
    Although I like the spirit of it, I think we need to have a wider discussion about how we support vulnerable people in this country.
    I'll just leave it there.
    Thank you, Ms. Zarrillo.
    Madame Chabot, go ahead on the motion.

[Translation]

    Thank you, Mr. Chair.
    I understand what my colleague from the NDP, Ms. Zarrillo, is saying. However, Ms. Gray's motion as presented asks that our committee undertake a study on the carbon tax. The link to our committee is low-income people, while other committees cover the other aspects.
    I disagree with Ms. Gray's motion. The considerations around the pros and cons of the carbon tax have been widely debated. I don't believe it's relevant for our committee to discuss it.
    Thank you.

  (1740)  

[English]

    I see no further discussion.
    Mr. Clerk, could we have a recorded vote on the motion presently before the committee?
    (Motion negatived: nays 7; yeas 4)
    The Chair: We will return to the witness testimony.
    Mrs. Gray, you have four and a half minutes.
    Thank you, Mr. Chair.
    That was really unfortunate, considering how much people are hurting, but I'll go into the questions I have for the witnesses here today.
    I have the same question for both of the witnesses, and I'm wondering if I can get your feedback. The U.S. has just released its AI rules. I'm wondering if you have had an opportunity to go through them. Specifically, do you believe there's a benefit to Canada potentially harmonizing our rules with the AI rules the U.S. is using, and perhaps with those of other countries, like those in the EU? I'm wondering if you can comment on that.
    Maybe we'll start with Mr. Frank.
    Sure. Thank you.
    It's a good question. I haven't reviewed all of the details of the Biden administration's executive order. I know there's a lot of concern there about jobs, about data privacy and about IP and ownership. I think there is a big risk that each country having its own regulations on each of these dimensions would create a real problem, such that no country's regulations would end up being effective.
    The thing about AI is that it's digital, so it's easy to ship data from one country to anywhere else in the world, to use that in an AI system and to ship the results or even the code base for the AI itself. It's easy to share across borders.
    I would expect that it would be much more effective if countries could collaborate to agree on a standardized set of regulations along all the dimensions they think are of concern.
     Thank you.
    I'll go to Mr. McKelvey to answer that as well.
    Yes, I've been able to review it briefly, but not in complete depth. I'd say that it certainly demonstrates the clear gaps that I see in Canada's approach with the artificial intelligence and data act. You see a much fuller treatment of potential harms and a willingness to engage with the sector-specific issues around artificial intelligence. I think it's a document worth studying just to appreciate the complexity of the challenges facing regulators and legislators...and then comparing that with AIDA.
    I would agree with Dr. Frank that there is probably a need for a harmonized approach. Canada is quite active in that to some degree, whether it's participating in the Global Partnership on AI or in some of its bilateral agreements with France or the United Kingdom. I think Canada is going to have to position itself so that it's at least working in parallel with the United States, and I know there are efforts to talk about treaties with the EU around AI.
    The one thing I would say is that, with Bill C-27 and Quebec's Law 25, I think there is a big test around GDPR compliance. Really, what should be front and centre when we are talking about our legislative agenda for AI is understanding it in relation to the movement that's happening in Europe around the AI Act and, to a lesser degree, in the United States, although I commend what that order has been able to accomplish.

  (1745)  

    Great. Thank you.
    You answered part of the next question that I was going to ask.
    I'll pose this to Mr. Frank, then.
    When we're looking at future trade negotiations, how do you see that this might fit in? Are there any trade issues that we should be aware of now—anti-competitive effects for Canada?
    You only have a minute to respond, so what are your thoughts on that?
     Quickly I'll say that there is a concern about a consolidation of power right now: There are just a handful of companies that are able to build these highly powerful AI systems.
    On the other hand, in trade negotiations, one thing I'm concerned about is that the data from one population can be used to train software in another country.
    Coming up with ways that allow people to be a connected global society, but also protect the interests of a population from misuse by a firm somewhere else operating under a different set of rules, will be an important issue to address moving forward.
    Thank you, Mrs. Gray.
    Mr. Kusmierczyk, you have five minutes, please.
    Thank you, Mr. Chair.
    I thank the witnesses for an excellent and illuminating conversation on an important topic.
    Professor Frank, we talk about the impact of AI on workers and the labour force. How will we measure that impact on workers, and how should we think about measuring it across various sectors?
    I love this question. I spend a lot of time researching this question.
    What you'll find from research on automation is a lot of use of the word “exposure”. Workers are exposed or tasks are exposed to AI. There is not a lot of commitment to what “exposure” means. That is because some workers are freed up by technology to do other things that complement AI, so they become more productive and more valuable with AI. In the extreme case, where many of your tasks are automated by AI, you can be completely substituted for, and that would be a negative outcome for the worker.
    I think we need to be more specific than just saying that a worker or a task is exposed moving forward. The way to do that is to get data on how skill sets shift in response to the introduction of AI. When a new tool is introduced, in a dream world, we would have data that reflects what every worker is doing all the time.
    Of course, there are a lot of privacy concerns with that, but for the sake of conversation, let's just imagine that world. We would have very good information on what changes when a worker is introduced to a new tool. You can even imagine having these little natural experiments, where there's randomization in who does and does not have access to a technology. You could start to get at the causal impact of technology shifts.
    That would be the ideal. I think there are some things that are a few steps away from the ideal that would also be very useful.
    I'm much more familiar with the labour statistics we get in the U.S. than in Canada. Those of you who read my brief probably picked that up very quickly.
    Very important labour dynamics, like job separation rates or unemployment, are not typically described by industry, firm or job title. Getting at those concepts at a more granular level would bring us much closer to the consequences of shifts in skills and would allow for more proactive policy interventions, not just for AI but for any labour disruption moving forward.
     Typically, when we look at the impact of technology on an industry, for example, we look at, let's say, the impact on income. We also look at, as you said, unemployment, job loss and so on. Those are very blunt instruments. Is that correct, in your opinion?
    That doesn't paint the full picture of the impact of AI on the workplace. Is that correct?
     Yes, that's correct; it absolutely does not paint the full picture. In the U.S. context, at least, I can show that there are changes in the probability of receiving unemployment benefits, or shifts in job separation rates, at least in terms of estimates from my research, that aren't very correlated with changes in the employment share for an occupation within a state.
    I think there is evidence to show that looking at shifts in employment and shifts in wages misses out on other dynamics that can occur, in particular the negative dynamics that we're most worried about with AI.
    Thanks, Professor.
    I have a question for Professor McKelvey.
     We just completed a summit here, a caregiving summit, on Parliament Hill. In terms of caregiving, there are eight million Canadians who are caregivers. A lot of them are unpaid caregivers. We also have a large paid caregiving part of our economy. We're talking about nurses, PSWs, home care, child care and whatnot.
    I wanted to ask you if you could talk about how AI might impact caregiving in Canada. If you're not able to speak about that in particular, what questions could we be asking to find out what the potential impact of AI is on caregiving in Canada?

  (1750)  

    Obviously this is not my area of expertise, but I will say that I have been pulled into some of these discussions because, in Montreal, there was a proposal to introduce a robot into a seniors' home as a way of providing care. That led me into a bit of an investigation of what the effects of this are.
     The best place to look at as a parallel is Japan. I think there have been a lot of efforts at the automation of caregiving in the Japanese context, but it largely hasn't been effective, because many of these technologies cost as much to maintain as it would cost to properly resource the caregivers already in place.
    I think there's a kind of shifting of values here, which again points to the cultural impacts of artificial intelligence: the assumption that the technology is going to do a better job than simply paying a nurse or a caregiver properly for that function. I would say that, in everything I have seen, the benefits are overstated compared to the potential. Also, any benefits, and I'm not saying there aren't any, have to fit within a larger holistic system of care, one that makes sure there are actually proper resources to support our frontline caregivers.
    Thank you, Mr. Kusmierczyk.
    We have Mrs. Falk for five minutes.
    Thank you very much, Chair.
    I know that in the past this committee has done a study on precarious work. Precarious work is growing and is actually quite prevalent.
    I believe it was you, Professor McKelvey, who mentioned the word “precarious”, so I thought I would ask you this question: Do you think precarious work will become more prevalent, more common, as we see AI being developed or even absorbed by business?
    Acknowledging my poor powers of prediction, I can't draw a direct correlation between a rise in precarious work and artificial intelligence.
     I would say that where we'll see a significant impact, and where we want to attend to AI's effects, is around precarious workers and gig workers, because we know that these are the workers who are already subject to algorithmic management, already subject to new forms of workplace surveillance and, ultimately, have complicated data arrangements with their platform providers, which are often trying to figure out ways of managing them.
    The other part, I would say, is that if we're looking at a shift towards more hybrid environments and changing ways that organizations are designed, there is certainly, I think, a push towards creating more services that are on demand, which potentially invites a precarious, gig-worker kind of relationship. I feel as though what's partially at risk here is that the way platforms are reorganizing workforces could give rise to more plug-and-play types of jobs, and those jobs don't carry the same protections, because largely the workers would be contractors.
     Would you say that legislators should take into consideration precarious workers specifically?

  (1755)  

    Yes, definitely. I commented earlier on asymmetrical bargaining power. If you're looking at online social media platforms, you have real data imbalances there. We have mounds of evidence about workers, content creators online for example, being subject to how platforms change their data analytics and how the platform works. Really, that demonstrates a way in which the workplace is very tangibly precarious, because how the platform works can change overnight.
    I think that speaks to the evidence of a growing part of the workforce, but also to the lived impacts of what that looks like. If you're dealing with a company that is moving towards more dynamic forms of management through emerging AI strategies, that certainly creates conditions of precarity.
    Thank you.
    I know that we have heard at this committee—and for sure I have in my office as a member of Parliament, as I'm sure all members of Parliament have—about how there is a labour shortage in every single industry sector in Canada. It doesn't matter what it is—they need people and they can't find people.
    Just quickly from both of you, because I'm running out of time, do you believe there will be industries that will be more prone to job displacement when it comes to AI? As well, how can industries prepare for that?
    I'm happy to go first.
    Yes, I think the impact of generative AI, as with any other technology, will be biased towards certain industries. It's not usually a blanket impact across the whole economy. In the case of generative AI, I imagine that we'll see a lot of advances that are a boon for workers and for capital in tech, but we'll also see new opportunities as a consequence of these new tools in areas not necessarily involved in their development, such as medicine, communications and media.
    I think a lot of spillover effects are yet to materialize. There are a lot of people working on it, and I expect that they will produce something.
    We will conclude with Mr. Fragiskatos, Madame Chabot and Ms. Zarrillo. We lost a little time on the motion, so this is to be fair to everybody.
    I have Mr. Fragiskatos for five minutes.
    Thank you very much, Chair.
    Thank you to both presenters today.
    I'll ask you both the same question. It's a very general question. I like to do this, because it does help summarize what we hear from witnesses, especially when the topic is broad and also very important for public policy. Obviously, we're looking at AI with specific reference to labour, but there are many ways to look at that.
    How can we take from your testimony the most important parts? What would you say are the key things that you would want us as a committee to keep in mind when looking at this issue going forward and when we ultimately provide recommendations to the government on the way forward?
    First, there is a need to consider this alongside Bill C-27 and the ways in which we're trying to understand privacy and data. Part of what is really important now is recognizing data power. What AI demonstrates is that there's power in collecting large amounts of data, because you can now mobilize it. Really, it's about thinking of privacy law and data as bigger than the traditional concerns about personal information. That's an important broader shift that we've been witnessing, and AI just drives it home.
    I think the second thing is then trying to understand these uneven and disparate impacts. Certainly we're going to hear ample evidence about the benefits of artificial intelligence. I think it's incumbent on the government to understand and protect those marginalized and precarious workers who might be on the outside of those benefits.
    That's certainly part of what's going on with generative AI, and it's why there's so much attention right now: we're trying to understand a different class of workers, typically white-collar creative workers, who are potentially now facing greater competition from automated solutions. That's not to say that the effects are going to be easy to predict, but we are seeing a marked shift, and that needs to be taken into consideration in how we understand the relationship between AI and the labour market.
    Finally, it's ensuring that we have strong protections for workers and that this is something we value as a society and build into how we frame our legislative agenda.

  (1800)  

     Thank you very much.
    I'll ask the other witness the same question. What is one thing you want us, as a committee, to really keep in mind among all the very important things that were raised here today by both of you?
    I'd say the most important part of my testimony is that, if you feel blindsided or surprised by what AI can do right now, you're not alone. The research community, economists, computer scientists, we've all been really surprised by what recent examples of generative AI can do. The reason we're surprised is that the tools, the data and the framework we've been using to think about AI and the future of work, at least in my case, are clearly outdated. They aren't dynamic enough to account for what generative AI can do.
    Moving forward, I would suggest adapting the data that policy-makers and researchers use so that we can be more responsive to what AI is actually doing and better prepare the workers who are directly in the path of AI with that improved data. Data on skills, I think, would do a lot to help provide insight in both the policy-making and the research domain.
    Our chair tells me I have time for one more question. Actually, the question doesn't belong to me. It belongs to my colleague Mr. Van Bynen, who wanted clarification on what is meant by “algorithmic management”. Just for the record, that would be helpful.
    Algorithmic management is a broad blanket term for different types of techniques that use computers and AI, predictive analytics, to schedule workers, track their performance, evaluate them and assign jobs. I think Uber is a good example of algorithmic management, and it ties into employee monitoring programs, or the types of systems tied to HR that monitor and evaluate workers' performance.
    Another example I know of is that, if you're a gig or freelance worker, you often have to install tracking software that takes screenshots to measure your productivity over a certain period of time. That's the broad suite of what I'm talking about. These are just new and more invasive forms of monitoring workers and of workplace surveillance, and part of that is tied into using that data to manage workers.
    Thank you, Mr. Fragiskatos.

[Translation]

    Ms. Chabot, you have two and a half minutes.
    Thank you, Mr. Chair.
    My question is for both witnesses. I should have time to get brief responses.
    The study we're doing specifically looks at the impact of artificial intelligence on the workforce. We could also ask whether these technologies have a greater impact on women and people with disabilities, and whether that constitutes discrimination against them.
    At this stage of our study, if you had one or two recommendations for us, what would they be, Mr. Frank?

[English]

     I would recommend seeking out good, detailed empirical information about which workers are actually experiencing disruption because of AI, exactly what that disruption looks like and, therefore, what that means for their job security and their ability to find a new employment opportunity if that's the type of disruption they're facing.

[Translation]

    What do you think, Mr. McKelvey?

[English]

    Briefly, I would point to a study by the Immigrant Workers Centre of Montreal looking at applications of algorithmic management in warehouses. I think hearing from workers, particularly workers who are impacted by this, is an important part of making sense of this and of trying to keep up with what's taking place. Part of the deeper issue is building the capability to monitor technology development, through something like the Office of Technology Assessment that the United States previously had, in order to understand those impacts.

[Translation]

    Mr. Frank, I'm going to ask you a question about data, because I want to be sure I understood you correctly.
    In your testimony, you said that more data was needed. That was also part of your recommendations.
    You said that unemployment would provide us with more data, if I understood what you meant correctly. That worries me somewhat. I'm all for using technology to perform certain tasks, but not to replace employees. If it puts jobs at stake, in our opinion, it shouldn't be a solution.
    I want to hear your thoughts on this. When we adopt a new technology, shouldn't we be aiming for retraining rather than unemployment?

  (1805)  

[English]

     I did not mean to say that unemployment is a solution. Just to clarify, what I meant is that it's often difficult to track why unemployment is occurring. You might see a spike in unemployment in a certain province in Canada and want to understand why that's occurring, and it's not always so easy. You have to dig into other data sources to better understand exactly which industries or which workers in particular are experiencing unemployment.
    Having unemployment data that actually gives you insight into who's experiencing that level of disruption will be very helpful in forming a proactive response. The way I actually see this playing out is through better estimates of the probability that workers will receive unemployment benefits, given the labour market they are in, their job title.... I can imagine other factors playing a role as well. For example, their level of education and maybe their ethnicity and gender would be interesting additional variables to have. The idea is to think about these as unemployment risks based on where workers are in the economy.
    Thank you.

[Translation]

    Thank you, Ms. Chabot.

[English]

    To conclude, we have Ms. Zarrillo for two and a half minutes.
    Thank you, Mr. Chair.
    I want to bring forward my motion again today for consideration. I'll do it quickly, and maybe I'll have a little bit of time at the end.
    This is in relation to persons with disabilities and their experiences with Air Canada. I'm sure all of us have also seen the Marketplace story since Monday.
    I want to move the following motion for consideration. I move:
That, given multiple recent reports of persons with disabilities facing discrimination and unacceptable treatment while travelling with Air Canada, and that Air Canada admitted it violated Canadian disability regulations, that, pursuant to Standing Order 108(2), the Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities invite Air Canada CEO Michael Rousseau as soon as possible to committee for a minimum of one hour to explain these violations related to persons with disabilities and how they will rectify this situation; that a report of this meeting be prepared and presented to the House; and that, pursuant to Standing Order 109, the government table a comprehensive response to the report and explain how they will rectify this situation.
    Thank you, Mr. Chair.
    Thank you, Ms. Zarrillo.
    The clerk has advised me that the motion is in order.
    Is it the wish of the committee to adopt the motion?
    (Motion agreed to)
    The Chair: The motion is adopted unanimously, Ms. Zarrillo, and you still have two minutes left.
    Thank you so much.
    My question is for Mr. McKelvey.
    You mentioned Bill C-27 quite a bit. It's quite extensive. I'm wondering if you think that the labour portion, the workers portion, of artificial intelligence should have its own stand-alone legislation or if you think workers will be duly protected in Bill C-27.
    I would say two things briefly. Bill C-27 builds in large exemptions for what types of data can be collected, such as data that is anonymized or collected for legitimate business purposes. I feel that actually warrants more consideration of what it entails and of the potential impacts it has on workers.
    The second part is that, really, what these exemptions do is.... They are backstopped by AIDA, the artificial intelligence and data act at the end of the bill, which raises some notable concerns because it puts a lot of the investigative powers in a loosely defined data commissioner role. I actually feel as though part of the task ahead in the legislative agenda is shifting our treatment of AI from being simply a matter of economic strategy towards also thinking about ways of addressing its potential negative and positive social impacts.
    Yes, I think some ways of addressing how this impacts labour and trying to make sure that there is targeted legislation would be a boon, because I think this is not something that is going to be addressed by an omnibus bill.
    Thank you so much. I'm going to take that as meaning that a second piece of legislation is required.
    I just want to ask.... You mentioned Uber. Are there one or two businesses that you would recommend this committee speak to in relation to this study?

  (1810)  

    I don't have the names of the companies off the top of my head, but I would be looking at some of the HR firms that are providing some of these management services. I think that's a part of the.... I'd be happy to follow up if there is a way of providing comments, but I actually think it's interesting to look at what these firms are actually trying to do in terms of integrating AI into HR. There's a big boom in that industry, so I think it would be really helpful to hear how these types of tools are being developed, but I don't have the names of companies off the top of my head. My apologies.
     Thank you, Ms. Zarrillo.
    Dr. McKelvey and Dr. Frank, if you want to provide a written response to Ms. Zarrillo's question on companies that would be of interest for the committee to hear from, you can provide that in writing to the clerk of the committee.
    With that, I want to thank both of you for appearing before the committee today and providing very informative testimony on this emerging topic that will be discussed for some time.
     We will conclude this portion of the meeting, suspend for a few moments and come back in camera for committee business.
    Dr. McKelvey and Dr. Frank, you can exit Zoom whenever you wish. Again, thank you so much.
    We are suspended.
    [Proceedings continue in camera]