HUMA Committee Meeting

Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities


NUMBER 089 | 1st SESSION | 44th PARLIAMENT

EVIDENCE

Monday, November 20, 2023

[Recorded by Electronic Apparatus]

  (1100)  

[English]

     Members, the clerk has advised me that we have a quorum and all witnesses have been sound-tested and are okay. With that, I call to order meeting number 89 of the House of Commons Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities.
     Pursuant to Standing Order 108(2), the committee is resuming its study on the implications of artificial intelligence technologies for the Canadian labour force.
    Today’s meeting is taking place in a hybrid format, meaning that members are attending in person in the room and virtually.
     You have the option of participating in the official language of your choice. Those appearing virtually can use the globe icon at the bottom of their screen to select interpretation. If there is an interruption in the interpretation services, please get my attention by using the “raise hand” icon, and we'll suspend while it's being corrected.
    I will remind members appearing in the room to keep their headsets away from the microphone to avoid causing feedback for the interpreters. I would also ask members to speak clearly and slowly for the benefit of the interpreters.
    We have two panels today.
     With the first panel, we have, as an individual, appearing by video conference, James Bessen, professor and director of the technology and policy research initiative at Boston University; in person in the room, Angus Lockhart, senior policy analyst at the Dais at Toronto Metropolitan University; and, appearing by video conference, Olivier Carrière, executive assistant director to the Quebec director of Unifor.
    Welcome back. I believe, Mr. Carrière, that there were issues the last time. Thank you for coming again.
     We will begin with opening statements, beginning with you, Mr. Bessen, if you are ready with your opening statement for five minutes or less.
    AI has gotten an awful lot of media hype, and I think that makes it very confusing to understand what its impact will be.
    I tend to view it as much more continuous with the kinds of changes that information technology has been bringing about for the last 70 years, particularly regarding the role of automation.
    There are tremendous and exciting things that AI can do. Some of them are very impressive. Many of them, unfortunately, are still very far removed from the point at which they can replace labour.
    In fact, what tends to happen—and this has been true throughout the period—is that automation mainly pertains to automating specific tasks of a job rather than the entire job, and a lot of people misunderstand that. There are very few jobs that have been completely automated by technology. I looked at the U.S. census and identified occupations that had been completely automated by technology. I found only one, which was elevator operator. Other jobs were lost and other occupations disappeared because technology became obsolete or tastes changed, so we no longer have telegraph operators and we no longer have housekeepers of boarding houses.
    That's been over a period in which technology has had a tremendous impact on automating tasks and affecting labour and productivity. What it means, basically, is that there's been a lot of fearmongering about AI causing massive unemployment. We've been using AI since the 1980s, and we're not seeing massive unemployment. I don't think we're going to see massive unemployment any time in the next couple of decades, but we are going to see many specific jobs being challenged or disappearing, and new jobs being created.
     The real challenge of AI for the labour force is not that it will create mass unemployment but that it will require people to change jobs, to acquire new skills, to maybe change locations or to learn new occupations. These transitions are very costly, can become burdensome and are a major concern.
    There's a second thing I'll point out, but I don't want to be long here. Another major impact—and this has been true of information technology for the last two decades—is that AI has done a lot to increase the dominance of large firms. We see that large firms are acquiring a larger share of the markets. They're much less likely to be disrupted by innovators in the traditional Schumpeterian fashion, where the start-up comes along with the bright new idea and replaces the incumbent. That's happening less frequently.
     That's important for a number of reasons, but it also affects the labour force in a couple of ways. One is that large firms tend to pay more, in part because they have advanced technology, and this tends to increase wage inequality. Information technology has been boosting differences in pay, even within the same occupations: the same job description will pay much more at a large firm.
    The second thing is that partly because of that, there's a really significant talent war, with these new technologies requiring specific skills that work with the technology. I'm talking not just about STEM skills but all sorts of skills of people who have experience adapting their skills to work with the technology. They're in great demand, and large firms have an upper hand in the talent wars. They'll pay more; therefore, they can recruit more readily.
     There's nothing wrong with their paying more—we want labour to earn more—but at the same time, it means that smaller firms, particularly innovative start-ups, are having a harder time growing.
    We see that the growth of start-ups declines in areas where large-firm hiring is predominant. That becomes sort of an indirect concern for labour.

  (1105)  

    I will just wrap it up with that. Thank you.

  (1110)  

    Thank you, Mr. Bessen.
    Mr. Lockhart, go ahead for five minutes, please.
     Thank you, Mr. Chair, for the invitation to address this committee today.
    My name is Angus Lockhart. I'm a senior policy analyst at the Dais, a policy think tank at Toronto Metropolitan University, where we develop the people and ideas we need to advance an inclusive, innovative economy, education system and democracy for Canada.
    I feel privileged to be able to contribute to this important conversation today. In addition, I have a brief I co-authored with Viet Vu. I'm submitting it, and it will, hopefully, be available soon.
    Today I would like to talk about three things—what we know about past waves of automation in Canada, what the Dais has learned from our research into the impact of automation on workers, and how the current wave of automation is different from what we have seen before.
    First, I want to set some context for my remarks. The concern for workers in an age of automation is not new. In fact, it has been ongoing for more than 200 years, since machines started to enter the economy. What we have seen through many waves of automation, in the end, is not mass unemployment for the most part, but increased prosperity.
    Our research at the Dais suggests that AI is much like past waves of automation. The risk from AI to those whose jobs are likely to be impacted is smaller than the risks to Canada of not keeping pace with technological change, both on productivity and on remaining internationally competitive. This, however, does not mean that there aren't any bad ways to use this technology, or that adoption won't hurt at least some workers and specific industries. The question has to be how we can support workers and be thoughtful about how we adopt AI, not whether we should move ahead with automation.
    The good news is that we're still in the early stages of AI adoption in Canadian workplaces. Our recent research shows that just 4% of businesses employing 15% of the Canadian workforce have adopted AI so far. Less than 2% of online job postings this September cited AI skills. Most people are not yet exposed to AI in their workplace. This is likely and hopefully going to change over the next decade, making now the time to act and put in place frameworks that support responsible adoption and workers.
     In order to do so, we ought to understand how this technology differs from what came before it. Probably the biggest change in the latest wave of large language models is how easy they are to use and how easy it is to judge the quality of their outputs. Both the inputs and outputs of tools like ChatGPT are interpretable by workers without specialized technical skills. Previous waves of automation, by contrast, required technical skills to implement in the workplace and produced outputs that were often not interpretable by lower-skilled workers.
     This means that the new wave of AI tools is uniquely positioned to support lower-skilled workers rather than automating entire tasks that they previously did. Evidence from some initial experimental research suggests that in moderately skilful writing tasks, the support of a GPT tool helps bridge the gap in quality between weaker and stronger writers.
    That said, we also want to acknowledge that previous waves of automation and digitization in Canada have not had fully equitable outcomes. While, in general, increased prosperity has improved quality of life for all Canadians, the benefits have nonetheless been disproportionately concentrated among historically advantaged groups. With AI we run the risk of this again being the case. It's currently being adopted most quickly by large businesses in Canada, and those tend to be owned by men. However, because we are still in the early stages of AI adoption in Canada, there is time to make sure it's not the case. We can't afford to miss out on the prosperity that AI offers, but we need the prosperity to uplift all Canadians and not just a select few.
    I want to end by saying there's still a lot of work to do here. At the Dais we're going to continue to research and try to understand how generative AI can be and already is used in the Canadian workplace and what the impacts for working Canadians are.
    Our work relies on data collected by Statistics Canada in surveys like the “Survey of Digital Technology and Internet Use”. We're glad to see that this committee is taking a serious look at this issue. Continued support for and interest in this kind of research puts Canada in a better position to tackle these challenges.
    Thank you again for the opportunity. I will be happy to answer questions when we get there.
    Thank you, Mr. Lockhart.
    Now, Mr. Carrière, please go ahead for five minutes.

[Translation]

     The fundamental problem with algorithmic management is that we have no information. There’s no framework for all kinds of elements. There seems to be a wish to pass this problem on to unions and employers, but unions can’t be the solution for managing artificial intelligence in the workplace, when we know that the unionization rate is around 15% in the private sector. This will require a regulatory framework deployed by every level of government.
    Nothing is known. No doubt the clauses in collective agreements relating to technological change were used to address artificial intelligence issues, and that was a mistake. It was a mistake because, often, the triggers for technological change clauses are tied to job losses or potential job losses. Unfortunately, that doesn’t address artificial intelligence, which raises a multitude of situations that don’t result in job losses.
    We hear about artificial intelligence as if it’s something positive that will lighten the load on workers. Unfortunately, there’s a downside, such as reduced autonomy and increasingly intrusive surveillance. Workers are constantly being monitored, since algorithms need data to do their jobs. We don’t know how this data is stored, how it’s analyzed or how it’s reused. The ability to collect data is not regulated. We therefore need to regulate data and what is done with it, but above all we need to regulate and mandate dialogue between employers and employees to understand the whole issue of explainability and transparency. There isn’t any.
    For years now, we’ve been using tools that make decisions on behalf of workers, but they haven’t been presented as algorithmic management or artificial intelligence tools. They were simply described as new tools. For example, at Bell Canada, there’s the Blueprint tool for customer service staff. When speaking with a customer, workers are required to follow a decision tree that tells them what to do based on the customer’s stated problems. The employee’s judgment is completely removed from the process. What’s more, the employee must enter data into the tool to ensure that the various interpretation scenarios are effective and appropriate for the customer.
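
For illustration, a scripted troubleshooting tree of the kind described above can be sketched in a few lines. This is a hypothetical sketch only; the questions, steps and structure are invented for illustration and are not Bell's actual Blueprint tool.

```python
# Illustrative sketch only: a hypothetical customer-service decision tree of the
# kind described above, in which the agent follows prescribed branches instead
# of exercising judgment. Every question and step here is invented.

DECISION_TREE = {
    "start": ("Is the customer's modem power light on?", {"no": "check_power", "yes": "check_sync"}),
    "check_power": ("Instruct the customer to plug the modem into a working outlet.", None),
    "check_sync": ("Is the sync light blinking?", {"yes": "reset_modem", "no": "book_technician"}),
    "reset_modem": ("Instruct the customer to hold the reset button for 10 seconds.", None),
    "book_technician": ("Schedule a technician visit.", None),
}

def run_tree(node: str = "start") -> None:
    prompt, branches = DECISION_TREE[node]
    if branches is None:
        print(f"Required action: {prompt}")  # a leaf: the agent must perform this step
        return
    answer = input(f"{prompt} (yes/no): ").strip().lower()
    run_tree(branches.get(answer, "start"))  # unrecognized answers restart the script

if __name__ == "__main__":
    run_tree()
```

The point of the sketch is structural: the agent supplies only yes/no observations, and every action comes from the tree, which is where the loss of autonomy described in the testimony arises.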
    This is done in various industries, such as transportation, where algorithms make decisions for truckers, whether it’s about the best route or the best driving practice to use. This completely eliminates the individual’s judgment and ability to drive their vehicle as they see fit. They are required to follow the tool’s instructions; the tool manages them.
    The Organization for Economic Cooperation and Development, or OECD, has laid down four principles: artificial intelligence must be oriented towards sustainable development, it must be human-centred, it must be transparent and explainable, and the system must be robust and accountable. At present, we have none of those things, because there’s no disclosure obligation. In our view, this is the first step that needs to be taken. It’s about knowing the tools, understanding their effects and then implementing solutions that truly benefit from the efficiency or added value of technological tools in the company.
    We’re in a period marked by a shortage of workers. It is simply untrue that we’re going to transform a customer service operator into someone who will program or manage algorithmic tools. In any case, in Quebec, there’s currently a shortage of 9,000 to 10,000 workers in the IT sector, and our workers can’t fill that gap. There’s a kind of vicious circle that has to stop, and it has to start with the implementation of mandatory disclosure or mandatory dialogue between employers and their employees.
    Thank you very much.

  (1115)  

    Thank you, Mr. Carrière.

[English]

    We will begin the first six-minute round of questions with Ms. Gray.
    Please proceed, Ms. Gray.
     Thank you, Mr. Chair, and thank you to all the witnesses for being here.
    My first questions are for Angus Lockhart from Toronto Metropolitan University.
    You stated, as part of an article, that, “While some medical practices benefit from the inclusion of AI, there are serious privacy risks in feeding private medical data into a computer model that must be addressed.”
    I just want to confirm that this was something you wrote.
    Do you believe Canada's privacy laws are adequate to address these privacy issues?

  (1120)  

    Yes, that is something we wrote. I think I co-authored that with Viet Vu as well.
     I don't know, strictly speaking, if the existing laws are adequate. I do know that AI is going to require new forms of medical privacy. As data gets fed into these large algorithms, there's an opportunity for the algorithms to spew that back out in a way that we don't or can't anticipate. It requires a degree of care that is larger and more significant than with previous tools. We've seen with ChatGPT and tools like it that there's a risk that whatever gets fed into them can come back out. It's very challenging to incorporate systems that will prevent that from happening, or at least to make sure you're extremely confident that it won't happen.
    Thank you.
    Do you believe there are security and privacy concerns that are currently barriers to the adoption of AI?
     We did a study on the adoption of AI in Canada. We found that for the most part, very few businesses actually see security and privacy concerns as a barrier. Less than 3% of businesses that have yet to adopt artificial intelligence cite anything like that as their concern. For the most part, people really just don't know what tools are available to their business.
    Thank you.
    If government were to amend privacy laws, do you believe that would help remove some of those barriers? Are there concerns that privacy laws in Canada may not be helpful to protect people's privacy?
     Yes. I think there's room to provide more clarity for businesses on what the privacy concerns are and what they need to be really careful about. To a large degree, a lot of that will probably have to fall on the developers of the actual AI tools rather than on the businesses implementing them.
    In general, I think there is always room to help support that, but it probably wouldn't be a massive driver of increased adoption in Canada, even if it were improved.
    Thank you.
    My next questions are for you, Professor Bessen. You contributed to a paper last year that talked about AI start-ups. Do you think AI development poses ethical and data access issues?
     Yes, definitely. We surveyed AI start-ups about the kinds of ethical issues that they were attempting to control, and they saw a very definite need. We were surprised, actually. We thought that ethics would be the last thing on their radar, and in fact the majority were actually implementing things and taking actions that had some teeth in them. In some cases, they let people go. There were concerns about bias that might arise in training.
    So yes, ethics has been important. I think it's going to become more important as these systems develop and we understand more about what they can do and what their effects will be.
    Thank you.
    Do you believe Canada's privacy legislation and protections are sufficient to address concerns with AI development?
    I'm sorry. I'm not a Canadian, so I'm not that familiar with Canada's privacy laws.
    Do you believe new AI technologies will create issues for workers with respect to intellectual property and antitrust issues—issues around ownership of data and privacy?
     Oh, absolutely. There are a bunch of things. First off, there's a huge issue in terms of intellectual property, copyright in particular. Large language models like ChatGPT are trained on a great deal of data from the Internet, much of which is under copyright protection. This can result in cases, some of which have been very clearly demonstrated, in which they more or less reproduce copyrighted material without permission.
    Antitrust is also an issue. I referred earlier to the effect of information technology generally increasing the dominance of large firms. I believe AI is going to accelerate that tendency. It's not directly an immediate problem for antitrust law, but it means that antitrust law is going to become that much more important as the dominance of these firms grows.
    I will also—

  (1125)  

     Thank you.
    I'm sorry to interrupt you. I have only a few more moments here.
     If I could just add on to that, what would you recommend the Canadian government look at specifically, especially around privacy laws? What recommendations would you give?
     In terms of intellectual property, I think there are some strong recommendations about copyright that need to be acted on, and that's going to be a big problem to work out. In terms of privacy laws, it's much more difficult; it concerns the extent to which privacy-protected information is being made available to AI systems that may reuse it in a different way. This is the problem the other speaker referred to.
    It is—
    You can continue with another questioner.
    Thank you, Ms. Gray.
    We'll go to Mr. Coteau for six minutes, please.
    Thank you very much, Mr. Chair.
    Thank you to our witnesses today. I found each of your testimonies interesting. They complemented one another as well.
     I'm going to start off with the gentleman from Unifor. Mr. Carrière, you spoke about the way unions will be positioned in this as we further adopt AI. I thought it was interesting the way you spoke about a regulatory framework being necessary. I understand that part of it.
     The piece that is interesting to me is, outside of the government regulations, if unions are not involved in the big private sector jobs that are growing.... Mr. Bessen talked about how these big corporations will dominate a lot of the space. Outside of the public jobs, what is the strategy for organized labour, to make sure they and their workers are protected through the collective agreement process if they're not necessarily part of the growth that's taking place? Do you have any thoughts on that?

[Translation]

    Thank you, Mr. Coteau.
    The current challenge is that there’s no discussion with employers about this. There’s no discussion about the potential consequences of integrating a new technology. There will only be discussions if we know in advance that there will be job losses. There is no obligation to discuss how a job will be modified, simplified or made more complex. There is no structure. The labour movement, again—

[English]

    Who would be included in this conversation, from your perspective? Who would be included in this conversation if the jobs are not necessarily coming from union jobs? How do you envision that?

[Translation]

    In a context where there are no unions, a government structure must require the setting up of workers’ committees to explain to people what we want to implement, how it will affect work and how we will be able to correct the negative effects or unwanted pernicious effects of algorithmic management. If, in the algorithmic management tool, there are features that discriminate unintentionally, we need to be able to correct the application of the management tool.
    The management tool is replacing the manager. Workers and employers need to collectively build management tools. If there are mistakes or negative trends, we must give ourselves the necessary means to correct them. This absolutely requires dialogue with workers, through a structure that is not necessarily the union structure. We need to set up such a structure.

[English]

    Thank you very much for being here. I think the organized labour voice is very important in this conversation. I appreciate the fact that you joined us here today.
     Mr. Lockhart, I have a question for you. You said that 2% of job vacancies that are being published today cite AI skills as a requirement. Do you think that 2% is a true reflection of the actual sector, or is that just the jobs?
    Do you think that because it's becoming easier to incorporate AI without specific skill sets—as I think you stated—the 2% is an under-representation of the skills that are actually required?
    What tools can be placed in the job without the employee needing those specific skill sets?
     I hope I made sense.

  (1130)  

     That makes total sense.
     We saw that just 2% of all job postings ask for any kind of AI skills. You're exactly right in saying those AI skills are traditional tech-based skills—things that require advanced training to use. There is a generation of new, generative tools that take natural language inputs and don't require the same technical skills to use.
    That said, there is still a whole range of technologies that require those digital and technical skills to use. The new technologies aren't necessarily replacing them. They're more additive. They're operating in new areas in which the old technologies didn't help. There is still going to be increased demand and need for AI skills, broadly.
    The same workers who don't have AI skills and are being asked for AI skills are going to be able to adopt the new tools, but they might not necessarily be able to use any of the older, existing tools.
    Thank you.
    I was very fascinated, Mr. Bessen, with how you started off your conversation.
    You said there was a lot of media hype around AI and that this is just a continuation of a 70-year process. Hopefully, over the course of the remainder of the time, I can get a little more detail on that. It is a very fascinating and popular subject. I'd like to hear more about why you think it's part of a long story, rather than something new.
    Thank you so much, Mr. Chair.
    We'll capture that in another question. The time is out.

[Translation]

    Ms. Chabot, you have six minutes.
    I’d like to thank the witnesses for being with us. Your testimony is very important, even if we don’t have all the answers and we don’t yet know all the challenges associated with implementing artificial intelligence in the workplace.
    Mr. Carrière, you opened by telling us that the challenge is the total lack of information and guidance. Can you tell us a little more about that?
    Thank you very much, Ms. Chabot.
    Presently, we seem to want unions and employers to find the magic bullet or the magic wand. Instead, I think it’s going to take the federal and provincial governments to put regulations in place, according to their respective areas of jurisdiction.
    The first step to understanding the effects of algorithmic management is being aware of what’s going on. Employees must be informed and consulted. This will ensure transparency and explainability. The only effects of algorithmic management that we are currently seeing are negative ones. We see work being diminished rather than augmented.
    What we see is a decision-making tool, a computer application, making decisions and diagnosing anomalies instead of the individual. Our impression is that, in unionized workplaces that apply an algorithmic management program, workers find themselves dehumanized. Dehumanization is a strong word. In fact, the individual is clearly told that their judgment is no longer needed, because a computer tool does the thinking for them. That demotivates people, since they become automatons, i.e., they perform a task without thinking.
     Currently, people are unaware that they are being replaced. What’s more, they’re being asked to feed data into the tools that are going to replace them. We need to get back to basics. We need to impose, probably through the Labour Code, a conversation about the kinds of technology companies want to use, and we need to determine its impact.

  (1135)  

    Unifor represents thousands of workers in Quebec and across Canada, in a number of sectors.
    Now that implementation has begun in some sectors, have you observed any impact on certain job categories?
    Yes, there have been numerous consequences. This is hardly a new phenomenon. Technological changes have had such repercussions for many years, even decades.
    Take Bell Canada, for example, in the telecom sector. For 15 years, surveillance tools have been capturing and recording all data relating to workers’ production in order to measure and analyze their performance or incompetence, as the case may be. At Bell Canada, for instance, a performance management system based on forced ranking was introduced. Under this system, an individual ranked in the bottom quartile is called in by the employer because algorithmic tools have determined that their performance is weaker than that of others. Because the employee ranks below their peers, a performance management plan is applied, notwithstanding the manager’s judgment. The manager relies on the algorithmic tool to make the decision. That’s what we’ve seen in the telecom sector.
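
To make the forced-ranking mechanism just described concrete, here is a minimal illustrative sketch. The names, scores and threshold below are all invented for illustration; this is not Bell Canada's actual system.

```python
# Illustrative sketch only: a hypothetical forced-ranking routine in which a
# bottom-quartile score automatically triggers a performance management plan,
# with no human judgment in the loop. All data and thresholds are invented.

from statistics import quantiles

def flag_bottom_quartile(scores: dict[str, float]) -> list[str]:
    """Return the employees whose monitored score falls in the bottom quartile."""
    q1 = quantiles(scores.values(), n=4)[0]  # the 25th-percentile cut-off
    return [name for name, score in scores.items() if score <= q1]

# Example: scores aggregated from continuous workplace monitoring.
scores = {"A": 82.0, "B": 74.5, "C": 91.2, "D": 68.3,
          "E": 77.9, "F": 88.1, "G": 71.0, "H": 85.4}
for employee in flag_bottom_quartile(scores):
    print(f"{employee}: performance management plan triggered automatically")
```

By construction, a quartile rule always flags roughly a quarter of the workforce, however well everyone is performing, which is part of what makes such systems contentious.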
    In the transport sector, every single driver is monitored 24/7. All data is captured and recorded. Once again, algorithmic tools are superseding the judgment and expertise of individuals. These tools will tell a truck driver, for example, where to go to get from point A to point B, because it’s more efficient. We’re completely removing the worker’s judgment and replacing it with an algorithm.
    There are several similar examples, but, in general, we’re unaware of it, because it hasn’t been disclosed. If it doesn’t involve employers cutting jobs, it isn’t discussed. And yet, many jobs disappeared five, six or eight years after this kind of tool was integrated. So this dialogue never happens. That’s why we first need to develop mechanisms to inform and consult employees. Then, we need to work together to build the tools. Finally, we need to give ourselves the means to adapt them, if necessary.
    Are there any examples of social dialogue in this area?
    I’m talking mostly about Quebec. For instance, I'm aware of the Commission des partenaires du marché du travail, a social dialogue forum.
    Are there any good practices in workplaces?
    It’s in its infancy, but it’s inadequate. We’re already lagging.
    It’s disturbing to see that we’re moving forward without informing people. In the workplace, we’re just beginning to acknowledge these practices and their consequences.
    Thank you, Ms. Chabot.

[English]

     Madam Zarrillo, you have six minutes.
    Thank you so much, Mr. Chair. This is very interesting testimony today.
    I'm going to ask my initial questions to Mr. Lockhart. If I have time, I would like to ask some questions to Monsieur Carrière as well.
    I want to talk a bit about the points you made around the increased prosperity and how that's potentially not going to be distributed equitably among workers. My questions relate to protections of workers. You mentioned a responsible framework. I wonder if you could expand on what you think those responsible frameworks could look like on a federal level.
    I think that's probably a very challenging question to answer in a short time.
     What we certainly view as part of a responsible framework is making sure that when artificial intelligence is implemented, it's not being done in a way that's explicitly harmful to the workers who are using it. There are always risks of increased workplace surveillance and facial recognition being used in the workplace, and we definitely want to avoid any kind of negative impacts from that.
    Beyond that, there's a huge risk from AI that businesses will be able to implement AI and reduce labour, and that the increased productivity and benefits from that could be concentrated among just the ownership of the business. That runs the risk, obviously, of increasing wealth inequality in Canada. At the Dais we strongly believe that prosperity and GDP growth are beneficial for Canadians, but only when they are distributed among all groups.
    I don't think I have an answer for how to make sure the benefits that come from increased productivity for workers are distributed among all of the workers and the people in the firm, but I do know that's going to be an important part of keeping up with AI adoption.

  (1140)  

     Thank you for that.
    Do you think, perhaps, that the federal government could lead an advisory council or a round table? If so, who do you think should be on there? What groups should be represented?
    I think that is definitely a path that needs to be investigated. I think that when you do that, you need to make sure all groups are represented. Obviously, you need to make sure industry's represented. Having unions there is important.
    I think the trickiest part is making sure you have non-unionized workers represented there in some capacity, because a large portion of Canada's workforce is not unionized. If those voices aren't present at the table, then you really run the risk of a two-tiered system of unionized versus non-unionized workers.
    Thank you so much.
    Monsieur Carrière, I also have a question around the protection of workers. You talked about it.
    I'm worried about populating the tools with workers' ideas, skills and experiences, and then those workers never receiving any of the benefits of that. All of their intellectual and cognitive property, and even their copyrights, are potentially at risk. I'm wondering if you could expand on how we protect workers' ideas, skills and experiences.

[Translation]

    As a union, we see that a number of tools exist to make the employee’s job easier. As I mentioned, there are negative impacts. Workers are being stripped of their autonomy and capacity for judgment. We’re turning individuals into automatons following a recipe previously determined by an algorithm.
    I’ll use Bell’s Blueprint as a case in point. Communication systems installation technicians are required to enter their objective and all the steps involved in their task into the program. This is a basic step. It’s not a complex process, but workers have to explain what they want to do, and the program tells them how to do it. Workers become mere implementers.
    In the job categories we represent, no one holds intellectual property on their ideas, because they’re already performing a job as an implementer. Workers are reduced to their simplest expression. They are stripped of their ability to judge, their expertise and the effect of having a great deal of experience in the sector, under the pretext that an algorithm can take anyone and have them do the same job. The impact is negative for workers. Work is becoming boring and so easy that there’s no challenge. As a result, people are leaving the company to work elsewhere. Artificial intelligence is being used as a partial solution to a labour shortage, but by making the work uninteresting, it’s causing turnover. It’s driving attrition.
    It’s not so much a question of protecting workers’ ideas, but of ensuring that human beings are contributing their skills, values and knowledge to their business. Currently, we’re seeing that tools aren’t having that effect.

[English]

    Thank you so much.
    I'm going to use my last less than a minute to ask Mr. Bessen this: We recently experienced the writers' and actors' strikes down in the United States. It had an effect up here in Canada. I'm from B.C. It put a lot of people out of work for over six months. I wonder what was learned around AI with regard to the recent strikes in the acting and writing fields.
    I'm not sure we've learned much about AI specifically.
    There have been a number of studies done on using AI to assist writers. There's some evidence that it helps less-skilled writers do a better job. I don't think that AI's anywhere near the point where it can really replace writers. I think that was being talked about, but I don't see any evidence that it's about to happen or can happen. My own experience—and the experiences of a whole number of other people who have tried to do writing with ChatGPT or whatever—is that there are some huge limitations on using this technology at this point.

  (1145)  

    Thank you, Madam Zarrillo.
    Mr. Aitchison, you have five minutes, please.
    Mr. Bessen, I'm going to start with you.
    I'm actually just going to ask a question about housing, frankly. That's my portfolio. I know that there's a huge challenge with housing in the United States, as well as here in Canada. A big part of the problem is the lack of supply and the pace at which things get approved—with plans and all of that kind of stuff.
    I'm wondering if you could speak a little to the application of tools like AI to speed up the approvals process, for example, in municipal zoning and that kind of thing. When you made your comments, I kept thinking about how this is a tool to be used, not to be afraid of. It presents opportunities. I'm hopeful that maybe it presents some opportunities in the housing sector.
     That's certainly an interesting idea and one I hadn't thought about before. I immediately see that it runs into a problem, which is that all of the regulations and requirements that go into approvals are not something an AI system can just ignore.
    AI may be helpful. You would like to be able to see ways in which perhaps the various regulators would be able to use AI to analyze the various reports and speed up that process, but they'd have to be willing to do so. You might see ways in which AI could help compile all the various approvals.
    There are possibilities for it to work, but I think it's a difficult problem, because there's a big interaction between regulations and laws and the technology. You can very easily see a situation in which AI would be used and then there would be a lawsuit because somebody didn't like the outcome.
    For an industry that is incredibly over-regulated—I would suggest that housing is generally over-regulated—you made a comment there that made me think that maybe AI is a tool that could be used, as you said, to compile all of the existing rules and regulations. Maybe it could be a tool that could analyze the layers of regulation and bureaucracy involved, trim the process and eliminate a lot of overlap. Is that a possibility, perhaps?
    Yes. AI can make recommendations about what to trim, but it can't trim it itself. Obviously it requires legal and regulatory approval to trim the process. It's a good idea.
    Thank you.
    Mr. Lockhart, I will ask you to just provide some comment here on the same question, if you wouldn't mind.
     I would say two things. The first is that when we switch from talking about AI use in private workplaces to AI use by government, that raises a lot of different questions and issues. In the private sector a lot of the time we get to just focus on productivity, but in the public sector there's a lot more to consider than productivity. You can't just talk about making the process faster, because I think there's an important equity concern here even when it comes to housing applications. Handing over to an AI tool any kind of judgment on that makes for a real challenge.
    The second is more on the topic of using AI to cut down on regulations. I think you're going to really run into a challenge there, because there are real social considerations, as opposed to just productivity or efficiency considerations, that go into that kind of regulation system. It seems to me that it's probably better done and left to humans and human decision-making for now.
    I will throw it to you, Mr. Carrière, as well, if you're interested in commenting on that, sir.

[Translation]

     I will refrain from answering that particular question about the use of artificial intelligence and housing issues. I don’t think I have anything new to add.
    However, I will reiterate that we need to learn more about these tools. The way to better understand them is to talk about them, to provide a framework that forces employers to explain to their employees what they want to do, the goal they’re trying to achieve, the changes that will be made to their workplace and the repercussions on people’s autonomy.
    In a context where augmented work will occur, that’s terrific. In a context where we’re only getting diminished work results, it’s problematic. It all begins with knowledge. We need to know what we’re dealing with. We don’t even know whether we’re dealing with algorithmic tools for automated decisions or semi-automated decisions or whether they’re symbolic algorithms or machine learning algorithms. Those are things we simply don’t know. Workers don’t know if the algorithmic tool is capable of thinking for itself or if it’s just following a decision tree.
    We’re a long way from understanding. We need to develop mechanisms to learn more. Once we do…

  (1150)  

[English]

    Thank you, Mr. Carrière.
    Mr. Kusmierczyk, go ahead for five minutes, please.
    Thank you so much, Mr. Chair. I have a question for Mr. Carrière.
     You know, Liberals believe in the power of the bargaining table. That's why we introduced Bill C-58, which will ban the use of replacement workers. That's what differentiates us from the Conservative Party: We believe in the power of the bargaining table and we're putting forward the ban on replacement workers.
    Are you able to comment? Have you already seen the spectre of AI being part of discussions at the bargaining table? Are you currently seeing negotiations with employers? Are you seeing AI being raised in those bargaining discussions? I'm not sure how much time you've spent at those bargaining tables, but can you tell us a little about whether it's part and parcel of those discussions already?

[Translation]

    Thank you for the question.
    Presently, this is not something that’s openly and clearly discussed at the bargaining table. We aren’t discussing it. For example, recently, the St. Lawrence Seaway was closed for eight days. Could an algorithmic management tool one day manage the locks remotely? Very likely. Will this lead to job losses? Quite possibly. Is this being discussed at the bargaining table? No, it’s not on the table at all. There is no disclosure.
    It’s like asking workers to use up all their bargaining capital, as we put it. Instead of seeking to improve their working conditions, they’d be asked to spend all their bargaining capital just to obtain transparency about artificial intelligence. That’s not something workers are interested in. Employers are not disclosing how such tools are being integrated, or what their future impact will be. There’s a huge demand for workers to participate in populating these tools’ databases and correcting their margins of error, but they’re not told how this will affect their jobs or the evolution of their jobs.
    So the dialogue is non-existent. We have to start somewhere. Of course, the bargaining table is a start, but for all the sectors that are not represented, there have to be mechanisms in place for that dialogue to take place.

[English]

     I appreciate that response. I know that Unifor, even back in 2017, was hosting conferences and meetings on AI and on technology, so you're definitely not new to this issue; you're very much forward-looking.
    I want to ask if there is dialogue between unions. For example, are there conversations between Unifor and, let's say, the UFCW in the food-processing and food-picking sector? Are there conversations with other unions—you mentioned, for example, ports—to have that discussion? Are there conversations taking place between unions, as well, regarding the concerns about AI?

  (1155)  

[Translation]

    Yes, there are plenty of conversations between the groups, because unions are sharing what little knowledge they’ve acquired. We realize that all of this is in its infancy. Certain aspects of technology were introduced 15 years ago, and today, with the advent of artificial intelligence, they’re taking on incredible dimensions.
    Unions, not just American and Canadian unions, but international unions too, are exchanging best practices or examples of framework measures that could be included in collective agreements or in legislation.
    So there are discussions, but the observation remains the same: our knowledge on this subject is in its infancy. We know nothing. This dialogue needs to take place with employers to devise solutions. The aim is not to limit or reduce the effect of AI-related technologies, but to ensure that they represent a positive addition to the workplace, rather than the opposite.

[English]

    Thank you, Mr. Kusmierczyk and Monsieur Carrière.

[Translation]

    Ms. Chabot, you have two and half minutes.
    Thank you, Chair.
    Mr. Carrière, I’d like to ask you a question about the employer-employee relationship.
    When an algorithm that has built a decision tree is used to perform a function, what happens if something goes wrong? Who’s the boss in such a situation? I think this changes the employer-employee relationship.
    I’m quite surprised to see that currently, there isn’t more upstream dialogue about what’s going on. At the same time, I’m not surprised either. If we take the concrete example of Bell, what does this mean for a worker?
    Bell Canada uses a tremendous amount of data and conducts extensive monitoring in all types of jobs. Everything is recorded. Every activity is recorded in a computer. Every action taken and every gesture made by a worker is known. It’s the same for technicians on the road and people working on the networks. Everything is analyzed and everything is known.
    People’s performance is managed on the basis of targets to be achieved. Those are determined by the outcome of data analysis. If a technician is told that it takes 25 minutes to connect a line, but it in fact takes 35 minutes to make the connection, he will be penalized. The vagaries of weather, for example, are not anticipated by the algorithm. The technician will be told that he’s doing a bad job because he’s not meeting the targets set by the algorithm. That’s where we stand now.
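
As a worked illustration of the kind of target just described: the 25-minute standard and 35-minute actual come from the testimony, while the tolerance threshold and names below are invented. Note that the rule has no input at all for weather or site conditions, which is precisely the gap being criticized.

```python
# Illustrative sketch only: a hypothetical target-vs-actual rule of the kind
# described above. Only the 25- and 35-minute figures come from the testimony;
# the 10% tolerance and everything else is an assumption for illustration.

TARGET_MINUTES = 25
TOLERANCE = 1.10  # flag anything more than 10% over target (assumed threshold)

def review(actual_minutes: float) -> str:
    """Compare logged time against the algorithm's standard time."""
    ratio = actual_minutes / TARGET_MINUTES
    if ratio > TOLERANCE:
        return f"FLAGGED: {ratio - 1:.0%} over target"
    return "OK"

print(review(35))  # FLAGGED: 40% over target, whatever the weather was that day
```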
    Has the manager’s judgment been supplanted by a ready-made solution from an algorithm? The answer is yes, and it has been for quite some time. Again, this is an unknown for us, because we can’t really measure what the tool takes into account. When we ask the employer to share the criteria used by their management tool, we don’t get an answer, because it’s so specific. We’re not given the information.
    The manager is being replaced by an algorithmic management tool. At the end of the day, what is the basis for challenging the decision? This is where the question you raised, Ms. Chabot, is significant. You can’t go before an arbitrator or the courts and ask an algorithmic management tool why it made this decision rather than another. That’s why I mentioned earlier that we need to give ourselves the necessary means to correct the effects of algorithmic management decisions. This is the impression we get from people in the field. Managers today pass on messages, but all the tasks that involve judging a worker’s performance are carried out by this tool.
    Thank you, Mr. Carrière.

[English]

     Madam Zarrillo will conclude this....

  (1200)  

    Thank you.
    I'm going to ask Mr. Carrière.... Hopefully, we can keep it to about a minute, because I would also like to ask Mr. Lockhart about equity.
    Thank you so much, Monsieur Carrière, for bringing back the humanity part of this discussion. We are a committee that has “human resources” at the beginning of its title.
    I want to revisit something. The CLC—the Canadian Labour Congress—testified in front of this committee and recommended an advisory council on artificial intelligence.
    I'm wondering whether you agree with this recommendation—that the federal government should have an advisory council that looks at the impacts on human resources. If so, who should be on that advisory council? Who should be represented?

[Translation]

    This is an interesting first step. You certainly have to start with a consultation structure. Employers definitely need to be involved in the process, as well as unions and all the worker associations.
    We need plain language. This is something that seems so complicated that we need scientists and experts to explain the impact of these tools. We also need to reassure workers. The fear is that the machine will replace the individual. We don’t see what’s going on, and the work becomes dehumanizing.
    The unions need to be at the table, but all the workers' associations also need to be there. We'll need plain language in order to fully understand the challenges.

[English]

    Thank you so much.
    Mr. Lockhart, I want to revisit equity.
    Again, this committee also looks at persons with disabilities.
    I'm wondering whether you could share a bit about the work and discussions happening in your organization around what equity needs to look like in relation to AI.
    AI has the potential both to promote equity and to harm it.
     If we look specifically at persons with disabilities, there are examples in which AI has been used to improve the capacity of people with disabilities to operate in a workplace. There is a café that recently opened in Tokyo that uses robots to augment the motor function of people with disabilities so that they can fully participate in that workplace.
     At the same time, if you don't take an equity lens when you're implementing artificial intelligence, those marginalized groups—people with disabilities and other groups like them—are going to be the first people harmed by the introduction of AI in the workplace.
     You have to start from a place of asking how AI can help uplift and increase the participation of everyone, and use that as your framework, instead of starting with, “We have AI. What can we get rid of with it?”
     Thank you, Madam Zarrillo. You're a little over.
    I want to thank the witnesses for appearing for this first hour on the AI study.
    With that, we will suspend for a few moments while we bring in our second panel of witnesses. We'll suspend for a few minutes.

  (1200)  


  (1205)  

    I call the meeting back to order.
    Members, we'll reconvene the committee as the witnesses, now all appearing virtually, have been sound tested. I've been told their sound is fine.
     We will begin with opening statements. I would ask everybody to keep their statements to five minutes or less, because there are four of you.
     We'll start with Mr. Autor for five minutes or less, please.
    You're the first one who showed up on my list. That's why you're going first, Mr. Autor.
    Thank you for having me. My name is David Autor, and I am the Ford professor of economics at the MIT Department of Economics, and also co-director of the MIT “shaping the future of work” initiative. I am honoured to speak with you today about my research on artificial intelligence and the future of work, and I apologize for my cold.
    AI presents obvious threats to workers and the labour force. While machines of the past could only automate routine tasks with clear rules, AI can quickly adapt to problems that require creativity and judgment. It seems reasonable to worry that AI will suddenly make huge swaths of human work redundant. I believe these concerns are somewhat misplaced, however. Strong demand for labour has persisted throughout past periods of technical change, like the industrial or computing revolutions, and all signs point to growing labour scarcity, not the opposite, in most industrialized countries, including Canada.
     Instead, the important question to ask is how AI will impact the value of human expertise, by which I mean the skills and judgment in specific domains like medicine, teaching and software development, or modern crafts such as electrical work or plumbing. Will new technologies augment the value of human expertise, or will they make human judgment valueless?
     In industrialized economies, expertise is the primary source of labour’s market value. Consider the jobs of air traffic controllers in comparison with crossing guards, both of whom have the job of protecting lives by preventing vehicle collisions. Air traffic controllers in the U.S. are paid four times more than crossing guards. Why? It's because they have scarce expertise, painstakingly acquired and necessary for their important work. The value of that expertise is augmented by tools: Without GPS, radar and two-way radio, an air traffic controller is basically a person in a field staring at the sky. Crossing guards provide a similarly valuable social service, but most able-bodied adults can serve as crossing guards without formal training and without any expertise, and this virtually guarantees low wages.
     While technology makes air traffic controllers' expertise valuable, it can also make human expertise redundant. London cab drivers used to train for years, memorizing all the streets of London. GPS made this expertise economically irrelevant. It's no longer necessary. You might ask, why isn't all expertise eventually made superfluous by automation? The answer is that human expertise remains relevant because its domain expands with social needs. Jobs like software developer, laparoscopic surgeon and hospice care worker emerged only when technological or social innovations made them necessary. In fact, my co-authors and I estimate that around 60% of all jobs that people do in the U.S. today didn’t exist in 1940. Technology and other social forces can just as readily create opportunities for high-quality work as they can automate it.
    I believe that AI can create novel opportunities for non-college workers—low and middle-educated workers. With the support of AI tools, these workers could perform tasks that had previously required more costly training and highly specific knowledge. For example, medical professionals with less training than doctors could tackle more complicated tasks with the assistance of AI. In the U.S., in part due to technological innovations such as software that prevents the dispensing of harmful drug interactions, nurse practitioners have proven effective at tasks formerly reserved for doctors with five more years of medical education. AI could push this further, helping workers with less training deliver high-quality care. This is not to say that AI makes expertise irrelevant. It's just the opposite: AI can enable valuable expertise to go further. AI tools enable less experienced programmers to write better code faster. They help awkward writers to produce more fluid prose.
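
A minimal sketch can make the kind of guardrail just mentioned concrete. The interaction pairs below are invented examples for illustration; real dispensing systems query maintained clinical databases rather than a hard-coded table.

```python
# Illustrative sketch only: a toy version of software that blocks the dispensing
# of harmful drug combinations, of the kind alluded to above. The pairs listed
# here are illustrative assumptions, not clinical guidance.

HARMFUL_PAIRS = {
    frozenset({"warfarin", "aspirin"}),          # illustrative bleeding-risk pair
    frozenset({"nitroglycerin", "sildenafil"}),  # illustrative hypotension-risk pair
}

def check_dispense(current_meds: list[str], new_drug: str) -> bool:
    """Return True if the new drug may be dispensed alongside current medications."""
    for med in current_meds:
        if frozenset({med.lower(), new_drug.lower()}) in HARMFUL_PAIRS:
            print(f"BLOCKED: {new_drug} interacts with {med}; escalate to a physician.")
            return False
    return True

check_dispense(["Warfarin"], "Aspirin")  # blocked, prompting human review
```

The division of labour in the sketch mirrors the testimony: the software carries the memorized interaction table, while the practitioner's judgment is reserved for the escalated cases.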
     This positive future of which I'm speaking is not guaranteed. We must make collective decisions to build it. For example, China has made substantial investments in AI technology, in part to create the most effective surveillance and censorship systems in human history. This is not a preordained consequence of AI, although it depends on it, but rather the result of a particular vision of how to use this new tool. Similarly, it is far from inevitable that AI will automate all of our jobs. That's a vision that many AI pioneers are pursuing. I think this would be a mistake. To shape this protean technology, AI, to constructive ends, political leaders must work with industry, NGOs, labour and universities to build a future in which machines work in service of minds.

  (1210)  

     Let me end by saying what government can do. I don't claim to have complete answers here, but let me say a couple of things. First, governments should germinate and fund human-complementary AI research. The current path of private sector development has a bias towards automation. Government can correct this by supporting the development of worker-augmenting AI in industries like health care, education or skilled crafts work.
     Second, I would prioritize protections for workers. Using AI for undue surveillance, for high-stakes decisions like hiring and firing, or to appropriate workers' creative works without compensation should be disallowed. Empowering workers to collectively bargain and including them in rule-making is a critical step.
    I'm also concerned about AI safety. I think governments are comparatively well equipped to regulate safety.
    Let me end by saying that rather than asking, “What will AI do to us?”, we should ask, “What do we want AI to do for us?” Answering that question thoughtfully and acting decisively will help us build a future that we all will want to inhabit and that we will want our children to inherit.
    Thank you very much. I welcome your questions.
    Thank you, Mr. Autor.
    Now we have Ms. Hadfield for five minutes, please.
    My name is Gillian Hadfield. I'm a professor of law and of strategic management at the University of Toronto, where I hold the Schwartz Reisman chair in technology and society and the Canada CIFAR AI chair at the Vector Institute for Artificial Intelligence. I'm appearing in a personal capacity.
    Thank you for this opportunity to speak to you on this subject of such critical importance.
    I want to highlight four key aspects of the impacts of AI on the labour market.
    First, AI is a general-purpose technology that is likely to transform almost all aspects of our economy and our society.
    Second, the latest advances in AI can be adopted relatively quickly, but Canadian businesses to date have been slow to adopt AI.
    Third, current AI systems are rapidly evolving to perform highly sophisticated tasks, meaning that high-income and high-education occupations may face the greatest exposure to this latest round of automation.
    Fourth, the profound impacts of AI across our economy and society demand regulatory shifts to ensure that the full benefits of AI can be realized.
    Let me go through each of these in a little more detail.
First, AI is a general-purpose technology. This means it will transform almost all aspects of our economy and society, similar to the impact of the steam engine or information technology. For example, publicly available large language models such as generative pretrained transformers, GPTs, demonstrate the potential for AI to radically reshape the nature of work. These systems are designed to understand and generate human-like text, including computer code, on a massive scale and, increasingly, to reason and problem-solve, facilitating an almost unlimited range of applications.
    Second, the latest advances in AI can be adopted relatively quickly. ChatGPT's swift integration into everyday applications over the last year demonstrates this and suggests that the most recent strides in AI can be implemented relatively quickly, outpacing the adoption rates seen with earlier iterations of this technology. This presents an opportunity for Canadian business and policy-makers to boost productivity and economic growth; however, the committee should take note that Canada has to date been slow to adopt AI. According to a study by Statistics Canada, only 3.7% of companies were using AI at the end of 2021. Studies conducted by IBM and the OECD also suggest that Canada lags behind other economies according to AI adoption metrics.
Third, AI systems are rapidly evolving to perform highly sophisticated and complex tasks. Specifically, AI is being fine-tuned in sector-specific software applications. A notable instance from my own field is CoCounsel, an LLM system built on top of GPT-4 that functions as an AI legal assistant for tasks such as legal research, writing and document analysis. CoCounsel has achieved a higher score on the American uniform bar exam than the average test taker; in fact, it outscored 90% of test takers. It is also designed to address inherent risks such as AI hallucinations.
    Other examples beyond LLM systems include things like AlphaFold, which has solved the protein folding problem, described by a leading computational biologist as the first time an AI system has solved a major scientific problem. These advancements mean that AI can be harnessed more safely and effectively, particularly in sensitive and cognitively complex domains like law, science and health care.
In one study, OpenAI researchers found that GPT exposure was higher at higher income and education levels. That's something for us to take into account when thinking about how this wave may look different from previous innovations.
This brings me to my final and crucial point. The profound impacts that AI will have across our economy and society demand regulatory shifts to ensure that the full benefits of AI can be realized. Our current legal and regulatory frameworks were designed for a pre-AI era and may restrict innovative and productive uses of AI in workplaces. To harness the benefits of AI, we must update these frameworks to address the unique challenges and opportunities that AI presents. Furthermore, given that AI is a rapidly developing technology, effective governance demands that policy-makers move quickly to adopt an AI-enabling regulatory posture that seeks to properly regulate risks, as we do with all other economic activities, while supporting innovation and investment.

  (1215)  

     In conclusion, we stand at the cusp of a transformative era, and we should be acting to ensure that the benefits of AI are realized equitably and responsibly.
    Thank you.
    Thank you, Ms. Hadfield.
    Monsieur Lepage-Richer, you have five minutes.

[Translation]

    My name is Théo Lepage‑Richer, and I'm a post-doctoral researcher at the University of Toronto.
    First, I want to thank you for the opportunity to share a few thoughts with you today. These are the product of my research on artificial intelligence governance, a topic I address by combining historical research with public policy analysis.
In previous meetings, several members of the committee raised the following question: how can we develop governance frameworks adapted to technologies that are evolving as quickly as artificial intelligence? This is indeed a legitimate issue, one regularly raised by the providers of this technology to encourage some restraint on the part of public policy-makers. However, I'd like to qualify this question by pointing out the broader trends that the history of artificial intelligence in Canada highlights.
    The first federal AI programs provide a useful historical precedent to examine the impact of this technology on the organization of work.
    Starting in the 1960s, the Pearson government identified artificial intelligence as a promising technology to reduce the costs associated with hiring qualified public servants.
In 1965, the National Research Council of Canada was mandated to develop a first artificial intelligence program to address the translation of official documents from English to French. As a strategy, program managers opted for the development of software tools that would allow the translation process to be broken down into simple sub-tasks. One of those tools, for example, was designed to produce literal translations of the common nouns and verbs in a text, with the idea that human operators would then refine them, adding the necessary determiners and revising the whole. The purpose of these tools was to standardize specialized tasks such as translation, to the point where they could be assigned to workers without prior training and, above all, at a lower level on the pay scale.
    Although inconclusive, this program launched a series of reforms aimed at reducing the federal government's dependence on skilled workers and, above all, restoring a certain level of control over the federal machinery.
Under Pierre Elliott Trudeau, initiatives such as the CANUNET network and the Télidon system were put in place to create the infrastructure needed to produce new data on the work of federal employees. In a recent article, Fenwick McKelvey and I suggest that the objective of these programs was to quantify the work of public servants so that it could be framed more narrowly using new data analysis tools developed in government and elsewhere.
Fifty years later, the applications of artificial intelligence in the Government of Canada and elsewhere have changed. However, the early signs of broader trends can already be identified in those first programs. Rather than completely replacing positions, artificial intelligence tends to be deployed in ways that restructure tasks so they can be assigned to workers with more precarious status, limit the opportunities workers have to exercise their judgment, reduce the dependence of organizations on certain forms of expertise and replace investments in training and workforce development.
These trends go beyond artificial intelligence, of course. However, as Paola Tubaro and her colleagues point out, they tend to characterize the platforms, management practices, reforms and business models that depend on the deployment of this technology.
It is therefore urgent that the impact of artificial intelligence on the workforce become a key lens for developing tailored policy responses. This position is shared by a number of people, including Emanuel Moss and Valerio De Stefano, who point out the inability of the risk-based approach that characterizes the current regulatory instruments to account for issues related to worker protection. To reflect the impact of artificial intelligence on the workforce, these instruments would have to take into account the impact of this technology on the distribution of wealth, the quality of jobs and the loss of salaried jobs to precarious or subcontracted positions.
    Until now, artificial intelligence has been perceived in Canada as an industrial policy issue, and not without success. However, it is crucial that investments in the AI industry complement, rather than replace, similar investments in human capital.
While future applications of AI are difficult to predict, its structural effects on the organization of work remain stable and can therefore inform policy responses that will withstand technological change.
    Artificial intelligence is a challenge both in labour law and in industrial policy. I therefore encourage the members of this committee to consider the trends in which artificial intelligence has been embedded over the past 60 years to put in place the necessary safeguards to ensure that workers also benefit from the deployment of this technology.
    Thank you very much.

  (1220)  

    Thank you, Mr. Lepage‑Richer.

[English]

We now have Ms. Janssen for five minutes, please.
    Thank you for the invitation to share my thoughts with the committee today. My name is Nicole Janssen. I am the co-founder and co-CEO at AltaML. It is the largest pure-play applied AI company in Canada. We create custom AI software solutions for enterprise-level clients in both the private and public sectors. We're not quite six years old, but we've already worked with over 100 companies on over 400 AI use cases.
    I base my thoughts today on my observations of those projects and the current and near-term capabilities of AI, as well as my knowledge of the AI ecosystem.
    I want to start by saying that AI will absolutely disrupt the Canadian labour force at all levels and professions and across all sectors, and that the work this committee is doing to understand those impacts is incredibly important.
I will address the elephant in the room around massive job losses due to AI, which seems to be the largest concern we hear around jobs and AI. Of the 100 companies we have worked with, not one has implemented an AI solution and then made resulting job cuts. What we are seeing is that AI tools are being used to increase productivity and, in most instances, are capable only of augmenting humans, not of fully replacing them.
The fear of job losses from AI is consistent with the fear that has accompanied the introduction of new technologies for hundreds of years. It can be traced as far back as the introduction of the mechanical loom. However, every single new technological advance in history has led to more jobs at higher wages.
    Overall, there will be net gains in the job market from AI, but certain jobs will absolutely see disruption. In fact, we're already starting to see that disruption. Jobs that require consuming large amounts of information and synthesizing it, such as content creation, or in the legal profession—paralegals, for example—will see significant disruption, as will jobs that require manipulating large amounts of numerical data, like research analysts or financial analysts. AI can identify trends in the market much faster than a human being can. Jobs that provide some form of external, repeatable assistance—like call centres, receptionists, or even executive assistants—will see disruption. Software engineering will be disrupted and already is being disrupted as AI becomes more and more capable of writing high-quality code.
    From what I have seen, these jobs won't disappear, but rather the individuals in them will need to adapt how they do their jobs, and their time will be focused on the higher-value work.
You may have noticed that lots of the jobs I just mentioned are white-collar jobs. AI is designed to mimic cognitive function, and it's likely that higher-paying white-collar jobs will have the most exposure to the technology. That said, industrial robots and drones also use AI technology, and that's the one place I see a high likelihood of jobs being replaced, such as on the factory line or for warehouse workers. This is a transition we have been seeing for many years, though, through automation. There is also a likelihood that delivery persons will, over time, be replaced by drones.
Jobs that require a human touch, relationships or people skills will become more and more important. While these professions will likely use AI to support them, they will not see the same kind of disruption—professions such as teachers, nurses, doctors, therapists, human resource managers, sales managers and public relations professionals.
Then we'll also have new jobs emerge in AI development, cybersecurity, ethical oversight and change management for AI integration, as well as for data labelling professionals, AI hardware specialists and prompt engineers. These jobs didn't exist a few years ago, and now we see job ads for them everywhere.
What's clear is that the people in any profession who choose not to adopt and use the new technology will be the ones who lose their jobs to the individuals who do adopt it, because those individuals will be far more productive. Sectors that adopt AI will be more productive and put more people to work faster. If we use AI to approve building permits significantly faster, that puts a lot of people to work a whole lot faster. If we use AI to perfect the preventative maintenance in our plants, that will ensure more uptime and more work for more people.
    The productivity growth that AI has the potential to create is a huge advantage for Canada, as we currently face both a long-term labour shortage challenge and incredibly low productivity as a country.

  (1225)  

     We must embrace AI with careful consideration and proactive measures, investing in education and training programs that equip individuals with the skills needed in an AI-driven economy. We must also implement policies that ensure inclusivity, diversity and ethical use of AI.
I'm here today to share my thoughts, partly because I know how important this topic is, but also because I knew I could rely on ChatGPT to formulate version one of my comments, which I could then edit and add my own thoughts to, allowing me the efficiency to say “yes” to this request.
    Thank you. I welcome your questions.

  (1230)  

    Thank you, Ms. Janssen, for a very informative presentation before the committee this morning.
    Before we begin this round, committee members, we may get only one six-minute round. It's going to take 24 minutes, so that's going to run us close. If some of your colleagues want to share, that's entirely up to you, because I feel we'll probably get only the one round.
    Having said that, we'll begin with Ms. Ferreri for six minutes.
    Thanks, Mr. Chair, and thank you to our witnesses for being here today to testify in our study on AI and its implications for employment and the labour force.
    Ms. Janssen, if I can, I'll start with you. I really enjoyed your testimony. I loved that you used ChatGPT to help you through this. It's an interesting tool. I see it happening as well with our students in education, using it as a tool.
    One of the things you talked about is fear of the change, and that is almost a human psychology thing. We've seen this, as you mentioned, throughout time. It's natural evolution; it's inevitable progression to move forward.
    In the past, what was the defining factor that pushed it forward? Do you have any sort of historical reference for that? You talked about the mechanical loom. When we look at vehicles and airplanes, we see that so many people resisted that change, but it ultimately increased prosperity. It did not replace jobs.
    I'll speak maybe to the projects we've done in AI where we have had pushback from fear. There are a lot of them.
    The key is that the individual whose job will be impacted, the person whose workflow will change, has to be a part of the process of developing AI from the outset. They have to feel like they're part of the solution, not that this is being done to them.
    They also need to see that the goal in implementation of the AI is not to replace them. As soon as someone feels that their job is going to be taken as the outcome, they, the end user, will absolutely not implement this new technology.
    Thank you.
    That's really helpful. I love that you said on record “augment” and not “replace”. I think that's really important to have on record.
    If I may, I'll jump to Mr. Autor.
     I liked your testimony in regard to health care, as we have this massive doctor shortage, and we have a lot of advancements happening with AI in health care.
I'm curious as to whether you have any input on privacy. One of the biggest concerns a lot of people have is how this is going to impact privacy, and health care is one of the most privacy-sensitive areas.
    What do you think the government should be keeping an eye on in terms of ensuring citizen privacy when using AI?
I think it's a very big issue. Depending on the regulatory regime, privacy is not guaranteed in terms of what can and cannot be tracked. Our phones are full-time surveillance devices that not only know all the things we do but report that information to third parties for money. That information is then resold. Privacy will be compromised unless regulation prevents it and unless people have ownership of the right to privacy. I think it's a very serious concern.
If I may, Mr. Chair, I'll respond very quickly to something that Ms. Janssen just said about AI and jobs. I do not think we should take it as a historical fact that technology has always improved jobs. The Luddites were absolutely correct that power looms wiped out their employment. Not only that, but wages didn't rise for six decades, growth was stunted and starvation increased.
I'm not saying that these advances weren't ultimately beneficial, but these technological changes are never uniformly an improvement for all jobs or all people. There are almost always losers—people whose expertise is devalued—and when we make these big transitions, we should be prepared to help people adjust to them. This will not be costless—
     I'm sorry. I'm going to intervene. Thank you.
    I think you bring up valid points about learning from history. Ultimately, however, I think prosperity prevails in advancement. That's why these committees and studies are critical.
    On that point, I will go back to Ms. Janssen.
    You referred to a lot of white-collar jobs having first access to AI. How do you think we prevent that divide between the haves and have-nots, which could happen—and probably will happen—and the creation of a more polarized society? People will have access to technology that will further them economically, socially, etc.

  (1235)  

    I think companies are incentivized to sell to everyone. Selling only to the elite few will not make profits. As we saw in the past with cars, electricity, radio, computers and mobile phones, the makers of these technologies were highly motivated to drive down their prices until everyone on the planet could afford them. Now, I fully recognize that there are places on the planet where there is no accessibility to the Internet or phones, but those prices continue to be driven downward because there is a desire to reach the entire population. If you look at OpenAI's ChatGPT, it's free. There is access for everyone to use it.
    Will we start with the white-collar, higher-wage earners likely using AI first? Absolutely, we will, because it costs a lot to develop AI right now. However, as those prices come down—
    Thank you, Ms. Ferreri and Ms. Janssen.
    Mr. Van Bynen, go ahead for six minutes.
    Thank you, Mr. Chair.
    This has been a very informative discussion. Had I known what the text of Ms. Janssen's introduction would be, I would have used ChatGPT to develop my questions. Unfortunately, I didn't have the time to do so.
    There is concern about the benefits being equitable. I think the point was raised initially by Ms. Hadfield.
    How do you feel the government could be part of ensuring that the distribution of wealth and benefits is equitable across the economy?
Part of thinking about how we will adapt our regulatory environment is also thinking about how we adapt all of our funding, tax and benefit systems. I think it's going to be a combination of involving workers, as Ms. Janssen emphasized, in the transformation process.... If we're starting to see much bigger returns on capital, maybe we need to figure out ways workers can be directly compensated for that as well.
    I think the last point.... This is why I think it's very important for Canada to be focused on driving the responsible adoption of these technologies. It's because of the productivity challenges we face. Ultimately, you need productivity to fund all your welfare and support systems—shorter work weeks and so on.
    I think there are a lot of ways to spread the benefits, but it will take some deliberate efforts.
    We've heard that the transition is under way. In fact, it's been evolving over the past 10 to 15 years, or for as many as 50 years. The decisions and regulations seem to be built on lagging information, or on what we've experienced or have been seeing.
    What kind of information or data should the government be gathering to monitor, so it can start developing leading indicators to guide policy strategy and development?
    I'll start with Mr. Autor.
    It's difficult, because it's moving so fast, as Ms. Janssen and others noted. It would be helpful, I think, to involve the private sector, in order to try to get a sense. Even in the U.S.—which is not the world's leader in information collection, by any stretch of the imagination—we now do large surveys on who's using AI and what they're doing with it. However, we don't have a good sense.
One thing is to understand what tasks it is being applied to, in what sectors and for what activities. Another is to look at the nature of how jobs are changing, which occupations are growing or shrinking, and what wages are being paid. Ideally, it's also about understanding, from workers themselves, how their work is changing. Coming at it from both the workers' and firms' perspectives would be potentially complementary.

  (1240)  

     Ms. Hadfield, do you want to add to that?
    Yes. Thank you.
    It is moving very quickly, and we need to be thinking about agile methods for gaining increased visibility for government. You want to be very careful not to say you'll do another two-year study, because it's moving much faster than that.
    I think there's a lack of visibility for government into how these technologies are developing, because for the first time in history, it's almost entirely behind corporate walls.
I do think it's really important to get that ground-level view. Again, Ms. Janssen's testimony is very helpful in terms of what this looks like on the ground.
    For the CoCounsel example I gave you, I spoke to law firms that were implementing this and asked if they had laid off all their junior lawyers yet. They said that they actually had more work than they knew what to do with, because they could now take somebody's call one afternoon and be ready by the next day to give them good advice and take steps.
There's actually a lot of unmet demand for this work, but you need to be at the ground level to find that out.
    I would say that developing agile methods for increasing government visibility into how things are changing on the ground is critical—like SWAT teams.
    Ms. Janssen, I have only about a minute or so left, and I want to get one more question in.
    Do you have any brief additional comments?
    The only thing I'd add is that only about 20% to 30% of AI that's being built is actually being adopted. You would want to focus your attention specifically on the sectors and companies that are adopting AI.
    Thank you very much.
    The next question is for Mr. Lepage-Richer.
    What are examples of the top countries that are preparing their workforces to deal with the impacts of AI? Are there examples out there that we could look at as best-case scenarios?

[Translation]

    That's a very good question. I can't think of an international example off the top of my head.
    In fact, we have to think about the hidden costs that are often associated with artificial intelligence. When you interact with a platform like ChatGPT, the human work behind it tends to be somewhat erased. However, behind a system like ChatGPT and all the other artificial intelligence systems that are trained using large amounts of data, humans have to work to label that data, format it and organize it, among other things, which is not a well-paid job.
There are a lot of countries, especially emerging countries, that are going to train a whole workforce to do these tasks at a very low cost. When we talk about low-paying jobs associated with artificial intelligence, we can think of data labelling. This work is essential to all the artificial intelligence systems we use and develop, but it depends on thousands of workers who manually process data for pennies. The international examples that come to mind are not necessarily examples we want to emulate, but ones to keep in mind to remember the social and human costs associated with the development of these technologies.

[English]

    Thank you, Monsieur Lepage-Richer and Mr. Van Bynen.

[Translation]

    Go ahead, Ms. Chabot. You have six minutes.
    Thank you.
    I'd like to thank all the witnesses.
    Mr. Lepage‑Richer, according to the OECD, and as the union representative reminded us during the first hour, artificial intelligence objectives must be oriented toward sustainable development and must be human-centred, and we must act responsibly. You talked about fairness and accountability.
    Everyone agrees that artificial intelligence will be deployed, as was the case with robotics and automation. Things are going to change, but I want to talk about what happens when we hit our cruising speed. What does it take upstream, both from a regulatory and ethical standpoint, for this to have a positive effect on the workforce, not a negative effect?

  (1245)  

The approach currently used in Canada to assess and anticipate the risks and impact of this technology is mainly based on self-assessment. The proposed artificial intelligence and data act promotes the idea that we must create a model so that businesses can govern themselves by taking certain parameters into account, while making sure that the effects on work are as limited as possible.
    One of the problems I see with this approach is that AI is deployed in a very wide variety of sectors. Therefore, at some point, these tools need to be tailored to each sector and industry in which AI is deployed. This will allow us to properly represent the reality of workers and users whose quality of life, work and well-being are directly influenced by this technology.
One of the first ideas that comes to mind is that risk assessment tools should be tailored to different industries. In fact, at all levels of government, there are specific frameworks to assess environmental, financial, social or human impacts. However, we do not see the same degree of precision in the evaluation of this technology when it is deployed.
    Off the top of my head, I would say that we need a more specific development of analytical tools.
    Thank you.
I noted that the issue of self-regulation was one of the external criticisms of the bill. Is it responsible to ask companies that develop these tools to regulate themselves? It seems to me that regulation should be a political responsibility and should not rest solely with AI designers or industries. What do you think?
    Right off the bat, I welcome your comment with enthusiasm.
    That's more or less the strategy that Europe has adopted. The European model relies a lot on independent or semi-independent committees to assess the impact of the deployment of this technology.
However, I wonder to what extent this approach would be realistic in Canada. I'm thinking of the size of the European government apparatus and public service compared to that of the Canadian government and public service. Realistically, although I'm excited about your comments, I'm wondering to what extent the Canadian government could implement such an evaluation model. That's why developing analytical tools that are better adapted to the various industries and sectors seems to me to be a realistic compromise in the Canadian context. I'm not hiding my preference among the possible solutions.
    Thank you.
    Ms. Hadfield, thank you for your comments. We're talking about various sectors where artificial intelligence is deployed. You gave the example of legal aid.
    If we look at it in terms of gender or gender differences between men and women, do you think that the deployment of artificial intelligence will have a greater impact on jobs held by women or the more marginalized? Will there be specific consequences for women or people with disabilities?

[English]

That's a topic near and dear to my heart. It's hard to say this, but I think what we are actually seeing is evidence that the current versions of large language models have a bigger impact on occupations requiring higher education, so we won't see the sort of pink-collar effect we may have seen in the past. I do think the legal application I was talking about could, it's true, displace the paralegal level, which is probably female-dominated. I haven't looked at the statistics on that, but it's actually doing legal work all the way through the ranks of the law firm.
    I think it is something for us to pay attention to. I suspect this looks different from how it has looked in previous automation waves, however.

  (1250)  

[Translation]

    Thank you, Ms. Chabot.

[English]

     We'll go to Madam Zarrillo, and that will conclude this panel of witnesses.
    Madam Zarrillo, you have six minutes.
    Thank you, Mr. Chair.
    I do have only six minutes, and I have some committee business I want to speak to first.
    Just so that witnesses can prepare, I'm going to ask witness Autor and witness Janssen this question. It has been proposed in this committee that a federal advisory council be struck. I wonder if I could ask both of you, after I finish my other committee business here, what top three topics each of you feel need to be considered at a federal advisory council and, I guess, first of all, if you think that's a good idea.
    Mr. Chair, before I go to the witnesses, I want to respond to the letter the committee received back from Air Canada on our request for Mr. Rousseau, the CEO, to appear before committee. We received a letter that, I think, outlines that Mr. Rousseau does not plan on coming to committee.
    I was wondering if I could get consensus from the committee that we reach back to Air Canada and say that we strongly encourage Mr. Rousseau to come, because we don't want to have to summon him.
    Thank you, Ms. Zarrillo.
    I'm going to ask the clerk to speak to.... You're right. Your motion was adopted on November 8, and a letter went out. The clerk will address it.
    Thank you, Mr. Chair.
    As far as I know, the motion was adopted on November 8. The information was sent to Air Canada, and the letter you received a couple of days ago is the response to the motion.
    The motion requested the CEO.
    Yes, exactly.
[Inaudible—Editor] could write back to them and just let them know that's our expectation for Wednesday. If not, we will have to summon him, and I really don't want to have to do that. I wonder if we could get the support of the committee on that.
    What is the wish of the committee?
    Mrs. Gray, did you have your finger up, or Mr. Aitchison, on the issue of Air Canada?
    Mr. Chair, I support 100% what Ms. Zarrillo said. I think we should write back and tell the CEO of Air Canada he's welcome to bring whoever he'd like with him to help him out in his job, but this committee requested him, and that's who we expect.
    I have Mrs. Gray and then Mr. Fragiskatos.
    Thank you, Mr. Chair.
I support that, as well. That is the will of this committee and, as my colleague said, the CEO is welcome to bring other individuals. That is the will of the committee. That is what was decided. We do have other parliamentary tools to ensure that the CEO attends. Without having to utilize those, hopefully this discussion alone will persuade the CEO to attend, as it is the will of this committee. If not, there are other tools we can utilize.
    Thank you.
To Mrs. Gray's point, the committee has the ability to issue a summons.
    I have Mr. Fragiskatos next.
    Thank you, Mr. Chair.
    I hope it doesn't get to that point, but if it has to, of course I won't speak for MP Zarrillo, but certainly our side would support that. Support is what's being called for here.
    Air Canada looks quite bad, to be very frank about it. They can bring staff here, but their leadership needs to answer questions about a very important matter.
    Does that address...? You still have time to get back.
    I think it's very clear that the committee is unanimous. It is the CEO the committee wants to have appear before it at the earliest opportunity, and the CEO can bring support staff, as Mr. Aitchison pointed out, but the committee wishes to have the CEO.
    Seeing no dissent on that, I will ask the clerk to clearly get back to Air Canada on the wishes of this committee.
    Madam Zarrillo, you can go back to.... You still have several minutes.
    Thank you, Mr. Chair.
    I thank the committee members, and I apologize to the witnesses.
    I wonder if I could go to witness Autor and then witness Janssen. If we run out of time, if either of those witnesses would like to respond to the committee in writing, that would be great.
    My question is around a federal advisory council that was proposed by past witnesses. I wonder, Mr. Autor, if you could talk about the top three topics that you think should be considered in a federal advisory council, and then Ms. Janssen could respond to the same question.

  (1255)  

     Thank you, Mr. Chair.
    I do support the idea of a federal advisory council, as all folks here today have testified. This is moving very fast. It poses new opportunities and new challenges. Bringing in top expertise in an advisory role is an excellent idea.
    Of the three topics I would most address, one is how to use the technology to augment labour rather than automate it. I don't think we should take as a given that augmentation necessarily occurs. Countries steer technologies. Nuclear energy is used by North Korea solely for offensive weapons. It's used by Japan solely for energy generation. They have no offensive nuclear weapons. That's a choice of a country; it's not a characteristic of technology.
    How to use it well to augment workers is the first thing.
     The second thing is protection for workers. As I noted, undue surveillance, high-stakes decision-making by opaque algorithms, and AI's appropriation of workers' creative work without compensation should be regulated. We have fair use when it comes to intellectual property, but the laws were not written for AI.
    The final thing I would say is on visibility into these technologies. They are opaque. They're making high-stakes decisions, and often the creators of technologies will not even disclose what sources of data have been used for training. I don't think that's acceptable.
I think there's a public interest in making sure that machines that are making important decisions—and valuable decisions; I use and support AI—are understandable to regulators and to consumers.
    Thank you so much. I'm so interested in this appropriation of expertise and creativity without compensation.
    Ms. Janssen, do you have three topics that you would want to have considered?
    The cat's out of the bag with AI. We're not putting it back.
By the way, I am fully supportive of the idea. I would have the council focus on the education, upskilling and supports we need to provide to our workforce as this begins to roll out. This means identifying the professions that are disrupted early, following them, seeing what lessons can be learned from them, and then perfecting that change management as AI rolls out across all sectors and professions.
    Then there is the responsible AI piece, which is the transparency, accountability, privacy and all of those pieces that come with responsible AI. That and the direct impacts on workers would be the areas I would focus on.
    Is that it, Ms. Zarrillo?
    We don't have enough time to go with another round, because we have one minute left.
    With that, I'll call for the adjournment of the committee meeting.