
Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities


NUMBER 090 | 1st SESSION | 44th PARLIAMENT

EVIDENCE

Wednesday, November 22, 2023

[Recorded by Electronic Apparatus]

  (1635)  

[English]

     Committee members, the clerk has advised me that we have a quorum and that those appearing virtually have been sound-tested. All are okay except for one witness, but there is another witness from the same group who is okay.
    With that, I will call the meeting to order. Welcome to meeting number 90 of the House of Commons Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities. Pursuant to Standing Order 108(2), the committee is resuming its study on the implications of artificial intelligence technologies for the Canadian labour force.
    Today's meeting is taking place in a hybrid format, meaning there are members and witnesses appearing virtually and in the room. You may participate in the official language of your choice by using the interpretation services. There are headsets in the room. As well, if you are appearing virtually, please use the "world" icon at the bottom of your Surface tablet and choose the official language of your choice.
    If there is an interruption in translation, please get my attention. We'll suspend while it is being corrected.
    I would ask those participating to speak slowly, if possible, for the benefit of the interpreters. To those in the room, keep your earpiece away from the mike to protect the hearing of the translators.
    Again, if you're appearing virtually, to get my attention use the “raise hand” icon at the bottom of your Surface.
    Today we will be meeting from 4:30 to 6:00 with witnesses.
     As an individual, we have David Kiron, editorial director, Massachusetts Institute of Technology Sloan Management Review, by video conference. From the Canadian Union of Public Employees—Quebec, we have Danick Soucy, president, political official, committee on new technologies, by video conference; and Nathalie Blais, research representative. Nathalie does have issues with sound. If those cannot be resolved, she will not participate. From SAP Canada Incorporated, we have Yana Lukasheh, vice-president, government affairs and business development.
    With that, we will start with five minutes for each group, beginning with David Kiron. You have five minutes or less, please.
     Distinguished members of the committee, thank you for organizing this important study and inviting me to participate in it.
    I'll discuss how AI is influencing four categories of work: designing work, supplying workers, conducting work and measuring work and workers. AI-related shifts in each category have policy implications. In aggregate, these shifts raise questions about how to optimize producer flexibility, worker equity and security. More broadly, these trends create policy opportunities for increasing productivity at the national level and strengthening social safety nets.
    Although we need policies to address worker displacement from AI-related automation, policy also needs to address AI's influence on a wide range of business activities, including human-machine interactions, surveillance and the use of external or contingent workers. Policy addressing AI in workforce ecosystems should balance workers' interests in sustainable and decent jobs with employers' interests in productivity and economic growth. The goal should be to allow businesses to meet competitive challenges while avoiding dehumanizing workers, discrimination and inequality.
    I refer to "workforce ecosystems" rather than "workforces". Our ongoing research on workforce ecosystems demonstrates that more and more organizations rely on workers other than employees to accomplish work. These include contractors, subcontractors, gig workers, business partners and crowds. Over 90% of managers in our global surveys view non-employees as part of their workforce. Many organizations are looking for best practices to ethically orchestrate all workers in an integrated way.
    I'll start with designing work. The growing use of AI has a profound effect on work design and workforce ecosystems, including greater use of crowd-based work designs and disaggregating jobs into component tasks or projects. Consider modern food delivery platforms like Grubhub and DoorDash that use AI for sophisticated scheduling, matching, rating and routing, which has essentially redesigned work within the food delivery industry. Without AI, such crowd-based work designs wouldn't be possible.
    AI is also driving recent trends to create work without jobs. On the one hand, this modularization of work can facilitate mobility within the firm and improve employee satisfaction by efficiently matching workers with tasks. On the other hand, designing work around tasks and projects can increase reliance on contingent workers for whom fewer benefits are required. Greater numbers of Canadian contingent workers can increase burdens on government-sponsored safety nets.
    Now I'll move to supplying workers. On the one hand, AI is transforming business access to labour pools. On the other hand, workers have more opportunities to work across geographic boundaries, creating opportunities for more workers. Using AI to find suitable workers can have both negative and positive consequences. For example, AI can perpetuate or reduce bias in hiring. Similarly, AI systems can help ensure pay equity or contribute to inequity through the workforce ecosystem, by, for example, amplifying the value of existing skills while reducing the value of other skills. It remains an open question whether AI-driven work redesigns in the global economy will increase or decrease the supply of workers for Canadian businesses.
    I'll go to conducting work. In workforce ecosystems, humans and AI work together to create value, with varying levels of interdependency and control over one another. As MIT Professor Thomas Malone suggests, people have the most control when machines act only as tools. Machines have successively more control as their roles expand to assistants, peers and finally, managers. Emergent uses of generative AI in each category raise a variety of policy questions regarding worker liability, privacy and performance management, among other considerations.
    The last category where AI is influencing work is measurement. Firms increasingly use AI to measure behaviours and performance that were once impossible to track. From biometric sensors to corporate email analysis to sentiment analysis, advanced measurement techniques have the potential to generate efficiency gains and improve conditions for workers, but they also risk dehumanizing workers and increasing discrimination in the workplace.

  (1640)  

     That's about five minutes. I'm happy to continue. I have a conclusion, but I'm also happy to stop there.
    You can continue during the question period and capture anything that you missed in your opening statement.
    We will now go to Mr. Soucy for five minutes or less, please.

[Translation]

    My name is Danick Soucy, and I am the political representative of the Committee on New Technology, Quebec division of the Canadian Union of Public Employees, CUPE for short. CUPE Quebec's Committee on New Technology is attempting to gain a better understanding of emerging technologies that could impact the work of our members, including artificial intelligence, or AI. The committee's objective has never been to oppose technological breakthroughs, but, instead, to find ways of adapting to them.
     One year ago, rapid advances by ChatGPT surprised the world and even AI specialists. We now know that generative AI systems are able to perform a variety of tasks. Not only can they allow for the automation of manual labour, but they can also perform numerous professional creative tasks or those normally undertaken by office staff. Generative AI has immense possibilities and could cause serious upheavals in the working world and in Canadian society if no guardrails are put in place.
    We believe that it is imperative that action be taken immediately to regulate AI before companies undertake large-scale implementation, so that everything possible is done to avoid bringing in systems that cause problems for workers or for society at large. The old saying "an ounce of prevention is worth a pound of cure" certainly pertains to AI, which, in spite of its usefulness, can pose dangers on many different levels.
    One of the dangers is that many AI systems were trained using the Internet. As a result, they have incorporated biases and inaccurate data that can lead to discrimination or disinformation. However, commercial AI systems are more opaque than ever, and their suppliers do not always reveal what data sets they were trained on. In addition, the autonomy of AI systems makes it more complex to determine who or what is responsible when harm is done. The public and employers must be educated on this issue.
    In the workplace, this can mean rejections of either applications or promotions, or non-compliance with workers' fundamental rights in terms of privacy or the protection of personal information. AI systems used to assign duties to workers can also impact their health and safety by intensifying their work or by limiting, for example, their decision-making leeway, which is recognized as a work-related psychosocial risk.
    AI should not lead to discrimination, result in increased occupational health and safety problems or jeopardize an employee's privacy or personal information.
    Available data on the possible impacts of AI systems on labour vary greatly. However, a shocking study published by Goldman Sachs, a U.S. investment bank, estimated last March that AI could result in the automation of 300 million full-time jobs worldwide. This estimate includes the disappearance of a quarter of the work currently done in the U.S. and Europe. This is, by far, the most alarming assessment of which we are aware.
    In such a scenario, what would happen to laid-off workers? Would employment insurance be all they could count on?
    Would companies be responsible for their retraining?
     Would they be required to train their staff whose work was transformed by AI?
    Would they compensate governments for income tax revenues lost because of the use of AI to protect our public services?
    The government cannot consider the use of AI solely from the angle of innovation, productivity and economic growth. It must also take into account the adverse impacts that AI systems would have on citizens and their ability to contribute to Canadian society more generally.
    To this end, CUPE Quebec recommends that governments maintain a dialogue with all groups in civil society, including unions, on the subject of AI and that the government entrust Statistics Canada with the mandatory collection of information on the progression of AI and its impacts on work and on labour.
    Furthermore, the regulations to be implemented quickly should at least address the following four elements.
    First, employers should be obligated to declare any use of AI in the workplace and involve workers or their union representatives prior to the design and implementation of AI systems.

  (1645)  

    Second, employers should be required to train or requalify personnel affected by the adoption of AI.
    Third, implementation of a legal framework is necessary to protect the fundamental rights of workers and to identify those responsible for AI systems.
    Fourth and finally, requirements should be imposed relating to the responsible development of AI for the granting of any public funding.
    Thank you for your attention.
    Thank you, Mr. Soucy.

[English]

     Madame Lukasheh, you have five minutes.
     Thank you, Mr. Chair and members of the committee. We appreciate the opportunity to appear before you today to contribute to the study regarding the implications of artificial intelligence technologies for the Canadian labour force.
     SAP is a software technology application enterprise with long-standing operations in Canada spanning over 30 years. We work with organizations of all sizes across the public and private sectors to enable them to become part of a network of intelligent and sustainable enterprises.
     Our secure and trusted technologies run integrated AI-powered business processes in the cloud. More specifically, our applications cover enterprise resource planning, human resources and procurement and finance management, including travel and expense claims.
     SAP is a global enterprise present in 140 countries, with Canadian operations of strategic importance. We contribute $1.5 billion annually to the Canadian GDP and have a total of 7,000 jobs in our ecosystem from coast to coast to coast. Our innovation labs, where our R and D is conducted, are located in Montreal, Waterloo and Vancouver.
    We understand that Canada's labour force today is confronted by the fast-paced evolution of AI technology, and workers are increasingly faced with a series of complex decisions related to implementation and training as organizations are evolving within this new digital era. As AI is increasingly used to automate decisions that have a significant impact on people's lives, health and safety, we recognize that governments have an important role to play in promoting innovation while safeguarding public interest.
     The concerns we hope to discuss as part of our testimony today relate to common and often overlooked practices associated with a general lack of AI integration, which we have seen impact many industries, including Canada's public sector. For example, I'm referring to the use of disconnected or complex legacy systems across organizations, outdated manual processes, limited interoperability and few end-to-end processes across human capital management functionalities. When not addressed, these have implications for recruitment, retention and skills training, not to mention the cost associated with operating such legacy systems.
    The boundless potential of generative AI could bolster Canada's economy by $210 billion, greatly boosting Canadian workers' productivity. It's important that organizations seek experienced industry partners that are equipped to guide them and their operations through digital transformations, leveraging technologies like AI to level up the workforce. At SAP, we see that potential and the opportunity to unlock productivity and value across our economic sectors. For example, AI can address some of the top workforce challenges of our time, from recruiting and training to increasing employee engagement and retention.
    I'll run through a few examples. Recruiting AI software can remove unconscious biases in job descriptions. Recruiting automation can lighten the administrative burden by automating the delivery and receipt of necessary documents. Specialized AI-enabled training is interactive; it's continuously learning and adapting to each worker's learning style, whether it's visual, auditory or written. AI analytics, specifically sentiment analytics, can identify how workers are feeling. AI performance analytics allow managers to extract bias-free insights from continuous real-time assessments via multiple sources.
    Another area where technology can help is accessibility. Software solutions can enable the inclusion of members of the disability community in today's workforce. As a co-founding member of the ministerial advisory board that established the Canadian Business Disability Network, SAP advocates for accelerating the adoption of technologies that embed tools like AI to onboard members of the disability community into today's workforce.
    Canada's potential in this space is vibrant and remains globally competitive, with a diverse AI ecosystem that attracts more AI talent and brings more women into AI-related roles than all of our G7 peers.
     The high concentration of talent in Canada contributes to a rising volume of AI patents filed nationally and the highest number of AI publications per capita in 2022. It is even more important that public policy favour retention of top AI talent in this country to uphold our competitive edge and support sustained innovation.
    The impact of AI on Canada's labour force remains undeniable, and public policy must allow for better digital integration with Canada's industrial base to strengthen a local ecosystem that is inclusive of SMEs, minority-owned businesses and indigenous businesses.

  (1650)  

     Mr. Chair, thank you. I'm happy to take questions.
    Thank you, Ms. Lukasheh.
    We'll now begin questioning with Mrs. Falk.
    Mrs. Falk, you have six minutes, please.
    Thank you very much, Chair.
    Thank you to each of our witnesses for taking the time to come here to share your experiences and thoughts regarding AI.
    My first question would be for SAP.
    You did mention a few side effects or outcomes of some of your clients using AI. I'm just wondering if you have more examples—either for the better or for the worse—of things they have experienced, anticipate or fear having to encounter, if that makes sense.
    Absolutely.
    I think you'll notice that within our customer base, they realize the value that AI brings into their business processes, and they see the value it can unlock.
    I'll probably use, at a very high level, a few examples. Take banks, for example. They have a lot of financial reports and data that they have to manipulate through different data sources. AI can automate a lot of these tasks and summarize a lot of that data. That would provide a lot of efficiency for the workforce in that particular bank to dedicate the time to a lot more strategic work, instead of a lot of the data analysis.
    Another example would be within manufacturing. Some of our customers are leveraging AI technologies to look at sales performances, identifying where the underperforming regions are, looking at their procurement and their supply chain, and looking at their HR and trying to find efficiencies across....
    That's probably what I would give as an example.

  (1655)  

    In your example of a bank, would the job being completed right now by a human then be displaced, or are some of your clients finding alternative work, still in the bank? Are we anticipating or seeing a job loss for a person?
    I won't be able to speak on behalf of the banks, but what I can say is that, at SAP, we always view the necessity of the human in any of the work being done. We see AI as an augmentation tool, and not as a replacement for a particular job. This is the case for the various industries that we cover across the board, whether in Canada or around the world.
    That brings an important question about how we support the employees to better skill them for the new tools they're going to be leveraging, to be able to use their time in more strategic ways that are linked to business processes.
    For sure. Thank you.
    You also mentioned different industries. Would SAP say that they anticipate that there may be a different quantity of AI that would be used in different industries?
    Are you able to explain that a bit better? I'm not sure I understand the question.
    Are there some industries that might be utilizing or may utilize AI more than other industries?
    That will depend on the industry itself and how they intend to leverage the technology and AI.
    What I can say for the industries that are using our human resource application—the software applications, as an example—is that the AI is already embedded in the tool, so it's being leveraged in the same way within the different industries that are leveraging that particular product or solution. I would say that.
    Thank you.
    Are you able to speak at all about the impacts that AI would have on the working conditions of workers? Would the workload increase or decrease?
    What we can say is that AI definitely optimizes a lot of the manual workloads that are currently being done by humans, so there's an efficiency gain there.
    Again, it's not to replace that individual person, but really to make their job a little bit easier and have them concentrate on a lot more strategic work, rather than spend hours on tasks that AI could complete for them in a span of minutes. ChatGPT is an example of that.
     Sure.
    Would we find the probability of errors going down? If it's augmenting data, for example, would AI do it better than a human?
    Given that the AI technology is able to access data sets from different sources, it will definitely be able to do the job more quickly, because it has the end-to-end vision of the whole process. Whether the data is better or not is left to be determined by the user, but the human factor always has to remain there to provide that oversight as well.
    Thank you.
    Thanks, Mrs. Falk.
    Joining us now, we have Ms. Nathalie Blais.

  (1700)  

[Translation]

     Mr. Coteau, you may go ahead for six minutes.

[English]

    Thank you so much, and thank you to the witnesses for being here.
    This is our last day listening to witnesses. All of the witnesses have really contributed to a very interesting conversation on the subject. Many and various points have been brought forward that have complemented each other.
    I want to speak to a point that's been brought up by more than one person, specifically about how machine learning works and how data.... I think you just referenced data coming from many, many different sources. If the data that we're using is building the AI through machine learning, there's no question that bias will be embedded into the technology we're building.
    Technology mirrors society as a whole. Here's a good example. If AI were being used in the judicial system, it would look at, let's say, the last 70 years of court cases. If that were the case, and if we acknowledged that the AI would be built from that machine learning and from datasets spanning those 70 years, we would now be making decisions based on that data, and there would be a bias embedded in it if we acknowledged that the system had systemic barriers in place.
    The big question is this. I think Mr. Soucy brought up the fact that we need to be careful that the technology that we're putting forward doesn't set bias against some workers. I guess my question for the union representative is, how do we use the collective agreement process and how do we hold companies accountable when the datasets they're using are often in a black box-based information set that's not shared with the public? These algorithms are private.
    How do we ensure that we can find a balance between what's being built and how it serves workers in general?
    That question is for Mr. Soucy.

[Translation]

    I'm going to let Ms. Blais answer that.
    Thank you for your question. It's a good one.
    I was at a telecommunications symposium recently, and one of the issues discussed was how reliable AI systems were when trained on data that aren't entirely reliable. For example, when the Internet is used to train an AI system, it really captures everything out there, even though some of that information is false and some is true.
    How do you make sure an AI system trained on those data is reliable?
    When that question was put to business people in the telecommunications sector, they all evaded the question. The reason I'm telling you that story is that, afterwards, I spoke with the person moderating the panel discussion. She, herself, is a technology expert, and she said that the only way to make sure the data are high quality is to require companies to disclose where the data used to train their systems came from. Developers would have to tell companies purchasing AI software whether the systems were trained on data pulled from the Internet, private corporate data, academic data or government data.

[English]

     Thank you for that response.
     I will move over to SAP, based on the response to the question I asked. If we're going to use technology like AI for recruitment and training, which you mentioned earlier in your testimony.... Part of a company's competitive edge is making sure that the algorithms and software it's using are private, because that's intellectual property. At the same time, we need to make sure that the datasets that are being used are fair and come from reliable places. Many big organizations like the Amazons and the Microsofts may not be unionized, so there's a disconnect with that collective agreement process.
    How do we make sure that big companies like SAP are bringing forward AI based on machine learning that is equitable and transparent? How do we go about doing that? At the same time, how do you keep your competitive edge? That's a tough question, but maybe you have some thoughts on that.

  (1705)  

    I'll answer that question in terms of what is probably relevant for SAP and what can be given as a response for SAP.
    Larger companies, when it comes to data and AI and machine learning, or large language models, have been quite deliberate in the way they design many of those algorithms, so that they are reliable, safe and responsible. We have many strategies and ethics that go behind it and that are in place. I will start with that.
    I would like to know about the ethics piece. How do you make sure the ethics piece is kept if it's really behind the company? The company needs to preserve some of its—
    Absolutely.
    There are compliance requirements we have to address and abide by, as do other industry members in the different sectors as well. The data that goes behind it.... SAP does not own that data. It is the customer's data. We provide the technology, the tools, and the customer maintains that data. It's hard for me to answer that question from that perspective, but there's a lot that goes into these tools and how they're used.
    The data, depending on the sources it's coming from, yes, has to be validated. It has to be verified to make sure that it does not cause bias and unintended consequences. The developers in our industry are consistently looking at how to improve that technology and how to improve leveraging of the good or clean data, I should say.
    Thank you, I appreciate that.
    Thank you, Chair.
    Thank you, Mr. Coteau.

[Translation]

    Go ahead, Ms. Chabot. You have six minutes.
    Thank you, Mr. Chair.
    Thank you to the witnesses for being here.
    This is our last day hearing from witnesses on the implications of artificial intelligence for the Canadian labour force, and I'm not sure we've gone as far as we need to. We are actually still missing quite a bit of the information we need to measure the impact.
    Mr. Soucy and Ms. Blais, thank you for your input. Some of your fellow union representatives told the committee that it is detrimental to workers when they aren't told ahead of time about the implementation of new technologies like AI or the purpose of those technologies.
    You said that AI could even cause upheavals in the working world—hence the importance of regulating AI.
    What do you mean by regulating? My Liberal colleague pointed out that not all workers are unionized. How do we regulate AI in a practical and effective way?
    It's important to have clear laws that define the responsibilities around the use of AI. That will ensure that even non-unionized workers are protected.
    I gather, then, that more effectively regulating AI could also mean amending labour laws.
    Is that correct?
    Precisely. Labour laws and the Labour Code have to be adapted to address new technologies.
    A common refrain is that no one is against new technology because it helps society move forward, but it has to work for humans, not the other way around.
    Do you have any specific recommendations to support workers as far as privacy, data and workplace health and safety are concerned?
    It's important to see the technologies as tools, not a way to replace humans and the work they do. The technologies absolutely mustn't put the health and safety of workers at risk. Everything has to be laid out clearly, including straightforward and publicly available remedies in case the system fails.

  (1710)  

    It's really important to educate the public and companies. The government is really pushing AI. In the fall, Montreal hosted an AI event called All In 2023. Minister Champagne and Prime Minister Trudeau were there. There's a big appetite in the government for AI. There's a push to move in that direction, but it's also important to keep in mind that public literacy may not be at the level it should be. When interacting with AI or when faced with the collection of certain data, people may not understand that they need to be mindful and take appropriate precautions.
    The first step is education. Next is making sure that not only labour laws, but also specific AI legislation is responsive to this reality. The proposed artificial intelligence and data act, currently being studied by the Standing Committee on Industry and Technology, could include guidelines to ensure that the use of AI does not infringe on workers' fundamental rights or jeopardize their health and safety.
    ChatGPT has revolutionized the online world. Some are calling for a moratorium on AI technology before things go any further, precisely to educate the public and employers. I heard the same thing in the wake of that big conference in Montreal.
    Is it absolutely necessary, in your view, to do the work on the front end before continuing down this path?
    Yes, we have to make sure the law is clearly defined before we go full steam ahead with AI. Otherwise, workers' rights could be violated until the legislation comes into force.
    Certainly, it's tough to take action on the front end, before AI is widely implemented. I take your point, Ms. Chabot, but the train has already left the station. Something has to be done to put things on hold if the idea is really to do the work beforehand.
    Some companies still haven't adopted AI systems, so in their case, it would be possible to take action on the front end. I don't see how we can stop a train that's already coming down the track.
    I know your union has a lot of workers in the telecommunications sector. Does implementing AI technologies pose any specific risks in that sector or other sectors? Can you give us any actual examples?
    In the communications sector, more broadly, a closed captioning company comes to mind. We found out purely by chance from an employee in our union that the company was introducing AI.
    What happened is that the employee was asked to revise some captioning that had been done. She thought she was revising a contractor's work. She was asked twice more to revise texts, and each text was better written and better overall than the time before. She eventually realized that, unbeknownst to her, she was training an AI system.
     That's a good example of the problems associated with AI—companies are not being transparent. We are also seeing a lot of jobs being moved.
    Thank you, Ms. Blais and Ms. Chabot.

[English]

     We have Madame Zarrillo for six minutes, please.
    It's so interesting to hear the testimony today, and there's just so much that we still need to learn and know.
    I wanted to go to Mr. Kiron. You spoke about the potential for dehumanizing workers, and I'm interested in exploring this a little bit. I wonder if you could share what risk factors there are that would contribute to dehumanization in regard to AI.
    Sure.
    One of the big threats around dehumanization comes with surveillance in the workplace. There are AI technologies that enable business owners and business managers to track very specifically what is happening with workers. It's super-constrained. It's not only keystrokes, but it's whether or not you're being attentive, whether you're focused on a screen or what your biosignals are. You have that level of intrusion, and the job could be the only kind of job you could get, or you might even feel lucky. But the actual performance of the job is under so many different tabs that it's like you're a machine, being played by the manager elsewhere. That's one example.

  (1715)  

    We're thinking about that, because we're federal regulators here and we want to make sure that we're protecting workers and the human factors of people in our community. Can you share with the committee your thoughts about what we need to do as regulators to protect from that dehumanizing risk?
    When I go to my doctor and he asks what my problem is, I say, “Well, my shoulder hurts.” He says, “When does it hurt?”, and I say, “When I do this.” Then he says, “Well, don't do that.” Regulations can be very targeted in saying that there are certain types of control in the workplace that are just unacceptable. They constitute a dehumanizing effect on workers [Technical difficulty—Editor].
    It looks like we lost that witness.
    I'm going to move over to Madam Lukasheh on the information around disability. This committee also considers persons with disabilities and their inclusion. I just wonder if you wouldn't mind expanding a little bit on opportunities to have AI make the workplace more equitable and inclusive, and also any risks that you see in that space.
    Absolutely. We certainly see the opportunity for software applications that leverage AI tools to be made available to members of the disability community across the spectrum. The applications that we use are accessible and compliant, and they will allow different members to play a role in the workforce, depending again on the user needs and user experience.
    Whether it is an individual who is visually impaired...there are ways that AI can be worked into a software application to allow them to still participate in that workload. I'm happy to provide more details on this particular business council, but also about how SAP more broadly thinks about disability inclusion, which is factored into the design phase of our applications.
    Stemming from our global CEO's office, we have a full expertise in how to make sure that our applications are disability and accessibility friendly. That goes across the board for all of our applications that are used right now.
     Thank you for offering additional information. To have the makeup of that council and how it works would be great.
    In testimony at the committee for this study, it's been suggested that there should be an advisory council for the rolling out of regulations as they relate to workers. I wonder whether you could share what you think would be important representation around any kind of an advisory council, federally, that would look at the implementation of AI in the labour market.
    Absolutely. The Disability Inclusion Business Council that was recently formed takes a bit of that perspective as to how we ensure that a lot of what the business community across the board is using—the design of their offices all the way to the IT and the software that they leverage—is accessible. That component of the study could partly be taken with that in mind.
    From a regulatory perspective, for example, the federal government in Canada has an accessibility act, which a lot of the providers have to abide by. That is one way we adhere to it.
    Regarding how regulations are evolving, I think that's where we can take the conversation. Having an advisory board look at the different and evolving ways that AI technology can play in that role, I think is a very valid conversation and one that we should probably be taking a deeper look at.

  (1720)  

    Thank you so much.
    Thank you, Ms. Zarrillo.
    Ms. Gray, you have five minutes.
    Thank you, Mr. Chair, and thank you to all of our witnesses for being here.
    My first question is for Mr. Kiron.
     You stated in an article you co-wrote that “These analytic systems, which we call smart KPIs, can learn, and learn to self-improve, with and without human intervention.”
    Do you believe that due to this, AI would be able to collect private data? If so, are there gaps in privacy legislation that you would recommend the government amend or implement?
    That's a fascinating question.
    On the whole issue of KPIs and acquiring private data to improve KPIs and help them learn, to the extent that businesses use private customer data and that's part of their datasets, there's definitely regulation that constrains how businesses can use that personal data outside of the organization.
    Within the organization, there's obviously.... You can't see social security numbers outside of HR. So with the fact that KPI data is being used to help train new KPIs or better KPIs, and the KPIs themselves can learn from this data, it could be limited to whatever is appropriate within the enterprise's uses of the data, if that makes sense.
    Great. Thank you.
    I have just a quick question. Do you think the development of AI will pose risks to someone's privacy and intellectual property?
    Oh, yes, and it already has.
     The large language models, for example, have been trained on datasets that include published works by writers around the world. I think there's a class action suit going on with writers like Stephen King saying, “Look, your tool that you're making billions of dollars from—you have like a $90-billion capital valuation—is piggybacking on my work and it's completely uncompensated.”
    There's that kind of rip-off of intellectual property—absolutely.
    Then, in terms of privacy, there are so many different ways that AI is going to interfere with people's privacy. If you just take generative AI, we've talked about ChatGPT. There's Claude, and Bard from Google. There are all of these LLMs that are out there.
    These companies are trying to stay ahead of the issue by putting in guardrails that are ethical and all that, but what we haven't talked about is that there is going to be a grey market for large language models that are free of these constraints that governments and companies in the public eye are focused on. What do you do with that?
    Ms. Chabot, it's very hard to do a moratorium on that kind of thing.
     Thank you very much. Thank you for that explanation. It was very helpful.
    Mr. Chair, I would like to move in a different direction here for a moment and pause.
    I would like to move a motion. This has been circulated to the committee. I'll just read the motion here:
Given that,
the Auditor General of Canada recently issued a scathing report on the Liberal Government’s Benefits Delivery Modernization programme, identifying delays, cost overruns and concerns on the viability of increasingly outdated technology;
this project was budgeted for $1.75 billion when launched in 2017 but has nearly doubled in cost, to $3.4 billion;
new reports from ESDC projects a revised cost estimate of $8 billion marking a 357% increase from the original price tag;
the completion date for the project has been pushed to 2034;
That the committee undertake a study of no less than four (4) meetings to review the government’s Benefits Delivery Modernization program and the Auditor General of Canada's report on this matter and that, the Auditor General of Canada, the Minister of Citizen’s Services, the Minister of Employment, Workforce Development and Official Languages, the President of the Treasury Board, and all relevant officials from these departments be invited to appear before the committee on this matter for two hours each; and that the committee report its findings and recommendations to the House.
    Mr. Chair, just to put this into perspective, the benefits delivery modernization programme is the largest IT project ever taken on by the Canadian government. It was projected, as I said, to cost $1.75 billion. According to reports, it's now projected to cost an estimated $8 billion.
    Costs have gone up. Expensive consultants have been hired. Timelines are extended. Liberal ministers need to answer questions to be held accountable for this chronic pattern of lack of oversight and mismanagement with yet another IT project. As an example, the ArriveCAN app project didn't work. It cost taxpayers $54 million and is now under criminal investigation. The Liberals recently paid over $600,000 to consultants to advise on how to reduce spending on consultants.
    The government does not deserve the benefit of the doubt here. This is a massive spending project of taxpayer dollars. This human resources committee needs to scrutinize this.
    I hope to have support of all members of this committee.
    Thank you, Mr. Chair.

  (1725)  

    Thank you.
    For the benefit of witnesses appearing, this is a normal process. A member can use their time to introduce a motion. We suspend the interaction with witnesses while we're doing this. We have now have the floor.
    The clerk has advised me that the motion is in order and to be moved today. It's now open for discussion.
    I have Mr. Fragiskatos, on the motion of Ms. Gray.
    Thank you very much, Mr. Chair.
    I'm not so sure that we can count on the veracity of those numbers. I'm not saying I'm not willing to delve into these issues further, but I think we will have an opportunity to do so—not today, not in coming meetings, but when we have the supplementary estimates. I think that offers us a chance to continue this committee's focus on agenda items that we have already agreed to. I think we should resume the meeting at the earliest opportunity.
    I'm happy to move to a vote.
    Seeing no further discussion, I'll call for a recorded vote on the motion of Ms. Gray.
    (Motion negatived: nays 7; yeas 4)
    The Chair: The motion is defeated. We'll return to the matters before the committee.
    Ms. Gray, you do have about 50 seconds.

  (1730)  

     Thank you, Mr. Chair.
     It's really unfortunate that the members opposite have not supported this motion, considering that the Auditor General has written this committee saying that she is willing to come before the committee to discuss the very damning report that they put together—so that's unfortunate.
    Ms. Gray—
    Yes, Mr. Chair.
    —the business before the committee is the witnesses appearing. The motion you moved was voted on and defeated. I would ask you to bring your comments to the agenda item currently before the committee, which is the questioning of witnesses on the AI study.
    Mr. Chair, I think I have a few moments here, so—
    On a point of order, I think the 50 seconds is done. This is my point. The clock shouldn't stop because the member is speaking about.... The time has been exhausted, so I think we need to move on to the next speaker.
    Thank you, Mr. Coteau.
    Burned up my time....
    There's no direction. When a member moves a motion in their time, we do suspend, and I allow the time left, and that is the procedure of committees, but Ms. Gray used up the rest of her 50 seconds with the discussion.
    Now we will move to Mr. Kusmierczyk for five minutes.
    Thank you, Mr. Chair.
    I thank the witnesses for an excellent conversation this afternoon.
    I have a question for Professor Kiron. Work rarely happens in isolation. Workers rarely work in isolation. They work on teams. I want to ask you whether you've considered how AI may impact teamwork or could impact collaboration in various settings, whether it's an office setting, a warehouse or a factory.
    I'm curious if you've given teamwork and the impact of AI on teamwork some thought.
    We've looked at this. We did a study with the Boston Consulting Group and a professor from Boston College, Sam Ransbotham, on this very topic.
    The ways that machines and humans interact fall into different categories. I'll try to keep this as concise as possible, but you can have the machine doing.... Take decision-making. The machine makes the decision all by itself, and it's an automated thing. Take fraud detection. These AI technologies are sifting through so many parameters that no human could do it, possibly. It's making decisions about what constitutes fraud.
    There are other kinds of things where the AI would contribute to a decision, but the human would have final decision-making authority over it. Similarly, the human could contribute to the AI making a final decision. Take fraud. There's another fraud instance, but it reaches a level where it's not really clear whether or not it's fraud, so the human might play a role in that kind of decision.
    There's a whole spectrum, and what we found is that AI at a very high level, when humans are working with AI, emboldens and strengthens teamwork on the part of humans. Humans are more satisfied working with AI than teams not working with AI. It increases collaboration.
    That's interesting. The reason I ask is that I'm reading a book right now by Dr. Brian Goldman on teamwork. He looks at the operating room, a complex environment where you have many surgeons, doctors and nurses operating together, and mistakes sometimes happen. Suboptimal decisions are made. I'm wondering how AI might be utilized to prevent some of those mistakes and help optimize decisions in a complex dynamic setting. I very much appreciate what you brought to the table there with your insights.
     Ms. Lukasheh, I believe that today you were moderating a panel, if I'm not mistaken, with the Canadian Chamber of Commerce on AI, and you had some really interesting guests on your panel. There were folks from Microsoft and others. Were there any interesting insights? Did anything surprise you from those discussions, anything you'd like to share with us that is pertinent to our conversation?

  (1735)  

     Indeed, as co-chair of the Canadian Chamber of Commerce's Future of AI Council, we did have our first executive summit today, and it was a successful one.
    We had members from all sizes of companies and from all different industries come together. We discussed AI technology as an emerging new technology, where it's going and where it's headed. We all came to a consensus that it is fast-paced. It is consistently evolving, and it is going to continue evolving in all our different sectors.
    Currently, there is legislation before Parliament that looks at how to regulate AI. The conversation around whether Canada is going in the right direction, around legislating and regulating AI, is a mixed bag in terms of the sentiment around the current legislation. Overall, we can all agree on the fact that we do need some level of principles and regulations in this space.
    We appreciate that the different companies that are currently leveraging this AI technology are unlocking value and benefits from it. They're seeing those benefits realized fairly quickly.
    Looking at the productivity of AI, and at how we in Canada can create an ecosystem that is both domestically and globally competitive, was also an interesting conversation that we broached, in terms of how AI can play a factor in that.
    I'll stop there, but I'm happy to speak more about it.
    Thank you, Mr. Kusmierczyk.

[Translation]

    Go ahead, Ms. Chabot. You have two and a half minutes.
    Thank you, Mr. Chair.
    This is for either Ms. Blais or Mr. Soucy. I'd like to give you a chance to finish answering the question I asked you earlier, about the effects of implementing AI systems.
    We know that many employers in certain sectors contract out work. Do you have any real-life examples of the impact AI is having in those sectors?
    I can talk about the telecommunications sector. Canada's big telecom companies outsource work overseas, to workers in countries that don't have the same laws we do. That's true for call centres, IT help desks, planning, design and so on. That alone raises concerns around the privacy of Canadian customers and the employees of those companies.
    We've also noticed that AI tends to enhance the capabilities of other technologies. For instance, when combined with AI, 5G technology, which is currently being deployed, will allow for the automation of numerous activities in telecom companies, possibly leading to the demise of highly skilled jobs.
    I don't know how those employees would be retrained. Companies are reluctant to do that as of now. That's what we have realized. Companies prefer to use contractors to do all the work within the company or hire people straight out of school.
    The government talks a lot about the middle class. What's going to happen to middle-class workers whose jobs are in the process of being automated? Will they be retrained to do work equally as technical as the jobs being taken over by AI? That's something to consider.
    Do you think employers have a duty when it comes to training employees? Should employers already be training skilled employees in anticipation of the transition?
    Would you like to answer that, Mr. Soucy?
    Yes, employers should retrain employees and give them a chance to move to another position within the company. They can't just let workers end up jobless. Ultimately, society will have to take care of those people who are out of work.
    Employers have a duty to their employees. It's not okay to toss employees aside to reap the advantages of new technologies while society pays the price.

  (1740)  

    Thank you, Ms. Chabot.

[English]

     Madam Zarrillo, go ahead for two and a half minutes.
    Thank you, Mr. Chair.
    I'm going to direct my question initially to Mr. Soucy and then, if we have time, to Mr. Kiron. I'm interested in talking a little bit about consent, the consent of workers that was introduced with this idea of surveillance of workers and really having workers be part of the conversations around what technology comes into the workplace.
     I wonder if there have been any conversations, Mr. Soucy, around the consent of workers and what kind of federal legislation could be in place to protect workers and allow them to give consent before they're surveilled.

[Translation]

    When it comes to consent, it's important to know what information is being collected about workers. Employees can't really give their consent when employers aren't transparent and don't disclose the data they are collecting.
    Employers should be required to disclose the data they are collecting.

[English]

    Thank you so much.
    I will just ask one more question to Mr. Soucy before I move on. There is testimony that recommends that an advisory council be struck by the federal government. I'm just wondering if you believe transparency is one of the key areas that an advisory council needs to look into.

[Translation]

    Yes, without a doubt. Transparency is really the key to instilling confidence in the public and workers. Without transparency, it's going to be extremely difficult to get society to accept the implementation of AI in the workplace.

[English]

    Thank you so much.
    Mr. Kiron, I just want to ask you too about consent and how a federal regulation could allow for consent for workers or could protect workers in that space.
    I would elevate the question to focus on decent jobs. If legislation enables businesses to have jobs that are, for lack of a better word, indecent, and if you were to consent, they would be so dehumanizing that you wouldn't want to actually populate your economy with this kind of work situation. Consent in that context would just perpetuate these really awful working conditions, but if those are the only jobs that people can get, they will consent. I don't know how much that can be generalized, but that's definitely a consideration, and a limitation on consent can solve all of these problems. The same is true with transparency.
    Thank you, Mr. Kiron.
    Ms. Gray, go ahead for five minutes.
    Thank you, Mr. Chair.
     I'll go back to Mr. Kiron.
    Have you had a chance to review the new AI rules that came out earlier this year in the U.S.? Do you believe there would be any benefit to Canada if we were to harmonize our rules with those of the U.S. or of other countries? Do you have any comments on that?
    Unfortunately, I don't want to represent myself as enough of an expert to say a lot about all of the different regulations that are going on in the EU and in the U.S. One of the big considerations that Canadian legislators need to factor in is how to enable AI to flourish in a way that supports businesses and workers without creating a dehumanizing, inequitable, two-tiered system for workers.

  (1745)  

    Thank you.
    Do you believe, from your experience, that there's a potential for opening up copyright issues? Do you think our copyright laws are strong enough in Canada right now?
     Again, I'm sorry. I don't know enough about the Canadian context, but they are not strong enough. Copyright laws are not strong enough.
    It's not clear. There are so many issues that are new and need to be wrestled with that haven't really been wrestled with.
    You have singers. You can use large language models to say, “Come up with a song in the style of Harry Styles” or whoever, and it can create a song with the lyrics and the musical accompaniments. Is Harry Styles owed anything as a result of this?
    Great. Thank you for your comments on that and for that comparison. I appreciate that.
    Mr. Chair, I would like to go forward with moving another motion here. This has been circulated to the committee.
    I will read the motion:
That, pursuant to the Order of Reference of Thursday November 9th, 2023, the Minister of Employment, Workforce Development and Official Languages, the Minister of Housing, Infrastructure and Communities, the Minister of Diversity, Inclusion and Persons with Disabilities, the Minister of Labour and Seniors, the Minister of Families, Children and Social Development, and the Minister of Citizens’ Services, appear before the Committee for no fewer than 2 hours each to consider the Supplementary Estimates (B) before Friday, December 1st, 2023.
    It is a normal practice for us to have ministers come to the committee, so this is formally requesting them to do this. This is also particularly important considering that motion I previously put forth, which was not successful, to look at the benefits delivery modernization programme.... In fact, the Liberal member opposite noted that it would be something that could be brought up when the ministers come here to talk about estimates, so this is perfect timing. Therefore, this should be easily supported by the members here.
    This is really important considering that we're looking at the numbers; we're looking at the extra spending of the government. We also have a new minister in here as well with a new portfolio, and so this is really timely to have this minister come forth. We haven't had this minister before the committee.
    As I mentioned earlier, we also have the Auditor General's report, which hasn't been addressed yet, and we can question the ministers on that as well.
    Thank you very much, Mr. Chair.
    Thank you.
    We now have, on the motion, Mr. Aitchison, and I believe, Mrs. Falk, Ms. Ferreri and Mr. Kusmierczyk.
    Again, to the witnesses, this is in order before the committee.
    Mr. Aitchison, if you don't take the floor soon, I will go to Mrs. Falk.
    Mrs. Falk, and then Ms. Ferreri, Mr. Kusmierczyk, Madame Chabot and Mr. Coteau.
    I have a point of order.
    Go ahead.
    Should we end the session with the witnesses at this point, considering that there's such a long list? Is the committee business at 6 o'clock?
    Yes, committee business is at 6 o'clock. The meeting is still within its timeslot with the witnesses, so I will ask the witnesses to stay until we deal with this.
    Mrs. Falk, you have the floor. Then it's Ms. Ferreri, then Mr. Kusmierczyk and Ms. Chabot.

  (1750)  

    Thank you very much, Chair.
    It's long-standing practice, as we all know, in each committee to have ministers appear on the estimates. I know, as Ms. Gray said, we do have a new minister who hasn't had the opportunity to come to this committee yet to express himself or speak to his mandate, so I think it's completely reasonable that we have these ministers come to committee, preferably during this session.
    Ms. Ferreri.
    I was wondering if I could add an amendment to the current motion, so that we could start the meetings in December. Are we able to do that at all, or is that...?
     An amendment is totally in order. Are you proposing an amendment to this phrase?
    Yes. I'm proposing an amendment that—
    Then be clear—
    —do it before we rise for the Christmas break, basically, to get these ministers in.
    Okay. Now we're on the amendment. The amendment is that the—
    I'm sorry. To follow up with that, Chair, I'm going to echo what my colleagues have said.
    Regarding the Minister of Citizens' Services, who now oversees passports, with the holiday season upon us and estimates and all of these things, we haven't seen this minister yet, so I think that getting them in ASAP and before we rise in December would be ideal. Hopefully, we have the support of the committee on this.
    Okay. At the moment, we have an amendment to the motion that was under discussion. The amendment is to have those meetings before the House rises—before our Christmas period.
    Is there any discussion on the amendment by Ms. Ferreri?
    Chair, can I ask for a three- or four-minute suspension?
    Sure.
    The committee will suspend for three minutes.
     We're suspended.

  (1750)  


  (1755)  

    The Chair: The meeting has now resumed.
    Mr. Fragiskatos, you called for the suspension.
    Are we on the amendment?
    Yes. It's the amendment by Ms. Ferreri.
    I'd like to go to a vote. I think we're ready to proceed to a vote on the amendment.
    I just want to clarify that I'm asking for a start date for this study of December 1.
     Thank you very much for that specific date for the estimates.
    Before December or starting in December...?
    We're saying “started on” as opposed to “before”.
    You cannot amend your own amendment, but I will allow it as a clarification.
    Thank you, Chair.
    Okay. Ms. Ferreri has clarified her amendment.
    I have Ms. Chabot on screen with her hand up.
    Go ahead, Ms. Chabot.

[Translation]

    Could you clarify something, please, Mr. Chair? The motion we got last week was about inviting five ministers for two hours each. Is that the motion we are debating right now? If I understand correctly, there is now an amendment on the floor to have the study start on December 1.
    I agree with the substance of the motion—to invite the ministers—but when we get into time frames, I think it undermines committee business and the priorities we've set. At six o'clock, so in three minutes, we are supposed to deal with committee business. Is it possible to keep discussing that and put the current discussion on hold?
    Inviting five ministers for two hours each starting on December 1 would obviously delay our agenda for December.

[English]

     That's correct, Madame Chabot. Should the committee vote to accept Ms. Ferreri's amendment, it would change the agreed-to calendar. The motion names six ministers, not five, so that would take longer.
    Are we okay to go to a vote on the....
    Madame Chabot.

[Translation]

    Does the main motion still call for two hours with each minister?

[English]

    That's correct.

[Translation]

    If I understand the rules correctly, we vote on the amendment first. Then, we debate the motion.
    Is that correct?
    Yes, that's correct.
    Thank you.

[English]

    Mr. Fragiskatos, do you want the floor?
    I just want to say that we're ready to move to a vote, Mr. Chair.
    Madame Chabot, you still have your hand up. Are you okay?
    I'll ask the clerk to call a recorded vote on Ms. Ferreri's amendment to Mrs. Gray's main motion. I'll get the clerk to read the amendment.

  (1800)  

    Thank you, Mr. Chair.
    The amendment is to replace “before Friday, December 1st, 2023” with “starting on Friday, December 1st, 2023”.
    (Amendment negatived: nays 7; yeas 4)
    Now that we're back to the main motion, I do have an amendment that I think takes into account what Ms. Chabot has raised and that would allow the committee, Mr. Chair, to look at these issues and also proceed along the lines of what we've already agreed to for an agenda.
    We know what the main motion is, so I'll just begin at the word “appear” to make it efficient here. My amendment would be as follows: “appear before the Committee for no fewer than one hour each, in two panels of three, to consider the Supplementary Estimates (B).”
    Madame Chabot had her hand up first.
    Madame Chabot, do you wish to speak on the amendment by Mr. Fragiskatos?

[Translation]

    When I put my hand up, I also wanted to propose an amendment to schedule one hour with each minister, instead of two hours. I take it that Mr. Fragiskatos's new amendment does that, so I'm in favour of it.
    Thank you.

[English]

    Now we have Ms. Falk on the amendment of Mr. Fragiskatos.
    Thanks, Mr. Chair.
    Just for confirmation.... That's not one hour per minister but would be three ministers for one hour, for a total of two hours. Is that correct?

[Translation]

    Yes, that's right.

[English]

     Okay.
    I think this sets a very bad precedent when it comes to transparency. In the past, I know that we have had one hour for a minister and one hour for their department. The minister usually brings departmental staff to answer any technical questions they may need assistance with. I just think this sets an awful precedent for whoever is in government, today or in the future. It skirts around transparency, especially when we have a government like this that spends billions upon billions upon billions. There seem to be slush funds in places.
    It's absolutely unacceptable for me to agree to anything less than the tradition we've had in this committee for a very long time of two hours for each minister. I think it looks like a cover-up. It looks like the Liberals are continuing to hide from accountability. It's very sad.
    I have a point of order, Mr. Chair.
    Clearly state your point of order.
    It's around process. We've had deliberations now on motions and amendments for almost 20 minutes. Unless you're going to clearly say you're going to allocate more time.... It's past six o'clock, and I know this time was scheduled for committee business.
    I think having the witnesses stay here for 20 minutes while we conduct our business is very disrespectful to these very hard-working professional people. In some cases they have flown across the country to provide information, and here we just stop our entire process so that we can debate motions.
    It's perfectly correct that you're allowed to do that, but I think we need to be very clear with the witnesses and let them know whether the intention is to have them stay and continue to provide information, or whether we can thank them and release them from their testimony at this point.

  (1805)  

    Thank you, Mr. Coteau.
    I'm going to take the prerogative as chair and advise the witnesses that they can exit at this time. We were scheduled up until six o'clock. We will move to committee business.
    Witnesses, thank you for appearing before the committee today for this study and providing your testimony. You can choose to exit at your discretion.
    We will now go back to Mr. Fragiskatos. You had your hand up.
    Just to be clear, the amendment ends at “Supplementary Estimates (B)”, so I also want to strike the words “before Friday, December 1st, 2023.”
    Is that a new amendment?
    No, it's the same amendment. I'm just clarifying my full amendment.
    I allowed one clarification because it didn't alter the substance.
    Madame Chabot had her hand up. Then I'll go to whoever else.
    Madame Chabot, on the amendment by Mr. Fragiskatos.

[Translation]

    It's true that the committee now has six ministers within its purview, which wasn't the case before. If we want each minister to appear for two hours, we would have to schedule six full meetings. However, inviting three ministers at the same time and questioning them for six minutes, or two minutes in our case, is not much in the way of scrutiny or the democratic process. I think it would be better to schedule one hour with each minister.
    I'm not there in person, but I would have liked to propose an amendment to the original motion, to invite each minister for one hour, so that's what I'm proposing, Mr. Chair.

[English]

    Madame Chabot, are you making a subamendment to Mr. Fragiskatos' amendment, or is it just a discussion point?

[Translation]

    I'd like to propose a subamendment, but I don't have the text of the member's amendment. It would be helpful to have a copy before I propose my subamendment. Basically, I just want to remove the part that says “two panels of three”, in reference to the ministers.
    Can we get the text of the amendment, Mr. Chair?

[English]

     Thank you, Ms. Chabot.
    The amendment that was provided—

[Translation]

    Could you read it again, please?

[English]

    I will have it reread.
    I'll get the clerk to read the amendment by Mr. Fragiskatos.
    Thank you, Mr. Chair.
    After “appear”, you would read it as “before the committee for no fewer than one hour each, in two panels of three, to consider the Supplementary Estimates (B).”

[Translation]

    Thank you to the chair and the clerk.
    I propose removing the part of the amendment between commas, in other words, the reference to “two panels of three”. That is my subamendment.

  (1810)  

[English]

     Thank you.
    Is there discussion on the subamendment by Madame Chabot?
    Mr. Fragiskatos, we're now on the subamendment by Ms. Chabot.
    Since we don't have it in writing, I wonder whether we could just move to the vote on my amendment. Then, if Ms. Chabot wants to raise it at a later meeting, we can look at it.
    Procedure-wise, we have a subamendment before the committee. We will deal with the subamendment according to procedure.
    Mr. Peter Fragiskatos: It was just a creative idea.
    The Chair: Ms. Chabot made a subamendment. It does not have to be provided in writing. The committee must deal with it.
    Go ahead, Ms. Ferreri.
    Thanks, Mr. Chair.
    To Ms. Chabot's subamendment, I appreciate what she's trying to do, but I have a couple of questions here.
     We're going to remove the word.... The subamendment covers this, as well. We're going to completely remove a date, according to the Liberals here. They want to make it open-ended. The ministers can come whenever they want. There is no accountability here.
    Why wouldn't you want to do this now?
    It's fairly common to look at schedules and plan accordingly; plus we have other work we agreed to do. That's how committees work.
    Mr. Chair, this is very slippery. It's odd to me, as somebody sitting here. You have ministers. You have estimates. Not only are the Liberals, right now, trying to get them to not come here for an appropriate amount of time, but they're also now removing a date, so there is no accountability on when they are going to get here. That makes zero sense. They are trying to do two things, and they are trying to be slippery with their words by adding two panels. Thank goodness for the Bloc here, which has tried to remove that with their subamendment.
    Come on. This is gross.
    Is there discussion on the subamendment by Madame Chabot?
    Madame Zarrillo, go ahead on the subamendment.
    Mr. Chair, I have a point of order.
    We're supposed to be adjourned by now. I'm wondering what.... We published that we have work to do at six and that we were doing something different, so I just want to understand. Can we adjourn for the work that we need to do in camera?
    Yes, if somebody wants to move adjournment of the first part—
    I'm going to move adjournment, then.
    If we adjourn, the entire meeting is adjourned.
    I move adjournment, because we are scheduled to be in camera at six o'clock.
    We can adjourn, but I would need a motion to go to the business part of the meeting. You can call for adjournment of the meeting.
    I'd like to call for adjournment of the meeting, so we can move into the business part of this meeting.
    If you adjourn the meeting, the meeting is finished. You can adjourn the debate instead and then make a motion to move into committee business.
     Okay. Adjourn debate, then, so we can move to committee business, please.
    Is it agreed to adjourn debate?
    Ms. Rosemarie Falk: I would like a recorded vote.
    The Chair: We will do a recorded vote on adjourning debate. The clerk will read it in.
    The vote will be to adjourn the debate on the subamendment, the amendment and the motion itself. It's the whole thing. You cannot adjourn the debate on the subamendment alone, because the amendment and the motion would still be before the committee.
    Go ahead, Mr. Aitchison.
    Thank you.
    Does that mean the debate is adjourned but we immediately go to votes on the amendment, subamendment and all that kind of stuff, or is it just over and we move on to the next thing?
    It's just a process question from me.
    If the motion to adjourn the debate is adopted, we go back to the public meeting, where we were before.
    If you want to go in camera, we will have to suspend.

  (1815)  

    Is everybody clear on what we're voting on?
    We're voting on Ms. Zarrillo's—
    It's on an adjournment of the debate on the motion.
    The Chair: It's Madame Zarrillo's motion.
    It's to move into the in camera meeting, as scheduled.
    (Motion agreed to: yeas 6; nays 5)
    We will now need a motion to move to the in camera business portion of the meeting.
    Mr. Chair, I'd like to move that we go to the in camera portion of this meeting, as scheduled.
    Do we all agree?
    Some hon. members: Agreed.
    The Chair: Okay. We'll suspend for two minutes while we move to the in camera portion to conclude the meeting.
    [Proceedings continue in camera]