Thank you very much for the opportunity to speak here. I really appreciate the standing committee dealing with these issues. My name is Ben Wagner. I'm with the Privacy and Sustainable Computing Lab in Vienna.
We've been working closely on these issues for some time, specifically trying to understand how to safeguard human rights in a world where artificial intelligence and algorithms are becoming extremely common. This has included helping prepare Global Affairs Canada for the G7 last year. It was a great pleasure to work with colleagues there like Tara Denham, Jennifer Jeppsson and Marketa Geislerova.
The results produced there are, I think, quite relevant for this committee as well. You have the Charlevoix Common Vision for the Future of Artificial Intelligence. Related to that, last year we were also working—this is now in a Council of Europe context—on a study on the human rights dimensions of algorithms, which I also think would be extremely helpful, especially if you're discussing studies and common challenges. Many of the common challenges you're discussing are already mentioned in these G7 documents and also in the statements developed by the Council of Europe.
To come back to a more general understanding of why this is important, artificial intelligence or AI is frequently thought of as some unusual or new thing. I think it's important to acknowledge that this is not a new and unusual technology. Artificial intelligence is here right now and is present in many existing applications that are being used.
It's increasingly permeating life-worlds, and it will soon be difficult to live in the modern world without having AI touch your life on a daily basis. Its deep embedding in societies of course poses considerable challenges, but also opportunities. When we look specifically at the ethical and regulatory dimensions, as I believe this committee is doing, it's extremely important to ensure that all citizens have access to the opportunities of these technologies and that those opportunities are not limited to just a select few.
With regard to how that can be done, there is a variety of challenges and different issues. One of the most common is whether we talk about an ethical framework or a more regulatory governance framework. I think it's important that they not be played off against each other. Ethical frameworks have their place. They're extremely important and extremely valuable, but of course they can't override or prevent governance frameworks from functioning. Indeed, it would be difficult if they could. But if they function in parallel in a useful and sustainable manner, that can be quite effective.
The same is true even if you take a more governance-oriented human rights-based framework. It's very frequent that in these contexts different human rights are played off against each other. The right to freedom of expression is seen as being more important than the right to privacy. The right to privacy is seen as being more important than the right to free assembly, and so on. It's very important that in developing standards and frameworks in this context, we always consider all human rights and that human rights be the basic foundation for how we think about algorithms and artificial intelligence.
If you look at the Charlevoix documents that were developed last summer, you'll also note a considerable focus on human-centric artificial intelligence. While that's an extremely important design component, I think it's also important to acknowledge that a human-centric focus alone is not enough. At the same time, while we're seeing an increasing number of automated systems, many of the actors developing these systems are not willing to admit how they're actually developing them or what exact elements are part of them.
It's often joked that many of the artificial intelligence systems described in start-up business plans are closer to the Mechanical Turk—that is to say, human labour—than to actual advanced artificial intelligence systems. This human labour often gets lost along the way or fails to be acknowledged.
This is also relevant in the context of the extra-legal frameworks that are frequently applied when we talk about ethical frameworks, frameworks that don't govern in the way the rule of law can. I think we need to be extremely careful there about the extent to which frameworks like this actually come to replace or override the rule of law. That's specifically the case in lots of the conversations we see right now. I'm sure you will have heard about Google's AI ethics board, which was recently created and then shut down within the space of just a week or two.
You'll notice that there's a great push by some actors to try to be more ethical, but an ethical framework alone is not enough, and the actors realize this, given the heavy criticism you see. Again, that isn't to say that ethics isn't important or necessary, but that ethics needs to be done right if it's going to have a meaningful impact. That means there's a strong role for the public sector as well. We can't allow ethics washing. We can't allow ethics shopping. We can't allow the bar to be lowered below the standards we already have.
As I'm sure you are aware, the existing standards in many areas of public governance—the existing norms on how we govern technology and the activities of corporations; take the business and human rights framework of the United Nations, for example—are already relatively weak. In some areas, there's a danger that these ethical principles will fall even below existing business and human rights standards.
At the same time, to strike a more positive note as well, there is an extremely important role for the public sector here, and I want to commend specifically the work of Michael Karlin, who has done some fantastic work on algorithmic impact assessments for the Government of Canada. There's a really important measure to be seen there, with Canada taking a lead and showing what is possible in the context of these algorithmic impact assessments. I can definitely commend his work.
At the same time, when you look at the recent accusations now that Facebook has been breaking Canadian privacy laws, we have a serious issue related to implementation. Specifically, these breaches that have been of concern to numerous Canadian privacy regulators do raise a question. Can we just focus on the public sector alone and can the public sector alone lead the way, or do we need to take similar considerations for, at the very least, large, powerful private sector companies? Because in the world we live in right now, whether you're talking about opening a bank account, posting something on Facebook, talking to a friend online or even getting a pizza delivery, algorithms and AI are part of every step that takes place in that context.
Unless we're willing to limit the agency of these algorithms, they increasingly begin to dominate us in ways that are democratically relevant. This is not a Terminator-like scenario where we need to be scared that the robots will come and take over the world.
It's rather that, through these technologies, a lot of power becomes concentrated in the hands of very few human beings. These are precisely the types of situations that democratic institutions, such as the parliamentary committee hearing about this topic right now, were built to deal with: to ensure that the power of the few is spread to the many; to ensure that access to AI and its benefits, and to the foundational promise of AI that technology can make people's lives better, is available to every human being, both inside Canada and beyond; and to ensure that basic human rights provide the core foundation of how we develop and think about technology in the future.
Thank you very much for listening. I look forward to answering any questions you might have.
Hello. My expertise is in computer science. I've been a pioneer of deep learning, the area that has changed AI from something happening in universities into something that now plays a big economic role, with billions being invested in industry.
In spite of this remarkable progress, it's also important to realize that current AI systems are very far from human-level AI. In many ways they are weak. They don't understand the human context, of course. They don't understand moral values. They don't understand much, but they can be very good at a particular task, and that can be very economically useful. We have to be aware of these limitations.
For example, if we consider the application of these tools in the military, a system that is going to take a decision to kill a person doesn't have the moral context a human would have to perhaps disobey the order. There's a red line, which the UN Secretary-General has talked about, that we shouldn't be crossing.
Going back to AI and Canada's role, what's interesting is that we've played a very important role in the development of the recent science of AI, and we are clearly recognized as a scientific leader. We are also playing a growing role on the economic side. Of course, Canada is still dwarfed in comparison to Silicon Valley, but our tech industry around AI is growing very rapidly, and because of our scientific strength we have a chance to become not just a consumer of AI but also a producer. That means Canadian companies are getting involved, which is important to keep in mind as well.
The thing that's important, in addition to the scientific leadership and our growing economic leadership regarding AI, is moral leadership, and Canada has a chance to play a crucial role in the world here. We have already been noticed for this. In particular, I want to mention the Montreal declaration for responsible development of AI, to which I contributed and which is really about ethical principles.
Ten principles have been articulated, with a number of subprinciples for each. This effort is interesting and different from other attempts to formalize the ethical and social aspects of AI because, in addition to bringing in experts in AI and scholars in the social sciences and humanities, ordinary people also had a chance to provide feedback. The declaration was modified thanks to that feedback, with citizens attending workshops in libraries, for example, where they could discuss the issues presented in the declaration.
In general for the future, I think it's a good thing to keep in mind that we have to keep ordinary people in the loop. We have to educate them so they understand issues because we will take decisions collectively, and it's important that ordinary people understand.
When I give talks about AI, often the biggest concerns I hear are about the effect of AI on automation and jobs. Clearly, governments need to think about that, and that thinking must be done quite a bit ahead of the changes that are coming. If you think about, say, changing the education system to adapt to a new wave of people who might lose their jobs in the next decade, those changes can take years, even a decade, to have a real impact. So it's important to start these things early. The same is true if we decide to change our social safety net to adapt to these potentially rapid changes in the job market. These things should be tackled fairly soon.
I have another example of short-term concerns. I talked about military applications. It could be really good if Canada played more of a leadership role in the discussions currently taking place around the UN on the military use of AI and the so-called “killer drones” that can use computer vision to recognize people and target them.
There's already a large coalition of countries expressing concern and working on drafting an international ban. Even if not all countries—even major countries such as the U.S., China or Russia—go along with such an international treaty, I think Canada can play an important role. A good example is what we did in the nineties with anti-personnel mines and the treaty that was signed in Canada. That really had an impact. Even though countries such as the U.S. didn't sign it, the social stigma attached to anti-personnel mines, thanks to the ban, has meant that companies have gradually stopped building them.
Another area of concern from an ethical point of view has to do with bias and discrimination, which is something that is very important to Canadian values. I think it's also an area where governments can step in to make sure there's a level playing field between companies.
Right now, companies can choose to use one approach—or no approach at all—to try to tackle the potential issues of bias and discrimination in the use of AI, which come mostly from the data those systems are trained on. But there will be a trade-off between their use of these techniques and, say, the profitability or the predictive performance of the systems. If there is no regulation, what's going to happen is that the more ethical companies will lose market share to the companies that don't have such high standards. It's important, of course, to make sure that all those companies compete on a level playing field.
Another interesting example is the use of AI, not necessarily in Canada but in other countries, because these systems can be used to track where people are by, again, using cameras all over the place. Such surveillance systems, for example, are currently being sold by China to some authoritarian countries. We are probably going to see more of that in the future. It's something that is ethically questionable. We need to decide whether we want to just not think about it, or to have some sort of regulation to make sure these potentially unethical uses are not something our companies are going to be doing.
Another area that's interesting for government to think about is advertising. As AI becomes gradually more powerful, it can influence people's minds more efficiently. In using information that a company has on a particular user, a particular person, the advertising can be targeted in a way that can have much more influence on our decisions than older forms of advertising can. If you think about things like political advertising, this could be a real issue, but even in other areas where that type of advertising can influence our behaviour in ways that are not good for us—with respect to our health, for example—we have to be careful.
Finally, related again to targeted advertising is the use of AI in social networks. We've seen the issues with Cambridge Analytica and Facebook, but I think there's a more general issue about how governments should set the rules of the game to minimize this kind of influencing through targeted messages. It's not necessarily advertising, but equivalently somebody is paying to influence people's minds in ways that might not agree with what they really think or with what's in their best interests.
Related to social networks is the question of data. A lot of the data being used by companies like Google and Facebook, of course, comes from users. Right now, users sign a consent form that basically allows those companies to do whatever they want with that data.
There's no real bargaining power for a single user facing those companies, so various organizations, particularly in the U.K., have been thinking about ways to bring back some sort of balance between the power of these large companies and the users who are providing the data. There's the notion of a data trust, which I encourage the Canadian government to consider as a legal approach to make sure users can aggregate—you can think of it like a union—and negotiate contracts that are aligned with their values and interests.
I want to talk more about regulation than ethics, particularly because of the most recent example, in which Facebook said to our Privacy Commissioner, “Thanks for your recommendations; we're not going to follow them.” So I think we need stronger rules.
Mr. Wagner, in a recent article, one of the three examples you use about AI is social media content moderation. At this committee we've talked about algorithmic transparency. In the EU it's algorithmic explainability. In that article you noted that it's unclear what that looks like. It's a new idea, obviously, in the sense that, when we've spoken to the U.K. information commissioner and had recent conversations with the EU data protection supervisor, they are just scaling up their capacity to address this issue and to understand what this looks like.
Having looked at this issue yourself and written about it, when we talk about algorithmic transparency, is there a practical understanding we ought to have? It's one thing to make a recommendation on algorithmic transparency. What should it specifically look like?
Thanks to both of you for appearing before the committee today.
Professor Bengio, you're to be congratulated for the work that you did on the Montreal declaration for responsible development of artificial intelligence, but as this committee has learned, and as the public is—I hope—increasingly aware, much of the development of artificial intelligence has been funded by the “data-opolies”, by the Facebooks and by the Googles, whose disregard for written and unwritten ethical guidelines and laws is, as we keep learning, increasingly notorious.
Just in passing, and you may not be aware of it: when this committee visited Facebook's headquarters in Washington last year and asked if the company would accept increased regulation in Canada, we were told, almost in a passing comment, that the sort of investment they made in the AI hub in Montreal might not continue to be forthcoming, which hit me like a clunker. It was basically a threat from a “data-opoly” that Canada would be ostracized from AI investment should we increase regulation, even along the lines of the EU's GDPR or elements of it.
The question is to both of you. Large companies are already using and exploiting artificial intelligence in a variety of very commendable, wonderful ways, but also, in any number of ways that disregard ethical and legal guidelines. Should they be responsible for the misuse or the abuse of AI that occurs on their platforms?
I think threats of this kind are quite indicative of a general regulatory challenge, which is that every country wants to be the leading country on AI right now, and that doesn't always lead to the best regulatory climate for the citizens of those countries.
There seems to have been some kind of agreement between the Government of the Netherlands and the automobile industry that is building AI into self-driving cars: not to look so closely when they build a factory there, in order to ensure that the factory brings jobs and investment to the country.
I think that the impact of AI and these technologies will be sufficiently transformative that while these large U.S. giants seem quite important right now, that may not be the case in a few years' time. A lot of the time I think the danger is actually the other way around. The public sector has historically invested a lot more than many people are aware of, and a lot of the fortunes of these large well-known companies are based on that. Of course, in political terms, it always looks more attractive to have Google, Facebook or Tesla as part of your local industries, because this also sends a political message.
I sense that this is part of the challenge that has led regulators down the path where we now have real regulatory gaps. I would also caution against expecting just information commissioners or privacy regulators to be able to respond to this. It's also media regulators, people responsible for elections, and people responsible for ensuring that, on a day-to-day basis, competition functions.
All of these regulators are heavily challenged by new digital technologies, and we would be wise as a society to take a step back and make sure they're really able to do their job as regulators, that they have access to all of the relevant data. We may find that there are still regulatory gaps where we possibly need even additional regulatory authorities.
There, I think the danger is to say we just want progress; we just want innovation. If you do that a few times and keep allowing that to be a possibility.... It doesn't mean that you have to say no to people like Facebook or Google if they want to invest in your country, but if you start getting threats like this, I would see them as exactly what they are: a futile attempt to resist the change that is already coming.
You're raising I think some very disturbing, broad questions that are so much beyond the scope of our committee and what we do as politicians. My day job is to get Mrs. O'Grady's hydro turned back on—her electricity. That's what keeps me elected.
However, when we're talking about AI with you, we're talking about the potential for mass dislocation of employment. What would that mean for society? We have not even had conversations about this. There's also the human rights impact, particularly of exporting AI to authoritarian regimes, and what that would mean.
For me, trying to understand it, there are the rights of citizens and personal autonomy. The argument we were sold—and I was a digital idealist at one point—was that we'd have self-regulation on the Internet and that would give consumers choice; people would make their decisions and they'd click the apps that they like.
When we're dealing with AI, you have no ability as a citizen to challenge a decision that's been made, because it's been made by the algorithm. Whether or not we need to look at having regulation in place to protect the rights of citizens....
Mr. Wagner, you wrote an article, “Ethics as an Escape from Regulation: From ethics-washing to ethics-shopping?”
How do you see this issue?
I'm sure you've guessed, from the title you mention, that I do see the rise of ethics as an escape from regulation—and I explicitly wouldn't include the Montreal declaration in this, because I don't think it's a good example of that. There are certainly many cases of ethical frameworks that provide no clear institutional framework beyond them. A lot of my work has been focused, essentially, on getting people either to do human rights and governance or, if they will do ethics, to take ethics seriously and really ensure that the ethical frameworks developed as a result are rigorous and robust.
At the end of the article you mentioned, there is literally a framework of criteria for how to go through this: external participation, external oversight, transparent decision-making and non-arbitrary lists of standards. Ethics doesn't substitute for fundamental rights.
To come back to the example you mentioned of self-regulation on the Internet and how we all assumed that would be the path that would safeguard citizens' autonomy, I think that's been one of the key challenges. This argument has been misused so much by private companies, which then say, for example, “Well, we have a million likes, and you only have 500,000 votes. Surely our likes are worth as much as your votes.” I don't even need to explain that in great detail. It's this logic that lots of clicks and lots of likes can surely be seen as the same thing as votes. In a democratic context, this is extremely difficult.
Lastly, you specifically mentioned exporting AI to authoritarian regimes. I think there is a strong link between the debates we have about exporting AI to authoritarian regimes and how we trade in and export surveillance technologies. A lot of extremely powerful technologies are getting into the wrong hands right now. Limiting that, or ensuring dual-use controls for certain types of technology through agreements like the Wassenaar Arrangement and others, will become increasingly important.
We have existing mechanisms and existing frameworks to do this. But unless we're willing to implement them, and sometimes also to say that we will act collectively as a group even if this means slightly less—and I emphasize “slightly less”—economic growth, so that we can take more leadership on this issue, it's going to be very difficult for these short-term economic gains to meaningfully provide for a human rights environment we would want to stand behind in the years and decades to come.
I'm a music buff. Every morning I wake up and YouTube has selected music for me. Their algorithms are pretty good, and I watch what they offer. I'm also a World War II buff, and YouTube offers me all kinds of documentaries. Some of the documentaries that come up in my feed are on the “great historian” David Irving, who is a notorious Holocaust denier.
Now, I have white hair; I know what David Irving is, but a high school student doesn't. His content has a lot of likes because a lot of extremists are promoting it. The algorithm is pushing us towards seeing content that would otherwise be illegal.
In terms of self-regulation, I look at what we have in Canada. In Canada, we have broadcast standards for media. That doesn't mean we don't have all manner of debate and crazy commentary, and people are free to do it, but if someone was on radio or television promoting a Holocaust denier, there would be consequences. When it's YouTube, we don't even have a proper vehicle to hold them to account.
Again, in terms of the algorithms pushing us towards extremist content, do you believe that we should have some of the same kinds of legal obligations that exist for regular broadcast media? You're broadcasting this. You have an obligation. You have to deal with this.
Unfortunately, there is a lot of confusion in many people's understanding of AI. A lot of it comes from the association we make with science fiction.
The real AI on the ground is very different from what you see in movies. The singularity, for instance, is just a theory: the theory that once AI becomes as smart as humans, the intelligence of those machines will just take off and become infinitely smarter than we are.
There is no more reason to believe this theory than there is, say, to believe some opposite theory that once they reach human-level intelligence it would be difficult to go beyond that because of natural barriers that one can think of.
There is not much scientific support to really say whether something like this is an issue, but there are some people who worry about what would happen if machines became so intelligent that they could take over humanity of their own will. Because of the way machines are designed today—they learn from us and they are programmed to do the things we ask them to do and that we value—as far as I'm concerned, this is very unlikely.
It's good that there are some researchers who are seriously thinking about how to protect against things like that, but it's a very marginal area of research. What I'm much more concerned with, as are many of my colleagues, is how machines could be used by humans and misused by humans in ways that could be dangerous for society and for the planet. That, to me, is a much bigger concern.
Our current level of social wisdom may not grow as quickly as the power of these technologies. That's the thing I'm more concerned about.
There's a challenge in that if we assume human intervention alone will fix things, we will also be in a difficult situation, because human beings, for all sorts of reasons, often do not make the best decisions. We have many hundreds of years of experience in dealing with bad human decision-making and not so much experience in dealing with mainly automated decision-making. The best types of decisions tend to come from a good configuration of interactions between humans and machines.
If you look at how decisions are made right now, human beings often rubber-stamp the automated decision made by AIs or algorithms and say, “Great, a human decided this”, when actually the reason for doing so is to evade various legal regulations and human rights principles. That is why we use the term “quasi-automation”. It seems like it's an automated process, but then you have three to five seconds where somebody is looking it over.
In the paper I wrote, and also in the guidelines of the Article 29 Working Party, criteria were developed for what is called “meaningful human intervention”, and only intervention of that kind counts. When human beings have enough time to understand the decision they're making, enough training and enough support to be able to actually change the decision, then it's considered meaningful decision-making.
It also means that if you're driving in a self-driving car, you need enough time as an operator to be able to stop, to change course and to make decisions, and a lot of the time we're building technical systems where this isn't possible. If you look at the two recent crashes of Boeing 737 Max aircraft, that's exactly this kind of example: an interface between technological systems and human systems where it became unclear how much control human beings had, and whether, even if they could press the big red button and override the automated system, that control was actually sufficient to allow them to control the aircraft.
As I understand the current debate, that's an open question, and it is being faced now with autopilots and other automated aircraft systems. This will increasingly lead to the same questions in everyday life, not just about aircraft but also about insurance systems, about how you post comments online and about how government services are provided. It's extremely important that we get this right.
Some things that you said are pure fiction, but others are cause for concern.
I think that we should be concerned about a system that uses artificial intelligence and is programmed to target, for example, all parliamentarians in a certain country. This situation is quite plausible from a scientific point of view, since it involves only technological issues related to the implementation of this type of system. That's why several countries are currently discussing a treaty that would ban these types of systems.
However, we must remember that these systems aren't really autonomous at a high level. The systems will simply follow the instructions that we give them. As a result, a system won't decide on its own to kill someone. The system will need to be programmed for this purpose.
In general, humans will always decide what constitutes good or bad behaviour on the part of the system, much like we do with children. The system will learn to imitate human behaviour. The system will find its own solutions, but according to criteria or an objective chosen by humans.
Yes, it's quite possible.
Your example of a machine that assigns the work already exists. For example, today, couriers who carry letters from one end of the city to the other are often guided by systems that use artificial intelligence and that decide who will carry a given package. There's no longer any human contact between the dispatcher and the person performing the tasks.
As technology advances, obviously more and more of these jobs, especially the more routine jobs, will be automated. In the courier example that I just provided, the dispatcher's job was the most routine and easiest to automate. The work of a human who walks the streets of the city is more difficult to automate at the moment. However, it will probably happen eventually.
It's very important for governments to plan, anticipate the future and think about measures that will minimize the human misery that may result from this development if it were left to run its course.
You're asking a good question, but I don't think that there's a general answer.
This requires experts who will review the ethical and moral issues, along with the technological and economic concerns, in each relevant area. The goal is to establish guidelines that both foster innovation and protect the public. I think that this is generally possible. Of course, several companies have protested that there shouldn't be too many barriers. However, in most cases, I don't believe that the expected outcome poses a problem.
As we said earlier, there are issues in some situations, but there's no easy solution. We specifically talked about [Technical difficulty—Editor] illegal videos on Facebook. The issue is that we don't yet have the technology to identify these videos quickly enough, even though Facebook is researching ways to improve this type of automatic identification. However, not enough humans are monitoring everything put on the Internet in order to remove things quickly and prevent things from being posted.
The task is practically impossible, and there are only three possible solutions. We can shut everything down, wait until we've developed better technology, or accept that things aren't perfect and that humans carry out the monitoring. In fact, this is already the case right now, when people have the opportunity to click on a button to report unacceptable content.
In my experience, when people try to develop general regulations for all of AI, all algorithms or all technology, the result never ends up being quite appropriate to the task.
I think I agree with Mr. Bengio in the sense that certain types of international regulation, for example, would be focused on automated killer systems, let's say. There is already an extensive process under way on this in Geneva and in other parts of the world, which I think is extremely important.
There is also the question of whether Canada itself wants to become a state with protections equivalent to the GDPR. That, I think, is a relevant consideration, and it would considerably improve both flows of data and the protection of privacy.
I think all other areas need to be looked at in a sector-specific way. If we're talking about elections, for example, AI and other automated systems will often abuse existing weaknesses in regulatory environments. So how can we ensure that campaign finance laws, for example, are improved in specific contexts, and improved in a way that takes automation into account? When we're talking about the media sector and issues related to it, how can we ensure that our existing laws adapt to and reflect AI?
I think if we build on what we already have, rather than developing a new cross-sectoral rule for all of AI and all algorithms, we may do a better job.
I think that also holds at the international level, where it's very much a case of building on and developing what we already have, whether it's related to dual-use controls, to media or to challenges around elections. There are already existing instruments there, and I think that's more effective than a one-size-fits-all AI treaty.
I think automated drones, or what are termed LAWS—lethal autonomous weapons systems—are definitely an area where further focus is required. I would also say that what's been mentioned here about the spread or proliferation of surveillance and AI technologies that can be misused by authoritarian governments is another area where there is an urgent need to look more closely.
Then, of course, you have whole sectors that have been mentioned by this committee already—media, hate-speech-related issues and issues related to elections. I think we have a considerable number of automated technical systems changing the way the battleground works, and how existing debates are taking place.
There's a real need to take a step back, as was mentioned and discussed before, in the context of AI potentially being able to solve or fix hate speech. I don't think we should expect that any automated system will be able to correctly identify content in a way that would prevent hate speech or deal with these issues. Instead, I think we need a broad set of tools. It's precisely not relying on just humans or on fully automated technical solutions, but instead developing a wide tool kit of measures that design and create spaces of debate we can be proud of, rather than getting stuck in a situation where we say, “Ah, we have this fancy AI system that will fix it for you.”
Mr. Bengio, my riding is bigger than Great Britain, and I live in my car. My car is very helpful. It tells me when I'm tired, and it tells me when I need to take a break, but it's based on roads that don't look like roads in northern Ontario. I'm always moving into the centre lane to get around potholes, to get around animals and to get away from 18-wheelers. I start watching this monitor, and sometimes I'm five minutes from the house and it's saying I've already exceeded my safety capacity.
I thought, well, it's just bothering me and bugging me. I'll break the glass. Then I read Shoshana Zuboff's book on surveillance capitalism and how all this will be added to my file at some point. This will be what I'm judged on.
To me, it raises the question of the rights of the citizen. The citizen has personal autonomy and the right to make decisions. If I, as a citizen, get stopped by the police because I made a mistake, he or she judges me on that, and I can still take it to some level of challenge in court if I'm that insistent. That is fair. That's the right of the citizen. Under the systems that are being set up, I have no rights based on what an algorithm designed by someone in California thinks a good roadway is.
The question is, how do we reframe this discussion to talk about the rights of citizens to actually have accountability, so their personal autonomy can be protected and so decisions that are made are not arbitrary? When we are dealing with algorithms, we have yet to find a way to actually have the adjudication of our rights heard.
Is that the role you see legislators taking on? Is it a regulatory body? How would we insist that, in the age of smart cities and surveillance capitalism, the citizen still has the ability to challenge and to be protected?
It's interesting. This question is related to the imbalance of power between users and large companies in how data is used. You have to sign these consent agreements; otherwise you can't be part of, say, Facebook.
It's similar with the way these products are defined remotely. As users, we don't have access to the details of how this is done. We may disagree with the decisions that are made, and we don't have any recourse.
You are absolutely right. The balance of power between users and companies that are delivering those products is something that maybe needs rethinking.
As long as the market does its job of providing enough competition between comparable products, then at least there is a chance for things to be okay. Unfortunately, we're moving towards a world where these markets are dominated more and more by just one or a few players, which means that users don't have a choice.
I think we have to rethink things like our notions of monopolies and maybe bring those rules back. One way or another, we need to make sure we re-equilibrate the power differential between ordinary people and the companies building these products.
I want to thank our witnesses.
I think it's alarming just for you to say that essentially AI is largely unregulated. We're seeing that with data-opolies as well, and we're really trying to grasp what we do as regulators to protect our citizens.
The challenge is before us, and it's certainly not easy, but I think we will take your advice. Mr. Wagner, you said to start early. It already feels like we're too late, but we're going to do our best.
I want to thank you for appearing today from Vienna, and from Montreal as well.
We're going to suspend for a few minutes to get our guests out so we can get into committee business.
[Proceedings continue in camera]