:
Thank you so much for inviting me.
I have been researching the ethical challenges of algorithms and AI for nearly half a decade. What's become apparent to me in that time is that the promise of AI stems largely from its apparent capacity to replace or augment any type of human expertise. The fact that it's so malleable in that sense means that the technology inevitably becomes entangled in the ethical and political dimensions of the jobs, the practices and the organizations in which it's embedded. The ethical challenges of AI are effectively a microcosm of the political and ethical challenges that we face in society, so recognizing and solving them is certainly no easy task.
I know, from witnesses in your previous sessions, that you've heard quite a bit about the challenges of AI, dealing with things such as accountability, bias, discrimination, fairness, transparency, privacy and numerous others. All those are extremely important and complex challenges that deserve your attention, and really the attention of policy-makers worldwide, but in my 10 minutes I want to focus less on the nature and extent of the ethical challenges of AI and more on the strategies and tools we have for solving them.
You've heard also quite a bit about the tools available to address these ethical challenges, using things such as algorithmic and social scientific auditing, multidisciplinary research, public-private partnerships, and participatory design processes and regulations. All of those sorts of solutions are essential, but my concern is that we're perhaps broadly using the wrong strategy or at least an incomplete strategy for the ethical and legal governance of AI. As a result, we may be expecting too much from our current efforts to ensure AI is developed and used in an ethically acceptable manner.
In the rest of my statement, what I want to address are the significant shortcomings that I see in current efforts to govern AI, specifically through data protection and privacy law on the one hand and through principled self-governance on the other. My principal concern here is that these strategies too often conceive of the ethical challenges of AI in an individualistic sense, when in fact they are collective challenges that require collective solutions.
To start with data protection and privacy law, responsibility far too often falls on the shoulders of individuals to protect their vital interests: their privacy, autonomy, reputation and those sorts of things. Data protection law too often ends up protecting data rather than the people the data represents. That shortcoming can be seen in several areas of law globally. The core concepts of data protection and privacy law—personal data, personally identifiable information and so forth—are typically defined in relation to an identifiable individual, which means that the data must be able to be linked to an individual in order to fall within the scope of the law and thus to be protected by it.
The emphasis on the individual is really mismatched with the capabilities of AI. We're excited by AI precisely because of its ability to find small patterns between people and group them in meaningful ways, and to create generalizable knowledge from individual records or individual data. In the modern data analytics that drive so many of the technologies we think of as AI, the individual doesn't really matter. AI is interested not in what makes a person uniquely identifiable but rather in what makes that person similar to other people. AI has transformed privacy from an individual concern into a collective challenge, yet relatively little attention is actually paid in existing legal frameworks to the collective or group aspects of privacy. I see that as something that really needs to change.
That shortcoming extends to the sorts of legal protections that data protection and privacy law typically offers to individuals and to their data. These protections are still fundamentally based on the idea that individuals can make informed decisions about how they produce data, how that data is collected and used, and when it should not be used. The burden is really placed on individuals to be well informed and to make a meaningful choice about how their data is collected and used.
As is suggested by the name, informed consent only works if meaningful, well-informed choice is actually possible. Again, we're excited about AI precisely because it can process so much data so quickly, because it can identify novel and unintuitive patterns within the data and because it can produce knowledge from them. We're excited because the data analytics that drive AI are so big, so fast and so unpredictable, but the voracious appetite that AI has for personal data, combined with the seemingly limitless and unpredictable reusability of the data, means that even if you're a particularly motivated individual, a well-informed choice about how your data is collected and used is typically impossible. Under those conditions, consent no longer offers meaningful protection or allows individuals to control how their data is collected and used.
Moving forward, in terms of data protection and privacy law in particular, we need to think more about how to shift a fair share of the ethical responsibility to companies, public bodies and other sorts of collectives. Some of the ethical burden that's normally placed on individuals should be placed on these entities, requiring them, for example, to justify their data collection and processing before the fact, rather than leaving it up to individuals to proactively protect their own interests.
The second governance strategy I want to address has seen unprecedented uptake globally. To date, no fewer than 63 public-private initiatives have formed to determine how to address the ethical challenges of AI. Seemingly every major AI company has been involved in one or more of these initiatives and has partnered with universities, civil society organizations, non-profits and other sorts of bodies. More often than not, these initiatives produce frameworks of high-level ethical principles, values or tenets meant to guide the development and use of AI.
The strategy seems to be that the ethical challenges of AI are best addressed through a top-down approach, in which these high-level principles are translated into practical requirements that will act as a guide for developers, users and regulators. The ethical challenges of AI are more often than not presented as problems to be solved through technical solutions and changes to the design process. The rationale seems to be that insufficient consideration of ethics leads to poor design decisions, which create systems that harm people and society.
These initiatives are essentially producing self-regulatory frameworks that are not yet binding in any meaningful sense. It seems as though the blame for unethical AI tends to fall, again, on individuals, the individual developers and researchers who have somehow behaved badly, as opposed to any sort of collective failure of the institutions, businesses or other types of organizations driving development in the first place.
With that in mind, I'm not entirely sure why we assume that top-down principles and codes of ethics will actually make AI, and the organizations that create and use it, more ethical or trustworthy. Governing behaviour through ethical principles is nothing new. We have lots of well-established professions, such as medicine and law, that have used principles for a very long time to define their ethical values and responsibilities, and to govern the behaviour of professionals and the organizations that employ them.
If we can think of AI development as a profession, it very quickly becomes apparent that it lacks several characteristics necessary to make a principled approach actually work in practice.
In the first place, AI development lacks common aims and fiduciary duties to users and individuals. Take medicine as a counterexample: AI development doesn't serve the public interest in the first instance in the same sense. Developers don't have fiduciary duties toward their users or the people affected by AI, because AI is quite often developed in a commercial environment where the fiduciary duty is owed to the company's shareholders. As a result, you can have principles that are intended to protect the interests of users and the public coming into conflict with commercial interests, and it's not clear how those conflicts are going to be resolved in practice.
Second, AI development has a relatively short professional history and it lacks well-established and well-tested best practices. There are professional bodies for software engineering and codes of ethics, but because it's not a legally recognized or licensed profession, professional bodies exercise very little power over their members, in practice. The codes of ethics they do have tend to be more high-level and relatively brief in comparison to other professions.
The third characteristic that AI development is seemingly lacking is proven methods to translate these high-level principles into practical requirements. The methods we do have available tend to exist or have been tested only in academic environments and not in commercial environments. Moving from high-level principles to practical requirements is a very difficult process. The outputs we've seen from AI ethics initiatives thus far have almost universally relied on vague, contested concepts like fairness, dignity and accountability. There's very little offered in the way of practical guidance.
Disagreements over what those concepts mean only come out when the time comes to actually apply them. The huge amount of work we've seen to develop these top-down approaches to AI ethics has accomplished very little in practice. Most of the work remains to be done.
What I would conclude with is that essentially ethics is not meant to be easy or formulaic. Right now we too often think of ethics purely in terms of technical fixes, checklists or impact assessments, when really we should be looking for and celebrating these normative disagreements, because they represent taking ethical challenges seriously and reflect the plurality of opinion that we should expect in democratic societies.
The difficult work that remains for us in AI ethics is to move from high-level principles down to practical requirements. It's really only in doing that and in supporting that sort of work that we'll really come to understand the ethical challenges of AI in practice.
Thank you, and I look forward to your questions later.
[Translation]
Thank you, Mr. Chair and members of the Standing Committee on Access to Information, Privacy and Ethics, for giving me the opportunity to speak with you today about artificial intelligence in radiology, specifically in relation to ethical and legal issues in the implementation of this technology in medical imaging.
My name is Dr. An Tang and I am here representing the Canadian Association of Radiologists (CAR), as chair of the Artificial Intelligence Committee within the CAR.
[English]
The CAR AI working group is composed of more than 50 members who have a keen interest in technology advancement in radiology as it pertains to AI. The composition of this working group is varied: predominantly radiologists, along with physicists, computer scientists and researchers. It also includes a philosopher specialized in the ethics of AI and an academic lawyer.
Under the CAR board of directors' leadership we have been entrusted with taking a global look at AI and the impact it will have on radiology and patient care in Canada.
I believe I speak for most of my colleagues in thinking that this is a good-news story and that AI can dramatically impact the way radiologists practise, in a positive way. Through the collection of data and simulation, using mathematical algorithms, we can help reduce wait times for patients, thus expediting diagnosis and positively affecting patient outcomes.
AI software analyzing medical images is becoming increasingly prevalent. Unlike early generations of AI software, which relied on expert knowledge to identify image features, machine learning techniques can automatically learn to recognize these features with the use of training datasets.
AI can be used for the purpose of detecting disease, establishing diagnosis and optimizing treatment selection. However, for this to be performed accurately, access to large quantities of medical data from patients will be required. This, of course, brings the privacy question into the equation. How do we collect this data while still guaranteeing we are collecting this information in an ethical way that protects the privacy of our patients?
Because of the transition from film to digital imaging that occurred two decades ago in radiology, and because of the availability of digital records for each imaging examination, radiology is well positioned to lead the development and implementation of AI and to manage associated ethical and legal challenges.
[Translation]
CAR believes that the benefits of AI can outweigh risks when institutional protocols and technical considerations are appropriately implemented to safeguard or remove the individually identifiable components of medical imaging data.
[English]
Technology advancements are occurring so quickly that they are outpacing current radiology procedures. We need to establish regulations pertaining to data collection and ownership to ensure that we are safeguarding patients and not infringing on ethical or privacy guidelines.
The CAR is advocating for the federal government to take a leadership role in the implementation of an ethical and legal framework for AI in Canada. Although health care is a provincial responsibility, AI is a global issue, and we feel the federal government is well positioned to lead the provinces in regulating the implementation of such a framework. A similar example is the federal government's leadership on the national medical imaging equipment fund in the early 2000s.
The CAR can help. The AI working group, under the CAR board's leadership, has published two white papers on AI. The first, published in 2018, gave a general overview of machine learning and its implementation in radiology. The second, published in May 2019, focused on ethical and legal issues related to AI in radiology.
We have provided copies of the white papers, with our recommendations, for each of you. For the purpose of the discussion, I would like to highlight the more prevalent ones as they relate to the federal government's role in this capacity.
The first is the implementation of a public awareness campaign regarding consent, patient sharing of anonymized health data, and harm reduction strategies. This information is essential for helping future AI applications to identify disease and guide treatment.
Second is the general adoption of broad consent by default, with the right to opt out.
Third is developing a system for ensuring data security and anonymization of radiology data for secondary use, and implementing system standards to ensure that this criterion is being met.
Fourth, train radiology data custodians and establish clear guidelines for their role in the implementation of data sharing agreements for common AI-related scenarios and third parties.
The CAR has to work with the federal government and provincial ministries of health, as well as the Canadian Medical Protective Association, or CMPA, to develop guidelines for the appropriate deployment of AI assistive tools in hospitals and clinics, while looking at minimizing harm and liability for malpractice in errors involving AI. We need to educate radiologists and other health care professionals on the limitations of AI and reiterate that the tool supplements the work of radiologists rather than replacing them.
AI is not going away. Sharing medical data is a complex issue that balances individual privacy rights against collective societal benefits. Given the potential of AI to help improve patient care and medical outcomes, I believe we will start to see a paradigm shift away from a patient's right to near-absolute data privacy and toward the sharing of anonymized data for the good of society.
[Translation]
We need to work together to implement a framework to ensure that we can move forward with this technology, while respecting the patient's anonymity and privacy. AI in healthcare is going to happen sooner or later; let's make sure it is implemented in an ethical way.
[English]
Thank you for your time. I'm happy to answer your questions in either French or English.
:
Thank you, Mr. Chair and members of the committee.
It is a privilege to be here today to present the industry's perspective on behalf of the Information Technology Association of Canada. ITAC is the national voice of the telecommunications and Internet technology industry. We have more than 300 members, including more than 200 small and medium-sized businesses.
[English]
As already noted by the other speakers, there's a lot of promise and opportunity behind artificial intelligence to support economic growth and societal improvements, and the opportunities are seemingly boundless. From human mobility through automated vehicles to precision health care, many of our forthcoming solutions will be powered by artificial intelligence.
To realize the full benefits of artificial intelligence, we'll need to create systems that people trust. I've provided a brief outline of the slides that I'll present here today, including our industry's obligations and where our industry is already going; a call on our government to lead in terms of developing an ongoing dialogue via public-private partnership; the types of impacts that this will have on our workforce and the need for re-skilling, upskilling and training; and the recommendations in order to build trust in artificial intelligence.
Canada has been recognized as a global leader in artificial intelligence research and development. We are attracting global talent to universities across Canada to study in this field. We're already experiencing the benefits of AI in a number of fields, from start-ups and SMEs to larger global tech companies, all of which have developed AI systems to help solve businesses' or some of society's most pressing problems. Many others are using AI to improve supply chain efficiencies, to advance public services and to advance groundbreaking research. By leveraging large datasets, increased computing power and ingenuity, AI-driven solutions can address any number of societal or business problems, from precision or predictive health care to automated and connected vehicles improving human mobility and decreasing traffic, with a substantial impact on our environment.
AI systems need to leverage vast amounts of data. The availability of robust and representative data, often de-identified or anonymized, is required for building and improving AI and machine learning systems. We cannot stress this enough: having access to broad and vast amounts of data is the key to advancing our artificial intelligence capabilities in Canada.
That said, the AI ecosystem is global. It's very competitive and it's multi-faceted. Our association welcomes a multi-stakeholder engagement approach to artificial intelligence, one that encourages Canada to bolster global engagement on AI policy to ensure we are all prospering from the potential benefits for our societies.
I'll note six key factors for the committee to consider.
First, traditional industries are already seizing and leading in AI opportunities. From oil and gas to mining, forestry and agriculture, they are embracing this technology to drive efficiencies and compete on a global scale. They are developing new services and new products based on the information being analyzed and leveraging artificial intelligence.
Second, AI is a journey. This isn't going to be an end state. This is going to be something that continues to evolve over the forthcoming decades.
Third, central to any economy's digital transformation is cultural transformation, and misinformation in this space will kill consumer and citizen trust in new technology and artificial intelligence.
Fourth, there will be workforce disruption, but based on historical factors, we believe new technologies, including AI, will create more job opportunities than they will eliminate.
Fifth, we need partnerships for workforce development, including the re-skilling and upskilling of existing workers who may face disruption based on their current roles.
Sixth, next-generation policies are needed. These are next-generation technologies. It's time for us to start thinking outside the box.
When I first joined government in 1999, one of the first jobs I had was working to support the development of PIPEDA. I was also one of the lead architects of Canada's anti-spam legislation. I did my master's thesis on why SMEs struggle to comply with CASL and PIPEDA, so I've been working on this for the better part of the last 17 or 18 years. Interestingly, we never foresaw the impact that data would have on the legislative frameworks we have today. We couldn't foresee, when developing PIPEDA or CASL, the types of data-driven businesses that have come our way to date.
Next, I want to talk about industry's obligation to promote responsible development and use of artificial intelligence.
First, we recognize our responsibility to integrate principles and values into the design of AI technologies, beyond compliance with existing laws. While the potential benefits to people in society are amazing, AI researchers, subject-matter experts and stakeholders should continue to spend a great deal of time working to ensure the responsible design and deployment of AI systems, including addressing safety and controllability mechanisms, the use of robust and representative data, enabling greater interpretability and recognizing that solutions must be tailored to the unique risks presented by the specific context in which a particular system operates.
Second, in terms of safety, security, controllability and reliability, we believe technologists have a responsibility to ensure the safe design of AI systems. Autonomous AI agents must treat the safety of users and third parties as a paramount concern, and AI technology should strive to reduce risks to humans. Furthermore, the development of autonomous AI systems must have safeguards to ensure controllability of the AI systems by humans, tailored to the specific context in which a particular system operates.
Third is robust and representative data, with a specific focus on mitigating bias. To promote the responsible use of data and to assure its integrity at every stage, industry has a responsibility to understand the parameters and characteristics of the data, to demonstrate the recognition of potentially harmful bias and to test for potential bias before and throughout the deployment of AI systems.
AI systems need to leverage large datasets. The availability of robust and representative data for building and improving AI and machine learning systems is of utmost importance.
By the way, this could be a significant competitive advantage for Canada. We have a globally representative population, including indigenous communities. It would be a wonderful target for medical testing and AI testing in the medical field.
In terms of interpretability, we should leverage public-private partnerships to find ways to better mitigate bias, inequity and other potential harms in automated decision-making systems. Our approach to finding such solutions should be tailored to the unique risks presented by the specific context in which a particular system operates.
Finally, the use of AI to make autonomous consequential decisions about people informed by, but often replacing, decisions made by humans has led to concerns about liability. Acknowledging existing legal and regulatory frameworks, our industry is committed to partnering with relevant stakeholders to form a reasonable accountability framework for all entities in the context of automated systems.
We believe we should leverage and build a public-private partnership that can expedite AI R and D, democratize access to data, prioritize diversity and inclusion and prepare our workforce for the jobs of the future. ITAC members also believe that we need to prioritize an effective and balanced liability regime via the continued engagement of multi-stakeholder expert groups. The right solution is only going to come from an open exchange with all actors in the AI supply chain.
If the value favours only certain incumbent entities, there's a risk of exacerbating existing wage, income and wealth gaps. In this scenario, this isn't “us versus them” or “private versus public”. It's just “us”. There should be increased partnership to explore how to develop a safer, more secure and trusted data-driven digital economy.
There is a concern that AI will result in job change, job loss and worker displacement. While these concerns may be understandable, it should be noted that most emerging AI technologies are designed to perform a specific task or to assist and augment a human's capacity rather than to replace a human employee. This type of augmented intelligence means that a portion—most likely not all—of an employee's job could be replaced or made easier by AI.
Leveraging AI to complete an employee's menial tasks is a way to increase their productivity by freeing up time to engage in customer service and interaction or more value-added job functions. Nevertheless, while the full impact of AI on jobs is not yet fully known in terms of both jobs created and jobs displaced, an ability to adapt to rapid technological change is critical. We should leverage traditional human-centred resources as well as career educational models, and newly developed AI technologies should assist in developing both the existing workforce and the future workforce to help Canadians navigate through career transitions.
:
I'm happy to answer this.
It's a very important question, how we both identify bias as it's picked up by algorithms and then mitigate it once we know that it's there. I think the simplest way to explain the problem is that we live in a biased world and we're training algorithms and AI with data about the world, so it's inevitable that they pick up these biases and can end up replicating them or even creating new biases that we're not aware of.
We tend to think of bias in terms of protected attributes—things such as ethnicity, gender or religion, things that are historically protected for very good reasons. What's interesting about AI is that it can create entirely new sets of biases that don't map onto those characteristics or even characteristics that are humanly interpretable or humanly comprehensible. Detecting those sorts of biases in particular is very difficult and requires looking essentially at the set of decisions or outputs of an algorithmic system to try to identify when there is disparate impact upon particular groups, even if they are not legally protected groups.
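As a rough illustration of the kind of output-level check described here, the following is a minimal sketch in Python. The group labels, the toy decisions and the 0.8 threshold are illustrative assumptions, not a legal or technical standard.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, favourable_outcome) pairs."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += int(outcome)
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose favourable-outcome rate falls below `threshold`
    times the rate of the best-treated group."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

# Toy set of decisions produced by some system: (group, favourable outcome?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False), ("B", False)]
print(disparate_impact_flags(sample))  # {'B': 0.375}
```

The same kind of comparison can be run over groups that are not legally protected, which is the point made above: the check works on the outputs, whatever the grouping.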
Besides that, there is quite a bit of research, and methods are being developed to detect gaps in the representativeness of data and also to detect proxies for protected attributes that may or may not be known in the training phase. For example, postal code is a very strong proxy for ethnicity in some cases. The challenge is discovering more proxies like that.
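In the same spirit, here is a minimal sketch of one way such a proxy might be screened for: check how much better a candidate feature predicts a protected attribute than simply guessing the most common value. The field names and toy records are assumptions for illustration.

```python
from collections import Counter, defaultdict

def proxy_strength(records, feature, protected):
    """Return (baseline accuracy, accuracy when predicting the protected
    attribute from the candidate feature's per-value majority)."""
    labels = [r[protected] for r in records]
    baseline = Counter(labels).most_common(1)[0][1] / len(labels)

    by_value = defaultdict(list)
    for r in records:
        by_value[r[feature]].append(r[protected])
    # Predict the majority protected value within each feature value.
    correct = sum(Counter(vals).most_common(1)[0][1] for vals in by_value.values())
    return baseline, correct / len(records)

# Toy records: if knowing the postal code lets you guess ethnicity much better
# than the baseline, the feature is acting as a proxy.
records = [
    {"postal_code": "X1A", "ethnicity": "group1"},
    {"postal_code": "X1A", "ethnicity": "group1"},
    {"postal_code": "Y2B", "ethnicity": "group2"},
    {"postal_code": "Y2B", "ethnicity": "group2"},
    {"postal_code": "Y2B", "ethnicity": "group1"},
]
print(proxy_strength(records, "postal_code", "ethnicity"))  # (0.6, 0.8)
```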
Again, there are many types of testing—automated methods, auditing methods—whereby essentially you are doing some sort of analysis of the training data, of the algorithm while it is processing, and of the sets of decisions it produces.
There is, then, no simple answer to how you do it, but there are methods available at all stages.
:
I tend to say that there is no single silver bullet for appropriate governance of these systems, so risk assessments can be a very good starting point.
They're very good in the sense of catching problems in the pre-deployment or procurement stage. Their shortcoming is that they're only as good as the people or the organizations that complete them, so they do require a certain level of expertise and, potentially, training—essentially, people who are aware of potential ethical issues and can flag them up while actually going through the questionnaire.
We've seen that with other sorts of impact assessments such as privacy impact assessments, environmental impact assessments and, now, data protection impact assessments in Europe. There really has to be a renewed focus on the training or the expertise of the people who will be filling those out.
They are useful in a pre-deployment sense, but as I was suggesting before with biases, problems can emerge after a system has been designed. We can test a system in the design phase and during the training phase and say that it seems to be fair, it seems to be non-discriminatory and it seems to be unbiased, but that doesn't mean that problems won't then emerge when the system is essentially used in the wild.
Any sort of impact assessment approach has to be complemented as well by in-process monitoring and post-processing assessment of the decisions that were made, and very clear auditing standards in terms of what information needs to be retained and what sorts of tests need to be carried out after the fact, again, to check for things like bias.
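As a minimal sketch of what that in-process and post-processing monitoring could look like, under illustrative assumptions, each decision is retained in a simple audit log and per-group outcome rates are periodically recomputed. The file name, record fields and 0.8 threshold are assumptions rather than any prescribed auditing standard.

```python
import csv
from collections import defaultdict
from datetime import datetime, timezone

AUDIT_LOG = "decisions_audit.csv"  # hypothetical file name

def record_decision(group, favourable, path=AUDIT_LOG):
    """Retain one automated decision in the audit log."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), group, int(favourable)]
        )

def periodic_check(path=AUDIT_LOG, threshold=0.8):
    """Recompute per-group favourable-outcome rates from the retained log and
    return the groups falling below `threshold` of the best-treated group."""
    totals, favourable = defaultdict(int), defaultdict(int)
    with open(path, newline="") as f:
        for _timestamp, group, outcome in csv.reader(f):
            totals[group] += 1
            favourable[group] += int(outcome)
    rates = {g: favourable[g] / totals[g] for g in totals}
    best = max(rates.values())
    return [g for g, rate in rates.items() if rate / best < threshold]

# Example use: record decisions as the system runs, then re-check periodically.
record_decision("A", True)
record_decision("B", False)
```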