Thank you very much for the invitation and the opportunity.
There's been a proliferation of hate speech online, propaganda, radicalism and obscenity. In 2016, Cision documented a 600% increase in the amount of hate speech in social media postings between November 2015 and November 2016. In 2019, Léger Marketing indicated that 60% of Canadians report having seen hate speech on social media.
These statistics should not come as a surprise to anyone. When the federal government repealed section 13 of the Canadian Human Rights Act in 2013, we lost the capacity to protect against this. For the past six years Canadian citizens have had little ability to protect themselves against online hate speech and discrimination.
The fundamental problem is that the Criminal Code provisions are often ineffective: prosecutions are few, and proving intent to promote hatred against a group beyond a reasonable doubt is an almost impossible standard to meet. The 2008 Saskatchewan Provincial Court decision in R. v. Ahenakew demonstrates that clearly.
In the case of Saskatchewan Human Rights Commission v. Whatcott, the Supreme Court of Canada, in a unanimous decision, stated that an effective way to curb hate speech is not within the Criminal Code, but in a civil process through human rights commissions. The commission argued that the Criminal Code provisions regulate only the most extreme forms of hate speech, advocating genocide or inciting a breach of the peace. The Supreme Court specifically and narrowly defined hate speech to ensure that human rights legislation does not unreasonably infringe on freedom of expression. This is the most important contribution the Saskatchewan Human Rights Commission has made to Canadian jurisprudence. I put forward the idea that this case provides a blueprint for the work of this committee.
Justice Rothstein made the following salient points for the court.
The court described nine indicia of hate in paragraph 44, which are clear, concise and unambiguous. Freedom of expression is not a shield for hate speech. The courts have consistently used the hate speech definition from the Supreme Court of Canada's 1990 Taylor decision. This analysis excludes expression that is merely offensive, very hurtful or obnoxious.
Expression that debates the merits of reducing the rights of vulnerable Canadian citizens is not prohibited. What is restricted is expression that exposes members of those groups to hatred. Ideas are not the target; the mode of expressing the idea is the target.
Ironically, hate speech arises in public debates and can be very restrictive and exclusionary. Legitimate debate in our democracy that is expressed in a civil manner encourages the exchange of opposing views. Hate speech is antithetical to that objective. It shuts down dialogue by making it difficult or impossible for members of a vulnerable group to respond, thereby stifling discourse. Hate speech that shuts down public debate cannot dodge prohibition on the basis that it promotes debate.
Preventative measures in human rights legislation reasonably centre on the effects rather than the intent of the hatemonger. The evil of hate propaganda is beyond doubt. Hate expression causes real harm to real people. Hate speech demeans, denigrates and dehumanizes the citizens it targets. Through hate speech individuals are told they are entitled to less than other Canadians because of the characteristics they possess.
With the advent of instant unfettered electronic communication, the opportunity for dissemination is nearly unlimited and largely uncontrolled. A realistic view of modern society must inform free speech, discourse, and the limits thereon.
The Whatcott judgment was rendered in February 2013. Later that same year, section 13 of the Canadian Human Rights Act was officially repealed. The repeal was based on the argument that the section unduly fettered free speech, yet opponents of the section provided only anecdotal examples to justify their position. There is no empirical evidence that human rights legislation unduly fetters legal speech. Contrary to the arguments of free speech advocates, Canada has no democratic tradition of unbridled free speech. Freedom of speech in Canada has always been freedom governed by limits recognized in law.
Principles of freedom of speech were originally derived from common law and are reflected in the Constitution Act, 1867. Freedom of speech was expressly declared in the Canadian Bill of Rights, 1960. A Canadian citizen's right to freedom of expression was not given express constitutional protection until the enactment of the charter in 1982.
Despite the charter protection of freedom of expression, there are numerous limits to free expression that are justifiable in a free and democratic society. Reasonable limits to expression protect against greater harms that flow from unfettered speech.
Some of those limitations include defamation, libel, slander, perjury, child pornography, court-ordered publication bans, limits on tobacco, alcohol and drug advertising, insider trading, fraud in the business sector, copyright, trademarks, and hate speech. There are literally hundreds of legally justified limitations on freedom of expression in Canada.
However, let's remain focused on hate speech. Here are the recommendations of the Saskatchewan Human Rights Commission to this committee:
First, the Saskatchewan Human Rights Commission supports the reintroduction of prohibitions against hateful expression in the Canadian Human Rights Act, including hateful expression communicated by telecommunications and over the Internet.
The provision could be more effective if the Canadian Human Rights Commission were permitted to commence a complaint on its own initiative on behalf of an affected group, similar to a class-action model. The Saskatchewan Human Rights Commission has that ability. Proceeds of a successful complaint could be paid to a community organization that supports the targeted group and/or fights against hate speech.
We must enact meaningful legislation that allows human rights commissions to do their job effectively and to hold those who spread online hate responsible for their actions.
Second, create legislation that holds companies financially accountable for hosting, spreading or creating content that foments online hate. Germany passed the “Facebook act”, which requires social media networks with more than two million users to take down hateful content within 24 hours or face a very significant financial penalty.
In the United Kingdom, the “Online Harms White Paper” has proposed establishing an independent regulator that would write a code of practice for social networks and Internet companies and have the ability to fine companies that don't enforce those rules. In Canada, we must follow suit.
Recently, giant tech companies such as Microsoft, Twitter, Facebook and Google came together to condemn online hate and agreed to a nine-point plan on how to curb hate. That is a very good thing. However, we cannot rely on commercial entities to determine what type of behaviour and content is acceptable. That would be a fundamental abdication of the legislative responsibility of Parliament. Instead, we need to develop a “made in Canada for Canada” plan, a plan created by governments after thorough consultations with industry stakeholders, a plan that publicly sets out rules, that monitors platform compliance and that penalizes when necessary.
Third, Canadian agencies must be given the means and mandate to monitor and investigate online hate, extremism and radicalized influences. In a time when hate and misinformation spread like wildfire online, data collection and intelligence gathering are paramount. That is why part of a “made in Canada for Canada” plan should include a partnership between federal security agencies, social media companies and Internet providers. We have arrived at a moment in our history in which words and well-intentioned platitudes no longer suffice.
The digital revolution, which has transformed society for both good and ill, has begun to disrupt our democracy. Individuals and groups, foreign and domestic, are using online misinformation, hate and extremist recruitment to erode democratic discourse and to drive a wedge between Canadian citizens.
We cannot let that happen. We need to take action. Our leaders must have the authority and the moral courage to do what is right. They must choose unity over division, understanding over ignorance, and respect over hate. They must make decisions that work towards the greater good, that respect the rule of law, reflect the charter, and in turn, make the difficult decisions that protect what it means to be a Canadian citizen.
This starts by enacting meaningful legislation that will allow governments, human rights commissions, industry, regulatory agencies and the public to effectively combat online hate and misinformation. That's where it starts, but that's not where it ends.
Fourth, we must also invest in education so that youth of tomorrow no longer—
My remarks will be offered in both official languages.
Thank you for inviting the Canadian Human Rights Commission to participate in this discussion today on online hate. I am joined by my colleague Monette Maillet, Deputy Executive Director and Senior General Counsel.
The proliferation of online hate is a clear and present danger. In recent years it has become painfully clear that allowing online hate to fester can result in horrific consequences. We are therefore encouraged that the justice committee is conducting this important study. We are pleased to see that you are hearing from several witnesses representing the people and communities most often targeted by hate.
Hate speech, and particularly online hate, is both an urgent public safety issue and a fundamental human rights issue. Hate speech violates a person's most basic human rights and freedoms: the right to equality and the right to live free from discrimination.
I will focus my remarks on three key points. First, online hate causes harm. Second, there is a gap in the law when it comes to protecting people from online hate. Third, a comprehensive strategy is needed.
The Internet has given everyone the power to have their own platform and to be a broadcaster. People can be louder than ever before and influence more people than ever before. In many ways, this is a major step forward. However, the Internet has made it possible to amplify and spread hate speech.
Far too often, people are victimized by online hate because of their race, religion, gender, sexual orientation or where they're from. Online hate has been found to cause fear and serious psychological harm. It shuts down debate and it promotes conflict, division and social tension. At its most serious, online hate incites violence, and too often, far too often, leads to tragic situations.
If Canadians targeted by online hate are expected to live their lives in a toxic atmosphere, we're basically failing them. Canada has a responsibility under international and domestic laws to promote equality and to protect all Canadians from discrimination.
This brings me to my second point. There is a gap in the law when it comes to protecting people from online hate. Our experience with the now-repealed section 13 of the Canadian Human Rights Act has given the commission an informed perspective on addressing online hate in Canada.
As many of you may know, section 13 was originally written into the CHRA in the 1970s to prevent harm from prohibited hate messages, in response to anti-Semitic messages being communicated by telephone. Following the attacks of September 11, section 13 was broadened to include messages communicated over the Internet. For many years, it was effective in shutting down a number of extreme neo-Nazi websites. However, this approach is not well suited to responding to today's rapidly evolving technology. As you know, section 13 was deemed to be a constitutionally sound provision.
As well, the Supreme Court of Canada has confirmed that some limits to free speech are justifiable in a free and democratic society. We have noted that previous witnesses have spoken of the need for a definition of “hate”. To this end, we encourage this committee to look at the definitions put forward by the Supreme Court of Canada, as well as the hallmarks of hate developed by the Canadian Human Rights Tribunal.
In the discussion around freedom of expression and hate speech, we must not forget the fundamental right to equality and to be free from discrimination. There is no hierarchy of rights, and rights sometimes compete. The commission believes there needs to be an appropriate balance. That is going to require meaningful participation and accountability of all involved parties.
What we can say for certain is that something must be done quickly to address the proliferation of online hate. It threatens public safety, violates human rights and undermines democracy. As other witnesses have said, addressing online hate will require a proactive approach that involves tracking, intervention and prevention.
This brings me to my third point. A comprehensive strategy is needed. It will take a concerted and coordinated long-term effort that is proactive, multipronged and multi-faceted. It will take innovative thinking, technical expertise, proper resourcing, coordination and co-operation.
The strategy will need to bring together all levels of government, telecommunication and Internet providers, social media platforms, civil society, academia and, most importantly, victims of hate.
These efforts must be led by the government. The government has a duty to meet its domestic and international human rights obligations. This includes protecting citizens from hateful speech.
In conclusion, the Canadian Human Rights Commission is committed to fighting against hate and to participating in a broader, coordinated solution.
In response to evidence heard by the committee, the CHRC finds that a simple amendment to the Canadian Human Rights Act to include provisions similar to the former section 13 would be insufficient. In this modern era, that legal change alone could provide neither the scope nor the level of protection and remedies necessary to prevent online harassment or to effectively reduce hate propaganda.
If the committee or the government explores possible amendments to the Canadian Human Rights Act or to other legislation as part of a broader response to hate propaganda issues, the CHRC would be happy to contribute its expertise.
In the coming days, the CHRC will submit a number of documents, including a summary report of a recent jointly organized event to discuss online hate.
Thank you. My colleague Monette Maillet and I would be pleased to answer your questions.
Thank you very much for inviting me to address the committee today. I'd like to speak to you about the work the department is undertaking related to racism and religious discrimination.
Evidence is clear that racism and discrimination continue to exist in Canada. Addressing them is part of the federal government's responsibility to sustain a society that values all its members and treats them with dignity and respect.
One way that is achieved is through Canada's multiculturalism policy, which was designed to create a climate in which the multicultural heritage of each of us is valued and to contribute to building a society where all can participate in the economic, social, cultural and political life of Canada.
The multiculturalism program works toward these objectives by focusing its efforts on building an integrated and socially cohesive society; improving the responsiveness of federal institutions to the needs of a diverse population; and engaging in discussions on multiculturalism, inclusion and diversity at the domestic and international levels.
There are four key activities that the multiculturalism program undertakes. First is grants and contributions via the community support, multiculturalism and anti-racism initiatives program. Second is public outreach and promotion through public events and key outreach initiatives such as Asian Heritage Month and Black History Month. Third is support for federal and public institutions to help them meet their obligations under the Canadian Multiculturalism Act. Fourth is international engagement, through providing support for Canada's membership in the International Holocaust Remembrance Alliance and ensuring Canada meets its obligations as a signatory to the International Convention on the Elimination of All Forms of Racial Discrimination.
In budget 2018, new funding in the amount of $23 million over two years was allocated to the program: $21 million to support events and projects that target racism and discrimination with a particular focus on indigenous peoples and racialized women and girls, and $2 million to support cross-country consultations on a new national anti-racism and anti-discrimination approach.
Budget 2018 also provided $9 million over three years to the Department of Canadian Heritage and $10 million over five years to the Public Health Agency of Canada to address the challenges faced by Black Canadians.
In 2018, the Minister of Canadian Heritage and Multiculturalism was asked to develop a new federal anti-racism approach to combat racism and discrimination. In support of this mandate, we carried out engagement sessions from October 2018 to March 2019 to gather input from Canadians, including experts, faith and community leaders, and those with lived experience of racism and discrimination.
In total, 22 in-person sessions were held, involving over 600 participants from some 443 organizations. Over 1,000 online submissions were received.
A further $45 million over three years was allocated in budget 2019 for the multiculturalism program to develop and implement a federal anti-racism strategy. In the budget announcement, the strategy was described as finding ways to counter racism in its various forms, with a strong focus on community-based projects. The announcement also highlighted an anti-racism secretariat that would work across government to identify opportunities, coordinate activities and engage with Canada's diverse communities.
Increasingly intolerant and racist language—hate speech—is available online. It isn't just flourishing in private conversations on social media platforms such as Facebook or Instagram. It's also on the rise on more public sites such as YouTube, and in comments sections, web forums and blogs.
Participants in our engagement sessions told us that online hate is an underlying factor that contributes to or causes racism. It is a serious phenomenon that exists in many forms and significantly impacts young people. People told us that social media can play a significant role both in spreading hate and also in combatting it.
Canadian Heritage plays a vital role in the cultural, civic and economic life of all Canadians. We'll continue to use the levers available to us to work towards addressing hate online, together with our federal partners and with communities.
Good morning, Mr. Chair, members of the committee and ladies and gentlemen.
Thank you for inviting me here to speak with you today.
I am Superintendent Kim Taplin, and as you know, I am the officer in charge of National Crime Prevention and Indigenous Policing Services.
The RCMP takes hate-motivated crimes and incidents very seriously and is committed to continuing to provide services that are focused on the safety of our communities.
Canadians are increasingly active online, with some using multiple communication devices and a wide variety of tools, such as instant messaging and various social media applications. These tools provide enormous benefits for Canadian society, but they also present unintended opportunities to spread hatred.
A hate-motivated crime, whether committed online or not, is any criminal offence motivated by the offender's hate, bias or prejudice towards a group or individual based on colour, race, religion, national or ethnic origin, age, sex, sexual orientation, gender identity or expression, or mental or physical disability. This would include a physical attack against someone because of their disability or sexual orientation, or hate-motivated vandalism, such as hate graffiti at a religious institution.
A hate or bias incident may be motivated by the same factors as those of a hate-motivated crime, but does not reach the threshold of being a criminal offence. Such incidents may include name-calling or racial insults.
As you've heard here today, if not addressed, hate-motivated crimes and incidents can be a warning sign of, and even a catalyst for, more serious violence in communities. They also have negative impacts on communities' well-being and safety.
The RCMP proactively works with communities to identify, prioritize and solve problems. This collaborative approach is based on the philosophy that prevention is a core responsibility of policing, where decisions are evidence-based and responses should be community-led, police-supported, sustainable and flexible.
The RCMP has several consultative committees through which communities' interests become reflected in our work, such as the commissioner’s advisory committee on visible minorities, the commissioner's national indigenous advisory committee and the national youth advisory committee. The RCMP also participates in external committees, such as Public Safety Canada's cross-cultural round table on security, and Canada's anti-racism strategy, led by the Department of Canadian Heritage.
Statistics Canada estimates that two out of three victims of hate-motivated crime do not report to police. The RCMP is focused on increasing the reporting by building trust with community members. The RCMP also has a national operational policy to assist investigators dealing with hate-motivated crimes and is committed to monitoring threats to public safety. This includes intelligence gathering and ongoing assessment, in collaboration with law enforcement partners, to determine the severity of the threat level posed by any particular actor or group.
To properly investigate incidents of online hate, law enforcement must be able to work as effectively in the digital world as in the physical one. Rapid technological advancements continue to add to the complexity of police investigations, including those involving online hate.
It is important to note that investigating hate-motivated crimes falls under the mandate of the local police of jurisdiction. Furthermore, the RCMP has deemed it a priority to recruit qualified applicants from a wide range of backgrounds to better reflect the diverse population of Canada. The RCMP also ensures that all employment policies, practices and standards are fully inclusive and provide all Canadians with equal and fair opportunities within the spirit of employment equity policies and legislation.
In support of our collective effort to counter hate-motivated crimes and incidents, I encourage all communities to become educated on, and speak out against, hate; to enhance situational awareness of related issues in their communities; to practise emergency procedures; to be vigilant; and to contribute to community resilience. The RCMP has been part of these efforts in many communities across Canada, and will continue to reach out with professionalism and compassion to enhance trust with the communities we serve.
I would be happy to respond to any of your questions.
First of all, we have to assume that the material is clearly hateful and extreme and that it causes harm. Once we have that set of facts before us, how do we then deal with it?
It's our opinion that we need a multipronged approach, that a provision in the Canadian Human Rights Act cannot stand alone. Clearly, we need agencies, regulatory agencies, police, social media platforms, Internet service providers, and so on, to play a role.
The question is whether you become reactive, so that after something happens, a complaint gets filed or a charge is laid. Section 13 of the Canadian Human Rights Act was very effective at shutting down websites. There could be some amendments around jurisdiction, perhaps providing the commission with a way to deal with things more quickly, but the issue with a complaint-based system is that it takes time.
If we are limiting freedom of expression, we have to ensure that it's very narrowly limited. The issue becomes what happens to social media. Websites, you can shut down, and you can fine Internet service providers, but if we were to open the Canadian Human Rights Act to complaints based on Twitter, YouTube and Facebook, I can't imagine that we would be resourced to do any other work. That is something that the committee should consider.
However, consider a proactive compliance model in which you have standards. I'm sure the committee has heard of the examples in Europe where that has happened: Internet service providers, Facebook, YouTube and Twitter are held accountable for letting hate fester online, where it can cause harm and lead to violence.
To the witnesses, thank you so much for being here and providing your testimony today.
My first question is for Ms. Inman from the Department of Canadian Heritage.
I heard you speak about the anti-racism work, which is certainly commendable and incredibly important in the prevention of people getting to a point where they're spreading hateful messages or sharing those things. However, I wonder if there is anything in particular where you're dealing with online hate or talking about an education program for Canadians.
That's something we've heard pretty consistently from people who have testified here, that we need, now, a full-blown, almost immediate ability to educate Canadians, not just K to 12, which is fantastic, but all Canadians on what constitutes hate speech, what to do if they see hate speech, and to really have that happen in a very quick manner, because Canadians are struggling. I hear all the struggles that you're having in trying to address this and it's moving very quickly.
Have you been directed to start such a program, or are you looking at that behind the scenes? It would be helpful to know.
I'm afraid I'd have to speculate on why it's not being used. It may be, as was mentioned, that a lack of resources is an issue.
It has been mentioned that section 320.1.... Maybe I'll just set out the parameters of it. Section 320.1 is a specific provision in the Criminal Code that was created by the Anti-terrorism Act back in 2001. It allows a judge to order the deletion of hate propaganda that's made publicly available on a computer system that is within the jurisdiction of the court. There are safeguards built into that particular procedure, whereby the person who has put the material on the computer, for example, can come before the court and argue as to why it should not be deleted.
To my knowledge, I'm not aware that this provision has ever been used. I can't speculate, really, as to why, other than maybe a lack of resources or perhaps the need for more education. It also has been mentioned on occasion in these hearings, I believe, that for this provision there's a requirement to obtain the consent of the appropriate attorney general as well.
There has been discussion here about hate speech and the hate speech provisions in the Criminal Code. I thought I would just mention that what's probably most relevant in this context are the hate propaganda provisions in the Criminal Code. There are three of them: advocating or promoting genocide against an identifiable group; inciting hatred against an identifiable group in a public place where it is likely to lead to a breach of the peace; and wilful promotion of hatred against an identifiable group.
It has been mentioned that intention is needed for the hate speech crimes. In fact, intention is needed for two of the three: advocating or promoting genocide, and the wilful promotion of hatred. The offence of inciting hatred in a public place that is likely to lead to a breach of the peace has a lesser mens rea component, probably recklessness, because of the imminent danger to the public peace.
Thank you, Mr. Chair, and thank you to the committee for the invitation to appear before you today.
It is frankly disturbing that we live in a world where online hate is rising, where what Whitney Phillips has called “the oxygen of amplification” has elevated extremist views and where in several cases online hate speech has directly led to offline violence, so I very much welcome the committee's careful consideration of how Canada can address these troubling developments.
I've personally examined European and North American approaches to hate speech, extremism and disinformation. Today I will briefly outline, first, some of the approaches other democracies are taking, which I discuss at greater length in the brief I've submitted; second, how the German example in particular raises questions about reintroducing section 13; and finally, some measures that could be taken to address a broader category of harmful speech, which is a non-legal category, along with some of the broader questions that have been raised.
Let me first state the very sobering fact that hate speech is not a problem that can be solved. It will be a continual, evolving and ongoing threat. Still, levels of hate speech can ebb and flow. This depends upon the architecture of online ecosystems and the type of speech they promote, as well as on broader political, economic and cultural factors. These factors can facilitate more hate speech and hate-related crime, but they can also do the reverse.
First, this is an international problem, as I've mentioned. Democracies around the world are trying to find ways to address this issue. Let me name a couple of the examples that we can discuss in questions.
First, the U.K. has suggested an approach to regulate through a “duty of care” framework that requires social media companies to have a design that prevents online harms. France has suggested a regulation that would mandate transparency and “accountability by design” from the social media companies. Finally, Germany has taken a legal approach, creating a law that requires social media companies with more than two million unique users in Germany to address and enforce 22 statutes of speech law that already exist in Germany.
There's a range of things, from the legal to the co-regulatory to self-regulatory and codes of conduct.
In the case of what we're discussing today, the German Netzwerkdurchsetzungsgesetz, or NetzDG, is particularly instructive. Passed in 2017 and in force since 2018, this German mouthful translates literally as “network enforcement law”. It doesn't introduce new statutes of speech law; rather, it requires social media companies to enforce law that already exists and to act on complaints within 24 hours or face fines of up to 50 million euros per post.
Let me then talk about some of the considerations this raises. First, this was not about introducing new law but about enforcing existing law, and it has been a major problem in the German case to get Facebook and the other companies to comply. Second, it raises the question of how we get social media companies to actually comply with and enforce existing law. It also raises the question of scale. To give you a sense, YouTube and Twitter were each receiving more than 200,000 complaints in a six-month period, so there is a question of the scale of enforceability and of potential backlogs. There is also the question of whether things would be enforced nationally or globally. We've seen that most of what falls under the law is actually being taken down under a company's global terms of service.
The law also deals only with individual pieces of content, so it doesn't address the other ways in which hate can be propagated or funded through online ecosystems. Let me give a Canadian example here.
Very recently, a member of the Canadian far right tried to use the GoFundMe platform to raise money for an appeal against a libel suit he had lost for defaming a Muslim Canadian. Ontario Superior Court Justice Jane Ferguson called the man's words “hate speech at its worst”, but only after complaints from a journalist and members of the public did the GoFundMe platform actually take down this man's appeal for funds, even though it violated their terms of service. This is just one illustration of how this is broader than actual pieces of content.
Finally, let me talk about the way in which we might address a broader category of harmful speech, which is a non-legal category of speech but speech that may undermine free, full and fair democratic discourse online. I've written a report with Chris Tenove and Fenwick McKelvey, two fellow academics, about how we can address this problem of harmful speech without infringing on our democratic right to free expression. Let me give three suggestions.
First, we have suggested the creation of a social media council. This would mandate regular meetings of social media companies and civil society, particularly marginalized groups that are disproportionately affected by hate and harmful speech online. This social media council could be explicitly created through the framework of human rights. The idea is supported by, amongst others, the UN special rapporteur on freedom of expression and opinion. By linking to international human rights, this would also ensure that Canada doesn't inadvertently provide justifications for illiberal regimes to censor speech in ways that could deny basic human rights elsewhere around the world.
Second, we should seriously consider what kinds of transparency we might mandate from social media and online companies. There's so much that we don't know about the way the algorithms work and whether they promote bias in various ways. We should contemplate whether to, along the lines of algorithmic impact assessments, require audits and transparency from the companies to understand if their algorithms are themselves facilitating discrimination or promoting hate speech.
Third, we need to remember that civil society is an important part of this question. This is not something to solely be addressed by governments and platforms. Civil society plays a key role here. We often see that platforms only take down certain types of content after it has been flagged and raised by civil society organizations or journalists. We need to support those civil society organizations and journalists who are working on this, and also who are supporting those who are deeply affected by hate and harmful speech.
Finally, we also need to support the research that thinks through the sort of positive element of this, that is to say, how do we encourage more constructive engagement online?
As you can see from this short testimony, there's much to be done, on all sides.
Thank you for inviting me to be part of this conversation.
I want to begin by thanking you for inviting me to address the committee today. I'm sorry I can't be there in person, but I'm here virtually, in the capacity of director of the Institute of Islamic Studies at the University of Toronto where I am also professor of law and history.
At the Institute of Islamic Studies, I oversee a collaborative research project that we call the study of Islam and Muslims in Canada, or SIMiC for short. SIMiC is a collaborative project that partners with six Canadian universities and six community partner organizations. SIMiC blends research with a public responsibility for recalibrating the conversation on Islam and Muslims today.
I do not need to tell you that the existence of Islamophobia in our country is real and extremely concerning; you know this. I'm here because there are things we can do. Drawing on the work of SIMiC, I can identify three specific things you may want to consider as part of a whole-of-government approach, particularly as they relate to Canada's Muslim community as a target of online hate.
The first concerns a reliable data architecture that provides disaggregated data on those communities most targeted. One core feature of SIMiC is to identify gaps in Canadian data architecture to chart the demography of Canada's Muslim communities. Comprising a team of academic researchers, settlement agencies and community organizations, the big data group at the institute is interested in determining what sorts of measures might be put in place to gain a better understanding of who Canada's Muslim communities are as well as their values, their hopes and their aspirations in Canada for themselves and their families.
This summer, one of our research fellows will examine the extent to which existing datasets across the country, including raw datasets from StatsCan research data centres, can tell us something about Canadian Muslims in terms of gender, ethnic or racial category, educational achievement, employment status, income levels and so on. We plan to launch the report in September 2019, and I will share that report with this committee if it so desires.
One key issue concerns the fact that StatsCan asks about religious identity only decennially rather than quinquennially. This approach is fundamentally counterproductive given that the current state of online hate quite often targets groups based on their religious identity. If we are to combat hate that targets people because of their religion—and let's be clear that's exactly what is happening with regard to Muslim Canadians—then we cannot continue to embrace an outdated data architecture that leaves us blind to the terrain in which we must now do our work.
The big data group at SIMiC exists in part to illustrate exactly why we need to rethink data architecture policies at a national scale, starting with a religious identity question in StatsCan's quinquennial census.
My second suggestion for something you may want to consider comes from the work we are doing on global anti-terrorism programs. The institute is part of a consortium of universities around the world examining the extent to which government programs on countering violent extremism have a disproportionate effect on certain communities and, in doing so, ignore others that need to be part of any inquiry.
While we're at the beginning stages of this work, our research has turned up a glaring issue in Canada that may fall within the ambit of this committee. In 1989, Canada was a founding member of the Financial Action Task Force, or FATF, which at the time was charged with combatting money laundering as part of the war on drugs.
After 9/11, the FATF issued a new set of special recommendations to track and combat terrorism financing. FATF guidelines recommend that each state party adopt what it calls a risk-based assessment model, or RBA, to prioritize its targets and allocate its limited resources.
In 2015 Finance Canada issued a self-assessment to FATF. In that self-assessment, Finance Canada outlined Canada's RBA in relation to anti-terrorism financing. It identified 10 groups that posed the greatest threat of terrorism financing in Canada. Eight of them are Muslim-identified groups; one is Tamil and the other is Sikh. In other words, as far as the Government of Canada is concerned, 100% of terrorism financing risk comes from racialized groups and 80% comes from Muslim-identified groups. Nowhere in the 2015 document is there reference to white supremacist groups, white extremist groups and so on, despite the fact that such groups are no less prone to violence, as we have sadly seen.
What does this have to do with online hate? While you will no doubt hear many arguments about freedom of expression as you attempt to regulate online hate, you already have a mechanism in place to track the financial funding of such hate, namely, FATF special recommendation number 8, which identifies charities and other not-for-profit organizations as being vulnerable to terrorist financing.
The aim here in my suggestion is to go after those philanthropic organizations that fund the cacophony of hate. The U.S. is already ahead of the game on this. Think tanks and sociologists have issued reports identifying the principal funders of hate.
While any given instance of online hate is relatively cheap, my suggestion is that you revisit Canada's RBA to use existing financial monitoring regimes to turn off the spigot of funding across the board.
My third and final suggestion concerns not so much combatting online hate as promoting new storytelling opportunities to enhance and improve on gaps in Canada's cultural heritage. Alongside our big data group is a second group that is working to create an archive that documents the history of Muslims in Canada. It is an archive that will be created through collaboration among researchers at the university, community organizations and those individuals who hold records that capture this history.
Our environmental scan of Canada's major archival institutions shows that there is little if any representation of the various communities, in particular racialized minorities and Muslims, that constitute the fabric of our national mosaic. Whereas other jurisdictions, such as the U.K. and the U.S., have a growing culture of community archiving projects, this phenomenon is mostly unsupported by the government in Canada.
We are beginning to see some movement in this regard with respect to Canada's indigenous communities, thanks in part to the work of the TRC and new funding schemes allocated to preserving indigenous knowledge. The archive project we are creating is a joint project in which the University of Toronto will serve as a core institutional partner. We have the digitization technology to create an open-access digital archive. Robarts Library has a storage facility for any and all analog copies that we obtain. Thomas Fisher Rare Book Library will provide future researchers with a venue to access those hard copies.
By the end of the summer, the institute will publicly launch its acquisition policy in consultation with our community partners. Moreover, colleagues have expressed an interest in tying their course work to the archive whereby students can help us identify records while they also achieve course credit. Such archives not only foster education and community but also create opportunities for people to tell new stories about themselves and their communities in an academically rigorous way, with thick description. In short, our archive not only promises more speech, but it will deliver better speech.
While we have the infrastructure and overhead to make this possible, our greatest challenge, and the challenge to any such archival project, is to identify funding sources to support archival review processes which involve human capital. The Department of Canadian Heritage certainly offers some funding for such projects, but the envelopes are limited. Its mandate is not narrowly focused on groups targeted by hate. Moreover, many of its grants expressly disqualify university-affiliated projects like ours, despite the fact that universities are well positioned with infrastructure to carry out such projects.
It has been our experience that the Social Science and Humanities Research Council does not fund such projects, in part because they do not fall within prevailing views of what counts as formal research.
While we remain committed to this project, our environmental scan suggests that supporting digital archival projects in participation with targeted communities can create a counterbalance to the online hate that we see proliferating. Consequently, this committee may wish to recommend jump-starting the creation of participatory digital archives, with a specific focus on those minority groups subjected to online hate.
Thank you very much, and I welcome your questions.
Thank you very much for the invitation to speak today.
My name is Naseem Mithoowani. I am a lawyer practising in Toronto, Ontario.
As some of you may know, I am also one of the individuals who initiated human rights complaints in 2008 against Maclean's magazine for having published a feature article entitled “The future belongs to Islam”, authored by Mark Steyn. Maclean's, at that time, was our only national news magazine in an era when social media hadn't yet taken off. The article, therefore, garnered a fair amount of attention.
It described Muslims as being engaged in a nefarious plot to take over western democracy as we know it. It insinuated that all Muslims were guilty either by being directly involved in violence or by supporting the goal silently. Muslims living in the west were demonized as “hot for jihad” and as breeding like “mosquitoes” for the sole aim of supplanting the western populations where we lived but with whom we shared no allegiance. Muslims were portrayed as inherently violent and deceitful.
The Muslim community felt the harm of these words in their bones. This was a call to action for the west to wake up to the threat of Muslims living among them. It was, in essence, asking Canadians to view their Muslim neighbours with suspicion.
We also found 21 articles printed in the previous two years in Maclean's that contained the same anti-Muslim themes, referring to Muslims as “sheep-shaggers”, “global security threats”, “barbarians” and prone to frenzy. One memorable piece even suggested that the CBC comedy Little Mosque on the Prairie was part of a conspiracy to distract the watching public from the security threat that Muslims posed by instead promoting them and portraying them as good and friendly community members.
We found exactly zero counter articles or critical analysis in response.
We sought a meeting with Maclean's to propose that they consider running a counter piece to address the allegations made in Mr. Steyn's article. More and better speech, we reasoned, was a win for all parties involved. It was only when we were completely shut out by Maclean's that we filed human rights complaints. The very fact that we had done so, in and of itself and regardless of the outcome, was seen as proof of abuse, justifying the repeal of section 13 by the Conservative government at the time.
With the benefit of hindsight, I think there are very few people who would today believe that we had no reason to be alarmed over the content of the publication in question.
Those who peddle the rhetoric of a Muslim takeover don't care that the claim contains no truth. Suspicion and fear of Muslims sells. The idea that Muslims are actively trying to subvert western democracy is a warning that people heed, sometimes with horrific consequences. In fact, we now know that the claims of western demographic decline and supposedly astronomical birth rates of Muslims are a staple in the modern white nationalist movement, usually framed in terms of an invasion, cultural replacement or white genocide.
Indeed, shortly after our complaints were dismissed, the very article that we alleged was hateful was specifically quoted in the manifesto of a white supremacist, who then went on to kill 77 people in Norway in 2011. He justified his actions and his violence as a form of resistance against the inevitable Muslim takeover that Steyn and others were warning against.
This idea of a Muslim takeover of the west has also played prominently in the motivations of the killing of Muslims in Quebec and New Zealand.
Particularly after the deadly attack in New Zealand, even those who were most ardently in support of repealing section 13 following our complaints have paused to reconsider. Professor Richard Moon, for example, was a thought leader in the call for the repeal of section 13. He was commissioned by the government to write a report regarding section 13, and in that report he recommended repeal.
He has since had the opportunity to revisit the Maclean's complaints in a very recent blog. In it, Professor Moon expressly acknowledges that the outcry over our complaints was unwarranted. He states that in light of the rising tide of violence against Muslims, it is not surprising that Steyn's rhetoric has been cited by those who wish to cause Muslims harm.
The truth, then, of the Maclean's complaints, and the controversy surrounding them, is that the Muslim community attempted to use section 13 to call out, 12 years ago, the very same hateful propaganda of a mass Muslim conspiracy theory that we are seeing as influencing mass murder today. The unfortunate lesson that I take from my experience with the Maclean's case is that we, as a society, were not able to get ahead of the rhetoric at that time and call it out for what it is.
Section 13 does not unduly restrict freedom of expression. It creates a tool to identify and address speech whose harm far outweighs any potential benefit. This is in line with our societal values. In Canada, as opposed to other jurisdictions, we simply do not recognize an unlimited right of free expression. Rather, we recognize that legitimate restrictions may be placed on all rights and freedoms, including freedom of expression, in a free and democratic society. Hate propaganda is a harm that needs to be confronted, since it shuts down dialogue by making it difficult or impossible for members of vulnerable groups to respond, thereby stifling discourse.
Since the repeal of section 13, communities have been left open to attack. It is my first recommendation to this committee, therefore, that section 13 be reinstated.
However, my experience with the Maclean's complaint leads me to believe that section 13 alone is insufficient. Section 13 requires individuals to do the heavy lifting of making and carrying forward complaints. In addition to the financial and time commitment involved, those who initiate complaints are vulnerable to personalized targeting. When we made our complaints, for example, as law students just beginning our careers, we were called "legal jihadists", "terrorists", "sock puppets", and accused of using the tools of western democracy to dismantle it.
Instead of seeing Steyn's portrayal of Muslims in the west bent on subverting democracy as the dangerous trope that it is, our actions and complaints as Muslims were viewed through this very lens. We were accused of using democratic tools, including section 13, to subvert western values.
Confronting hate speech is in everyone's interest. That burden should not be placed on the shoulders of a few. It aligns with a better democratic system by ensuring that all voices are included. We cannot afford to download the entirety of the financial and emotional burden of standing up to hate onto vulnerable groups.
My second recommendation is therefore that the committee consider the creation of a body which could intake complaints and carry them forward. I want to be clear that such a body should not disallow individuals and communities from taking personal carriage of complaints where they elect to do so, but should be seen instead as an alternative and complementary channel.
I wish to conclude my remarks by stressing that reinstating section 13 is a vital first step towards combatting online hatred, but it is insufficient in and of itself. We need more tools and partnership amongst all industries, communities and civil society in order to address the problem effectively. The technology and reach of the Internet make Canada a far different place from when we initiated complaints against the printed Maclean's magazine in 2008. We need creative solutions in response, which should include but not be limited to the reintroduction of section 13.
I thank you for your attention, and I look forward to answering your questions in our next segment.
Mr. Chair, I think what we're trying to do is ascertain and come to some solutions on a very important issue. I think driving at the heart of the witness's substantive testimony is much more important than trying to ascribe whether an individual witness, in this context or any other, shares the opinions of anyone she may or may not have attended a rally with. Let's leave it at that.
I want to say thank you to all three of you for being here.
I want to say a specific welcome to Professor Emon, who is also a constituent and a member of the law faculty at my alma mater. I want to champion you and hold you up for the important work you have done on combatting Islamophobia, which has been a pressing matter, not just for the past two years in Parliament but going on for about two decades now, in the wake of 9/11.
Let's get to the substance of the matter, section 13. We've heard a lot about section 13. I have limited time, probably about five minutes and 20 seconds left right now.
Section 13 does not right now contain a definition of hate. It does not right now contain a threshold requirement. It also has a subsection (3), which exempts the service provider or the telecommunications network from any liability for the human rights violation.
Do you have any comment on those three provisions? Does it need a threshold of what constitutes an organized campaign? Does it need a definition of hatred? Should some sort of liability, in the human rights parlance, attribute to the Internet service provider or the telecommunications provider or the social media company as such?
That's open to all three of you.
Yes. I will make four points very quickly.
The first, in terms of liability, is the question of what USMCA will allow. There is a question mark over whether the CDA, Communications Decency Act, section 230, is embedded within the USMCA, which could potentially make it hard for Canada to deal with liability. That's still to be determined a bit, but I want to put that out there.
Second, we're now dealing with a different kind of Internet, one with both public and private spaces. In terms of private groups, for example, we would need to think about what it means when a message is forwarded to thousands of people. That's why I think it's very complicated to set a threshold.
The third point is that threshold is complicated because within the Internet, as people at Facebook and other social media companies will say, there are questions of volume versus intensity. If you reach 20 people, but those 20 people go and do something, do you weigh that against something reaching 100,000 people who don't really do anything with it? That's a very complicated question that I think needs to be left to case law.
Fourth, to re-emphasize what I said in my testimony, only a very, very narrow amount of hate speech is going to be dealt with through law. There are also broader categories of harmful speech. That's why I gave suggestions that were not necessarily specifically legal but rather whole-of-government approaches to try to deal with some of these issues.
Let me go back and think about a whole-of-government approach. On the one hand, I would simply endorse many of the comments that Ms. Mithoowani has already remarked upon regarding section 13.
I wanted to clarify my invocation of the Financial Action Task Force implicitly linking online hate to the promotion of terrorism. While that will strike some folks as a stretch, I do want to bring a critical race lens to this analysis. Thus far, as we've been talking about online hate, we're really mostly talking about white supremacists and white extremist hate promulgation against minorities, racialized or religious ones.
In bringing a racial lens to this analysis, we have to ask ourselves whether or not we can also begin thinking about these online hate promoters as also promoting terrorism. That's why I bring up the special recommendations of the FATF. The FATF has a special category called designated non-financial businesses and professions in which there is no reference to social media organizations. I would simply suggest taking a look at that.
In terms of focusing on civil society groups, it's not my experience thus far in working with a number of Muslim civil society groups that there has been an inflation of attacks. What we do have, rather, is a better appreciation of how those attacks are understood and felt within a very thick, enriched context.
One of the limitations of law is that it has a tendency to flatten our experiences. Part of the challenge here and part of what we're trying to create at the institute in combatting Islamophobia is a thick narrative around what these attacks mean, how they're understood and how they resonate as hate.
I don't think that you get an inflation by reference to civil society groups in these communities. I think what you have is a racialized and particularized framework that gives meaningfulness to these attacks of hate and therefore allows us to bring them within the legibility of any legal framework.
I actually testified before the international grand committee and was there to hear those hearings, so I'm very much in tune with that. One part of the puzzle is that international coordination, of which Canada is a key part as a co-chair. That committee has done a really good job of bringing together MPs from 14 different countries that represent over 400 million people, and still Mark Zuckerberg and Sheryl Sandberg did not appear.
Let me say four brief things. The first is that in the German case it was a big fine that really enabled the social media companies to come to the table and start enforcing German law. Beforehand they said that they couldn't comply, but when big fines were on the table, all of a sudden they actually could.
The second part of this is that to handle the volume, they're simply going to need more content moderators. While some things are picked up by artificial intelligence, the reality is that most of this work is done by humans. As a sidebar, I want to flag the pretty awful labour conditions under which these people operate, which we in Canada should be concerned about from a human rights perspective. This is very psychologically burdensome work, and we have some evidence from journalists and others about how difficult it is and how much PTSD content moderators experience. The companies are going to have to pony up a lot more money to deal with that.
The third element of this is that we need to find out where the content moderators who work on Canada are located. We don't even know that kind of basic information. My guess is that none of them are in Canada. They don't have any contextual knowledge about Canada, for example, about what is language that denigrates indigenous people or other marginalized groups in the Canadian context. That's another pretty simple thing on which we could ask for clarification. We can try to provide more context.
The fourth part of this then is the question of transparency and figuring out what we as Canadians need to know and whether that should be subject to audit. I suggest that there are also very, very basic questions about how much of the hate speech we see in transparency reports and through social media companies is happening in Canada. The part of the German law that everybody, including Article 19 and other free speech organizations, praises is the transparency report that the NetzDG law mandates. That's something that everybody agrees on, regardless of where they are on the political spectrum. I think that's certainly something Canada can take away, and I can provide very specific suggestions on what we could look for from those transparency reports. It would be much more meaningful than what is in the NetzDG ones or in the broad global ones the companies release.