You're going to testify first, after I introduce all the other witnesses, because we do not want to lose the video conference connection.
In the room with us, as an individual, we have Ms. Elizabeth Moore, educator and advisory board member of the Canadian Anti-Hate Network and Parents for Peace. Welcome.
From the Alberta Muslim Public Affairs Council, we have Mr. Faisal Khan Suri, president, and Mr. Mohammed Hussain, vice-president of outreach. Welcome.
From the Friends of Simon Wiesenthal Center for Holocaust Studies, we have Mr. Avi Benlolo, president and CEO. Welcome.
The rules are eight minutes per group.
We're going to start with the Windsor Islamic Council and the Windsor Islamic Association. I understand they are splitting their time.
Please go ahead. The floor is yours.
Good morning, honourable MPs. We would like to thank the members of Parliament for allowing us to give our perspectives on online hate on behalf of the Windsor Islamic Council and the Windsor Islamic Association, of which I am the public relations director. Lina Chaker is from the Windsor Islamic Council.
Good morning to everyone.
The problem we are addressing is the harm done to victims of online hate. Internet use is growing year by year and will continue to do so in the generations to come. Just as we have regulated other technologies, including television, radio, movies, magazines, and other communication platforms, we cannot ignore the Internet. The harm of online conversations transcends the digital world. We don't need to cite violent events or even the most recent attack in New Zealand to prove that online hate has real-world consequences.
Our community centres are filled with troubled youth facing negative peer pressure, social anxiety, and mental health issues. The overall international Muslim community has been shaken twice over the past couple of years by terrorism, just as other communities have been. These terrorists clearly built their Islamic knowledge from misinformed online sources that spew hate.
We have our own Canadian example from January 29, 2017, in Quebec, with evidence that the attacker's motivation was driven by online hate sites.
To prevent and respond to online hate, we believe there are three important actions the Government of Canada can take.
Number one is to set strict standards and guidelines for social media companies to self-regulate their content. Number two is to more readily enforce legislation that criminalizes both online and off-line hate speech. Number three is to increase awareness about public reporting and responding to this type of behaviour online.
The first action is to impose strict self-regulation standards and penalties for social media companies. Other countries have developed strategies to impose regulations and protocols for social media companies to self-regulate the content of hate speech on their sites. For example, Australia and Germany now penalize social media sites that fail to remove hateful content with financial charges or even imprisonment.
Alternatively, some countries such as Sri Lanka...[Technical difficulty—Editor] ...social media to stop the spread of misinformation and hate. Canada should consider policies of the kind that have been adopted in Australia, Germany and even Sri Lanka to enforce the removal of hateful content and combat terrorism.
We recognize that there may be difficulties in regulating online content. However, our country currently regulates other forms of online content such as child pornography, and anti-spam legislation does exist.
Similar to this, there has to be an effort to combat online hate. For the individuals who try to bypass such regulations, we should combat that by not allowing companies to provide individuals with VPNs or other IP-blocking programs.
Number two is to introduce effective legislation to penalize those who incite hatred. In addition to penalizing social media companies for not taking down hateful content, we must penalize Canadians who spread hateful messages, whether online or off-line. Although we currently have tools to do so, such as section 319 of the Criminal Code, our community feels that they are not adequately utilized and thus cannot encompass online hate crimes.
In fact, we had an unfortunate local example here in Windsor, Ontario. An individual was spraying graffiti all over the city, on the posts and bus stop signs, inciting hatred and harm to Muslims specifically.
These acts weren't recognized as hate crimes under section 319, which makes our community pessimistic about the prospects of encompassing online hate speech. This individual had a misdemeanour and no other charges were pressed against him.
Recognizing this, we believe that section 13 of the Human Rights Act was a vital piece of legislation that was dedicated to online speech. However, it can be amended or restructured to be more effective. We recognize that section 13 was not heavily utilized before it was repealed. However, we do not find this to be a convincing reason not to reintroduce it.
Online hate can be responsible for other types of actions in our society, including verbal attacks against women with hijabs, trying to do harm to people of a visible minority and inciting the physical confrontations that have happened in several supermarkets, shopping areas and malls in our country.
Thus, we are not limiting the discussion of section 13, but hope that any legislation introduced to combat hate will readily be enforced for the betterment of our multicultural Canadian society. The frequency with which a piece of legislation is used should not be the basis on which we decide whether it exists or not. Rather, it should highlight to us that most people still do not know what to do when faced with online hate.
We recommend that there be more education on the consequences of promoting hate. While recognizing that education tends to be a provincial mandate, it is our belief that the Government of Canada can play a vital role. This leads us into our third and final point: educating the public on how to report incidents of hate.
My colleague went over the first two action points that we believe the Government of Canada can take by introducing regulations for social media companies and legislation to regulate those who are spreading online hate. I will cover the third point, which is that we believe that victims of online hate need to be more educated so that they know what to do when they are faced with it.
We grew up with teachers telling us how to respond to bullying on the playground. That's not really effective for the online world. They taught us that sticks and stones can break your bones, but words don't really hurt you. Unfortunately, in today's world, we learn that words can not only hurt you psychologically but can also lead to criminal activity and even terrorism.
I want you to think about the last time you tried to report an online hateful comment. Assuming that the process for reporting the post was user-friendly and noticeable—that is, you actually saw the button that says “report”—where did it lead? Did you have to personally follow up and check to see if it was taken down? How many times? Did you have to forward it to your friends and convince them to also try to report it? How many of us continue to experience and see online hate, despite the continued reports?
We have a couple of recommendations for the government to enforce so that social media companies will better create mechanisms for us to be able to help them regulate the content.
The first is to make it easier to report hateful content. Currently, for example, Facebook doesn't have a “report” button; it has a “give feedback” button. It's not as visible.
Second, shorten the time between the reporting of a post and its examination. As we know, time moves much faster in the virtual space than it does off-line. These processes should be receptive to that.
Third, social media companies should provide the person who reported the harm with an update and provide them with information about other resources, including law enforcement, and such resources as the human rights commission.
Fourth, social media companies should examine software and other algorithms that direct users to violent content and share that with government authorities so that the government can also help find and eliminate violent extremist material.
Finally, social media companies should produce tools that help us, and help users, differentiate between credible information and fake news.
As we have been talking about, there are two kinds of content online that can lead to a lot of violence. One is actual hate and the other is misinformation. We believe the Government of Canada can support and fund community initiatives of digital media literacy to help youth and adults alike be able to differentiate between misinformation and credible information as a method of responding to hate. There is a variety of programming that successfully teaches both generations how to differentiate between real and fake news, making them less susceptible to being influenced by hateful messages. This is essential, given the industry of hate and fake news. Moreover, teaching media literacy skills empowers youth to control their own narrative of their identity and to respond to the negative messages with positive ones.
In conclusion, as Prime Minister Jacinda Ardern said, freedom of speech is not advocating murder, and it's also not spreading false or hateful content. We thank the Government of Canada for considering the important consequences of online hate and applaud the right honourable Prime Minister for signing the Christchurch call in Paris recently, where he took the effort to tackle this issue of violent online content. However, there is more to be done.
To summarize, we urge the government to combat online hate in three ways: first, by setting strict standards and guidelines for social media companies to regulate their content; second, by more readily enforcing hate speech legislation, be it online or off-line; and last, by increasing the public's awareness about how to report and respond to online hate.
Thank you for your time and consideration.
I want to start by thanking the committee for the opportunity to speak today. It is certainly a privilege that I never thought would be afforded to me as a former extremist. I really appreciate the opportunity to be here.
I would like to provide a bit more context about who I am and how my views about online hate have been informed.
In the early nineties, I was a member of and spokesperson for the extremist group the Heritage Front, which at the time was the largest hate group in Canada. They acted as an umbrella organization for the racist right at the time. They brought in the Church of the Creator and the KKK, among other organizations. Most troubling, they were trying to do what the so-called alt-right is trying to do today, which is to make inroads into the mainstream and to try to have a veneer of legitimacy on top of the hatred.
I should add that Wolfgang Droege, who was the leader of the Heritage Front, was convicted of many offences prior to starting the organization, including air piracy, the attempt to overthrow a friendly nation and drug offences, which I believe included possession. He managed to influence people, despite this veneer of wanting to be more mainstream and trying to make connections with the Reform Party. His followers committed a wide array of offences of their own, which included hate crimes offences, assault, and targeted and unrelenting harassment of anti-racists.
I feel very fortunate that I was able to leave that terrible world of hatred behind. Since 1995, I've been working with non-profits, educators and law enforcement to raise awareness about the dangers of hate groups. I'm currently on the advisory boards for the Canadian Anti-Hate Network and Parents for Peace, which is an American organization that provides support for families of radicalized individuals.
Back in the nineties, when I was an extremist myself, I quite literally communicated hate by telephone. Also, prior to leaving, I helped prepare materials for the Internet. They were back-issue articles from the Heritage Front's magazine and they ended up posted on what would become Freedom-Site, which was one of Canada's first white supremacist websites. That website, which was run by Marc Lemire, was found in 2006 to contain material that violated section 13 of the Human Rights Act.
I feel fortunate that I never personally got in trouble with the law, but I do realize that it was a very real possibility. I understand that a sample size of one has limited value, but I should say that section 13 did moderate my behaviour. When I was working on the hotline, I was very aware of the fact that friends who were working on hotlines very similar to the Heritage Front hotlines were facing charges under section 13, and it made me more careful. I did not engage in or indulge the unrestrained hatred that I certainly felt inside. I do understand, with the benefit of hindsight, that what I was communicating was still hateful, but it was definitely not as hateful as it would have been in the absence of such legislation.
The methods that are used today to communicate hatred are definitely more sophisticated and exceptionally more accessible than what we had available to us in the nineties. As an analog kid, I have to say that it frightens me that young people today could have their life trajectories altered by watching one YouTube video or interacting with one Twitter account or one Reddit account.
Racist extremists have always networked with like-minded individuals across borders in other countries, but we now have an environment where the transmission of hate knows no borders or language barriers—or even time differences, frankly.
To fully understand what is at stake, I think it's imperative to consider not just the words and images that are put in front of you but the emotions that created those words. Hatred is intoxicating, it's all-consuming and, in my opinion, it's a contagion that when embraced crowds out not only other moderating emotions but also any sense of reason and connection to one's fellow human beings.
I want to read a quote from R. v. Keegstra from the Supreme Court in 1990. Hatred is:
emotion of an intense and extreme nature that is clearly associated with vilification and detestation...
Hatred...against identifiable groups...thrives on insensitivity, bigotry and destruction of both the target group and of the values of our society. Hatred...is...an emotion that, if exercised against members of an identifiable group, implies that those individuals are to be despised, scorned, denied respect and made subject to ill-treatment on the basis of group affiliation.
...hate propaganda legislation and trials are a means by which the values beneficial to a free and democratic society can be publicized.
With more people being exposed to hateful ideas and emotions than ever before through social media and online content, and with the very troubling rise of hate-motivated crime in Canada, I'm quite heartened that the government is revisiting the inclusion of incitement of hatred in either the Canadian Human Rights Act or the Criminal Code.
The introduction of Canada's digital charter shows promise in developing a thoughtful and measured template for how Canadians can expect to be treated as digital innovation continues to expand. However, I wish to challenge the committee to consider that the government's responsibility to Canadians should not end with the adoption of these measures. Unless effective and ongoing training is provided to everyone responsible for implementing these laws, including judges, Crown prosecutors and police, victims will continue to feel that they are not heard and that justice remains elusive.
As an example, just last week I heard from a member of my local community who wanted to report anti-Semitic graffiti that they found. The responding officer was not at all sympathetic, and because the swastika that was found was misshapen, he wrote it off as a collection of L's. That is not a responsible response to the community.
Speaking as a former extremist and as a woman and a mother who is raising a child in an interfaith Jewish-Christian family, I think Canadians urgently need you to respond boldly and to lead us into an era in which we can expect that our children will be treated with respect and dignity, both online and in the real world. I think we also have a responsibility to the international community to do what we can to limit hatred that may impact identifiable groups in other nations because, as I said, borders mean nothing in the digital world. It is unfortunately no accident that the Christchurch shooter had Alexandre Bissonnette's name on one of his weapons.
The endgame of hatred is always violence and death, and that game starts with incitement, words and images that we find on the Internet.
The introduction of legislation to address the early stages in the progression of hate is both right and necessary. Canada's values of peace, diversity and inclusion are being eroded by the unrelenting promotion and communication of hate online. It is time, if not past time, to send a strong message to racist extremists that their hatred and targeting of identifiable groups is not just unacceptable but unlawful.
As I stated earlier, I have experienced first-hand the moderating effects of such laws and regulations. I think it's time that we do the right thing to rein extremists in before anyone gets hurt or loses their life.
I would add, if I have a moment, very briefly in response to what the earlier speakers had mentioned, that when it comes to reporting online hate, I think platforms need to have more transparency when they respond to people. I have experienced myself being targeted as a former extremist online and receiving hatred, and when I report it, if I get any response back at all, it is, “We have found that they did not violate terms of service” or “We have found that they have violated terms of service”, but there's no additional information to say in what ways they've precisely violated terms of service. There is no mention of what measures have been taken, whether the account has been suspended or whether that suspension is temporary or permanent. I think online platforms owe it to the people who are victims to have more transparency in what they are doing and in saying whether this account is going to be monitored, going forward, for any additional infractions.
Thank you very much for your time today.
Thank you, Mr. Chair, for having us.
My name is Faisal Khan Suri. I'm the president of the Alberta Muslim Public Affairs Council, or AMPAC. I'm joined here by my colleague Mohammed Hussain, who is VP of outreach.
Today's topic of discussion is not only an important one but an absolutely necessary one. With all the events we are seeing in Canada, throughout the world, and especially within Alberta, it definitely warrants our being here, collaborating on this effort and sharing our thoughts. Thank you again to this committee for inviting us and allowing us to share our thoughts.
I'll just give you a snapshot of AMPAC.
We're dedicated to championing civic engagement and anti-racism efforts within the province of Alberta. We focus on advocacy work, implementing strategies around media relations, community bridge-building, education, policy development and cultural sensitivity training.
AMPAC envisions a province where deep equality exists for all Albertans, including Muslims, in a political and social landscape that is respectful and harmonious for people of all faiths and backgrounds.
To get to the gist of things, to state it quite mildly, online hate influences real-life hate. I could be quite blunt about this. Online hate is an enabler, a precursor and a deep contributor to not just real-life hate but also to murder.
We've seen a lot of recent tragedies happen across the world. In January 2017, the Quebec City mosque killer, Alexandre Bissonnette, gunned down six Muslim men in execution style when he came into the mosque with two guns and fired more than 800 rounds. The evidence from Bissonnette's computer showed he repetitively sought content about anti-immigrant, alt-right and conservative commentators; mass murderers; U.S. President Donald Trump; and the arrival of Muslim immigrants in Quebec.
In October 2018, white nationalist Robert Bowers murdered 11 people and injured seven more at the shooting inside the Tree of Life synagogue in Pittsburgh. This was an attack that appeared to have been motivated by anti-Semitism and inspired by his extensive involvement in white supremacy and alt-right online networks.
In March 2019, a lone gunman armed with semi-automatic weapons burst into the mosque in Christchurch, New Zealand. This white nationalist, in what was a gruesome terrorist attack, was broadcasting live on Facebook and Twitter, and 51 worshippers were killed.
There are so many more examples we could provide that show the accessibility of online hate and how it's affecting the real-life hate we are witnessing today.
I think it's absolutely critical, if not fundamental, to embark on such studies as this and to look a lot further into this issue with a deep thought process in place.
Online hate is a key factor in enforcing hate in all forms—Islamophobia, anti-Semitism, radicalization, violence, extremism and potentially death. This is why we must take immediate action to work on prevention, monitoring and enforcement.
In order to combat online hate, AMPAC has come up with three recommendations. Number one is to employ artificial intelligence on online materials to identify any form of hate speech. Number two is to reopen the Canadian Human Rights Act for a comprehensive review. Number three is to have transparency and accountability for social media platforms.
Allow me to delve a little further into the first recommendation, employing artificial intelligence on online materials to identify any form of hate speech.
Right-wing extremist groups are using social media platforms such as Facebook and Twitter to create and promote their groups, share messages, assemble people and more. The question is, how can we remove their access, block IP addresses or even discover these types of groups? There are some tools being used today, such as text analysis, to combat online hate, but these groups are becoming much smarter, and they're using images such as JPEGs to evade that monitoring.
While we are happy to see that the new digital charter incorporates elements of an approach that involves industry, we believe that the government must itself fund innovative technological solutions to track online hate and aid in developing artificial intelligence that can combat it.
The AI technology needs to be comprehensive so as to encompass text analysis and languages, and so as to cover all forms of social media that are used to facilitate online hate. We believe that there is space in Canada, especially within Alberta, to build that capacity.
Our second recommendation, to reopen the Canadian Human Rights Act for a comprehensive review, is quite near and dear to our hearts.
The moment freedom of speech or freedom of expression puts another group, organization or individual in any form of danger, it can no longer be justified as freedom of speech or expression. This is now freedom of hate, which has no place in the Canadian Charter of Rights and Freedoms, nor in any pluralistic society that we live in. It has been far too long since the Canadian Human Rights Act has been revisited.
Keep in mind the following: For the last few years, hate has led to the murder of innocent civilians. Also keep in mind the importance of reviewing how online access and other media have been used to propel such hate and extremist perceptions.
AMPAC recommends not simply revisiting and reviving section 13, but reviewing the Canadian Human Rights Act in its entirety. The review itself needs to consider facts on the rise of Islamophobia, anti-Semitism, xenophobia and all other forms of hate. Questions need to be asked in terms of what determines hate and how we can bring enforcement into the picture with respect to the Charter of Rights and Freedoms.
Part of our third recommendation that we talked about is transparency and accountability for social media platforms. While we're pleased with the signing of the digital charter, we think that there is a lot more to be done in terms of regulating social media companies. We recognize that social media platforms have been trying to curtail hate speech through reporting options, but there is a lack of accountability in what follows that reporting, which in turn minimizes any sort of enforcement. Social media platforms such as Facebook, Twitter and YouTube must be held accountable by government authorities for reporting the data and for any follow-up measures.
We're quite aware of the challenges that such regulations can bring to freedom of expression related to this recommendation, but we believe in a statement that New Zealand's Prime Minister Jacinda Ardern gave. Her insistence on controlling the amplification of online hate is not about curbing freedom of expression. I will quote some of her words. She says, “...that right does not include the freedom to broadcast mass murder.” She also says, “This is not about undermining or limiting freedom of speech. It is about these companies and how they operate.”
Working alongside social media companies, holding them accountable, and imposing some form of financial repercussions or other necessary measures are part of this recommendation. We hope to see a requirement for online platforms to be transparent in their reporting come to light with this initiative.
To end, I'll go back to the key factors that are priorities for us: to look at prevention, monitoring and enforcement. Today the recommendations that we've talked about—implementing a comprehensive artificial intelligence tool that spans major social media platforms, implementing language-text-image analysis, reopening the Canadian Human Rights Act for an extensive review, reviving section 13 and holding social media platforms accountable for sharing data—are just the initial steps that we believe can help to curb online hate.
With a 600% increase in the amount of intolerant hate speech in social media posts from November 2015 to November 2016, I can only try to fathom or understand where those statistics are today.
Additionally, with the clear evidence of online hate, including the horrific killing of innocent people, there is absolutely no greater time than the present to action immediate government-legislated change. We cannot allow hate to inflate any further. We most certainly cannot allow any more lives to be taken.
I'd like to end by echoing this statement: “Canadians expect us to keep them safe, whether it’s in real life or online....”
Thank you so much.
Good morning, everyone. Thank you very much for having us here today and for actually doing this. This is very important work that you're all doing.
I'd like to begin my statement by first telling you a little bit about our institution. We're an international human rights organization. We have a network of offices worldwide, monitoring and responding to anti-Semitism, fighting hate and discrimination and promoting human rights. The organization has status with the United Nations, UNESCO, the OSCE and many other notable global organizations. Additionally, the Simon Wiesenthal Center has won Academy Awards and developed museums. We are currently building a human rights museum in Jerusalem.
In Canada, we have won the Canadian Race Relations Foundation's award for our tolerance training workshops on the Tour for Humanity and in the classroom. We educate about 50,000 students each year, including those in law enforcement, faith leaders and teachers.
The organization has been tracking online hate for more than two decades. Twenty years ago, online hate was primarily found on websites. They were fairly easy to track, document and, in some cases, bring down through the help of Internet service providers. In fact, we used to produce an annual report called “Digital Hate” in the early days.
Section 13 of the Canadian Human Rights Act allowed us to bring down several online hate sites simply by bringing them to the attention of the ISP. Our ability to sanction hate sites became limited when section 13 was repealed in 2013. We lost an invaluable tool that provided a red line for the public. If that tool was in existence today, it's unlikely that anti-Semitic websites based in Canada, like the Canadian Association for Free Expression or Your Ward News and others, would so easily find a home on Canadian servers.
The advent of social networking sites like Facebook, Instagram, Twitter and the like introduced a tsunami of hate into the social sphere. According to one study, roughly 4.2 million anti-Semitic tweets were posted and reposted on Twitter between January 2017 and January 2018. By contrast, according to Statistics Canada's 2017 hate crime report, there were 364 police-reported cyber-hate crimes in Canada between 2010 and 2017. Of those, 14% were aimed at the Jewish community.
I'm telling you this because this number is actually really low. You'd be surprised hearing this number, but it's low. I think it's low, given this recent Leger Marketing poll that showed that 60% of Canadians report seeing hate speech on social media. That would mean something like 20 million Canadians have witnessed hate online.
Moreover, through our own polling, the Friends of Simon Wiesenthal Center found that on average across the country, 15% of Canadians hold anti-Semitic attitudes. That represents about five million Canadians. That's kind of the low end of that threshold; in Quebec, that number surges to an incomprehensible 27%.
Social networking platforms must be held to account for allowing online hate to proliferate. We note that these platforms have begun banning white supremacist and extreme terror groups. This is certainly one step forward. However, since they are operating in Canada, we must demand that platforms conform to our Criminal Code, specifically section 318 on advocating genocide, subsection 319(1) on publicly inciting hatred, and subsection 319(2) on wilfully promoting hatred.
It's possible that Canada requires a CRTC-like office with a mandate to regulate online content and specifically ensure that online hate is curtailed. Indeed, one CRTC mandate is to “protect” Canadians. The CRTC says, “We engage in activities that enhance the safety and interests of Canadians by promoting compliance with and enforcement of its regulations, including those relating to unsolicited communications.” It's in their mandate.
That appears to be consistent with our interest here to limit the proliferation of hate online in accordance with Canadian law.
The Christchurch Call to Action to eliminate terrorists' and violent extremists' content online is a positive step forward. However, it must be implemented by Canada with concrete tools. Friends of Simon Wiesenthal Center recommends the following actions that could help stem the promulgation of hateful acts against all communities through online platforms.
One, reinstitute section 13 of the Canadian Human Rights Act to make it illegal to utilize communications platforms to discriminate against a person and/or an identifiable group.
Two, the section should also make platforms and service providers liable for ensuring they are not hosting hate websites and for moderating their online social networking feeds. Fines should be imposed and criminal sanctions should be placed on violators.
Three, expand Statistics Canada's mandate to collect and share hate crime statistics from across the country. At the moment, Canadian policy-makers and organizations are mostly guessing. This is where I get back to those police numbers. We really are guessing at the extent of hate online and beyond. We need better information collected across the country to make better policy.
On that point, I held a hate crimes conference last fall and I invited Statistics Canada. It was the first time they attended a hate crimes conference with police units from across the country. I was shocked that this hadn't happened before.
Fourth is to improve police capacity and ability to track and respond to hate crime. Through our research, we discovered an inconsistency of hate crime units across the country. Some cities lack the resources to implement and deploy hate crime investigators, as you just heard. Last fall, we initiated the hate crimes conference. I'm repeating myself.
This country is lacking a best-practices model for policing hate crimes, understanding the law around hate crimes, and collecting and delivering that information to Stats Canada, which will in turn deliver that information to the policy-makers.
Number five is to improve communication between the provincial attorneys general as well as police when it comes to investigating and prosecuting hate crime and hate speech offenders. This will require additional training for prosecutors and police officers so that victims of hate speech crime feel their needs are addressed.
We have specific examples that I can get into later about the mishandling of how prosecutors are working with the police and the disjointed communication between them in finding hate crime criminals and prosecuting them.
Number six is education. This is, for us institutionally, one of the most important elements. Education on responsible usage of social networking sites and websites is required now more than ever. We dedicate literally millions of dollars a year to deploying our educational programs to bring that to students. We have, for example, cyber-hate and cyber-bullying workshops, where we aim to educate students.
Even going to a website about the Holocaust is one example. How do you know which website is legitimate? How do you know which one is fake? Further education needs to happen in schools across the country so the students, the young people, the next generation will understand what hate speech and hate crime really are and be able to differentiate.
Finding a balance between protecting free speech and protecting victims of hate is essential. Our freedom and democracy must be protected. At the same time, we must recognize that there are victimized groups that need protection too, and leaving the issue to the marketplace will bring about unpredictable consequences.
Even The Globe and Mail admitted in an editorial last week that times have changed since the Supreme Court of Canada struck down a law in 1992 that made it a crime to “spread false news”. The Globe says, “Much has changed since then. Mr. Zundel printed and handed out crude pamphlets”, whereas today the same hateful message can be viewed by millions of people at once and inspire violent action.
We know this. The recent terror attacks in New Zealand, Sri Lanka, San Diego, Pittsburgh, etc., must motivate government and civil society to take immediate action. Terrorism can be prevented with the right placement of instruments, instruments that include a combination of enhanced legal measures, advanced monitoring and prevention, increased resources for law enforcement and hate crime units, and broader educational programs that promote tolerance, compassion and good citizenship.
We hope the committee makes recommendations for immediate amendments to the Canadian Human Rights Act to end incitement of hatred on online platforms.
Thank you very much to all of our witnesses today.
We're looking at the digital charter, and all of you have mentioned strong enforcement and real accountability when it comes to social media platforms. Today on the Hill we have an international grand committee that's looking into citizen rights and big data. We have a real challenge, because we have Mark Zuckerberg refusing to even come to the committee. He's in contempt of Parliament, essentially, because he refuses to come before this international committee, and therein lies the biggest part of our challenge.
If the big digital players don't respect what we're trying to do in our respective legislatures around the world, how can this end up being meaningful? It's a very significant challenge. It's really going to require, I think, all of our countries to call them on the carpet and tell them that they are responsible.
When you hear about the numbers online, the percentages that you've all raised here, it's just mind-blowing. That, in and of itself, is a very serious challenge when we can't even hold them accountable to what we're trying to put forward. We can put forward what we think will be important legislation, but if they don't adhere to it, where are we?
I want to go back to something.
Lina and Sinan, thank you for being here from my local community of Windsor-Essex. I appreciate your being here by video conference today.
I want to go to something that Lina said when talking about that real-life experience. I wonder what this looks like on the ground when you're trying to combat online hate or you see something and you think, “Is this hate? What is this?” You start to have those conversations among others to try to get them to stop it as well.
I also wonder if you can speak to the impact of having that burden on you and your community and in particular on young people. I know you do a lot of work with youth. What is the impact of this responsibility that's now on their shoulders to battle this every day when they're seeing things online?
Thank you. Good morning.
Please allow me to acknowledge that we are gathered here this morning on land held by the Algonquin people who are the original stewards of this territory, which they never ceded. As representatives of over one million Canadians of African descent, many of whom were displaced by the transatlantic slave trade and colonialism, the Federation of Black Canadians is of the belief that Canadians must continuously do such land acknowledgment as part of the national reconciliation with indigenous sisters and brothers.
Allow me to begin by thanking the Standing Committee on Justice and Human Rights for inviting me to address you this morning. My name is Dahabo Ahmed Omer, and I'm the stakeholders' lead on the board of directors of the Federation of Black Canadians. I'm here to speak to you in favour of amending the Canadian Human Rights Act. This is to provide legislators, law enforcement and marginalized communities with more effective instruments and mechanisms to stem the explosion of hate crimes and terrorism.
As you're probably aware, there has been a horrendous spike of hate crimes in Canada. Stats Canada just recently released the latest report on police-reported hate crimes in Canada, which shows a 47% increase in reported hate crimes. Black Canadians not only constitute the group most targeted by hate crimes by race and ethnicity, but the recent increase in hate crimes has been largely, although not exclusively, a consequence of more hate crimes targeting people of African descent.
If you're a black Canadian and you happen to be a Muslim and a woman and a member of the LGBTQ+ community, there is an even greater risk of being targeted by hate crimes. This intersectionality of hate is poorly understood and is also a very important part of the equation. Based on the federal government's 2014 “General Social Survey: Canadians' Safety”, we now know that over two-thirds of people targeted by hate crimes do not report them to the police. The most often reported explanation for this is that they get a sense that if they do report the crime to the police, the report will either not be taken seriously or the accused will not be punished.
From a black Canadian perspective of communities suffering from over-policing, carding and other forms of racial profiling, that fear becomes even more heightened. Even right here in the nation's capital, there was recently confusion with the Ottawa Police Service over whether or not the municipality has an actual hate crime unit. This feeds into the perception of law enforcement's indifference.
It is important for the federation to stress that this explosion of hatred that has been described so far actually mirrors the proliferation online. CBC's Marketplace recently revealed a whopping 600% increase in hate speech by Canadians online. We also know that over 300 white supremacist groups are operating in Canada, using the web not only to promote hate and concoct deadly attacks but also to infiltrate our trusted public institutions.
It should therefore come as no surprise that a 2018 Angus Reid poll showed that 40% of Canadians feel that white supremacist attitudes are a cause of great concern.
Hate is currently undermining public safety for marginalized communities such as mine while also threatening national security. This is made clear by a recent report by the military police criminal intelligence section that reveals white supremacist infiltration of the Canadian Forces by paramilitary groups that use the web to recruit and spread hate.
With terrorist attacks on the Centre Culturel Islamique de Québec; recent vandalization of black, Muslim, Jewish and Sikh places of worship; and the global context of coordination among white supremacist groups worldwide, more and more Canadians of all backgrounds believe that the time is now for Parliament to act more forcefully and deliberately against hate, which undermines public safety and transnational security.
Canadians expect their Parliament to take stronger action to prevent hate crimes that threaten public safety across the country.
The Federation of Black Canadians is aware that there is a tension between respecting freedom of expression, as protected under section 2(b) of the charter, and regulating hate speech online, as well as the prospect of technical solutions for reporting and monitoring hate speech or designating legitimate news sources. Yet based on the lived experience of so many people across Canada who look like me, the federation believes that the lack of civil restrictions on the dissemination of hate communicated over the Internet, the most prevalent and easily accessible mechanism of public communication, is a matter of grave concern.
The Canadian Human Rights Act stripped of section 13 is not a tool for the 21st century. When one considers that almost all Canadians under the age of 44 communicate online, it is imperative that all political parties and independents come together in the spirit of consensus to restore section 13 of the act, which constitutes the only civil hate speech provision in Canada explicitly protecting Canadians from broadcast hate speech on the Internet.
The burden of proof required by section 319 of the Criminal Code is so high that, in and of itself, it leaves the most vulnerable populations, including black Canadians, subject to the proven harms associated with hate speech without providing a viable mechanism for recourse.
This becomes yet another systemic barrier to the inclusion, well-being and safety of black Canadians, among so many other groups targeted by hate. While the right to freely express oneself is fundamentally essential to a functional democracy—and trust me when I say this, because my country of origin is Somalia—the protection of the minority communities from the real harms associated with hate speech and online hate is demonstrably justified in a free and democratic society. It is only when Canadians feel safe, protected and respected within our society that Canada can flourish and advance as a democracy.
Committee members, the Mosaic Institute is grateful for the opportunity to participate in your deliberations on online hate. We recognize that your time is limited, and that you must be selective about the organizations you invite to appear. Thank you for including us.
Mosaic is a Canadian charitable institute that advances pluralism in societies and peace among nations. It operates through track two diplomacy and brings together people, communities and states to foster mutual understanding and to resolve conflict.
Over the years, we have convened Chinese and Tibetan youth leaders on peaceful co-existence on the Tibetan Plateau, we have assembled Sinhalese and Tamil representatives on reconciliation after the Sri Lankan civil war, and we have called together survivors of genocides to combat future global atrocities.
Fundamentally, our mission is to break cycles of hatred and violence by building empathy and common ground between peoples at strife. We have therefore seen first-hand how the speed and reach of social media have made it both a means of bringing us all together and a weapon to set us all at one another's throats.
The stakes are unutterably high. In our work with the Rohingya people, it has become clear to us that social media played a determinative role in spreading disinformation, fomenting hatred and coordinating mass slaughter, ending with the deaths of at least 10,000 innocent people and the ethnic cleansing of at least a million more. Canada is not Myanmar. Nevertheless, the ability of Parliament to contain and combat online hatred and incitement will quite literally decide whether people live or die.
It should go without saying that in a just and democratic society, there is no higher ideal, no greater ethic, no more sacrosanct imperative than freedom of expression. Peace, order and good government; liberté, égalité, fraternité; life, liberty and the pursuit of happiness—all are impossible without free public discourse. Freedom of expression becomes meaningless if it does not include freedom to offend, freedom to outrage and quite frankly, freedom to make an ass of oneself, although I'm sure that never happens in Parliament.
Voices: Oh, oh!
M. Akaash Maharaj: Any abridgement of freedom of expression must, therefore, be only the barest minimum necessary to preserve the dignity and security of citizens.
We believe that Canadian laws defining illicit hate speech are sufficient for that purpose, and the scope of proscribed speech need not and should not be expanded further. Legal, regulatory and social media frameworks fall short, not in defining hate but in identifying it and quarantining it before the virus spreads and wreaks its damage.
We do not underestimate the scale of the challenge that legislators and social media firms face. During the two and a half hours set aside for this hearing, there will be 54 million new tweets and 4.7 billion new Facebook posts, comments and messages.
For your consideration, here are our recommendations.
First, social media firms must, either voluntarily or under legal compulsion, adhere to a set of industry standards on the speed with which they review reports that posts violate Canadian anti-hate laws or their platforms' own terms of service. For example, the European Union standards require firms to review a majority of reports within one day.
Second, social media firms should be required to have specific conduits to prioritize complaints from trusted institutions about offending content. A complaint from a children's aid society, for one, should be treated with immediate concern.
Third, there must be financial consequences for firms that fail to remove illegal content within a set period—penalties severe enough to make the costs of inaction greater than the costs of action. Germany's network enforcement act sets fines as high as 50 million euros when illegal posts stay up for more than 24 hours.
Fourth, social media firms should be required to publish regular transparency reports providing anonymized information on, among other issues, the performance of their machine learning systems at automatically intercepting proscribed posts; the speed with which firms respond to complaints from victims, trusted institutions and the public at large; and the accuracy of their responses to complaints as measured by a system of third party random sampling of posts that have been removed and posts that have been allowed to stand.
Fifth, social media firms must be more forthcoming in revealing the factors and weightings they use to decide what posts are prioritized to their users. They must give users greater and easier control to adjust those settings. Too often social media platforms privilege content that engages users by stoking fear and hatred. A business model based on dividing our communities should be no more acceptable than one based on burning down our cities.
Sixth, Parliament should enact the necessary appropriations and regulations to ensure that CSIS and the Communications Security Establishment have both the mandate and the means to identify and disrupt organized efforts by hostile state and transnational actors who exploit social media to sow hatred and polarization amongst Canadians in an effort to destabilize our nation.
Seventh, Parliament should consider legislative instruments to ensure that individuals and organizations that engage in incitement to hatred bear vicarious civil liability for any violent and harassing acts committed by third parties influenced by their posts.
Eighth, the federal government should fund school programs to build young Canadians' abilities to resist polarization and hatred, and to cultivate critical thinking and empathy. The best defence against hatred is a population determined not to hate.
Finally, especially in this election year, I would put it to you that parliamentarians must lead by example. Everyone in this room knows that the guardians of our democracy are not ministers but legislators. We look to you to stand between our leaders and the levers of power to ensure that public office and public resources are used only in the public interest. More than that, we look to you to be the mirror of our better selves and to broker the mutual understanding that makes it possible for a vast and pluralistic society to thrive together as one people and one country.
During the upcoming campaign, you and your parties will face your own choices on social media: whether to campaign by degrading your opponents, whether to animate your supporters through appeals to anger or whether to summon the better angels of our natures. Your choices will set the tone of social media this summer more decisively than any piece of legislation or any regulation you might enact. I hope you will rise to the occasion.
Good morning. Thank you very much for having me here today. I appreciate the invitation.
My name is Brad Galloway. I'm working as a research and intervention specialist with the Organization for the Prevention of Violence, which is located in Edmonton, Alberta. My main goals there are to take part in up-and-coming research, specifically on the far-right extremist movement in Canada, and more specifically, as of recent times, looking at the online dynamics of far-right extremism.
I often weave in my own personal lived experiences with the far right in Canada, as I spent 13 years within that movement in Canada, mostly at the beginning, in the offline context. However, I spent about 10 years operating also in the online context, so I know a lot about this online activity from an insider's perspective. I've used a lot of my experiences in taking part in some academic research as of recent times.
I'm also working with Life after Hate, which is another group that is similar to the Organization for the Prevention of Violence. We're looking at doing interventions and helping other people leave extremist movements. Some of those initiatives will definitely include looking at ways to build on online intervention strategies to intervene with people, and also providing resources for people who want to leave these types of movements.
It is my belief that communities are formed on shared ideas, experiences and cultures. In order to distinguish and define themselves, groups compare themselves to others in positive and negative ways. It is in the latter that problems might arise.
A healthy, culturally diverse society is one that respects, accords dignity to and even celebrates the differences between cultures and communities. However, when groups start to distinguish and compare themselves in a negative manner to other groups on grounds such as race, religion, culture, ethnicity and so on, there is a potential for destructive and abiding conflicts. This leads to an us-versus-them mentality.
It is in this sense that hate and extremism are interrelated phenomena that exist along a continuum of behaviours and beliefs that are grounded in this us-versus-them mindset. The perpetuation of associated rhetoric can create an environment where discrimination, harassment and violence are viewed by individuals as not only a reasonable response or reaction but also as a necessary one. When this is left unchecked, deepening, sometimes violent divides within society can undermine who we all are as Canadians and fray the social fabric of the country.
For the last 30 years, technology—first telephones and later the Internet—has played a crucial role in the growth of the white supremacist movement throughout Canada. Early versions of hate speech online in the 1990s and 2000s were being distributed through automated calls and websites. For example, the Heritage Front, a white supremacist group, had automated computerized calls spouting racist information. Other examples included the Freedom-Site network and the white civil rights hotline.
Beginning in 1996, we then saw the emergence of online discussion forums such as Stormfront, which notably was one of the first white supremacy websites and is still very active today. Stormfront was the first of this series of online far-right platforms and was used to communicate and organize.
Today we see more activity on social media sites, such as Facebook, Twitter and Gab, though most of these conventional forums still exist and are often used in conjunction with the new platforms, inclusive of apps. Content removal and regulation are often suggested as ways to mitigate such sites and platforms. I would say that both have their upsides, but they face many challenges, both legal and ethical.
More with regard to the present, extremist groups and individual influencers promote social polarization and hate through available technology and are highly adaptive to pressing demands by law enforcement, governments and private social media companies.
Further, online hate speech is highly mobile. I would argue that these hate groups, and organized hate groups specifically, are using this mobility to further their transnational hate movements. Even when this content is removed, it finds expression elsewhere. Individual influencers are adaptive at finding new spaces.
If content is removed, it often re-emerges on another platform or under the name of a different user. Often the rhetoric and the networks move from established networks, where counter-speech can occur and where journalists and law enforcement are able to easily track their activity, onto platforms where detection is more challenging and where what are often termed “counter-narratives” are harder to deploy.
There are a multitude of examples, both domestically and internationally, of individuals who are promoting hate being kicked off one major platform—for instance, Facebook—only to move to either another major platform such as Twitter, or any host of smaller platforms, such as Gab or Telegram. Today’s online space is a more dynamic, immersive and interactive multiplatform online space than has ever previously existed, when there were only a few forums or a few telephone lines.
Influencers and propagators of hate distribute through multiple interlinked platforms. This new dynamic has demonstrably had an ability to mobilize hate-based activism and extremism, especially for lone-actor, violent extremists such as those who perpetrated the Tree of Life synagogue and Quebec City mosque attacks. The individuals who carried out these attacks did not necessarily engage directly with ideological influencers or a networked group, but they were mobilized based on the hate they felt and the sense of crisis they saw stemming from an opposing group.
What is the solution? I don't think there's any golden ticket solution. However, we believe that ultimately the first step in prevention and countering the propagation of hate speech and extremism is awareness, beginning with a better understanding of the nature of hate crimes and hate incidents online and off-line. We need better data on who is most targeted by hate and what the intersectional dimensions of targeting are—as in black, woman, Muslim who wears the hijab—and where these things take place. We need data on whether certain public spaces, like public transit, or certain public platforms, such as Facebook or Twitter, are more conducive to hate speech and harassment.
In order to do this, there needs to be more incentive for victims of hate crimes to come forward. Often there is stigmatization, fear and skepticism around reporting a hate incident to the police. These issues need to be constructively challenged and mitigated through a multisectoral approach.
A recent example that I found is the proposed bill SB 577 in the state of Oregon, where they are also dealing with a rapid increase in hate crimes. This new bill requires law enforcement agencies to refer alleged hate crime victims to a new state hotline that is staffed by the Oregon Department of Justice, which connects callers to local mental health services, advocacy groups and other useful resources for crime victims. This allows victims to be in a safe, understanding environment while moving forward with a multitude of resources to address their hate experiences. It provides victims with some more resources and could increase reporting.
Online parallels are easy to imagine. Already some American non-profits are creating online resource hubs for people who have been doxed and had their personal information exposed. These resources could be repurposed and redeployed to address the issue we’re talking about today.
Many witnesses have likely discussed the legal challenges associated with changes to legislation. With the time I have left, I would instead like to touch upon some efforts that could occur further upstream of hate speech that don’t require legislative change.
Thank you to the committee for having us here. I'm here with my colleague Sukhpreet Sangha, and we'll be speaking interchangeably today.
Very quickly, the South Asian Legal Clinic of Ontario is a not-for-profit legal aid clinic that works with low-income South Asians across the province. We do poverty law, which includes a large volume of immigration, human rights, employment law, housing law and income security law.
From that casework we also look at the trends in what we hear from our clients and pick up on larger advocacy pieces around Ontario and around the country that are impacting the work we see on the ground, so our comments are directly related to our front-line work.
Approximately 30% of SALCO's legal casework raises issues of systemic racism and discrimination. We've worked on cases on access to service, housing, employment, policing, immigration, and we've worked within the larger justice system framework on these issues.
Our law reform work has addressed the growing inequities faced by racialized communities, inequities that intersect with multiple identities such as gender, faith, socio-economic status. Our work has addressed how those things intertwine in that world of online hate speech.
We've had a chance to speak on these issues at the United Nations. We are part of Ontario's anti-racism directorate consultation committee and worked on the legislation to embed that in Ontario law. We sit on the Toronto Police Service's anti-racism advisory panel. We've worked with the federal government on a national anti-racism strategy. We've intervened in Quebec and on test cases at the Supreme Court on the ability to wear the niqab, and we are currently sitting on a coalition of community leaders in Ontario that's looking specifically at dealing with hate crime and the rise of white supremacy.
No doubt you've heard from everybody today that obviously in recent years, we've seen a definite rise in hate speech in the public discourse. There's no doubt that social media platforms and the Internet have played a significant role in spreading that hate speech.
Globally and domestically, we know that online hate has been a catalyst for violence against Muslim, Jewish, black and indigenous communities.
I want to say quickly that a lot of the discourse, a lot of the discussion, now is around Islamophobia and anti-Semitism, but the data shows that anti-black hatred is prolific in Canada, as is anti-indigenous hatred. I believe that those two communities go largely unreported, so it's important, I think, for this committee to take notice of those particular historical communities and the hatred they continue to face to this day.
Last week I met with a Muslim client who came to our office begging us to help get some of their remaining family members out of Sri Lanka. Why? It was because the others had been killed in an attack on the Muslim community in Sri Lanka, which was incited by online hate. The connection there is real. We have people here who are connected to people globally and we see the impact of online hate.
To be frank, I want to tell this committee that I have personally received a significant amount of hate by email and on social media, threatening me and, in one case, threatening my family, my children and our organization.
Yesterday I spoke to a Sikh colleague who had received an open message on the website of his organization calling him a “towel head” and telling him that his community should be deported and that he deserves to die.
I don't bring these things to the committee to be shocking, but to tell you that this is what we feel and see. The audacity and frequency with which people now spew hate online shows us that we have failed to control online hate. There is truly little in the way of real and accessible mechanisms in Canada to hold people accountable.
I'm going to turn it over to Sukhpreet.
I'll be speaking about combatting online hate, especially through the lens of the criminal law.
Of course, we all understand that we must be expeditious, bold and effective in the way in which Canada responds to the growth of online hate.
To that end, regarding the criminal law, the committee is aware and it has been mentioned earlier that the Criminal Code currently contains two main provisions that can be used to charge persons accused of committing online hate crimes: sections 318 and 319. While section 318 prohibits advocating genocide, section 319 more broadly prohibits public incitement of hatred and the wilful promotion of hatred and is thus the likelier charging section in cases of online hate crime. The use of section 318 is further limited by the fact that proceedings under it may not be instituted without the consent of the Attorney General.
Subsection 319(2) creates a hybrid offence that criminalizes:
Every one who, by communicating statements, other than in private conversation, wilfully promotes hatred against any identifiable group
The maximum punishment available, if proceeding by indictment, is two years of imprisonment.
However, it is troubling that the Attorney General's consent must also be obtained to institute proceedings under that section. As the popularity of using the Internet as a forum for spreading hate only continues to increase, Parliament should reconsider whether this requirement for the Attorney General's consent places undue limits on the prosecution of online hate crimes. Requiring this consent to proceed with online hate crime prosecutions creates an unnecessary additional barrier to these charges being pursued by legal authorities.
Alternatively, individuals committing certain types of online hate crimes can also be charged with more generic Criminal Code offences, as is sometimes done, under sections regarding uttering threats and criminal harassment, and the hate motivation of the crime can be considered an aggravating circumstance on sentencing under subparagraph 718.2(a)(i).
Another concern with the current usage of the code to address online hate crime is the often fraught relations between some members of racialized communities and the police. It is more broadly acknowledged now that systemic racism is a significant problem within our criminal justice system. This creates an access-to-justice barrier for members of those same communities when they are subjected to hate crimes, because their main avenue for dealing with them is the police. It can be reasonably difficult for racialized persons who have experienced being targeted by police through programs such as carding to then have to seek assistance regarding hate crimes from members of that same police force.
Further, the historical and ongoing overreliance of the criminal justice system on punishment through penalties such as imprisonment and fines also produces a deficiency when dealing with hate crimes, both online and off. Punitive sanctions, such as those traditionally meted out by the criminal justice system, do little to confront or change the attitudes and beliefs that motivate hate crimes. As such, a more meaningful remedy could lie in community-based programs that seek to address the motivators and the thinking that underlie hate crimes, in a genuine attempt at anti-racism and anti-oppression education.
Further, problems with prosecuting hate crimes through the criminal justice system are revealed by the fact that police solved just 28% of hate crime incidents in 2017, as shown by new Statistics Canada analysis, which I'm sure most members have seen. By comparison, among all Criminal Code violations, excluding those in traffic, 40% were solved by police in that same year, 2017. Hence, even when the hurdle of reporting to police is cleared by victims of hate crime, the chances of the case being solved are 12 percentage points lower than with other types of offences.
We'll make two recommendations regarding the criminal law.
First, as the popularity of using the Internet as a forum for spreading hate only continues to increase, Parliament should reconsider whether codifying requirements for the Attorney General's consent places undue limits on the prosecution of online hate crimes. Requiring this consent to proceed with online hate crime prosecutions creates an unnecessary additional barrier to these charges being pursued by legal authorities. I will note that this requirement for consent is also in section 320.1, which was raised earlier at this committee today.
The second recommendation is that the government should look at creating civil and community-based mechanisms to address online hate that do not engage the criminal justice system. In our view, the criminal law is not the most effective mechanism to address online hate. A non-criminal administrative mechanism could provide a more accessible alternative. Systemic racism within the criminal justice system makes it disproportionately ineffective for racialized communities.
I'll pass it back.
Okay. I'm going to go really quickly.
I won't repeat what everyone has said on section 13 of the Canadian Human Rights Act. What I will say is that I worked using that section, and it was effective to combat online hate. The section, as it's written, does include a recognition that hate includes computer and online communication.
We call for a re-enactment of that section. We can, in this committee, study what the issues were, procedurally, and why it was ineffective. There's a lot of nitty-gritty. We can take best practices from what we learned about things that didn't work and put in what did. I had, I think, at least five successful section 13 cases; none went to hearing. Just the use of section 13 resolved those issues.
Lastly, what we want to speak about most is that we cannot do any of this work piecemeal, such as by changing a section in the Criminal Code or adding a section. What we really need is some sort of national anti-hate strategy. We need a strategy that addresses online hate, social media platforms and the Internet, how they collect data, and that makes that data collection mandatory.
A lot of data has been quoted here, but the reality is that the data we have does not even touch on the reality of online hate. Most people don't report it; most people don't go to the police, as my colleague just said, so we don't know the real picture in Canada. We continue to fail repeatedly on how we collect data on this issue. We continue not to push social media outlets to collect data.
We also need a strategy—and someone said it before—around education. A national strategy, directorate, secretariat or whatever you want to call it ensures a commitment and a continuity of this work, regardless of what happens, and that we're looking at this within the larger picture. You see the work on online hate and education; you see the bigger picture. We would call on this committee to look at doing that.
The other thing I want to say quickly is that I would urge this committee not to come back with a recommendation that we study this more. As was the case with forced marriage, we had people from the U.K. come in to say, “You don't need to study it. You have enough anecdotal information in front of you that you should act now.” I feel very much the same way on this issue.
Thank you very much to all of the witnesses for coming today and sharing your perspectives, your experiences and your advice to the committee.
I'm very interested, Ms. Sangha, in the idea of education over enforcement and having conversations in the public square.
You referenced the clearance rate for police forces and online hate crimes. My colleague, Mr. MacKenzie, served as a police chief, and reminded me that they clear 100% of the drunk-driving cases that get put in front of them. We could run a R.I.D.E. program every day, and, unfortunately, we're still going to be catching impaired drivers.
I very much like the idea of education. When we had witnesses here a few weeks ago, I asked a few of them questions about having the conversation in the public square, and bringing groups together, so we're not dealing with this in silos within communities. We all share a common purpose in this, and that is to end hate. When hate migrates online, it seems to proliferate more quickly, but if we address it right at the root cause....
Do you have any examples of where this has been done—where different faith or ethnic groups have been brought together on a large scale, with some result, or perhaps your colleague does?
It's a great question, because it's a question about accessibility. I dealt with it as a lawyer who is the executive director of the South Asian Legal Clinic and has a whole heap of privilege behind me, so I can blast back, right? I can call the police and they'll take my call. I can do a number of things that I know my clients could never do.
I was able to counter it with my own commentary. With some of the comments, though, you just leave. We've become emotionally fatigued by it, to the point that you have to put your hands up and say, “I'm not going to engage”, because the engagement just leads to a snowball effect. The truth is that the emotional psyche of people who do this work with people from different communities—all of us—is harmed. I see it with my kids in particular.
The reality is that what we wanted to talk to you about today is that we don't have accessible solutions. That's why, when people talk over and over about civil remedies, administrative remedies, services that people can access for free to combat this and moving things out of criminalization, we are talking really about accessibility and access to justice. We want to look at those back-end mechanisms for people to combat that individual hate, but we also want you to think about the front-end piece, right? Why is it okay? Just as quickly as we went in this direction of online hate being okay, we can go in the direction of it not being okay. That takes our will. It takes our will to do that, and it takes our will to stand up, but it is really difficult.
One thing that I didn't get to say was that in Ontario we had an incredible ruling two weeks ago against two anti-Muslim advocates who went after the founder of Paramount Fine Foods. He got an award of $2.5 million in civil court. When I think about that case, I think, “If the clients I'd seen had the resources to access civil remedies, imagine the message we could send around civil liability for these cases.” I think about how we do have test case funding now, and we do have the court challenges program. Is there an opportunity at the federal level to expand those programs to have this kind of work done?
I am a part of this coalition called the Justice for Abdirahman Coalition. We've been doing work for almost three years now, and a lot of that work has been towards creating more transparency and accountability with our police services.
One of the things we found with our communities is that you can't have someone who is within the police service as the person that marginalized and vulnerable communities are dealing with.
One of the recommendations that we've made as a group is to create a civilian group composed of members of the community: members who have been impacted negatively by police interactions, individuals who have done research in this field, and individuals who are young black men who have faced discrimination and racial profiling by the police service. This council or entity would have a genuinely neutral and objective role and responsibility. It would allow the communities that are affected by these hate crimes to feel more comfortable in coming forward. These would be advocates, such as myself, individuals you can relate to and connect with, who you know will not use the information you've given them against you.
A lot of times members of our community feel as though there will be a reprisal. They feel as though the police service has access to their information. They know their address and their licence plate numbers and things like this, so there is a fear in going to police services. A council or a committee made up of members of the community—grassroots organizations, really—would be the stakeholders and would be the ones reaching out to communities to say, “I can be your advocate. I can be your liaison, and you can trust me.” That neutral party would be the hand that would hold the community and the police and relay that information.
What this also does is create trust between the community and the police, if they see that this neutral body is able to play that advocacy and balancing role. I think that would start to build some of the trust between the police and the community that's not currently there.
I'm going to go very quickly.
You're all doing amazing work. Thank you.
Thank you particularly, Mr. Galloway, for bringing a voice that we haven't heard very often.
Shalini, we're very proud to always have SALCO at the committee.
Mr. Maharaj, I will pick up on where you left off and just say that it is critical, not just this year, but any year, that parliamentarians exercise discipline and appropriate behaviour. I will say that I am troubled when we have senators of this Parliament question white supremacy and its presence. I'm also troubled by reports today that we have elected officials potentially making announcements about immigration policy in front of hotels that were the site of arson attacks in Toronto. I'll leave it at that.
I have a question for all four of you that relates to section 13 of the CHRA. It's a bit specific because I'm a bit of a specific lawyer and we like to get into the weeds a bit.
The specific aspects are that the old version of section 13 had an exemption for the telecommunication provider. Do you think that should remain, or do you want more accountability for the telecommunication provider and the social media platform?
Second, can we quell the free speech antipathy by simply having a rider in there, which may be superfluous, saying that nothing in this clause is meant to derogate from the constitutionally protected right to freedom of expression?
Third, do we need a definition of “hatred” incorporated into it? This was the suggestion by Irwin Cotler, a previous attorney general, in a private member's bill.
Fourth, should we have some sort of threshold for what constitutes the type of hatred that would trigger section 13 so that we don't get single instances but more of a mass-orchestrated attack?
If all four of you could opine on all or any parts of those, that would be terrific. Thank you.
You know, I already went first. You guys are going to make me do it again. Okay, no worries.
You asked a lot of different questions, and I wrote them all down.
In terms of the exemption, I don't think there should be one. I think social media platforms and telecommunication companies should be just as responsible as individuals. We're putting so much responsibility and accountability on individuals who put messaging online, but it should also be on those who should be monitoring that and reporting it and who should also be doing that data collection, because according to a lot of the information we heard today, we don't even know sometimes what constitutes hate. I think the telecommunication companies should be doing a lot of that monitoring and should not be provided an exemption.
In terms of the definition of “hatred”, I have a definition. I don't think everyone would have that same definition. The human part of me would say that if someone looks at me and says that because I'm a Muslim woman and I'm black, I'm inferior to them, that constitutes hate for me, just from being human and what I feel. But if we're going to put it into terms that everyone understands, I would say—and I wrote this down—that hatred is predicated on destruction. Hatred against identified groups, therefore, thrives on insensitivity, bigotry and the destruction of both the targeted groups and the values of our society. Hatred in this sense is the most dangerous emotion, one that contradicts reason, and one that, if exercised against members of these identified groups, implies that those individuals are despised, scorned, denied respect and made subjects of ill treatment.
That would be my definition. I think it captures the human side of it, but also, I want to say, the legislation piece, because words matter, and when we talk about hatred and about hate crimes, they always start with words.
I think defining hatred is key, and I thank you for asking that question.