What I will share today is informed by a number of different research projects, ranging from a study on the use of social media by anti-violence non-profits to investigations of gender-related programming practices in popular social media platforms and in mobile phone apps designed to prevent sexual violence.
One issue I've encountered relates to terminology. Many terms in this area, as you know, have histories, and this baggage enters the room when we use the term. For some, “violence against women” evokes the deep-seated racism, ableism, heterosexism, and cissexism that taint early iterations of the women's movement. For others, “gender-based violence” can be problematic because it has been employed by some as a way of neutralizing the differences between men's and women's experiences of sexual violence.
In my research with non-profits, I've heard that some organizations prefer to avoid umbrella terms altogether. Instead, they narrowly focus on what they are doing at that particular moment. It may be transmisogyny one day and consent the next. This approach is seen as more genuine and honest since it has the capacity to focus on the intersections arising out of a particular situation while resisting the impulse to include everything within one label, thus obscuring the specific ways in which power operates.
As we know, violence against young women and girls occurs in settings that blend off-line and online elements, but when we focus on technology as part of this mixture, it's important to ask questions about design, in addition to questions about how people are using technologies. Still, we have to be clear that technology itself is not a cause of the violence that people experience. That's what we would call “technological determinism”, whereby technology is taken out of a social context, seemingly appearing out of thin air, and blamed for society's ills. At the same time, it's possible to focus on technological development and design since these processes aren't simply technical but are social too.
My research interests centre around questions of design and begin with the premise that technology is not neutral. I explore values and norms that become embedded in technology by designers, programmers, stakeholders, and other actors in processes of technological development.
I think particularly interesting and important for the committee's study are the ways in which technological design is a social and political act that has recursive consequences for society; that is, design decisions can, often inadvertently, solidify social relations. For example, of the 215 mobile phone applications designed to prevent sexual violence that my colleague Amy Hasinoff and I examined, the vast majority reinforce prevalent rape myths by placing the responsibility for preventing sexual violence on the victim. Only four of those 215 apps target perpetrators, and there is an assumption that strangers are the most likely perpetrators.
Since technological design and development processes are never just technical or social, they're a viable target for policy intervention. There are a number of issues here to discuss.
First, software has many layers. Some are more visible to us as users. Think of Facebook and its blue-and-white interface. Then there are others, such as the database where Facebook collects information about each user. I have argued that software has the capacity to conceal the ways in which it enacts violence. Think about the changes to Facebook's user interface in 2014. Suddenly, people were able to identify beyond the traditional categories of “men” and “women”. They could be two-spirit, genderqueer, gender questioning, etc.
In my study, I discovered there was a difference between the progressive moves that the company made on the surface of the software, moves that worked towards dismantling oppressive conceptions of gender as binary—that there are only men and women in the world—versus the decisions they made in deeper layers of the software, layers inaccessible to most of us. To accommodate this modification they made on the surface, programmers developed a way for the software to translate these non-binary genders into a binary categorization system by focusing only on the pronoun that a user selects.
We know that people with non-binary genders experience disproportionate levels of discrimination and violence. A 2014 study from the Canadian Labour Congress, cited by the ongoing federal strategy on gender-based violence, notes that rates of intimate partner violence for transgender participants are almost twice as high as those for women and men, with a lifetime prevalence rate of 64.9%. We also know, from the U.S. context, that transgender women of colour are targets of violence at even higher rates than their white counterparts, making up most of the murders committed against transgender people.
While the act of misgendering someone is often experienced as violence in and of itself, it's also symptomatic of the broader social systems that contribute to transphobia. What I'd like us all to consider, then, is the ways in which programming practices can be violent by reproducing and calcifying dominant regimes of gender control. Concealing this violence, by, for instance, storing gender as “female” in the database for someone who has indicated on the surface that they are genderqueer but happen to prefer the pronoun “she”, is a cause for concern, particularly when that gendered information does not simply remain in the database but is accessed by other sets of users like advertisers and marketers. So while social pressure may have led to the surface, superficial modification, it was a corporate logic that motivated Facebook to design their software in a way that misgenders users.
We're also witnessing mergers between different social media platforms, such as when Facebook picked up Instagram. This has led to an exchange of data between different platforms, so one platform doesn't even have to collect identifiers any more if it can access them from another platform. Digital delegation means being asked to sign up for Instagram through Facebook, and your Facebook information is used to do that. With my colleague Oliver Haimson, I have examined popular social media platforms to determine both how gender has been programmed into user interfaces and how gender has been programmed into spaces designed for advertisers, the advertising portals. We argue that social media platforms have become intermediaries in a bigger ecosystem that includes advertising and web analytics companies.
As a result, though, social media platforms get entrusted with a lot of control over how gender and other identifiers are categorized, and these design decisions are shaping how the public and the advertising industry understand identity. These systems they are building are like another layer of society that could promote progressive social change but instead is reifying inequalities.
I want to try to translate this into two quick points. First, the technology sector is well known for its lack of diversity, and that impacts who is making things and who designers think the user is. It's not only about adding women to the sector and stirring. Funding education that targets engineering and other related disciplines, and that is informed by feminist, queer, race, and even disability studies lenses, is needed to open up the design process. Finally, incentives for the technology sector to support social change objectives in their design and ongoing development of technologies could also be helpful.
Thank you very much for the invitation. It's a privilege to be here, and I'm delighted that you're undertaking this study. I'm really curious to see what comes out of it and quite encouraged by the process itself.
For the past 20 years, a large part of my research agenda has been looking at how kids use network technologies, how they experience them, and what their perspectives about those uses and experiences are. It's really grounded in my belief that good policy should be founded on a solid understanding of those lived experiences, because I think the policies we're trying to enact are designed to provide young people with the support they need to successfully navigate the network world.
When I was thinking of what I could contribute in my 10 minutes before we get to questions, three things came to mind, and I think these are three things that the girls and young women whom I've spoken to over the last 20 years would want you to know, or would want you to take into consideration.
The first one is that surveillance isn't a solution to cyber-violence or cyber-harassment; in fact, surveillance makes things worse for them and makes it harder for them to navigate this online world. Unfortunately, if you look back at how we've responded to a lot of these policy questions, surveillance has been a standard response.
My research partner Jane Bailey and I, a number of years ago, started a review of all of the interventions before Parliament whenever kids and technology were mentioned. So starting right back from the information superhighway forward—if any of you are old enough to remember as I do—we started with this really strong narrative that kids are savvy, natural technology users, and that they're innovators and they're going to create wealth.
The lesson we draw from that is not to regulate the technology, because that will shut down innovation. But at the same time as we were advancing through this policy arc over the past 20 years or so, we started to talk about kids as being “at risk”. Kids were at risk of seeing offensive content; they could see pornography online. The solution was to put them under surveillance to make sure they wouldn't.
Then we talked about kids being at risk because they're naive. They get out into these technological platforms, and they don't really understand the bad things that can happen to them. The solution was to put them under surveillance.
Lastly, especially once we started talking about behaviours like sexting, we started to talk about kids being at risk because kids are badly behaved, so we have to put them under surveillance because we need to protect them from their own choices.
Now, from the kids' points of view this just doesn't work. From their point of view, the main problem with surveillance is that the lesson of surveillance is that adults don't trust them. They don't trust them to use their judgment; they don't trust them to make mistakes and learn from them. What they glean from this is that they can't trust adults. We've rolled out surveillance through schools and through public libraries. We're encouraging parents all the time to make sure they have their kid's Facebook password and rifle through their accounts. All of these strategies, which were designed I think in a well-intentioned way to help children, have backfired, because they have eroded the relationships of trust that are at the heart of our being able to help kids confront cyber-harassment and cyber-misogyny when they occur. I have all sorts of research findings to support this, stories of kids saying “just when this terrible thing happened to me, I couldn't go to my teacher, because then I knew the cops would be called in, and I can't trust adults not to go crazy, because they don't understand my life.”
I think that's a really important lesson. Surveillance isn't a solution. Surveillance really complicates things and makes it harder for girls and young women to cope with cyber-harassment and misogyny.
I think the second thing that they would like to say, and this really resonates with Rena's comments about design, is that the problem isn't them; the problem is the environment, and we adults are the ones who are responsible for the design of that environment.
Kids, for example, often complain that adults force them to use network technologies, and they really resent it. So, again, if you think about how we often talk about kids, we say they're natural; they're savvy; they love technology; they're online all the time. Doing research over the last 20 years with kids all across the country, we have heard very different stories. We've heard that technology actually often causes them a lot of problems.
For example, I was talking to a group of youth in Toronto just this past weekend at the CCLA, and the first question they asked was, “How can we tell our school to stop forcing Microsoft tablets on us? Now I have to do all of my science work in class on this darn tablet, and I don't like it.” They felt it was a bad way to learn. They're actually right. All sorts of research indicates that computing technology actually reduces learning outcomes, but what they were worried about was that the commercial design of that technology made disclosure the default. As soon as they used it, they had no control over the information they inputted into that tool.
They knew that this information then made them more visible to their peers and to their teachers in ways they were uncomfortable with. It's the lack of privacy they experience in network spaces that makes it harder for them to navigate through all of the cyber-misogyny and the harassment that exist in those spaces, and it actually sets them up for conflict with peers.
They also find that the lack of privacy built into the environment means that they are held to account for every mistake they make. It's harder for them to figure out what is and what isn't acceptable behaviour. It tends to magnify bad behaviours and silence good behaviours in really strange ways. That's the second thing. The problem is the environment. Look at the design.
I think the third thing they would want to say is that if you're going to take these concerns seriously, move away from surveillance as a knee-jerk response and critically analyze the environment. Then start examining the commercial agenda behind the technology and think about how that commercial agenda plays into and magnifies stereotyping, cyber-harassment, and cyber-violence.
When I sit down with kids, they bring up misogynist trolling. Slut shaming is a huge part of the problems they face online, along with threats of rape and other kinds of sexualized violence. When I ask where they think that's coming from, they very readily point the finger at mediatization. They say the online environment that they learn and play in, that they connect with their grandmother in, is wallpapered with gender stereotypes through ads, videos, and audio files that are everywhere. They know that's part of a commercial model where information about everything they do online is constantly collected and fed back into those images, which intensifies the effect of those stereotypes.
Certainly the visual nature of the environment or the media makes it much harder for girls to resist those stereotypes. We live in an age of cheat days, where five days of the week you're supposed to not eat, and then two days of the week you're allowed to have meals, which is one of the things that is coming up in public schools among girls. The girls we've talked to tell us they try to conform, at least to some extent, to these very narrow stereotypical ways of performing gender. If they don't, they are subjected to incredibly harsh judgment from their peers, and that grows into conflict, which grows into harassment and threats.
When they find that it gets to the point where they need someone to help them and they go to adults, they are judged by the adults because they've broken the rules about disclosure: “Well, you shouldn't have posted that picture. What were you doing talking to your friend about that and using that language on the Internet?” Their argument is that the whole environment is designed to make them do that. All of the incentives in that environment are for them to disclose information, to portray a certain kind of femininity, to perform according to a particular kind of identity as a girl, whether they're a learner or hanging out with friends, or just trying to find out what the adult world is like.
Given Rena's comments about the importance of layers and how that database level is so key, and how software can conceal how we as a society enact violence, I think this problem is only going to be magnified by big data algorithms that sort kids into categories for commercial purposes. We already know that those algorithms intensify inequalities. They hide these biases and sources of inequality in the algorithm, and once they're there, it's very hard to hold anybody to account.
If we look at these three things that I think girls and young women would want me to say on their behalf, I think part of the solution has to be taking responsibility for creating public spaces that are not commercialized, places where kids can gather for social interaction, for learning, and for exploring the world.
Ironically, I think before we passed the Personal Information Protection and Electronic Documents Act, the federal government actually demonstrated a lot of leadership in this regard, through initiatives like SchoolNet and public access points for rural and impoverished populations. These initiatives were equality-driven and value-driven, and they were designed to promote a healthy networked public sphere. Once PIPEDA was passed, all of that funding was pulled.
I think as you listen to all of this different information and talk to different intervenors, I would urge you to keep in mind that the role of government is to create conditions that provide equal access to free speech and to support a public sphere where community norms are both articulated and respected in ways such that we hold each other to account for violence and discrimination.
Welcome back, everyone. I hope you have a good session.
My thanks to the witnesses for their presentations.
I would first like to turn to Ms. Bivens.
Your online comments indicate that monitoring practices for misogyny on social media are among your interests and current projects. We hear a great deal about female social media professionals or users who become the targets of misogynist behaviours and receive some of the most hateful comments they have ever read or heard. Those behaviours are seen particularly on social media.
Just think of the recent Gamergate controversy. Many women have channels on Twitch or YouTube and they are all trying to participate in healthy social discussions online or they simply want to do something they love. Could you address this trend in particular? In fact, there is a strong link, though not exclusive, with women who try to break into fields, professions or recreational activities that are men's turf right now, such as the jobs of sports commentators and analysts, the video game industry or online gaming.
Do you think this trend has something to do with the current vitriol on social media when people try to overcome social and cultural obstacles?
My question has another part. Is this cyberviolence really different from other forms of harassment and intimidation, or is this simply a new medium that enables people to continue to perpetrate these crimes relatively easily under the cover of a degree of anonymity?
That's my first question. It is long.
I'll preface this by picking up on one of the earlier questions. A lot of this is age-driven. If you look at human development, kids up to about 11 and 12 tend to form their sense of identity through their relationship with their family. Once they hit 12 or 13, things start to shift a bit, generally speaking. The usual path is that we're then trying to break away from the family, get out into the world, explore different identities, and find out who we want to be as an adult. It's fraught with difficulty and lots of mistakes.
To a certain extent, that's also performative. One of the reasons you see so many 13- to 22-year-old kids and young adults hyper-performing is that they are developmentally predisposed to try on different identities, get them out there, see what the reaction is, and then retreat into a private space to figure out whether that works for them or not.
I think the thing you've raised is that when you do this in a commercialized surveillance space, then certain kinds of identities are privileged—hypersexualized identities, for example. With the eGirls data, and similarly with the work we've been doing on the eQuality Project as well, kids tell us that instead of finding a whole range of ways of being a girl in network spaces, there's just this very narrow hypersexualized identity that's available to them, and performing it is almost protective—i.e., “I have to have a friend on my friends list who does it, or I have to do a little bit of it, because if I don't, I'm trolled.” Then they have to deal with all this incredible negativity.
I think it's interesting to see how the technology does interface with these very old stereotypical concerns around gender and problems of equality. Especially with the eGirls data, girls would tell us things like, “You know, when I'm at school, I don't feel pressure to have the makeup on and do the hair and all this type of thing, but I have friends who went online, just took pictures of the way they normally look, and got attacked immediately.” They were told they were fat and they were told they were ugly.
It's very heterosexist; it's very normative; it's very gendered, and it's very misogynistic. When they're online, they're very careful about performing in a particular way.
As well, our data actually is drawn from a really diverse group of girls. I agree with Rena on everything she said about intersectionality. It's really important to understand how race plays out with gender and how socio-economic status plays out with gender, yet all of our diverse participants indicated that they had to negotiate with this. To go back to my opening comments, they point the finger at the media stereotypes that are embedded everywhere. It's easier for them to push against the stereotype in the real world. Once you're online, it's really hard.
I'm going to go back to the data. What we hear typically when kids raise these kinds of issues is that they hate the term cyber-bullying. They felt that the term cyber-bullying has really done them a disservice. What they say is, “Call it what it is; it's violence. Call it what it is; it's misogyny and racism.” There's a range of responses, and their concern is that we tend to use a police response all across that range of behaviours.
I'm going to give you a very quick example. Two young women, 13 years of age, are best friends in Toronto. One goes on vacation for March break; one doesn't. They're back at school and they're texting each other and one of them says on a social media platform, “Ha, ha, I'm darker than you”, and they're both sent to the principal's office and accused of racist bullying because they're both Jamaican-Canadian and both happen to be black. They look at that and they say, “That is not cyber-bullying; that's stupid. That was my best friend who got a tan.” Often the school response is tied into bringing in the police officer who works at the school, blah, blah, blah.
They feel there's this whole range of behaviours, with uncivil discourse in the middle, and then serious risks or threats of violence and rape. On the one end they feel we overuse the criminal response, and on the other end they feel we underuse the criminal response. I teach criminal law and I still can't figure out why the police don't think that a rape threat is criminal harassment, because it sure looks like it to me.
It reminds me of years ago when we were trying to respond to domestic violence differently. One of the things we did was to work with police officers and say, “No, actually, you have to respond to that. Nobody gets a free bye with that.” I don't think we've used the tools we have in place very well, and I think we would make progress if we created initiatives that helped us talk with police in particular about how criminal harassment and uttering threats apply to the kinds of trolling comments we see in cyberspace.
We have years of experience through counselling and teaching, and all these professionals in place who actually have social relationships with these kids. We have parents who are seeking help for kids who are having mental health issues as well. It seems to me that's where your solution is going to be. Again, a technological fix is often very awkward and interferes with those relationships.
I was talking to Rena before we started about Safer Schools Together, a new Canadian company. About 126 school boards across the country have bought services from this company. They give the company the name of every kid in the school, and then a robot program goes out and grabs anything that child has posted on the Internet and uses algorithms to find out whether or not they're at risk in terms of mental health.
One of the things they're using in England to determine this is whether they've posted emo rock lyrics on social media, which every 13-year-old does at least 12 times a day. It generates a report for the principal and the police. That can't replace those rich experiences and relationships that you describe. Those are where the solutions are.
Typically, when you look at kids at risk for any form of violence, there have been multiple reports to CAS, and there have been multiple attempts to intervene. We're not failing these children because we don't know who they are; we're failing them because we don't have enough money in mental health support for kids. We're failing them because we don't take any of their concerns seriously. We throw them out there on the Internet and expect them to navigate this commercialized space all on their own. It seems to me that the technology is irrelevant to that. It's the relationships that matter.
Often people ask, “What can parents do? What is the most protective thing I can do? You don't want me to spy on my kid, but what should I do? I'm terrified.” Many parents are. Have dinner with your kids. That is the single most important protective factor, having dinner with your family at night, not in the car on the way to soccer, but actually sitting down.
We need to get off the technological wagon and remember that we have all sorts of experience in dealing with these kinds of problems. What we really need to talk about is where we're putting our resources: into building those technologies so we can innovate and create wealth, or into providing mental health services for kids so they can grow and thrive.
Thank you very much for the introduction, and good afternoon, everyone.
Thank you so much to the committee for this opportunity to speak with you this afternoon. It is truly an honour to be here in the beautiful Algonquin territory. We have travelled from Coast Salish territory, where it's just as sunny as here, and we're so glad to be joining you today.
We're really glad to be having this conversation around violence against young women and girls, and talking about cyber-violence against women. We are very interested in this conversation and this work, in part because as an organization we have committed to the work of ending violence against women.
I took a moment to revisit the Royal Commission on the Status of Women of Canada, and to think of where we are in 2016, and think about where cyber-violence against women fits into the effort toward addressing women's equality in Canada.
In thinking about cyber-violence against women and girls, we of course want to recognize and understand its relationship to violence against women generally, how it is ubiquitous, an epidemic, endemic, and enshrined, we think, in the very making of Canada as a nation. In unravelling the matter of cyber-violence against women and its implications and manifestations for young women and girls, we then think about it as one of multiple threads that are woven together and that speak, in a very real way, to the extent to which girls and women can have equality in Canada.
As we heard earlier today, but also in other sessions that the committee has had, we are talking about the Internet. I like to think about the Internet as yet another environment, a new frontier if you will, in which we are certainly experiencing tremendous opportunities for awareness-raising, for connection, for information, for engagement, for community, and for expression. It is also a place where certain problematic aspects of human behaviour are flourishing. It's a challenge for us when we are thinking about how to address cyber-violence against women, recognizing that we are still in a big way wanting to address violence against women in the broader sense. It's always a caution to separate out this thread without looking at the context, and to hold that context.
We have spent some time, certainly at Battered Women's Support Services, looking at media literacy and recognizing its role in terms of advertising, print, and news, and we want to support young people in having some critical analysis. Through that media literacy work, we ended up speaking with many young women who wanted to talk about their experiences of cyber-violence. We ended up doing some research with women who were accessing our services around cyber-violence against women: the ways in which they were experiencing violence online, and also the ways abusive partners were using the online environment to perpetrate more harassment and to inhibit their sense of themselves.
In Vancouver, unfortunately, we've had a rash of random sexual assaults by strangers. It has created this level of fear in women throughout the city, and it gives us some very good information about how violence against women, by its very nature, subjugates women as a gender and creates the sense of not being safe in the public environment. That is certainly a piece that we cannot discount in terms of the online environment. When we are seeking to address violence against women, a critical component is recognizing that this is an environment in which these behaviours are flourishing.
There is always an effort, of course, to look at the rule of law and law enforcement when we're talking about these kinds of behaviours. We like to think more broadly in terms of addressing some of these problems, and we don't think we should be focusing all of our efforts on the law. We should be very careful about how much we put on the law, and instead look for community-based responses.
We have some very important and, I think, promising practices that are looking at how to support young people regarding how to navigate this environment, how to bring an element of respect to relationships, how to provide support for survivors, and also how to teach boys and men their responsibility to moderate not only their own behaviour but the behaviour of their peers.
I'd like to turn it over now to my colleague Rona Amiri, who will talk about some of that work.
The three areas we think are the most promising in terms of practice are our core training for men to end violence, our community engagement, and our programs for youth to end violence. Our men ending violence core training is designed specifically to provide men with sufficient knowledge and analysis around gender violence so that they're able to be positive male leaders within the community.
We also critique well-known men who are doing this work. It's important to make sure they're staying on track and they are getting evaluations from women's organizations and women who are doing this work.
Through our community engagement program, basically we engage different communities like the Downtown Eastside in Vancouver and different first nations communities. This is kind of a long-term engagement with a process that includes training, raising awareness, prevention, and intervention.
Lastly, I'd just like to speak to our youth ending violence program, because that is the program I coordinate. Youth ending violence is a peer-led violence prevention program: youth facilitate workshops for youth on dating violence, gender violence, and cyber-violence. Activities are hands-on. We do group work, and participants spend a lot of time learning definitions. This is really important because, as I think was mentioned earlier, the term “bullying” is often used for gender violence, so it's important to bring that gendered analysis.
When I go into schools, I speak to teachers, and we know that the dating violence programs available right now are very gender-neutral. Teachers thank us when we come in, because we bring experts who can speak to the topic, so it's not solely their responsibility. And we're looking at gender, which is very important, of course.
I've also had times in workshops when young women have come up to us at the end and said they were experiencing cyber-violence and didn't know that it wasn't okay, or that they could talk to somebody about what was going on. Following that, we were able to provide them with services or connect them with Battered Women's Support Services, since we have a lot of front-line services, as well as connect them with teachers or counsellors in their school, so that they knew this is an issue and there are things they can do to stop or prevent it.
That's been my experience.
Good afternoon, Madam Chair, honourable committee members, and my brilliant colleagues from the Battered Women's Support Services.
Thank you so much for this invitation to address the Standing Committee on the Status of Women and to discuss an issue that's both deeply personal and professionally concerning, that is, cyber-violence against women and girls.
I remember quite clearly the shift to online and social media-based communication and the rise of the Internet. When I was in sixth grade, ICQ and MSN Messenger became the norm for communicating with friends and peers. This opened up a whole new world of access, but it also became a platform to share rumours, gossip, and hateful comments with a very large audience.
When I was in grade 10, LiveJournal rose in popularity. This platform allowed for increased expression through online journaling and blogging and offered a place to connect with people with similar interests across the globe, but it also opened the door to public bullying, increased judgment, and intimidation. In the first year of my undergraduate degree, Facebook was launched. Facebook offered a space to connect with peers, share photos, and keep in touch with friends in different places around the world, but it has also continued to bring breaches of privacy and a failure to take reports of harassment and violence seriously.
The Internet and social media present a very complicated landscape for young people to navigate. While advances in technology offer extended opportunities to engage with the world, a whole new realm of tools to perpetuate and cover up violence is at the fingertips of every single one of their users.
Cyber-violence and cyber-misogyny are pervasive issues in the technologically advanced culture we live in, but to be quite clear, the patriarchal surveillance of women and girls took place long before the Internet and social media made it easier. Not only do women, trans people, and other marginalized genders live in fear in their homes, workplaces, public spaces, schools, and the institutions meant to protect them, educate them, heal them, and deliver justice; now they, and we, live in fear in cyberspace too.
Cyberspace is increasingly where people work, shop, connect with each other, play, and learn, and violence and oppression can and do happen there quite often. Much of the violence that happens online is sexualized and rooted in misogynistic gender norms, racism, ableism, homophobia, transphobia, classism, and colonial violence. Not surprisingly, cyber-violence is often directed at and experienced specifically within the spaces these populations have created to speak out against violence and oppression, share their experiences, and advocate for social justice.
My understanding of cyber-violence and cyber-misogyny comes from my work as youth programs coordinator at YWCA Halifax and my involvement with YWCA Canada's Project Shift advisory team. Through this role, I manage Safety NET, a provincial strategy to address cyber-violence against young women and girls. We spoke to over 200 young people and 20 service providers across the province to learn directly from them what violence looks like when it happens online, how we can better support survivors of online violence, and how we can contribute to lasting systemic change.
In the aftermath of Saint Mary's University's rape chants going viral, the Dalhousie school of dentistry's “Gentlemen's Club”, and the assault and subsequent death of Rehtaeh Parsons, cyber-violence is a particularly pressing issue to address in our region.
Although cyber-violence, particularly against women and girls, is a pervasive problem, it is not well understood by the general public, service providers, or policy-makers. I'm so pleased to share what we have learned from our Safety NET project, along with promising practices, identified primarily by youth, that can help prevent and address online gender-based violence.
I will preface this by saying that radical ideas lead to radical change. To truly address online violence and all forms of gender-based violence, we need to work towards cultural shifts that will fundamentally change the way that we see and the value that is placed on women, trans people, and other marginalized genders.
We need a sustained and long-term investment and true engagement from all stakeholders, including a willingness to change systems that aren't working.
I feel so hopeful that we are on the right track with the federal strategy to address gender-based violence that was launched this summer, and through this committee's study on violence in the lives of women and girls.
Four key recommendations came through the Safety NET needs assessment:
The number one thing that was identified in the province was the need for youth-led cyber-violence education and community programming. This means truly valuing the experiences and perspectives of youth, and young women specifically, and centring these voices in community-based grassroots programming, as well as talking explicitly about the systemic issues that drive cyber-violence.
In my opinion, much of cyber-violence education is failing specifically because it does not do these things. Young people need the space to discuss and learn among themselves, and teach each other about staying safe online while still actively engaging in the culture and all it has to offer. Public education, awareness, and research about what cyber-violence is specifically, its prevalence, its impacts, and its consequences were also identified as key needs.
Both youth and community partners spoke of the need to work with key stakeholders, especially in justice and education, to develop trauma-informed systems of response for survivors of cyber-violence. In particular, victim-blaming responses and reactions that advocate simply disengaging from technology and social media should be avoided, because they cause so much harm.
Lastly, governments and community organizations should work with social media platforms and media outlets to develop guidelines and protocols that offer better protection for users. Sustained advocacy that builds buy-in from these companies is a necessary component of creating safer online communities.
Again, many thanks for the invitation to engage in this conversation with you about cyber-violence. I look forward to our discussion, and I very much appreciate that online violence is being recognized in such a formal way as an inhibitor to equity for women and girls.
I will end my comments with the sentiment that while the Internet may be an instrument used to maintain and facilitate oppressive violence, it is also a tool that can help us fight against it and advocate for a safer and more empowering world for women and girls in all of their intersecting identities.