Dave MacKenzie (CPC, ON)
2019-06-04 16:16
When we have that definition and somebody puts something on YouTube that may come from a movie or a television show, it seems to me as though, at times, those topics would be part of the broadcast. If those show up, what happens?
Colin McKay
2019-06-04 16:16
If those show up and they are flagged for review—a user flags them or they're spotted by our systems—we have a team of 10,000 who review videos that have been flagged to see if they violate our policies.
If the context is that something is obviously a clip from a movie or a piece of fiction, or it's a presentation of an issue in a particular way, we have to carefully weigh whether or not this will be recognized by our users as a reflection of cultural or news content, as opposed to something that's explicitly designed to promote and incite hatred.
Dave MacKenzie (CPC, ON)
2019-06-04 16:17
A couple of weeks ago a group of youngsters attacked and beat a woman in a park. I believe only one was 13; I think the rest of them were young. It showed up on the news. Would that end up in a YouTube video?
Colin McKay
2019-06-04 16:17
Speaking generally and not to that specific instance, if that video were uploaded to YouTube, it would violate our policies and would be taken down. If they tried to upload it again, we would have created a digital fingerprint to allow us to automatically pull it down.
The context of how a video like that is shown in news is a very difficult one. It's especially relevant not just to personal attacks, but also to terrorist attacks. In some ways, we end up having to evaluate what a news organization has determined is acceptable content. In reviewing it, we have to be very careful that it's clear to the viewer that this is part of a commentary either by a news organization or another organization that places that information in context.
Depending on the length and type of the video, it may still come down.
Christine Moore (NDP, QC)
How quickly are you able to remove a video that has already been removed once and then been modified, for example, by speeding up the audio? People do that; I've often seen it with my daughter. People upload Paw Patrol episodes played a little faster so that the system does not recognize them, and they are able to publish their video.
In terms of hate videos, are you able to quickly remove a video that has already been removed once and has been modified just to avoid those controls?
Colin McKay
2019-06-04 16:33
Yes, we are.
I recognize the example you described. I've seen that as well. That is one of the challenges, especially immediately after a crisis. We're seeing content being uploaded and they are playing with it a little bit to try to confuse our systems.
What we do, particularly in the case of hate content and violent content online, is to tighten the standards within which we identify videos so that we're taking them down even more quickly.
Even in the context of Paw Patrol, I think your daughter will likely find that if she goes back to the same channel two weeks later, they may not have the Paw Patrol content because it will have been recognized and taken down.
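The evasion-and-catch-up dynamic Mr. McKay describes can be illustrated with a toy sketch. The code below is an illustrative assumption, not YouTube's actual system: it shows why an exact cryptographic hash misses even a lightly modified re-upload, while a simple perceptual "average hash" of a frame still matches closely (small Hamming distance).

```python
# Toy illustration (not YouTube's actual system) of why exact hashes fail on
# lightly modified re-uploads, while a perceptual hash still matches.
import hashlib
import numpy as np

def average_hash(frame: np.ndarray, size: int = 8) -> int:
    """Downsample a grayscale frame to size x size blocks, then set one bit
    per block depending on whether it is brighter than the frame's mean."""
    h, w = frame.shape
    small = frame[:h - h % size, :w - w % size]
    small = small.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    bits = (small > small.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

rng = np.random.default_rng(0)
original = rng.uniform(0, 255, size=(64, 64))
tweaked = np.clip(original * 1.05 + 2, 0, 255)  # slight brightness tweak

# An exact hash changes completely on any byte-level modification...
assert hashlib.sha256(original.tobytes()).hexdigest() != \
       hashlib.sha256(tweaked.tobytes()).hexdigest()

# ...but the perceptual hashes remain close (few differing bits).
dist = hamming(average_hash(original), average_hash(tweaked))
print(dist)
```

The same idea extends to audio and to sped-up video: a robust fingerprint is computed over coarse perceptual features rather than raw bytes, so small manipulations leave it nearly unchanged.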
Christine Moore (NDP, QC)
Okay.
I would like to know a little bit more about the process of reviewing flagged videos, and who reviews them when it's not done by a computer.
Also, are the workers reviewing these videos provided with any services, because having to listen to these kinds of things all the time causes a lot of distress to people? What services are you providing to these workers to make sure they do not go crazy from listening to all of these things all the time?
Colin McKay
2019-06-04 16:34
To begin with the process itself, as I mentioned, especially in the context of hate content, we are dealing with such a quantity that we rely on our machine learning and image classifiers to recognize content. If the content has been recognized before and we have a digital hash of it, we automatically take it down. If it needs to be reviewed, it is sent to this team of reviewers.
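The triage flow described here can be sketched as follows. This is a minimal illustration under stated assumptions, not Google's implementation: known-hash matches are removed automatically, flagged content is queued for human reviewers, and a violation found in review adds a fingerprint so re-uploads are caught automatically.

```python
# Hedged sketch of the moderation triage described in the testimony:
# hash match -> automatic takedown; flagged -> human review; review
# violation -> fingerprint stored for future automatic takedowns.
from dataclasses import dataclass, field

@dataclass
class ModerationPipeline:
    known_hashes: set = field(default_factory=set)
    review_queue: dict = field(default_factory=dict)  # video_id -> hash

    def handle_upload(self, video_id: str, digital_hash: str,
                      flagged: bool) -> str:
        if digital_hash in self.known_hashes:
            return "removed_automatically"
        if flagged:
            self.review_queue[video_id] = digital_hash
            return "queued_for_human_review"
        return "published"

    def human_review(self, video_id: str, violates_policy: bool) -> str:
        digital_hash = self.review_queue.pop(video_id)
        if violates_policy:
            # Fingerprint it so re-uploads are caught automatically.
            self.known_hashes.add(digital_hash)
            return "removed_after_review"
        return "kept"

pipeline = ModerationPipeline(known_hashes={"hash_of_known_bad_clip"})
print(pipeline.handle_upload("v1", "hash_of_known_bad_clip", flagged=False))
print(pipeline.handle_upload("v2", "new_hash", flagged=True))
print(pipeline.human_review("v2", violates_policy=True))
print(pipeline.handle_upload("v3", "new_hash", flagged=False))  # re-upload
```

The names and return values here are hypothetical; the point is the ordering of checks: cheap automatic matching first, expensive human judgment only for the residue.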
They are intensely trained. They are provided with local support, as well as support from our global teams, to make sure they are able to deal with the content they're looking at and have the supports they need. That is so that, as they look at what can be horrific content day after day, they are in a work and social environment where they don't face the same sorts of pressures you're describing. We are very conscious that they have a very difficult job, not just because they're trying to balance rights against freedom of expression and against what society expects to find online, but also because they have the difficult job of reviewing material that others do not want to review.
For us, whether they're based in one office or another around the world, we are focused on giving them training and support so they can do their job effectively and have work-life balance.
Iqra Khalid (Lib., ON)
How long does it take you to remove something once it's reported or flagged to you? What's the specific timeline?
Colin McKay
2019-06-04 16:37
It varies, depending on the context and the severity of the material.
We've already had examples in our conversation today about whether or not it's commentary or it's news reporting, or it's actual video of a violent attack. In the context of the Christchurch attack, we found that there were so many people uploading the videos so quickly that we had to accelerate our artificial intelligence review of the videos and make on-the-fly decisions about taking down video, based on its being substantially similar to previous uploads.
In that process, the manual review was shortened extremely because we were facing a quantity.... In a case where there's broader context to be considered, there's still a commitment to review it quickly, but we do need a process of deliberation.
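The crisis-mode trade-off described here, where "substantially similar" uploads are removed on the fly without full review, can be sketched as a threshold adjustment. This is an assumed design for illustration, not YouTube's actual system; the similarity scores and cutoffs are hypothetical.

```python
# Hedged sketch of crisis-mode moderation: normally, near-matches against
# previously removed uploads go to manual review; during a crisis, the
# automatic-removal threshold is loosened so substantially similar uploads
# come down immediately. Thresholds here are illustrative assumptions.

def decide(similarity: float, crisis_mode: bool) -> str:
    """similarity: a 0.0-1.0 score against previously removed uploads."""
    auto_threshold = 0.80 if crisis_mode else 0.95
    if similarity >= auto_threshold:
        return "remove_automatically"
    if similarity >= 0.50:
        return "manual_review"
    return "allow"

print(decide(0.85, crisis_mode=False))  # manual_review
print(decide(0.85, crisis_mode=True))   # remove_automatically
```

The design choice is the one Mr. McKay describes: accepting more false positives during an event like Christchurch in exchange for speed, then restoring the stricter default afterward.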
Damian Collins
2019-05-28 11:07
Thank you, Mr. Chairman.
I'm going to direct my first question to the Facebook representatives. I'm sure you're aware that one of the principal concerns of members of this committee has been that deceptive information, deliberately and maliciously spread through the tools created by social media companies, is a harm to democracy, and that this disinformation is used to undermine senior politicians and public figures, public institutions and the political process.
With that in mind, could Facebook explain why it has decided not to remove the video of Nancy Pelosi that presents a distorted impression of her to undermine her public reputation? The reason I think this is so important is that we're all aware that new technology is going to make the creation of these sorts of fake or manipulated films much easier. Perhaps you could explain why Facebook is not going to take this film down.
Neil Potts
2019-05-28 11:08
Thank you, Mr. Collins.
I'm happy to explain our approach to misinformation a bit more clearly for this committee.
First, I want to be clear that we are taking action against that video—
Damian Collins
2019-05-28 11:08
I'm sorry, Mr. Potts, we haven't got much time. I'd like you to answer the question you've been asked, not give a statement about Facebook's policies on misinformation or what else you might have done. I want you to answer the question as to why you, unlike YouTube, are not taking this film down?
Damian Collins
2019-05-28 11:08
I know you're down ranking it. Why aren't you taking the film down?
Neil Potts
2019-05-28 11:08
It is our policy to inform people when we have information on the platform that may be false, so they can make their own decisions about that content.
Damian Collins
2019-05-28 11:08
But this is content that I think is widely accepted as being fake. YouTube has taken it down. The fact-checkers that work with Facebook are saying it's fake, yet the video is allowed to remain and that video being there is far more powerful than any legal disclaimer that may be written under or over it.
Why won't you say that films that are clearly fake and are independently verified as being fake, that are there to deceive people about some of the most senior politicians in your country, will be taken down?
Neil Potts
2019-05-28 11:09
We are conducting research on our inform treatments. That is the treatment that shows that something is fake. For example, if someone wanted to share this video with their friends or if they have already shared it or when they see it in a newsfeed, they receive a message that says it's false.
Damian Collins
2019-05-28 11:09
Facebook accepts that this film is a distortion, doesn't it?
Kevin Chan
2019-05-28 11:09
Neil, you're closer to this, but my understanding is that the video in question has been slowed down. Is that what this is?
Damian Collins
2019-05-28 11:09
It's manipulated film to create the distorted impression that Nancy Pelosi was somehow impaired when she was speaking. That's what has happened and that's why YouTube has taken the film down and that's why there has been a general recognition, including by independent fact-checkers who work for Facebook, that this film is distorted and creates a distorted impression of the third most senior politician in America.
Neil Potts
2019-05-28 11:10
As you mentioned the fact-checkers, we work with over 50 fact-checkers internationally that are—
Damian Collins
2019-05-28 11:10
This is not in question. The fact-checkers recognize it's fake. You're saying it can stay there. Do you not see that what Facebook is doing is giving a green light to anyone in the world who wants to make a distorted or fake film about a senior politician, or maybe in the future use deepfake technologies to do it, and know that whatever happens Facebook won't remove the film?
Neil Potts
2019-05-28 11:10
I think you're asking a philosophical question, sir. Should we remove or should we inform people that it is fake? We have taken the approach to inform people that it's fake, so they can understand why that video is on the platform and what other independent parties have considered this to be. They have considered it to be false and now they see this on our platform if they go to share it. All these questions, I think are very thorough questions, but it allows people to make their own decision and it allows them to tell others it is false.
You mentioned that the video is slowed down, which by all accounts, and according to the fact-checkers, it is, but I think there are many different cases where videos are slowed down, and that alone would perhaps not warrant removal.
Damian Collins
2019-05-28 11:11
The issue here is to say that if someone is making a film, or slowing down a film or manipulating a film, to try to create the false impression that a senior public figure is not fit for office, then that is an attempt to undermine them and the office they hold.
This is not a question of opinion. This is not a question of free speech. This is a question of people manipulating content in order to undermine public figures, and my concern is that to leave that sort of content up there, when it is indisputably fake, indisputably false and distorted, and to allow permission for this content to be shared with and promoted by other users is irresponsible.
YouTube has removed this content. I don't understand why Facebook doesn't do the same.
Neil Potts
2019-05-28 11:11
Sir, I understand your concerns, but I think your questions are the right ones and that they show the complexity of this issue and also show perhaps that the approach we are taking is working. You don't hear people—
Damian Collins
2019-05-28 11:12
Sorry, but with all respect, what it shows is the simplicity of these issues, the simplicity that another company has taken, recognizing the same issues, the simple action to say that this is clearly fake, it's clearly distorted, it's there to undermine senior public figures and it actually shouldn't have a place on the platform. It shouldn't be part of your community.
Neil Potts
2019-05-28 11:12
Your opinion is right, and I obviously respect the opinion of YouTube as an independent company, but we're not hearing people talk about this video as if it were real. We're hearing people discuss the fact that it is fake and that it is on the platform, so on the question of whether we have informed people that this is a fake video, yes, we have. I think that is the predominant speech right now. Whether it's the conversation we're having right now, whether it's on the news or others, people understand that this video is fake and they can make further decisions from there.
Damian Collins
2019-05-28 11:12
My concern about this is that it sets a very dangerous precedent. Your colleague Monika Bickert said last week to CNN that basically Facebook's policy is that any political content, any disinformation content in relation to politics will not be taken down, that there would be notes put up for users so they could see that the facts are disputed, but it will never be removed.
If you're going to allow your platform to be abused in this way by people producing disinformation films targeted at users to try to interfere with democracy and the best you're going to do is just put a flag on it to say some people dispute this film, I think that is a very dangerous precedent.
Keit Pentus-Rosimannus
2019-05-28 11:52
All right. Thank you.
My second question will be about fake news or fake videos. What is your policy towards fake news, for example, deepfake? Will they be removed or will they just be marked as deepfake videos?
Derek Slater
2019-05-28 11:52
The issue of deepfakes.... Thanks for that question. It's a really important emerging issue. We have clear guidelines today about what content should be removed. If a deepfake were to fall under those guidelines, we would certainly remove it.
We also understand this needs further research. We've been working actively with civil society and academics on that.
Neil Potts
2019-05-28 11:53
Thank you.
We are also investigating and doing research on this policy, to make sure we are in the right place. Currently, we would identify it as being fake and then inform our users, but we constantly—
Neil Potts
2019-05-28 11:53
We constantly iterate our policies, and we may update those policies in the future, as they evolve. We are working with research agencies and people on the ground to understand how these videos could manifest. As I mentioned before, if these videos, or any type of misinformation, led to real world harm—or off-line harm, I should say—we would remove that content.
Carlos Monje
2019-05-28 11:53
We also share concerns about deepfakes. If we see the use of deepfakes to spread misinformation in a way that violates our rules, we'll take down that content.
Keit Pentus-Rosimannus
2019-05-28 11:54
My last question will come back to what Mr. Collins asked in the beginning, about the fake video of Nancy Pelosi. Let's say a similar video were to appear, only the person there was Mr. Zuckerberg. Would that video be taken down, or just marked as a fake one?
Voices: Oh, oh!
Keit Pentus-Rosimannus
2019-05-28 11:54
Sorry for the loud laughter. If a video similar to what has been airing, picturing Nancy Pelosi, appeared with Mark Zuckerberg, would you remove that fake video, or would you just mark it as fake news?
Neil Potts
2019-05-28 11:54
If it was the same video, inserting Mr. Zuckerberg for Speaker Pelosi, it would get the same treatment.
Sun Xueling
2019-05-28 12:47
I would like to come back to the example of Sri Lanka, with Mr. Neil Potts.
Mr. Potts said earlier that he was not aware that the incendiary videos had been flagged to Facebook. I would like to highlight that in an article in The Wall Street Journal, Mr. Hilmy Ahamed, Vice-President of the Muslim Council of Sri Lanka, said that Muslim leaders had flagged Hashim's inflammatory videos to Facebook and YouTube using the services that were built into your reporting system.
Can I just confirm with Mr. Potts that if you were aware of these videos, Facebook would have removed them?
Neil Potts
2019-05-28 12:47
Yes, ma'am. That's correct. If we are aware, we remove the videos. In this specific case, I don't have the data in front of me to confirm whether we were put on notice, but if we had been aware, these videos would have been removed.
Sun Xueling
2019-05-28 12:47
Thank you.
Similarly, can I then confirm in the same spirit that if Facebook were aware of a video being falsified—and we discussed the case of Nancy Pelosi earlier—Facebook would then carry a clarification that they are aware the video is falsified?
Neil Potts
2019-05-28 12:48
When one of our third-party fact-checking partners (we have over 50 now, international and global, all adhering to the Poynter principles) rates content as false, we put up the disclaimer. We also aggressively reduce the distribution of that type of information, and we signal to users who are engaging with it, trying to share it, or have already shared it that it has been rated false.
Sun Xueling
2019-05-28 12:48
Your disclaimer would actually say that you understand that the information contained is false.
Neil Potts
2019-05-28 12:48
It would have a link or a header to the article from one of the fact-checkers that disputes and claims it is false.
Sun Xueling
2019-05-28 12:48
This would be circulated to all who would have seen the original video.
Neil Potts
2019-05-28 12:48
If you are engaging with it currently, you would see it, whether at the bottom in your newsfeed or perhaps on the side if you are on desktop. If you have shared it, you would be informed that there has been an action disputing the validity of the content.
2019-05-28 13:18
Mr. Potts, earlier you said, in answer to my colleague's question, that you were not aware, either before or after the bombings in Sri Lanka, of being told that the videos had in fact been flagged to Facebook beforehand. Do you recall that?
2019-05-28 13:18
This was a serious and horrific massacre where videos were carried on Facebook for some time and, as you said, it's hate speech in clear breach of your policies. The alleged bomber himself, in fact, had a Facebook account, which I'm sure you know. I'm surprised that you didn't check, because that's something that I would have thought Facebook would want to know.
They would have wanted to know how it is that, with videos having been spread on Facebook, this was something that Facebook had missed, if at all you had missed it. I'm surprised you are not aware today as to whether or not this was something flagged to Facebook. Can you confirm that?
Neil Potts
2019-05-28 13:19
For the specific video, sir, I'm happy to follow up after this hearing, and to come back to show you exactly that timeline. When we were made aware of the attack of the individual, we quickly removed—
2019-05-28 13:19
I know. That's not my focus. Why is it that today, about two months after the event, you still don't know whether, prior to the attack, Facebook had been made aware of the existence of the videos? I would have thought that, if you want us to believe in the policies you now have in place, the AI and other mechanisms for the future.... If I were Facebook, I would have wanted to know how this was missed. I'm surprised that you don't know, even today.
Neil Potts
2019-05-28 13:19
That is correct. We do a formal after-action process where we review these incidents to make sure that our policies—
Pablo Jorge Tseng
2018-10-16 11:12
A few months ago, Ryan and I wrote an article entitled “What Can The Law Do About ‘Deepfake’?” The article provides an overview of the causes of action that may be taken against those who create and propagate deepfake material across the Internet, including across social media platforms.
Some of the causes of action include those related to defamation, violation of privacy, appropriation of personality, and the Criminal Code. However, the article did not focus on how deepfakes may influence elections, or how we as a nation can limit the effects of such videos on the outcome of an election.
We hope to use our time here today to further our thoughts on this very important topic. Our opening statement will be structured as follows: one, provide an overview of some other legal mechanisms that are available to combat deepfake videos in an election context; two, provide an overview of potential torts that are not yet recognized in Canada but have the potential to be; and three, discuss whether deepfakes really are the problem or just another example of a greater underlying problem in society.
Ryan Black
2018-10-16 11:13
From the outset, we want to ensure that the appropriate focus is placed on the roles that users, platforms and bad actors themselves play in propagating social media content. Well-intended platforms can and will be misused, and deepfake videos will certainly be a tool used in that malfeasance.
The true bad actor, though, is the person creating the false media for the purpose of propagating it through psychological manipulation. As Dr. Wardle alluded to, the data is valuable, and platforms generally want technology to be used properly. They assist law enforcement agencies with upholding relevant law, and develop policies intended to uphold the election's integrity. They also allow for the correction of misinformation and the sourcing of information.
A recent example in Canada is Facebook's Canadian election integrity policy, which is posted on the Internet.
I'll turn it over to Pablo to discuss the legal remedies relevant to today's discussion.
Pablo Jorge Tseng
2018-10-16 11:14
Focusing on elections, we wish to highlight that Parliament was forward-thinking here: in 2014, it introduced a provision into the Elections Act directed at the impersonation of certain kinds of people in the election process. While the provision is not specifically targeted at deepfake videos, such videos may very well fall within its scope.
In addition, there have been examples in our Canadian case law where social media platforms have been compelled through what courts call Norwich orders to assist in the investigation of a crime committed on that social media platform. For example, a social media platform may be compelled by a court to reveal the identities of anonymous users utilizing the services of that social media platform. That is to say that legal mechanisms already exist and, in our experience, law-abiding third parties subject to such orders generally comply with the terms thereof.
There is also room for our courts to expand on common law torts and for governments to codify new ones.
In general, laws exist in common law and statute form. It is important not to lose sight of the fact that governments have the ability to create law; that is, governments are free to come up with laws and pass them into force. Such laws will be upheld, assuming that they comply with certain criteria. Even if they do not necessarily comply with those criteria, there are certain override provisions that are available.
An example of codification of torts is British Columbia's Privacy Act, which essentially writes out in statute what the cause of action of appropriation of personality is.
Today we are flagging two other torts for discussion: unjust enrichment and the tort of false light.
With regard to unjust enrichment, such tort has generally been upheld in cases involving economic loss suffered by the claimant. However, it is reasonable to argue that the concept of losses should be expanded to cover other forms of losses that may not be quantifiable in dollars and cents.
Regarding the tort of false light, it exists in some states of the United States; Canada, however, does not yet recognize it. The impact of deepfake videos may cause Canadian courts to rethink their position. Even if the tort never emerges at common law, it is well within the power of a provincial government to enact it in statute, thereby creating it in statutory form.
Ryan Black
2018-10-16 11:17
In our article, we explore copyright, tort and even Criminal Code actions as potential yet sometimes imperfect remedies. We note that deepfake technology, impressive and game-changing though it no doubt is, is likely overkill for manipulating the public. One certainly would not need complex computer algorithms to fake a video of the sort that routinely serves as evidence or makes the news.
Think back really to any security footage you have ever seen in a news incident. It's hardly impressive fidelity. It's often grainy or poorly angled, and usually only vaguely resembles the individuals in question.
While deepfake might convincingly place a face or characteristics into a video, simply using angles, poor lighting, film grain, or other techniques can get the job done. In fact, we've seen recent examples of speech synthesis seeming more human-like by actually interjecting faults such as ums, ahs, or other pauses.
For an alternative example, a recent viral video purportedly showed a female law student pouring bleach onto men's crotches on the Russian subway to punish the microaggression of manspreading, or men sitting with their legs splayed too widely apart. The video triggered the expected positive and negative reactions across the political spectrum. Reports later emerged that it was staged with the specific intent of provoking a backlash against feminism and furthering social division in western countries. No AI technology was needed to fake the video, just some paid actors and a hot-button issue that pits people against each other. While political, it certainly didn't target Canadian elections in any conceivable way.
Deepfake videos do not present a unique problem but rather another aspect of a very old one, certainly worthy of consideration. We do, however, have two main concerns about any judicial or legislative response to deepfake videos.
The first is overspecification, or overreaction. In the realm of photography, we have long lived with the threat that deepfakes now pose for video. I'm no visual effects wizard, but when I was an articling student at my law firm more than a decade ago, as part of our tradition of roasting partners at our holiday parties, I very convincingly manipulated a photograph of the rapper Eminem, replacing his face with that of one of our senior lawyers. Most knew it was a joke, but one person did ask me how I got the partner to pose. Thankfully, he did not feel that his reputation was greatly harmed, and I survived unscathed.
Yes, there will come a time when clear video is no longer sacred, and an AI-assisted representation of a person's likeness will be falsified convincingly enough to be newsworthy. We've seen academic examples of this already, so legislators can and should ensure that existing remedies allow the state and victims to pursue malicious deepfake videos.
There are a number of remedies already available, many of which are discussed in our article. In a future of digitally manipulable video, the difference between a computer simulation and the filming of an actual physical person may come down to content-creator preference, so it may, of course, be appropriate to review legal remedies, criminal offences and legislation to ensure that simulations are just as actionable as physical imaging.
Our second concern is that any court or government action may miss the breadth of responsibility by burdening or attacking the wrong target. Pursuing a civil remedy through the courts, particularly over the borderless Internet, will often be a heavy burden to place on the victim of a deepfake, whether a woman victimized by deepfake revenge pornography or a politician victimized by deepfake controversy. It's a laborious, slow and expensive process. Governments should not leave the remedy entirely to the realm of victim-pursued litigation.
Canada does have experience in intervening in Internet action to varying degrees of success. Our privacy laws and spam laws have protected Canadians, and sometimes burdened platforms, but in the cybersecurity race among malicious actors, platforms and users, we can't lose sight of two key facts.
First, intermediaries, networks, social media providers and media outlets will always be attacked by malicious actors, just as a bank or a house will always be a target for thieves. It should not be forgotten that these platforms are also victims of the malicious falsehoods spread through them, just as much as those whose information is stolen or whose identities are falsified.
Second, as Dr. Wardle alluded to, the continued susceptibility of individuals to fall victim to fraud, fake news, or cyber-attack speaks to the fact that humans are inherently not always rational actors. More than artificial intelligence, it is the all too human intelligence with its confirmation bias, pattern-seeking heuristics, and other cognitive shortfalls and distortions that will perpetuate the spread of misinformation.
For those reasons, perhaps even more than rules or laws that ineffectively target anonymous or extraterritorial bad actors, or unduly burden legitimate actors at Canadian borders, in our view governments' response must dedicate sufficient resources to education, digital and news literacy and skeptical thinking.
Thanks very much for having us.
Frank Baylis (Lib., QC)
Thank you.
I'll start with you, Ms. Wardle. What I'd like to do first of all is put some nomenclature around all the different things that are going on. You've used “misinformation,” “disinformation,” and “malinformation”. Mr. Black and Mr. Tseng have used “deepfakes”, deepfake videos. Do they fit into one of your three categories?
Claire Wardle
2018-10-16 11:40
Yes, I would argue that deepfakes are an example of false information disseminated to cause harm, so that would be disinformation. Misinformation might be that my mom sees that deepfake later and she reshares that. She doesn't understand that it's false. My mom's not trying to cause harm. These things can shift as they move through the ecosystem.
View Frank Baylis Profile
Lib. (QC)
Mr. Tseng and Mr. Black, would that go along with how you see this concern about deepfakes?
Ryan Black
View Ryan Black Profile
Ryan Black
2018-10-16 11:41
Largely, it does. I do agree that it's definitely a form of false information, but to attribute malice to it.... Some deepfakes are done for parody or for humour. There will almost certainly be a Hollywood version of deepfakes used to transplant actors' faces. There will be legitimate uses of deepfake, but in the news sphere or in the social media sphere, there certainly is a vulnerability that it would be used for malicious purposes. I tend to agree that it's definitely a form of falsification, just like a tricky camera angle or an edit could be disinformation as well.
View Frank Baylis Profile
Lib. (QC)
What is the difference between a deepfake and just a regular fake, or a fake video and a deepfake? Could you explain that to me?
Ryan Black
View Ryan Black Profile
Ryan Black
2018-10-16 11:42
Actually, I found through search engines an article that Dr. Wardle participated in, in Australia, which explains it very well. I would encourage people to hit their favourite search engine to find it.
Basically, it learns details from a series of images that are publicly sourced or sourced through other means. It learns details about the face and then uses deep-learning techniques, which are algorithmic rather than logic-based in nature, to learn how the face interacts as it moves. Then, using a transplant victim.... If I were to take a video of Pablo here, and enough video had been pumped into the deepfake learning engine, I could put my face onto Pablo's and very convincingly make Pablo look like he's talking while I'm moving.
View Frank Baylis Profile
Lib. (QC)
The way we're going with this concept of deepfake, every kid's going to be doing this with their friends and making these videos, if I understand what you're saying. It's going to be that easy to do, right?
Ryan Black
View Ryan Black Profile
Ryan Black
2018-10-16 11:43
It's a technology of virtually limitless application, and it will be used for more than faces. It will be used for full bodies. At some point it will be used for transplanting entire things or other characteristics. It could be used for voice just as easily as for faces.
View Charlie Angus Profile
NDP (ON)
Thank you.
Mr. Black and Mr. Tseng, on the issue of deepfakes and the legal powers, under the Copyright Act in Canada, we have notice and notice, as opposed to notice and takedown. There has been push-back on imposing notice and takedown, because they say that you could be unfairly interfering with someone's rights, that you could be unfairly targeting a competitor.
On the question of deepfakes, are there specific legal issues we have to look at in terms of their effect on, say, upending an election?
What are the legal parameters? If someone has been the subject of a deepfake, they could go the libel route. There are a number of traditional mechanisms in place that may be sufficient, but if it happens in the middle of an election, it could upend the democratic system.
Are there specific remedies that would be better able to address the threat of a deepfake, and upending elections?
Ryan Black
View Ryan Black Profile
Ryan Black
2018-10-16 12:03
I'm not sure that deepfake technology would be an appropriate target for any specific action, only because it is one in a very large belt of tools available to people who are trying to manipulate people through social media. Through the ways that both of the other speakers have spoken about, our brains are kind of wired to heuristically solve problems that we can't possibly logically solve because there's so much information being thrown at us at all times.
I worry, truthfully, more about the intent of misinformation and disinformation. I truthfully worry more about that than the specifics of deepfake video. This is only because—again, I go back to my security camera footage—you don't need to have a very sophisticated video or fake video to convince people that something's happened. You don't need to have a very convincing photo to convince people that something's happened. You can use a real image just as easily as you can use a fake image.
Jason Kee
View Jason Kee Profile
Jason Kee
2018-10-04 11:00
Thank you, Chair.
My name is Jason Kee. I'm public policy and government relations counsel at Google Canada.
We appreciate the opportunity to participate in your study of remuneration models for artists in creative industries.
As Google, our approach to remuneration and revenue is a partnership model. In this model, creators—such as publishers, artists, producers or app developers—create and supply the content while we provide distribution and monetization, including technical infrastructure, sales teams, transaction and payment systems, and business support, etc. We then share in the resulting revenue, with the majority of the revenue going to the creator every single time. Under this partnership model, we only earn revenue when our partners earn revenue, so it is in our interest to ensure that our partners are successful.
Google offers a wide variety of platforms for different types of creators. We offer publishers advertising platforms that allow them to monetize their content by hosting ads on their sites and apps, and share in that revenue.
Google Play, our online store, provides a massive global audience for developers and other content partners, with developers sharing in 70% or more of the revenue. Google Play Movies & TV offers shows or movies for rental or purchase, while Google Play Music offers unlimited ad-free access to over 40 million songs for a monthly fee, or through a more limited ad-supported tier. These are each fully licensed services that remit royalties to rights holders in accordance with licensing agreements and provide a significant source of revenue to our content partners.
Last is YouTube, Google's global online video platform. YouTube has over 1.9 billion monthly logged-in users, and over a billion hours of video are watched each and every day. At YouTube, our mission is to give everyone a voice and show them the world. This is the true power of YouTube—that with just a camera and an Internet connection, anyone of any age from any walk of life can participate, have a voice and build a global audience.
And they do. Over 400 hours of video content are uploaded to YouTube every single minute, making it one of the largest living collections of human culture ever assembled. These uploads represent virtually every imaginable type of video content, from home videos and user-generated content to high-end film and television content. Through platforms like ours, more people around the world are able to think of themselves as authors, artists and creators.
YouTube is also a “lean-in” interactive experience, where creators interact directly with a community of engaged and passionate fans who share and comment and contribute. This personal, direct connection that YouTube creators share with their fans makes them more authentic and relatable and distinguishes YouTube from other platforms.
Canada has a very large and vibrant YouTube creator community that produces high-quality and engaging content that is being enjoyed in high numbers, both domestically and globally. Canada is one of the top exporters of content on YouTube. Globally, on average, 50% of a creator's watch time comes from outside their own country, but for Canadian creators, over 90% of watch time comes from outside Canada. This is higher than for any other country on the platform, and demonstrates that we produce very exportable content.
In the past year, Canadian channels have seen their watch time grow 45%, and channels making six figures or more in revenue are up 24% over last year. Canadian success stories are numerous, and many of these creators have grown sufficiently large and sophisticated to employ teams of business managers, researchers, camera operators, editors and others, effectively becoming small production studios.
Canada also has a large community of up-and-coming creators—YouTube's creative middle class—including a range of Quebec creators, who predominantly produce French-language content that performs very well in Quebec and French-language markets.
Canadian broadcasters and producers are increasingly partnering with YouTube and leveraging the platform to reach new international audiences. We partnered with the Canada Media Fund on Encore+, a YouTube channel featuring classic Canadian content that is no longer aired, such as The Littlest Hobo and Wayne and Shuster. We've provided live streaming services for major events, including the last Tragically Hip concert, The JUNO Awards, The Canadian Screen Awards, and APTN's Indigenous Day Live, among others, extending the reach of these Canadian moments. We work closely with these partners to help them maximize their opportunities on the platform.
View Michel Picard Profile
Lib. (QC)
I wanted to ask the representatives from St. John Ambulance this question.
If the devices are so easy to use, then would it be possible to provide simple training through a video recording on a website?
It's true that the RCMP site is rather full and diverse, but there could be an RCMP video on the safety aspect. The training would provide enough information so that those who are afraid to use this device can be assured that, if they ever need to use it, they will be able to without any problem.
View Michel Picard Profile
Lib. (QC)
Should we put a training video on your website or any other website for the general population to learn how to use these?
Jamie Solesme
View Jamie Solesme Profile
Jamie Solesme
2018-10-02 16:50
I'm not familiar with what's currently available in the realm for the public. You're talking about public education. I'm not certain what's there now.
Sophie Prégent
View Sophie Prégent Profile
Sophie Prégent
2018-06-07 9:01
Our third recommendation is to treat performances incorporated in music videos as musical performances and not as cinematographic performances. Currently, once a performer authorizes the incorporation of his or her performance into a cinematographic work, including a video clip, he or she automatically waives his or her copyright for that use. For example, a performer whose performance is captured on video and is also audio-recorded may only exercise copyright or receive equitable remuneration when his or her sound performance is dissociated from the video.
Yet, a video clip is neither more nor less than a song with images. No song, no video! I do not know anyone who watches a YouTube music video of a song on mute. That person is in fact watching the song. In such a case, depriving the artist of his or her rights is absurd. In our view, it is imperative that Canada ratify the Beijing Treaty on Audiovisual Performances and extend the exclusive and moral rights of performers in the sound recording industry to all performers.
That brings me to our fourth request.
The definition of sound recording must be changed so that the songs used in movies or TV shows are also covered by fair remuneration. The definition of sound recording contained in the act is problematic, since it excludes soundtracks of cinematic works broadcast at the same time as the film. This situation deprives performers of significant revenues, in addition to being discriminatory, since authors and music composers enjoy equivalent royalties for the use of their works. In 2012, the legislator recognized the same rights for performers in the sound recording industry as those of authors. It is therefore difficult to understand why this discrimination still exists.
Fifth, it is necessary to find ways to compensate performers for the use of their performances on the Internet. Quebec artists know that revenues from the streaming of their works are ridiculously low, even for their most popular songs.
The problem is in fact twofold. Firstly, revenues for non-interactive and semi-interactive webcasting are subject to a tariff set by the Copyright Board of Canada. This tariff is almost 11 times lower than the one in effect in the United States for the same period.
Revenues for webcasts of on-demand music content such as Spotify or Apple Music are subject to contractual arrangements between artists and producers that provide for the recovery of production costs before the payment of royalties to artists. Given the small sums generated by album sales as well as webcasting on demand, performers obviously too often find themselves deprived of royalties from this commercial exploitation of their performances.
View Pierre Nantel Profile
Ind. (QC)
Thank you, Madam Chair.
Andrew, I was browsing through the videos of the National Music Centre. I was wondering if you have video recordings of artists coming to enjoy your vintage equipment. You were referring to Émile Bilodeau. If I were you, I would love to see k.d. lang use that console or that Wurlitzer.
Andrew Mosker
View Andrew Mosker Profile
Andrew Mosker
2018-04-19 10:40
Yes, we have recordings of all the artists who have come to record in our studios. Since March, I believe that you can find recordings of more than 20 artists on our website. I will check that. We always do a commentary about the artists who come to record in our studios.
We also organized a major exhibition on k.d. lang. It opened in July 2017 and will end in June 2018. She has not recorded in our studios, but we have at least held an exhibition in her honour.
Next we will be starting a new exhibition on another Canadian artist.
Michelle Van De Bogart
View Michelle Van De Bogart Profile
Michelle Van De Bogart
2017-11-28 12:04
It is on our website, yes.
If I may, I would say that we refocused our inreach a couple of years ago. What we found when we refocused that was an increase in the number of requests for elder-assisted hearings. This is something we have discussed continuing, so we are now looking at doing continuous inreach to the institutions to continue those conversations.
Brian Myles
View Brian Myles Profile
Brian Myles
2016-09-29 11:26
Ladies and gentlemen, thank you. It's a real pleasure to be here before you today.
As my colleagues said earlier, the situation is serious. I think that, if given the opportunity, every media owner would tell you pretty much the same thing today, which is that our traditional revenue from print is decreasing and digital revenue isn't offsetting the losses. Let me be clear: I wouldn't go back to the paper era. We aren't dinosaurs here. The digital revolution is fantastic, but for our business model, it means that we have traded analogue dollars for digital cents. We are failing to achieve a stable business model.
Our recommendations are in two parts: measures that provide direct assistance and measures that provide indirect assistance. The first, which is perhaps the most important, is an indirect measure. If the government can't help us, it could stop hurting us and use the advertising budgets at its disposal to fund our media, the national media of Quebec and of the rest of Canada.
The federal government currently invests about half a million dollars in its advertising in Canadian newspapers. Ten years ago, that amount was $20 million. For us, that drop from $20 million to half a million dollars is brutal.
Where has all the government advertising gone?
It's no big mystery. In fact, the investment in digital platforms in 2014-15 was some $19 million. That $19 million, or $20 million if we round up, is basically money that went to American giants like Google and Facebook.
So the first recommendation is, of course, to make a significant and lasting increase to government advertising investment in our media. In addition, we think advertisers who are still brave enough to support the press here should benefit from tax credits for their advertising investment in our platforms. When I say “platforms”, that includes our printed pages, but also our screens, as we can now all be found on tablets and cell phones.
It would also be very important to update the Copyright Act. European countries are ahead of Canada and the United States on this. Here, we have let this entity called GAFA, or the giants Google, Amazon, Facebook and Apple, bleed our content dry and monetize it. It's an exodus of revenue, a major fiscal exodus. Improving the Copyright Act would make it possible to negotiate agreements and obtain royalties when our content is used on these major platforms.
We are also asking to be treated like all other media. These days, in the digital world, a screen is a screen. We need to consider that print media on digital platforms will also sometimes have a video and be on the Internet. For now, we don't have access to any assistance programs. Programs managed by the Canada Media Fund and Telefilm Canada aren't available to us. If we want to develop a video offering on our mobile site to reach new clients, young people, we have to pay for it ourselves. We don't have access to any tax credits, any assistance, direct or indirect. That's the case for Le Devoir and all coalition members at the moment.
We think that payroll tax credits for hiring qualified journalists, and tax credits for creating applications would enable us to continue our digital shift. We don't expect ongoing assistance from the government. We aren't asking to be dependent on it. We think that transitional help would let us continue the activities we've already started and to pay journalists. In fact, information has a price, a value. But this value is that of brains, the intelligence of the people we hire and who are in the field to bring back quality material. These credits would certainly give us some breathing room, some time to get our business models in place.
Lastly, we pay GST on our products and QST in Quebec. We are asking both levels of government, Quebec City and Ottawa, to coordinate to exempt print media from the GST and QST. Of course, this measure would alleviate the problem a little. You can see for yourselves and around you that in the cultural arena, freebies are widespread, particularly among new information consumers. There are limits to what we can charge for subscriptions. We have a pay wall at Le Devoir.
We are one of the rare media that is successful in having our subscribers pay for quality information. We are well aware that we are stretching their flexibility to the limit by constantly increasing prices. A tax exemption would give us some manoeuvring room. The book industry in Quebec is exempt from the QST. Canadian magazines benefit from tax exemptions and have had access to the Canada Periodical Fund.
View Dan Vandal Profile
Lib. (MB)
You are an independent online magazine. Do you do any video production at all as part of your magazine?
Michelle Hoar
View Michelle Hoar Profile
Michelle Hoar
2016-09-27 11:45
We did do a small amount. Most of our reporting is text-based, but we do some video.
Len Garis
View Len Garis Profile
Len Garis
2016-04-18 15:48
Yes, we certainly would. Thank you very much.
Just to reintroduce myself, I'm Len Garis, the fire chief, but I am also appointed as the city's emergency planner under the Emergency Program Act of the Province of British Columbia. So there are two pieces of context here that are fairly important to us.
When I describe Crescent Beach, a seaside community in Surrey, it's important to know that it's about 142 acres, with about 403 properties, and home to about 1,250 people full-time. That number swells during the summer; as I said, it's a seaside resort community.
I would like to point out that Crescent Beach has two access roads from the beach, which are intersected by the rail line at grade. The primary route runs along Beecher Street and Crescent Road. As noted, there is a map in my presentation. The secondary route is McBride Avenue.
Due to their proximity, being approximately 500 metres apart, both access points have a tendency to be blocked by passing trains. Again, the map will point that out to you. It shows two proposed emergency exit access points, from our conversations with the BNSF and the city.
The geography of Crescent Beach places the rail tracks along the coastline of Boundary Bay and Mud Bay for about 4.5 kilometres.
For some time, Crescent Beach residents have petitioned that the rail line be moved away from the coastline, citing concerns about dangerous goods being transported too close to the community, along with the inconvenience of having eight to 10 blockages a day, which last between six and 10 minutes.
In December 2007, a mechanical failure forced BNSF to apply its emergency brake at Crescent Beach, resulting in all road access being blocked for about two hours.
After this incident, the Crescent Beach Property Owners Association approached the mayor and council and requested immediate action to prevent the community from being isolated or stalled by a stopped train. To help address this access concern, Surrey Fire Services, the RCMP, and ambulance services worked with BNSF to create a document called the stopped train protocol, and my understanding is that you will be receiving this shortly.
Through this protocol, when a public request for emergency services is received, the emergency provider notifies the respective rail company to either stop or delay the train. The stopped train protocol also provides a process to follow when a train breakdown blocks critical at-grade crossings, such as those into the community of Crescent Beach.
In October 2010, the city contracted an independent engineering consultant to investigate the matter of emergency access routes to the community of Crescent Beach, should these two access points again be blocked by a train. The study investigated a number of options, but in the end they were all believed to be too complex and costly.
In November 2012, a short time after the stopped train protocol was implemented, another BNSF train breakdown occurred, blocking access to Crescent Beach. During this incident, the stopped train protocol was not adhered to, nor were the Transport Canada regulations, which require that a stopped train not block a public crossing for longer than five minutes when vehicular traffic requires passage. This incident resulted in the complete isolation of the community for 30 minutes. Investigation by a BNSF trainmaster later revealed that there had been a communications breakdown.
As a result of the second incident, the mayor and council, the Surrey emergency program, and the RCMP essentially felt a loss of credibility with the residents about their ability to deal with this critical safety issue. We had put protocols in place to try to alleviate this.
Over the following years, both access roads in and out of Crescent Beach were blocked by a BNSF train on a number of occasions. On June 26, 2014, there was a failure and a blockage for 45 minutes. On August 2, 2014, at 09:35, a mechanical failure resulted in a BNSF train blocking the Beecher access for more than 10 minutes, and the McBride access for three hours. On January 5, 2015, a mudslide at mile post 125.7, one mile south of McBride Avenue, resulted in a BNSF train blocking both access points for three hours and four minutes. On February 18, 2016, a fallen tree across the tracks south of McBride resulted in a BNSF train blocking both points again for an hour and 39 minutes.
Following the January 5 incident, a complaint letter was sent to Transport Canada, which responded by saying there was not enough evidence to support the complaint or to proceed with it.
To help mitigate that, the City of Surrey installed CCTV cameras, as well as an electronic monitoring system, first at the Crescent and Beecher Street crossing, and then at the McBride Road crossing. The intent was to collect visual, time-stamped evidence in order to provide Transport Canada with documentation and proof, and to know in real time of any blockages occurring in the community so that emergencies could be pre-empted.
The CCTV cameras monitor and record any rail traffic in contravention of the rail operating rules, specifically rule 103(d), which reads:
no part of a movement may be allowed to stand on any part of a public crossing at grade, for a longer period than 5 minutes, when vehicular or pedestrian traffic requires passage.
Following the installation of the CCTV camera, the incident on February 18 was recorded and is currently under investigation.
It is important to note that from April 1, 2011, to March 31, 2016, there were 228 calls for emergency service in this community, and in the past few years we have seen several incidents where the stopped train protocol should have been exercised but was not.
Further, on these occasions it appears that BNSF was in violation of Transport Canada's rail operating rules. However, the city has had no indication from Transport Canada that any sanctions or consequences have been applied in order to alleviate this problem and try to encourage them to follow the rules that are in place.
It is the City of Surrey's view that BNSF and Transport Canada have failed to recognize the seriousness of the Crescent Beach community's becoming completely isolated whenever a BNSF train blocks these two access roads.
This creates an elevated life risk, should there be a request for emergency services in the community of Crescent Beach.
That is my statement. Thank you.
Gregory Percy
View Gregory Percy Profile
Gregory Percy
2016-04-18 16:41
Thank you.
First of all, GO Transit is part of Metrolinx. It's a crown agency of the province. Metrolinx would include UP Express, our new airport rail link; GO Transit; and Presto. I will speak on behalf of those also.
Safety is very important not just to the industry, but also certainly to major players like VIA Rail Canada, GO Transit and UP Express. We have this commitment to safety. It's part of the GO and UP culture. That's to the benefit of our customers, employees, contractors, and also the communities through which we operate. We actually own 80% of our operating network, so that's a responsibility we take very seriously. We have been fortunate to have a great safety record since our inception in 1967, and we look forward to continuing that.
We embed the commitment to safety in our passenger charter, which was precedent-setting when introduced some six years ago. We have an explicit set of promises in terms of the safety we provide to our customers, and we look forward to continuing that process. It's one of those things where you just can't take your eye off the ball. You have to keep reinvesting time and energy to make sure safety matters.
Lastly, we have a safety management system also, as required by Transport Canada. That gets updated every year, and we make sure we live up to those commitments also.
On our community contribution, we do education outreach. We reach into the schools. That's really important. It's important to get to the kids before they get to the tracks. We take that very seriously. We did something in the last year that was quite unique, and I believe a precedent for the industry. We partnered with ConnexOntario, another agency of the province, and put up signs at all of our level crossings, stations, and bridges, basically advertising a health line for those who are struggling with mental health issues. We have had some feedback that it has saved lives as well. We're quite proud of that, and we're hoping that it rolls out to the industry. We already have some interest through CUTA and CN, but we'd like to roll that out right across Canada.
Many of the things we do centre on safety and customer service, but safety is always first, whether in terms of how we build our crew shifts or how we build our equipment. We have been one of the first to embrace in-cab video and audio recorders. We have started changing our fleet over, and as soon as units are changed over, we turn them on. We expect to have the whole fleet turned on by the end of this year. We think this is a very important step forward for us, and we hope that the industry does it also.
As for some of the other areas, we look internationally to see the appropriate best practices of other agencies, not just in North America but outside North America, to see what the right things are, any of which we can reverse-engineer into our operation.
With respect to dangerous goods, for example, we have made some recommendations through the Canada Transportation Act panel. I won't go over those. Many of you may already have read some of them. As an entity that owns 80% of its network, we have an obligation, and it's a fairly unique one, to host such trains carrying dangerous goods. So we've had some early conversation with Transport Canada on what those obligations are.
In terms of other things we do, because we own our operating network, or 80% of it, we wound up contracting with Transport Canada to regulate us on our own network. They felt it was outside their jurisdiction, so we actually chose...and we just renewed for two more years with Transport Canada, inviting them to come onto our corridor, inspect our operating crews, inspect our equipment and the actual right of way. We think this is a good step forward. We don't think self-regulation provides the level of safety that we want. We think strong safety is good public policy, and we and our behaviour support that.
For our own corridors we don't wait until we're told what the minimum safety level is in terms of lights, bells and gates. We actually go to maximum protection at all our level crossings.
We are very quick to react should there be any state of disrepair. We think this is very important.
The industry responds to slow orders. Should there be any track specific issues, it's important to react quickly to those also. Of course, the industry tries to do that.
Marc Beaulieu
View Marc Beaulieu Profile
Marc Beaulieu
2016-04-18 16:47
With proactive measures, we monitor and measure the performance of our locomotive engineers thousands of hours a month. We monitor every train speed, every day, with our PSI system that monitors all of our high-speed corridor trains. We measure speed every day.
Our performance compliance and rules monitoring system results are at 99.8%.
We also work closely with all industry partners through the Railway Association of Canada to implement all safety measures necessary.
We've implemented a new passenger-specific training program for locomotive engineers, launched last September, focusing more on human factors, with the help of human factors experts.
We participate in studies of locomotive voice and video recording with the Transportation Safety Board to try to implement whatever we can to assist us in our safety management system.
I have more. I could keep going if you like.
View Angelo Iacono Profile
Lib. (QC)
Thank you, Madam Chair.
Thank you, gentlemen, for being here today to answer some of our questions. I will be sharing my time with Mr. Badawey.
My first question is for Marc Beaulieu.
The Transportation Safety Board identified the risk of collisions between passenger trains and vehicles, particularly in the rail corridor between Quebec City and Windsor, as one of the greatest risks to safety in the federal transportation system.
Do you think video and voice recorders should be installed in locomotive cabs, in addition to technical data recorders, as a way to make the system safer?
Marc Beaulieu
View Marc Beaulieu Profile
Marc Beaulieu
2016-04-18 17:07
Thank you for your question.
Right now, we are taking part in an in-cabin video and voice recorder project, in partnership with the Transportation Safety Board.
Our locomotives are already equipped with event recorders that can be checked remotely, by computer, to ascertain the circumstances surrounding an accident. I firmly believe that the addition of in-cab video and voice recorders to our locomotives would significantly improve our safety management system.
Angelo Iacono
Lib. (QC)
You said it already exists. Is it used strictly when an accident occurs or all the time, in other words, from the moment the train leaves the station?
Marc Beaulieu
2016-04-18 17:08
I'm talking about two different things. First, there is the video and voice recording system. Second, there is an event recorder, which has been in place since the 1980s and can be used to obtain reference data and information on the train's behaviour at any time. The information is also available in the case of accidents or internal investigations by Transport Canada or the Transportation Safety Board of Canada.
Angelo Iacono
Lib. (QC)
Are all of your locomotives and cabins already equipped with that system?
Marc Beaulieu
2016-04-18 17:08
Yes. They are all equipped with event recorders that provide us with all relevant data on the train's operation, such as speed and braking information. They also tell us things like whether the lights were on and whether the horn was sounded as the train approached a level crossing, in accordance with regulations.
Angelo Iacono
Lib. (QC)
I have a quick question, again for Mr. Beaulieu.
What is VIA Rail's position on the use of inward-facing cameras? You mentioned that you have video cameras there in case of an event, but what about constant control, constant viewing of what's happening in the cabin?
Marc Beaulieu
2016-04-18 17:15
I think locomotive voice and video recorders would add a very important element to our ability to manage safety. The current laws and regulations, together with the pilot project being led by the Transportation Safety Board, will take us to a different level. Through that pilot project, with the participation of Transport Canada and the other industry partners, a decision will be made, from a legal and regulatory perspective, on how locomotive voice and video recorders will be used in the future.
I will leave that final decision to the regulators.
Luc Berthold
CPC (QC)
The information you provide to us matters. I wish I had thought to ask the representatives of the other rail companies who appeared before the committee that same question.
My next question is for Mr. Percy.
Correct me if I'm wrong, but you said that you had cameras equipped with audio in your locomotives. Is that correct?
Gregory Percy
2016-04-18 17:20
I'll give you a quick summary. We've had externally facing cameras for probably 15 years. We have just started installing inward-facing cameras, with audio, in the last few months. We have a fleet of, say, 75 locomotives, and we have probably about 10% done. We've chosen to turn those on as soon as each locomotive is completed.
Luc Berthold
CPC (QC)
One of the things the committee has heard is that conductors may be uncomfortable doing their job knowing that they are on camera. We even heard that it could lead to more problems.
How did workers and unions react to that? Did you experience any problems in cases where cameras had been installed and were in use?
Gregory Percy
2016-04-18 17:21
We provide our rail service through a third party called Bombardier. They do both the operations and the maintenance. We ensured that they did speak to their union so they would understand what our intentions were.
We have yet to use them to manage the crews. That's not a method we would choose to use. What we have them there for is to understand how incidents happen and to identify the role of the operating crew should an incident occur in which we suspect their behaviour may have contributed to the accident.