Colin McKay
2019-06-04 16:07
Thank you, Chair.
Thank you to all members of the committee for the opportunity to speak with you today.
I don't mind the delay. It's the business of Parliament, and I'm just happy to be a part of it today.
As the chair just mentioned, my name is Colin McKay, and I'm the head of government affairs and public policy for Google in Canada.
We, like you, are deeply troubled by the increase in hate and violence in the world. We are alarmed by acts of terrorism and violent extremism like those in New Zealand and Sri Lanka. We are disturbed by attempts to incite hatred and violence against individuals and groups here in Canada and elsewhere. We take these issues seriously, and we want to be part of the solution.
At Google, we build products for users from all backgrounds who live in nearly 200 countries and territories around the world. It is essential that we earn and maintain their trust, especially in moments of crisis. For many issues, such as privacy, defamation or hate speech, local legislation and legal obligations may vary from country to country. Different jurisdictions have come to different conclusions about how to deal with these complex issues. Striking this balance is never easy.
To stop hate and violent extremist content online, tech companies, governments and broader society need to work together. Terrorism and violent extremism are complex societal problems that require a response, with participation from across society. We need to share knowledge and to learn from each other.
At Google we haven't waited for government intervention or regulation to take action. We've already taken concrete steps to respond to how technology is being used as a tool to spread this content. I want to state clearly that every Google product that hosts user content prohibits incitement to violence and hate speech against individuals or groups, based on particular attributes, including race, ethnicity, gender and religion.
When addressing violent extremist content online, our position is clear: We are agreed that action must be taken. Let me take some time to speak to how we've been working to identify and take down this content.
Our first step is vigorously enforcing our policies. On YouTube, we use a combination of machine learning and human review to act when terrorist and violent extremist content is uploaded. This combination makes effective use of the knowledge and experience of our expert teams, coupled with the scale and speed offered by technology.
In the first quarter of this year, for example, YouTube manually reviewed over one million videos that our systems had flagged for suspected terrorist content. Even though fewer than 90,000 of them turned out to violate our terrorism policy, we reviewed every one out of an abundance of caution.
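The two-stage process described here, machine flagging followed by human review, can be sketched as a simple triage queue. This is an illustrative sketch only; the threshold, function names and data shapes are assumptions for the example, not Google's actual system:

```python
# Hypothetical sketch of a machine-flagging + human-review pipeline.
# Threshold and names are illustrative, not YouTube's real system.

REVIEW_THRESHOLD = 0.5  # classifier score above which a human takes a look

def classify(video) -> float:
    """Stand-in for a trained classifier returning P(violates policy)."""
    return video.get("score", 0.0)

def human_review(video) -> bool:
    """Stand-in for an expert reviewer's final judgment."""
    return video.get("violates", False)

def triage(videos):
    """Machines flag candidates at scale; humans make the removal call."""
    flagged = [v for v in videos if classify(v) >= REVIEW_THRESHOLD]
    removed = [v for v in flagged if human_review(v)]
    return flagged, removed

videos = [
    {"id": 1, "score": 0.9, "violates": True},
    {"id": 2, "score": 0.7, "violates": False},  # flagged, cleared on review
    {"id": 3, "score": 0.1, "violates": False},  # never surfaced to a human
]
flagged, removed = triage(videos)
print(len(flagged), len(removed))  # 2 1
```

This mirrors the numbers in the testimony: many more videos are flagged and reviewed than ultimately turn out to violate policy.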
We complement this by working with governments and NGOs on programs that promote counter-speech on our platforms—in the process elevating credible voices to speak out against hate, violence and terrorism.
Any attempt to address these challenges requires international coordination. We were actively involved in the drafting of the recently announced Christchurch Call to Action. We were also one of the founding companies of the Global Internet Forum to Counter Terrorism. This is an industry coalition to identify digital fingerprints of terrorist content across our services and platforms, as well as sharing information and sponsoring research on how to best curb the spread of terrorism online.
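The "digital fingerprints" approach behind the GIFCT can be illustrated with a shared hash database: one member identifies terrorist content and contributes its hash, and other members can then match uploads against the shared set. The sketch below is a simplification under stated assumptions; real systems use perceptual hashes that survive re-encoding, whereas the SHA-256 stand-in here only matches byte-identical files:

```python
import hashlib

# Illustrative sketch of cross-platform hash sharing, in the spirit of the
# GIFCT hash-sharing database. SHA-256 is a stand-in for the perceptual
# hashes real systems use to match re-encoded copies.

shared_hashes: set[str] = set()

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def contribute(content: bytes) -> None:
    """One platform identifies terrorist content and shares its fingerprint."""
    shared_hashes.add(fingerprint(content))

def check_upload(content: bytes) -> bool:
    """Another platform checks a new upload against the shared database."""
    return fingerprint(content) in shared_hashes

contribute(b"known extremist video bytes")
print(check_upload(b"known extremist video bytes"))  # True
print(check_upload(b"unrelated video bytes"))        # False
```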
I've spoken to how we address violent extremist content. We follow similar steps when addressing hateful content on YouTube. We have tough community guidelines that prohibit content that promotes or condones violence against individuals or groups, based on race, ethnic origin, religion, disability, gender, age, nationality, veteran status, sexual orientation or gender identity. This extends to content whose primary purpose is inciting hatred on the basis of these core characteristics. We enforce these guidelines rigorously to keep hateful content off our platforms.
We also ban abusive videos and comments that cross the line into a malicious attack on a user, and we ban violent or graphic content that is primarily intended to be shocking, sensational or disrespectful.
Our actions to address violent and hateful content, as is noted in the Christchurch call I just mentioned, must be consistent with the principles of a free, open and secure Internet, without compromising human rights and fundamental freedoms, including the freedom of expression. We want to encourage the growth of vibrant communities, while identifying and addressing threats to our users and their broader society.
We believe that our guidelines are consistent with these principles, even as they continue to evolve. Recently, we extended our policy dealing with harassment, making content that promotes hoaxes much harder to find.
What does this mean in practice?
From January to March 2019, we removed over 8.2 million videos for violating YouTube's community guidelines. For context, over 500 hours of video are uploaded to YouTube every minute, so while 8.2 million is a very big number, it is a small fraction of a very large corpus. Now, 76% of these videos were first flagged by machines rather than humans. Of those detected by machines, 75% had not received a single view.
We have also cracked down on hateful and abusive comments, again by using smart detection technology and human reviewers to flag, review and remove hate speech and other abuse in comments. In the first quarter of 2019, machine learning alone allowed us to remove 228 million comments that broke our guidelines, and over 99% were first detected by our systems.
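Worked through, the figures quoted in the two paragraphs above imply the following approximate counts (the percentages are rounded in the testimony, so the derived numbers are too):

```python
# Figures from the testimony, with the counts they imply.
removed = 8_200_000                   # videos removed, Jan-Mar 2019
machine_flagged = removed * 0.76      # first flagged by machines, not humans
zero_views = machine_flagged * 0.75   # removed before receiving a single view

print(f"{machine_flagged:,.0f}")  # 6,232,000
print(f"{zero_views:,.0f}")       # 4,674,000

# For scale: over 500 hours of video uploaded per minute
hours_per_day = 500 * 60 * 24
print(f"{hours_per_day:,}")       # 720,000 hours of new video per day
```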
We also recognize that content can sit in a grey area, where it may be offensive but does not directly violate YouTube's policies against incitement to violence and hate speech. When this occurs, we have built a policy to drastically reduce a video's visibility by making it ineligible for ads, removing its comments and excluding it from our recommendation system.
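The tiered enforcement described here, removal for outright violations versus demonetization, disabled comments and no recommendations for grey-area content, can be sketched as a small policy function. The categories and field names are illustrative assumptions, not YouTube's actual implementation:

```python
from enum import Enum, auto

# Sketch of tiered enforcement: violating content is removed; "borderline"
# content stays up but loses ads, comments and recommendations.
# Names and structure are illustrative only.

class Verdict(Enum):
    ALLOW = auto()
    BORDERLINE = auto()  # offensive but not a direct policy violation
    VIOLATION = auto()

def enforce(verdict: Verdict) -> dict:
    if verdict is Verdict.VIOLATION:
        return {"hosted": False}  # removed from the platform
    limited = verdict is Verdict.BORDERLINE
    return {
        "hosted": True,
        "ads_eligible": not limited,      # ineligible for ads if borderline
        "comments_enabled": not limited,  # comments removed if borderline
        "recommendable": not limited,     # excluded from recommendations
    }

print(enforce(Verdict.BORDERLINE))
```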
Some have questioned the role of YouTube's recommendation system in propagating questionable content. Several months ago we introduced an update to our recommendation systems to begin reducing the visibility of even more borderline content that can misinform users in harmful ways, and we'll be working to roll out this change around the world.
It's vitally important that users of our platforms and services understand both the breadth and the impact of the steps we have taken in this regard.
We have long led the industry in being transparent with our users. YouTube put out the industry's first community guidelines report, and we update it quarterly. Google has long released a transparency report with details on content removals across our products, including content removed upon request from governments or by order from law enforcement.
While our users value our services, they also trust them to work well and provide the most relevant and useful information. Hate speech and violent extremism have no place on Google or on YouTube. We believe that we have developed a responsible approach to address the evolving and complex issues that have seized our collective attention and that are the subject of your committee's ongoing work.
Thank you for this time, and I welcome any questions.
Jim Balsillie
2019-05-28 8:34
Thank you.
Co-chairs Zimmer and Collins and committee members, it's my honour and privilege to testify today.
Data governance is the most important public policy issue of our time. It is cross-cutting, with economic, social and security dimensions. It requires both national policy frameworks and international coordination.
Over the past three years, Mr. Zimmer, Mr. Angus and Mr. Erskine-Smith have spearheaded a Canadian bipartisan effort to deal with data governance. I'm inspired by the seriousness and integrity they bring to the task.
My perspective is that of a capitalist and global tech entrepreneur for 30 years and counting. I'm the retired chairman and co-CEO of Research in Motion, a Canadian technology company that we scaled from an idea to $20 billion in sales. While most are familiar with the iconic BlackBerry smartphone, ours was actually a platform business that connected tens of millions of users to thousands of consumer and enterprise applications via some 600 cellular carriers in more than 150 countries. We understood how to leverage Metcalfe's law of network effects to create a category-defining company, so I'm deeply familiar with multi-sided, platform business model strategies, as well as with navigating the interface between business and public policy.
I'll start with several observations about the nature, scale and breadth of our collective challenge here.
First, disinformation and fake news are just two of the many negative outcomes from unregulated attention-based business models. They cannot be addressed in isolation. They have to be tackled horizontally as part of an integrated whole. To agonize over social media's role in the proliferation of online hate, conspiracy theories, politically motivated misinformation and harassment is to miss the root and scale of the problem.
Second, social media's toxicity is not a bug—it's a feature. Technology works exactly as designed. Technology products, services and networks are not built in a vacuum. Usage patterns drive product development decisions. Behavioural scientists involved with today's platforms help design user experiences that capitalize on negative reactions, because they produce far more engagement than positive reactions.
Third, among the many valuable insights provided by whistle-blowers inside the tech industry is this quotation: “The dynamics of the attention economy are structurally set up to undermine the human will”. Democracy and markets work when people can make choices aligned with their interests. The online advertisement-driven business model subverts choice and represents a foundational threat to markets, election integrity and democracy itself.
Fourth, technology gets its power through control of data. Data at the micro-personal level gives technology unprecedented power to influence. Data is not the new oil. It's the new plutonium—amazingly powerful, dangerous when it spreads, difficult to clean up and with serious consequences when improperly used. Data deployed through next generation 5G networks is transforming passive infrastructure into veritable digital nervous systems.
Our current domestic and global institutions, rules and regulatory frameworks are not designed to deal with any of these emerging challenges. Because cyberspace knows no natural borders, digital transformation's effects cannot be hermetically sealed within national boundaries. International coordination is critical.
With these observations in mind, here are my six recommendations for your consideration.
One, eliminate tax deductibility of specific categories of online ads.
Two, ban personalized online advertising for elections.
Three, implement strict data governance regulations for political parties.
Four, provide effective whistle-blower protections.
Five, add explicit personal liability alongside corporate responsibility to affect CEO and board of director decision-making.
Six, create a new institution for like-minded nations to address digital co-operation and stability.
Technology is disrupting governance and, if left unchecked, could render liberal democracy obsolete. By displacing the print and broadcast media in influencing public opinion, technology is becoming the new fourth estate. In our system of checks and balances, this makes technology coequal with the executive, the legislative bodies and the judiciary.
When this new fourth estate declines to appear before this committee, as Silicon Valley executives are currently doing, it is symbolically asserting this aspirational coequal status, but it is asserting that status and claiming its privileges without the traditions, disciplines, legitimacy or transparency that check the power of the traditional fourth estate.
The work of this international grand committee is a vital first step towards redress of this untenable current situation. As Professor Zuboff said last night, we Canadians are currently in a historic battle for the future of our democracy with a charade called Sidewalk Toronto.
I'm here to tell you that we will win that battle.
Thank you.
Jim Balsillie
2019-05-27 19:14
Thank you very much, Mr. Chair. I will take less than that because I will be giving formal comments to the committee tomorrow.
Mr. Chairman and committee members, it's my honour and privilege to testify today to such distinguished public leaders. Data governance is the most important public policy issue of our time. It is crosscutting with economic, social and security dimensions. It requires both national policy frameworks and international coordination.
In my testimony tomorrow, I will give more description, and then I will end with six specific recommendations. I will spend a couple of minutes today speaking to one of the recommendations that I would like to bring forward to the group, which is that you create a new institution for like-minded nations to address digital co-operation and stability.
The data-driven economy's effects cannot be contained within national borders. New approaches to international coordination and enforcement are critical as policy-makers develop new frameworks to preserve competitive markets and democratic systems that evolved over centuries under profoundly different technological conditions. We have arrived at a new Bretton Woods moment. We need new or reformed rules of the road for digitally mediated global commerce, a World Trade Organization 2.0.
In the aftermath of the 2008 financial crisis, the Financial Stability Board was created to foster global financial co-operation and stability. A similar global institution, say, a digital stability board, is needed to deal with the challenges posed by digital transformation. The nine countries on this committee plus the five other countries attending, totalling 14, could constitute the founding members of such a historic plurilateral body that would undoubtedly grow over time.
Thank you.
Taylor Owen
2019-05-27 19:19
Thank you, co-chairs Zimmer and Collins, and committee members, for having me. I have to say it's a real honour to be here with you and amongst these other panellists.
I'm particularly heartened, though, because even three years ago, I think a meeting like this would have seemed unnecessary to many in the public, the media, the technology sector and government itself. However, now I would suggest that we're in an entirely different policy moment. I want to make five observations about this policy space that we're in right now.
The first point I want to make is that it's pretty clear that self-regulation, and even many of the forms of co-regulation that are being discussed, have proven and will continue to prove insufficient for this problem. The financial incentives are simply powerfully aligned against meaningful reform. These are publicly traded, largely unregulated companies whose shareholders and directors expect growth, achieved by maximizing a revenue model that is itself part of the problem. This growth may or may not be aligned with the public interest.
The second point I want to make is that this problem is not one of bad actors but one of structure. Disinformation, hate speech, election interference, privacy breaches, mental health issues and anti-competitive behaviour must be treated as symptoms of the problem, not its cause. Public policy should therefore focus on the design and the incentives embedded in the design of the platforms themselves.
It is the design of the attention economy which incentivizes virality and engagement over reliable information. It is the design of the financial model of surveillance capitalism, which we'll hear much more about, which incentivizes data accumulation and its use to influence our behaviour. It is the design of group messaging which allows for harmful speech and even the incitement of violence to spread without scrutiny. It is the design for global scale that has incentivized imperfect automation solutions to content filtering, moderation and fact-checking. It is the design of our unregulated digital economy that has allowed our public sphere to become monopolized.
If democratic governments determine that this structure and this design is leading to negative social and economic outcomes, as I would argue it is, then it is their responsibility to govern.
The third point I would make is that governments that are taking this problem seriously, many of which are included here, are all converging I think on a markedly similar platform governance agenda. This agenda recognizes that there are no silver bullets to this broad set of problems we're talking about. Instead, policies must be domestically implemented and internationally coordinated across three categories: content policies which seek to address a wide range of both supply and demand issues about the nature, amplification and legality of content in our digital public sphere; data policies which ensure that public data is used for the public good and that citizens have far greater rights over the use, mobility and monetization of their data; and competition policies which promote free and competitive markets in the digital economy.
That's the platform governance agenda.
The fourth point I want to make is that the propensity when discussing this agenda to over-complicate solutions serves the interests of the status quo. I think there are many sensible policies that could and should be implemented immediately. The online ad micro-targeting market could be made radically more transparent, and in many cases suspended entirely. Data privacy regimes could be updated to provide far greater rights to individuals, and greater oversight and regulatory power to punish abuses. Tax policy could be modernized to better reflect the consumption of digital goods and to crack down on tax base erosion and profit shifting. Modernized competition policy could be used to restrict and roll back acquisitions and to separate platform ownership from application and product development. Civic media can be supported as a public good, and large-scale and long-term civic literacy and critical thinking efforts can be funded at scale by national governments, not by private organizations.
That few of these have been implemented is a problem of political will, not of policy or technical complexity.
Finally, the fifth point I want to make is that there are policy questions for which there are no easy solutions, no meaningful consensus and no appropriate existing international institutions, and where there may be irreconcilable tensions between the design of the platforms and the objectives of public policy.
The first is how we regulate harmful speech in a digital public sphere. At the moment, we've largely outsourced the application of national laws, as well as the interpretation of difficult trade-offs between free speech and personal and public harms, to the platforms themselves: companies that seek solutions, rightly from their perspective, that can be implemented at scale globally. In this case, I would argue that what is possible technically and financially for the companies might be insufficient for the public good or for public policy goals.
The second issue is liability for content online. We've clearly moved beyond the notion of platform neutrality and absolute safe harbour, but what legal mechanisms are best suited to holding platforms, their design and those who run them accountable?
Finally, as artificial intelligence increasingly shapes the character and economy of our digital public sphere, how are we going to bring these opaque systems into our laws, norms and regulations?
In my view, these difficult conversations, as opposed to what I think are the easier policies that can be implemented, should not be outsourced to the private sector. They need to be led by democratically accountable governments and their citizens, but this is going to require political will and policy leadership, precisely what I think this committee represents.
Thank you very much.
Brian Herman
2019-05-02 8:47
Thank you, Mr. Chairman. We thank you and the committee for allowing us to appear today.
You know my colleague David Matas, our senior legal counsel, who will speak to some of the detailed aspects of the thoughts that I'll be introducing. We won't go into some of the broad comments about the serious nature of online hate. The committee members are well aware of it, and we know from previous testimony that you've heard about the challenges in this space.
One year ago, B'nai Brith Canada called for a national action plan to deal with anti-Semitism—not a federal one but a national one—and combatting online anti-Semitism was part of that plan. This has become all the more important, given one key finding of our annual audit of anti-Semitic incidents in Canada, which we released the other day here in Ottawa. It found that of the 2,042 recorded incidents in 2018—an increase of 16.5% over 2017—80% of those anti-Semitic incidents took place via online platforms. This underscores the challenge for the Jewish community in Canada.
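For context, the audit figures quoted above can be worked through as follows. Note that the 2017 figure below is only implied, back-calculated from the rounded 16.5% increase:

```python
# Figures from B'nai Brith Canada's 2018 audit, as quoted in the testimony.
incidents_2018 = 2042
increase = 0.165       # 16.5% increase over 2017
online_share = 0.80    # 80% of incidents took place via online platforms

implied_2017 = incidents_2018 / (1 + increase)  # back-calculated estimate
online_2018 = incidents_2018 * online_share

print(round(implied_2017))  # ~1753 incidents implied for 2017
print(round(online_2018))   # ~1634 online incidents in 2018
```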
We started our work long ago. In October 2017, David Matas authored a paper on mobilizing Internet providers to combat anti-Semitism. In November 2017, we wrote to ministers of the government regarding the European Union's May 31, 2016, code of conduct on illegal online hate speech. We suggested at that time that Canada adopt the EU's “trusted flaggers” approach as one measure in addressing online hate. Both David and I can talk about that, and we can share both of those documents with the committee.
In December 2018, we submitted a policy paper to the government calling for Canada to develop an anti-hate strategy, a strategy that would include confronting online content that reflects anti-Semitism, Holocaust denial and Holocaust distortion.
In Canada, we know there is a need to foster public debate. The work of this committee will contribute to that end. The public needs to understand the challenges and the role they play in countering online hate, including disinformation. We feel strongly that action cannot just be left to governments, platforms and content providers. We're not calling for an online hate strategy from you. We know that we have to contribute to what the committee and the government do with specific ideas.
It's not for social media companies alone. At the recent meeting of G7 interior ministers, we noted that Public Safety Minister Ralph Goodale said, “The clear message was they [social media companies] have to show us clear progress or governments will use their legislative and regulatory authorities.” We honestly feel that there is no need to reinvent the wheel if we can draw on useful work that is already under way.
Second, B'nai Brith Canada understands that in addressing online hate generally, the scourge of anti-Semitism will also be captured, as long as we mark anti-Semitism as a particular problem.
There were some thoughts that others offered last autumn. We don't claim authorship of them, but they are worthy of examination.
The federal government needs to compel social media companies to be more transparent about their content moderation, including their responses to harmful speech.
Governments, together with civil society and affected community organizations, foundations, companies and universities, must support more research to understand and respond to harmful speech.
There is an idea about the creation of a forum similar to the Canadian Broadcast Standards Council to convene social media companies, civil society and other stakeholders, including representatives of the Jewish community, to develop and implement codes of conduct.
We need to re-examine the need for a successor to section 13 of the Canadian Human Rights Act, and David will address that.
There are active measures that we can take. For example, in November last year, UNESCO and the World Jewish Congress launched a new website called “Facts About the Holocaust”, designed as an interactive online tool to counter the messages of Holocaust denial and distortion that are circulating on the Internet and social media. This is a useful tool that we think can be considered.
The United Kingdom, just a few weeks ago, released an online harms white paper, and we were very struck by a number of proposals in that document that set out guidelines to tackle content of concern. One proposal in that white paper is the idea of an independent regulator to enforce the rules.
The U.K. also now has a code of practice for providers of online social media platforms, which was published on April 8. These are all good ideas worth considering.
Here are some recommendations, just to summarize.
First, data is the key. The government should incentivize and encourage provincial, territorial and municipal law enforcement agencies to more comprehensively collect, report and share hate crimes data, as well as details of hate incidents. The online dimension needs to be addressed. We are, in fact, in dialogue with Statistics Canada's Canadian Centre for Justice Statistics, which has a consultation exercise under way to see whether or not there is a capacity to record data, not only on hate crimes but on hate incidents, including the online dimension.
Second is to strengthen the legal framework. We feel that Parliament has an opportunity to lead the fight against cyber-hate by increasing protections for targets, as well as penalties for perpetrators.
Third is improved training for law enforcement. Elsewhere, B'nai Brith Canada has argued for more hate crimes units in major cities, or at the least, clear hate crimes strategies and better training.
Fourth is robust governance from social media platforms. Elected leaders and government officials have an important role to play in encouraging social media platforms to institute robust and verifiable industry-wide self-governance. That's already been addressed, but that needs to be the first step, followed by others.
Then, there needs to be more international co-operation. Canada should ratify the 2002 additional protocol to the Council of Europe's Convention on Cybercrime.
There are a number of ideas that we've submitted to the clerk that go beyond what I've said. One of our partner agencies, the Anti-Defamation League in the United States, has done a considerable amount of work on the challenge of online hate, and we've passed to the clerk a number of specific proposals that the ADL has put forward for consideration by industry.
Thank you.
Arif Virani
Lib. (ON)
If you could provide the cybercrime protocol—the 2002 protocol—to this committee so it forms part of our evidentiary record, that would be helpful.
David Matas
2019-05-02 9:21
My colleague, Brian Herman, was pointing out to me that there was actually a bill in Parliament to implement that protocol, which only went into first reading. We'll get you that bill as well.
David Matas
2019-05-02 9:21
I think it was 2005.
Brian Herman
2019-05-02 9:21
I'm not sure, but the election might have gotten in the way of that. I noticed on LEGISinfo that it's there, and it just went through first reading. We certainly will provide the protocol.
Bob Hamilton
2019-02-28 11:21
Good morning, Mr. Chair.
Thank you for the opportunity to appear before the committee to present the CRA's 2018-19 supplementary estimates (B) and the 2019-20 interim estimates, and to answer any questions you may have on either of these funding requests.
I won't reintroduce everybody, as you have already done that, but I have with me today officials who can help me to answer any of the questions you have.
Mr. Chair, as you are aware, the CRA is an organization that serves Canadians and is responsible for the administration of federal tax programs and certain provincial and territorial tax programs, as well as the delivery of a number of benefit payment and other programs.
The CRA is committed to serving millions of Canadians each year through the administration of Canada’s taxes and benefits. The CRA works to administer taxes fairly and in a way that provides Canadians with the information they need to comply with their obligations. We strive to build Canadians’ trust and confidence in Canada’s tax and benefit administration.
The CRA is committed to improving its services and to providing Canadians with a world-class administration. I would note that recently we launched the current tax filing season. The CRA is working to ensure that Canadians know about new and updated services available to them to help them prepare their taxes.
Now I'll provide the committee with a brief overview of the supplementary estimates (B) and the interim estimates.
On the supplementary estimates (B), the agency previously tabled 2018-19 main estimates and supplementary estimates (A), including statutory estimates such as employee benefits. Through these estimates today, supplementary estimates (B), the Canada Revenue Agency is seeking an increase of $18.3 million in its voted and statutory authorities for the following four items:
First, the agency is seeking $13.1 million in incremental funding to administer the federal fuel charge, including the climate action incentive.
Second, the agency is also seeking $374,000 to respond to the recommendations of the Report of the Consultation Panel on the Political Activities of Charities, which includes support related to legislative changes to the Income Tax Act. This funding is primarily for outreach and communications with charities.
Third, the agency is seeking a transfer of $2.9 million from Global Affairs Canada for the global knowledge sharing platform for tax administrations. The platform supports capacity building in developing countries and will equip them to deal with the global challenge of international tax evasion and aggressive tax avoidance.
Since 2016, the agency has successfully operated a knowledge sharing platform prototype to ensure it responds to stakeholders' needs. The funding is required to develop an end-state platform modelled on the prototype. The platform will meet global expectations with respect to platform stability and information security, and serve to better support future growth and modifications. It will provide access to virtual classrooms, communities of practice and a broad library of global best practices in tax administration. This is an example of Canadian leadership and innovation.
Last, also included in the supplementary estimates is a statutory increase of $1.9 million for employee benefit costs associated with the additional funding from the Treasury Board submissions being sought through these estimates.
If approved, these supplementary estimates would increase the agency's 2018-19 authorities to $4.6 billion.
We are now moving on to the 2019-20 interim estimates. In order to begin the 2019-20 fiscal year, the Canada Revenue Agency is seeking a total of $869 million through the interim estimates. This represents the funding required to cover expected payments that will occur in the first quarter of the fiscal year for ongoing operations. The funding being requested as part of the interim estimates is roughly one-quarter of the voted appropriations that will be sought by CRA through the 2019-20 main estimates.
Approximately $4.4 billion in total funding is anticipated through the 2019-20 main estimates. Of this amount, $3.5 billion requires approval by Parliament. The remaining $967.4 million represents statutory forecasts that do not require additional approval, such as contributions to employee benefit plans and children's special allowance payments.
In closing, the resources sought through these estimates will allow the Canada Revenue Agency to continue to deliver on its mission to administer tax, benefits, and other programs, and to ensure compliance on behalf of governments across Canada.
At this time, Mr. Chair, I'd be pleased to respond to any questions the committee may have.
View Peter Fragiskatos Profile
Lib. (ON)
Thank you, Mr. Chair.
Thank you to the officials for being here. Thank you for serving the public. I'm sorry about this morning's delays, but that sometimes happens in this committee and in other committees.
In any case, I'm quite interested in the global knowledge sharing platform. Since I only have five minutes, I have just a couple of questions on that, though.
What was learned from the prototype that you mentioned, Mr. Hamilton? I take it that it was started in 2016, as you said, in small form, and from there obviously lessons have been gained as to how to expand it.
What is the broad purpose of the global knowledge sharing platform? In the end, what are we trying to achieve with it? It sounds like a great way to facilitate contact between Canadian officials working on tax and those in the developing world, but could you go into those two things?
Bob Hamilton
View Bob Hamilton Profile
Bob Hamilton
2019-02-28 11:28
Yes. Thank you, Mr. Chair.
I'll elaborate a little bit on the knowledge sharing platform, which I might refer to as the KSP. It started a bit before I arrived at the agency, and the people who started it showed real foresight. It's an example of the agency demonstrating innovation and contributing to a broad, global agenda.
Currently, I am the vice-chair of the Forum on Tax Administration, a group of about 50 tax jurisdictions around the world. One project that we, as Canada, lead is building capacity in developing countries. On your question of what we are trying to achieve with this, there's a movement. As we look at the base erosion and profit shifting initiative of the OECD and the countries involved, it imposes demands on all jurisdictions, developed and developing, to have systems and rules in place so that we can better coordinate our activity as we try to battle global tax evasion.
Developing countries can face particular challenges in this area. Their tax systems are sometimes not as sophisticated as ours, and implementing these systems and rules in their jurisdictions requires building up their capacity.
Through the Forum on Tax Administration, we have tried to contribute in the ways we can, transferring knowledge and sharing some of our best practices and experience with developing countries as they try to build up their tax systems. It is not that we're perfect, but we certainly have things to share that should be helpful to these countries.
It certainly benefits developing countries to the extent that we, as donors, can help them build their tax systems. It also helps developed countries, and Canada in particular: the global tax system works better when there is good information sharing and good coordination. The better each country's tax system is, the better able we all are to battle tax evasion on the global front.
That is the purpose of it. It's something that Canada has shown leadership in.
The KSP in particular was started as a prototype by the CRA as part of this broader effort. We took the initiative to build the prototype and developed it as far as we could. The experience has been that there is demand for it: lots of countries are tapping into the current prototype to see what training material we have and how they can access the expertise, documents and guidance available in developed countries.
It's a web-based platform, an innovative way to transfer knowledge that's different from sending a person. We could still send someone from Canada to these countries, and there remains a demand for that, but this is a way of disseminating that information electronically.
Bob Hamilton
View Bob Hamilton Profile
Bob Hamilton
2019-02-28 11:31
Countries like it and the demand is there. The pilot was successful, and now the next step is to grow it so that it's a more stable platform that can take on more countries and more content and be more useful globally. To do that, we have to invest in the systems and the infrastructure.
That's the statement I have.
View Peter Fragiskatos Profile
Lib. (ON)
I can tell you're very passionate about this, and the program's merits speak for themselves.
You mentioned best practices. Could you elaborate on one or two best practices that will be shared, in terms of knowledge, with folks in developing countries?