2018 elections put digital protection measures to the test

With protective measures being tested in Mexico, Sweden and elsewhere, Canada’s approach to privacy and disinformation has become increasingly regulatory. Will any changes be enough, and in place by 2019? 

June 5, 2018
An aide puts out examples of Facebook pages, as executives appear before the US House Intelligence Committee to answer questions related to Russian use of social media to influence US elections, November 1, 2017. REUTERS/Aaron P. Bernstein

Just last year, the prospect of public regulation of Facebook in Canada looked remote. In October 2017, the federal government supported the launch of Facebook Canada’s Election Integrity Initiative, a program widely understood as self-regulation. Framed as a response to a report from Canada’s Communications Security Establishment warning that foreign actors might digitally interfere with the 2019 federal election, the initiative includes additional transparency measures, a digital news literacy campaign, an emergency cyber threats hotline and the publication of Facebook’s Cyber Hygiene Guide for politicians and political parties. 

Karina Gould, Canada’s minister of democratic institutions, touted the initiative as “a step in the right direction […] in addressing the challenges of the digital era and the continued protection of the democratic process.” The government’s support for the program clearly reflected its initial hands-off approach to digital policy. 

But its approach has taken a turn recently. In April, following the highly publicized Cambridge Analytica data-mining scandal, members of Facebook’s executive team were grilled by the Standing Committee on Access to Information, Privacy and Ethics. The scandal represents a major turning point in how the federal government thinks about the power held by technology companies and how users think about the protection of their personal data. Personal data from the Facebook accounts of more than 600,000 Canadians is alleged to have been collected by Cambridge Analytica, a political consulting firm hired by Donald Trump’s 2016 campaign, without users’ knowledge or permission. The harvesting is reported to have affected as many as 87 million people worldwide, and some of the collected information included private messages. 

“What is alleged to have occurred is a huge breach of trust to our users,” Kevin Chan, Facebook’s global director and head of public policy, stated before the committee, “and for that we are very sorry.” 

The standing committee’s hearings have included various witnesses, such as Cambridge Analytica whistleblower Christopher Wylie and the UK’s information commissioner, Elizabeth Denham. Although it is unclear how many more sessions the committee will hold on these matters, the 10 meetings held over the past six weeks make clear that the breach is being taken seriously. 

The hearings have addressed a range of issues, including privacy, disinformation, content moderation, data security, cyber threats and monopoly power. With the 2019 federal election looming ever closer, the discussion has often centred on the protection of robust electoral integrity in Canada — an issue that has gained prominence alongside the development of new digital technologies. 

For one, there is the threat of digital interference by foreign entities in the form of disinformation, such as fake news and illegitimate political advertisements. Such scenarios are one element of the US Federal Bureau of Investigation’s probe into the Russian influence operation during the 2016 US presidential election campaign. Indeed, these are the risks that prompted the launch of Facebook Canada’s Election Integrity Initiative. 

There are also concerns about how political parties use personal data gathered from social media platforms and other data-driven services. This issue is particularly relevant to Canada’s electoral system. As Colin Bennett, professor of political science at the University of Victoria, pointed out to the standing committee, political parties in Canada, unlike those in many advanced democracies, are largely unburdened by privacy legislation. 

Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA), for instance, covers only commercial data collection, and political parties fall outside Canada’s Privacy Act, which applies only to ‘government institutions.’ This means that, unlike in countries covered by the European Union’s General Data Protection Regulation, political parties in Canada are not held to nearly as high a standard as commercial organizations. 

Lessons from Ireland, the US, Mexico and beyond

Concerns about the risks data-driven platforms pose to electoral integrity have gone global, a fact that has not escaped Facebook and other technology giants. In advance of the recent abortion referendum in Ireland, Facebook blocked all related foreign advertisements. Google banned all referendum advertisements from its platform in the two weeks prior to the vote. However, blocking measures by these companies have so far been rare; they have instead opted for tools and resources such as those included in the Election Integrity Initiative. 

In April, Facebook co-founder and CEO Mark Zuckerberg announced that the social media company would soon require companies and people to request authorization prior to running political advertisements. The commitment would first take effect in the US, then roll out to Facebook users across the world. 

Facebook also committed to launching a tool allowing users to easily view all the advertisements a page is running and requiring identity verification from advertisers wishing to run political advertisements and individuals who run large Facebook pages. In a blog post outlining the tool, Zuckerberg wrote, “[t]hese steps by themselves won’t stop all people trying to game the system. But they will make it a lot harder for anyone to do what the Russians did during the 2016 election.” 

But, if these steps aren’t enough, what more can be done? And what about areas of electoral integrity linked to the collection and analysis of user data — those scenarios where breaches of privacy can result in the misuse of personal data for political ends?

Canada’s policymakers could look to the experience of two countries preparing for federal elections, Mexico and Sweden, whose votes are scheduled for July 1 and September 9, respectively. To be sure, both countries’ political and media contexts differ from Canada’s in significant ways. Mexico, for instance, is experiencing its most violent campaign period in recent history, with at least 173 attacks on politically active individuals recorded. These events have implications for election integrity, both for potential candidates’ willingness to run for office and for voters’ ability to cast their ballots freely on election day. 

Despite these differences, however, lessons can be learned from the current efforts of the Mexican and Swedish governments to grapple with the complexities of electoral integrity amid the fallout of the Cambridge Analytica scandal. Indeed, in this fast-changing technological landscape, these efforts are in many ways experiments, and ones whose challenges and successes are internationally relevant. 

"In this fast-changing technological landscape, these efforts are in many ways experiments."

In Mexico, the federal government has adopted a relatively hands-off approach to digital policy, opting to partner with technology companies rather than develop comprehensive public regulation. The country’s National Electoral Institute is working with Facebook, Twitter and Google to combat fake news and is running an educational campaign in Mexican newspapers. These actions have so far had limited impact, Poynter reports, as the program neither allows for collaboration with Mexican media groups nor creates a mechanism that informs individuals when they have been exposed to fake news online. To fill this gap, 60 publishers, academics and public interest groups in Mexico have banded together to fact-check statements made by presidential candidates. 

But the implications of the Cambridge Analytica data harvesting have not gone unnoticed in Mexico. In March, Bloomberg revealed that Cambridge Analytica tried to secure work with Mexican presidential candidates and to collect data from an app company called Pig.gi. While Pig.gi disassociated itself from the political consulting firm after the scandal received international attention, it is unclear whether Cambridge Analytica established ties with presidential candidates. What is apparent is that in Mexico, as in Canada, foreign and domestic misuse of digital technologies and the data they collect poses a real risk to electoral integrity. 

In Sweden, policymakers have focused on how fake news from Russian and Islamist sources shapes the way Swedish voters consume and engage in political discussion online. The Swedish Civil Contingencies Agency identified a list of media outlets where disinformation may be spread and communicated its findings to the country’s security service, election authority and police. While it is not evident exactly how authorities plan to crack down on illegitimate content, a report from the Carnegie Endowment for International Peace suggests that the efforts are a serious attempt to “increase [the] capacity to identify vulnerabilities and counter any threats to the election process.” 

The country has also pursued related educational measures in Swedish schools, including media literacy courses. Swedish government officials have met with executives from Google, Facebook and Twitter to discuss, among other items, the protection of the country’s electoral integrity.

Sweden is also one of the European countries that passed national legislation complementing the General Data Protection Regulation (GDPR) by the EU’s implementation deadline of May 25. The GDPR, a massive overhaul of the EU’s privacy regulations, gives people in Europe more control over what data is collected about them and how it is stored and used. The legislation has been criticized for excessive complexity, but it has nonetheless forced companies like Facebook to adjust their business practices to ensure compliance. Hailed by many advocates as a win for privacy and transparency, the GDPR is a substantial step forward for the protection of European residents’ data rights.

Moreover, the legislation is a boon for the electoral integrity of EU member states. As data-driven companies and political parties are required to obtain explicit consent before collecting and using personal data, the risk that this information will be misused for political ends is reduced. 

Sweden and Mexico’s dealings with Facebook and other technology companies illustrate two ways democratic governments have chosen to preserve electoral integrity in the digital age. In Mexico, where the federal government works mainly in partnership with technology companies, efforts to safeguard electoral integrity are perceived as largely ineffective. Accordingly, civic groups have opted to band together to fill in the gaps. In Sweden, efforts by domestic agencies to combat fake news and other risks to electoral integrity are bolstered by the EU-wide GDPR. 

In both countries, policymakers and onlookers seem to be increasingly aware that the risks facing electoral integrity are not simply about the spread of disinformation but also the misuse of personal data for political ends. 

Implications for Canada

What do regulatory experiments abroad and recent events like the Cambridge Analytica scandal mean for Canada? A couple of things. For one, they suggest that self-regulation by technology companies may not be enough. Second, they show that measures to protect voters’ privacy are as important as measures to combat disinformation. When individuals have greater control over their data and are given the tools to better understand how their data is being used, it is less likely that this information will be exploited for political purposes. 

At a recent standing committee meeting, the privacy commissioner of Canada, Daniel Therrien, stated, “[t]he time for self-regulation is over […] It is not enough to simply ask companies to live up to their responsibilities. Canadians need stronger privacy laws that will protect them when organizations fail to do so.” Therrien suggested that his office should have more power to ensure private sector organizations respect privacy laws and to issue financial penalties to those that do not. 

New Democrat Member of Parliament Charlie Angus later followed up on this proposal, asking Chan whether Facebook would be supportive of a federal auditor for digital platforms. The Facebook executive replied, “I think there are many different instruments that already exist. I think the privacy commissioner is investigating.” 

The federal government’s recently tabled Bill C-76 addresses some of these concerns. Among other measures, the bill would prohibit organizations from knowingly selling political advertising space to foreign entities. Political parties would also be required to submit privacy policies to Elections Canada and publicly outline what information they collect on voters, how they collect it, and the steps taken to protect that data. 

However, it is uncertain whether the legislation will be in force in time for the 2019 federal campaign. In any case, no such protections will be in place for Ontario’s June 7 provincial election, where the effectiveness of Elections Ontario’s effort to modernize the electoral process so that it “protects the integrity, security and privacy of our elections” remains unclear. Moreover, experts note that Bill C-76 does not address some of the issues raised by the Cambridge Analytica scandal. As Bennett, the Victoria-based professor, told iPolitics: “What’s not in there, is the personal information that may be purchased [by political parties].” 

This bill is clearly not the comprehensive piece of privacy legislation that the GDPR provides the European Union. What is clear is that people living in Canada need such protections, and their elections would be all the better for them.