Jail Time for Private Detective

Private detective who led a hacking operation against climate activists, short sellers and others sentenced

Several years ago, we investigated a sprawling hack-for-hire operation targeting a cross section of civil society: lawyers, journalists, activists, and short sellers. That investigation resulted in the 2020 Citizen Lab report, Dark Basin: Uncovering a Massive Hack-For-Hire Operation. Now, a key figure in that operation, a private investigator named Aviram Azari, has been sentenced to 80 months in U.S. prison.

Aviram Azari, an Israel-based private detective, is facing consequences for his role in a hack-for-hire operation exposed by the Citizen Lab

Here’s the story.

Starting around 2017, we were alerted to what appeared at first blush to be a phishing campaign targeting victims in the energy sector with some kind of Eurasian nexus. We had just finished a major report, entitled “Tainted Leaks,” analyzing a Russian hack-and-leak operation, and some characteristics of this new campaign seemed to match tactics used by the perpetrators in that earlier investigation. (In other words, we suspected Russian threat actors.) But as we dug further and identified more victims, we realized that what we were unearthing went far beyond Russia and Eurasia.

An ingenious investigatory technique that then-Citizen Lab researcher Adam Hulcoop employed in the Tainted Leaks investigation came in handy in the Dark Basin investigation. The hackers used a free, web-based link shortener to craft their phishing emails. Hulcoop figured out that we could systematically unravel those links, which allowed us to identify and notify numerous additional victims. (It wasn’t the first mistake the operators made – more on that later.) The technique is explained in more detail here.
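For illustration only, here is a minimal Python sketch of the enumeration idea, assuming (hypothetically) that the shortener assigned sequential base-62 identifiers; the alphabet and ID scheme here are invented for the example, not taken from the actual service:

```python
import string

# Hypothetical base-62 alphabet; the real shortener's scheme may differ.
ALPHABET = string.ascii_letters + string.digits


def neighbour_ids(seed, span):
    """Enumerate shortener IDs around a known phishing link's ID,
    assuming the service assigned identifiers sequentially."""

    def to_int(s):
        # Interpret an ID string as a base-62 number.
        n = 0
        for ch in s:
            n = n * len(ALPHABET) + ALPHABET.index(ch)
        return n

    def to_id(n, width):
        # Convert back to a fixed-width ID string.
        digits = []
        while n:
            n, r = divmod(n, len(ALPHABET))
            digits.append(ALPHABET[r])
        return "".join(reversed(digits)).rjust(width, ALPHABET[0])

    base = to_int(seed)
    return [to_id(i, len(seed)) for i in range(max(base - span, 0), base + span + 1)]
```

Each candidate ID could then be resolved with an HTTP request to read the shortener’s redirect target, revealing other phishing URLs – and hence other victims – created by the same operators.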

One of the victims we worked with was Matthew Earl, a UK-based short seller. A short seller is someone who bets that a company is overvalued and that its stock will drop. Earl was shorting a German financial-processing firm called Wirecard, which seemed to him grossly overvalued and involved in shady business dealings. He wrote about it at the time. Wirecard didn’t like that, so it hired Azari, who in turn hired a hack-for-hire firm to break into Earl’s email accounts and publish incriminating material about him. Earl faced all sorts of other harassment because of his dealings with Wirecard. You can listen to Earl and Citizen Lab researcher John Scott-Railton recounting our investigation in this riveting NPR episode.

Shortly after our report was published in June 2020, Wirecard filed for insolvency, exposed as a massive fraud. German authorities launched a criminal investigation. One of the principals of Wirecard, Jan Marsalek, became a fugitive from justice and fled to Russia, where he appears to be living free under some kind of Russian security service protection. Meanwhile, the former CEO, Markus Braun, is facing numerous criminal charges in Germany, including fraud, breach of trust, and accounting manipulation.

But Earl and Wirecard weren’t the only pieces to this puzzle. We identified numerous other victims of this phishing operation, including activists working on net neutrality in the United States and environmental and climate crisis advocates involved in the so-called ExxonKnew campaign – so-called because the campaigners alleged the oil giant was aware of fossil fuels’ contributions to the climate crisis but actively sought to bury the evidence in order to protect its bottom line.

Our sleuthing eventually ID’d a Delhi, India-based hack-for-hire firm called BellTroX, which was being contracted to hack the targets we were investigating. The hackers at BellTroX made a number of sloppy operational mistakes which allowed us to identify them, including boasting about their computer hacking skills on their LinkedIn profiles and using their own resumes as bait for test phishing messages.

In the end, some of the victims we notified were rightfully upset about the hacking they experienced and filed complaints with law enforcement. The U.S. Southern District of New York began an investigation. At the request of the victims, we cooperated with that investigation and turned over some of the relevant data we collected to the prosecutors. The SDNY investigation eventually led to the arrest and conviction of Azari.

So what was Azari’s role in all of this? The bottom line is that he was a middleman. There are good details in the DOJ’s press release, and you can read the U.S. government’s sentencing memorandum here, which has even more detail.

Born in Israel, Azari was a former policeman who left the force and spun up a private investigation service, called “Aviram Hawk” or “Aviram Netz.” Clients contracted Azari when they had a problem to fix. He then hired BellTroX to hack people’s emails, which would then be used however the clients wanted. We know from the U.S. conviction that Wirecard was one such client. But who contracted Azari to engineer the hacking of the ExxonKnew campaigners, the net neutrality advocates, or the other victims we identified in our Dark Basin investigation is not known, since Azari kept his mouth shut (and paid the price in jail time). By the way, the U.S. authorities say Azari made about $4.8 million running his schemes over this period, which he is now required to forfeit to the U.S. government.

There are some excellent investigations of the Indian hack-for-hire industry that have been undertaken since our Dark Basin investigation. Journalists Raphael Satter, Christopher Bing, and their colleagues at Reuters spent years digging into the Indian hack-for-hire marketplace, showing how such firms are routinely employed in litigation battles, and just last week published a deep dive on one such firm, called Appin, whose veterans went on to form BellTroX. David Kirkpatrick of The New Yorker also published a very detailed profile of BellTroX and the Indian hack-for-hire industry. I highly recommend their stories for more on this sordid underbelly of global subversion.


What do we learn from all of these investigations? The private intelligence and hack-for-hire industry is spreading wildly, largely unregulated, and implicated in all sorts of abuses and criminal acts. Although we can chalk up a win for law enforcement in this particular case, the victory is partial, since Azari was only a middleman. The ultimate perpetrators (i.e., those who hired Azari) remain unpunished. And for every successful conviction like this, no doubt countless other schemes go undetected. That gap leaves many victims understandably frustrated. “While it’s satisfying to see Azari sentenced for these crimes committed many years ago,” explained Kert Davies of the Center for Climate Integrity, a victim of the hacking scheme we exposed, “we would still love to know who paid him to target me and my climate activist and lawyer colleagues.”

Sadly, it is normal today for big corporations, law firms, private investigators and those they hire to brazenly violate the law and hack targets in order to stymie some bad publicity or interfere in political advocacy efforts. It’s a real mess and I think it’s going to get a lot worse before it gets better, unfortunately. However, it is also gratifying to know that our research efforts helped to bring about even a small measure of justice.


Media and Other Related Coverage

Financial Times


New York Times (2020)

Union of Concerned Scientists’ Statement

Citizen Lab senior researcher John Scott-Railton, who led our Dark Basin investigation, has an X thread on the sentencing here.



My statement on TikTok’s continuing references to Citizen Lab research

I am disappointed that TikTok executives continue citing the Citizen Lab’s research in their statements to governments as somehow exculpatory. 

I’ve called them out on this in the past, and it’s unfortunate that I have to do it again.

Two years ago we analyzed the TikTok app. Our analysis was restricted to the application itself and the kinds of data it collected. Broadly speaking, we found that it was similar to other social media apps: a vacuum cleaner of personal data. This is not a good thing.

We also highlighted additional concerns, including about latent functionality that could potentially be activated, and noted that TikTok contained some dormant code originally written for Douyin (TikTok’s Chinese counterpart, also owned by ByteDance).

Our analysis was explicit about having no visibility into what happened to user data once it was collected and transmitted back to TikTok’s servers. Although we had no way to determine whether or not it had happened, we even speculated about possible mechanisms through which the Chinese government might use unconventional techniques to obtain TikTok user data via pressure on ByteDance.

The conversation about potential privacy and national security concerns with TikTok should serve as a reminder that most social media apps are unacceptably invasive-by-design, treat users as raw material for personal data surveillance, and fall short on transparency about their data sharing practices. This is why comprehensive privacy legislation is desperately needed.

The Autocrat in your iPhone

I am very pleased to bring to your attention a new article I have published in the January/February 2023 issue of Foreign Affairs magazine, entitled “The Autocrat in your iPhone: How Mercenary Spyware Threatens Democracy.”

I had three objectives in writing this article:

  1. To publish an authoritative article on mercenary spyware. While there is a lot of news about Pegasus and spyware abuses, there was not yet a single up-to-date piece that summarizes the issues surrounding this industry for an international relations and foreign policy audience. My objective was to help fill that gap.
  2. To describe how spyware threatens the liberal order. I wanted to explain how this industry is helping to undermine the pillars of the liberal international order — including by disrupting free and fair elections, stifling independent journalism, violating attorney-client privileges, neutralizing independent investigations and oversight bodies, and stifling civil society advocacy. While there is a lot of attention to democratic backsliding and the spread of authoritarian practices, there has been relatively less attention paid to the role the unregulated spyware industry is having in these processes. I wanted to remedy that, by drawing on the numerous case studies on spyware undertaken by the Citizen Lab, our partner organizations, and journalists over the last ten years.
  3. To outline solutions. Finally, I wanted to point to some ways in which legal and other restraints could help remedy the harms caused by the mercenary spyware industry. Solving this problem is going to be challenging, in particular because many governments are conflicted and benefit by the status quo. But the tide is turning, and governments, the private sector, and civil society are all taking action — hopefully before it is too late.

It is an honour to publish in Foreign Affairs. The journal has had a big impact on my career. I read it closely as an undergraduate, both current issues and the classics (e.g., George Kennan’s famous “X” article, titled “The Sources of Soviet Conduct,” which was published in the July 1947 issue and helped make the case for post-World War II containment strategies). Those articles inspired my interest in international relations and global security, which I ended up pursuing as a profession.

Read the full article here: https://www.foreignaffairs.com/world/autocrat-in-your-iphone-mercenary-spyware-ronald-deibert


About the Citizen Lab’s CatalanGate Report

The Citizen Lab’s latest research report — called CatalanGate — has precipitated a major controversy in Spain about the widespread abuse of spyware and the apparent lack of oversight over Spain’s intelligence services.

This week, the head of Spain’s CNI was dismissed from her post because of the fallout from our report. The European Parliament has also raised questions and has started an investigation into the surveillance of MEPs and others in Spain.

Amidst these developments, we have observed persistent, misleading and false assertions and theories put forth about the Citizen Lab, our research, and our independence.

A number of these questions were recently put to us in a letter by a group of Spanish MEPs. We are happy to respond that there is no basis for the concerns.

Below, I have posted a copy of my letter that was sent on my behalf by the University of Toronto to Mr. Jordi Cañas, MEP.

New Citizen Lab Report: Pegasus vs Predator

A new Citizen Lab report was published yesterday, entitled “Pegasus vs Predator: Dissident’s Doubly Infected iPhone Reveals Cytrox Mercenary Spyware,” authored by Bill Marczak, John Scott-Railton, Bahr Abdul-Razzak, Noura Al-Jizawi, Siena Anstis, Kristin Berdan, and Ron Deibert.


In this report, we detail our investigation into the hacking of the devices of two Egyptians with mercenary spyware technology.

The first – Ayman Nour – is an exiled Egyptian politician now residing in Turkey. Our investigation determined his device was simultaneously hacked earlier this year with Pegasus, the spyware made by the notorious Israel-based NSO Group, and a second piece of spyware called Predator, used by a different government client and made by a less-well-known spyware firm called Cytrox.

The targeting of a single individual with two separate spyware products used by two different government clients shows just how bad the abuse problem around mercenary surveillance technology has become.

The second individual is an exiled Egyptian journalist who chooses to remain anonymous. We determined that his phone was also hacked, though solely with Cytrox’s Predator spyware. We attribute the hacking of the two Egyptians’ phones with medium-high confidence to the government of Egypt.

While NSO Group has received a lot of publicity in recent months, Cytrox has far less exposure outside of the governments it serves. Our report is the first to identify the company’s spyware being abused in the wild by a government client.

We also undertook network scanning for active installations of Cytrox worldwide, and are able to disclose that Armenia, Egypt, Greece, Indonesia, Madagascar, Oman, Saudi Arabia, and Serbia are likely government clients.

Cytrox – part of the “Star Alliance” of surveillance firms

Our investigation also dug deeply into Cytrox’s extremely complicated corporate history, industry alliances, and registration records. We identified some of the key individuals involved in leadership and executive positions at the firm. Cytrox was reported to be part of Intellexa, the so-called “Star Alliance of spyware,” which was formed to compete with NSO Group. Intellexa describes itself with pride as “EU-based and regulated, with six sites and R&D labs throughout Europe.” That may turn out to be a problem for them.

Digging through these arrangements was like entering a dark labyrinth of sketchy individuals dogged by various legal improprieties, and of dubious private intelligence and mercenary surveillance companies spread across several state jurisdictions.

These types of ownership obfuscation techniques – similar to those used by plutocrats and money launderers – make investigation, regulation, and public accountability efforts challenging. We should expect to see these techniques more widely practiced as the heat is turned up on the spyware industry (more on that below).

Vulnerability Disclosure and Meta (Facebook) Enforcement

In accordance with the Citizen Lab’s vulnerability disclosure policy, we shared Predator artefacts with Apple, and they confirmed they are investigating. As substantial Predator targeting took place through WhatsApp, the Citizen Lab also shared artefacts with Meta’s security team (WhatsApp’s owner, formerly known as Facebook).

We are happy to say that our report’s publication is co-timed with Meta’s announcement that it is taking extensive enforcement action against Cytrox, including removal of about 300 Facebook pages and Instagram accounts linked to the firm. Meta is also publishing detailed technical indicators linked to Cytrox’s operations, which greatly assists future efforts of security researchers.

Meta’s security team’s investigation also corroborates the Citizen Lab’s identification of Cytrox customers in Egypt, Armenia, Greece, Saudi Arabia, and Oman, and they add Colombia, Côte d’Ivoire, Vietnam, Philippines, and Germany (Germany!) to the list of other government clients. Meta also confirms abusive worldwide targeting of civil society by Cytrox’s customers. Looking at the list of them, it is no wonder.

In the absence of regulations to the contrary, despotic regimes and other illiberal government agencies will use surveillance technology to go after anyone that is construed as a threat to their malicious aims.

The Mercenary Spyware Industry

NSO Group is definitely on the ropes and wobbling badly…and deservedly so. But our latest report should remind us that the problems around mercenary spyware go well beyond a single company. As one goes down, others will bounce up to make a buck. Rebranding is also always a possibility for an industry that thrives in the shadows and employs many of the same corporate shell games as do the despots and dictators they serve.

This report shows that digital accountability researchers at the Citizen Lab, Amnesty International, and numerous threat teams – working alongside many NGO and investigative journalist partners worldwide – are collectively getting better at tracking and exposing these firms’ malfeasances. Sharing of indicators and other information is getting better too.

However, to avoid playing a constant game of “whack-a-mole”, we need governments to step up and act to control what has effectively become a kind of globally distributed “despotism-as-a-service.”

Fortunately, some government officials appear to be waking up to the threat, and are carving a regulatory path forward:

  • On 3 November 2021, the United States Department of Commerce announced that it was putting NSO Group, Candiru (another mercenary spyware company the Citizen Lab has reported on), and other spyware firms on its designated “entity list” for malicious cyber activities.
  • As part of the “Democracy Summit,” on 10 December 2021 the governments of Australia, Denmark, Norway, and the United States announced a new export control and human rights initiative, noting that authoritarian governments “are using surveillance tools and other related technologies in connection with serious human rights abuses, both within their countries and across international borders.”
  • Last week, the head of GCHQ, Sir Jeremy Fleming, said that the UK government had spyware firms like NSO Group “under close review,” and that their sales were “completely beyond the pale”, adding, “countries or companies that promulgate [spyware technology] in an unconstrained way like that are damaging and should not be tolerated.”
  • And just yesterday, 15 December 2021, a group of U.S. lawmakers, led by Senator Ron Wyden and Representative Adam Schiff, sent a letter to the United States Secretary of the Treasury and the Secretary of State advocating that executives at NSO Group and several other surveillance firms be sanctioned under the Global Magnitsky Act. That would be a very big problem for those executives: bank accounts frozen, travel disrupted, etc. The lawmakers’ letter specifically highlights the research of the Citizen Lab and our colleagues at Amnesty International.

Now, let’s see if other liberal democratic governments follow suit. (I’m looking at you, Canada).

Read the full report here: https://citizenlab.ca/2021/12/pegasus-vs-predator-dissidents-doubly-infected-iphone-reveals-cytrox-mercenary-spyware/

Read about Meta’s Enforcement Action: https://about.fb.com/news/2021/12/taking-action-against-surveillance-for-hire/

News Coverage

P.S. A final comment: this year (2021) marks the 20th anniversary of the founding of the Citizen Lab. We haven’t marked the occasion in any special way because we are all more interested in the future than past accomplishments. However, I’m very proud of what we have achieved over the last two decades together, and I feel very lucky to have been surrounded by so many accomplished and ethical researchers over the years.

Now, onto the next twenty!

Chasing Circles


We are publishing a new Citizen Lab report today, entitled “Running in Circles: Uncovering The Clients of Cyberespionage Firm Circles,” authored by Bill Marczak, John Scott-Railton, Siddharth Rao, Siena Anstis, and Ron Deibert.


The global telecommunications ecosystem upon which we are all heavily dependent was not invented from scratch with a single well-thought-out plan. Instead, it went through successive waves of evolution over decades, intensifying in more recent years as new digital and mobile technologies have been invented. As a result, security has been ad hoc, fragmented, and reactive, leaving a hodge-podge of legacy standards and protocols in place, some of which are still open to serious exploitation.

Arguably the most significant of these is something called SS7, a protocol developed in 1975 to handle interoperability among wireline telecommunications firms. Back in the ‘70s — prior to the deregulation and privatization measures that swept through the worldwide industry — the telco marketplace was a much different place. It was more like an old boys’ club (and in many respects, still is). There were far fewer firms, and most of those in existence were either state-owned crown corporations or utility-like monopolies. (The UK’s telco at the time, for example, was entirely state-run and quaintly called “Post Office Communications.”)

Ironically, SS7 was rolled out in 1975 to solve a preexisting flaw in existing “in-band” interoperability protocols that were at the time being exploited by so-called “phone phreaks” using “blue boxes” (instructions for which they shared in popular magazines) to hack their way into free long-distance phone calls. (A young Steve Wozniak, co-founder of Apple, infamously used one such blue box to make a long distance phone call to the Vatican posing as Henry Kissinger and asking to speak to the Pope).

To solve this problem (and protect revenue) SS7 was created as a new “out-of-band” signalling protocol. SS7 has remained in place ever since, principally because a lot of older equipment and systems out there still require it to function properly. SS7 is still predominantly used in 2G and 3G mobile networks, and even later-generation 4G/5G networks are susceptible to security issues because they need to interconnect with SS7 networks to work for everyone. One of its central functions today is to handle billing and other services as subscribers roam from one network to another when they travel internationally.

The SS7 protocol’s “authentication” (such as it is) has relied mostly on trust among a small group of insiders. But as the global telco market rapidly diversified and numerous companies of all shapes and sizes have entered into the arena, SS7 has become ripe for exploitation. Access to the SS7 network can allow a malicious actor to track virtually any target’s location, and intercept voice calls and text messages (which, incidentally, can also be used to intercept codes used for two-factor authentication sent via SMS). 

In 2017, a joint investigation undertaken by CBC News and Radio Canada, in cooperation with German security researchers, demonstrated an SS7 attack against a sitting Canadian member of parliament. With only a telephone number, the investigators were able to use SS7 vulnerabilities to track the MP’s movements and intercept his calls over two separate Canadian telco networks. 

Although high-end nation-state intelligence agencies have been quietly benefiting from SS7’s weaknesses for a long time (thanks to their cozy relationships with their national telcos), privatization and deregulation have opened the door to a whole new array of entrants into that club, including criminals and cyber-surveillance firms.


Our report focuses on one such firm, a company called “Circles,” which was reportedly founded in 2008, and is known for selling systems to government security services to exploit SS7 vulnerabilities. (The company was acquired in 2014 by private equity firm Francisco Partners, who merged it with NSO Group — another regular on the Citizen Lab’s research radar for surveillance abuses). 

Circles’ operations are difficult to investigate and track. Unlike some other types of targeted surveillance, exploiting SS7 vulnerabilities does not leave traces on a target’s device for investigators like ours to discover. Until recently, what little was known about Circles came from leaked documents or investigative reporting on a few country clients, like Nigeria.

Our report opens for the first time a very large window into Circles’ global customer base.

Led by Citizen Lab senior researcher Bill Marczak, we discovered that Circles’ installations on customers’ premises leave a distinguishing fingerprint associated with the Check Point firewall they employ. With that fingerprint as our starting point, we used internet scanning methods and gathered data from various sources and feeds to identify specific country clients.
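Conceptually, the scanning step is a filter over records gathered from internet-wide scan feeds. The following Python sketch is purely illustrative — the record fields, hostnames, and fingerprint pattern are invented for the example and are not the actual indicators from the report:

```python
import re

# Hypothetical scan records: (ip, tls_certificate_common_name) pairs as they
# might appear in an internet-wide scanning feed. All values are illustrative.
SCAN_RECORDS = [
    ("203.0.113.10", "client-tracksystem.example"),
    ("198.51.100.7", "mail.example.org"),
    ("192.0.2.44", "client-tracksystem.example"),
]

# Assumed distinguishing marker left by the installation; invented for the sketch.
FINGERPRINT = re.compile(r"client-.*tracksystem", re.IGNORECASE)


def match_fingerprint(records):
    """Return the hosts whose certificate name matches the fingerprint."""
    return [ip for ip, cn in records if FINGERPRINT.search(cn)]
```

In practice, candidate hosts surfaced this way would still need corroboration from other data sources before being attributed to a specific government agency, as the report describes.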

In total, we are able to determine that 25 governments and 17 specific government agencies are likely Circles’ customers: 

Australia, Belgium, Botswana (Directorate of Intelligence and Security Services), Chile (Investigations Police), Denmark (Army Command), Ecuador, El Salvador, Estonia, Equatorial Guinea, Guatemala (General Directorate of Civil Intelligence), Honduras (National Directorate of Investigation and Intelligence), Indonesia, Israel, Kenya, Malaysia, Mexico (Mexican Navy; State of Durango), Morocco (Ministry of Interior), Nigeria (Defence Intelligence Agency), Peru (National Intelligence Directorate), Serbia (Security Information Agency), Thailand (Internal Security Operations Command; Military Intelligence Battalion; Narcotics Suppression Bureau), the United Arab Emirates (Supreme Council on National Security; Dubai Government; Royal Group), Vietnam, Zambia, and Zimbabwe.

A major theme of our work on the commercial surveillance marketplace is how a lack of controls around sales of these technologies to government clients with poor human rights records and little public accountability leads to major human rights abuses. Several of the Circles government clients we identify above are especially disturbing in this regard. For example:

  • We determined that the Internal Security Operations Command (ISOC) of the Royal Thai Army, a unit which has allegedly tortured detainees, is a Circles client.
  • We identified a Circles’ system operated by the Investigations Police of Chile (PDI). Chilean police have a checkered history around extra-legal surveillance against journalists and political opposition. 
  • We identified a single Circles system in Guatemala that appears to be operated by the General Directorate of Civil Intelligence (DIGICI). The DIGICI has used surveillance equipment to conduct illegal surveillance against journalists, businesspeople, and political opponents of the government. Guatemala is presently in the midst of large public protests against government corruption.
  • We identified ten Circles’ deployments in Mexico. Citizen Lab’s prior research has shown Mexico’s government has serially abused NSO Group’s Pegasus spyware to target reporters, human rights defenders, and the families of individuals killed & disappeared by cartels.
  • We identified a Circles’ installation in Nigeria that is likely operated by that country’s Defence Intelligence Agency (DIA). A recent report by Front Line Defenders concluded that Nigeria’s government “has conducted mass surveillance of citizens’ telecommunications.”
  • Our scanning identified what appear to be three active clients in the UAE: the UAE Supreme Council on National Security (SCNS) (المجلس الأعلى للأمن الوطني), the Dubai Government, and a client that may be linked to both Sheikh Tahnoon bin Zayed al-Nahyan’s Royal Group and former Fatah strongman Mohammed Dahlan.

It should be emphasized that Circles’ technology can be deployed against targets both domestically and abroad. In other words, the international reach afforded by Circles’ services allows despots and autocrats to silently target political opposition who may have gone into exile in foreign jurisdictions — a continuation of disturbing trends around transnational repression the Citizen Lab’s research is closely following. Some of the government clients we identified have been suspected of organizing extraterritorial targeted killings of dissidents and political opposition figures.

Unfortunately, SS7 exploits are very difficult to guard against. In our report, we urge lawmakers, industry groups, and telecommunications companies to take immediate and meaningful steps to mitigate the long-standing technical weaknesses in SS7. We also urge high-risk individuals associated with any of the countries listed above to migrate away from SMS-based two-factor authentication immediately, for all accounts where that is possible.
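The standard alternative to SMS codes is an authenticator app implementing time-based one-time passwords (TOTP, RFC 6238), which never transit the phone network and so are out of SS7’s reach. As a sketch of how little is involved, here is a minimal TOTP implementation using only the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (HMAC-SHA-1, the variant
    used by most authenticator apps)."""
    # Decode the shared secret, re-adding any stripped base32 padding.
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    # Number of 30-second steps since the Unix epoch.
    counter = int((at if at is not None else time.time()) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset from the MAC.
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

This reproduces the RFC 6238 SHA-1 test vector: the 20-byte secret “12345678901234567890”, base32-encoded, yields the 8-digit code “94287082” at T=59 seconds. Because the code is derived locally from a shared secret and the clock, there is nothing for an SS7 attacker to intercept in transit.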

Read the full report here: https://citizenlab.ca/2020/12/running-in-circles-uncovering-the-clients-of-cyberespionage-firm-circles/

Globe and Mail Opinion: COVID and Tech Insecurity

I published an opinion piece in the Globe and Mail. The link is here and I’m pasting the entire article below.

The pandemic has made us even more dependent on a highly invasive technological ecosystem




Ronald J. Deibert is director of the Citizen Lab at the University of Toronto’s Munk School, the 2020 CBC Massey lecturer and the author of Reset: Reclaiming the Internet for Civil Society.

My son is an undergraduate student at the University of British Columbia. Like many of his peers, he has seen his classes move online – and so have their exams.

Students in his program were recently required to consent to a remote exam invigilation software platform manufactured by a company called Proctorio. As with most tech companies, work-from-home measures and social isolation have been a boon to Proctorio: more than 2.5 million exams were proctored by the company in April, 2020, alone, a stunning 900-per-cent increase compared with April, 2019. Other companies in this space – such as ExamSoft, Examity and ProctorU – are enjoying similar surges in demand.

Once installed on a student’s device, applications like Proctorio can monitor students’ keystrokes, capture and record anything on their screens, track their web browsing, and even turn on cameras and microphones to record students’ faces, their surroundings and ambient sounds for evidence of cheating. Proctorio’s proprietary algorithms flag what it detects as “suspicious behavior” to faculty or teaching assistants (TAs) for follow up.

My son said using Proctorio made him feel “creeped out” and uncomfortable. Who can blame him?

It’s one thing to have a TA strolling up and down the aisles of an exam room. It’s quite another to force students to install spyware that tracks everything from their keystrokes to retina movements, sending that data down a mysterious black hole. Imagine having an omniscient, invisible robot looking over your shoulder, staring into your eyeballs, scrutinizing every movement, and scanning your bedroom – the entire time you’re taking an exam. Who could concentrate in those conditions? And yet, he had no choice: The course makes it mandatory.

As it turns out, my son is relatively fortunate. He is white, male, has a good WiFi connection, has no disabilities and lives alone. Many other students are not so fortunate, and pay a high price for it. As I dug deeper into Proctorio and other remote surveillance exam software platforms like it, I unearthed a litany of horror stories – most of them affecting students who were already disadvantaged or marginalized.

For example, Black students and other students of colour have reported authentication delays and even outright rejection by remote proctoring applications because of “poor lighting” or “camera position.” Flaws in the software’s facial recognition systems – technology that is notoriously bad at recognizing dark skin tones – are the more likely explanation.

Students who experience facial tics have reported anxiety about being flagged for cheating, having spent the entire exam trying to suppress involuntary movements. Even those without disabilities have trouble with tools like Proctorio. On social media, students – some in tears – describe failing exams because the software flagged them for mouthing the questions or looking around their room while thinking, behaviour the system interprets as talking to someone offscreen.

Low-income students with shared living spaces and caregiving responsibilities recount similar feelings of anxiety, worried that they’ll be flagged because of nearby noise. One student did her best to tune out her 12- and eight-year-old siblings – for whom she is the primary caregiver – as they banged on the door for the duration of her exam.

Beyond these discriminatory effects are even more serious concerns. A system that spies on students’ homes and bedrooms will almost certainly amplify risks of stalking and sexual harassment. It would never be appropriate for a professor or TA to roam around a student’s bedroom, but Proctorio and the like invite them in … by design.

Digital proctoring platforms (as with so much else of our big tech world) seem to presume their users are mostly white, affluent people with high-speed internet, free from crowded houses or caregiving responsibilities. They assume they can protect students from abuse by making users agree to their terms and conditions. All of these assumptions are proving highly questionable.

When it comes to digital technologies and COVID-19, the vast majority of discussion has focused on contact-tracing applications. Although important, this narrow focus has obscured more fundamental and far-reaching effects at the intersection of digital technology, surveillance and pandemic response. While we fixate on the merits of this or that app, we’ve been missing an entire landscape shifting beneath our feet.

Largely without public debate – and absent any new safeguards – we’ve become even more dependent on a technological ecosystem that is notoriously insecure, poorly regulated, highly invasive and prone to serial abuse. It’s like building a second-floor addition on a home without fixing the rotting foundation. Eventually, it will all come crumbling down – or slowly make us sick.

Consider Amazon. The company’s success amid COVID-19 becomes obvious just by staring out the window at the invasion of delivery vans perched on sidewalks and parked in bike lanes, delivering packages to makeshift home offices.

Once a startup reseller of books and DVDs, Amazon has become the corporate embodiment of globalization and surveillance capitalism. With shops and malls mostly shuttered, Amazon’s online services have exploded. Chief executive Jeff Bezos, the world’s richest person, saw his personal wealth grow by US$50-billion in the first six months of the pandemic alone.

But that sudden growth only exacerbates Amazon’s existing pathologies. They include the company’s predatory pricing, monopolistic practices, and dismal labour conditions for its warehouse and delivery personnel. These so-called “flex” workers lack health care or other benefits and are subject to extensive surveillance, including navigation software, wristbands, security and thermal cameras (likely as much for the prevention of union organizing as for “security”). The company is also responsible for the proliferation of Ring, a notorious home security system associated with racial profiling and warrantless data-sharing with law enforcement, as well as for its vast, wasteful and highly polluting “data farms” (which power streaming video services such as Prime and Netflix) – graded “F” for energy transparency by Greenpeace in 2017.

Meanwhile, tech startups of all shapes and sizes – often with dubious qualifications – are trying to capitalize on the growing demand for technology to spy on workers at home (dubbed “bossware”), bust unions, enforce productivity, monitor symptoms, police physical distancing, detect emotions and, yes, invigilate exams remotely. This bewildering array of new digital bracelets, electronic ankle monitors, immunity passport apps, fever detection goggles, drones, thermal cameras, and video- and audio-capture systems is not subject to extensive auditing, nor built to protect privacy. Instead, many are the digital equivalent of “snake oil,” looking to make quick cash in the rapidly expanding surveillance economy.

This explosion of pandemic-era applications will invariably amplify the defects of the mobile marketing and location tracking industry – a sector made up mostly of bottom-feeder companies whose business model relies on collecting billions of user-generated data points, later sold and repackaged to advertisers, law enforcement, the military, customs and border agencies, and private security services (not to mention bounty hunters and other dubious characters). A shocking number of entrepreneurs and policy makers are nonetheless turning to this cesspool of parasitic firms – poorly regulated and highly prone to abuses – as a proposed pandemic solution.

The entire ecosystem presents a bonanza for petty criminals, ransomware opportunists, spyware firms and highly sophisticated nation-state spies alike. Meanwhile, law enforcement and other state agencies – already growing accustomed to reaping a harvest of digital data with weak judicial oversight – will enjoy a bounty of new and revealing information about citizens without any new safeguards to prevent abuse of that power.

Some argue that this COVID-19-era innovation cycle will pass once there is a vaccine. But the more we embrace and habituate to these new applications, the deeper their tentacles reach into our everyday lives and the harder it will be to walk it all back. The “new normal” that will emerge after COVID-19 is not a one-off, bespoke contact-tracing app. Rather, it is a world that normalizes remote surveillance tools such as Proctorio, where private homes are transformed into ubiquitously monitored workplaces and where shady biometric startups and data analytics companies feed off the footloose biosurveillance economy.

There undoubtedly are positive aspects of digital technologies. But, as presently constituted, those benefits are far outweighed by the many long-term negative consequences that we are risking without serious public debate, and which will almost certainly come back to haunt us as this “new normal” settles in.

What’s the alternative? A different approach would use the historic crisis that the pandemic presents as an opportunity for a reset and to rethink the entire technological ecosystem from the ground up. For far too long, data surveillance companies have been largely insulated from government regulations. But this has come at a major social cost and with numerous unintended consequences. With digital technologies now a “lifeline” and an essential service for many who must adapt to both working and living at home, consumers and citizens have a right to demand more. What might those demands entail?

First, and most importantly, we need to clean up the cesspool of free-wheeling data-broker, advertising and location-tracking companies. New laws should be passed to give users real power and to restrain how tech companies gather, process and handle their personal information. This includes meaningful, easy-to-understand “opt-in” measures, rules that minimize the data collected to specific and justifiable purposes, and better avenues for users to sue companies that transgress those laws. In particular, legislation should permit consumers to restrict the use of geolocation data by third parties, prohibiting targeted advertising to users visiting therapists, clinics and other “none-of-your-business” locations.

Second, the rights of “flex” workers, independent contractors and other “gig economy” workers need meaningful legal protection. Big tech platforms and other businesses should not be able to use COVID-19 as an excuse to spy on workers in warehouses, factories, rental cars and homes, or to clandestinely monitor their social-media feeds to disrupt labour organizing. Big tech CEOs and their shareholders should also be compelled to use the newfound prosperity they are reaping thanks to COVID-19 to improve their employees’ lives. It is both grotesque and unethical that the Jeff Bezoses of the world can lap up skyrocketing personal wealth while their front-line workers experience layoffs, longer hours, fewer benefits, disproportionate health risks and dehumanizing surveillance measures.

Third, tech platforms should be legally required to open up their algorithms and other proprietary technology to outside scrutiny and public interest audits. The giants of surveillance capitalism hold enormous power over our lives, including over the choices we make, the products we purchase, the news we see and the people with whom we associate. These platforms have increasingly become essential to just about everything we do, and they should no longer be able to operate as “black boxes” whose inner workings are obscured to all but a select few. It’s time to pry open the lid of the technologies that surround us.

Lastly, there’s something all of us can do. We’ve all become habituated to seeking technical solutions for complex social and political issues. And while technologies can produce enormous benefits, we’ll need a lot more than a few new gadgets to solve the problems of our time. We must resist the temptation to reflexively look to “apps” and “platforms” when there may be other more traditional and ultimately more enriching ways to organize our lives, respond to social problems and accomplish our goals.

Unlike many other industrial sectors, the tech platforms have emerged from the pandemic stronger and are already positioning themselves as indispensable to public health. It’s time to hold them, and all of us, to account for it.

RESET: Reclaiming the Internet for Civil Society


I am really excited to be the 2020 CBC Massey Lecturer. It was a great honour to be invited and to be among the great authors and thinkers who have inspired me over the years, including Margaret Atwood, Ursula Franklin, Jane Jacobs, Charles Taylor, and so many others.

The lectures will be virtual this year, broadcast on CBC Ideas November 9-13, with the final episode airing November 16th, hosted by Nahlah Ayed. (November 17th update: you can listen to all six lectures here: https://www.cbc.ca/radio/ideas/reset-reclaiming-the-internet-for-civil-society-1.5795345).

I had two principal aims in writing RESET: the first was to summarize what I see as an emerging consensus about the many pathologies of social media and the organization of our entire communications ecosystem; the second was to lay out a principled framework for what to do about them.

RESET is published in the United States and Canada with House of Anansi Press, and in the United Kingdom by September Publishing. Thanks to Misha Glenny, Ziya Tong, Marietje Schaake, Cory Doctorow, and Edward Snowden for the very generous reviews!

“No one has done more than Ron Deibert and his lab to expose the enemies of the internet — shadowy companies whose sole business is to make it unsafe for all of us. No one is better placed to explain the intersection of law and technology that makes these abuses possible — and how we can put an end to them. Reset is the definitive narrative of where we went wrong and a last chance to make things right.” — Edward Snowden

“Tech is at a crossroads between oppression and liberation, and Ronald J. Deibert is our leading expert on the forces steering it in either direction. Reset is a road map revealing the secret alleys and byways that brought us to this juncture, and the ways ahead that we could navigate to a better future.” — Cory Doctorow, bestselling author of Radicalized and Walkaway

“One thing is for sure: your phone knows a lot more about you than you know about it. Ronald J. Deibert expertly cracks open our gadgets and electronics to reveal the who, what, and why behind our communications infrastructure. From digital espionage to big-data policing, Reset is a timely and critical look at how cutting-edge surveillance technologies are being weaponized against civil society. With the rise of authoritarianism around the world, Deibert’s book is a must-read for all who want to ensure that dark power stays in check.” — Ziya Tong, science broadcaster and author of The Reality Bubble

“Ronald J. Deibert is a rare hybrid who combines an advanced understanding of computer technology with a rich background in political science. He is also already a legend in security and tech circles because of his work as the founder and director of Citizen Lab . . . In Reset, Deibert outlines with tremendous economy and verve the major threats that face us as a consequence of our rapidly growing dependency on internet technologies, AI, robotics, and, further down the line, machine-to-machine learning and quantum computing. The clarity of his writing enables Deibert to categorize each aspect of the threat on a profound level that will nonetheless be accessible to any reader . . . Covid-19 has made it clear that our globalized world faces fundamental challenges to the survival of our species, along with most others. If we listen to Ron Deibert, we are still in position to head off another of those threats.” — Misha Glenny, bestselling author of McMafia and DarkMarket

“A reset is needed in the relation between privately run technologies and the public interest. Ron Deibert sketches what meaningful change looks like. Ron has been at the heart of analyzing the harms of technology to human rights, and increasingly to the human condition, for decades. His deep research and clear moral compass make his plea for a ‘reset’ an urgent one. To technology experts this book shines a clear light forward beyond current headline-grabbing incidents. To readers new to the depth of effects of the online information ecosystem, it is essential reading to gain clarity on where our values are at stake, and how we may preserve them.” — Marietje Schaake, International Policy Director of the Cyber Policy Center, Stanford University, and President of the CyberPeace Institute

“Reset is a shocking call to action and a persuasively argued book. It is the sort of text one hopes will be read widely … After all, a reset of the basic infrastructure of life will only come through a profound political reckoning — and like the foment of 1968, it may just be a reconceptualization of what we want and why we want it that finally drives change.” — Quill & Quire

Testimony Given to the House of Commons on Parliamentary Duties and the COVID-19 Pandemic

The following is testimony provided by Ronald Deibert to the Standing Committee on Procedure and House Affairs (PROC) on April 29, 2020.

I am Ron Deibert, Professor of Political Science and founder and director of the Citizen Lab at the University of Toronto’s Munk School of Global Affairs & Public Policy. Our research at Citizen Lab includes investigating digital espionage against civil society, documenting Internet filtering and other technologies and practices that impact freedom of expression online, analyzing privacy, security, and information controls of popular applications, and examining transparency and accountability mechanisms relevant to the relationship between corporations and state agencies regarding personal data and other surveillance activities. I submit these comments in a professional capacity representing my views and those of the Citizen Lab.

As much of the world moves into work-from-home rules and self-isolation, technology has become an essential lifeline. However, this sudden dependence on remote networking has opened up a whole new assortment of security and privacy risks. In light of these sudden shifts in practices, it is essential that the tools relied on for sensitive and high risk communications be subjected to careful scrutiny.

In what follows, I first provide a summary of the Citizen Lab’s recent investigation into the security of Zoom’s video conferencing application, and the company’s responses. I then discuss a broader range of digital security risks that are relevant to the work-from-home routines that MPs and their staff are following. Finally, I conclude with six recommendations.1

Citizen Lab Research on Zoom Security

On April 3, 2020, the Citizen Lab published a report on a technical analysis of the confidentiality of communications on the popular video chat application Zoom.2 On April 8, we released a followup report with details of a security vulnerability in Zoom’s waiting room feature.3

Our initial report found that the encryption in Zoom did not seem to have been well-designed or effectively implemented, and that its public documentation made several misleading claims about Zoom’s encryption protocols that did not match what we observed in our analysis. I invite those with interest to see the full details as outlined in our report.4
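The weakness detailed in footnote 4 – a single AES-128 key used in ECB mode by all meeting participants – can be illustrated with a toy sketch. This is my illustration, not Zoom’s actual code, and it uses a hash-based stand-in for a block cipher rather than real AES; the point it demonstrates is general: in ECB mode, identical plaintext blocks always encrypt to identical ciphertext blocks, so patterns in the plaintext remain visible in the ciphertext.

```python
# Toy demonstration (not real AES, not Zoom's code): why ECB mode leaks
# plaintext structure. A deterministic keyed transform stands in for a
# block cipher; the pattern-preservation property shown here holds for
# any block cipher used in ECB mode.
import hashlib

BLOCK = 16  # 16-byte blocks, as in AES

def toy_encrypt_block(key: bytes, block: bytes) -> bytes:
    # Deterministic keyed transform standing in for AES(key, block).
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ecb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # ECB: each block encrypted independently with the same key.
    assert len(plaintext) % BLOCK == 0
    return b"".join(
        toy_encrypt_block(key, plaintext[i:i + BLOCK])
        for i in range(0, len(plaintext), BLOCK)
    )

def ctr_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Counter mode: each block is XORed with a per-position keystream,
    # so repeated plaintext blocks yield different ciphertext blocks.
    out = bytearray()
    for i in range(0, len(plaintext), BLOCK):
        keystream = toy_encrypt_block(key, i.to_bytes(BLOCK, "big"))
        out += bytes(p ^ k for p, k in zip(plaintext[i:i + BLOCK], keystream))
    return bytes(out)

key = b"sixteen byte key"
msg = b"ATTACK AT DAWN!!" * 3  # three identical 16-byte blocks

ecb = ecb_encrypt(key, msg)
ctr = ctr_encrypt(key, msg)

# ECB: all three ciphertext blocks are identical -- the repetition leaks.
print(ecb[0:16] == ecb[16:32] == ecb[32:48])  # True
# CTR: the same plaintext blocks encrypt to different ciphertext blocks.
print(ctr[0:16] == ctr[16:32])                # False
```

For media such as audio and video, which contain large amounts of repetitive structure, this leakage is especially damaging – which is why standard practice is to use a mode such as CTR or GCM rather than ECB.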

We also found potential security issues with Zoom’s generation and storage of cryptographic information. While based in Silicon Valley, Zoom owns three companies in China where its engineers develop the Zoom software. In some of our tests, our researchers observed encryption keys being distributed through Zoom servers in China, even when all meeting participants were outside of China. It is very concerning that a company primarily catering to North American clients distributes encryption keys through servers in China, given that Zoom may be legally obligated to disclose those keys to authorities in China.

In our report published on April 3, we noted that we also discovered a security issue with Zoom’s “waiting room” feature. Specifically, we found Zoom servers provided both the encryption keys and a live video stream of the Zoom meeting to all users in the meeting’s waiting room, even if the waiting users had not been approved to join the meeting. This issue would enable an arbitrary, unauthorized Zoom user in a waiting room to intercept and decrypt the “encrypted” video content.

In response to our research and concerns raised by other parties, Zoom has taken a number of actions regarding security.5 Zoom has committed to a 90-day process to identify and fix security issues, including a third-party security review, enhancing their bug bounty program and preparing a transparency report.

In direct response to our research, Zoom acknowledged the concerns we raised about their use of non-industry standard encryption and committed to making improvements, including working towards the implementation of end-to-end encryption. Zoom also acknowledged that some Zoom users based outside of China would have connected to data centres within China, and indicated they had immediately put in place measures to prevent that from happening.

On April 8th, Zoom released a new version of their client that added additional security features. Zoom CEO Eric Yuan indicated in a video webinar that this new version fixed the waiting room security issue we identified.6 He also announced that Zoom had established a CISO Council and Advisory Board to assist with their privacy and security practices, and had hired former Facebook Chief Security Officer Alex Stamos as an advisor.

It is important to underscore that we did not test Zoom’s HIPAA/PIPEDA-compliant healthcare plan, or the ZoomGov software that is used by some government agencies. These platforms would require additional analysis.

While it is encouraging that Zoom is working to improve their product, the sudden reliance by a very large number of people on a platform that was never designed for highly-sensitive communications is symptomatic of a much larger set of problems related to work-from-home routines.7 It is imperative that we evaluate all of the risks associated with this sudden change in routines, and not just those associated with one particular application.

Security Risks Related to Work-From-Home Environments

Legislators working from home are connecting using devices, accounts and applications through widely differing home network setups, as are their staff. These networks may be shared with roommates and family members whose own digital security practices could collaterally affect the security of MPs and staff, and the devices being used are likely loaded with applications that can access large volumes of sensitive information. Whereas in the pre-COVID era these devices were routinely brought back into the government’s security perimeter, where sensors might detect aberrant network behavior, this will no longer be the case. Consequently, adversaries might linger on networks and devices indefinitely, and obtain more data from targets than in a pre-COVID world.

The communications systems that we rely on have rarely been designed with security in mind. Security has either routinely been regarded as slowing the speed of innovation or impossible to impose on essential systems that have chronic failings and which would require total redevelopment of communications infrastructures to become “secured.” The consequence is that there is a vast array of unpatched systems that leave persistent vulnerabilities for malicious actors to exploit. These risks extend right down into the most fundamental layers of our shared infrastructure. For example, telecommunications and cell phone networks still rely on a decades-old information exchange protocol, called SS7, that has been shown to be highly insecure and prone to abuse and illegal surveillance, including when sending second-factor authentications over mobile phone networks.8

Meanwhile, governments and criminal enterprises have dramatically increased their capabilities to exploit this ecosystem for a variety of purposes. Almost all nation-states now have at least some “cyber espionage” capabilities, with many in the top-tier being exceedingly well-resourced and routinely spending billions of dollars on clandestine influence and intelligence-gathering operations. There is a vast and poorly regulated private market for cyber security that includes numerous companies that provide “off-the-shelf” targeted espionage and mass surveillance services.9 Citizen Lab’s research has shown that the market for commercial spyware in particular is proliferating widely, and is highly prone to abuse (including being linked to targeted killings),10 with sophisticated hacking tools ending up in the hands of despots and dictators.11 These relationships may well open the door to the same tools being deployed against legislators and their staff in jurisdictions like Canada. As a result, the government must be wary of seemingly less competent adversaries punching well above their weight by using private and commercial hacking tools.

At the best of times, these problems present extraordinary challenges for network defenders. But parliamentarians and their staff are now at even greater risk. Not surprisingly, threat actors are already capitalizing on this new environment. Phishing and malware attacks have targeted and disrupted hospitals in the Czech Republic, the U.S. Department of Health and Human Services, and the World Health Organization. On April 14, a leading U.S. cybersecurity firm revealed that a “Canadian government health organization actively engaged in COVID-19 response efforts, and a Canadian university conducting COVID-19 research,” had been victims of ransomware attacks.12 These reports are likely only scratching the surface.

While it is laudable that a platform like Zoom has received so much attention over its security risks, we should not lose sight of the fact that our entire communications ecosystem contains numerous insecurities, and that there are a multitude of bad actors searching for and seeking to exploit them.

Recommendation #1: Where possible, extend digital security resources developed for the House of Commons (HoC) to all Canadians

Remote work for the HoC will require a significant investment in additional digital security support, resources, and capacity. These teams were already engaged in actively protecting members of the HoC and are now dealing with a significantly broader set of home network and device setups, while simultaneously defending against a tsunami of targeted malware and other attacks that are outside of the government’s formal security perimeter.

To partially combat new threats, the CSE’s Canadian Centre for Cyber Security has begun sharing information with infrastructure providers to reduce the likelihood of phishing or malware successfully exploiting devices and systems.13 However, the details of this program (and others like it) presently lack public accountability or transparency, and it has not been independently audited. If rolled out without proper safeguards, such systems can end up undermining free expression, privacy, and other rights. Wherever possible, the HoC and the rest of government could share mitigation techniques or signatures with Canadian infrastructure owners in a transparent and accountable way, to improve the home security of MPs and HoC staff, as well as that of all other residents of Canada.

Additionally, distributing and encouraging the use of educational tools to all parliamentarians, their staff, and all residents of Canada could help boost awareness and help mitigate risks.14

Recommendation #2: Evaluate and issue guidance on work-from-home best practices, including those for video conferencing applications.

The Government of Canada should issue detailed guidance on work-from-home best practices that includes a thorough evaluation of video conferencing applications. The latter could include recommendations on scenarios for using some applications for specific purposes but not others. Such guidance could be made available to Canadians to assist medium and small businesses, as well as individual residents of Canada, in making decisions that are informed by the government’s security expertise. Although some guidance has been issued already,15,16 it is dated and largely insufficient to the tasks at hand.

By way of contrast, the U.S.’s NSA has issued public guidance that identifies various criteria to consider when using a video conferencing service.17 These criteria include, inter alia, whether the service uses end-to-end encryption; whether they share data with third parties; and whether or not the service’s source code has been shared publicly. Other assessments consider questions of transparency and privacy, for example whether firms issue transparency reports or have clear privacy policies.18

Recommendation #3: Support independent research on digital security and the promotion of secure communications tools.

At a time when daily life significantly depends on technological systems, there should be more high quality, independent research that scrutinizes these systems for privacy and security risks. To assure Canadians that the digital appliances and networks upon which they depend are secure, researchers must have the ability to dig beneath the surface of those systems, including into proprietary algorithms, without fear of reprisal.

Presently, researchers can come under legal threat when they conduct this research, to the detriment of improving security for all users, including MPs and their staff who are at home. As such, we recommend that the Government of Canada pass legislation which explicitly recognizes a public interest right to engage in security research, and prohibits organizations or individuals from legally threatening residents of Canada who are involved in such public interest research.

Recommendation #4: Implement a Vulnerability Disclosure Process for Government Agencies, including the House of Commons

Vulnerability disclosure policies (VDPs) establish terms and processes by which researchers can communicate the presence of vulnerabilities in organizations’ systems or networks without fearing legal repercussions. American institutions, such as the Department of Defense,19 have already adopted a VDP, and additional American institutions are developing them. Canada should follow this model, so that researchers can identify vulnerabilities and work with the Government of Canada to mitigate them, instead of declining to communicate them out of fear of legal (or other) threats. This recommendation is in line with a report issued by the HoC Public Safety and National Security Committee in 2019, which recommended that “the Government of Canada support responsible vulnerability disclosure programs.”20

Recommendation #5: Transparent and Accountable Vulnerabilities Equities Process

The Communications Security Establishment (CSE) currently has a process by which it evaluates whether to conceal the presence of computer software vulnerabilities for use in its own intelligence operations, or to disclose a given vulnerability to ensure that all devices are made secure from it. However, the CSE is formally alone in making decisions over whether to retain or disclose a vulnerability.

We recommend that the Government of Canada broaden the set of stakeholder institutions that adjudicate whether vulnerabilities are retained or disclosed, especially in light of the enhanced risk that all government workers face given their work-from-home situations. We also recommend that the Government of Canada follow international best practice and release a full vulnerabilities equities process policy, so that residents of Canada can rest assured that the CSE and the government will not retain vulnerabilities that could seriously compromise the security of Canadians.

Recommendation #6: Support for Strong Encryption

In 2019, the HoC Public Safety and National Security Committee recommended that “the Government of Canada reject approaches to lawful access that would weaken cybersecurity.”21 Given the potential for adversaries to take advantage of poorly-secured devices and systems, we recommend that the Government of Canada support the availability of strong encryption so that MPs, their staff, and residents of Canada can be assured that the Government is not secretly weakening this life-saving and commerce-enabling technology, to the detriment of all Canadians and our allies.

  1. Thanks to Christopher Parsons, Lex Gill, and Josh Gold for comments and assistance.
  2. Bill Marczak & John Scott-Railton, “Move Fast and Roll Your Own Crypto: A Quick Look at the Confidentiality of Zoom Meetings,” The Citizen Lab, April 3, 2020, https://citizenlab.ca/2020/04/move-fast-roll-your-own-crypto-a-quick-look-at-the-confidentiality-of-zoom-meetings/
  3. Bill Marczak & John Scott-Railton, “Zoom’s Waiting Room Vulnerability,” The Citizen Lab, April 8, 2020, https://citizenlab.ca/2020/04/zooms-waiting-room-vulnerability/
  4. In our report of April 3, we found that Zoom documentation claimed that the app uses “AES-256” encryption for meetings where possible. However, in our testing, a single AES-128 key was used in ECB mode by all meeting participants to encrypt and decrypt audio and video. The use of ECB mode is not recommended because patterns present in the plaintext are preserved during encryption. What this finding means is that the encryption in Zoom does not seem to have been well-designed or implemented.
  5. Colleen Rodriguez, “Zoom Hits Milestone on 90-Day Security Plan, Releases Zoom 5.0,” Zoom Blog, April 22, 2020, https://blog.zoom.us/wordpress/2020/04/22/zoom-hits-milestone-on-90-day-security-plan-releases-zoom-5-0/
  6. “Ask Eric Anything,” (YouTube Video), Zoom, April 8, 2020, https://www.youtube.com/watch?v=TeohYK-hsO4
  7. See John Scott-Railton, “Another Critical COVID-19 Shortage: Digital Security,” Medium. March 23, 2020, https://medium.com/@_jsr/another-critical-covid-19-shortage-digital-security-374b1617fea7
  8. Stephanie Kirchgaessner, “Revealed: Saudis suspected of phone spying campaign in US,” The Guardian, March 29, 2020, https://www.theguardian.com/world/2020/mar/29/revealed-saudis-suspected-of-phone-spying-campaign-in-us
  9. For further detail, see testimony by Ron Deibert on this subject to the Senate of Canada on November 30, 2016, here: https://sencanada.ca/en/Content/Sen/committee/421/ridr/52951-e.
  10. Research by The Citizen Lab has revealed several cases of targeted killings linked to targeted espionage and surveillance software, including the murder of Saudi journalist Jamal Khashoggi. For further information on this, and other cases, see for example: Miles Kenyon, “Dubious Denials & Scripted Spin: Spyware Company NSO Group Goes on 60 Minutes,” The Citizen Lab, April 1, 2019, https://citizenlab.ca/2019/04/dubious-denials-scripted-spin-spyware-company-nso-group-goes-on-60-minutes/.
  11. Bill Marczak, John Scott-Railton, Sarah McKune, Bahr Abdul Razzak, and Ron Deibert, “Hide and Seek: Tracking NSO Group’s Pegasus Spyware to Operations in 45 Countries,” The Citizen Lab, September 18, 2018, https://citizenlab.ca/2018/09/hide-and-seek-tracking-nso-groups-pegasus-spyware-to-operations-in-45-countries/.
  12. James McLeod, “Canadian coronavirus response workers targeted in ransomware attack, says U.S. cybersecurity report,” Financial Post, April 14, 2020, https://business.financialpost.com/technology/canadian-coronavirus-response-workers-targeted-in-ransomware-attack-u-s-firm
  13. Canadian Centre for Cyber Security, “Canadian Shield – Sharing the Cyber Centre’s Threat Intelligence to Protect Canadians During the COVID-19 Pandemic,” April 23, 2020, https://www.cyber.gc.ca/en/canadian-shield-sharing-cyber-centres-threat-intelligence-protect-canadians-during-covid-19.
  14. Some resources to consider include the Citizen Lab’s Security Planner (https://securityplanner.org/) and the Electronic Frontier Foundation’s Surveillance Self Defense project (https://ssd.eff.org/en).
  15. Canadian Centre for Cyber Security, “Considerations when using video-teleconference products and services,” April 3, 2020 (amended April 14), https://cyber.gc.ca/en/alerts/considerations-when-using-video-teleconference-products-and-services.
  16. Canadian Centre for Cyber Security, “Cyber Hygiene for COVID-19,” March 18, 2020, https://cyber.gc.ca/en/guidance/cyber-hygiene-covid-19.
  17. Existing assessments of various video teleconferencing applications could be built on. See, for example, guidance from the US National Security Agency issued on April 24, 2020: (https://media.defense.gov/2020/Apr/24/2002288652/-1/-1/0/CSI-SELECTING-AND-USING-COLLABORATION-SERVICES-SECURELY-LONG-FINAL.PDF).
  18. See, for example, assessments by Freedom of the Press (https://freedom.press/training/blog/videoconferencing-tools/) and Google engineer Gary Belvin (https://medium.com/@gdbelvin/covid-19-and-cybersecurity-e9ee5cba6de7)
  19. Department of Defense Cyber Crime Center, “DoD Vulnerability Disclosure Program (VDP),” November 2016, https://www.dc3.mil/vulnerability-disclosure.
  20. SECU, “Report 38: Cybersecurity in the Financial Sector as a National Security Issue”, Adopted by the Committee June 17, 2019, https://www.ourcommons.ca/Committees/en/SECU/StudyActivity?studyActivityId=10450263. See recommendation 7, page 38.
  21. SECU, “Report 38: Cybersecurity in the Financial Sector as a National Security Issue”, Adopted by the Committee June 17, 2019, https://www.ourcommons.ca/Committees/en/SECU/StudyActivity?studyActivityId=10450263. See recommendation 8, page 39.
  22. Lotus Ruan, Jeffrey Knockel, and Masashi Crete-Nishihata, “Censored Contagion: How Information on the Coronavirus is Managed on Chinese Social Media,” The Citizen Lab, March 3, 2020, https://citizenlab.ca/2020/03/censored-contagion-how-information-on-the-coronavirus-is-managed-on-chinese-social-media/.
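The weakness described in note 4 — ECB mode preserving plaintext patterns — can be sketched concretely. The following is a minimal illustration using a toy keyed transform built from SHA-256 (not real AES, and not even invertible); it stands in for a block cipher purely to show why encrypting blocks independently leaks structure:

```python
import hashlib

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Toy 16-byte keyed transform built from SHA-256. It is NOT AES and is
    # not invertible; it only stands in for "a block cipher" so that the
    # difference between modes is visible.
    assert len(block) == 16
    return hashlib.sha256(key + block).digest()[:16]

def ecb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # ECB: every 16-byte block is transformed independently with the same
    # key, so identical plaintext blocks produce identical ciphertext blocks.
    return b"".join(toy_block_encrypt(key, plaintext[i:i + 16])
                    for i in range(0, len(plaintext), 16))

def ctr_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # CTR-style mode for contrast: each block is XORed with a keystream
    # derived from a per-block counter, so repeats no longer show through.
    out = bytearray()
    for i in range(0, len(plaintext), 16):
        stream = toy_block_encrypt(key, (i // 16).to_bytes(16, "big"))
        out += bytes(p ^ s for p, s in zip(plaintext[i:i + 16], stream))
    return bytes(out)

key = b"demo-key"
pt = b"ATTACK AT DAWN!!" * 2           # two identical 16-byte blocks

ecb = ecb_encrypt(key, pt)
ctr = ctr_encrypt(key, pt)
print(ecb[:16] == ecb[16:32])          # True  -- ECB leaks the repetition
print(ctr[:16] == ctr[16:32])          # False -- per-block variation hides it
```

Because ECB applies the same transformation to every block, a repeated plaintext block is visible as a repeated ciphertext block; any mode that varies per block (here, a CTR-style keystream) hides that structure.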

Endless Mayfly: an invasive species in the social media ecosystem

Bring up the topic of social media and state-sponsored disinformation, and most people think reflexively of Russian interference in the 2016 U.S. election. As the Mueller report recently affirmed, Russian entities operated a sweeping and systematic social media “active measures” campaign designed to sow division and support Donald Trump leading up to the election.

But what may be less appreciated is just how many other actors in countries and regions all over the world are now undertaking social media influence operations, each with their own unique objectives, flavour, and style. In India, for example, citizens “are bombarded with fake news and divisive propaganda on a near-constant basis from a wide range of sources.” In Myanmar, it is now widely acknowledged that Facebook was used to incite genocide. Throughout Africa, hoaxes, disinformation, and spoofed articles circulate so widely that they are now commonplace; one study found that an alarming 38% of Kenyans, 28% of Nigerians, and 35% of South Africans surveyed acknowledged having shared stories which they knew to be fake.

Indeed, it is fair to say that social media has quickly become what Citizen Lab’s John Scott-Railton has described as a giant “disinformation laboratory.” Multiple actors in just about every region of the world are now experimenting with new techniques to sow disinformation, spread inauthentic narratives, project power and influence, and undermine adversaries. Given this new reality, it is imperative that researchers carefully dissect as many disinformation operations as they can find to better understand the innovations in tactics, techniques, and procedures in this quickly evolving terrain.

Enter “Endless Mayfly.” Endless Mayfly is the name we have given to “an Iran-aligned network of inauthentic personas and social media accounts that spreads falsehoods and amplifies narratives critical of Saudi Arabia, the United States, and Israel.”

Endless Mayfly is but one among many invasive species in the social media ecosystem. What distinguishes it from others, however, is a technique we dubbed “ephemeral disinformation.” Endless Mayfly publishes content on websites it creates that impersonate legitimate media outlets, like Le Soir or the Guardian, using a variety of typosquatting and domain spoofing techniques (e.g., bloomberq[.]com instead of bloomberg[.]com).
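Lookalike domains of this sort can be surfaced with simple string-similarity checks. The sketch below is illustrative only — the watchlist, function name, and threshold are our own assumptions, not tooling from the investigation:

```python
from difflib import SequenceMatcher
from typing import Optional

# Illustrative watchlist -- in practice this would cover far more outlets.
KNOWN_OUTLETS = [
    "bloomberg.com",
    "theguardian.com",
    "lesoir.be",
    "theglobeandmail.com",
]

def likely_spoof(domain: str, threshold: float = 0.8) -> Optional[str]:
    """Return the legitimate outlet a domain appears to imitate, or None.

    A deliberately simple heuristic: flag domains that are highly similar
    to, but not exactly, a known outlet. Real typosquat detection would
    also handle homoglyphs, added hyphens, and swapped TLDs.
    """
    for real in KNOWN_OUTLETS:
        if domain == real:
            return None  # exact match: the legitimate site itself
        if SequenceMatcher(None, domain, real).ratio() >= threshold:
            return real
    return None

print(likely_spoof("bloomberq.com"))         # -> bloomberg.com
print(likely_spoof("theglobalandmail.org"))  # -> theglobeandmail.com
print(likely_spoof("bloomberg.com"))         # -> None
```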

Inauthentic personas managed by Endless Mayfly, with names such as “Brian Hayden” or “Mona A. Rahman,” then attempt to amplify the content over social media, either by circulating it on their own accounts or by privately and publicly engaging journalists and others.

But Endless Mayfly’s real innovation comes in the form of its use of ephemerality. Once Endless Mayfly’s carefully constructed content achieves some degree of social media pickup, the spoofed articles are permanently deleted and the links are altered to redirect to the legitimate domain being impersonated.

Click on the link to one of Endless Mayfly’s inauthentic Guardian articles, for example, and after a period of time a user is taken to the legitimate Guardian website instead.
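Detecting this bait-and-switch after the fact reduces to checking whether the host a link circulated under still matches where it lands. A minimal sketch (all function names and URLs are hypothetical; a real pipeline would obtain the landing URL by following HTTP redirects):

```python
from urllib.parse import urlparse

def registered_host(url: str) -> str:
    # Normalize a URL to its host, dropping a leading "www.".
    host = urlparse(url).hostname or ""
    return host[4:] if host.startswith("www.") else host

def link_was_swapped(shared_url: str, landing_url: str) -> bool:
    # True when a link that circulated under one host now lands on another:
    # the signature of the delete-and-redirect pattern described above.
    return registered_host(shared_url) != registered_host(landing_url)

# Hypothetical example: a spoofed Guardian lookalike that now redirects
# to the real paper.
print(link_was_swapped("https://theguardian.co/fake-scoop",
                       "https://www.theguardian.com/"))   # -> True
```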

What happened to the original article? “Perhaps it’s the Guardian’s fault?” one might wonder. Who’s to say? In our data-saturated, always-on world, who has the time to find out? Endless Mayfly’s operators appear to be banking on social media users’ short attention spans and our inclination to trust headlines associated with what appear to be credible sources, rather than dig deeper to verify facts from the ground up ourselves.

In total, we found Endless Mayfly created 72 of these fake domains, many of which were used to host 135 of their inauthentic articles. Some of these domains the operators appear to have kept in reserve for future operations, like theglobalandmail[.]org (instead of .com), which was registered by Endless Mayfly but not employed in a specific campaign.

Did it work? It is difficult to measure whether this technique had much of an impact. Quantitatively, engagement with the links to their various articles, accounts, and personas was modest at best. But on several occasions, Endless Mayfly’s inauthentic content was picked up by mainstream media, creating significant confusion. In one instance, for example, Washington Post columnist Anne Applebaum stumbled upon part of Endless Mayfly’s operation and wrongly attributed it to yet more Russian malfeasance.

In terms of our own attribution, we determine with moderate confidence that Endless Mayfly is linked to Iran. This level of confidence is based on “the overall framing of the campaign, the narratives used, and indicators from overlapping data in other reports.” In terms of the latter, in August 2018 accounts and pages associated with Endless Mayfly were deactivated by Facebook in coordination with FireEye, and FireEye traced back registration information and other indicators to Iranian origins. But beyond that circumstantial evidence, we have no “smoking gun” that proves Endless Mayfly is an operation run by the Iranian state itself.

The technique of ephemerality pioneered by Endless Mayfly presents major challenges to researchers, policymakers, and others hoping to investigate and mitigate disinformation operations. Deliberately hiding one’s tracks in this way makes it harder to pin down, analyze, and trace the origins of a malicious campaign, let alone verify the truth-claims and other content that may be getting social media traction. If it becomes a popular tool in the disinformation toolkit, it could sow serious short-term confusion in social media spaces.

In the end, Endless Mayfly’s biggest accomplishment may have little to do with its principal objective, which was apparently to undermine Iran’s adversaries. It may have more to do with contributing in yet one more way to the ongoing poisoning of our social media public sphere.

When it comes to cyber security, it is usually the technological layer that gets the most attention, like risks to critical infrastructure and other technical systems. But what about the social and cultural layer? In fact, it may be in this layer where the most intense geopolitical struggles and malicious experimentation are taking place. Given the properties of social media — which as presently constituted favour lewd, salacious, and shocking information — it may also be the layer that is most challenging to defend.

We have no simple remedy to the problems that operations like Endless Mayfly pose, other than to undertake more research, refine our methods, and collaborate with others to better understand the evolving terrain of social media disinformation. To that end, alongside our report, we are publishing a major disinformation research bibliography compiled and annotated by Citizen Lab fellow Gabrielle Lim.

Read the main report here: https://citizenlab.ca/2019/05/burned-after-reading-endless-mayflys-ephemeral-disinformation-campaign

Our annotated bibliography of disinformation research is here: https://citizenlab.ca/wp-content/uploads/2019/05/Disinformation-Bibliography.pdf